This post goes over the steps necessary to make your bot open to the public: 1) load your bot code onto a Web App on Azure (Microsoft’s cloud), and 2) register your bot.
** Note: Be sure that you are on the new Azure portal, as the old dashboard is slowly being deprecated. **
When you first log onto the Azure portal you will see a Dashboard with a bunch of tiles. Click the ‘New +’ symbol at the top to create a new service, then select “Web + Mobile” > Web App. Fill out the form, select ‘pin to dashboard’, and click ‘Create’.
Once the Web App is created a tile will appear on your dash. Click it to access your application.
After clicking, you will be taken to the Overview of your Web App. On the right-hand side you should see ‘Deployment Options’. By default, Azure connects to your Web App through an FTP endpoint. With Deployment Options, however, we can select from a variety of ways to deploy source code. I will connect mine to GitHub, but there are other options, like Visual Studio Team Services or local Git.
After you select ‘Deployment Options’ > ‘Choose Source’ > “Configure required settings”, you’ll see a list of options. Select the desired one and connect to that service with the appropriate credentials.
Once you’ve connected to a source, or used FTP to upload your files, we can now register our bot.
Fill out the Bot Registration form and use your Web App URL (https://yoursite.azurewebsites.net/api/messages/) for the message endpoint input under Configuration.
** Note: Make sure you are using the secure URL for your message endpoint, i.e. HTTPS. **
After you’ve filled everything out and created a Microsoft App ID and password, click Register. You should be taken to a Dashboard for your bot.
Linking your code to the Registered Bot
On your dashboard, hit ‘Test Connection’. It should fail.
This happens because your code does not yet have the App ID and password authentication values.
In your Web.config file you should see the following lines:
<appSettings>
  <!-- update these with your BotId, Microsoft App Id and your Microsoft App Password -->
  <add key="BotId" value="YourBotId" />
  <add key="MicrosoftAppId" value="" />
  <add key="MicrosoftAppPassword" value="" />
</appSettings>
Copy and paste your MicrosoftAppId into its value slot, and do the same for the password you obtained when you registered your bot.
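These are the same values the Bot Framework reads at runtime to authenticate incoming requests. If you ever want to read them yourself, e.g. for logging, a minimal sketch using the standard ConfigurationManager API looks like this:

```csharp
using System.Configuration;

// Read the appSettings values from Web.config; these are the values
// the framework checks requests against for your registered bot.
var botId = ConfigurationManager.AppSettings["BotId"];
var appId = ConfigurationManager.AppSettings["MicrosoftAppId"];
var appPassword = ConfigurationManager.AppSettings["MicrosoftAppPassword"];
```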
Now push the updates to the Web App. If you hit Test Connection again, it should work! From there you can add multiple channels that your bot can communicate through. The Skype and Web channels are turned on by default, so you can get started with those two first.
And that’s all you have to do to get your bot online and ready to go. =)
LUIS may be overkill for the bot you want to create. If you only need your bot to answer questions (especially ones already on an FAQ site), try QnA bots from Microsoft Cognitive Services. QnAMaker automatically trains your service based on existing FAQ pages, saving a bunch of time. In this post, I walk you through creating one and the code needed for your bot to link to the service. QnAMaker is currently in Preview as of January 2016; more information can be found at qnamaker.ai.
QnA Service vs. LUIS Service
First, what is Microsoft QnA Maker? Well, “Microsoft QnA Maker is a free, easy-to-use, REST API and web-based service that trains AI to respond to user’s questions in a more natural, conversational way.” It streamlines production of a REST API that your bot or other application can ping. So why use LUIS? If you want to automate a service that requires multiple responses from your user (i.e. phone automation systems, ordering a sandwich, modifying settings on a service), LUIS’s interface and pipeline manage that development process better.
Getting started with the QnA bots
First, go to QnAmaker.ai and sign in with your Live ID.
Once you’ve signed in, create a new service.
Type in the name of the service and the FAQ link you want to use; I’m linking to Unity’s FAQ page in this example. What’s great is that you can add more than one URL for your QnA bot to pull from. So if the site you are using has an FAQ that redirects to different pages to answer questions, you can add those other pages too. You don’t need to use a URL; uploading your own questions and answers works too.
Hit “Create” at the bottom of the page.
After you hit Create, you will be taken to a new page with the questions and answers that the service was able to identify from the source (URL or file) you provided. The questions and answers that the service identifies are called your Knowledge Base (KB).
Natural Language Testing
To train your service, start plugging in natural-language questions and the service will return the FAQ answers that best match. If the service can’t reach a high enough probability for a single answer, it will return multiple answers that you can choose from.
You also have the ability to provide alternate phrasings for the question you just asked, on the right-hand side of the tool, so that they map to the same answer.
Any time you make an adjustment to what the service returned for an answer, be sure to save what you’ve done by clicking the Save and Retrain button.
Once you’ve finished training the service you can hit Publish. You’ll be taken to a page with a summary of the changes that will be applied when the service is published.
** Note: The service won’t be published until you hit the publish button on this summary page. **
Once your service is published, the site will provide a sample HTTP request that you can test with any REST client.
Microsoft Bot Template – C# for Visual Studio: You can use this by adding the .zip to your C:/user/documents/VisualStudio/Templates/ProjectTemplates folder
Microsoft Bot Emulator: there was a new emulator published in Dec 2015, so if you haven’t upgraded since then, do that now.
The MessagesController class and a new QnAMakerResult class hold the most important parts of the code. Depending on the complexity of your bot, you may want to look into dialogs and chains instead of putting your handler in the MessagesController class.
QnAMaker Result
public class QnAMakerResult
{
    /// <summary>
    /// The top answer found in the QnA Service.
    /// </summary>
    [JsonProperty(PropertyName = "answer")]
    public string Answer { get; set; }

    /// <summary>
    /// The score in range [0, 100] corresponding to the top answer found in the QnA Service.
    /// </summary>
    [JsonProperty(PropertyName = "score")]
    public double Score { get; set; }
}
Be sure to add the Newtonsoft library to the class:
using Newtonsoft.Json;
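For reference, the JSON returned by the service’s generateAnswer endpoint looks roughly like this; the field names match the [JsonProperty] attributes above, and the values are illustrative:

```json
{
  "answer": "Yes, you can add more than one URL for the service to pull from.",
  "score": 86.7
}
```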
Message Controller
Inside the MessagesController Post task, in the if (activity.Type == ActivityTypes.Message) block, add the following:
ConnectorClient connector = new ConnectorClient(new Uri(activity.ServiceUrl));
var responseString = String.Empty;
var responseMsg = "";

// De-serialized response from the QnA service
QnAMakerResult QnAresponse;

// Send the question to the QnA service
if (activity.Text.Length > 0)
{
    var knowledgebaseId = "YOUR KB ID"; // Use the knowledge base id created.
    var qnamakerSubscriptionKey = "YOUR SUB KEY"; // Use the subscription key assigned to you.

    // Build the URI
    Uri qnamakerUriBase = new Uri("https://westus.api.cognitive.microsoft.com/qnamaker/v1.0");
    var builder = new UriBuilder($"{qnamakerUriBase}/knowledgebases/{knowledgebaseId}/generateAnswer");

    // Add the question as part of the body
    var postBody = $"{{\"question\": \"{activity.Text}\"}}";

    // Send the POST request
    using (WebClient client = new WebClient())
    {
        // Set the encoding to UTF8
        client.Encoding = System.Text.Encoding.UTF8;

        // Add the subscription key header
        client.Headers.Add("Ocp-Apim-Subscription-Key", qnamakerSubscriptionKey);
        client.Headers.Add("Content-Type", "application/json");
        responseString = client.UploadString(builder.Uri, postBody);
    }

    try
    {
        QnAresponse = JsonConvert.DeserializeObject<QnAMakerResult>(responseString);
        responseMsg = QnAresponse.Answer.ToString();
    }
    catch
    {
        throw new Exception("Unable to deserialize QnA Maker response string.");
    }
}

// Return our reply to the user
Activity reply = activity.CreateReply(responseMsg);
await connector.Conversations.ReplyToActivityAsync(reply);
You can now test your code by running it and opening up the emulator. Be sure to pass the correct localhost port into the emulator to connect to your project. The default ID and password are blank, so you won’t have to add anything when testing locally.
Your Bot
Okay, so far we’ve created a REST service that answers questions based on a Knowledge Base built from specific FAQs. That service can be accessed by any type of application, including the Microsoft Bot Framework. With the code snippets above, we can use the Bot Framework to manage a user’s input before pinging the QnA REST service. However, we still need to build and host the bot.
We need to register a bot on the Microsoft Bot Framework site. You can host the code on a Web App within Azure and then connect that to the registered bot. I use continuous GitHub deployment to update my code. The Microsoft Bot Framework enables the Web and Skype channels by default, but there are others you can easily add to your bot, like Slack and Facebook Messenger. My previous post has instructions on how to do this, or you can look at the Microsoft Bot Framework documentation.
That’s it. You should have your FAQ bot up and working within a couple hours =)
It works! I managed to get HoloLens inputs working with the LUIS integration I did before in Unity. The project melds phrase recognition with dictation from HoloAcademy’s GitHub example and then pings the LUIS API.
Okay, so this is a very short post. Most of the harder parts were completed before this in my LUIS post, and the HoloLens Academy code helped a lot. I’ll just mention some of the pains I went through and how it works.
Phrase Recognition vs. Dictation
HoloLens has three main types of input control for users: Gaze, Touch, and Voice. This application focuses on the last one. Voice input uses the Speech library for Windows.
using UnityEngine.Windows.Speech;
This library allows the HoloLens to use Phrase Recognition to trigger specific actions or commands in your project. Yet that defeats the point of natural language processing. To interact with LUIS, we need to feed in what the user is saying. To do this, I integrated the Communicator class from the HoloLens example into my project. This class handles the project’s Phrase Recognition, but it also handles dictation, enabling natural language to be captured from the user. My Communicator is slightly different from the HoloLens version because of the LUIS interactions, as well as reworking of the code to allow multiple dictation requests.
Dictation is now activated by Phrase Recognition commands, so there is no need to tap to activate.
Dev Notes
I did have some speech/phrase recognition trouble. The original phrase to activate dictation was “Allie” (the character on The 100 which my original bot project is based on); however, the recognizer doesn’t recognize that spelling of her name. Changing it to “ally” caused the recognizer to trigger. DictationRecognizer is similar to the PhraseRecognizer in that it also doesn’t recognize the spelling of many names; for example, I would say “Tell me about Clarke.” and the dictation recognizer would write “tell me about clark.”. To fix the dictation errors I used Regex to replace the spelling before querying the LUIS API. One could also change their LUIS API to accept the speech recognition spelling, but because multiple bots and applications are connected to my LUIS API I couldn’t implement that solution.
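A sketch of that name-fixing step, matching the Checknames call in the handler below; the exact mappings are illustrative, with one rule per name the recognizer gets wrong:

```csharp
using System.Text.RegularExpressions;

public static class NameFixer
{
    // Map the DictationRecognizer's spellings back to the show's spellings.
    public static string Checknames(string text)
    {
        text = Regex.Replace(text, @"\bclark\b", "Clarke", RegexOptions.IgnoreCase);
        text = Regex.Replace(text, @"\bally\b", "Allie", RegexOptions.IgnoreCase);
        return text;
    }
}
```

With these rules, `Checknames("tell me about clark")` returns "tell me about Clarke"; the word-boundary anchors keep an already-correct "Clarke" untouched.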
private void DictationRecognizer_DictationResult(string text, ConfidenceLevel confidence)
{
    // Check to see if dictation is spelling the names correctly
    text = Checknames(text);

    // 3.a: Append textSoFar with latest text
    textSoFar.Append(text + ". ");

    // 3.a: Set DictationDisplay text to be textSoFar
    DictationDisplay.text = textSoFar.ToString();
}
Anyway, that’s all there is to it. All the major code is in the classes mentioned above.
So for the past month I’ve tried to push myself to code every day, and git something on GitHub… See what I did there 😉
Why did I start –
This month has been really chaotic. I was learning the new Microsoft Bot Framework, I was in an AD, I was working almost every weekend at events and hackathons, and I was also moving. Suffice it to say, not pushing something EVERY day would have been fine. However, 5 days into the month I realized my week-long commit streak was all green. My record in the past had been 10 days, and I thought, wow, this is a perfect time to break it.
Milestones –
I decided to start small: aim for a week, then 10 days to tie my record, then 15 days, then 20, then 25, and 30 at the end of June. If I could put aside time every day to code, in this crazy month of upheaval in my personal life and one of my busiest work months, my challenge goal would be achieved.
In the first 5 days I was working on my Unity intro project and my Bot Framework example, and decided that focusing on just those would be best for narrowing the scope as well.
Day 13 – Starting to Waver // “Cheating”
Like I said, I was moving and doing a bunch of events, and all of a sudden working on code seemed like too much of a commitment, but I desperately wanted to continue my streak. The solution: updating the Readme. It’s a simple solution, and I felt really guilty about it. How could that count as working on code?
Well, on Day 18, the weekend of Holohacks, a weekend-long hackathon with the HoloLens team at Microsoft in SF, I got to talking with a bunch of developers. Turns out the team was “strongly encouraged” (lol, pretty much forced) to write all their documentation as they worked and to make it high quality. Documentation is such an important part of development, especially for open-source projects where others will want to add to your work.
Documentation Driven Development (DDD)
Now, those one-off commits to the Readme didn’t seem like cheating. I was spending time improving a document that was the guideline for my work. Updating links to point to newly implemented documentation, adding to the feature list and directions, and creating templates for other devs were all justified.
I didn’t come up with the phrase DDD, but I believe that’s how many projects should be worked on. Why? Well, normally when developers write an amazing piece of code, writing documentation about it is the worst, most boring part. Decent documentation is a godsend, since most of the time it seems to be out of date in our ever-evolving industry. Imagine trying to build Ikea furniture with outdated instructions. Sure, you can figure it out, and hopefully it stays together, but having an accurate guide makes life so much easier.
Day 30
After that hackathon I packed, moved to LA, did another Hololens weekend hackathon, and had an important presentation the following week. However, even with all that I managed to push many updates to my code and write up important documentation for it as well. Day 30 hit and my goal now is to see how long I can keep it up. It’s become more of a habit now. Kind of like working out or brushing your teeth.
Just thought I’d share some of the life updates with everyone.
Developing with LUIS is easy but slightly tedious. This post shares some tips for developing with LUIS and provides links to get started.
In part 1 I talked about my experience getting started with the Microsoft Bot Framework and getting responses from simple keywords. Which is great, but I want users to be able to have a conversation with my bot, ALIE (another The 100 reference). To get my bot to understand natural language, I used Microsoft’s LUIS, part of their Cognitive Services suite, and integrated it into my bot.
LUIS
LUIS (Language Understanding Intelligent Service) is a service in Microsoft’s extensive Cognitive Services suite. It provides extensive models from Bing and Cortana for developers to use in their applications. LUIS also allows developers to create their own models, and it creates HTTP endpoints that can be pinged to return simple JSON responses. Below is a tutorial about how to use LUIS. LUIS – Tutorial
Things to note
Video differences
LUIS has been updated since the release of this video, so there are some differences: the Model Features area on the right has been changed to reflect more of the features you can add.
LUIS supports only one action per intent. Each action can include a group of parameters derived from entities. A parameter can be optional or required; LUIS assumes that an action is triggered only when all the required parameters are filled. These will be the main driving force of your bot’s responses, and actions come into play when publishing.
Publishing Model for Bot Framework and SLACK
Here is the link to how to publish your model: https://www.luis.ai/Help#PublishingModel. What they neglect to mention is that when publishing for the Bot Framework or Slack you need to be in preview mode to access those features, since they are still in Beta. To get to the LUIS preview, click the button at the top right of the page; it takes about a minute to regenerate the page.
Publishing – Action needed
Now, this next part might change soon since the beta keeps being improved upon. When I first wanted to publish the model with the Bot Framework, the service required me to make at least one of my intents return an action.
Adding Action
In preview, select an Intent. A window will pop up. Select Add Action.
Next, check the Fulfillment box.
The fulfillment type determines what type of response will be included in the JSON object. I’ve selected Writeline since none of the actions I have on ALIEbot so far use the integrated services, like weather or time.
After you select a fulfillment type, you can add parameters and an action setting (which is what will be returned in the JSON object).
In my application I returned the parameter XName, which is the name of the character in the user’s response.
Integrating LUIS Code
Setting up LUIS class
Before you can receive responses from LUIS, you need to import the appropriate LUIS libraries and create a class that extends LuisDialog. This class must also include the [Serializable] and [LuisModel] attributes.
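A minimal sketch of such a class; the class name and the ID/key placeholders are illustrative, and come from your published LUIS application:

```csharp
using System;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;

// Replace the placeholders with the values from your published LUIS app.
[Serializable]
[LuisModel("YOUR_MODEL_ID", "YOUR_SUBSCRIPTION_KEY")]
public class AlieLuisDialog : LuisDialog<object>
{
    // [LuisIntent] methods for each intent go here.
}
```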
The LUIS model ID and key can be found when you are publishing your application:
LUIS Intent Methods
Your class must now have methods that act on the specific LUIS intents.
// This needs to match the intent name from the JSON
[LuisIntent("XName")]
public async Task XNameResponse(IDialogContext context, LuisResult result)
{
    var reply = context.MakeMessage();
    foreach (var entityItem in result.Entities)
    {
        if (entityItem.Type == "Character")
        {
            switch (entityItem.Entity)
            {
                case "raven":
                    reply.Text = "Raven the Best";
                    reply.Attachments = new List<Attachment>();
                    reply.Attachments.Add(new Attachment
                    {
                        Title = "Name: Raven Reyes",
                        ContentType = "image/jpeg",
                        ContentUrl = "URL_PIC_LINK",
                        Text = "It won't survive me"
                    });
                    break;
                case "clarke":
                    reply.Text = "Clarke is the main character";
                    break;
                default:
                    reply.Text = "I don't know this character";
                    break;
            }
            await context.PostAsync(reply);
            context.Wait(MessageReceived);
        }
    }
}
Summary
Once you have set up a LUIS model, you need to publish it. After it has been published, your bot can connect to it via the [LuisModel] attribute. Connecting the bot to LUIS enables it to understand the natural language your users will use; however, you still need to code the responses with [LuisIntent] attributes on your Task or Post methods in the Bot Framework.
So that should be everything you need to get LUIS working. Remember that this content is all in Beta and subject to change, but I’ll keep it updated as much as possible.
Microsoft released their new Bot Framework early this year at the Build conference. So, naturally, I wanted to create my own; eventually integrating it into a game. In this post I talk about some of what I learned and what the Bot Framework provides.
I decided to work with the Microsoft Bot Connector, part of the Microsoft Bot Framework, as a way to get my bot up and running on the most platforms as quickly as possible. I haven’t worked with bots in the past, so this was my first dive into the territory. My bot was built in C#; however, the Microsoft Bot Framework also supports Node.js. My colleague Sarah wrote a post about getting started with Node here: https://blogs.msdn.microsoft.com/sarahsays/2016/06/01/microsoft-bot-framework-part-1/
The bot I wanted to create was a simple chat bot that I could build upon for interactivity with users. If you’re familiar with The 100, you’ll figure out what my bot does. All the code for what I did can be found here: https://github.com/KatVHarris/ALIEbot
What I used
Microsoft Bot Framework is super powerful and makes it easy to create a bot of your own. You can use any of the following to get started:
Bot Connector
Bot Builder C#
Bot Builder Node.js
I used the Bot Connector, an easy way to create a single back-end and then publish to a bunch of different platforms called Channels.
** Note: It’s really important for Visual Studio to be updated in order to use this, and to download the web tools in the Visual Studio setup. **
Open a new project with the Bot Template, and install the NuGet package for Microsoft’s Bot Builder: install-package Microsoft.Bot.Builder
Message Controller
The file that dictates the flow of responses is MessagesController.cs in the “Controllers” folder. The class handles system messages and lets you control what happens when a message comes through.
Adding a conditional statement to the Post function allows you to tailor the response to the user.
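A minimal sketch of such a conditional; the keyword check and reply text here are illustrative:

```csharp
if (activity.Type == ActivityTypes.Message)
{
    ConnectorClient connector = new ConnectorClient(new Uri(activity.ServiceUrl));

    // Tailor the reply based on what the user said.
    string replyText = activity.Text.ToLower().Contains("hello")
        ? "Hey there!"
        : $"You said: {activity.Text}";

    Activity reply = activity.CreateReply(replyText);
    await connector.Conversations.ReplyToActivityAsync(reply);
}
```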
Now you can stick with this model and add in bits of functionality, but I like to add a more powerful messaging system with Dialogs.
Dialogs
** Now, there are slight differences between the BotConnector Dialogs for Node vs. C#. Everything in this post pertains to the C# version. **
Bot Builder uses dialogs to manage a bot’s conversations with a user. The great thing about dialogs is that they can be composed with other dialogs to maximize reuse, and a dialog context maintains a stack of the dialogs active in the conversation.
To use dialogs, all you need to do is add the [Serializable] tag and extend IDialog<> from Microsoft.Bot.Builder.Dialogs.
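Under that setup, a minimal dialog might look like the following sketch; the echo behavior is illustrative:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;

[Serializable]
public class EchoDialog : IDialog<object>
{
    public async Task StartAsync(IDialogContext context)
    {
        // Wait for the first message from the user.
        context.Wait(MessageReceivedAsync);
    }

    private async Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> argument)
    {
        var message = await argument;
        await context.PostAsync($"You said: {message.Text}");

        // Wait for the next message.
        context.Wait(MessageReceivedAsync);
    }
}
```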
Dialogs handle asynchronous communication with the user. Because of this, the MessagesController will instead use the Conversation class to make an async call to a dialog Task that uses the context to create a reply message with more functionality. What does that all mean? It means that with dialogs you can implement a conversation with a user asynchronously when certain keywords are triggered. For example, if the user types the keyword “reset”, we can use a PromptDialog to ask for confirmation. One of the most powerful ways of creating an actual dialog between the user and the bot is to add Chain Dialogs.
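The “reset” confirmation could be sketched with PromptDialog like this; AfterResetAsync is a hypothetical resume handler you would define with the signature Task AfterResetAsync(IDialogContext context, IAwaitable<bool> confirm):

```csharp
// Ask the user to confirm before resetting; the yes/no answer is
// delivered to the resume handler as a bool.
PromptDialog.Confirm(
    context,
    AfterResetAsync,
    "Are you sure you want to reset?",
    "Didn't get that!");
```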
Chain Dialogs
Explicit management of the stack of active dialogs is possible through IDialogStack.Call and IDialogStack.Done, explicitly composing dialogs into a larger conversation. It is also possible to manage the stack of active dialogs implicitly through the fluent Chain methods. To look at all the possible ways to respond to a user with Dialogs, check out the EchoBot sample on GitHub.
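A small taste of the fluent Chain style; this one simply posts the user’s own text back:

```csharp
using Microsoft.Bot.Builder.Dialogs;

// Implicit stack management: take the posted message, project out
// its text, and post that text back to the user.
var echo = Chain.PostToChain()
    .Select(msg => msg.Text)
    .PostToUser();
```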
TIP 1: Keep your tools updated
As I said earlier, make sure all of your tools are on the latest update. My web tooling was not on the latest version when I first tried to publish my bot, so the directions were slightly different from the tutorial and caused issues later.
TIP 2: Don’t skip any of the steps
The first time I published my bot it didn’t work. I still have no idea why, but I believe it was because I missed a minor step in the creation process.
TIP 3: It should work immediately
Your bot should work immediately after you activate the web channel. If it doesn’t, check your code again. My first bot was not working immediately, and I ended up just registering a new one with the same code. That worked.
TIP 4: Web disabled
If you look at my channel picture, you can see that the web channel is registered but its status says “disabled”.
Don’t worry about this. Your bot’s web channel should still work.
TIP 5: Registering your bot
You don’t need to register your bot for it to work. Registering your bot will allow it to be in the publish gallery later. Make sure your bot does something useful before submitting as well. Simple chat bots do not count.
That’s it! You should have a bot published and all ready to Chat with.
Okay, so there are many ways that your bot can respond to your user. However, specific keywords are needed, and that’s less user-friendly and conversational than we would like. To make our bot respond to the natural language users will actually be using, we need to integrate LUIS, which I’ll talk about in part 2.
Reading the Bot Framework Docs was extremely helpful when getting started, so if you haven’t looked at them I recommend you take a look here: http://docs.botframework.com/
The Microsoft Bot Framework is open source, so you can help contribute to the project here: Microsoft/BotBuilder. They also have more samples included in their source code.