Bots

TL;DR

This post goes over the steps necessary to make your bot open to the public: 1) load your bot code onto a Web App on Azure (Microsoft’s cloud), and 2) register your bot on the Bot Framework portal.

Hosting Code on Azure

Okay, in the last post I went over creating a bot with Microsoft Cognitive Services’ QnAMaker. The code works locally on localhost and pings the online knowledge base when questions are asked, but it’s not yet hosted anywhere.

What you need to get started:

  • A Microsoft Live ID and an Azure subscription

Setting up Web App

Go to: https://portal.azure.com

** Note: Be sure that you are on the new Azure portal, as the old dashboard is slowly being deprecated. **

When you first log onto the Azure portal you will see a dashboard with a bunch of tiles. Click the ‘New +’ symbol at the top to create a new service, then select “Web + Mobile” > Web App. Fill out the form, select ‘Pin to dashboard’, and click ‘Create’.

Once the Web App is created a tile will appear on your dash. Click it to access your application.

After clicking, you will be taken to the Overview of your Web App. On the right-hand side you should see ‘Deployment Options’. By default, Azure connects to your Web App through an FTP endpoint, but with Deployment Options we can choose from a variety of ways to deploy source code. I will connect mine to GitHub, but there are other options like Visual Studio Team Services or local Git.

Select ‘Deployment Options’ > ‘Choose Source’ > ‘Configure required settings’ and you’ll see a list of options. Select the one you want and connect to that service with the appropriate credentials.

Once you’ve connected to a source, or used FTP to upload your files, you can register your bot.

Registering your Bot

To register your bot, simply go to https://dev.botframework.com/ and click “Register Bot”.

Fill out the Bot Registration form and use your Web App URL (https://yoursite.azurewebsites.net/api/messages/) for the message endpoint input under Configuration.

** Note: Make sure you are using the secure URL (HTTPS) for your message endpoint. **

After you’ve filled everything out and created a Microsoft App ID and password, click Register. You should be taken to a dashboard for your bot.

Linking your code to the Registered Bot

On your dashboard hit ‘Test Connection’. It should fail.

This happens because your code does not yet have the App ID and password authentication values.

In your Web.config file you should see the following lines:

  <appSettings>
    <!-- update these with your BotId, Microsoft App Id and your Microsoft App Password-->
    <add key="BotId" value="YourBotId" />
    <add key="MicrosoftAppId" value="" />
    <add key="MicrosoftAppPassword" value="" />
  </appSettings>

Copy and paste your Microsoft App ID into the value attribute for MicrosoftAppId, and do the same for the password you obtained when you registered your bot.

Now push the updates to the Web App. If you hit ‘Test Connection’ again it should work! From there you can add multiple channels that your bot can communicate through. The Skype and Web channels are turned on by default, so you can get started with those two first.

And that’s all you have to do to get your bot online and ready to go. =)

Happy Hacking!

– TheNappingKat

Error Fixing, Windows

TL;DR

A mismatched taskbar prevents everything in the taskbar from working, including the Windows key. The fix is to reset your shell; instructions are in the “How to Solve” section of this post.

The Problem

So, I love Windows 10’s multiple desktops (FINALLY!). But I noticed recently that when I switch back and forth several times in rapid succession, the taskbar has a tendency to lag, resulting in a mismatch with the correct desktop.

For example, Desktop 1 will show the taskbar for Desktop 2, and Desktop 2 will show the taskbar for Desktop 1.

How to Solve

Reset your shell. If you don’t know how to do that, here’s how.

Because this error prevents everything in the taskbar from working, including the Windows key, use Ctrl+Alt+Del to pull up Task Manager.

Then find the first instance of explorer.exe. This is the shell containing the taskbar. End that process.

To run it again, click File > Run and type explorer; this will boot the shell again and get your taskbar running.
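
If you’d rather script the reset than click through Task Manager, a tiny console app can do the same thing. This is just a sketch (not part of the original tip), assuming you can still launch a program while the taskbar is broken:

using System.Diagnostics;

class ShellReset
{
    static void Main()
    {
        // End every running explorer.exe instance (this is the shell that owns the taskbar)
        foreach (var process in Process.GetProcessesByName("explorer"))
        {
            process.Kill();
            process.WaitForExit();
        }

        // Start the shell again so the taskbar and desktop come back
        Process.Start("explorer.exe");
    }
}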

Happy Hacking

– TheNappingKat

Bots

TL;DR

LUIS may be overkill for the bot you want to create. If you only need your bot to answer questions (especially ones already on an FAQ site), try QnA bots from Microsoft Cognitive Services. QnAMaker automatically trains your service based on existing FAQ pages, which saves a bunch of time. In this post, I walk you through creating one, plus the code your bot needs to link to the service. QnAMaker is currently in preview as of January 2016; more information can be found at qnamaker.ai.

QnA Service vs. LUIS Service

First, what is Microsoft QnA Maker? Well, “Microsoft QnA Maker is a free, easy-to-use, REST API and web-based service that trains AI to respond to user’s questions in a more natural, conversational way.” It streamlines production of a REST API that your bot or other application can ping. So why use LUIS? If you want to automate a service that requires multiple responses from your user (e.g. phone automation systems, ordering a sandwich, modifying settings on a service), LUIS’s interface and pipeline manage that development process better.

Getting started with the QnA bots

First, go to QnAmaker.ai and sign in with your Live ID.

Once you’ve signed in, create a new service.

Type in the name of the service and the FAQ link you want to use; I’m linking to Unity’s FAQ page in this example. What’s great is that you can add more than one URL for your QnA bot to pull from, so if the site you’re using has an FAQ that redirects to different pages to answer questions, you can add those other pages too. You don’t even need to use a URL; uploading your own questions and answers works too.

Hit “Create” at the bottom of the page.

After you hit Create, you’ll be taken to a new page with the questions and answers that the service was able to identify from the source (URL or file) you provided. The questions and answers that the service identifies are called your Knowledge Base (KB).

Natural Language Testing

To train your service, start plugging in natural language questions, and the service will return the FAQ answers that best match. If the service can’t get a high enough probability for a single answer, it will return multiple answers that you can choose from.

You also have the ability to provide alternate phrasings for the question you just asked on the right-hand side of the tool, so that they map to the same answer.

Any time you make an adjustment to what the service returned for an answer, be sure to save what you’ve done by clicking the Save and Retrain button.

Once you’ve finished training the service you can hit Publish. You’ll be taken to a page with a summary of the changes you made before the service is published.

** Note: The service won’t be published until you hit the publish button on this summary page. **

Once your service is published, the site will provide a sample HTTP request that you can test with any REST client.

Code – Connecting it to Microsoft Bot Framework

If this is your first time working with the Microsoft Bot Framework, you might want to check out my post about it here: Microsoft Bot Framework, or read up about it on Microsoft’s site: https://docs.botframework.com/en-us/.

For this example I’m using:

The MessagesController class and a new QnAMakerResult class hold the most important parts of the code. Depending on the complexity of your bot, you may want to look into dialogs and chains instead of putting your handler in the MessagesController class.

QNAMAKER RESULT

public class QnAMakerResult
{
    /// <summary>
    /// The top answer found in the QnA Service.
    /// </summary>
    [JsonProperty(PropertyName = "answer")]
    public string Answer { get; set; }

    /// <summary>
    /// The score in range [0, 100] corresponding to the top answer found in the QnA Service.
    /// </summary>
    [JsonProperty(PropertyName = "score")]
    public double Score { get; set; }
}

Be sure to add the Newtonsoft Library to the class.

using Newtonsoft.Json;

Message Controller

Inside the MessagesController Post task, in the if (activity.Type == ActivityTypes.Message) block, add the following:

ConnectorClient connector = new ConnectorClient(new Uri(activity.ServiceUrl));
                var responseString = String.Empty;
                var responseMsg = "";

                //De-serialize the response
                QnAMakerResult QnAresponse;

                // Send question to API QnA bot
                if (activity.Text.Length > 0)
                {
                    var knowledgebaseId = "YOUR KB ID"; // Use knowledge base id created.
                    var qnamakerSubscriptionKey = "YOUR SUB KEY"; //Use subscription key assigned to you.

                    //Build the URI
                    Uri qnamakerUriBase = new Uri("https://westus.api.cognitive.microsoft.com/qnamaker/v1.0");
                    var builder = new UriBuilder($"{qnamakerUriBase}/knowledgebases/{knowledgebaseId}/generateAnswer");

                    //Add the question as part of the body
                    var postBody = $"{{\"question\": \"{activity.Text}\"}}";

                    //Send the POST request
                    using (WebClient client = new WebClient())
                    {
                        //Set the encoding to UTF8
                        client.Encoding = System.Text.Encoding.UTF8;

                        //Add the subscription key header
                        client.Headers.Add("Ocp-Apim-Subscription-Key", qnamakerSubscriptionKey);
                        client.Headers.Add("Content-Type", "application/json");
                        responseString = client.UploadString(builder.Uri, postBody);
                    }

                    try
                    {
                        QnAresponse = JsonConvert.DeserializeObject<QnAMakerResult>(responseString);
                        responseMsg = QnAresponse.Answer.ToString();
                    }
                    catch
                    {
                        throw new Exception("Unable to deserialize QnA Maker response string.");
                    }
                }

                // return our reply to the user
                Activity reply = activity.CreateReply(responseMsg);
                await connector.Conversations.ReplyToActivityAsync(reply);

You can now test your code by running it and opening up the emulator. Be sure to pass in the correct localhost port in the emulator to connect to your project. The default ID and password are blank, so you won’t have to add anything when testing locally.
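
Depending on your project template, you may also need a few usings at the top of MessagesController.cs for the snippet above to compile (WebClient lives in System.Net, and JsonConvert in Newtonsoft.Json):

using System;
using System.Net;
using Newtonsoft.Json;
using Microsoft.Bot.Connector;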

Your Bot

Okay, so far we’ve created a REST service that will answer questions based on a Knowledge Base built from specific FAQs. That service can be accessed by any type of application, including the Microsoft Bot Framework. With the code snippets above we can use the Bot Framework to manage user input before pinging the QnA REST service. However, we still need to build and host the bot.

We need to register a bot on the Microsoft Bot Framework site. You can host the code on a Web App within Azure and then connect that to the registered bot; I use continuous GitHub deployment to update my code. The Microsoft Bot Framework enables the Web and Skype channels by default, but there are others that you can easily add to your bot, like Slack and Facebook Messenger. My previous post has instructions on how to do this, or you can look at the Microsoft Bot Framework documentation.

That’s it. You should have your FAQ bot up and working within a couple hours =)

Happy Hacking!

– TheNappingKat

Oculus, XR

TL;DR:

This post talks about the technical implementation of movement in different VR experiences I’ve tried in the last two months. All of them delivered great immersive experiences without breaking presence, and I wanted to share my analysis of their techniques. Part 2 will analyze environment interactions. The experiences I’m talking about showcased different hardware (Vive, Touch, Omni, and gamepad) and genres.

Teleportation is one of the best techniques to use for movement. If you don’t want to use teleportation, use constant forward movement. If you want to implement strafing, make sure it’s smooth and continuous, and try limiting the degree of movement.

Reviews/Technical Analysis

The past three months have had me traveling around the U.S., enabling me to take part in many amazing gaming and VR conferences: PAX West, GaymerX4, and Oculus Connect 3. I wanted to use this post to talk about some of my experiences with the different games and hardware, as well as dive into the unique technical solutions that each of these experiences implements. Most of what I talk about will revolve around player movement and environment interaction, two of the most common areas where presence is broken.

Velocibeasts by Churroboros – HTC Vive Multiplayer

Technical Highlights: Attack and Movement Controls

Game Description:

“Have you ever wanted to throw something at someone? VELOCIBEASTS can make that dream a reality. Pick a weapon, throw it, teleport, and kill your friends.

Battle as a variety of animals equipped with floating mecha suits in this fast paced multiplayer battle arena VR title.”

-Churroboros

Review:

I managed to get a pretty in-depth demo at GaymerX4 this year. The highlight of this game is the attack and movement controls. In VR, player movement is one of the fastest ways to break a user’s sense of presence. So why am I impressed? I’ll explain. In the game you are a mecha-suit beast in a large arena, trying to kill your opponent.

You attack by throwing your weapon toward your enemy, similar to casting a fishing line. Pressing and holding the trigger grips your weapon, and releasing the trigger throws it. However, the interesting part of the gameplay is when you press the trigger again: you instantly teleport to your weapon’s location. Coordinating attacks and movement simultaneously creates a fun, fast-paced experience that immerses players.

Now, in general I’m not prone to motion sickness in VR. However, in most first-person shooter/fighting games, falling and strafing cause the most motion sickness issues for me. Velocibeasts avoids both: your beast floats (because it’s in a mecha-suit), which avoids falling, and teleportation replaces strafing. The floating mechanism also gives users full six degrees of freedom for moving around the arena.

I’m impressed because many games use the teleportation technique, but not many of them integrate it so well into gameplay. The movement controls were also very easy to use, and it only took a few throws to get the timing and rhythm down. Below are some pictures of me playing the game, getting really into it.

Links

@ChurroborosVR
https://www.facebook.com/ChurroborosVR/

World War Toons by Reload Studios – Omni, Gamepad, Rift, PSVR

Technical Highlights: Full FPS style game, Good use of strafing controls, and Omni integration

Game Description:

“World War Toons is a cartoony first-person shooter where you never know if you’re going to turn the corner and see a rocket rushing towards you, grand pianos falling from the sky, or a massive tank staring you in the face.”

– Reload Studios

Technical Review and First Opinions:

I played this game at PAX West and got the opportunity to play with the Rift and gamepad, as well as the Rift with the Omni. It was a very polished game; the mechanics played like an FPS, which isn’t always the best thing in VR. World War Toons is one of the few games I’ve played that has strafing (lateral movement independent of camera position) in VR. The reason VR experiences shy away from this? Users get sick really, really quickly.

Now, despite the strafing, I only felt nauseous a few times during gameplay; specifically when my character was falling off the side of a wall, and when being launched into the air by trampolines.

The creators limited movement to just the d-pad directions (left, right, forward, backward) to reduce discomfort when players strafe.

However, when playing the game on the Omni, I had no issues with nausea. The hardware made a huge difference when the character is launched around the arena or falls off drops. It was also far more immersive compared to full gamepad controls.

Links

http://voicesofvr.com/455-vr-first-person-shooters-esports-with-world-war-toons/

roqovan.com

@StudioRoqovan

Eagle Flight by Ubisoft – Rift, Gamepad, PSVR

Technical Highlights: Player Movement, Gamepad, Multiplayer

Description:

“50 years after humans vanished from the face of the Earth, nature reclaimed the city of Paris, leaving a breathtaking playground. As an eagle, you soar past iconic landmarks, dive through narrow streets, and engage in heart-pounding aerial dog fights to protect your territory from opponents.”

-Ubisoft

Technical Review and First Opinions:

I first saw this game at GDC this year, and at Oculus Connect 3 I was able to play it: a 3v3 multiplayer capture-the-flag game. My team won 7-0. YAY!

Game start: At the opening of the game you can see your fellow teammates as eagles in a lobby for matchmaking. You are able to look around at your teammates’ eagle bodies, whose head movements correspond to the players’ head movements. I mention this because these little details increase player immersion. Once everyone is ready in the lobby, the game begins.

Gameplay: When the game finally starts, the camera fades in onto the scene and you, as the eagle, are already in flight. In VR, if you want the player to move around your scene (without teleportation), there are only a few ways to do it without making them sick. One of them is to have a semi-constant speed and always move forward in the direction the player is looking. Eagle Flight employs this technique, with head tilts to turn left and right. However, I did still feel some discomfort as I was getting used to the controls while moving around Paris.

The other thing Ubisoft does to help immerse the player is add a beak to the player’s view. VR studies have shown that adding a nose, or some grounding element, to the player’s view helps immerse them faster and alleviates motion sickness. I hadn’t seen any games employ this technique, though, until this one.

The third technique Ubisoft uses for movement and immersion is vignetting the player’s view when speed increases, similar to tunnel vision. I’ve seen this technique a few times when player movement speeds up. I like it since it eases my motion sickness by limiting the amount of visual input.

Eagle Flight is an Oculus Rift gamepad game, and it’s also coming to PSVR. I usually dislike gamepad games in VR because I think they take away from presence; however, this game only used a few buttons, for firing, shield, and speed controls. If you are going to use a gamepad for your first-person VR game, I suggest simplifying the controls, keeping them as intuitive as possible, and styling your game around a third-person view.

You can see some of the gameplay from E3 here:

Links

https://www.ubisoft.com/en-US/game/eagle-flight

@UbisoftVR

Summary

Figuring out which technique you want your user to use to explore your virtual world is important. Take the limits of the hardware and the style of gameplay into consideration when making your decision. In Velocibeasts, I doubt I would have enjoyed the gameplay as much if I had to alternate between teleporting and fighting my opponent, given the game’s fast-paced flow. Eagle Flight had to center its gameplay around constant movement since players are birds; it would have felt super disconnected if our birds were teleporting everywhere instead of peacefully gliding.

Teleportation is one of the best techniques to use for movement. If you don’t want to use teleportation, use constant forward movement. If you want to implement strafing, make sure it’s smooth and continuous, and try limiting the degree of movement.

Now that I’m done traveling, more videos and posts about how to implement all the awesome techniques I talked about are on the way =)

Happy Hacking!

– The Napping Kat

Bots, Unity, XR

TL;DR

It works! I managed to get HoloLens input working with the LUIS integration I did before in Unity. The project melds phrase recognition with dictation from HoloAcademy’s GitHub example and then pings the LUIS API.

All the code is here: Github/KatVHarris

LUIS + UNITY + JSON post is here

LUIS 101 post is here


Okay, so this is a very short post. Most of the harder parts were completed before this in my LUIS post, and the HoloLens Academy code helped a lot. I’ll just mention some of the pains I went through and how it works.

Phrase Recognition vs. Dictation

HoloLens has three main types of input control for users: gaze, gesture, and voice. This application focuses on the last one. Voice input uses the Speech library for Windows.

using UnityEngine.Windows.Speech;

This library allows the HoloLens to use phrase recognition to trigger specific actions or commands in your project. But on its own that defeats the point of natural language processing; in order to interact with LUIS we need to feed in what the user is actually saying. To do this, I integrated the Communicator class from the HoloLens example into my project. This class handles the phrase recognition for the project, but it also handles dictation, enabling natural language to be captured from the user. My Communicator is slightly different from the HoloLens example’s because of the LUIS interactions, as well as reworking the code to handle multiple dictation requests.

Dictation itself is activated with phrase recognition commands, so there’s no need to tap to activate. A rough sketch of that hand-off is below.
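
Here’s a stripped-down sketch of the idea (not the full Communicator class from the repo); “ally” is the trigger phrase from this post, and SendToLuis() is a stand-in for the LUIS call covered in my earlier post:

using UnityEngine;
using UnityEngine.Windows.Speech;

public class VoiceInput : MonoBehaviour
{
    private KeywordRecognizer keywordRecognizer;
    private DictationRecognizer dictationRecognizer;

    void Start()
    {
        // Phrase recognition listens for the trigger word
        keywordRecognizer = new KeywordRecognizer(new[] { "ally" });
        keywordRecognizer.OnPhraseRecognized += OnKeyword;
        keywordRecognizer.Start();
    }

    private void OnKeyword(PhraseRecognizedEventArgs args)
    {
        // Keyword and dictation recognizers can't run at the same time,
        // so shut phrase recognition down before starting dictation
        keywordRecognizer.Stop();
        PhraseRecognitionSystem.Shutdown();

        dictationRecognizer = new DictationRecognizer();
        dictationRecognizer.DictationResult += (text, confidence) =>
        {
            Debug.Log("Heard: " + text);
            // SendToLuis(text);  // placeholder: query the LUIS endpoint here
        };
        dictationRecognizer.Start();
    }
}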

Dev Notes

I did have some speech/phrase recognition trouble. The original phrase to activate dictation was “Allie” (the character from The 100 that my original bot project is based on); however, the recognizer doesn’t recognize that spelling of her name. Changing it to “ally” caused the recognizer to trigger. The DictationRecognizer is similar to the PhraseRecognizer in that it also doesn’t recognize the spelling of many names; for example, I would say “Tell me about Clarke.” and the dictation recognizer would write “tell me about clark.”. To fix the dictation errors I used Regex to replace the spelling before querying the LUIS API. You could also change your LUIS model to accept the speech recognition spellings, but because multiple bots and applications are connected to my LUIS API I couldn’t implement that solution.

private void DictationRecognizer_DictationResult(string text, ConfidenceLevel confidence)
    {
        // Check to see if dictation is spelling the names correctly 
        text = Checknames(text);

        // 3.a: Append textSoFar with latest text
        textSoFar.Append(text + ". ");

        // 3.a: Set DictationDisplay text to be textSoFar
        DictationDisplay.text = textSoFar.ToString();
    }
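
For reference, the Checknames helper called above could look something like this; it’s a hypothetical version (the real one is in the repo) that just swaps the dictation spellings for the ones my LUIS model expects:

// (add "using System.Text.RegularExpressions;" at the top of the class file)
private string Checknames(string text)
{
    // IgnoreCase catches "clark", "Clark", etc.
    text = Regex.Replace(text, @"\bclark\b", "Clarke", RegexOptions.IgnoreCase);
    text = Regex.Replace(text, @"\bally\b", "Allie", RegexOptions.IgnoreCase);
    return text;
}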

Anyway, that’s all there is to it. All the major code is in the GitHub repo linked above.

Hope this helps, and let me know if you have any questions about it on Twitter @KatVHarris

Happy Hacking

– TheNappingKat

Bots

So for the past month I’ve tried to push myself to code every day and git something on GitHub… see what I did there 😉

Why did I start –

This month has been really chaotic. I was learning about the new Microsoft Bot Framework, I was in an AD, I was working almost every weekend for events and hackathons, and I was also moving. Suffice it to say, not pushing something EVERY day would have been fine. However, 5 days into the month I realized my week’s commit streak was all green. My past record had been 10 days, and I thought, wow, this is a perfect time to break it.

Milestones –

I decided to start small: aim for a week, then 10 days to tie my record, then 15 days, then 20, then 25, and 30 at the end of June. If I could put aside time every day to code during this crazy month of upheaval in my personal life, and one of my busiest work months, my challenge goal would be achieved.

In the first 5 days I was working on a Unity intro project and my Bot Framework example, and I decided that focusing on just those would be best for narrowing the scope as well.

Day 13 – Starting to Waver // “Cheating”

Like I said, I was moving and doing a bunch of events, and all of a sudden working on code seemed like too much of a commitment, but I desperately wanted to continue my streak. The solution: updating the Readme. It’s a simple solution, and I felt really guilty about it. How can that count as working on code?

Well, on Day 18, the weekend of HoloHacks, a weekend-long hackathon with the HoloLens team at Microsoft in SF, I got to talking with a bunch of developers. Turns out the team was “strongly encouraged” (lol, pretty much forced) to write all their documentation as they were working and make it high quality. Documentation is such an important part of development, especially for open source projects where others will want to add to your work.

Documentation Driven Development (DDD)

Now those one-off commits to the Readme didn’t seem like cheating. I was spending time improving a document that was the guideline for my work. Updating links to point to newly implemented documentation, adding to the feature list and directions, and creating templates for other devs were all justified.

I didn’t come up with the phrase DDD, but I believe that’s how many projects should be worked on. Why? Well, normally when developers write an amazing piece of code, writing documentation about it is the worst, most boring part. Decent documentation is a godsend, since most of the time it seems to be out of date in our ever-evolving industry. Imagine trying to build IKEA furniture with outdated instructions: sure, you can figure it out, and hopefully it stays together, but having an accurate guide makes life so much easier.

Day 30

After that hackathon I packed, moved to LA, did another HoloLens weekend hackathon, and had an important presentation the following week. Even with all that, I managed to push many updates to my code and write up important documentation for it as well. Day 30 hit, and my goal now is to see how long I can keep it up. It’s become more of a habit now, kind of like working out or brushing your teeth.

Just thought I’d share some of the life updates with everyone.

Happy Hacking

-TheNappingKat

Bots

TL;DR

Developing with LUIS is easy but slightly tedious. This post shares some tips for developing with LUIS and provides links to get started.

In part 1 I talked about my experience getting started with the Microsoft Bot Framework and getting responses from simple keywords. Which is great; but I want users to be able to have a conversation with my bot, ALIE (another The 100 reference). In order to get my bot to understand natural language, I used Microsoft’s LUIS, part of their Cognitive Services suite, and integrated it into my bot.

LUIS

LUIS (Language Understanding Intelligent Service) is a service in Microsoft’s extensive Cognitive Services suite. It provides pre-built models from Bing and Cortana for developers to use in their applications. LUIS also allows developers to create their own models and generates HTTP endpoints that can be pinged to return simple JSON responses. Below is a tutorial on how to use LUIS.
LUIS – Tutorial

Things to note

Video differences

LUIS-Features

LUIS has been updated since the release of this video, so there are some differences. The Model Features area on the right has been changed to reflect more of the features you can add:

Working with Intents

https://www.luis.ai/Help is a great resource. LUIS has very detailed documentation.

LUIS supports only one action per intent. Each action can include a group of parameters derived from entities. A parameter can be optional or required; LUIS assumes that an action is triggered only when all the required parameters are filled. Intents will be the main driving force of your bot’s responses, and actions come into play when publishing.

Publishing Model for Bot Framework and SLACK

Here is the link to how to publish your model: https://www.luis.ai/Help#PublishingModel.
What they neglect to mention is that when publishing for the Bot Framework or Slack you need to be in preview mode to access those features, since they are still in beta. To get to the preview of LUIS, click the button in the top right of the page; it takes about a minute to regenerate the page.

LUIS_Publish1

Publishing – Action needed

Now, this next part might change soon since the beta keeps being improved upon. When I first wanted to publish the model with the Bot Framework, the service required me to make at least one of my intents return an action.

Adding Action

In the preview, select an intent. A window will pop up. Select Add Action.

LUIS_AddingAction

Next, check the Fulfillment box.

The fulfillment type determines what type of response will be included in the JSON object. I’ve selected Writeline since all of the actions that I have on ALIEbot so far do not use the integrated services, like weather or time.

After you select a fulfillment type you can add parameters and an action setting (which is what will be returned in the JSON object).

In my application I returned the parameter XName, which is the name of the character that was in the user’s response.

Integrating LUIS Code

Setting up LUIS class

Before you can receive responses from LUIS, you need to import the appropriate LUIS libraries and create a class that extends LuisDialog. This class must also include the [Serializable] and [LuisModel] attributes.


The LUIS model ID and key can be found when you are publishing your application:
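
Here’s a rough sketch of what that class declaration can look like (the class name is mine, the placeholders are where your own model ID and subscription key go, and exact namespaces can vary slightly with the SDK version):

using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;

[LuisModel("YOUR_MODEL_ID", "YOUR_SUBSCRIPTION_KEY")]
[Serializable]
public class AlieDialog : LuisDialog<object>
{
    // Fallback for when LUIS can't match any of your intents
    [LuisIntent("")]
    public async Task None(IDialogContext context, LuisResult result)
    {
        await context.PostAsync("Sorry, I didn't understand that.");
        context.Wait(MessageReceived);
    }
}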

LUIS Intent Methods

Your class must now have methods that act upon the specific LUIS intents.

        //This needs to match the Intent Name from JSON
        [LuisIntent("XName")]
        public async Task XNameResponse(IDialogContext context, LuisResult result)
        {
            var entitiesArray = result.Entities;
            var reply = context.MakeMessage();
            foreach (var entityItem in result.Entities)
            {
                if (entityItem.Type == "Character")
                {

                    switch (entityItem.Entity)
                    {
                        case "raven":
                            reply.Text = "Raven the Best";
                            reply.Attachments = new List<Attachment>();
                            reply.Attachments.Add(new Attachment
                            {
                                Title = "Name: Raven Reyes",
                                ContentType = "image/jpeg",
                                ContentUrl = "URL_PIC_LINK",
                                Text = "It won't survive me"
                            });
                            break;
                        case "clarke":
                            reply.Text = "Clarke is the main character";
                            break;
                        default:
                            reply.Text = "I don't know this character";
                            break;
                    }
                    await context.PostAsync(reply);
                    context.Wait(MessageReceived);
                }
            }
        }

Summary

Once you have set up a LUIS model, you need to publish it. After it has been published, your bot can connect to it via the [LuisModel] attribute. Connecting the bot to LUIS enables the bot to understand the natural language your users will use; however, you still need to code the responses with [LuisIntent] attributes on your Task or Post methods in the Bot Framework.

So that should be everything to get LUIS working. Remember that this content is all in Beta and is subject to change, but I’ll keep it updated as much as possible.

Happy Hacking!

-TheNappingKat

Unity

TL;DR

Which channels you’re using matters! Test on all your desired platforms before publishing code.

Testing out the limits of the Bot Framework, I tried to create multi-line responses for my bot. The Text property of a reply supports Markdown, so I thought it would be easy enough to implement. However, I quickly realized it didn’t always look the way I wanted. Here are some tips for getting your responses to look just right =).

These examples all use

reply = context.MakeMessage();

and post the reply with the PostAsync method, since my responses are all Tasks:

  await context.PostAsync(reply);
  context.Wait(MessageReceived);
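
Put together, each example below lives in a method shaped roughly like this (the method name is just for illustration, and exact message types depend on your SDK version):

public async Task FormattedReplyAsync(IDialogContext context, IAwaitable<Message> argument)
{
    var message = await argument;   // the user's incoming message (not used here)

    var reply = context.MakeMessage();
    reply.Text = "Hi I'm one line \n\n" +
                 "I'm line two";    // swap in any of the snippets below

    await context.PostAsync(reply);
    context.Wait(FormattedReplyAsync);
}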

Multi-Line Responses

You must use \n\n in the string.

Input:

reply.Text = "Hi I'm one line \n\n " +
"I'm line two" +
"I'm line three?" ; 

Output Web:

Output Facebook:

NewLine-FB

Lists

In a list you must have the new-line syntax \n\n as well as an ‘*’ with a space after it; be careful here, since ‘*’ is also used for italics. You can also see that the spacing is slightly different between the two channels.

Input:

reply.Text = "Hi I'm one line \n\n" +
"* Item 1 \n\n" +
"* Item 2 " ; 

Output Web:

Output Facebook:

List-FB

Block Quote with Horizontal Rule

Quoted text must have ‘>’ with one space after it to denote that the next chunk of text will be a quote. The horizontal rule is marked by ‘---’. We can see the limitations between channels even more in this example.

Input:

  reply.Text = "Block quote below bar \n\n" +
    "---" +
    "\n\n > Something about life. I'm an existential quote \n\n" + 
    "-BOT ";

Output Web:

HRQuote-Web

Output Facebook: 

HRQuote-FB

Headers / Bold, Italics and Strike Throughs

This time there are drastic differences between Facebook and the Web. Note that with headers you must type \n\n after the header text, or the entire string will be part of the first header. Typing ‘***’ around text will get you bold italics. However, Facebook does not render ANY of these.

Input:

  reply.Text = "# Don't know if I need new lines \n\n" +
     "~~You **don't** *need* new lines~~ \n\n" +
     "***yes you do***";

Output Web:

Headers-Web

Output Facebook: 

Headers-FB

Links and Pictures in an Ordered List

Some more differences with Facebook and the Web, but less so. Remember to put \n\n after every item in your list and to leave a space after the ‘.’ following the number.

Input:

  reply.Text = "### List \n\n" +
     "1. Link: [bing](http://bing.com) \n\n" +
     "2. Image Link: ![duck](http://aka.ms/Fo983c)";

Output Web:

Links-Web

Output Facebook:

Links-FB

Hope this little guide helps.

Happy Hacking!

– TheNappingKat

Bots

Microsoft released their new Bot Framework earlier this year at the Build conference. So, naturally, I wanted to create my own bot, and eventually integrate it into a game. In this post I talk about some of my learnings and what the Bot Framework provides.

I decided to work with the Microsoft Bot Connector, part of the Microsoft Bot Framework, as a way to get my bot up and running on the most platforms as quickly as possible. I haven’t worked with bots in the past, so this was my first dive into the territory. My bot was built in C#; however, Microsoft’s Bot Framework bots can also be built in Node.js. My colleague Sarah wrote a post about getting started with Node here: https://blogs.msdn.microsoft.com/sarahsays/2016/06/01/microsoft-bot-framework-part-1/

The bot I wanted to create was a simple chat bot that I could build upon for interactivity with users. If you’re familiar with The 100, you’ll figure out what my bot does. All the code for what I did can be found here: https://github.com/KatVHarris/ALIEbot

What I used

Microsoft Bot Framework is super powerful and makes it easy to create a bot of your own. You can use any of the following to get started:

  • Bot Connector
  • Bot Builder C#
  • Bot Builder Node.js

I used the Bot Connector, an easy way to create a single back-end and then publish to a bunch of different platforms called Channels.

I started out by following the steps in the getting started section of the docs and downloaded the Bot Template for Visual Studio here: http://docs.botframework.com/downloads/#navtitle

** Note: It’s really important for Visual Studio to be up to date in order to use this, and to include the web tools in the Visual Studio setup when you install. **

** Another Note: if you have never downloaded a template for Visual Studio before, here are some instructions: http://docs.botframework.com/connector/getstarted/#getting-started-in-net. You’ll have to save the zip into the %USERPROFILE% folder on your computer. **

Set Up

Open a new project with the Bot Template and install the NuGet package for Microsoft’s Bot Builder: Install-Package Microsoft.Bot.Builder

Message Controller

The file that dictates the flow of responses is MessagesController.cs in the “Controllers” folder. The class handles system messages and allows you to control what happens when a message comes through.

Adding the following conditional statement to the Post function allows you to cater the response to the user.

Let’s create a simple response:

public async Task Post([FromBody]Message message)
{
    if (message.Type == "Message")
    {
        return message.CreateReplyMessage($"You said:{message.Text}");
    }
    else
    {
        return HandleSystemMessage(message);
    }
}

Now, you can stick with this model and add in bits of functionality, but I like to add a more powerful messaging system with Dialogs.

Dialogs

** Note: there are slight differences between the Bot Connector Dialogs for Node vs. C#. Everything in this post pertains to the C# version. **

Bot Builder uses dialogs to manage a bot’s conversations with a user. The great thing about dialogs is that they can be composed with other dialogs to maximize reuse, and a dialog context maintains a stack of the dialogs active in the conversation.

To use dialogs, all you need to do is add the [Serializable] attribute and implement the IDialog<> interface from the Bot Builder library.

Dialogs handle asynchronous communication with the user. Because of this, the MessagesController will instead use the Conversation class to make an async call to a dialog Task that uses the context to create a reply with more functionality. What does that all mean? It means that with dialogs, you can implement a conversation with a user asynchronously when certain keywords are triggered. For example, if the user types in the keyword “reset”, we can use a PromptDialog to add a confirmation (see the sketch below). One of the most powerful ways of creating an actual dialog between the user and the bot is to add Chain Dialogs.
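
Here’s a minimal sketch of that “reset” idea, modeled loosely on the EchoBot sample; exact message types (Message vs. Activity) depend on which version of the SDK you’re on, so treat this as an outline rather than drop-in code:

using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;

[Serializable]
public class EchoDialog : IDialog<object>
{
    public async Task StartAsync(IDialogContext context)
    {
        context.Wait(MessageReceivedAsync);
    }

    public async Task MessageReceivedAsync(IDialogContext context, IAwaitable<Message> argument)
    {
        var message = await argument;

        if (message.Text == "reset")
        {
            // PromptDialog asks the user to confirm before we do anything drastic
            PromptDialog.Confirm(
                context,
                AfterResetAsync,
                "Are you sure you want to reset?",
                "Sorry, I didn't get that.");
        }
        else
        {
            await context.PostAsync($"You said: {message.Text}");
            context.Wait(MessageReceivedAsync);
        }
    }

    public async Task AfterResetAsync(IDialogContext context, IAwaitable<bool> confirm)
    {
        var confirmed = await confirm;
        await context.PostAsync(confirmed ? "Reset done." : "Okay, nothing was reset.");
        context.Wait(MessageReceivedAsync);
    }
}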

Chain Dialogs

Explicit management of the stack of active dialogs is possible through IDialogStack.Call and IDialogStack.Done, explicitly composing dialogs into a larger conversation. It is also possible to manage the stack of active dialogs implicitly through the fluent Chain methods. To see all the possible ways to respond to a user with dialogs, look at the EchoBot sample on GitHub.

Publishing your Bot

Okay, now that you have tested your bot and gotten it to respond to your user, how do we publish? The steps to getting your bot on the Bot Connector are here: http://docs.botframework.com/directory/publishing/#navtitle

TIP 1: Update Visual Studio and tools

As I said earlier, make sure all of your tools are up to date. My web tooling was not on the latest version when I first tried to publish my bot, so the directions were slightly different from the tutorial and caused issues later.

TIP 2: Don’t skip any of the steps

The first time I published my bot it didn’t work. I still have no idea why, but I believe it was because I missed a minor step in the creation process.

TIP 3: It should work immediately

Your bot should work immediately after you activate the Web channel. If it doesn’t, check your code again. My first bot was not working immediately, so I ended up just registering a new one with the same code. That worked.

TIP 4: Web disabled

If you look at my channel picture, you can see that the Web channel is registered but its status says “disabled”.

Don’t worry about this. Your bot’s Web channel should still work.

TIP 5: Registering your bot

You don’t need to register your bot for it to work. Registering your bot will allow it to be in the publish gallery later. Make sure your bot does something useful before submitting as well. Simple chat bots do not count.

That’s it! You should have a bot published and all ready to Chat with.

Next Steps – LUIS

Okay, so there are many ways that your bot can respond to your user. However, specific keywords are needed, and that is less user friendly and conversational than we would like. In order to make our bot respond to the natural language users will most likely be using, we can integrate LUIS, which I’ll talk about in the next post, part 2.

Reading the Bot Framework Docs was extremely helpful when getting started, so if you haven’t looked at them I recommend you take a look here: http://docs.botframework.com/

The Microsoft Bot Framework is open source, so you can help contribute to the project here: Microsoft/BotBuilder. They also have more samples included in their source code.

Happy Hacking!

-TheNappingKat

Error Fixing

Here is a list of errors I’ve seen while working with the HoloLens Emulator. I’ll be adding to the post regularly. If I’ve missed something, please comment below and I’ll add it.

For this demo I was following the HoloAcademy Origami Tutorial: https://developer.microsoft.com/en-us/windows/holographic/holograms_101e

System Specs:

  • Windows 10 Enterprise
  • Intel(R) Core(TM) i5-4300U
  • 2.50 GHz
  • RAM – 8GB
  • 64 bit, x64 processor

Error – Exception Code 0xc0000409

Solution: Check Versions of Unity and VS tools as well as Emulator Version

This error occurred because the latest Unity Editor build from the download link was not compatible with the Emulator Unity Tools for Visual Studio. The Unity Editor version I had was 5.4.0b14. The Origami demo and the Emulator tools currently work with b10; if you’re looking at this post several months down the line, just make sure your versions are compatible. Also make sure you have VS Update 2 installed.

VersionB10

Error CS0234: the type or namespace name ‘WSA’ does not exist in the namespace ‘UnityEngine.VR’. Are you missing an assembly reference?

Solution: Make sure the correct version of Unity and VS Emulator Tools are installed. Then make sure the correct version of UWP tools are installed.

You should have the 10.0.10586 UWP tools, not the Win 10 SDK 10.0.10240. At the moment the Win 10 SDK conflicts with the tools for some reason when deploying the project. This may change in the future.

Error – Connectivity.Remote.Device.Ping()

Solution: Check whether Remote Tools version 10.0.10586 is installed; if not, try downloading the Remote Tools.

Error – Project not Deploying

There are a number of reasons for a faulty deployment:

Solution: Wait. The first time I ran the Emulator it took 15 minutes to run and load my app.

Solution: Make sure your versions are correct.

Solution: Look at the project in Visual Studio and make sure there are no popup windows that are halting the debugging and stopping the deployment. The first time you run the emulator, Visual Studio will ask if you want to continue debugging in Emulator mode; if you select “Continue (always use this option)”, the deployment process won’t hang waiting for your permission.

Solution: Make sure you don’t have too many other programs running

Error – Project is Deploying to Emulator but not starting

Solution: Hit the plus on the right side of the menu window.

This will take you to all the apps running in the emulator; you should see your app there.

Error – No ‘Home’ or ‘Menu’ Window in emulator

Solution: Hit the Windows key. If that doesn’t work, restart.

Error – Stuck in Windowed Mode of the Emulator

If you see the Unity logo with a white screen, you are stuck in the emulator’s windowed mode and unable to run your app.


Solution: Turn off the emulator. Clean your solution, build it, then hit Run again for the emulator. The emulator is still new and sometimes gets stuck.

Error – HoloLens Emulator is not appearing in Visual Studio Devices drop-down

Solution: Make sure the tools are downloaded and you are in x86 mode with Release Mode selected

So those are the main ones. Again, I’ll keep adding to this list. Let me know what cool projects you’re working on and if you ran into more errors that I can add =)

Happy Hacking!

-TheNappingKat