Unreal

So in recent months, life has been very hectic, but I’ve finally managed to get a break and do some more development in Unreal Engine. Taking a break from the Carnival Game that I was developing on Twitch, I’ve pivoted to Virtual Production, getting accepted to Epic’s Unreal Virtual Production Fellowship Program: https://www.unrealengine.com/en-US/fellowship

The first tool I made during that fellowship combined the new SunSky component with the old SkySphere component, allowing us to use the SunSky to light the clouds and stars instead of a gradient. I even made a video on how I did it.

Unreal

TL;DR

I’m a Unity Engine developer trying to learn Unreal Engine. So I decided to make a VR carnival made up of your typical carnival games to learn the basics of Unreal Engine development.

***Note: I’m a Unity developer, so most of these terms are going to be in Unity vocabulary instead of Unreal. The entirety of this series will be like this as I’m targeting myself and other Unity developers.***

What I’m making

So to start off, I’ll give some clarity around what I’m doing; I talk about my “why I’m doing this” in depth in a previous post. But in summary: I want to learn Unreal, and the best way for me to learn is to do. And the best way for me to do is to solidify an idea for a game and execute it. The game idea is Carnival VR; I only realized weeks later that this game kind of already exists, but who cares, this is for me to learn.

Why choose VR and why a carnival game? Well, VR/AR, XR, whatever we call it next year, is the industry that I mainly work in. And since phone development is a pain and I don’t want to think about optimization until I have a solid foundation for development, I decided to tackle VR first. Now, I chose to build a carnival because carnival games have interesting interaction mechanics with straightforward designs. Throwing a ball, throwing a ring, hitting a lever with a mallet, aiming a projectile at a target: all of these interactions will force me to learn how to manipulate objects in different ways and let me explore how interaction systems can be built in Unreal. Carnivals are also colorful and loud, giving me the opportunity to learn how materials in UE4 work as well as sound. Also, it seems fun!

So out of all the different carnival games I could choose, I decided to go with the Clown Teeth game… It sounds a bit weird and scary, but I swear it’s fun; at least I think I remember having fun.

Clown Teeth game images

This game has a bunch of basic mechanics and foundational systems that I want to learn:

  1. General Lighting
  2. Creating Collidables and Collision Events
  3. Object to Object communication
  4. Physics Constraints
  5. Picking up and throwing objects
  6. Game Manager
  7. Timer
  8. UI
  9. Materials
  10. Sound

This list can also be applied to any game in general and is scalable for developing the other carnival games.

Architecture and Design

So I’ll briefly discuss how I’m thinking about architecting this game. I feel like architecture and design are left out of a bunch of tutorials, but I like having them here since they answer WHY I’m deciding to build things a specific way.

Design

For this game I want the player to be able to “throw” a set number of “balls” (let’s start with 3) at “Clown Teeth” (let’s start with 4) that will be a set distance from the player (let’s start with 3 meters). I also want the “score” to be displayed at all times for the user. End conditions for the game are either 1) all the balls have been thrown, 2) all the teeth have fallen over, or 3) the player leaves the game. When the game ends on a win condition, a “particle” or “lighting” effect will trigger; when the game ends on a lose condition, a different “particle” or “lighting” effect will trigger. The game restarts when the user hits the “restart/start button”.

Components / Requirements

Great, now that we know what we want to make, I’ll list out what we’ll need for this game:

  1. Paddle GameObject to act as the teeth – x4
  2. Ball GameObject to act as the ball – x3
  3. Reset/Start Game Button to reset the game
  4. A table to spawn the ball objects
  5. UI to display Score
  6. Particle/Lighting TBD – x2

Architecture

Architecture is just defining how classes will be structured and what the relationship between scripts will be.

I’m not going to draw a UML, this is just a brief description:

Paddle will be the generic logic for the Teeth. I want this to be generic because I know other carnival games have objects that act like these teeth paddles. In my mind, that means Paddle will be a base object that others can extend from OR will be made with an interface that other games’ paddles will use.

Ball is very generic. Many games have balls, so whatever logic I use to make this object pick-up-able (sp?) and throwable must be reusable everywhere. I’m thinking of extending whatever generic pick-up-and-throw functionality might come in the VR starter pack. Balls will also have to be destroyed after a period of time or after going out of some boundary.

Both Ball and Paddle need to collide with each other. The paddle will own the collision event and tell its game manager that it has been hit. This means that paddle needs a reference to the game it’s a part of OR the GameManager needs to only listen to events from Paddles it owns. I’ll probably start with the first suggestion for ease of development but will need to change it in the future.

Game Manager needs to handle score logic and reset logic. So when a paddle has been hit, if the manager cares, it will handle the logic of how that hit affects the score. It also needs to tell the UI to update with the new score value. When the user hits the reset button, the Game Manager needs to reset everything, meaning it has a reference to the ball spawn points and the initial position and orientation of all the paddles. Because many games will share this logic, I’d also want to create an interface for game managers.

Reset Button needs to interact with the user’s hand. It also needs to tell the Game Manager that a reset has been requested. This will also be in many games, so it will also need an interface.
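To make those relationships a bit more concrete, here’s a rough sketch in Unity-flavored C# (since that’s the vocabulary I’m using anyway); every name and member here is illustrative, not what the final classes or Blueprints will actually be:

// Rough sketch of the relationships described above (Unity-style C#).
// All names are placeholders for illustration only.
public interface IPaddle
{
    // Raised when something (e.g. a Ball) knocks the paddle over.
    event System.Action<IPaddle> Hit;

    // Restores the paddle's initial position and orientation.
    void ResetPaddle();
}

public interface IGameManager
{
    int Score { get; }

    // Called by the reset/start button.
    void ResetGame();
}

public class ClownTeethGameManager : IGameManager
{
    private readonly IPaddle[] paddles;

    public int Score { get; private set; }

    public ClownTeethGameManager(IPaddle[] paddles)
    {
        this.paddles = paddles;

        // Only listen to events from paddles this manager owns.
        foreach (var paddle in paddles)
            paddle.Hit += OnPaddleHit;
    }

    private void OnPaddleHit(IPaddle paddle)
    {
        Score++; // update the score...
        // ...then tell the UI to refresh and check the win condition here.
    }

    public void ResetGame()
    {
        Score = 0;
        foreach (var paddle in paddles)
            paddle.ResetPaddle();
        // Respawn balls at their spawn points here as well.
    }
}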

And that’s where I’ll leave it for now.

Summary

So I’m making this game in Unreal Engine to learn Unreal Engine. It’s a game that is comprised of a bunch of mini-games. Cool.

I’m not sure how to sign off on these posts yet. But I’ll just say, thanks for reading.

You can watch the journey live on Twitch: https://www.twitch.tv/thenappingkat/

And you can follow on Twitter @KatVHarris

Happy Hacking!

The Napping Kat

Unreal

I, Katherine Harris, a certified Unity developer, am making the plunge into Unreal Engine, and I’m doing it live!

I sent this tweet out last night before live broadcasting myself starting a new Unreal project. After the 3-hour endeavor of me just getting a skybox, a cube and a sphere that can collide with each other, I had a few people asking me, “But why…?”

Good question. Why at this time, am I deciding to learn a new engine? Is it work? Is it money? A hackathon?

Well, some history. Unity has been my engine of choice since 2011. I worked for Microsoft for 5.5 years. C#, .NET, Prefabs, MonoBehaviours: this is my comfort zone. Unity development is my English and Unreal has been my Japanese. I tried learning Unreal back in 2011 as well and could not grasp it. Since then I’ve tried to hop in and learn every year or so with no success.

Until now.

The only thing that has really changed in my journey to learn Unreal is that I finally have a person who can answer all my questions and hold my hand through development. That’s it.

Before January of 2010 I did not know ANYTHING about coding. I learned what coding was and how computers work by being in the most basic CS class at my university. It was nicknamed “baby CS” and was meant to be a 000, 100, and 101 course. It was 3 hours per class because the lab was during the class itself. There were abundant TAs and a teaching professor (meaning he only taught and didn’t do research) who held our hands as we learned how to walk. I believe that class is the only reason I’m a software engineer.

For me to learn something I have to really sit with it and explore every aspect of how it works; the foundation/building blocks. I ask a lot of questions. Sometimes it takes a while for me to “get it” but once I do, applying that foundational knowledge in new and interesting ways becomes second nature.

Back in August 2019 my desk was moved next to a colleague, Nick. He is an Unreal expert and prototyper, just like I am with Unity. I started looking over his shoulder when he would mock something up, started asking questions, and he graciously answered them all. He encouraged me to download the engine and play around with it and helped me understand some of the basics and tips. But I never made anything. So after he stopped sitting behind me in October 2019, my momentum took a hit. Work got more hectic and I eventually stopped even trying.

Now the frustrating thing is that with other technology, if I don’t have someone to learn from, it’s fine. I take the time, make a thing, and learn by doing. But the problem with Unreal is that when I try to take the time and do the thing, my brain just goes… “Why aren’t you just doing this in Unity?” The concepts of making a game are the exact same between the engines, so my brain just locks in its cycles: it knows how to do the thing in Unity, and yet this wall of spaghetti, wire, node code means nothing to it.

Back to my English versus Japanese analogy. I’m a native English speaker; no, I’m an English professor. I have mastery over the language, and I’m trying to learn Japanese. I have no teacher. I watch anime, but other than knowing a few words and that Japanese uses kanji, I can’t speak or write it.

Unreal uses visual scripting. It’s a completely different language. I know what I want to say in English, but I have no idea where to start in Japanese.

Now I have a person to give me an alphabet and a dictionary. But that’s still not enough to really learn a language. Immersion is also key.

So as a new 2020 goal, I’ve decided to capitalize on finally having someone amazing and patient to hold my hand as I dive into a familiar but brand new world.

You can watch the progress on Twitch: https://www.twitch.tv/thenappingkat

Episode 1 is here: https://www.twitch.tv/videos/540398428

I’ll also be making small blog posts after each session summarizing what I’ve learned. =)

So let’s do this. Happy Hacking!

The Napping Kat

P.S. I still love Unity. I use it every day (well 5 days a week). It’s still my native language and I’m still learning new things with all the updates. But learning new languages sometimes helps you understand your native one even better.

Bots

TL;DR

This post goes over the steps necessary to make your bot open to the public: 1) load your bot code onto a Web App on Azure (Microsoft’s cloud), and 2) register your bot.

Hosting Code on Azure

Okay, in the last post I went over creating a bot with Microsoft Cognitive Services’ QnA Maker. The code works natively on localhost and pings the online knowledge base when questions are asked, but it’s not yet hosted anywhere.

What you need to get started:

  • A Microsoft Live ID and an Azure subscription

Setting up Web App

Go to: https://portal.azure.com

** Note: Be sure that you are on the new Azure portal, as the old dashboard is slowly being deprecated. **

When you first log onto the Azure portal, you will see a dashboard with a bunch of tiles. Click on the ‘New +’ symbol at the top to create a new service, then select “Web + Mobile” > Web App. Fill out the form, select ‘Pin to dashboard’, and click ‘Create’.

Once the Web App is created a tile will appear on your dash. Click it to access your application.

After clicking, you will be taken to the Overview of your Web App. On the right-hand side you should see ‘Deployment Options’. In Azure, the default connection with your Web App is an FTP endpoint. However, with Deployment Options we can choose from a variety of ways to deploy source code. I will connect mine to GitHub, but there are other options like Visual Studio Team Services or a local Git repository.

After you select ‘Deployment Options’ > Choose Source > “Configure required settings”, you’ll see a list of options. Select the desired one and connect to that service with the appropriate credentials.

Once you’ve connected to a source, or used FTP to upload your files, you can register your bot.

Registering your Bot

To register your bot, simply go to https://dev.botframework.com/ and click on “Register Bot”.

Fill out the Bot Registration form and use your Web App url (https://yoursite.azurewebsites.net/api/messages/) for the message endpoint input under Configuration.

** Note: Make sure you are using the secure url link for your message endpoint aka HTTPS **

After you’ve filled everything out and created a Microsoft App ID and password, click Register. You should be taken to a dashboard for your bot.

Linking your code to the Registered Bot

On your dashboard hit ‘Test Connection’. It should fail.

This happens because your code does not yet have the App ID and password authentication values.

In your Web.config file you should see the following lines:

  <appSettings>
    <!-- update these with your BotId, Microsoft App Id and your Microsoft App Password-->
    <add key="BotId" value="YourBotId" />
    <add key="MicrosoftAppId" value="" />
    <add key="MicrosoftAppPassword" value="" />
  </appSettings>

Copy and paste your MicrosoftAppId into the value attribute for the MicrosoftAppId key, and do the same for MicrosoftAppPassword with the password you obtained when you registered your bot.

Now push the updates to the Web App. If you hit Test Connection again, it should work! From there you can add multiple channels that your bot can communicate through. The Skype and Web channels are turned on by default, so you can get started with those two first.

And that’s all you have to do to get your bot online and ready to go. =)

Happy Hacking!

– TheNappingKat

Error Fixing, Windows

TL;DR

A mismatched taskbar error prevents everything in the taskbar from working, including the Windows key. Reset your shell; instructions are in the “How to Solve” section of this post.

The Problem

So, I love Windows 10’s multiple desktops (FINALLY!). But I noticed recently that when I switch back and forth several times in rapid succession, the taskbar has a tendency to lag, resulting in a mismatch with the correct desktop.

For example, Desktop 1 will have the corresponding taskbar for Desktop 2, and Desktop 2 will have the taskbar for Desktop 1.

How to Solve

Reset your shell. If you don’t know how to do that, here’s how.

Because this error prevents everything in the taskbar from working, including the Windows key, use Ctrl+Alt+Del to pull up the Task Manager.

Then find the first instance of explorer.exe. This is the shell containing the taskbar. End that process.

To run it again, click File > Run new task and type explorer; this will boot the shell again and get your taskbar running.

Happy Hacking

– TheNappingKat

Bots

TL;DR

LUIS may be overkill for the bot you want to create. If you only need your bot to answer questions (especially ones already on an FAQ site), try QnA bots from Microsoft Cognitive Services. QnA Maker automatically trains your service based on existing FAQ pages, saving a bunch of time. In this post, I walk you through creating one and the code needed for your bot to link to the service. QnA Maker is currently in preview as of January 2016; more information can be found at qnamaker.ai.

QnA Service vs. LUIS Service

First, what is Microsoft QnA Maker? Well, “Microsoft QnA Maker is a free, easy-to-use, REST API and web-based service that trains AI to respond to user’s questions in a more natural, conversational way.” It streamlines production of a REST API that your bot or other application can ping. So why use LUIS? If you want to automate a service that requires multiple responses from your user (i.e. phone automation systems, ordering a sandwich, modifying settings on a service), LUIS’s interface and pipeline manage that development process better.

Getting started with the QnA bots

First, go to QnAMaker.ai and sign in with your Live ID.

Once you’ve signed in, create a new service.

Type in the name of the service and the FAQ link you want to use; I’m linking to Unity’s FAQ page in this example. What’s great is that you can add more than one URL for your QnA bot to pull from. So if the site you are using has an FAQ that redirects to different pages to answer questions, you can add those other pages too. You don’t need to use a URL; uploading your own questions and answers works too.

Hit “Create” at the bottom of the page.

After you hit Create, you’ll be taken to a new page with the questions and answers that the service was able to identify from the source (URL or file) you provided. The questions and answers that the service identifies are called your Knowledge Base (KB).

Natural Language Testing

To train your service you can start plugging in natural-language questions, and the service will return the FAQ answers that best match. If the service can’t get a high enough probability for a single answer, it will return multiple answers that you can choose from.

You also have the ability to provide alternate phrasings for the question you just asked on the right-hand side of the tool, so that they can map to the same answer.

Any time you make an adjustment to what the service returned for an answer, be sure to save what you’ve done by clicking the Save and Retrain button.

Once you’ve finished training the service you can hit Publish. You’ll be taken to a page with a summary of the number of changes you made before the service is published.

** Note: The service won’t be published until you hit the publish button on this summary page. **

Once your service is published, the site will provide a sample HTTP request that you can test with any REST client.

Code – Connecting it to Microsoft Bot Framework

If this is your first time working with the Microsoft Bot Framework, you might want to check out my post about it here: Microsoft Bot Framework, or read up about it on Microsoft’s site: https://docs.botframework.com/en-us/.

For this example I’m using:

The MessagesController class and a new QnAMakerResult class hold the most important parts of the code. Depending on the complexity of your bot, you may want to look into dialogs and chains instead of putting your handler in the MessagesController class.

QnAMakerResult

public class QnAMakerResult
{
    /// <summary>
    /// The top answer found in the QnA Service.
    /// </summary>
    [JsonProperty(PropertyName = "answer")]
    public string Answer { get; set; }

    /// <summary>
    /// The score in range [0, 100] corresponding to the top answer found in the QnA Service.
    /// </summary>
    [JsonProperty(PropertyName = "score")]
    public double Score { get; set; }
}

Be sure to add the Newtonsoft.Json library to the class.

using Newtonsoft.Json;

Message Controller

Inside the MessagesController Post task, in the if (activity.Type == ActivityTypes.Message) block, add the following (you’ll also need using System.Net; for WebClient):

ConnectorClient connector = new ConnectorClient(new Uri(activity.ServiceUrl));
var responseString = String.Empty;
var responseMsg = "";

//De-serialize the response
QnAMakerResult QnAresponse;

// Send question to API QnA bot
if (activity.Text.Length > 0)
{
    var knowledgebaseId = "YOUR KB ID"; // Use knowledge base id created.
    var qnamakerSubscriptionKey = "YOUR SUB KEY"; //Use subscription key assigned to you.

    //Build the URI
    Uri qnamakerUriBase = new Uri("https://westus.api.cognitive.microsoft.com/qnamaker/v1.0");
    var builder = new UriBuilder($"{qnamakerUriBase}/knowledgebases/{knowledgebaseId}/generateAnswer");

    //Add the question as part of the body
    var postBody = $"{{\"question\": \"{activity.Text}\"}}";

    //Send the POST request
    using (WebClient client = new WebClient())
    {
        //Set the encoding to UTF8
        client.Encoding = System.Text.Encoding.UTF8;

        //Add the subscription key header
        client.Headers.Add("Ocp-Apim-Subscription-Key", qnamakerSubscriptionKey);
        client.Headers.Add("Content-Type", "application/json");
        responseString = client.UploadString(builder.Uri, postBody);
    }

    try
    {
        QnAresponse = JsonConvert.DeserializeObject<QnAMakerResult>(responseString);
        responseMsg = QnAresponse.Answer.ToString();
    }
    catch
    {
        throw new Exception("Unable to deserialize QnA Maker response string.");
    }
}

// return our reply to the user
Activity reply = activity.CreateReply(responseMsg);
await connector.Conversations.ReplyToActivityAsync(reply);

You can now test your code by running it and opening up the emulator. Be sure to pass in the correct localhost port in the emulator to connect to your project. The default ID and password are blank, so you won’t have to add anything when testing locally.

Your Bot

Okay, so far we’ve created a REST service that will answer questions based on a knowledge base built from specific FAQs. That service can be accessed by any type of application, including the Microsoft Bot Framework. With the code snippets above, we can use the Bot Framework to manage user input before pinging the QnA REST service. However, we still need to build and host the bot.

We need to register a bot on the Microsoft Bot Framework site. You can host the code on a Web App within Azure and then connect that to the registered bot. I use continuous GitHub deployment to update my code. The Microsoft Bot Framework enables the Web and Skype channels by default, but there are others that you can easily add to your bot, like Slack and Facebook Messenger. If you follow my previous post, I have instructions on how to do this, or you can look at the Microsoft Bot Framework documentation.

That’s it. You should have your FAQ bot up and working within a couple hours =)

Happy Hacking!

– TheNappingKat

Oculus, XR

TL;DR:

This post talks about the technical implementation of movement in different VR experiences I’ve tried in the last 2 months. All of them delivered great immersive experiences without breaking presence, and I wanted to share my analysis of their techniques. Part 2 will analyze environment interactions. All the experiences I’m talking about showcased different hardware (Vive, Touch, Omni, and gamepad) and genres.

Teleportation is one of the best techniques to use for movement. If you don’t want to use teleportation, use constant forward movement. If you want to implement strafing, make sure it’s smooth and continuous, and try limiting the degree of movement.

Reviews/Technical Analysis

So the past 3 months have had me traveling around the U.S., enabling me to take part in many amazing gaming and VR conferences: PAX West, GaymerX4, and Oculus Connect 3. I wanted to use this post to talk about some of my experiences with the different games and hardware, as well as dive into the unique technical solutions that each of these experiences implements. Most of what I talk about will revolve around player movement and environment interaction, two of the most common areas where presence is broken.

Velocibeasts by Churroboros – HTC Vive Multiplayer

Technical Highlights: Attack and Movement Controls

Game Description:

“Have you ever wanted to throw something at someone? VELOCIBEASTS can make that dream a reality. Pick a weapon, throw it, teleport, and kill your friends.

Battle as a variety of animals equipped with floating mecha suits in this fast paced multiplayer battle arena VR title.”

-Churroboros

Review:

I managed to get a pretty in-depth demo at GaymerX4 this year. The highlight of this game is the attack and movement controls. In VR, player movement is one of the fastest ways to break a user’s sense of presence. So, why am I impressed? I’ll explain. In the game you are a mecha-suit beast in a large arena, trying to kill your opponent.

You attack by throwing your weapon toward your enemy, similarly to casting a fishing line. Pressing and holding the trigger grips your weapon, and releasing the trigger throws it. However, the interesting part of the gameplay is when you press the trigger again. When you do, you instantly teleport to your weapon’s location. The mix of coordinating attacks and movement simultaneously creates a fun, fast-paced experience that immerses players.

Now, in general I’m not prone to motion sickness in VR. However, in most first-person shooter/fighting games, falling and strafing cause the most motion sickness issues for me. Velocibeasts avoids both by allowing your beast to float (because it’s in a mecha-suit), which avoids falling, and by using teleportation, which avoids strafing. The floating mechanism also gives users complete 6 degrees of freedom for moving around the arena.

I’m impressed because many games use the teleportation technique, but not many of them integrate it so well into gameplay. The movement controls were also very easy to use, and it only took a few throws to get the timing and rhythm down. Below are some pictures of me playing the game, getting really into it.

Links

@ChurroborosVR
https://www.facebook.com/ChurroborosVR/

World War Toons by Reload Studios – Omni, Gamepad, Rift, PSVR

Technical Highlights: Full FPS style game, Good use of strafing controls, and Omni integration

Game Description:

“World War Toons is a cartoony first-person shooter where you never know if you’re going to turn the corner and see a rocket rushing towards you, grand pianos falling from the sky, or a massive tank staring you in the face.”

– Reload Studios

Technical Review and First Opinions:

I played this game at PAX West and got the opportunity to play with the Rift and gamepad, as well as the Rift with the Omni. It was a very polished game; the mechanics played like an FPS, which isn’t always the best thing in VR. World War Toons is one of the few games that I’ve played that has strafing (lateral movement independent of camera position) in VR. The reason why VR experiences shy away from this? Because users get sick, really, really quickly.

Now, despite having strafing, I only felt nauseous a few times during gameplay, specifically when my character was falling off the side of a wall and when being launched in the air by trampolines.

The creators limited movement to just the d-pad directions (left, right, forward, backward) to reduce the amount of discomfort when players were strafing.

However, when playing the game on the Omni, I had no issues with nausea. The hardware made a huge difference when the character was launched about the arena or falling off drops. It was also completely immersive compared to full gamepad controls.

Links

http://voicesofvr.com/455-vr-first-person-shooters-esports-with-world-war-toons/

roqovan.com

@StudioRoqovan

Eagle Flight by Ubisoft – Rift, Gamepad, PSVR

Technical Highlights: Player Movement, Gamepad, Multiplayer

Description:

“50 years after humans vanished from the face of the Earth, nature reclaimed the city of Paris, leaving a breathtaking playground. As an eagle, you soar past iconic landmarks, dive through narrow streets, and engage in heart-pounding aerial dogfights to protect your territory from opponents.”

-Ubisoft

Technical Review and First Opinions:

I first saw this game at GDC this year, and at Oculus Connect 3 I was able to play it: a 3v3 multiplayer capture-the-flag game. My team won 7-0. YAY!

Game start: In the opening of the game you can see your fellow teammates as eagles in a lobby for matchmaking. You are able to look around at your teammates’ eagle bodies, whose head movements correspond to the players’ head movements. I mention this because these little details increase player immersion. Once everyone is ready in the lobby, the game begins.

Game play: When the game finally starts, the camera fades in onto the scene. You, as the eagle, are already in flight. In VR, if you want to have the player move around your scene (not teleportation), there are only a few ways to do it without getting them sick. One of the ways is to have a semi-constant speed and always have forward movement in the direction the player is looking. Eagle Flight employs this technique, with head tilts to turn yourself left and right. However, I did still feel some discomfort as I was getting used to the controls of moving around Paris.

The other thing Ubisoft does to help immerse the player is add a beak to the player’s view. There have been VR studies showing that adding a nose, or another grounding element, to the player’s view helps to immerse them faster and alleviate motion sickness. I hadn’t seen any games employ this technique, though, until this one.

The third technique Ubisoft uses for movement and immersion is vignetting the player’s view when increasing speed, similar to tunnel vision. I’ve seen this technique a few times when player movement is increased. I like this technique since it eases my motion sickness by limiting the amount of visual input.

Eagle Flight is an Oculus Rift gamepad game, and it’s also coming to PSVR. I usually dislike gamepad games for VR because I think they take away from presence; however, this game only used a few buttons: firing, shield, and speed controls. If you are going to use a gamepad for your first-person VR game, I suggest simplifying the controls, keeping them as intuitive as possible, and styling your game on a third-person view.

You can see some of the gameplay from E3 here:

Links

https://www.ubisoft.com/en-US/game/eagle-flight

@UbisoftVR

Summary

Figuring out which technique you want your user to use to explore your virtual world is important. Take into consideration the limits of hardware and the style of gameplay when making your decision. In Velocibeasts, I doubt I would have enjoyed the gameplay as much if I had to alternate between teleporting and fighting my opponent, given the game’s fast-paced flow. Eagle Flight had to center its gameplay around constant movement since players are birds. It would have felt super disconnected if our birds were teleporting everywhere instead of peacefully gliding.

Teleportation is one of the best techniques to use for movement. If you don’t want to use teleportation, use constant forward movement. If you want to implement strafing, make sure it’s smooth and continuous, and try limiting the degree of movement.

Now that I’m done traveling, more videos and posts about how to implement all the awesome techniques I talked about are to come =)

Happy Hacking!

– The Napping Kat

Bots, Unity, XR

TL;DR

It works! I managed to get HoloLens inputs working with the LUIS integration I did before in Unity. The project melds phrase recognition with dictation from HoloAcademy’s GitHub example and then pings the LUIS API.

All the code is here: Github/KatVHarris

LUIS + UNITY + JSON post is here

LUIS 101 post is here


Okay, so this is a very short post. Most of the harder parts were completed before this in my LUIS post, and the HoloLens Academy code helped a lot. I’ll just mention here some of the pains I went through and how it works.

Phrase Recognition vs. Dictation

HoloLens has 3 main types of input control for users: Gaze, Gesture, and Voice. This application focuses on the last one. The voice input uses the Speech library for Windows.

using UnityEngine.Windows.Speech;

This library allows the HoloLens to use phrase recognition to trigger specific actions or commands in your project. Yet that defeats the point of natural language processing. In order to interact with LUIS, we need to feed in what the user is saying. So to do this, I integrated the Communicator class from the HoloLens example into my project. This class handles the phrase recognition for the project, but it also handles dictation, enabling natural language to be captured from the user. My Communicator is slightly different from the HoloLens example’s because of the LUIS interactions, as well as reworking the code to allow multiple dictation requests.

Now, to activate dictation, phrase recognition commands are used, so there’s no need to tap to activate.
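As a rough illustration of that hand-off (this is not the actual Communicator class; the component name is made up, and the keyword is just the “ally” phrase mentioned in the Dev Notes below), a keyword recognizer can start a dictation session like this:

using UnityEngine;
using UnityEngine.Windows.Speech;

public class DictationTrigger : MonoBehaviour
{
    private KeywordRecognizer keywordRecognizer;
    private DictationRecognizer dictationRecognizer;

    void Start()
    {
        // Listen for the activation phrase.
        keywordRecognizer = new KeywordRecognizer(new[] { "ally" });
        keywordRecognizer.OnPhraseRecognized += OnPhraseRecognized;
        keywordRecognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        // Phrase recognition and dictation can't run at the same time,
        // so shut the phrase recognition system down before dictating.
        keywordRecognizer.Stop();
        PhraseRecognitionSystem.Shutdown();

        dictationRecognizer = new DictationRecognizer();
        dictationRecognizer.DictationResult += (text, confidence) =>
        {
            // This is where the captured text would be cleaned up and sent to LUIS.
            Debug.Log("Heard: " + text);
        };
        dictationRecognizer.Start();
    }
}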

Dev Notes

I did have some speech/phrase recognition trouble. The original phrase to activate dictation was “Allie” (the character from The 100 that my original bot project is based on); however, the recognizer doesn’t recognize that spelling of her name. Changing it to “ally” caused the recognizer to trigger. DictationRecognizer is similar to the PhraseRecognizer in that it also doesn’t recognize the spelling of many names; for example, I would say “Tell me about Clarke.” and the dictation recognizer would write “tell me about clark.”. To fix the dictation errors I used Regex to replace the spelling before querying the LUIS API. One could also change their LUIS API to accept the speech recognition spelling, but because multiple bots and applications are connected to my LUIS API, I couldn’t implement that solution.

private void DictationRecognizer_DictationResult(string text, ConfidenceLevel confidence)
    {
        // Check to see if dictation is spelling the names correctly 
        text = Checknames(text);

        // 3.a: Append textSoFar with latest text
        textSoFar.Append(text + ". ");

        // 3.a: Set DictationDisplay text to be textSoFar
        DictationDisplay.text = textSoFar.ToString();
    }
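Checknames itself isn’t shown above; here’s a minimal sketch of what that helper might look like (the exact name mappings are hypothetical and depend on your LUIS entities):

using System.Text.RegularExpressions;

// Hypothetical sketch of the Checknames helper used above: swap the
// dictation recognizer's spelling for the spelling the LUIS model expects.
private string Checknames(string text)
{
    // e.g. "clark" -> "Clarke", "ally" -> "ALIE" (case-insensitive, whole words only)
    text = Regex.Replace(text, @"\bclark\b", "Clarke", RegexOptions.IgnoreCase);
    text = Regex.Replace(text, @"\bally\b", "ALIE", RegexOptions.IgnoreCase);
    return text;
}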

Anyway that’s all there is to it. All the major code is in the:

Hope this helps, and let me know if you have any questions about it on Twitter @KatVHarris

Happy Hacking

– TheNappingKat

Bots

So for the past month I’ve tried to push myself to code every day and git something on GitHub… See what I did there 😉

Why did I start –

This month has been really chaotic. I was learning about the new Microsoft Bot Framework, I was in an AD, I was working almost every weekend for events and hackathons, and I was also moving. Suffice it to say, not pushing something EVERY day would have been fine. However, 5 days into the month I realized my week-long commit streak was all green. My record in the past had been 10 days, and I thought, wow, this is a perfect time to break it.

Milestones –

I decided to start small: aim for a week, then 10 days to tie my record, then 15 days, then 20, then 25, and 30 at the end of June. If I could put aside time every day to code, in this crazy month of upheaval in my personal life and one of my busiest work months, my challenge goal would be achieved.

In the first 5 days I was working on a Unity intro project and my Bot Framework example, and I decided that focusing on just those would be best for narrowing the scope as well.

Day 13 – Starting to Waver // “Cheating”

Like I said, I was moving and doing a bunch of events, and all of a sudden working on code seemed like too much of a commitment, but I desperately wanted to continue my streak. The solution: updating the README. It’s a simple solution and I felt really guilty about it. How can that count as working on code?

Well, on Day 18, the weekend of Holohacks, a weekend-long hackathon with the HoloLens team at Microsoft in SF, I got to talking to a bunch of developers. It turns out that the team was “strongly encouraged” (lol, pretty much forced) to write all their documentation as they were working and make it high quality. Documentation is such an important part of development, especially for open source projects where others will want to add to your work.

Documentation Driven Development (DDD)

Now, those one-off commits to the README didn’t seem like cheating. I was spending time improving a document that was the guideline for my work. Updating links to point to newly implemented documentation, adding to the feature list and directions, and creating templates for other devs were all justified.

I didn’t come up with the phrase DDD, but I believe that’s how many projects should be worked on. Why? Well, normally when developers write an amazing piece of code, writing documentation about it is the worst, most boring part. Decent documentation is a godsend, when most of the time it seems to be out of date in our ever-evolving industry. Imagine trying to build Ikea furniture with outdated instructions. Sure, you can figure it out, and hopefully it stays together, but having an accurate guide makes life so much easier.

Day 30

After that hackathon I packed, moved to LA, did another HoloLens weekend hackathon, and had an important presentation the following week. However, even with all that, I managed to push many updates to my code and write up important documentation for it as well. Day 30 hit, and my goal now is to see how long I can keep it up. It’s become more of a habit now. Kind of like working out or brushing your teeth.

Just thought I’d share some of the life updates with everyone.

Happy Hacking

-TheNappingKat

Bots

TL;DR

Developing with LUIS is easy but slightly tedious. This post shares some tips for developing with LUIS and provides links to get started.

In part 1 I talked about my experience getting started with the Microsoft Bot Framework and getting responses from simple keywords. Which is great, but I want users to be able to have a conversation with my bot, ALIE (another The 100 reference). In order to get my bot to understand natural language, I used Microsoft’s LUIS, part of their Cognitive Services suite, and integrated it into my bot.

LUIS

LUIS (Language Understanding Intelligent Service) is a service in Microsoft’s extensive Cognitive Services suite. It provides pre-built models from Bing and Cortana for developers to use in their applications. LUIS also allows developers to create their own models and creates HTTP endpoints that can be pinged to return simple JSON responses. Below is a tutorial about how to use LUIS.
LUIS – Tutorial

Things to note

Video differences

LUIS-Features

LUIS has been updated since the release of this video, so there are some differences: the Model Features area on the right has been changed to reflect more of the features you can add:

Working with Intents

https://www.luis.ai/Help is a great resource. LUIS has very detailed documentation.

LUIS supports only one action per intent. Each action can include a group of parameters derived from entities. A parameter can be optional or required; LUIS assumes that an action is triggered only when all the required parameters are filled. These will be the main driving force of your bot’s responses, and actions come into play when publishing.

Publishing Model for Bot Framework and SLACK

Here is the link to how to publish your model: https://www.luis.ai/Help#PublishingModel.
What they neglect to mention is that when publishing for the Bot Framework or Slack, you need to be in preview mode to access those features, since they are still in beta. To get to the preview of LUIS, click the button on the top right of the page; it takes about a minute to regenerate the page.

LUIS_Publish1

Publishing – Action needed

Now, this next part might change soon since the beta keeps being improved upon. When I first wanted to publish the model with the Bot Framework, the service required me to make at least one of my intents return an action.

Adding Action

In preview, select an intent. A window will pop up. Select Add Action.

LUIS_AddingAction

Next, check the Fulfillment box.

The fulfillment type determines what type of response will be included in the JSON object. I’ve selected Writeline since all of the actions that I have on ALIEbot so far do not use the integrated services, like weather or time.

After you select a fulfillment type you can add parameters and an action setting (which is what will be returned in the JSON object).

In my application I returned the parameter XName, which is the name of the character that was in the user’s response.

Integrating LUIS Code

Setting up LUIS class

Before you can receive responses from LUIS, you need to import the appropriate LUIS libraries and create a class that extends LuisDialog. This class must also include the [Serializable] and [LuisModel] tags.

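A minimal skeleton of that class might look like the following (the class name and placeholder values are just examples, and the intent methods from the next section go inside it):

using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Builder.Luis;
using Microsoft.Bot.Builder.Luis.Models;

[Serializable]
[LuisModel("YOUR_LUIS_MODEL_ID", "YOUR_LUIS_SUBSCRIPTION_KEY")]
public class ALIELuisDialog : LuisDialog<object>
{
    // Fallback for anything LUIS can't map to a known intent.
    [LuisIntent("")]
    public async Task None(IDialogContext context, LuisResult result)
    {
        await context.PostAsync("Sorry, I don't understand that yet.");
        context.Wait(MessageReceived);
    }

    // [LuisIntent] methods like the XName handler shown below live in this class too.
}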

The LUIS model ID and Key can be found when you are publishing your Application:

LUIS Intent Methods

Your class must now have methods that act upon the specific LUIS intents.

        //This needs to match the Intent Name from JSON
        [LuisIntent("XName")]
        public async Task XNameResponse(IDialogContext context, LuisResult result)
        {
            var entitiesArray = result.Entities;
            var reply = context.MakeMessage();
            foreach (var entityItem in result.Entities)
            {
                if (entityItem.Type == "Character")
                {

                    switch (entityItem.Entity)
                    {
                        case "raven":
                            reply.Text = "Raven the Best";
                            reply.Attachments = new List<Attachment>();
                            reply.Attachments.Add(new Attachment
                            {
                                Title = "Name: Raven Reyes",
                                ContentType = "image/jpeg",
                                ContentUrl = "URL_PIC_LINK",
                                Text = "It won't survive me"
                            });
                            break;
                        case "clarke":
                            reply.Text = "Clarke is the main character";
                            break;
                        default:
                            reply.Text = "I don't know this character";
                            break;
                    }
                    await context.PostAsync(reply);
                    context.Wait(MessageReceived);
                }
            }
        }

Summary

Once you have set up a LUIS model, you need to publish it. After it has been published, your bot can then connect to it via the [LuisModel] tag. Connecting the bot to LUIS will enable the bot to understand the natural language your users will use; however, you still need to code the responses with [LuisIntent] tags for your Task or Post methods in the Bot Framework.

So that should be everything to get LUIS working. Remember that this content is all in Beta and is subject to change, but I’ll keep it updated as much as possible.

Happy Hacking!

-TheNappingKat