Unity

Implementing Analytics in Unity

So in Part 1 of Unity Gaming: Analytics, I talked about the importance of analytics: what they are, why you need them, and how to understand the data. This part will go over how to integrate them into your game and connect to the Unity Analytics Dashboard. Remember, these new analytics are still in beta and only work with Unity 5.2 or above. Let's get started.

Step 1: Sign-Up

Make sure you have a Unity Account before you get started. You will also need a Unity Services account. To get one, go to https://unity3d.com/services/analytics to sign up and try the free beta.

1Signup

Step 2: Enable Unity Services

Now that you have an account you need to open your game in Unity. In the upper right-hand corner of the Editor you should see the Unity Services tab (if you don't see the tab, hit the cloud icon in the upper right).

ServicesCloud

Before you can start using Analytics you need to create a Unity Project ID so you can collect your data and view it with Unity's Analytics Dashboard.

Select "Create" to have Unity create a Unity Project ID (if you already created a project ID in the Analytics Dashboard tool, you can use that ID to connect to your game with the "I already have a Unity Project ID" link below the Create button).

2EnableUnityServices

Step 3: Enable Analytics

After you've turned on Services and generated your Unity Project ID, you should see the available services that Unity provides within the Editor. Currently Analytics and Ads are the only ones available; however, multiplayer options and Cloud Build are in the pipeline for future integration and use with the Editor.

Turn on Analytics by switching its toggle from "Off" to "On".

2.1PickServices

The Services tab will then open to the Analytics section. Click the "Enable analytics" button.

2.1EnableAnnayltics

Step 4: Analytics Integration and Validation

To view and test your analytics you now need to go to the Analytics Dashboard, found online at https://analytics.cloud.unity3d.com. The easiest way to get there is to click on "Go to Dashboard" (make sure you're connected to the internet).

2.1FinishedEnablingAnayltics

The link will open your default browser and navigate you to the integration tab on your Unity Dashboard.

3Dashboard

To find out if your Analytics Services are correctly integrated, navigate through the documentation by clicking the Next button until you reach the "Play to Validate" page.

Go back to your application and play it in the Editor. The empty box on the Dashboard should now display data about your game.

3.2AnalyticsValidate

Troubleshooting

If there is no data being displayed, stop your game and give the system time to refresh the dashboard. If it still isn't working, make sure that the Project ID in the Dashboard and the Project ID in the Editor are the same.

Step 5: Write Custom Events

Now all that's left is to figure out what data is important for learning how users interact with your game. The next post will explain how to write code to collect custom information specific to your game/application.
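
As a quick preview, and assuming your project uses the UnityEngine.Analytics namespace that ships with the 5.2 beta, a custom event is just a named call with an optional dictionary of values. Here's a minimal sketch (the event name and parameters are made up for illustration):

 using UnityEngine;
 using UnityEngine.Analytics;
 using System.Collections.Generic;
 
 public class BubbleTracker : MonoBehaviour
 {
     // Hypothetical example: report that the player collected a bubble,
     // along with some context the dashboard can break the event down by.
     public void OnBubbleCollected(int level, float secondsAlive)
     {
         Analytics.CustomEvent("BubbleCollected", new Dictionary<string, object>
         {
             { "level", level },
             { "secondsAlive", secondsAlive }
         });
     }
 }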

Happy Coding!

-TheNappingKat

Unity

The Importance of Analytics

Analytics are one of the cheapest ways you can increase profit in your game.

When should you Implement?

You’ve created a good playable game, but haven’t released it yet. That’s when you should add analytics.

It's also time when you find yourself asking: Which type of player spends the most money? Is it Americans, older players, ones that struggle to complete levels? When do my players stop playing my game? How long do they play? Why aren't they spending money on my game?

Why do you need them?

In order to take your game to the next level, you need to know about your game; more specifically, how your users play it. It's nearly impossible to make good decisions about how to continue development if you don't have any information to go on.

Analytics, implemented well, will give you all the information you need to learn who's staying in your game, who's stuck, who's spending money, and why. That last one is one of the most important. Game developers can then use those insights to make data-driven decisions that make players love their games even more.

And when users love your game, they are more willing to spend money on expansions and in-app purchases, or to play more, which means watching more ads and getting you more money.

What are Analytics?

Analytics are event triggers that you’ve implemented at certain points/milestones/areas in your game. They can tell you almost anything about your users. For example, these triggers can tell you: where a player goes in your game; how often they go there; whether or not they collect particular items and special achievements; or which level they die on.

Unity is great because it provides an easy-to-use and easy-to-understand Dashboard to visualize your game's data.

Unity Dashboard – Metric Monitor

UnityAnalyticDashboard

The picture above shows the main Metric Data that Unity provides for your game. I’ll explain the most common terminology used in the dashboard but the whole glossary of terms can be found here: Unity Glossary of Metrics.

Sessions – how many times your users play your game per week

DAU – Daily Active Users

MAU – Monthly Active Users

Retention – the percentage of players who come back and play your game again

You can also modify what data you collect and display it on a chart in the Data Explorer Tab.

Data Explorer

By default Unity provides metrics on Player Data, Session Data, Retention Data, and Revenue. In the chart below I've customized my view to look at: Player Data – MAU; Session Data – Total Daily Play Time; Retention Data – classified by geo-location in the U.S.; and a custom event that I created in my game – Bubble Collected. Having this much data in one chart doesn't really tell me too much though. I'll talk about how to read data later in the post.

DataExplorer

Funnels

Funnels help you identify where drop-offs happen in your game, aka when users stop completing a series of events. They are based on custom events that you create in your game. Funnel data is slightly different from regular metric data in that it needs to be linear. What does that mean? Well, when making your custom events, the player needs to complete custom event 1 before completing event 2 in order for the data to show up on the funnel. The best example of this is level completion. In the picture below you can see that only 2.4% of users finish the game: 100% of users complete level 1, but completion drops to 70% at level 2 and then drops drastically to 38% at level 3.

Funnels
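
To make that concrete: a funnel like the one above is typically driven by a custom event fired each time a level is completed, so the dashboard can verify that step 1 happened before step 2. Here's a minimal sketch (the event and parameter names are hypothetical, and it assumes the UnityEngine.Analytics API covered in the implementation post):

 using UnityEngine;
 using UnityEngine.Analytics;
 using System.Collections.Generic;
 
 public class LevelFunnel : MonoBehaviour
 {
     // Hypothetical funnel step: fire one event per completed level.
     // In the dashboard, funnel steps 1, 2, 3... would then map to
     // level_complete events with level = 1, 2, 3...
     public void ReportLevelComplete(int level)
     {
         Analytics.CustomEvent("level_complete", new Dictionary<string, object>
         {
             { "level", level }
         });
     }
 }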

Segments

Segment data is used to qualify/organize users based on certain metrics, for example grouping users by location, how much money they spend in your game, how long their sessions are, or by age group. Based on these criteria, you can analyze what types of users are completing your game or which ones are spending money.

How do I read this data?

Well, understanding what you are collecting is very important. Looking at my mock data I can tell that users initially spend money, but my retention for the game is nonexistent, so users are no longer spending money in the game since they aren't playing it.

I can then look at my funnel data and see that 100% of users are completing 100% of the levels. That means: 1) the levels are too easy, and I should make them harder; 2) I should make more levels so users stay in the game longer and end up playing multiple times; 3) I need to create new incentives to bring players back into the game.

Alright, now how do you implement analytics in your game? Well, that will be in my next post (Unity Gaming: Analytics (part 2)).

Happy Coding!

-TheNappingKat

Unity

Infinite Runner – Unity 5 Using Your Body as the Controller

Hey Everyone! NEW ANNOUNCEMENT!

As you know (or are just now learning), the Unity Gaming series on this blog is stepping through how to create a 3D Infinite Runner with Oculus integration and Kinect controls! Here's a video explaining it below.

Well, there is good news: UNITY 5.2 was released! And as such I've decided that this series would be perfect for exploring 5.2's new features. Thus, the series has been revamped for Unity 5! (Well, 5.2.)

This post will highlight the major changes so you can integrate Kinect and Oculus into your Project!

So let’s get coding!

Unity Changes

This section will walk through all the changes that affect the infinite runner I'm showing you how to create. This way, when you follow along with the Unity Gaming series, the code will work in Unity 5.2.

Visual Studio

If you download the new Unity 5.2 you'll notice that Visual Studio is now included! This is perfect for development, debugging, and building for Windows 10.

Oculus Integration AND Windows 10

The awesome thing about Unity 5.2 is optimization for Windows 10, Oculus Rift and other VR/AR devices!

Unity5Oculus

The Code

Create Your Own

The Unity Gaming series was meant to walk you through creating an infinite runner from scratch, teaching good coding practices along the way. You can go back to the beginning with the first post: Unity Gaming: Good Practices.

Completed Infinite Runner Game

If you're only interested in the Kinect and Oculus portion, you can start from a completed infinite runner game. The code for the game is on my GitHub here: Base Unity 5 Infinite Runner Game

Here are the step by step guides to integrate:

Completed Game with Oculus and Kinect

If none of those appeal to you and you just want to download the whole thing, the link to the completed repo is here: Gravity Infinite Runner

Happy Coding!

-TheNappingKat

Unity

Hey Everyone!

With Unity 5.2 out and all the cool new features for virtual and augmented reality, I thought I'd do a quick tutorial about how to integrate Oculus into your Unity 5.2 or higher project. I'm walking through setup for a Windows machine.

Step 1: Download

Go to Unity's home page and download Unity 5.2. Then go to the downloads section of the Oculus website and download the SDK, Runtime, and Utilities for Unity.

OculusDownloads

Step 2: Unity Plugins

Import the Oculus plugin for Unity. In your Unity project go to the menu and select Assets > Import Package > Custom Package…

In the File Explorer, select the plugin you downloaded from the Oculus website.

UnityPlugin

Step 3: Oculus Prefab

Drag the Prefab into your scene, or the camera rig onto your character.

Prefab

Step 4: Enable VR development in Unity

The newest and most important step – ENABLE VR FOR THE EDITOR! Do this by going to Player Settings and checking the "Virtual Reality Supported" box.

PlayerSettings
VRSupported
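
If you'd rather confirm the setting from a script instead of the checkbox, here's a minimal sketch; it assumes the 5.2-era UnityEngine.VR API (VRSettings/VRDevice), so treat it as illustrative rather than required:

 using UnityEngine;
 using UnityEngine.VR;
 
 public class VRCheck : MonoBehaviour
 {
     void Start()
     {
         // True only when "Virtual Reality Supported" is ticked in Player Settings
         Debug.Log("VR enabled: " + VRSettings.enabled);
 
         // True when a headset (e.g. the Rift) is actually connected
         Debug.Log("VR device present: " + VRDevice.isPresent);
     }
 }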

Now run the game. It should look exactly the same, but if you have a Rift set up it will mirror the Unity Editor Game view.

And there you have it. Complete integration =)

Happy Coding!

-TheNappingKat

Windows

Last but not least, number 3! (Or number 1, since this is a countdown…)

Adaptive Layout

As I explained in the first post of this mini series, the Universal Windows Platform (UWP) allows for one build to run immediately on phone and desktop! But by doing this developers have to consider adaptive UI that can change for different views/devices (phones, Xbox, desktops, tablets, HoloLens). State Triggers and Adaptive Triggers are the two ways to accomplish this adaptive layout.

Continuing with the example from post two, this post will show state triggers and how to adjust layouts as the view width decreases.

State Triggers

First we need to add a Visual State Manager to the bottom of our XAML, but still within the page's root element tag (the RelativePanel from the last post).

 <VisualStateManager.VisualStateGroups>
    <!-- Visual states reflect the application's window size -->
    <VisualStateGroup>
        <VisualState x:Name="WideLayout">
 
        </VisualState>
 
        <VisualState x:Name="NarrowLayout">
 
        </VisualState>
    </VisualStateGroup>
</VisualStateManager.VisualStateGroups>

Next we need to define what our States are and what happens to the layout as a result of changing states.

 <VisualStateManager.VisualStateGroups>
        <!-- Visual states reflect the application's window size -->
        <VisualStateGroup>
            <VisualState x:Name="WideLayout">
                <VisualState.StateTriggers>
                    <AdaptiveTrigger MinWindowWidth="600" />
                </VisualState.StateTriggers>
                <VisualState.Setters>
                    <Setter Target="MySplitView.DisplayMode" Value="Inline" />
                    <Setter Target="MySplitView.IsPaneOpen" Value="True" />
                </VisualState.Setters>
            </VisualState>
 
            <VisualState x:Name="NarrowLayout">
                <VisualState.StateTriggers>
                    <AdaptiveTrigger MinWindowWidth="0" />
                </VisualState.StateTriggers>
                <VisualState.Setters>
                     <Setter Target="MySplitView.DisplayMode" Value="Overlay" />
                     <Setter Target="MySplitView.IsPaneOpen" Value="False" />
                </VisualState.Setters>
            </VisualState>
        </VisualStateGroup>
 </VisualStateManager.VisualStateGroups>

In my visual state setters I targeted the SplitView and adjusted its display mode so it doesn't push our content to the right. Because we are setting the SplitView's DisplayMode in the state triggers, we need to remove it from the SplitView's inline declaration (the DisplayMode="Inline" attribute) to avoid confusion.

Run it on the Local Machine and watch how the menu automatically changes as you narrow the window!

WIDTH > 600:

SplitView1

WIDTH < 600:

SplitView2

WIDTH < 600 and OVERLAY:

SplitView2-1

And there you have it. The 3 most notable/important changes (in my opinion) to the XAML UI from 8.1 to 10!

Hope you learned something. Let me know what your top three are, in the comments below!

Happy Coding

-TheNappingKat

Windows

Continuing the countdown!

TL;DR: Code is all on my GitHub, here http://bit.ly/UWP-UI-github

Last Post we learned about the new Visual Studio, and how to add in Relative Panels.

Okay, so number two on my list is Split View. What is Split View, you ask? Well, SplitView is the Universal Windows Platform's (UWP's) "hamburger" menu. There are four styles, or DisplayModes, that you can choose from (see the sketch after this list):

  • Overlay – Panel comes out over the content
  • Inline – Panel comes out and shifts the content over
  • Compact Overlay – Behaves like Overlay, but when the panel retracts a portion of it (usually icons) is still visible
  • Compact Inline – Behaves like Inline, but when the panel retracts a portion of it (usually icons) is still visible
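
The same four modes also surface in code-behind as the SplitViewDisplayMode enum, which is what the State Triggers post switches between. Here's a minimal sketch (the helper name is made up; the control is the MySplitView defined in the XAML below):

 using Windows.UI.Xaml.Controls;
 
 public static class SplitViewHelpers
 {
     // Sketch only: switch an existing SplitView to icons-only behavior
     // when space gets tight.
     public static void UseCompactOverlay(SplitView splitView)
     {
         splitView.DisplayMode = SplitViewDisplayMode.CompactOverlay;
     }
 }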

Here’s some more documentation on it from MSDN and here’s how you add it to your code.

Split View

The first part is to add a button that will trigger the panel to come into view.

 <RelativePanel Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <Button Name="SplitViewButton" Background="Transparent" Padding="0,-6" Margin="12" Click="SplitViewButton_Click">
        <FontIcon FontFamily="{ThemeResource ContentControlThemeFontFamily}" Glyph="&#x2261;" FontSize="32" Margin="0,-8,0,0"/>
     </Button>
     <TextBlock Style="{ThemeResource SubheaderTextBlockStyle}" Text="APP Title Goes Here"
                   RelativePanel.RightOf="SplitViewButton" />
</RelativePanel>

Next let's add the Split View. In all SplitViews, your pane items live inside a SplitView.Pane XAML element.

 <SplitView x:Name="MySplitView" DisplayMode="Inline"  PaneBackground="{ThemeResource ApplicationPageBackgroundThemeBrush}" OpenPaneLength="200"
                   RelativePanel.AlignRightWithPanel="True" RelativePanel.AlignLeftWithPanel="True" RelativePanel.Below="SplitViewButton">
   <SplitView.Pane>
       <RelativePanel>
            <!-- Static Relativepanel, substitutes nested stackpanel or grid with rows/columns for this simple scenario -->
            <AppBarButton x:Name="BackgroundButton" Icon="Pictures"/>
                <TextBlock Text="Background"
                               RelativePanel.RightOf="BackgroundButton" />
                <AppBarButton x:Name="LockButton" Icon="SetLockScreen"
                                  RelativePanel.Below="BackgroundButton"/>
                <TextBlock Text="Lock screen"
                               RelativePanel.RightOf="LockButton" RelativePanel.Below="BackgroundButton"  />
                <AppBarButton x:Name="CameraButton" Icon="Camera"
                                  RelativePanel.Below="LockButton" />
                <TextBlock Text="Camera"
                               RelativePanel.RightOf="CameraButton" RelativePanel.Below="LockButton" />
                </RelativePanel>
    </SplitView.Pane>
</SplitView>

Now we need to move the Relative Panel we made before with the ScrollViewer into our SplitView, after the SplitView.Pane, so that they can be inline with each other. The code should look like this:

</SplitView.Pane>
     <ScrollViewer VerticalScrollBarVisibility="Auto" VerticalScrollMode="Auto" HorizontalScrollBarVisibility="Disabled" HorizontalScrollMode="Disabled">
         <RelativePanel Margin="50,38,0,0" VerticalAlignment="Top" HorizontalAlignment="Left" Width="300" Grid.Row="1">
             <Rectangle x:Name="Rectangle1" Fill="Red" Height="100" Width="100"/>
             <Rectangle x:Name="Rectangle2" Fill="Blue" Height="100" Width="100" RelativePanel.RightOf="Rectangle1" Margin="8,0,0,0"/>
             <Rectangle x:Name="Rectangle3" Fill="Green" Height="100" Width="100" RelativePanel.Below="Rectangle1" Margin="0,8,0,0"/>
             <Rectangle x:Name="Rectangle4" Fill="Yellow" Height="100" Width="100" RelativePanel.AlignBottomWithPanel="True" RelativePanel.AlignRightWithPanel="True" Margin="0,8,0,0"/>
          </RelativePanel>
     </ScrollViewer>
 </SplitView>
</RelativePanel>

The visual side of the code is now complete. To make it work we need to add the button functionality. In the Main.xaml.cs file, add the following to the class:

 private void SplitViewButton_Click(object sender, RoutedEventArgs e)
{
    MySplitView.IsPaneOpen = !MySplitView.IsPaneOpen;
}

Now run the code again, and we can see the panel come out and retract.

PANEL OUT:

relaitvePanel3

PANEL IN:

relaitvePanel3-1

YAY!

The next post is the Third change out of the Three Most Important Changes – State Triggers

Happy Coding

-TheNappingKat

Windows

Hey Everybody!

I’m taking a little detour from my usual Unity Gaming Tutorial Series, to give you the top 3 most important changes/updates (in my opinion) that have happened for the Universal Windows Platform (UWP).

  1. Relative Panel
  2. Split View
  3. State Triggers

TL;DR: Code is all on my GitHub, here http://bit.ly/UWP-UI-github

Relative Panel

Now this is a super cool new feature that really comes in handy with Microsoft’s new UWP style of development. What do I mean by this? UWP allows for one build to run immediately on phone and desktop! But by doing this developers have to consider adaptive UI that can change for different views/devices (phones, Xbox, desktops, tablets, HoloLens).

So what is a Relative Panel? Well, it's "a style of layout that is defined by the relationships between its child elements" (read more here). When implementing this style of layout, normally there will be a child object acting as an anchor from which the other children derive their location.

Getting Started

So I’m using the new Visual Studio 2015 RC. Interesting fact: You Do NOT need Windows 10 to build for Windows 10!

When you open Visual Studio you should see this interface.

uwp0

If you are familiar with Visual Studio not much has changed.

Create a new UWP project.

uwp1

A new project should be generated.

uwp2

If you look at the Solution Explorer you should notice that there is no longer a shared folder, or two separate projects in the same solution for phone and store apps, like in 8.1.

Relative Panel Dev

Now I’m going to show you how Relative Panels work. Double click the Main.xaml file if it’s not already open.

uwp3

I've circled two important things to notice when designing your app: the orientation and the screen that you are dev-ing on at the moment. Also notice the Grid tag? This is where most of our XAML code will be written. Let's continue.

First: Add a Relative Panel to your Grid

<RelativePanel Width="300" Margin="0,100,0,0" VerticalAlignment="Top"></RelativePanel>

Second: Let's add some content to this panel; this will act as our anchor object, aka Square 1.

<Rectangle x:Name="Square1" Fill="Red" Height="50" Width="50"/>

Third: The following squares will now be placed in the panel relative to that initial square. Let's also make sure that we can still view everything when the width changes. We can do this with adaptive triggers, which I'll talk about later, but for now let's add a scroller.

<ScrollViewer HorizontalScrollBarVisibility="Auto" VerticalScrollBarVisibility="Auto" HorizontalScrollMode="Enabled" IsEnabled="True">

Fourth: Run the Code

Now this is where the new Universal Windows Platform really shines!

Click Run on Local Machine.

uwp4

The code runs in a Windows 10 window, much as we would expect.

LOCAL MACHINE:

relaitvePanel1

Now, click the arrow next to local machine. You should see a list of emulators you can build on.

uwp4-1
uwp4-2

Select one of the phones. **Caveat** – To have this work you need to download the mobile SDK, if you didn't when you installed Visual Studio.

Now run again. The emulator will start and run the app!

EMULATOR:

relaitvePanel2

WHAT!? No changes in code or anything! I’m sorry. I’m just really excited about this, and if you do cross platform development you probably share my enthusiasm.

Next I’ll show you the new Split View in XAML

Happy Coding!

-TheNappingKat

Unity

Kinect Skeleton in Unity

Most of this code comes from the SDK; however, I will go into a detailed explanation of how it all works, as well as how to put it into Unity.

First things first: create a new folder inside your Scripts folder called KinectScripts. Now in that folder create two new scripts, one called BodyManager and the other called BodyView.

UnityScriptsOrg

BodyManager Script

First we need the BodyManager script. This object will manage the Kinect sensor connection and read in all the body data coming from the Kinect. Import the Kinect library with:

using Windows.Kinect;
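
Everything in this section lives inside an ordinary MonoBehaviour. A minimal skeleton (assuming the class name matches the BodyManager.cs file we just created) looks like this:

 using UnityEngine;
 using Windows.Kinect;
 
 public class BodyManager : MonoBehaviour
 {
     // The fields, Start(), Update(), and OnApplicationQuit() shown below go in here.
 }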

Then we need some fields for the manager to store the sensor, the reader, and the body data, plus a method to get that data.

 private KinectSensor _Sensor;
private BodyFrameReader _Reader;
private Body[] _Data = null;
 
public Body[] GetData()
{
    return _Data;
}

Okay so now in the Start method we want to establish the connection for the Kinect.

 void Start()
{
    _Sensor = KinectSensor.GetDefault();
 
    if (_Sensor != null)
    {
        _Reader = _Sensor.BodyFrameSource.OpenReader();
 
        if (!_Sensor.IsOpen)
        {
            _Sensor.Open();
        }
    }
}

Now that the connection is open and reading in the data we need to store it in the Body array. We will do this every frame of the game, therefore we need to edit the Update() method.

First we check that the _Reader has been established and the connection has been completed. If it has, we take the latest frame the reader read in, and if that frame isn't null, we can then check whether the data array exists yet.

 void Update()
{
    if (_Reader != null)
    {
        var frame = _Reader.AcquireLatestFrame();
        if (frame != null)
        {
            if (_Data == null)
            {
            }
        }
    }
}

We still need to get the Body data from the sensor. To do this we create a new Body array whose size comes from _Sensor.BodyFrameSource.BodyCount.

At the end the method should look like this:

 void Update()
{
    if (_Reader != null)
    {
        var frame = _Reader.AcquireLatestFrame();
        if (frame != null)
        {
            if (_Data == null)
            {
                _Data = new Body[_Sensor.BodyFrameSource.BodyCount];
            }
        }
    }
}

Then we need to refresh the stream of data from the reader by adding the following code to manipulate the frame.

 void Update()
{
    if (_Reader != null)
    {
        var frame = _Reader.AcquireLatestFrame();
        if (frame != null)
        {
            if (_Data == null)
            {
                _Data = new Body[_Sensor.BodyFrameSource.BodyCount];
            }
 
            frame.GetAndRefreshBodyData(_Data);
 
            frame.Dispose();
            frame = null;
        }
    }
}

The last method in the BodyManager class is OnApplicationQuit(), which disposes the reader, closes the sensor stream, and sets both to null.

 void OnApplicationQuit()
  {
      if (_Reader != null)
      {
          _Reader.Dispose();
          _Reader = null;
      }
 
      if (_Sensor != null)
      {
          if (_Sensor.IsOpen)
          {
              _Sensor.Close();
          }
 
          _Sensor = null;
      }
  }

Now onto drawing the skeleton in the scene.

BodyView Script

The next script to write is one that draws the skeletal structure. We won't necessarily need to see the skeleton for the game; however, I'll show you how to display skeletal body tracking anyway. We also need the skeletal data to track the hands, whose state will dictate controller commands.
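
One thing the snippets below assume: because they refer to the Kinect types through a Kinect. prefix, the Windows.Kinect namespace needs to be aliased at the top of BodyView.cs (the SDK's sample script does the same). A minimal set of usings would be:

 using UnityEngine;
 using System.Collections.Generic;
 // Alias so references like Kinect.JointType and Kinect.Body below compile.
 using Kinect = Windows.Kinect;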

For this MonoBehaviour class we will need a material to draw the bones in the Unity scene, and a GameObject field to hold the BodyManager that controls the Kinect stream.

public Material BoneMaterial;
public GameObject BodyManager;

We also need a BodyManager object and a Dictionary to store the bodies being tracked.

 private Dictionary<ulong, GameObject> _Bodies = new Dictionary<ulong, GameObject>();
private BodyManager _BodyManager;

Next we need to map out all the bones by the two joints that each one connects.

 
private Dictionary<Kinect.JointType, Kinect.JointType> _BoneMap = new Dictionary<Kinect.JointType, Kinect.JointType>()
{
    { Kinect.JointType.FootLeft, Kinect.JointType.AnkleLeft },
    { Kinect.JointType.AnkleLeft, Kinect.JointType.KneeLeft },
    { Kinect.JointType.KneeLeft, Kinect.JointType.HipLeft },
    { Kinect.JointType.HipLeft, Kinect.JointType.SpineBase },
 
    { Kinect.JointType.FootRight, Kinect.JointType.AnkleRight },
    { Kinect.JointType.AnkleRight, Kinect.JointType.KneeRight },
    { Kinect.JointType.KneeRight, Kinect.JointType.HipRight },
    { Kinect.JointType.HipRight, Kinect.JointType.SpineBase },
 
    { Kinect.JointType.HandTipLeft, Kinect.JointType.HandLeft }, //Need this for HandStates
    { Kinect.JointType.ThumbLeft, Kinect.JointType.HandLeft },
    { Kinect.JointType.HandLeft, Kinect.JointType.WristLeft },
    { Kinect.JointType.WristLeft, Kinect.JointType.ElbowLeft },
    { Kinect.JointType.ElbowLeft, Kinect.JointType.ShoulderLeft },
    { Kinect.JointType.ShoulderLeft, Kinect.JointType.SpineShoulder },
 
    { Kinect.JointType.HandTipRight, Kinect.JointType.HandRight }, //Need this for HandStates
    { Kinect.JointType.ThumbRight, Kinect.JointType.HandRight },
    { Kinect.JointType.HandRight, Kinect.JointType.WristRight },
    { Kinect.JointType.WristRight, Kinect.JointType.ElbowRight },
    { Kinect.JointType.ElbowRight, Kinect.JointType.ShoulderRight },
    { Kinect.JointType.ShoulderRight, Kinect.JointType.SpineShoulder },
 
    { Kinect.JointType.SpineBase, Kinect.JointType.SpineMid },
    { Kinect.JointType.SpineMid, Kinect.JointType.SpineShoulder },
    { Kinect.JointType.SpineShoulder, Kinect.JointType.Neck },
    { Kinect.JointType.Neck, Kinect.JointType.Head },
};

BodyView Update()

Now in the Unity Update() method we need to check to see if the Body Manager is not null and that it has data.

 void Update()
 {
     int state = 0;
 
     if (BodyManager == null)
     {
         return;
     }
 
     _BodyManager = BodyManager.GetComponent<BodyManager>();
     if (_BodyManager == null)
     {
         return;
     }
 
     Kinect.Body[] data = _BodyManager.GetData();
     if (data == null)
     {
         return;
     }
 }

Next, while still in the Update() method, we need to build a list of the tracking IDs for the bodies currently being tracked, and then delete any bodies we created earlier that are no longer tracked.

 
List<ulong> trackedIds = new List<ulong>();
foreach (var body in data)
{
    if (body == null)
    {
        continue;
    }
 
    if (body.IsTracked)
    {
        trackedIds.Add(body.TrackingId);
    }
}
 
List<ulong> knownIds = new List<ulong>(_Bodies.Keys);
 
// First delete untracked bodies
foreach (ulong trackingId in knownIds)
{
    if (!trackedIds.Contains(trackingId))
    {
        Destroy(_Bodies[trackingId]);
        _Bodies.Remove(trackingId);
    }
}

Now that we have the keys for tracking the bodies, we need to create a body object for each tracking ID key. We need to write two more methods: a CreateBodyObject() method that takes a ulong id, and a RefreshBodyObject() method that takes a Kinect.Body object and a GameObject for the body. We will use these methods as we go through the data and check whether each body is being tracked. If a body is tracked but we haven't created a GameObject for its TrackingId yet, we create one; if it is tracked and already has one, we just refresh the drawn body.

 foreach (var body in data)
      {
          if (body == null)
          {
              continue;
          }
 
          if (body.IsTracked)
          {
              if (!_Bodies.ContainsKey(body.TrackingId))
              {
                  _Bodies[body.TrackingId] = CreateBodyObject(body.TrackingId);
              }
 
              RefreshBodyObject(body, _Bodies[body.TrackingId]);
          }
      }
 
  }

CreateBodyObject()

The CreateBodyObject method takes an ID and returns a body GameObject. So we first need to create a GameObject that will store the appropriate data; then we need a for loop to go through every joint to draw the body.

 private GameObject CreateBodyObject(ulong id)
 {
     GameObject body = new GameObject("Body:" + id);
 
     for (Kinect.JointType jt = Kinect.JointType.SpineBase; jt <= Kinect.JointType.ThumbRight; jt++)
     {
 
     }
 
     return body;
 }

For every joint in the body we create a cube and add a LineRenderer to that cube. The cube will be drawn at each joint, while the line renderer will be drawn to connect the joints.


private GameObject CreateBodyObject(ulong id)
 {
     GameObject body = new GameObject("Body:" + id);
 
     for (Kinect.JointType jt = Kinect.JointType.SpineBase; jt <= Kinect.JointType.ThumbRight; jt++)
     {
         GameObject jointObj = GameObject.CreatePrimitive(PrimitiveType.Cube);
 
         LineRenderer lr = jointObj.AddComponent<LineRenderer>();
         lr.SetVertexCount(2);
         lr.material = BoneMaterial;
         lr.SetWidth(0.05f, 0.05f);
 
         jointObj.transform.localScale = new Vector3(0.3f, 0.3f, 0.3f);
         jointObj.name = jt.ToString();
         jointObj.transform.parent = body.transform;
     }
 
     return body;
 }

RefreshBodyObject()

Now to write the RefreshBodyObject method. In this method we need to go through each possible joint type, just like we did in the CreateBodyObject method. But this time we pass in the current body, as well as the appropriate tracking ID, so we don't draw the bones for the wrong person.

 private void RefreshBodyObject(Kinect.Body body, GameObject bodyObject)
 {
     for (Kinect.JointType jt = Kinect.JointType.SpineBase; jt <= Kinect.JointType.ThumbRight; jt++)
     {
 
     }
 }

Inside this for loop we look up, for each joint, the target joint we paired it with in the _BoneMap dictionary earlier.

 private void RefreshBodyObject(Kinect.Body body, GameObject bodyObject)
 {
     for (Kinect.JointType jt = Kinect.JointType.SpineBase; jt <= Kinect.JointType.ThumbRight; jt++)
     {
         Kinect.Joint sourceJoint = body.Joints[jt];
         Kinect.Joint? targetJoint = null;
 
         if(_BoneMap.ContainsKey(jt))
         {
             targetJoint = body.Joints[_BoneMap[jt]];
         }
     }
 }

We also need to update the skeleton's position so it's in the right place on the screen. To do this we write a method to get a Vector3 from the sourceJoint.

 private static Vector3 GetVector3FromJoint(Kinect.Joint joint)
{
    return new Vector3(joint.Position.X * 10, joint.Position.Y * 10, joint.Position.Z * 10);
}

The scale by 10 is to enlarge the skeleton, which will make it easier to work with. Now we have a position we can use to set the joint GameObject's position.

 private void RefreshBodyObject(Kinect.Body body, GameObject bodyObject)
 {
     for (Kinect.JointType jt = Kinect.JointType.SpineBase; jt <= Kinect.JointType.ThumbRight; jt++)
     {
         Kinect.Joint sourceJoint = body.Joints[jt];
         Kinect.Joint? targetJoint = null;
 
         if(_BoneMap.ContainsKey(jt))
         {
             targetJoint = body.Joints[_BoneMap[jt]];
         }
 
         Transform jointObj = bodyObject.transform.FindChild(jt.ToString());
         jointObj.localPosition = GetVector3FromJoint(sourceJoint);
     }
 }

The next step in the for loop is to get the LineRenderer from the joint object, which is the cube we created for each joint. Then we check whether the target joint has a value. If it does, we draw a line from the source joint to the target.

 LineRenderer lr = jointObj.GetComponent<LineRenderer>();
if(targetJoint.HasValue)
{
    lr.SetPosition(0, jointObj.localPosition);
    lr.SetPosition(1, GetVector3FromJoint(targetJoint.Value));
}
else
{
    lr.enabled = false;
}

Great! So we are almost done with drawing the skeleton. There is a bit more helpful information that the SDK gives you: tracking status. There are three states, Tracked, Inferred, and NotTracked. We can have the line renderer show us the tracking state by changing its color. To do this we need a method that returns a color based on the current state.

 private static Color GetColorForState(Kinect.TrackingState state)
{
    switch (state)
    {
        case Kinect.TrackingState.Tracked:
            return Color.green;
 
        case Kinect.TrackingState.Inferred:
            return Color.red;
 
        default:
            return Color.black;
    }
}

Now we add one more line to the for loop of the RefreshBodyObject method and we are done.

 
private void RefreshBodyObject(Kinect.Body body, GameObject bodyObject)
{
    for (Kinect.JointType jt = Kinect.JointType.SpineBase; jt <= Kinect.JointType.ThumbRight; jt++)
    {
        Kinect.Joint sourceJoint = body.Joints[jt];
        Kinect.Joint? targetJoint = null;
 
        if (_BoneMap.ContainsKey(jt))
        {
            targetJoint = body.Joints[_BoneMap[jt]];
        }
 
        Transform jointObj = bodyObject.transform.FindChild(jt.ToString());
        jointObj.localPosition = GetVector3FromJoint(sourceJoint);
 
        LineRenderer lr = jointObj.GetComponent<LineRenderer>();
        if (targetJoint.HasValue)
        {
            lr.SetPosition(0, jointObj.localPosition);
            lr.SetPosition(1, GetVector3FromJoint(targetJoint.Value));
            lr.SetColors(GetColorForState(sourceJoint.TrackingState), GetColorForState(targetJoint.Value.TrackingState));
        }
        else
        {
            lr.enabled = false;
        }
    }
}

And that’s it for drawing the skeleton!

Putting the Skeleton into Unity

Now in Unity I've created another scene by going to File > New Scene (it will prompt you to save your current scene if you haven't already). This empty scene will make it easier for you to test and see what you're doing.

In the empty scene, which I have saved as kinectTest, create two empty game objects. Call them Body Manager and Body View.

UnityObjectsBVBM

Attach the BodyManager script to the Body Manager object, and attach the BodyView script to the Body View object.

BodyManagerInspector

Select the Body View object. In the Inspector you need to fill in the Body Manager slot and the Bone Material slot. Click and drag the Body Manager object from the hierarchy into the slot. Then, from your Materials folder, click and drag the Bone Material into the Bone Material slot (if you don't have one yet, you can create a simple material via Assets > Create > Material).

BodyViewObjectSettings

Now hit run. You should see nothing on your screen at first. This is because the Kinect hasn’t registered your skeleton yet. Stand back about 4 ft, and you should see the skeleton appear.

UnityKinectScene

If you look closely you can tell that the legs of the skeleton are red. This is because the Kinect is inferring where my legs should be; my desk was blocking the sensor from detecting where they actually were.

But wait, you say, we don't need a skeleton in our infinite runner game, just the hand gestures and states! So was all this a waste? NO! Of course it wasn't. We just don't need to draw the whole skeleton in the scene anymore, and we now have the know-how if we need it in the future. YAY!

The next part of this mini series will be getting those hand gestures to work in the game!

HAPPY CODING!

-TheNappingKat

Unity

Kinect and Unity – Setup

Okay, so in part 1 I showed you how to get the Kinect working on your computer. But how do we get it into a Unity project, you ask? Oh, you didn't? Well, I'll tell you anyway. =)

The first step is to download the Kinect for Windows Unity package. You can find it at this link: https://www.microsoft.com/en-us/kinectforwindows/develop/downloads-docs.aspx

UnityProSDKdownload

To get started we will use the steps outlined in the Kinect for Windows Unity package from Microsoft, slightly edited since we already have a project created.

  1. Expand the .zip file, and move Kinect.2.0.1410.19000.UnityPackage to a well known <location>
  2. Open Unity Pro (you need to have a Pro edition to pull in custom packages and plugins)
  3. Open your Unity Project
  4. Click on the menu item Assets->Import Package->Custom Package…
  5. Navigate to the <location> from step 1
  6. Select the Kinect.2.0.1410.19000.UnityPackage
  7. Click “Open”
  8. Click “Import” in the lower right hand corner of the “Importing Package” dialog (which Unity will launch after step 7)

**Before you click “Import”, here is an important thing to note** – when importing, notice that the folder is called StandardAssets. This is the same name as the Sample Assets from the Unity Store. If you are using Unity’s Standard Assets package, the import manager will embed the new Kinect files into the already existing folder. Be careful! If you don’t keep track of what you’re importing you might lose files within the numerous folders of your project. So, to keep things organized, keep note of the files that are not in subfolders and are just in the StandardAssets folder. In this case there are 10:

    • EventPump.cs
    • KinectBuffer.cs
    • KinectSpecialCases.cs
    • CameraIntrinsics Import Settings
    • CollectionMap Import Settings
    • ExceptionHelper Import Settings
    • INativeWrapper Import Settings
    • NativeObject Import Settings
    • SmartGCHandle Import Settings
    • ThreadSafeDictionary Import Settings

  9. If you wish to see the Kinect in action, there are two sample scenes available from the zip.
  10. If you wish to use VisualGestureBuilder within Unity, repeat steps 1 through 8 with Kinect.VisualGestureBuilder.2.0.1410.19000.unitypackage
  11. If you wish to use the Face functionality within Unity, repeat steps 1 through 8 with Kinect.Face.2.0.1410.19000.unitypackage

Okay, let's stay organized. Create a new folder in your Assets folder called KinectPackage. Then add the 10 files I mentioned above, as well as the Windows folder from StandardAssets.

KinectFolder

And that's it for part 2. In part 3 I'll show you how to track your joints, start scripting, and get a skeleton into your project! For the infinite runner game we will use the Kinect as a controller!

Happy Coding!

-TheNappingKat

Unity

Okkie dokkie. The next few posts will be about how to get a Kinect working in Unity! This will eventually be integrated into the Infinite Runner game that the Unity Gaming series is about. However, this will be a mini-series inside of that, since the process is kinda confusing, especially if it's your first time using the Kinect. So let's get started!

Software Setup

Now before you start plugging in just any Kinect from a random Xbox One or Xbox 360, you need the appropriate software so your computer knows what you're plugging into it. You also can't use just any Kinect. This tutorial uses the Kinect v2; that's the one that ships with the Xbox One. However, in order to use the v2 with your computer you need to make sure you have the Windows adapter!

First let’s install the SDK. We need to go here:

https://www.microsoft.com/en-us/kinectforwindows/develop/downloads-docs.aspx

You can also go to the Kinect for Windows main site and open their Technical Documentation and Tools page, which has a list of useful links for documentation and essential downloads, including the Unity plugin that we will also need.

DownladsPage
UnityProSDKdownload

After you download the SDK, run it. You will be prompted with a Kinect for Windows Setup wizard.

SDKDownload

After you install the SDK you should have some new programs installed on your computer.

  • SDK Browser
  • Kinect Studio v 2.0
  • Kinect Gesture Builder
KinectNewSoftware
KinectNewSoftware2

The SDK Browser will be the most useful of the new software, because it contains links to all the other programs/software as well as demos and example code you can use.

Hardware Setup

Setting up the hardware is pretty straightforward, which is great!

**Mac users**: you will not be able to work with the Kinect unless you have Boot Camp installed (or some other way to partition your hard drive) with a Windows OS, sorry.

KinectHardwareSetup

If you need more help setting up the hardware here is a helpful guide from Microsoft: https://www.microsoft.com/en-us/kinectforwindows/purchase/sensor_setup.aspx

Once you plug the Kinect into the USB 3 port on your computer, we can test the connection with the Kinect SDK applications that were installed.

Open the SDK Browser for Kinect. The first program in the list should be the Kinect Configuration Verifier; run it. The Kinect's light should turn on if it's connected properly, and the Verifier will open a window. If everything is correct, it should look like this:

KinectHardwareVerifier2

If something is wrong you will get a red X identifying the error:

KinectHardwareVerifier

Even though I have a warning about my USB port, my Kinect still runs. Now, to see the information collected by the Kinect, we will run Kinect Studio. It is the third item in the SDK Browser, or you can open it straight from your applications.

Kinect Studio looks like this:

KinectStudio

Don't worry if you don't see anything at startup. Although your Kinect is on and streaming data to your computer, Kinect Studio isn't reading in that data yet.

You need to hit the plug icon in the top left-hand corner to connect and see the streams.

KinectStudioConnected
KinectStudioConnected3

There are 7 streams of data coming from the Kinect:

  • Body Frame
  • Body index
  • Calibration Data
  • Depth
  • IR
  • Title Audio
  • Uncompressed Color

You can toggle which streams of information you want to receive by deselecting the check boxes on the left. The most important streams of data are the Body Frame/Body Index, Depth, and Infrared (IR).

You can close Kinect Studio; we don't need it anymore. And we are done with part 1! There are a bunch of other cool examples you can take a look at in the Kinect SDK Browser. Part 2 coming soon.

Happy Coding!

-TheNappingKat