Unity

Last time on Unity Gaming – Getting a hold of learning how to aim and shoot…lol.

We created a scene where we aimed by getting the orientation of the camera and where we were looking. But then, when we tried using this same technique with the Oculus, it didn’t work! Now our heroes are stuck. How will they get around their NullReferenceException and get their shooting ability working!?!?!

Well I mentioned it before.

“The other [method] draws a line from an object, like a gun, or hand out to a target.”

So let’s continue on!

Vector/Object Raycasting

There is no main camera in the scene. So now we need to implement the second common way of aiming and shooting: from an object. Because we can no longer rely on the camera, we create a ray from an object placed right in front of our character; and since the mouse (or headset) controls that object’s orientation, the ray will still aim straight at where we are looking. =)

Create an empty GameObject and call it ShotSpawner (this object will act as the origin of our raycast). Child it to the OVRCameraRig.

Note: Remember this can work without the Oculus as well. So you can use this for shooting a gun. Just put the ShotSpawner on the tip of the gun!

ShotSpawnerOVR

In my scene I have moved the TargetCube to (0, 0, 15) to get it out of the way while still giving me a reference. And instead of drawing a line for debugging, I’m going to log the tag of whatever my raycast hits.

So let’s write the script that will do this.

In ShotSpawner add a new C# script component and name it OVRShoot. Unlike the other scripts, we will be saving the GameObject our ray intersects, so we need a RaycastHit object and a GameObject.

RaycastHit hit;        // filled in by Physics.Raycast with whatever the ray hits
GameObject hitObject;  // the GameObject that owns the collider we hit

The Update method will then look like this:

void Update()
{
    if (Physics.Raycast(transform.position, transform.forward, out hit, 10))
    {
        hitObject = hit.collider.gameObject;
        Debug.Log(hitObject.tag);
    }
}
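If you also want to see where the ray is going while testing, Unity’s Debug.DrawRay can visualize it in the Scene view. A one-line sketch you could drop into the same Update method (the length and color here are just example choices):

// Visualize the ray in the Scene view; 10 units matches the cast distance
Debug.DrawRay(transform.position, transform.forward * 10, Color.red);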

So whatever the raycast hits will be stored in the hit parameter when the method returns true. We can then get the hit’s object, IF it has a collider! So remember: if you want to be able to interact with an object, its topmost parent object needs to have a collider we can interact with! (<<– this is super important, and causes a lot of people grief, because the raycast will hit a child object’s collider instead of the parent’s and then the code breaks. So avoid that grief and remember this tid-bit, TRUST ME! Avoiding common mistakes makes developing go a lot more smoothly.)
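One defensive pattern worth knowing (a sketch, not something this scene needs if your hierarchy is set up right): if the ray might strike a child collider, you can walk up to the topmost parent before reading the tag:

// If we hit a child collider, read the tag off the topmost parent instead
GameObject rootObject = hit.collider.transform.root.gameObject;
Debug.Log(rootObject.tag);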

Before we run the Script now we need to add tags to the cubes in the scene. I’ve added the Enemy tag to them in the Tag drop down in the Inspector.

OculusRaycastWorking

And there you have it! It’s printing the Enemy tag! So now you can use these methods in your own code. I’ll eventually show you how to use this in the Infinite Runner, because I use a slightly different method there due to some assets I use.

Happy Coding!

-TheNappingKat

Unity

So there are two main types of aiming. One uses the camera’s view to shoot a ray directly out in front of you. This is great if you want to look at things to target them. The other draws a line from an object, like a gun or hand, out to a target.

Let’s do the camera one first.

Camera Raycasting

For this I am using the same scene I made in part 1 but I’ve added some cubes to aim at that are in the air to test out my targeting.

So I’m going to create a cube called TargetObject and child it to my FirstPersonCamera under a GameObject. Then I’ll change its position to (0, 0, 10), and set its box collider to Is Trigger so it doesn’t interfere with my character’s movement. I do this so I can get a good idea of where my ray is going to point. In Unity you can’t see an emitted raycast unless you draw it with a line renderer or call Debug.DrawLine (and that only draws it in the Scene view). If I play the game now, I should see a cube 10 units away from where I’m looking at all times.

TargetObject

Great, so now let’s write the raycast like before. Create a new C# script called RaycastShoot.

In its Update method add the following lines:

void Update () {
    Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
    RaycastHit hit;
    if (Physics.Raycast(ray, out hit, 100))
        Debug.DrawLine(ray.origin, hit.point);
}

What the script is doing is creating a Ray whose origin is the main camera and whose direction is derived from the mousePosition and the angle between it and the camera. Then, if the raycast hits anything, we draw a line. In our scene the line should always be drawn, since our target object is in front of our character.

DebugRaycast

It’s difficult to see the line, but you can tell in the pic it’s slightly darker than the others.

Now let’s try with the Oculus! You’ll see why we have to change our methods of aiming in a second.

Oculus and Camera Aiming

If you haven’t set up your environment to integrate the Oculus, don’t worry, I’ve posted about it before, here: Oculus setup! Again, you don’t need the hardware to develop for VR.

First let’s disable our main Character by clicking on the checkbox in the upper left hand corner of the Inspector; once disabled there should no longer be a check, and the object should be greyed out in the Hierarchy.

Cool now click and drag in the Oculus Player Controller.

OVRStartScene

Then create another Target cube so we have a reference as to where we are looking. I moved it under the OVRCameraRig to the (0, 0, 5) position.

Next we need to add the Raycast script like before. I’ve added it to the OVRCameraRig.

OVRTargetCube

Now when we run it, the game still plays, but we don’t see the line in the Scene view like before. This is because we get a NullReferenceException from the RaycastShoot script.

OculusError

The error above is referring to this line of code:

void Update () {
    Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition); //this one
    RaycastHit hit;
    if (Physics.Raycast(ray, out hit, 100))
        Debug.DrawLine(ray.origin, hit.point);
}

There is no main camera in the scene. So now we need to implement the second common way of aiming and shooting, which is from an object. Because we can no longer rely on the camera, we must create a ray from an object right in front of our character, at an angle created by the mouse. And since the mouse dictates the orientation of the head, we will still aim straight at where we are looking. =)

Correct Oculus shooting and the second type of shooting will be in part 3…

Happy Coding!

-TheNappingKat

Unity

Okay so this is a little jump ahead of the Unity Gaming Infinite Runner Series. I’ve gotten a lot of questions about this so I’m writing about it now. Because it’s out of order, the examples are in a blank scene, not the Infinite Runner main scene.

Okay, so, most likely when you are shooting something, it’s either from right at the center of the screen or it’s, for the most part, originating from another object like a hand or gun. This gets kinda complicated when all of a sudden you are using two cameras to judge the “center” of what you are looking at, like when using an Oculus…

But no fear: learning the general method of shooting isn’t too bad, and then adding the Oculus bit will be easy to understand.

Raycasting

So, when you aim at or align something in real life, do you draw an invisible line from where you are to the thing you are trying to hit? Yes? Great! If not, well, cool, but that’s what raycasting is.

There are 5 possible parameters for the raycasting method in Unity.

  1. Vector3 origin – the position in 3D space where you want the ray to start
  2. Vector3 direction – the direction you want the ray to point; the first two parameters make a Ray
  3. *RaycastHit hitInfo – whatever the Ray hits first gets stored in this parameter
  4. *float maxDistance – the magnitude of the Ray, aka how far out you want the ray to point
  5. *int layerMask – what the Ray can hit

* Indicates that these parameters are optional

There are also two main ways the Raycast method is written.

  • Raycast(Vector3 origin, Vector3 direction, float maxDistance, int layerMask)
  • Raycast(Vector3 origin, Vector3 direction, out RaycastHit hitInfo, float maxDistance, int layerMask)

In Unity the Raycast method returns a Boolean value: true if the ray hit something, and false if nothing was hit. I should also mention that in Unity raycasts do not detect colliders that the ray starts inside of. Also, if you are animating or moving objects, you should keep the Raycast call in a FixedUpdate method so that the physics engine can update its data structures before the raycast tests a collider at its new position.
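To make that concrete, here is a hedged sketch of the second overload in use inside FixedUpdate, assuming the project has a layer named "Enemy" (that layer name is my own example, not from this scene):

void FixedUpdate()
{
    // Only consider colliders on the "Enemy" layer (assumed layer name)
    int layerMask = 1 << LayerMask.NameToLayer("Enemy");
    RaycastHit hit;

    if (Physics.Raycast(transform.position, transform.forward, out hit, 100f, layerMask))
    {
        Debug.Log("Hit " + hit.collider.name + " at distance " + hit.distance);
    }
}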

Aiming

Now there are 2 types of aiming. One is simply shooting in the direction that the player is facing and one is shooting in the direction that the player is looking.

Let’s do the easy one first – aiming where the player is facing aka not aiming just shooting.

Cool, so in my scene I have the regular First Person Character from the Unity Sample Assets package, and I created some ground by scaling a cube to dimensions (10, 1, 30).

StartScene

Now let’s create a shoot script called simpleShoot.

CreateSimpleScript

In the Update function we want to spawn a new sphere every time we click, and shoot it in the “forward direction”. So let’s add the following lines:

void Update () {
    if (Input.GetButtonDown("Fire1"))
    {
        // Create a sphere bullet at the character's position and orientation
        GameObject clonedBullet = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        clonedBullet.transform.position = transform.position;
        clonedBullet.transform.rotation = transform.rotation;

        // Give it physics and fling it forward
        clonedBullet.AddComponent<Rigidbody>();
        clonedBullet.GetComponent<Rigidbody>().AddForce(clonedBullet.transform.forward * 2000);
    }
}

Cool and if you check it out I’m shooting and the bullets go in the forward direction of the transform of my character! YAY!

SimpleShootScene

Mouse Aim

“Okay, but that doesn’t shoot the ball in the upward direction when I’m looking up,” you say to me. Yes, I know, that’s the next thing we are going to do. =) …in part two!

Happy Coding! Part 2 coming soon =D

-TheNappingKat

Unity, XR

Hi All,

I solved the NO HMD DETECTED, Tracker Connected error you get when trying to use an extended screen while using the headset with the Unity Editor.

NoHMD

Well I got it working, not necessarily solved.

Specs:

  • Lenovo X1 Carbon
  • Intel Core i7 – 3667U CPU 2.0ghz
  • 8gb ram
  • Windows 8.1
  • 64-bit
  • Intel HD Graphics 4000
  • Oculus DK2
  • SDK 5.0.1

To start, I detached my Oculus from the computer, reattached it, and made sure it was working in the normal Direct HMD Access mode. It was.

1) Then I hit “Windows + P” and made sure my Projection setting was on extended.

2) Then I switched the mode to Extended in the Oculus Configuration Utility.

3) Then on the desktop I right clicked and hit Screen Resolution.

4) I selected the second screen and hit “Detect”. A window came up with “Another display not detected”.

5) I made the Oculus screen primary, then switched back to the main computer being primary, and it worked. The screen now appeared in the Oculus, but the orientation was off, so I just adjusted it in the Screen Resolution window.

DisplaySettings

Now the Oculus Configuration Utility looks like this, but it works.

AttachedNoTracker

In the Unity Editor I can move the game tab to the Headset Screen and maximize it, I can still see the awkward black rim around the screen but it’s better than nothing. Hopefully the Oculus team can fix this soon.

OculusExtendedScreenFull

Hope this helps, Happy Coding!

-TheNappingKat

Unity

SignalR and Backend Coding

Hey everyone. So the backend part of games tends to be some of the most difficult code to write in a project. I’m also not a fan of backend server code (but that’s just me). In this project I pushed out of my comfort zone to step further into the backend server world.

Really, this tutorial is about SignalR in general. That way you can follow along and make a hide and seek game, or create your own game based on these principles and fundamentals.

Getting Started

So for those of you that haven’t used Visual Studio, that is the IDE I’m programming with in this tutorial. You can get it for FREE here: http://www.visualstudio.com/

  1. Make a new empty project

SignalR – Backend

Create a new project under Web.

Then, in the New ASP.NET Project window, select MVC and change Authentication to No Authentication. Click Create Project. Now you need to get the NuGet package from the Package Manager.

Tools > NuGet Package Manager > Package Manager Console

Then type:

PM> install-package Microsoft.AspNet.SignalR

If you expand your Scripts folder you will see that the libraries for SignalR have been added.

Now right-click the Hubs folder and click Add | New Item. Select the Visual C# | Web | SignalR node in the Installed pane, pick SignalR Hub Class (v2) from the options, and create a new hub called GameHub.cs.

This Hub will be the host of your server and game logic. It will also be what all clients send their messages to.

Add the hub code. Below is a minimal sketch of what GameHub can look like; the CreateGame method and the gameCreated client call are explained in a moment, and the method bodies here are illustrative assumptions:
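using System.Collections.Generic;
using Microsoft.AspNet.SignalR;

namespace SRHS2backend
{
    public class GameHub : Hub
    {
        // Server-side bookkeeping for the games in progress (see below)
        public static List<Game> AllGamesList = new List<Game>();
        public static List<Game> AvailableGames = new List<Game>();

        // Called by clients; User, Game, and ServerMessage are the
        // game-specific classes described later in this post
        public void CreateGame(User user, Game g)
        {
            AllGamesList.Add(g);
            AvailableGames.Add(g);

            ServerMessage sm = new ServerMessage();
            // gameCreated is defined on the client side of the code
            Clients.All.gameCreated(g, sm);
        }
    }
}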

Under our project, create a new class called Startup.cs by right clicking the project and choosing Add > Class.

using Microsoft.Owin;
using Owin;

[assembly: OwinStartupAttribute(typeof(SRHS2backend.Startup))]
namespace SRHS2backend
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.MapSignalR();
        }
    }
}

Now go to HomeController.cs in your Controllers folder.

Add the following code snippet:

public ActionResult Game()
{
    return View();
}

Now, this next part is mostly for seeing which messages are getting to the server and displaying them for developers to see. We didn’t implement it in this code, but there are sources online that show you the JavaScript and jQuery needed to print to the new view we just created, here: http://www.asp.net/signalr/overview/getting-started/tutorial-getting-started-with-signalr-and-mvc

Now, your GameHub.cs class is derived from SignalR’s Hub class. This allows you to access all the currently connected clients and the methods within those clients.

To understand it better: the CreateGame(User user, Game g) method is called in the client code and defined in GameHub, while Clients.All.gameCreated(g, sm) is defined in the client side of the code.

User, Game, and ServerMessage are classes that I created for this specific game. They hold the information required by each User, Game, and ServerMessage. AllGamesList and AvailableGames are Lists that I create in GameHub so the server can reference all the active and passive games currently in progress.

SignalR – Front End

Now we will make the front end that links with SignalR.

First we want to create a new blank Universal App.

Next we install the SignalR Client NuGet package for both the phone and the Windows 8.1 projects.

Now the way that the client interacts with the server code that was written is by connecting with the hub and sending messages through that hub.

For the SignalR portion of the client code, create a new folder called SignalRCommunication that contains the following classes: ISignalRHubs.cs, SignalREventArgs.cs, SignalRMessagingContainers.cs, and SignalRMessagingHub.cs.

The ISignalRHub is the interface for your SignalRMessagingHub.

And SignalREventArgs.cs acts as the event-args class that allows all parts of the project to access the messaging events.

The SignalRMessagingHub is where the connection between the server hub and the client is created and initiated.
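A minimal sketch of that setup, assuming the SignalR .NET client package and a placeholder server URL (swap in your own host; the real class also implements the ISignalRHub interface mentioned above, which is omitted here):

using Microsoft.AspNet.SignalR.Client;

public class SignalRMessagingHub
{
    // Placeholder URL; point this at wherever the server is hosted
    private HubConnection gameConnection = new HubConnection("http://yourserver.azurewebsites.net/");
    private IHubProxy SignalRGameHub;

    public SignalRMessagingHub()
    {
        // "GameHub" must match the hub class name on the server
        SignalRGameHub = gameConnection.CreateHubProxy("GameHub");
    }
}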

Now to connect with the Server Hub we need the following code:

#region "Implementation"

public async virtual void UserLogin(User tabletChatClient)
{
    // Fire up SignalR connection & join chatroom.
    try
    {
        await gameConnection.Start();

        if (gameConnection.State == Microsoft.AspNet.SignalR.Client.ConnectionState.Connected)
        {
            await SignalRGameHub.Invoke("Login", tabletChatClient);
        }
    }
    catch (Exception ex)
    {
        // Do some error handling. Could not connect to server error.
        Debug.WriteLine("Error: " + ex.Message);
    }
}

#endregion

I put mine inside of the first User interaction with the server, so in this case when the User logs in.

For the phone application, if you’re using the emulator you will need to use long polling, since the phone emulator uses the PC’s identification number and a lot more work has to go into configuring your computer to run it.

This is the code you will need:

#if WINDOWS_PHONE_APP
// The phone emulator needs the long polling transport
gameConnection.Start(new LongPollingTransport());
#else
gameConnection.Start();
#endif

Now we need to write code that will listen to events from the SignalR server and wire them up appropriately.

Calling the .On method is how our hub proxy listens to any messages the server passes to the clients.

Let’s make this easier to understand by going through .On and explaining what each part is doing.

SignalRGameHub.On<Game, ServerMessage>("gameCreated", (g, sm) =>
{
    SignalREventArgs gArgs = new SignalREventArgs();
    gArgs.CustomGameObject = g;
    gArgs.CustomServerMessage = sm;
    // Raise custom event & let it bubble up.
    SignalRServerNotification(this, gArgs);
});

The .On method can take in as many parameters as needed, depending on which server call it’s defining: On<x,y,z>.

Earlier you saw the server GameHub call Clients.All.gameCreated(). In the client code, the quoted name in the SignalRGameHub.On method refers to whichever method we are listening for, and the delegate following it dictates what the client code should do.

The (g, sm) are the parameters that we defined earlier as <x,y>, in this case <Game, ServerMessage>. They are part of a delegate that creates the SignalREventArgs, to carry a Game object and a ServerMessage. Then it raises SignalRServerNotification(this, gArgs), which triggers an event. You still need to handle that event in other parts of your code.

We now need to write the method for the SignalRServerNotification(this, gArgs):

#region "Methods"

public virtual void OnSignalRServerNotificationReceived(SignalREventArgs e)
{
    if (SignalRServerNotification != null)
    {
        SignalRServerNotification(this, e);
    }
}

#endregion

Cool now on to defining calls made by the client that will be sent to the Server.

Make sure to define all the methods that will be sending information to the server. These async virtual methods will be called via the ISignalRHub. The quoted parts ("UpdateUser", "CreateGame", and "JoinGame") refer to the methods on the server-side GameHub; if the names are not exactly correct, the server methods won’t be invoked.
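As a sketch, two of those methods could look like this (the JoinGame signature is an assumption; match whatever your GameHub actually defines):

public async virtual void CreateGame(User user, Game g)
{
    // "CreateGame" must match the server-side GameHub method name exactly
    await SignalRGameHub.Invoke("CreateGame", user, g);
}

public async virtual void JoinGame(User user, Game g)
{
    await SignalRGameHub.Invoke("JoinGame", user, g);
}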

SignalRMessagingContainers refers to the objects that you want to send through JSON to the server to manipulate, meaning you would define your object classes within this .cs file. However, if you already defined your models in a Models (or similarly named) folder, that will work too; in fact, it’s preferred.

Referencing Gamehub

In order for your SignalR hub to be accessed by all parts/pages of your project, you will need to modify the App.xaml.cs in Shared.

public static new App Current
{
    get { return Application.Current as App; }
}

Then inside the OnLaunched(LaunchActivatedEventArgs e) method add

App.Current.SignalRHub = new SignalRMessagingHub();

Important! If you are going to be passing objects through SignalR both projects need to have the exact same code for the objects.

Now calling/listening to the SignalR hub from different pages of your app is easy, since we have everything set up.

In order to appropriately handle the triggered SignalRServerNotification, we need to subscribe to SignalRServerNotification on each page by referencing App.Current.SignalRHub.SignalRServerNotification.

When implementing the SignalRHub_SignalRServerNotification handler, be sure to use the dispatcher appropriately so the page can still be responsive when events are triggered.
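A rough sketch of both pieces in a page’s code-behind, assuming the event carries (sender, SignalREventArgs) as raised above (the page name is just an example):

public GamePlayPage()
{
    this.InitializeComponent();

    // Subscribe this page to notifications bubbled up from the hub
    App.Current.SignalRHub.SignalRServerNotification += SignalRHub_SignalRServerNotification;
}

private async void SignalRHub_SignalRServerNotification(object sender, SignalREventArgs e)
{
    // Events arrive on a background thread; marshal back onto the
    // UI thread before touching any controls
    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        // Read e.CustomGameObject / e.CustomServerMessage and update the page
    });
}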

In my code I use the CustomServerMessage to find which state the server is in versus the game. You can implement changing and checking game state however you think best suits your game.

Publishing your Server code

Eventually you’ll want to publish your code to an Azure website so anyone who downloads the application can connect to the server.

Use the steps in this tutorial to do so: http://www.asp.net/signalr/overview/deployment/using-signalr-with-azure-web-sites

Sphero

Okay so we finally have the backend set up. Now it’s time to implement more of the front end. Depending on your game you might want to change some of the XAML and front end to suit your purposes but the connection and control of the Sphero will remain the same.

First thing is to add the Sphero SDK that you downloaded to your project references.

https://github.com/SoatExperts/sphero-sdk

Connecting to Sphero

Second is to modify the App.xaml.cs so all pages in the project can remain connected to the same Sphero.
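A sketch of that addition; the CurrentConnection property name is an assumption, chosen to match the App.CurrentConnection reference later in this post:

public sealed partial class App : Application
{
    // Shared Sphero connection, set once after connecting,
    // so every page can reuse it (assumed property)
    public static SpheroDevice CurrentConnection { get; set; }
}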

Now go into the page where you want to initiate the connection with your Sphero. For Hide and Seek I put it in lobbyPage.xaml.cs, since I wanted to make sure users were connected before they started the game. One thing to note about the DiscoverSpheros method: it will only return a list of Spheros that are on and in the Bluetooth connection state (blinking red and blue).

private async void DiscoverSpheros()
{
    try
    {
        // Discover paired Spheros
        List<SpheroInformation> spheroInformations = new List<SpheroInformation>(await SpheroConnectionProvider.DiscoverSpheros());

        if (spheroInformations != null && spheroInformations.Count > 0)
        {
            // Populate list with discovered Spheros
            SpherosDiscovered.ItemsSource = spheroInformations;
        }
        else
        {
            // No Sphero paired
            MessageDialog dialogNSP = new MessageDialog("No sphero Paired");
            await dialogNSP.ShowAsync();
        }
    }
    catch (NoSpheroFoundException)
    {
        MessageDialog dialogNSF = new MessageDialog("No sphero Found");
        await dialogNSF.ShowAsync();
    }
    catch (BluetoothDeactivatedException)
    {
        // Bluetooth deactivated
        MessageDialog dialogBD = new MessageDialog("Bluetooth deactivated");
        await dialogBD.ShowAsync();
    }
}

After you discover the Spheros you then need to connect to one.

Controlling Sphero

With your device now connected to the Sphero that you chose, it’s time to implement the controls. Go to the page where you want the controls to appear and open the page’s .xaml file. In this case it was my GamePlayPage.xaml.

Then add the following to the <Page> tag properties: xmlns:Controls="using:Sphero.Controls"

Now depending on what else you want to put on the page the placement of this next code block will differ, but the content is still the same.

<Controls:Joystick x:Name="spheroJoystick" HorizontalAlignment="Left" Margin="30,0,0,30" Grid.Row="1" VerticalAlignment="Bottom" Calibrating="spheroJoystick_Calibrating" CalibrationReleased="SpheroJoystick_CalibrationReleased" Moving="SpheroJoystick_Moving" Released="SpheroJoystick_Released" PointerReleased="SpheroJoystick_PointerReleased"/>

The important properties of the Controls:Joystick input are Calibrating, CalibrationReleased, Moving, Released, and PointerReleased. These can be added directly in the XAML or can be added in the Events tab of the Properties Window in Visual Studio.

Almost done with Sphero. In order for the XAML to actually do anything, we need to make sure the events are implemented and that the connection is still active.

private SpheroDevice _spheroDevice;

We check the connection and start the Joystick in the page’s initialize method.

Since we saved the connection in App.CurrentConnection, we can reference it even if we navigate to a different page after the initial connection.
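A sketch of that initialize step in the page’s constructor (_spheroDevice is the field declared above; CurrentConnection is the property we added to App.xaml.cs earlier):

public GamePlayPage()
{
    this.InitializeComponent();

    // Reuse the Sphero connection we stored on App after connecting
    _spheroDevice = App.CurrentConnection;
}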

And that’s it! You can now get more information from the Sphero if you want, and track more data. Look in the API for how to do more with your Sphero.

Hope this helps, Happy Coding!

-TheNappingKat

Oculus, Unity, XR

Setting Up

GREAT NEWS! In 2015 you no longer need Unity Pro Edition to integrate Oculus into your projects. YAY!

Things you’ll need for integration:

  • An Oculus (really you don’t need one to develop but how else will you test it out?)
  • Oculus SDK
  • Oculus Runtime
  • Unity 4 Integration

Okay so, the first thing you’ll need is to have your game all set up and running, in Unity. If you’ve been following my blog then you should have the bulk of the game running.

Cool. Next we need to grab the Oculus pieces from their site.

https://developer.oculus.com/downloads/

Now if you have a Mac or Linux download from those links.

WebsiteDownloads

After you download the files, you need to install the runtime. Then restart your computer.

Integrating into Unity

Extract the files from the Unity Integration Package you downloaded. Go into Unity to Assets > Import Package > Custom Package.

Find where you extracted the files and navigate to the Unity Plugin.

ImportingPackage2

Then hit import.

ImportingPackage3

Now you should have a new folder in your Assets called OVR

AssetsOVR

Cool, so now that it’s integrated, let’s start using the Oculus camera in the game.

Using Oculus Cameras

Now using the Oculus Package is super easy. Oculus has already created prefabs for developers to use. They have a prefab with just the camera rig as well as one with the rig and a character motor.

OVRPrefabs

To use them, just do what you would normally do with prefabs: click and drag one into your scene. I created a test scene called OVRTest to make sure everything would work without worrying about the infinite platforms generating around me.

I placed the OVRPlayerController at (0, 2, 0).

OVRinScene

Cool Now try running the game. You should have something that looks like this:

OVRGame

YAY! See super easy. The double circle screen is what will be fed to your Oculus, and with the lenses and the headset it should become one image with a 3 dimensional feel.

Now that you have the basic character installed you can add it to the main game scene and try it with the infinite platforms.

Happy Hacking!

-TheNappingKat

Unity

Quaternions

In Unity, rotations are stored as Quaternions. Quaternions work like vectors, but their coordinates x, y, z, and w are interdependent. These are different from the Euler angle rotations that you see in the Inspector.

Because quaternion coordinates are interdependent, you should never change them individually, as you might when placing objects in the world initially. The reason Quaternions are the preferred method for rotating is that they allow for incremental rotation without being subject to gimbal lock, which can lock objects into 2D rotation when you need 3 dimensions.

There are 4 main quaternion functions used for rotation: AngleAxis, RotateTowards, LookRotation, and FromToRotation.

To create the world gravity shift we rotate the platforms around the character.

Rotating platforms

Okay so for the rotation and Flipping in the game we need a rotation object that has the platforms as children.

  1. Create an empty object
  2. Rename it PlatformRotator
  3. Move the object to (0, 0, 0)
  4. In the hierarchy, move the platforms and the platform spawner controller that we made earlier into the PlatformRotator to parent them

Great, now for the hard part. It took me several weeks to get this rotation to go a perfect 90 degrees with each turn, as well as 180 degrees for a flip. In this game we want the user to press a key and have the rotation happen without them needing to hold down the key or press it a bunch of times. We could make it happen instantly, but for VR and UX purposes we don’t want to do that either. Many beginner tutorials won’t explain how to do this. The answer is coroutines. Coroutines allow the rotation method to execute over multiple frames as it gives control back to other parts of the game.

The other difficult part is finding which of the quaternion functions to use, and how to rotate spawned objects. Okay, so with that said, let’s start with the quaternion function we want to use and how to apply it to all my platforms.

Rotation Script

This rotation script will be attached to our PlatformRotator. What we want to do is get user input, see if the platforms are rotating, and if they aren’t, start the rotation.

Let’s add a boolean to track whether the platforms are rotating, and a float to keep the degree of the angle we want to reach.

private bool rotating;
private float angle = 0.0f;

So to get user input we need to write the following inside the Update function.

 
void Update () {
    if(Input.GetKeyUp(KeyCode.Q)){
         
        if(!rotating) {
             
        }
    }
 
    if(Input.GetKeyUp(KeyCode.E)){
             
        if(!rotating) {
             
        }
    }
 
    if(Input.GetKeyUp("space")){
             
        if(!rotating) {
             
        }
    }
}

We also need to calculate the exact target angle of each rotation. To do a smooth rotation we will be adding or subtracting 90 degrees (or adding 180 for a flip) to the current angle.

float getNextLeftAngle (float oAngle){
    oAngle = oAngle + 90;
    return oAngle;
}

float getNextFlip (float oAngle)
{
    oAngle = oAngle + 180;
    return oAngle;
}
 
float getNextRightAngle (float oAngle)
{
    oAngle = oAngle - 90;
    return oAngle;
}

Now we pass the angles into the coroutines. Every coroutine is an IEnumerator function, which means we need to import the System.Collections library.

We need two coroutines: one for rotating, one for flipping. Each works by having a while loop that ends every iteration with yield return null. Yield return null tells the engine that when it reaches this method again (next frame), it should continue from inside the while loop.

 
IEnumerator FlipMe (float nextstep)
{
    rotating = true;
    float step = 232 * Time.smoothDeltaTime;
    Quaternion newRotation = Quaternion.Euler(new Vector3(0, 0, nextstep));

    // Step toward the goal angle each frame until we reach it
    while (transform.rotation != newRotation) {
        transform.rotation = Quaternion.RotateTowards(transform.rotation, newRotation, step);
        yield return null;
    }
    rotating = false;
}

I use the RotateTowards quaternion function because we have a goal angle that we get closer to each time we resume the coroutine.

We do the same for the Rotate coroutine. I have two methods because of the step, aka the speed of rotation. I wanted the rotation to be slightly faster than flipping, because in VR a full 180 degree flip is slightly more jarring, so a slower speed is needed.

 
IEnumerator RotateMe(float nextstep)
{
    rotating = true;
    float step = 500 * Time.smoothDeltaTime;
    Quaternion newRotation = Quaternion.Euler(new Vector3(0, 0, nextstep));

    // Step toward the goal angle each frame until we reach it
    while (transform.rotation != newRotation) {
        transform.rotation = Quaternion.RotateTowards(transform.rotation, newRotation, step);
        yield return null;
    }
    rotating = false;
}

Now we need to call the coroutine in the Update function:

void Update () {
    if (Input.GetKeyUp(KeyCode.Q)) {
        if (!rotating) {
            angle = getNextLeftAngle(angle);
            StartCoroutine(RotateMe(angle));
        }
    }

    if (Input.GetKeyUp(KeyCode.E)) {
        if (!rotating) {
            angle = getNextRightAngle(angle);
            StartCoroutine(RotateMe(angle));
        }
    }

    if (Input.GetKeyUp("space")) {
        if (!rotating) {
            angle = getNextFlip(angle);
            StartCoroutine(FlipMe(angle));
        }
    }
}

Now we add this script to the PlatformRotator object. Run the code and you’ll see that each time we hit ‘q’, ‘e’, or ‘space’ the platforms rotate appropriately. YAY!

Okay and now we are done with the biggest chunk of the game! Really all that’s left is adding game mechanics like ending the game and getting points.

Happy Coding!

-TheNappingKat

Unity

Okay, so looking at Unity it’s hard to determine how to organize the classes and interactions within your project. Many tutorials don’t stress two principles of development in Unity: Single Responsibility and Modularization.

Single Responsibility and Modularization

From the Unity site: Single Responsibility means that each class is responsible for its own task, ideally combining tasks together to accomplish larger, more complicated goals.

If you’ve played around with Unity and pre-made players/characters to control, called prefabs, you’ve probably seen how those objects are actually several small objects and components placed into one container. Usually you will see that those Player prefabs have a CharacterController and a CharacterMotor. That is because of the Single Responsibility ideology.

Unity does a good job of explaining it here:

Unity3d.com/learn/tutorials/modules/intermediate/scripting/code-practices

Dependency Inversion

One way of doing this is creating interfaces for all your classes.

The second is to have your classes inherit from one another.

One way to organize your code is to draw out how everything is related and note what every class will need. I know, I know, this sounds like doing a UML diagram, but abstracting your game idea is super useful when documenting and organizing your code. If you are planning on making a really cool, complex game, creating the core and expanding out is a lot easier when your code isn’t a huge mess that’s almost impossible to abstract. Doing this early saves time later on. TRUST ME. It happens all the time, especially since Unity makes it so easy to hit the ground running, or when you rush into development and end up building directly from your proof of concept. Both are sure-fire ways of digging your code’s grave.

Anyway. Some food for thought. Also,

Happy Coding!

Unity

How to set up a Unity Environment

Setting up an environment to work in is one of the most important parts of developing. If you mess up here it could mean a world of hurt later, in terms of recovering data and debugging random errors. Below is a summary of the steps to set up your environment for using Unity with Git as your version control.

  1. Create your Git repository first
  2. Create the Unity project in the Git folder
  3. You only want to collaborate/share the Assets and ProjectSettings folders
    1. There are 4 folders generated by Unity on initial creation
  4. Use Sublime or another text editor to edit the .gitignore file
  5. Then do the Unity Edit > Preferences setup described below
  6. Then do your first commit =)
    1. You’ll see that the Assets folder wasn’t added because it was empty
  7. Make a new branch for development
    1. Add a cube and then Add Component > new C# script called CubeController
    2. Commit and sync
  8. Now use MonoDevelop and/or Visual Studio to edit scripts

Downloading GIT and Using GitHub and GitShell

Now to start, I’m assuming you know what Git is. If you don’t, watch this video here: http://www.youtube.com/watch?v=fotbHkt1jQc#t=113

Here’s a common question, or one I had when starting out: what is Git versus GitHub? Well, GitHub is a hosting service (with a client) that makes it easier to use Git.

Now, some of you are also new to using the command line. Well, in this tutorial I’m going to show you how to use the client as well as the commands. Using Git this way is a great way to dip your toes in the water and get used to the intimidating command prompt.

Download Git/GitHub/GitShell

You can download git here: http://www.git-scm.com/download/win

It will download a shell for you as well. You’ll then need to go to GitHub. I’m assuming you already have a GitHub account; if you don’t, it’s really easy to make one. Once your account is all set up you’ll need to download the GitHub client.

After the repository is created, Publish it to make sure that everything was set up correctly.

Now locate the folder in the project finder/explorer

It can also be found through Git Shell, which will open a separate bash or Windows PowerShell.

 

Cool, so you see that two files were created: one is .gitattributes and the other is .gitignore.

I’ll explain at a high level how we are going to use Git, in case you skipped my link to the documentation. You have two repositories (an online one and a local one) and 3 states of your project: 1) the one you are working on, 2) the one saved in your local repository, and 3) the one saved in your online repo.
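In command form, moving a file through those three states looks like this (the file name and message are just examples):

git add random.txt         # working copy -> staged for commit
git commit -m "add notes"  # staged changes -> local repository
git push                   # local repository -> online repository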

If you want to learn more about Git methodology and other stuff, you can look here: GitStuff

It’s really important for you to know how GIT works. It will be a pain the first couple of times you try to commit things as you’re learning how it works, but practice makes perfect.

Okay, now that you’ve read all about the Git methodology and how it works, you might still be a little confused. If you’re not, great! Skip to the Unity parts. If you’re still struggling with the practical application, I’m going to explain how it works with both the command line and the GitHub client. (I’m running Windows, but these commands are universal and the client is pretty much the same as well.)

Working With GIT

Initial practice with adding, committing, pushing, and viewing new files and changes.

Let’s work with the shell first; it’s great practice for people who don’t really like the command line to get used to it.

Let’s create a new .txt file.

gitshellNewFile
gitshellNewFile2
gitshellNewFile3

I’m going to add some random text now. The command git status tells us the current status of your local repository versus your server repo, and shows which files have yet to be committed.

Gitshellnewfile4

You Can see that our random.txt has not been added to list of files that we are traking.

You can also see that our GitHub Client can also see that there has been a new file that was created.

gitHubNewFile
gitHubNewFile2
gitHubNewFile3

You can see on the top right hand side of the client that there is a +1. That signifies that your local repo is ahead by 1 commit.

Going back to the shell we can see the new status here as well.

Shell1

If we had added the file to the local repository in the shell, the command would have been “git add random.txt”.

Now let’s do as it prompts and sync to the online repository.

shell2

And there you have it: your first create, add, and push with Git.

shell3

Merging gets a bit complicated, especially with Unity; however, I’ll get to that a bit later… Let’s go on to creating and adding a Unity project.

UNITY

Now just a quick note before we start. If you want to build for Xbox you can only download the 4.3 version of Unity. Unity decided to skip Xbox integration for their 4.4 version, but it will be available for 4.5 or 5 (whenever that becomes publicly available). Go to the Unity site and download whatever version suits your needs.

Creating a Project

Before I create a new project I want to make sure that all my development will be saved in my Git repo location on my computer. To do this I simply make a new folder in the Git location with the name of the project I want to create.

Now in Unity go to File > New Project. In the Project Wizard be sure to select the new folder you created just a second ago.

UnityCreate

Then hit Create. You’ll notice now in your Unity Folder there are some new items:

UnityFolders
UnityGit

Here’s a little explanation of the folders and what they contain:

  • Assets folder – contains all the game assets you add, and their components
  • Library – a lot of files that Unity keeps track of regarding cache and performance
  • ProjectSettings – contains the settings important for setting up your Unity environment
  • Temp – a temporary folder that contains items specific to your environment, and the Unity lock file

Cool, so now we have a Unity project in our Git path. But we can’t start committing yet.

USING GIT AND UNITY – IMPORTANT!

Now here is the most important part of using Unity with Git. Unlike most projects, where you can sync your entire project to the repository and be fine, Unity has files that are specific to your computer and make it impossible to simply share the project between two different users. Among such files is the Unity lock file in the Temp folder, which prevents other people from working on your project… which is the opposite of what we want.

Luckily there is a way to get around this! By ignoring the folders that Unity generates for your specific environment, we can share just the project files that let other Unity installs know about the changes that were made, even if they were created on a different computer.

So how do we do this?

Through the .gitignore file I talked about earlier.

First we need to configure Unity to save its files in a format that we can read, so we can see what changes have been made when we eventually want to merge files.

Edit > Project Settings > Editor

UnityFormat

Then set the Version Control mode to Meta Files. I like visible meta files, so we can look in the Git client at the specific changed information.

Also change the Asset Serialization mode to Force Text. Otherwise the metadata will be incomprehensible.

UnityFormat2

After this, change the preferences in Edit > Preferences > Packages and make sure Repository is set to External.

UnityFormat3

Okay, now we need to set up your .gitignore to exclude the pieces of generated Unity content that would prevent us from sharing.

Go to your file explorer (Finder for Mac) and open the .gitignore file in a text editor.

UnityFormat4

In the .gitignore file you can see that there are specifications that are pre-ignored. These cover the most common IDEs that Git has run into, which makes it really convenient for users. However, what they don’t have pre-ignored is a Unity section, so we have to write one. Luckily this has already been done for us. If you search for “unity gitignore” there are several, but this one: http://kleber-swf.com/the-definitive-gitignore-for-unity-projects/ is really good.

unityformat7
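The linked file is fairly long, but its heart is ignoring the environment-specific folders described above. A trimmed sketch (the full list in the link covers more cases):

# Folders Unity regenerates per-machine; never share these
[Ll]ibrary/
[Tt]emp/
[Oo]bj/

# IDE/user files
*.csproj
*.sln
*.userprefs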

So copy and paste that into your .gitignore file. If we take a look at the GitHub client, we can see that changes were made to our .gitignore file, as well as all the new files that Unity generated.

unityFormat8

One caveat: even though we can now share our work with our teammates, it would still be a great idea to keep people working on different parts of the project. Merging Unity documents is still difficult, and you will make life a lot easier if you segment/modularize your scenes and scripts.

And there you have it! You have a working Unity environment with version control, YAY!

Below I mention a few more things that you might want to do before you dive right in.

Adding Collaborators

How do your collaborators get the code? Collaborators should have Git working on their computer. They should also have a version of Unity installed. They also need a GitHub account, and you have to add them to your project there. Then they can go to GitHub and get the repository location so they can clone it on their computer.

GitHub Client:

The GitHub client should see that you’ve been added to a new repository, and you can clone it by clicking on the project and hitting Clone. This might take some time depending on how much has been developed before they joined.

GitShell:

git clone <repositoryLocationFromGitHub>

Now your collaborator can go into Assets, see the new script in Sublime, and then add more code. Word to the wise: you should always look at the workflow before you commit anything…

Check to see if it works

Okay, so I’m going to create a random object, add some components to it, and save the scene. What we should see is that only the ProjectSettings folder and the Assets folder are changed and will be committed.

unitytestp1
unitytestp2
unitytestp3

And now we see it in the client. Now let’s commit this code.

UnityTestp4

Great, everything is committed. But we really shouldn’t be developing on the master branch of the project, because that should be the version of the code that always works. So now let’s create a new branch for development and then commit and merge with master.

Git and Unity Workflow – Branching and merging

Here is a really good explanation of branching and merging:

http://www.git-scm.com/book/en/v2/Git-Branching-Basic-Branching-and-Merging

GitHub Client:

UnityTestp5

GitShell:

When using the shell, creating a branch does not switch to it automatically; you then need to checkout the new branch you created.

Commands:
  • git branch cube1ScriptDev
  • git checkout cube1ScriptDev

Now we’re ready to add some more code and commit it to this new development branch.

UnityTestp6

And that’s all Folks!
I hope this helps and Happy Coding!