Friday, April 25, 2014

Physics Helper for Universal Windows apps

An update to the Physics Helper XAML project is now available with support for Universal Windows apps. There are several advantages to Universal apps:
  • simplified code (no more #if compiler directives)
  • increased re-use of both game assets and game logic
  • support for both Windows 8.1 and Windows Phone 8.1, and eventually the Xbox One
  • if your assets use XAML vector graphics, they will scale to each screen size without any pixelation or loss of detail
Please watch this short video for details on the new Universal apps support:


Wednesday, March 26, 2014

babylon.js: physics and character animation


In this post we’ll see how we can create physics-enabled environments mixed with character animation for gaming using babylon.js and WebGL.
Kick around some boxes and beach balls in the demo

In previous posts, I showed how we can use babylon.js to create 3D Maps using Bing and share 3D Scans from Kinect. But the heart of babylon.js is all about games, and here we’ll talk about aspects of interactive, physics-based 3D games using the framework.

Skeletal Animation

Skeletal animation is used to animate meshes based on bones – such as the walking character animation shown here. Babylon.js supports skeletal animation, and will accept animations exported from Blender. When creating the animation for this demo, I needed separate animations for walking, kicking, jumping and standing. What I chose to do was just append all of these separate animations into a single group of 90 frames, which you can see in the image here.

To control this animation in babylon.js, we can pass in the start frame and end frame we wish to play back:

scene.beginAnimation(_skeletonRegularGuy, 31, 60, true, 1.0);
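Since all of the actions share one 90-frame timeline, it helps to keep a small lookup of which frames belong to which action. The ranges below are illustrative placeholders (only the walk range matches the call above), but the pattern is the same:

var animationRanges = {
    stand: { from: 1,  to: 30 },   // placeholder range
    walk:  { from: 31, to: 60 },   // matches the beginAnimation call above
    kick:  { from: 61, to: 75 },   // placeholder range
    jump:  { from: 76, to: 90 }    // placeholder range
};

function playAction(name, loop) {
    var range = animationRanges[name];
    scene.beginAnimation(_skeletonRegularGuy, range.from, range.to, loop, 1.0);
}

playAction("walk", true);   // loop the walking frames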

One important note when exporting animations from Blender: we need to bake the location, rotation and scaling for the animation before export! To do this, select your mesh in Object Mode and then press Ctrl+A and choose Location. Repeat this for Rotation (and Scale if you have any key frames affecting scale).



Keyboard Input

In this demo, I used a simple means of tracking which keys were down, first by adding standard event handlers:
        // key input
        window.addEventListener("keydown", this.handleKeyDown, false);        window.addEventListener("keyup", this.handleKeyUp, false);
… and then tracking the states of each key in a simple attribute:
this.handleKeyDown = function (evt) {
    keyStates[evt.keyCode] = true;
};
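The matching keyup handler simply clears the flag again (a minimal sketch of the counterpart handler):

this.handleKeyUp = function (evt) {
    keyStates[evt.keyCode] = false;
};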

To make things a bit easier to track, we can add an enumeration that equates key codes to directions for our character:
var enumDirectionKeys = {
    keyNone: 0,
    keyUp: 87,
    keyLeft: 65,
    keyRight: 68,
    keyDown: 83,
    keyJump: 32
}
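With the key states and the enumeration in place, the update loop can check which keys are held each frame. A minimal sketch (moveCharacter and doJump are hypothetical helpers, not names from the demo source):

// called once per frame from the render loop
function updateInput() {
    if (keyStates[enumDirectionKeys.keyUp])    moveCharacter(0, -1);
    if (keyStates[enumDirectionKeys.keyDown])  moveCharacter(0, 1);
    if (keyStates[enumDirectionKeys.keyLeft])  moveCharacter(-1, 0);
    if (keyStates[enumDirectionKeys.keyRight]) moveCharacter(1, 0);
    if (keyStates[enumDirectionKeys.keyJump])  doJump();
}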

Virtual (Touch) Joysticks

If you happen to try the demo on a touch-enabled device, you can use the left side of the screen to control a virtual joystick for movement, and tapping anywhere on the right side of the screen makes the character kick. This is made possible by using the virtual joystick class from the GameFX library, which is included as babylon.virtualJoystick.js in the babylon source.
We can create an instance of the joystick with:
   joystick = new BABYLON.GameFX.VirtualJoystick(true);
… and then read the delta position of the joystick like so:
   var delta = (joystick.deltaJoystickVector);
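The delta can then feed the same movement code the keyboard uses. A rough sketch, assuming deltaJoystickVector exposes x and y components and reusing the hypothetical moveCharacter helper from above:

var delta = joystick.deltaJoystickVector;
if (delta.x !== 0 || delta.y !== 0) {
    // scale the raw joystick delta down to a comfortable walking speed
    moveCharacter(delta.x * 0.05, delta.y * 0.05);
}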

Sound

For sound support, I used CreateJS. It's pretty straightforward to use the sound portion of the library – first you initialize the default plugins and register a manifest:
        // sound fx
        createjs.Sound.initializeDefaultPlugins();
        var audioPath = "sounds/";
        var manifest = [
            { id: "kick", src: "kick.mp3" },
            { id: "wood", src: "wood.mp3" },
            { id: "ball", src: "ball.mp3" }
        ];
        createjs.Sound.registerManifest(manifest, audioPath);
… after which you can play any file by its id:

createjs.Sound.play("wood");

Adding Physics

Babylon.js uses Cannon.js to support 3D physics, and makes it very easy to add through impostors. For example, to add physics to our "beach balls," we can use the SphereImpostor in a call to setPhysicsState like so:

ball.setPhysicsState({ impostor: BABYLON.PhysicsEngine.SphereImpostor, mass: 0.05, friction: 0.5, restitution: 0.9 });
However, there are some advanced scenarios where a simple impostor will not do – for example, in the picture to the right you see our "dude" physics in the demo. Since we want our dude character to walk around and move/kick the physics objects around him, we need a more advanced impostor setup. In this case, we can join a "wheel sphere" to a "body box" and then apply impulse to the wheel to move the box around. Additionally, the "kick region" is used to detect collisions with other physics objects when the character performs his kick move.
To accomplish the character physics, three physics impostors are used.
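As a rough sketch of that idea (the sizes, masses, and the setPhysicsLinkWith join below are illustrative assumptions, not the demo's exact code), the body-plus-wheel assembly might be set up like this:

// body: a box that represents the character's torso
var body = BABYLON.Mesh.CreateBox("dudeBody", 4, scene);
body.setPhysicsState({ impostor: BABYLON.PhysicsEngine.BoxImpostor, mass: 1, friction: 0.5, restitution: 0.1 });

// wheel: a sphere joined beneath the body; impulses applied here roll the character around
var wheel = BABYLON.Mesh.CreateSphere("dudeWheel", 12, 2, scene);
wheel.setPhysicsState({ impostor: BABYLON.PhysicsEngine.SphereImpostor, mass: 1, friction: 0.9, restitution: 0.1 });
wheel.setPhysicsLinkWith(body, new BABYLON.Vector3(0, -2, 0), new BABYLON.Vector3(0, 0, 0));

// push the wheel forward to move the whole assembly
wheel.applyImpulse(new BABYLON.Vector3(0, 0, 0.5), wheel.position);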

Getting Help and Support

If you choose to use babylon.js for your next project, here are some resources that can get you started.
Babylon.js forums on HTML5 Game Devs – this is an awesome community of like-minded game devs using JavaScript frameworks such as Babylon.
Babylon.js samples and source – if there is one thing babylon provides, it’s a lot of cool samples. Poking around through this code is invaluable in learning the ropes.
Babylon.js Wiki – full of some good tutorials to get started.

Saturday, February 15, 2014

Lenovo Yoga 2 Pro: Fixing the Quirks


After shopping around for a new ultrabook, I finally settled on the Lenovo Yoga 2 Pro. It features a 3200x1800 screen, 8 GB RAM and a 256 GB SSD, all weighing in at just over 3 lbs. The Yoga design allows the screen to rotate 360 degrees so that you get a (albeit hefty) tablet mode as well.


I have to admit, I had a rough "honeymoon period" with the Yoga because of a few annoying quirks that almost had me packing the device up for return. But after a bit of adjustment, I am happy with the device and will use it as my primary dev machine (I also picked up a Plugable USB 3 Docking Station).

So, I thought I would share my solutions to some common annoyances that other developers will have with the Yoga.

Trackpad - Back to Basics

I have heard a lot of complaints about the trackpad on the Yoga 2, and I have to agree that its gesture support just gets in the way. I would become really frustrated inside Visual Studio when trying to drag items around or use right-click context menus - whenever I tried to right-click, the mouse would move just a little bit off target before the click registered.


So here is what I did to make the trackpad a more "basic" experience, removing the gestures.
  •           Hit the Windows Key
  •           Type "Control Panel"
  •           In the Search box, enter "mouse"
  •           Select "Change mouse settings"
  •           Go to the Device Settings tab
  •           Click the Settings Button
  •           Now you will see the Synaptics Control Panel. Be sure to visit all three tabs here and uncheck whichever gestures get in your way.


Function Keys

The top row of keys on the Yoga has both "function" keys and "utility" keys (for example, media controls, airplane mode, etc.). By default these keys invoke the utility functions when pressed, and as a developer you most definitely want the function keys to be the default.
Here is how to change the default to Function Keys.
  • Shut down your Yoga 2.
  • On the right side of the Yoga, next to the power button, is a small circular button marked with a little "refresh arrow". Press this tiny button with a small screwdriver or any pointed tool.
  • This will get you into the BIOS. Navigate to Configuration, and then HotKey Mode and select "Disabled":



Auto-Adjust Screen Brightness

The Yoga 2 has a light sensor which automatically brightens or dims the screen depending on the lighting conditions. The problem is, in some lighting the screen flickers rapidly because it can't settle on a brightness level!

You can, however, disable the auto brightness.


  •           Click the little battery icon at the right-most end of the task bar.
  •           Select "more power options"
  •           Select "change plan settings"
  •           Select "change advanced power settings"
  •           Expand Display / Enable adaptive brightness, and turn it off:




The Display


We've been taught that more pixels are better, but now I am not so sure! The Yoga 2 comes with a 3200 x 1800 display, which has a very high pixel density. Don't get me wrong, the display is gorgeous. But I have found several desktop applications that were not designed to scale properly to this PPI. For example, the SSMS tool for SQL Server has lots of scaling issues, and its "Locate Backup File" dialog shows up like this:



(I found that resizing the dialog above fixes the paint issues, but it is a bit disconcerting!)

A few other issues with the high PPI:
  •           Splash screens come up teeny-tiny
  •           Screen Shots are HUGE. If you're used to taking screenshots or video, be prepared for huge files.
  •           The Command Prompt... Another window that comes up very tiny. You can set the font sizes in the window properties, but in my configuration and font selection I am not able to get it to a comfortable viewing size.
  •      The only fix I have found for these issues is to kick down the Yoga's resolution when needed - for example, if you're doing some Camtasia recording, it might be a good idea to lower the res before doing so.


In Summary

If you're a developer considering the Yoga 2 Pro, I would recommend checking out the numerous reviews from Engadget, CNET, and others. And while you will likely hit the same quirks I did above, you should be able to make these tweaks to improve your experience.

Saturday, January 11, 2014

Sharing 3D Scans in WebGL using BabylonJS

In this post, I’ll show how to optimize and display color 3D scans in any WebGL-enabled browser using BabylonJS. I’ll be using Kinect for Windows for my 3D Scan, but you should be able to use these steps using other 3D Scanners as well.

Click the images below to see a couple of sample 3D scans – a monkey and a fruitbowl.







The idea of using the Kinect for creating 3D Scans has been around for a while, and there are commercial packages such as ReconstructMe and Skanect which provide an easy interface for 3D Scanning. However, these packages have limitations and costs associated with them.

As a free alternative, we can use Kinect Fusion and Blender to create and optimize our 3D Scans. Then, we can share our 3D Scans in any WebGL-enabled browser using BabylonJS – a JavaScript library for 3D display and animation.


Step 1: Get the Tools

In this post, we’ll be using a few free tools and libraries:

Kinect for Windows SDK (and Developer Toolkit)
Download the latest SDK and Developer Toolkit to enable scanning using Kinect Fusion.

Blender
We’ll use Blender, a free 3D Design tool, to optimize our mesh scans and prepare them for Web sharing

BabylonJS
This free JavaScript library makes it easy to create 3D scenes and games using WebGL.

BabylonJS / Blender Export Plug-in
This plugin for Blender allows you to export a Blender 3D Scene to Babylon format. Download the plug-in and read the install instructions from the link to make it available in Blender.


Step 2: 3D Scan using Kinect Fusion

First load the “Developer Toolkit Browser” which was installed with your Kinect Developer Toolkit. Scroll down and run the Kinect Fusion Explorer – D2D sample.

For Mesh Format, select PLY and check the “Capture Color” checkbox:
You will also likely need to tweak the Reconstruction Volume and Voxel Resolution settings. For my close-up scans, I found the following settings to be the best, but you will want to try different settings for your particular scene.



Capture your 3D Scan by either rotating the object in the Kinect’s view, or walking around the object to gather all of the geometry. In this case, I am using a cute little stuffed monkey:


Once you capture all angles, click the “Create Mesh” button to save a .PLY format file (which includes color information) to disk.


Step 3: Optimize the Mesh using Blender

At this point, we have a large amount of data in a very large file, so in order to display it in a browser using BabylonJS, we will need to optimize it. We also need to clean up the model and smooth it a bit, as the 3D Scan adds unwanted artifacts. To do so, we will use Blender. If you are not familiar with Blender, you may be a bit overwhelmed at first – but stick with it, it's a valuable and fun tool! This post won't cover all of the basics of using Blender.
1)      CREDITS go to CrowTreeStudios for the vertex color baking process! For a video walkthrough of these steps, see their video here.
2)      Open Blender and delete the default cube by pressing “X”
3)      Select File/Import/Stanford (PLY)
a)      Browse to the PLY file that you exported in Step 2 above.
b)      This might take a little while, the scans are large!
4)      When the model loads, it will not be oriented correctly, so rotate and center it appropriately. You should have something like below, with the model centered in the viewport:

5)      Make a copy of the object, by first selecting it and then select Object / Duplicate Objects. In the next steps, we will optimize this copy (by reducing vertex count) and then create a texture map for it using the original mesh’s vertex color information (this is called “Vertex Baking”).
a)      Important: Make sure that both the original mesh and the copied mesh are in the exact same Position. This is necessary for proper vertex baking! They should appear in a perfect overlapped position at this point. If you need to tweak, just go to the Object properties panel and set the Location X,Y, and Z.
6)      Next, we will optimize and clean up the mesh, so that it appears smoother and is not so huge on disk and in memory.
a)      Hide the original mesh so you can concentrate on the duplicate mesh.
b)      Select the duplicate mesh (Blender will add a “.001” to the mesh name)
c)       Activate the Modifiers panel and select Add Modifier.

d)      Select ReMesh, set the Octree Depth up to 6 and the Mode to Smooth. Observe the shape of your modified mesh; you may need to increase or decrease the Octree Depth for proper smoothing and cleanup. When you are happy, click Apply. This reduces the vertex count and smooths the scan data out a bit.
7)      Your copy should now look much smoother, and any hanging artifacts should be removed:

a)      Show the original mesh again by making it visible.
8)      Add a material to original, scanned mesh.
a)      select the original mesh
b)      add a new material
c)       under shading, select "Shadeless”

d)      under options, check "Vertex Color Paint"


9)      Back under the main Blender View, under "Viewport Shading" (the small sphere icon), select "Texture" (this shows the vertex color data)

10)   Now, we can see the scanned in vertex color data from the original mesh. In the next steps, we will create a Texture map from this color data (a process known as Vertex Baking).

11)   Now we’ll add a material to the optimized (copied) mesh
a)      select the copied mesh
b)      add a new material
c)       under shading, select "Shadeless"
12)   Next we need to unwrap the mesh, as a setup for creating a texture map image.
a)      Split the Blender View into two views
b)      in the left view, select UV/Image Editor

c)       in the right view, go to Edit mode on the copied mesh
d)      select all vertices
e)      hit "U" for unwrap and then select "Smart UV Project"
13)   Now we’ll create an image to store the texture map.
a)      in the left-side Image editor, create a New Image

b)      Give the image a name and uncheck Alpha, then click OK.

14)   For proper vertex color baking, we need to change the default color management in Blender.
a)      Select the Scene Panel in Blender

b)      Under Color Management, set Display Device to None.

15)   Now we can bake our vertex color data to a texture map.
a)      go to Render Panel (the Camera icon)
b)      Select the (original) hi-res mesh
c)       Hold Shift down and Shift+Click the Copied (Optimized Mesh). In your scene treeview, you should see the original mesh circled in orange with black text, and the copied mesh in white like so:

16)   Scroll down to the Bake Menu
a)      check "Selected to active"
b)      uncheck “Clear”

c)       Click the Bake button
17)   In the left-hand view, you should see your vertex colors bake into a texture map:


18)   Save your texture map to a local file.
a)      Select Image/Save as Image from the Image Editor menu…

19)   Now we’ll apply this texture map image to the optimized mesh.
a)      select the (copied) optimized mesh
b)      add a new texture
c)       set type = image or movie
d)      open the map image file
e)      go to mapping and select Coordinates: UV
f)       select Map: UVMap

20)   Now that we have both a Texture Map and an Optimized Mesh, we can delete the original (scanned data) mesh from the project.
a)      Select the original mesh and delete it by selecting Object/Delete.
21)   Export the Scene to Babylon format.
a)      Select File/Export/Babylon JS (if you do not see this option, then go back to the Step 1: Downloads step and read the plugin section)
b)      The export should create two files: a *.babylon and a *.png (texture map) file.




Step 4: Load and Display with BabylonJS

Now that we have a *.babylon scene file and a *.png texture map, we can easily load and display our 3D Scan using BabylonJS.

1)  Make sure your web server has the Babylon Mime Types enabled. In IIS, you would add the following to the system.webServer section (under configuration):

<staticContent>
<mimeMap fileExtension=".fx" mimeType="application/shader" />
<mimeMap fileExtension=".babylon" mimeType="application/babylon" />
</staticContent>
2)      We’ll need to include two scripts from the Babylon download (place these in the scripts subfolder): hand.js and babylon.js
3)      Create a new HTML file, and add the following code to create the Babylon scene from the exported Blender scene file (comments are inline).

<html>
<head>
<title>Simple Babylon Demo</title>
<link href="index.css" rel="stylesheet" />

<script src="scripts/hand.js"></script>
<script src="scripts/babylon.js"></script>
<script>
window.onload = (function () {
var canvas = document.getElementById("canvas");

// Check that browser supports WebGL
if (!BABYLON.Engine.isSupported()) {
  document.getElementById("notSupported").className = "";
  document.getElementById("opacityMask").className = "";
} else {

  // create Babylon Engine
  var engine = new BABYLON.Engine(canvas, true);

  // create the Scene
  var scene = new BABYLON.Scene(engine);

  // add a light
  var light = new BABYLON.HemisphericLight("Hemi0", new BABYLON.Vector3(0, 1, 0), scene);

  // Add a camera that allows rotating view around a point
  var camera = new BABYLON.ArcRotateCamera("Camera", 0, 0.8, 100, BABYLON.Vector3.Zero(), scene);

  // load the scene (exported from blender)
  BABYLON.SceneLoader.ImportMesh("", "images/models/", "monkey.babylon", scene, function (newMeshes, particleSystems) {
  // optional - position and size the mesh
  var monkey = newMeshes[0];
  monkey.scaling.x = 70;
  monkey.scaling.y = 70;
  monkey.scaling.z = 70;
  monkey.position.z = -48;
  monkey.position.y = 20;
  monkey.position.x = 0;

  // optional - add in a material and platform
  var platform = newMeshes[1];
  platform.material = new BABYLON.StandardMaterial("texture1", scene);
  });

  scene.activeCamera.attachControl(canvas);

  // register a render loop to render scene
  engine.runRenderLoop(function () {
  scene.render();
  });

  // Resize
  window.addEventListener("resize", function () {
    engine.resize();
  });
}
});
</script>

</head>
<body>
<div id="opacityMask" class="hidden"></div>
<canvas id="canvas"></canvas>
<div id="notSupported" class="hidden">Your browser does not support WebGL</div>
</body>
</html>


Conclusion

Once again, you can see samples of this process here and here. As you can see, it is a bit of work to optimize a mesh for display using WebGL, but by mastering these steps we have ultimate control over the size and detail of our meshes. And using WebGL and BabylonJS, we have a cross-platform solution which works in IE11, Firefox, Chrome, and even some mobile browsers.

Friday, December 6, 2013

Bing 3D Maps using WebGL and BabylonJS


In this post, we will look at how we can use BabylonJS and WebGL to create a 3D height map and route path using data from Bing Maps REST Services.

TRY THE DEMO [IE11, Chrome, or FF required] |  
DOWNLOAD THE SOURCE




NOTE: Before you read any further, please note that at the time of this writing, there is rumored to be a native Bing Maps 3D WebGL control, and the release is likely imminent! Still, this sample will be very useful if you are interested in terrain mapping and other aspects of using BabylonJS!

Overview

Now that IE11 supports WebGL along with Chrome, Firefox, and a host of other browsers, I am betting that we will see a resurgence of interest in this 3D browser tech. BabylonJS is an easy-to-use 3D library written in JavaScript that leverages the power of WebGL to create cross-platform 3D experiences. If we couple that with the Bing Maps API, which allows querying of satellite imagery, driving directions, and elevation data, we can fairly easily build a realistic 3D scene of any area on earth!

Bing Maps Tile System

When you visit Bing Maps (or Google Maps for that matter), the view you get is actually composed of a series of square "tiles" - each being 256x256 pixels. As you pan the map around, new tiles are loaded into the view so that it is filled.

The details on this tile system are available on this MSDN page, but in short, each 256x256 tile has a "Quad Key" which is a numeric representation based on Zoom Level and Location.



At the bottom of the MSDN article mentioned above, pay special attention to the sample code. These functions let us get tiles for any point (latitude, longitude) on the map. To get a tile image's Quad Key for a particular latitude and longitude, we first convert the lat/long to a pixel offset in the world map, then convert the pixel offset to a tile offset, and finally convert the tile offset to a Quad Key.

NOTE: All of the code in the MSDN article is given in C#, but the download project has a JavaScript conversion of these utility functions.
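To give a rough idea of what that JavaScript conversion looks like, the three conversions can be sketched along these lines (clamping of out-of-range latitudes and pixel values is omitted here for brevity):

// latitude/longitude -> pixel XY at a given zoom (detail) level
function latLongToPixelXY(latitude, longitude, levelOfDetail) {
    var sinLatitude = Math.sin(latitude * Math.PI / 180);
    var mapSize = 256 * Math.pow(2, levelOfDetail);
    var pixelX = ((longitude + 180) / 360) * mapSize;
    var pixelY = (0.5 - Math.log((1 + sinLatitude) / (1 - sinLatitude)) / (4 * Math.PI)) * mapSize;
    return { x: Math.floor(pixelX), y: Math.floor(pixelY) };
}

// pixel XY -> tile XY (each tile is 256x256 pixels)
function pixelXYToTileXY(pixelX, pixelY) {
    return { tileX: Math.floor(pixelX / 256), tileY: Math.floor(pixelY / 256) };
}

// tile XY + zoom level -> quad key string
function tileXYToQuadKey(tileX, tileY, levelOfDetail) {
    var quadKey = "";
    for (var i = levelOfDetail; i > 0; i--) {
        var digit = 0;
        var mask = 1 << (i - 1);
        if ((tileX & mask) !== 0) digit++;
        if ((tileY & mask) !== 0) digit += 2;
        quadKey += digit;
    }
    return quadKey;
}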

Querying Elevations using Bing
Back in January, Bing Maps added an Elevations API which allows for querying of elevation data for a bounding box on the map. This MSDN article gives the details on the REST API for this data.
For our purposes, we will query a set of equally spaced positions on each tile to get the elevation for those points. We can then later set the elevation (Y-axis) values of those points to create a 3D effect.
Bounding box grid showing order of elevations
Note that, to query Elevations and Directions from the Bing Maps API, you will need a developer key, which you can get here. Bing has pretty generous licensing for mobile apps and some websites.
Once you get your key, you will need to modify ProxyController.cs in the sample code to use your key, on this line:

const string _bingMapsKey = " *** ENTER YOUR BING MAPS API KEY HERE *** ";
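For reference, an Elevations request for an evenly spaced grid over a bounding box is built roughly like this (the bounding box below is a placeholder, bingMapsKey stands for whatever variable holds your key, and in the sample this call is made server-side via ProxyController.cs so the key never reaches the browser):

// bounds are southLatitude,westLongitude,northLatitude,eastLongitude
var bounds = "44.20,-71.90,44.30,-71.80";   // placeholder bounding box
var url = "http://dev.virtualearth.net/REST/v1/Elevation/Bounds" +
          "?bounds=" + bounds +
          "&rows=8&cols=8" +                // evenly spaced grid of sample points
          "&key=" + bingMapsKey;            // your Bing Maps API key

// the JSON response contains resourceSets[0].resources[0].elevations,
// an array of rows * cols elevation values (in meters)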

Displaying the Tiles and Elevation in 3D

Now that we have the Image Tiles and the Elevation data from Bing Maps, we can display these in BabylonJS. In 3D space, we have X, Y, and Z axes to display our 2D tiles plus elevation data on. Below is how we'll do it:




As you can see in the image above, the 2D tiles will stretch in the X (latitude) and Z (longitude) axes, and the Y-axis will be used to show the elevation.

The key to creating the tile's elevation (height map) is the following function. It takes a list of elevations and applies them to the Y-axis (height) value of a tile.

BINGWEBGL.Engine.prototype.createGroundFromElevationData = function (name, elevations, width, height, subdivisions, minHeight, maxHeight, scene, updatable) {
        var ground;

        ground = this._scene.getMeshByName(name);
        if (ground == null)
            ground = new BABYLON.Mesh(name, this._scene);

        ground._isReady = false;

        var indices = [];
        var positions = [];
        var normals = [];
        var uvs = [];
        var row, col;

        // Getting height map data
        var heightMapWidth, heightMapHeight;
        if (elevations != null) {
            heightMapWidth = elevations.length / 2;
            heightMapHeight = elevations.length / 2;
        }

        // Vertices
        var elevationIndex = 0;
        for (row = 0; row <= subdivisions; row++) {
            for (col = 0; col <= subdivisions; col++) {
                var position = new BABYLON.Vector3((col * width) / subdivisions - (width / 2.0), 0, ((subdivisions - row) * height) / subdivisions - (height / 2.0));

                // Compute height
                if (elevations != null) {
                    var heightMapX = (((position.x + width / 2) / width) * (heightMapWidth - 1)) | 0;
                    var heightMapY = ((1.0 - (position.z + height / 2) / height) * (heightMapHeight - 1)) | 0;

                    position.y = (elevations[elevationIndex] - this._meanElevation);  // Math.random() * 20;
                    elevationIndex++;
                }

                // Add vertex
                positions.push(position.x, position.y, position.z);
                normals.push(0, 0, 0);
                uvs.push(col / subdivisions, 1.0 - row / subdivisions);
            }
        }

        // Indices
        for (row = 0; row < subdivisions; row++) {
            for (col = 0; col < subdivisions; col++) {
                indices.push(col + 1 + (row + 1) * (subdivisions + 1));
                indices.push(col + 1 + row * (subdivisions + 1));
                indices.push(col + row * (subdivisions + 1));

                indices.push(col + (row + 1) * (subdivisions + 1));
                indices.push(col + 1 + (row + 1) * (subdivisions + 1));
                indices.push(col + row * (subdivisions + 1));
            }
        }

        // Normals
        BABYLON.Mesh.ComputeNormal(positions, normals, indices);

        // Transfer
        ground.setVerticesData(positions, BABYLON.VertexBuffer.PositionKind, updatable);
        ground.setVerticesData(normals, BABYLON.VertexBuffer.NormalKind, updatable);
        ground.setVerticesData(uvs, BABYLON.VertexBuffer.UVKind, updatable);
        ground.setIndices(indices);

        ground._updateBoundingInfo();

        ground._isReady = true;

        return ground;
    };
Summary
This is just a short post to give an overview of the concepts used for the Babylon/Bing WebGL demo. If you are interested in more details, I suggest you check out the source code and post any questions in the comments area below!