Extraterrestrial Map Kinections


Fig 1 – LRO Color Shaded Relief map of moon – Silverlight 5 XNA with Kinect interface


Silverlight 5 was released after a short delay, at the end of last week.
Just prior to exiting stage left, Silverlight, along with all plugins, sings one last aria. The spotlight now shifts abruptly to a new diva, mobile HTML5. Backstage, the enterprise awaits with a bouquet of roses. Their concourse will linger long into the late evening of 2021.

The Last Hurrah?

Kinect devices continue to generate a lot of hacking interest. With the release of an official Microsoft Kinect beta SDK for Windows, things get even more interesting. Unfortunately, Kinect and the web aren’t exactly ideal partners. It’s not that web browsers wouldn’t benefit from moving beyond the venerable mouse/keyboard events. After all, look at the way mobile touch, voice, inertia, gyro, accelerometer, GPS . . . have all suddenly become base features in mobile browsing. The reason Kinect isn’t part of the sensor event farmyard may be just a lack of portability and an ‘i’ prefix. Shrinking a Kinect doesn’t work too well, as stereoscopic imagery needs a degree of separation in a Newtonian world.

[The promised advent of NearMode (50cm range) offers some tantalizing visions of 3D voxel UIs. Future mobile devices could potentially take advantage of the human body’s bilateral symmetry: simply cut the device in two and mount one half on each shoulder. But that isn’t the state of hardware at present.]


Fig 2 – a not-so-subtle fashion statement: OmniTouch


For the present, experimenting with Kinect control of a Silverlight web app requires a relatively static configuration and a three-step process: the Kinect out there, beyond the stage lights, the web app over here, close at hand, and a software piece in the middle. The Kinect SDK, which roughly corresponds to our visual and auditory cortex, amplifies and simplifies a flood of raw sensory input to extract bits of “actionable meaning.” The beta Kinect SDK gives us device drivers and APIs in managed code. However, since these APIs have not been compiled against the Silverlight runtime, a Silverlight client will by necessity be one step further removed.

Microsoft includes some rich sample code as part of the Kinect SDK download. In addition, there are a couple of very helpful blog posts by David Catuhe and a CodePlex project, Kinect Toolbox.

Step 1:

The approach for this experimental map interface is to use the GestureViewer code from Kinect Toolbox to capture some primitive commands arising from sensory input. The command repertoire is minimal: four compass-direction swipes, plus two circular gestures for zooming (circle clockwise to zoom in, circle counterclockwise to zoom out). Voice commands are pretty much a freebie, so I’ve added a few to the mix. Since Kinect Toolbox includes a learning, template-based gesture module, you can capture just about any gesture desired. I’m choosing to keep this simple.
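Wiring-wise, GestureViewer surfaces each detected gesture as a plain string, so the bridge to the map is little more than a filter. Below is a minimal sketch rather than the project’s actual code: OnGestureDetected is assumed to match the Kinect Toolbox string callback, and commandServer is an assumed field holding the socket service sketched in Step 2.

    using System.Collections.Generic;

    // the six gesture strings the map viewer understands
    static readonly HashSet<string> mapCommands = new HashSet<string>
    {
        "swipetoleft", "swipetoright", "swipeup", "swipedown", "circle", "circle2"
    };

    // hooked to the Kinect Toolbox detectors' gesture-detected callback
    void OnGestureDetected(string gesture)
    {
        if (mapCommands.Contains(gesture))
            commandServer.Send(gesture);   // hand off to the Step 2 socket service
    }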

Step 2:

Once gesture recognition for these six commands is available, step 2 is handing the commands off to a Silverlight client. In this project I used a socket service running on a separate thread. As gestures are detected they are pushed out on a TCP socket service bound to local port 4530. There are other approaches that may work better with the final release of Silverlight 5.
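For reference, the hand-off can be as small as a TcpListener on a background thread that pushes each command string to the connected client. This is a minimal sketch of the idea, not the project’s actual socket service; note also that an in-browser Silverlight client normally requires a socket policy server on port 943, which is omitted here.

    using System.Net;
    using System.Net.Sockets;
    using System.Text;
    using System.Threading;

    public class CommandServer
    {
        private TcpClient client;

        public void Start()
        {
            // accept the Silverlight client on a background thread so gesture
            // detection is never blocked
            new Thread(() =>
            {
                var listener = new TcpListener(IPAddress.Any, 4530);
                listener.Start();
                client = listener.AcceptTcpClient();
            }) { IsBackground = true }.Start();
        }

        public void Send(string command)
        {
            if (client == null || !client.Connected) return;
            byte[] buffer = Encoding.UTF8.GetBytes(command + "\n");
            client.GetStream().Write(buffer, 0, buffer.Length);
        }
    }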

Step 3:

The Silverlight client listens on port 4530, reading command strings as they arrive. Each command is then translated into the appropriate action for our Map Controller.
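On the Silverlight side the listener uses the plug-in’s Socket and SocketAsyncEventArgs API. A sketch of the idea follows; ConnectToGestureService and the buffer size are illustrative, and SocketCommand is the dispatcher shown later in the command listing.

    // requires System.Net, System.Net.Sockets, System.Text
    private Socket socket;

    private void ConnectToGestureService()
    {
        socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        var connectArgs = new SocketAsyncEventArgs { RemoteEndPoint = new DnsEndPoint("localhost", 4530) };
        connectArgs.Completed += (s, e) =>
        {
            // start the receive loop once connected (synchronous completion ignored for brevity)
            var receiveArgs = new SocketAsyncEventArgs();
            receiveArgs.SetBuffer(new byte[1024], 0, 1024);
            receiveArgs.Completed += OnReceive;
            socket.ReceiveAsync(receiveArgs);
        };
        socket.ConnectAsync(connectArgs);
    }

    private void OnReceive(object sender, SocketAsyncEventArgs e)
    {
        if (e.BytesTransferred > 0)
        {
            string command = System.Text.Encoding.UTF8.GetString(e.Buffer, 0, e.BytesTransferred).Trim();
            // marshal to the UI thread before touching any controls
            Dispatcher.BeginInvoke(() => SocketCommand(command));
        }
        socket.ReceiveAsync(e);   // wait for the next command
    }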


Fig 3 – Kinect to Silverlight architecture

Full Moon Rising


But first, instead of the mundane, let’s look at something a bit extraterrestrial, a more fitting client for such “extraordinary” UI talents. NASA has been very busy collecting large amounts of fascinating data on our nearby planetary neighbors. One data set, recently released by ASU, stitches together a comprehensive lunar relief map with beautiful color shading. Wow, what if the moon really looked like this!


Fig 4 – ASU LRO Color Shaded Relief map of moon

In addition to our ASU moon, USGS has published a set of imagery for Mars, Venus, and Mercury, as well as some Saturn and Jupiter moons. Finally, JPL thoughtfully shares a couple of WMS services and some imagery of the other planets:

This type of data wants to be 3D, so I’ve brushed off code from a previous post, NASA Neo 3D XNA, and adapted it for planetary data, minus the population bump map. However, bump maps for depicting terrain relief are still a must-have. A useful tool for generating bump or normal imagery from color relief is SSBump Generator v5.3. The result is an image that encodes the relative elevation of the moon’s surface. The normal map is added to the XNA rendering pipeline, where it is combined with the color relief imagery and applied to a simplified spherical model.
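In the SL5 XNA pipeline this boils down to binding both images to the sampler registers the planet’s pixel shader declares. A rough sketch of the draw callback, with illustrative field names (colorReliefTexture, normalMapTexture, scene.Draw) rather than the actual MoonViewer code:

    private void OnDraw(object sender, DrawEventArgs args)
    {
        GraphicsDevice device = args.GraphicsDevice;

        // bind the imagery to the sampler registers the pixel shader expects
        device.Textures[0] = colorReliefTexture;   // sampler2D ColorSampler : register(s0)
        device.Textures[1] = normalMapTexture;     // sampler2D BumpSampler  : register(s1)

        // the sphere model issues its own vertex buffer, shader constant, and
        // draw calls; illustrative call only
        scene.Draw(device, args.TotalTime);

        args.InvalidateSurface();                  // request the next frame
    }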


Fig 5 – part of normal map from ASU Moon Color Relief imagery

The result is seen in the MoonViewer client with the added benefit of immediate mode GPU rendering that allows smooth rotation and zoom.

The other planets and moons have somewhat less data available, but still benefit from the XNA treatment. Only Earth, Moon, Mars, Ganymede, and Io have data affording bump map relief.

I also added a quick 2D WMS viewer in HTML using OpenLayers against the JPL WMS servers to take a look at lunar landing sites. Default OpenLayers isn’t especially pretty, but it takes less than 20 lines of JavaScript to get a zoomable viewer with landing locations. I would have preferred the elegance of Leaflet.js, but EPSG:4326 isn’t supported in L.TileLayer.WMS(). MapProxy promises a way to proxy the planet data as EPSG:3857 tiles for Leaflet consumption, but OpenLayers offers a simpler path.


Fig 6 – OpenLayer WMS viewer showing lunar landing sites

Now that the Viewer is in place, it’s time to take a test drive. Here is a ClickOnce installer for GestureViewer modified to work with the Silverlight socket service:

Recall that this is a Beta SDK, so in addition to a Kinect prerequisite, there are some additional runtime installs required:

Using the Kinect SDK Beta

Download Kinect SDK Beta 2:

Be sure to look at the system requirements and the installation instructions further down the page. This is still a beta and requires a few pieces. The release SDK is rumored to be available in the first part of 2012.

You may have to download some additional software as well as the Kinect SDK:

Finally, we are making use of port 4530 for the Socket Service. It is likely that you will need to open this port in your local firewall.

As you can see, this is not exactly a user-friendly installation, but the reward is seeing Kinect control of a mapping environment. If you are hesitant to go through all of this install trouble, here is a video link that will give you an idea of the results.

YouTube video demonstration of Kinect Gestures


Voice commands using the Kinect are very simple to add, so this version includes a few.
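The speech side is mostly boilerplate from the Kinect SDK beta samples: build a Microsoft.Speech grammar from the command strings and feed it the Kinect audio stream. A hedged sketch follows; the KinectAudioSource property names are as I recall them from the beta audio samples, the command list is abbreviated, and SendCommand is the same illustrative hand-off used earlier.

    // namespaces: Microsoft.Research.Kinect.Audio, Microsoft.Speech.AudioFormat,
    // Microsoft.Speech.Recognition
    private SpeechRecognitionEngine sre;
    private KinectAudioSource audioSource;

    private void StartSpeech()
    {
        var commands = new Choices("moon-on", "mars-on", "rotate-east",
                                   "rotate-west", "rotate-off", "reset");   // abbreviated list
        sre = new SpeechRecognitionEngine();   // the beta samples pass the Kinect recognizer's Id here
        sre.LoadGrammar(new Grammar(new GrammarBuilder(commands)));
        sre.SpeechRecognized += (s, e) =>
        {
            if (e.Result.Confidence > 0.9)
                SendCommand(e.Result.Text);    // push the recognized command out the socket service
        };

        audioSource = new KinectAudioSource    // property names per the beta audio samples
        {
            FeatureMode = true,
            AutomaticGainControl = false,
            SystemMode = SystemMode.OptibeamArrayOnly
        };
        sre.SetInputToAudioStream(audioSource.Start(),
            new SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
        sre.RecognizeAsync(RecognizeMode.Multiple);
    }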

Here is the listing of available commands:

    public void SocketCommand(string command)
    {
        switch (command)
        {
            // Kinect voice commands
            case "mercury-on": { MercuryRB.IsChecked = true; break; }
            case "venus-on": { VenusRB.IsChecked = true; break; }
            case "earth-on": { EarthRB.IsChecked = true; break; }
            case "moon-on": { MoonRB.IsChecked = true; break; }
            case "mars-on": { MarsRB.IsChecked = true; break; }
            case "marsrelief-on": { MarsreliefRB.IsChecked = true; break; }
            case "jupiter-on": { JupiterRB.IsChecked = true; break; }
            case "saturn-on": { SaturnRB.IsChecked = true; break; }
            case "uranus-on": { UranusRB.IsChecked = true; break; }
            case "neptune-on": { NeptuneRB.IsChecked = true; break; }
            case "pluto-on": { PlutoRB.IsChecked = true; break; }

            case "callisto-on": { CallistoRB.IsChecked = true; break; }
            case "io-on": { IoRB.IsChecked = true; break; }
            case "europa-on": { EuropaRB.IsChecked = true; break; }
            case "ganymede-on": { GanymedeRB.IsChecked = true; break; }
            case "cassini-on": { CassiniRB.IsChecked = true; break; }
            case "dione-on": { DioneRB.IsChecked = true; break; }
            case "enceladus-on": { EnceladusRB.IsChecked = true; break; }
            case "iapetus-on": { IapetusRB.IsChecked = true; break; }
            case "tethys-on": { TethysRB.IsChecked = true; break; }

            // 2D viewers open in a new browser window
            case "moon-2d":
                {
                    MoonRB.IsChecked = true;
                    Uri uri = Application.Current.Host.Source;
                    System.Windows.Browser.HtmlPage.Window.Navigate(new Uri(uri.Scheme + "://" + uri.DnsSafeHost + ":" + uri.Port + "/MoonViewer/Moon.html"), "_blank");
                    break;
                }
            case "mars-2d":
                {
                    MarsRB.IsChecked = true;
                    Uri uri = Application.Current.Host.Source;
                    System.Windows.Browser.HtmlPage.Window.Navigate(new Uri(uri.Scheme + "://" + uri.DnsSafeHost + ":" + uri.Port + "/MoonViewer/Mars.html"), "_blank");
                    break;
                }
            case "nasaneo":
                EarthRB.IsChecked = true;
                System.Windows.Browser.HtmlPage.Window.Navigate(new Uri(""), "_blank"); // URL elided in the original listing
                break;

            // rotation and view reset
            case "rotate-east":
                RotationSpeedSlider.Value += 1.0;
                tbMessage.Text = "rotate east";
                break;
            case "rotate-west":
                RotationSpeedSlider.Value -= 1.0;
                tbMessage.Text = "rotate west";
                break;
            case "rotate-off":
                RotationSpeedSlider.Value = 0.0;
                tbMessage.Text = "rotate off";
                break;
            case "reset":
                RotationSpeedSlider.Value = 0.0;
                orbitX = 0;
                orbitY = 0;
                tbMessage.Text = "reset view";
                break;

            // Kinect swipe algorithmic commands
            case "swipetoleft":
                orbitY += Microsoft.Xna.Framework.MathHelper.ToRadians(15);
                tbMessage.Text = "orbit left";
                break;
            case "swipetoright":
                orbitY -= Microsoft.Xna.Framework.MathHelper.ToRadians(15);
                tbMessage.Text = "orbit right";
                break;
            case "swipeup":
                orbitX += Microsoft.Xna.Framework.MathHelper.ToRadians(15);
                tbMessage.Text = "orbit up";
                break;
            case "swipedown":
                orbitX -= Microsoft.Xna.Framework.MathHelper.ToRadians(15);
                tbMessage.Text = "orbit down";
                break;

            // Kinect gesture template commands
            case "circle":   // clockwise circle = zoom in
                if (scene.Camera.Position.Z > 0.75f)
                    scene.Camera.Position += zoomInVector * 5;
                tbMessage.Text = "zoomin";
                break;
            case "circle2":  // counterclockwise circle = zoom out
                scene.Camera.Position += zoomOutVector * 5;
                tbMessage.Text = "zoomout";
                break;
        }
    }

Possible Extensions

After posting this code, I added an experimental stretch vector control for zooming and two-axis twisting of planets. These are activated by voice: ‘vector twist’, ‘vector zoom’, and ‘vector off.’ The map control side of gesture commands could also benefit from some easing-function animations. Another avenue of investigation would be some type of pointer intersection using a ray to indicate planet surface locations for events.


Even though Kinect browser control is not prime time material yet, it is a lot of experimental fun! The MoonViewer control experiment is relatively primitive. Cursor movement and clicking using posture detection and hand tracking are also feasible, but fine movement is still a challenge. Two-hand vector control of 3D scenes is also promising and integrates very well with SL5 XNA immediate mode graphics.

Kinect 2.0 and NearMode will offer additional granularity. Instead of large swipe gestures, finger-level manipulation should be possible. Think of 3D voxel space manipulation of subsurface geology, or thumb-and-forefinger vector3 twisting of LiDAR objects, and you get an idea of where this could go.

The merger of TV and internet holds promise for both whole body and NearMode Kinect interfaces. Researchers are also adapting Kinect technology for mobile as illustrated by OmniTouch.

. . . and naturally, lip reading ought to boost the Karaoke crowd (could help lip synching pop singers and politicians as well).


Fig 7 – Jupiter Moon Io

Alice in Mirrorland – Silverlight 5 Beta and XNA

“In another moment Alice was through the glass, and had jumped lightly down into the Looking-glass room”

Silverlight 5 Beta was released into the wild at MIX 11 a couple of weeks ago. This is a big step for mirror land. Among many new features is the long anticipated 3D capability. Silverlight 5 took the XNA route to 3D instead of the WPF 3D XAML route. XNA is closer to the GPU with the time tested graphics rendering pipeline familiar to Direct3D/OpenGL developers, but not so familiar to XAML developers.

The older WPF 3D XAML aligns better with X3D, the ISO sanctioned XML 3D graphics standard, while XNA aligns with the competing WebGL javascript wrapper for OpenGL. Eventually XML 3D representations also boil down to a rendering pipeline, but the core difference is that XNA is immediate mode while XML 3D is kind of stuck with retained mode. Although you pick up recursive control rendering with XML 3D, you lose out when it comes to moving through a scene in the usual avatar game sense.

From a Silverlight XAML perspective, mirror land is largely a static machine with infrequent events triggered by users. In between events, the machine is silent. XAML’s retained mode graphics lacks a sense of time’s flow. In contrast, enter XNA through Alice’s DrawingSurface, and the machine whirs on and on. Users occasionally throw events into the machine and off it goes in a new direction, but there is no stopping. Frames are clicking by apace.

Thus time enters mirror land in frames per second. Admittedly this is crude relative to our world. Time is measured out in the approximate range of 1/60th to 1/20th of a second per frame. Nothing like the cusp of the moment here, and certainly no need for the nuance of Dedekind’s cut. Time may be chunky in mirror land, but with immediate mode XNA it does move, clicking through the present moment one frame at a time.

Once Silverlight 5 is released there will be a continuous XNA API across Microsoft’s entire spectrum: Windows 7 desktops, Windows Phone 7, XBox game consoles, and now the browser. The Silverlight 5 and WP7 implementations are a subset of the full XNA game framework available to desktop and XBox developers. Both SL5 and WP7 will soon have merged Silverlight XNA capabilities. For symmetry’s sake, XBox should have Silverlight too, as apparently announced here. A web-browsing XBox TV console would be nice.

WP7 developers will need to wait until the future WP7 Mango release before merging XNA and Silverlight into a single app. It’s currently an either/or proposition for the mobile branch of XNA/SL.

At any rate, with SL5 Beta, Silverlight and 3D XNA now coexist. The border lies at the <DrawingSurface> element:

<DrawingSurface Draw="OnDraw" SizeChanged="DrawingSurface_SizeChanged" />
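South of that border the Draw handler is where the frames click by. A minimal skeleton, just to show the shape of the loop (a sketch only, not the sample’s code):

    private void OnDraw(object sender, DrawEventArgs e)
    {
        GraphicsDevice device = e.GraphicsDevice;
        device.Clear(Color.Black);

        // per-frame work goes here: set vertex buffers, shaders, and constants,
        // then issue draw calls, advancing any animation by e.DeltaTime

        e.InvalidateSurface();   // ask for another frame; the machine whirs on
    }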

North of the border lies XML and recursive hierarchies, a largely language world populated with “semantics” and “ontologies.” South of the border lies a lush XNA jungle with drums throbbing in the night. Yes, there are tropical white sands by an azure sea, but the heart of darkness presses in on the mind.

XAML touches the academic world. XNA intersects Hollywood. It strikes me as one of those outmoded Freudian landscapes so popular in the 50’s, the raw power of XNA boiling beneath XAML’s super-ego. I might also note there are bugs in paradise, but after all this is beta.

Merging these two worlds causes a bit of schizophrenia. Above is Silverlight XAML with the beauty of recursive hierarchies and below is all XNA with its rendering pipeline plumbing. Alice steps into the DrawingSurface confronting a very different world indeed. No more recursive controls beyond this border. Halt! Only immediate mode allowed. The learning curve south of the border is not insignificant, but beauty awaits.

XNA involves tessellated models, rendering pipelines, vertex shaders, pixel shaders, and a high level shading language, HLSL, accompanied by the usual linear algebra suspects. Anytime you run across register references you know this is getting closer to hardware.

…a cry that was no more than a breath: “The horror! The horror!”

sampler2D CloudSampler : register(s0);
static const float3 AmbientColor = float3(0.5f, 0.75f, 1.0f);
static const float3 LampColor = float3(1.0f, 1.0f, 1.0f);
static const float AmbientIntensity = 0.1f;
static const float DiffuseIntensity = 1.2f;
static const float SpecularIntensity = 0.05f;
static const float SpecularPower = 10.0f;

Here is an overview of the pipeline from Aaron Oneal’s MIX talk:

So now that we have XNA it’s time to take a spin. The best way to get started is to borrow from the experts. Aaron Oneal has been kind enough to post some nice samples, including a game engine called Babylon written by David Catuhe.

The Silverlight 5 beta version of Babylon uses Silverlight to set some options and an SL5 DrawingSurface to host scenes. The mouse and arrow keys allow the camera/avatar to move through the virtual environment, colliding with walls and so on. For those wishing to get an idea of what XNA is all about, this webcafe model in Babylon is a good start.

The models are apparently produced in Autodesk 3ds Max and are probably difficult to build. Perhaps 3D point clouds will someday help, but you can see the potential for navigable, high-risk, complex facility modeling. This model has over 60,000 faces, yet I can still walk through and explore the environment without any difficulty, and all I’m using is an older NVidia motherboard GPU.

Apparently, SL5 XNA can make a compelling interactive museum, refinery, nuclear facility, or WalMart browser. This is not a stitched pano or Photosynth interior, but a full-blown 3D model.

You’ve gotta love that late afternoon shadow effect. Notice the camera is evidently held by a vampire: I checked carefully, and it casts no shadow!

But what about mapping?

From a mapping perspective the fun begins with this solar wind sample. It features all the necessary models and shaders for earth, complete with terrain, multi-altitude atmospheric clouds, and lighting. It also has examples of basic mouse and arrow key camera control.

Solar Wind Globe
Fig 4 – Solar Wind SL5 XNA sample

This is my starting point. Solar Wind illustrates generating a tessellated sphere model with applied textures for various layers. It even illustrates the use of a normal (bump) map for 3D effects on the surface without needing a tessellated surface terrain model. Especially interesting is the use of bump maps to show a population density image as 3D.

My simple project is to extend this solar wind sample slightly by adding layers from NASA Neo. NASA Neo conveniently publishes 45 categories and 129 layers of a variety of global data collected on a regular basis. The first task is to read the Neo GetCapabilities XML and produce the TreeView control to manage such a wealth of data. The TreeView control comes from the Silverlight Toolkit project. Populating this is a matter of reading through the Layer elements of the returned XML and adding layers to a collection which is then bound to the tree view’s ItemsSource property.

    // requires System.Xml.Linq, System.Linq, System.Collections.ObjectModel
    private void CreateCapabilities111(XDocument document)
    {
        // WMS 1.1.1: the GetMap endpoint lives at
        // Capability/Request/GetMap/DCPType/HTTP/Get/OnlineResource@xlink:href
        XElement GetMap = document.Element("WMT_MS_Capabilities").Element("Capability")
            .Element("Request").Element("GetMap").Element("DCPType")
            .Element("HTTP").Element("Get").Element("OnlineResource");
        XNamespace xlink = "http://www.w3.org/1999/xlink";
        getMapUrl = GetMap.Attribute(xlink + "href").Value;
        if (getMapUrl.IndexOf("?") != -1) getMapUrl =
                  getMapUrl.Substring(0, getMapUrl.IndexOf("?"));

        // walk the Layer hierarchy: category layers become tree headers,
        // their child layers become selectable leaves
        ObservableCollection<WMSLayer> layers = new ObservableCollection<WMSLayer>();
        foreach (XElement element in
            document.Element("WMT_MS_Capabilities").Element("Capability").Elements("Layer"))
        {
            if (element.Descendants("Layer").Count() > 0)
            {
                WMSLayer lyr0 = new WMSLayer();
                lyr0.Title = (string)element.Element("Title");
                lyr0.Name = "header";
                foreach (XElement element1 in element.Descendants("Layer"))
                {
                    WMSLayer lyr1 = new WMSLayer();
                    lyr1.Title = (string)element1.Element("Title");
                    lyr1.Name = (string)element1.Element("Name");
                    lyr0.Layers.Add(lyr1);   // assumed child collection on WMSLayer, bound in the TreeView template
                }
                layers.Add(lyr0);
            }
        }
        LayerTree.ItemsSource = layers;
    }

Once the tree is populated, OnSelectedItemChanged events provide the trigger for a GetMap request to NASA Neo returning a new png image. I wrote a proxy WCF service to grab the image and then write it to png even if the source is jpeg. It’s nice to have an alpha channel for some types of visualization.
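The request itself is just string building against the getMapUrl captured from the capabilities document, routed through the proxy so the texture always comes back as png. A sketch with illustrative bbox/size values, a hypothetical proxy endpoint, and a hypothetical LoadPlanetTexture helper:

    private void LayerTree_SelectedItemChanged(object sender, RoutedPropertyChangedEventArgs<object> e)
    {
        WMSLayer lyr = e.NewValue as WMSLayer;
        if (lyr == null || lyr.Name == "header") return;   // category headers aren't drawable

        string getMap = getMapUrl +
            "?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap" +
            "&LAYERS=" + lyr.Name +
            "&SRS=EPSG:4326&BBOX=-180,-90,180,90" +
            "&WIDTH=2048&HEIGHT=1024&FORMAT=image/png";

        // hypothetical WCF proxy endpoint that fetches the image and re-encodes to png
        string proxied = "ImageProxy.svc/GetImage?url=" + Uri.EscapeDataString(getMap);
        LoadPlanetTexture(new Uri(proxied, UriKind.Relative));   // swaps the Texture2D and reloads the scene
    }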

The difficulty for an XNA novice like myself is understanding the HLSL files and coming to terms with the rendering pipeline. Changing the source image for a Texture2D shader requires dropping the whole model, changing the image source, and finally reloading the scene model and pipeline once again. It sounds like an expensive operation, but surprisingly this re-instantiation seems to take less time than receiving the GetMap response from the WMS service. In WPF it was always interesting to put a Video element over the scene model, but I doubt that will work here in XNA.

The result is often a beautiful rendering of the earth displaying real satellite data at a global level.

Some project extensions:

  • I need to revisit lighting, which resides in the cloud shader HLSL. Since the original cloud model is not real cloud coverage, it is usually not an asset to NASA Neo data. I will need to replace the cloud pixel image with something benign to take advantage of the proper lighting setup for daytime.
  • Next on the list is exploring collision. WPF 3D provided a convenient RayMeshGeometry3DHitTestResult. In XNA it seems that getting a point on the earth to trigger a location event requires some manner of collision or Ray.Intersects(Plane); a rough pick-ray sketch follows this list. If that can be worked out, the logical next step is grabbing DEM data from USGS for generating ground-level terrain models.
  • There is a lot of public LiDAR data out there as well. Thanks to companies like QCoherent, some of it is available as WMS/WFS. So next on the agenda is moving 3D LiDAR online.
  • The bump map approach to displaying variable geographic density as relief is a useful concept. There ought to be lots of global epidemiology data that can be transformed to a color density map for display as a relief bump map.
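Here is the rough pick-ray sketch referred to in the collision bullet. It uses only XNA math types that ship with SL5 (Matrix, Vector4, Ray, BoundingSphere) and unprojects the screen point by hand, since I haven’t verified a Viewport.Unproject equivalent in the SL5 subset; view and projection come from the scene camera, and the unit sphere stands in for the simplified planet model.

    private Vector3? PickGlobe(double screenX, double screenY, double width, double height,
                               Matrix view, Matrix projection)
    {
        // screen -> normalized device coordinates (-1..1)
        float ndcX = (float)(2.0 * screenX / width - 1.0);
        float ndcY = (float)(1.0 - 2.0 * screenY / height);

        Matrix invViewProj = Matrix.Invert(view * projection);
        Vector3 near = Unproject(new Vector3(ndcX, ndcY, 0f), invViewProj);
        Vector3 far = Unproject(new Vector3(ndcX, ndcY, 1f), invViewProj);

        Ray ray = new Ray(near, Vector3.Normalize(far - near));
        float? hit = ray.Intersects(new BoundingSphere(Vector3.Zero, 1f)); // unit sphere planet
        // the returned point can then be converted to lat/lon for a location event
        return hit.HasValue ? ray.Position + ray.Direction * hit.Value : (Vector3?)null;
    }

    private static Vector3 Unproject(Vector3 ndc, Matrix invViewProj)
    {
        // transform by the inverse view-projection and apply the perspective divide
        Vector4 p = Vector4.Transform(new Vector4(ndc, 1f), invViewProj);
        return new Vector3(p.X, p.Y, p.Z) / p.W;
    }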

Lots of ideas, little time or money, but Silverlight 5 will make possible a lot of very interesting web apps.

Helpful links:
Silverlight 5 Beta: http://www.silverlight.net/getstarted/silverlight-5-beta/
Runtime: http://go.microsoft.com/fwlink/?LinkId=213904

Silverlight 5 features:
“Silverlight 5 now has built-in XNA 3D graphics API”

XNA: http://msdn.microsoft.com/en-us/aa937791.aspx

NASA Neo: http://localhost/NASA-Neo/publish.htm

Babylon Scenes: Michel Rousseau, courtesy of Bewise.fr

Babylon Engine: David Catuhe / Microsoft France / DPE


“I am real!” said Alice, and began to cry.

“You won’t make yourself a bit realler by crying,” Tweedledee remarked: “there’s nothing to cry about.”

“If I wasn’t real,” Alice said – half-laughing through her tears, it all seemed so ridiculous – “I shouldn’t be able to cry.”