TileLayer and the CORS Curse

Fig 1 – Sample Bing Maps v8 county TileLayer hosted in Azure blob storage

The Bing Maps Ajax v7 API is due to retire on June 30, 2017, replaced by the newer and much improved Bing Maps Ajax v8 API. I’ve been migrating some Bing Maps v7 web apps to the new v8 API. Here is Microsoft’s migration guide: https://www.microsoft.com/maps/discon-control-migrat-guide.aspx

Most of the migration is fairly straightforward and the performance boost is quite significant. Large numbers of points, polylines, and complex polygons can now be rendered without impact on map navigation performance. There are many improvements to the v8 API including spatial geometry functions discussed in a previous post, Spatial to the Browser.

However, I ran into a problem with Microsoft.Maps.TileLayer. The way Bing Maps v8 handles tiles causes CORS errors for sporadic tiles in Chrome, Edge, and Firefox, but not in IE. In the sample below, PNG image tiles were stored in Azure Blob storage.

Here is a small test app that illustrates the problem:

<!DOCTYPE html>
<html>
<head>
    <meta http-equiv='Content-Type' content='text/html; charset=utf-8'/>
</head>
<body>
    <div id='myMap' style='width: 100vw; height: 100vh;'></div>
    <script type='text/javascript'>
        function loadMapScenario() {
            var map = new Microsoft.Maps.Map(document.getElementById('myMap'), {
                credentials: '<Your Bing Map Key>',
                center: new Microsoft.Maps.Location(40, -95),
                zoom: 5
            });

            // tile source pointing at quadkey-named png tiles in Azure Blob storage
            var tileSource = new Microsoft.Maps.TileSource({
                uriConstructor: 'http://onterratest.blob.core.windows.net/bingtiles2/county/{quadkey}.png'
            });
            var boundaryLayer = new Microsoft.Maps.TileLayer({ mercator: tileSource, animationDisplay: "hide" });
            map.layers.insert(boundaryLayer);
        }
    </script>
    <script type='text/javascript' src='http://www.bing.com/api/maps/mapcontrol?branch=experimental&callback=loadMapScenario' async defer></script>
</body>
</html>


The Edge developer tools don’t show the problem, even though some tiles are not rendered, as seen in Fig 1 above.

Fig 2 – EDGE browser developer tool showing tile requests

Tile 03023.png is accessible and returns an image but TileLayer does not render the image.

Fig 3 – sample tile 03023.png

Switching to Chrome we can see some additional details.

Fig 4 – Chrome browser sample Bing Maps v8 with TileLayer hosted in Azure blob storage

The Chrome Developer Tools Network panel shows the failing requests.

Fig 5 – Chrome Developer tools Network showing tile requests

Some tile requests are showing a (canceled) status with a CORS policy violation indicated:

Access to Image at ‘http://onterratest.blob.core.windows.net/bingtiles2/county/02301.png’ from origin ‘http://onterrawms.blob.core.windows.net’ has been blocked by CORS policy: No ‘Access-Control-Allow-Origin’ header is present on the requested resource. Origin ‘http://onterrawms.blob.core.windows.net’ is therefore not allowed access.

“Cross-origin resource sharing (CORS) is a mechanism that allows restricted resources on a web page to be requested from another domain outside the domain from which the first resource was served.”
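Concretely, the browser will only accept the cross-origin image if the tile server sends back an Access-Control-Allow-Origin response header. A sketch of the exchange, using the hosts from the error message above (headers abbreviated):

```http
GET /bingtiles2/county/02301.png HTTP/1.1
Host: onterratest.blob.core.windows.net
Origin: http://onterrawms.blob.core.windows.net

HTTP/1.1 200 OK
Content-Type: image/png
Access-Control-Allow-Origin: *
```

Without that last response header, Chrome, Edge, and Firefox cancel the request; IE does not enforce the check for images, which is why it rendered the tiles.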

Back in 2013, Microsoft added CORS support to Azure Blob storage.

“Beginning with version 2013-08-15, the Azure storage services support Cross-Origin Resource Sharing (CORS) for the Blob, Table, and Queue services.”

AllowedOrigins = new List<string>() { "*" } will allow any other domain to use the blob files without CORS violations.

Here is some sample c# code changing the CORS rules for a blob storage account to allow cross origin access:

CloudStorageAccount storageAccount = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;AccountName=<name>;AccountKey=<your azure key>");
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

//set CORS header for png images
var serviceProperties = blobClient.GetServiceProperties();

serviceProperties.Cors.CorsRules.Add(new CorsRule()
AllowedHeaders = new List<string>() { "*" },
AllowedMethods = CorsHttpMethods.Put | CorsHttpMethods.Get,
       AllowedOrigins = new List<string>() { "*" },
       ExposedHeaders = new List<string>() { "*" },
       MaxAgeInSeconds = 1800 // 30 minutes

Using the above CORS settings corrects the CORS errors in the Edge, Chrome, and Firefox browsers.

Fig 6 – Chrome browser Bing Maps v8 sample using TileLayers from Blob storage with CORS

Another approach to modifying CORS settings for blob storage is Azure PowerShell, a module that provides tools for managing Azure: https://docs.microsoft.com/en-us/azure/storage/storage-powershell-guide-full
Here are the steps for setting CORS using Azure PowerShell:

  1. Run Add-AzureAccount to sign in to your account.
  2. List your Azure subscriptions: Get-AzureSubscription | Format-Table SubscriptionName, IsDefault, IsCurrent, CurrentStorageAccountName
  3. Set the desired subscription: $SubscriptionName = 'Your subscription name'
  4. Check your desired blob: Get-AzureStorageBlob
  5. Create an authorization context for your blob: $ctx = New-AzureStorageContext with the desired parameters.
  6. Check the current CORS rules: Get-AzureStorageCORSRule -ServiceType Blob -Context $ctx
  7. Define new CORS rules, for example: $CorsRules = (@{ AllowedHeaders=@("*"); AllowedOrigins=@("*"); ExposedHeaders=@("content-length"); MaxAgeInSeconds=200; AllowedMethods=@("Get","Connect","Head")})
  8. Apply the rules: Set-AzureStorageCORSRule -ServiceType Blob -CorsRules $CorsRules -Context $ctx


CORS is a useful mechanism for allowing cross-origin access. CORS AllowedOrigins shouldn’t be necessary for images such as PNG or JPEG, but the current Bing Maps v8 TileLayer API has problems with popular browsers (Chrome, Edge, and Firefox) that can be resolved by setting a CORS policy on the blob container.

Spatial to the Browser

Fig 1 – example using Bing Maps v8 SpatialMath module

I first noticed migration of spatial functions out to the browser back in 2014 with Morgan Herlocker’s GitHub project turf.js; see the Territorial Turf blog post.

Bing Maps version 8 SDK for web applications, released last summer, follows this trend, adding a number of useful modules that previously required custom programming or at least modification of open source projects.

Bing Maps v8
Bing Maps v8 API
Bing Maps v8 Modules

Among the many useful modules published with this version is Microsoft.Maps.SpatialMath. There are 25 geometry functions in this release for such things as intersection, buffer, convex hull, distance, and many others. Leveraging these geometry functions lets us move analytic functions from a SQL backend or C# .Net service layer back out front to the user’s browser.

Some useful side effects of this migration for projects using jsonp services include:

  • No need to host SQL data
  • No need to write a service layer
  • Dispersed computing that relieves compute loads on servers
  • No need to host in a Web Server such as IIS or Apache
  • Ability to publish to simple cloud storage, with worldwide edge-cache performance benefits

As an example of this approach let’s look at the extensive GIS services exposed by the District of Columbia, DCGIS_Apps and DCGIS_Data. In addition to visualizing some layers, it would be useful to detect parcels with certain spatial attributes, such as distance to street access, combined with area filters. With this ability, alley-isolated parcels can be highlighted as potential alley development properties.

First, set up a map panel using Bing Maps v8, loading the ‘Microsoft.Maps.SpatialMath’ module:

    // Create the v8 map
    getMap: function () {

        app.initialCenter = new Microsoft.Maps.Location(38.89892, -76.9883);
        app.initialZoom = 18;
        var mapEl = document.getElementById("map"),
            mapOptions = {
                customizeOverlays: true,
                zoom: app.initialZoom,
                credentials: '<Your Bing Key goes here>',
                mapTypeId: Microsoft.Maps.MapTypeId.road,
                showBreadcrumb: false,
                showDashboard: true,
                showMapTypeSelector: true,
                enableClickableLogo: false,
                enableSearchLogo: false,
                center: app.initialCenter
            };

        var dcocto_url = 'https://oblique.sanborn.com/dcocto/?ll='
                     + app.initialCenter.latitude + ',' + app.initialCenter.longitude;
        $('#dcocto_frame').attr('src', dcocto_url);

        try {
            app.map = new Microsoft.Maps.Map(mapEl, mapOptions);
            app.layers = {
                properties: new Microsoft.Maps.Layer({ zIndex: 500 }),
                streets: new Microsoft.Maps.Layer({ zIndex: 600 }),
                alleys: new Microsoft.Maps.Layer({ zIndex: 700 }),
                zones: new Microsoft.Maps.Layer({ zIndex: 800 }),
                interiors: new Microsoft.Maps.Layer({ zIndex: 900 })
            };
            app.showlayers = {
                properties: true,
                streets: true,
                alleys: false,
                zones: false,
                interiors: false
            };
            _.each(app.layers, function (layer) {
                app.map.layers.insert(layer); // add each layer to the map
            });

            //viewchangeend handler
            Microsoft.Maps.Events.addHandler(app.map, 'viewchangeend', app.getLayers);

            app.infobox = new Microsoft.Maps.Infobox(app.map.getCenter(), {
                title: 'Title',
                description: 'Description',
                visible: false
            });
            app.infobox.setMap(app.map);
        } catch (err) {
            console.log(err.message);
        }
    },

District of Columbia DCGIS services provide JSONP callbacks, circumventing cross-origin security constraints and allowing simpler Ajax calls without round trips to a proxy server layer.

if (app.showlayers.properties) {
    var bds = app.map.getBounds();
    var urlProperty = 'http://maps2.dcgis.dc.gov/dcgis/rest/services/DCGIS_APPS/Real_Property_Application/MapServer/3/query?'
        + 'f=json&returnGeometry=true&spatialRel=esriSpatialRelIntersects&geometry='
        + bds.getWest() + ',' + bds.getSouth() + ',' + bds.getEast() + ',' + bds.getNorth()
        + '&geometryType=esriGeometryEnvelope&inSR=4326&outFields=*&outSR=4326&callback=?';
    $.getJSON(urlProperty, function (json) {
        // add returned parcel polygons to the properties layer (handler body elided)
    });
}

The LocationRect buffer method simplifies extending the street viewport to guarantee that all parcels in a city block are checked against surrounding street centerlines: bds.buffer(1.5);

if (app.showlayers.streets) {
    var bds = app.map.getBounds();
    var urlStreet = 'https://maps2.dcgis.dc.gov/dcgis/rest/services/DCGIS_DATA/Transportation_WebMercator/MapServer/41/query?'
        + 'f=json&returnGeometry=true&spatialRel=esriSpatialRelIntersects&maxAllowableOffset=0&geometry=' + bds.getWest()
        + ',' + bds.getSouth() + ',' + bds.getEast() + ',' + bds.getNorth()
        + '&geometryType=esriGeometryEnvelope&inSR=4326&outFields=*&outSR=4326&callback=?';
    $.getJSON(urlStreet, function (json) {
        // add returned street centerlines to the streets layer (handler body elided)
    });
}

The algorithm for discovering interior parcels is greatly simplified using lodash and the new Bing Maps v8 spatial geometry functions. Notice that Geometry.distance handles a shortest distance calculation between a property polygon and an array of polylines, “allstreets.”

findInteriors: function () {
        var allproperties = app.layers.properties.getPrimitives();
        var allstreets = app.layers.streets.getPrimitives();
        var interiorProperties = [];
        _.each(allproperties, function (property) {
            var distance = Microsoft.Maps.SpatialMath.Geometry.distance(property, allstreets,
                             Microsoft.Maps.SpatialMath.DistanceUnits.Feet, true);
            var area = Microsoft.Maps.SpatialMath.Geometry.area(property, Microsoft.Maps.SpatialMath.AreaUnits.SquareFeet);
            if (distance > 100 && distance < 1000 && area > 450) {
                interiorProperties.push(property); // parcel qualifies as interior
            }
        });

        _.each(interiorProperties, function (property) {
            property.setOptions({ fillColor: new Microsoft.Maps.Color(200, 255, 0, 0) });
        });
    },

Function findInteriors checks the shortest distance in feet from each parcel to every street centerline. The filter then selects parcels whose shortest distance falls between 100 ft and 1000 ft and whose area exceeds 450 sq ft. Parcels meeting these criteria are filled red.

The Bing spatial distance function uses the higher-accuracy Vincenty algorithm, but still performs reasonably well. Experiments with the less accurate Haversine option showed no significant performance difference in this case.
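For reference, Haversine computes great-circle distance on a sphere. A minimal plain-JavaScript sketch for intuition only; this is not the Bing Maps implementation, which also offers the ellipsoidal Vincenty option:

```javascript
// Haversine great-circle distance in meters between two lat/lon points.
// A sketch for intuition only; not the Bing Maps SpatialMath implementation.
function haversineMeters(lat1, lon1, lat2, lon2) {
  var R = 6371000; // mean Earth radius in meters
  var toRad = function (d) { return d * Math.PI / 180; };
  var dLat = toRad(lat2 - lat1);
  var dLon = toRad(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}
```

For example, haversineMeters(40, -95, 41, -95) returns roughly 111 km, one degree of latitude; Vincenty trades a more expensive iteration on the ellipsoid for sub-millimeter accuracy.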

DC GIS Services limit the number of features in a request result to a maximum of 1000. At zoom level 18 the viewport always returns less than this maximum feature limit, but lower zooms can hit this limit and fail to return all parcels in the view. A warning is triggered when feature counts hit 1000 since the interior parcel algorithm will then have an incomplete result.
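A simple guard of this kind can be sketched in a few lines of JavaScript; the function name is illustrative, not part of the DC GIS API:

```javascript
// Hedged sketch: flag truncated Esri query results. The DC GIS services
// cap results at 1000 features; names here are illustrative.
function checkFeatureLimit(json, maxFeatures) {
  if (json.features && json.features.length >= maxFeatures) {
    console.warn('Feature limit reached -- zoom in for complete results.');
    return false; // result set is likely incomplete
  }
  return true; // all features for the viewport were returned
}
```

In the viewchangeend handler, a check like this could gate the findInteriors pass, since a truncated parcel set would yield incomplete interiors.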


The new Bing Maps v8 adds a great number of features simplifying web mapping app development. In addition, Bing Maps v8 improves performance by making better use of the HTML5 canvas and immediate-mode graphics. This means a larger number of features can be added to a map before navigation begins to slow. I was able to test with up to 25,000 features without significant problems using map zoom and pan navigation.

Bing Maps SpatialMath module provides many useful spatial algorithms that previously required a server and/or SQL backend. The result is simpler web map applications that can be hosted directly in Azure blob storage.

Sample DCProperties code is available on GitHub.

IoT – Internet of Things
(das Ding an sich)

“Noumenon the thing-in-itself (das Ding an sich) as opposed to phenomenon—the thing as it appears to an observer.”

Fig 1 – Microsoft Azure IoT Suite offers complete working demonstration solutions

Heidegger Thinginess

Is Heidegger serious or just funn’n us, when his discursive rambling winds past the abolition of all distances, wanders around thinginess, and leads us to “some-thing” from “no-thing?”

“The failure of nearness to materialize in consequence of the ‘abolition of all distances’ has brought the distanceless to dominance. In the default of nearness the thing remains annihilated as a thing in our sense. But when and in what way do things exist as things? This is the question we raise in the midst of the dominance of the distanceless.”

“The emptiness, the void, is what does the vessel’s holding. The empty space, this nothing of the jug, is what the jug is as the holding vessel.”

“The jug’s essential nature, its presencing, so experienced and thought of in these terms, is what we call thing.”

“The Thing” from Poetry, Language, Thought 1971
Heidegger, translated by Albert Hofstadter

So class, we may conclude that our spatial attribute is not the essence of the thing. However, IoT does not concern itself with das Ding an sich, but with the mechanism of appearance, or how “noumenon” communicates “phenomenon” within the internet. Therefore, we must suppose IoT remains Kantian in spite of Heidegger’s prolix lecturing. And, spatial attributes do still exist.

No? … Really?
Phew I was worried about my job for a minute!
(Actually I always wanted to drag Heidegger into a post on maps.)

IoT Things
Of course, IoT just wants “things”, “stuff”, “devices” to have a part in the cloud just like the rest of us. Dualism, Monism who cares? It’s all about messages. Which is where Microsoft Azure IoT comes in.

Fig 2 – Azure and IoT Dominic Betts

For Microsoft, IoT is an opportunity to provide infrastructure at a couple of levels with the central piece the Azure IoT Hub:

Message Creation

Devices and sensors are just small computers, for which Microsoft introduced Windows IoT Core. This is a scaled-down Windows OS for devices like the Raspberry Pi, offered freely to feed the IoT Hub. The Maker community can now use Windows and Visual Studio Express to hook up GPIO and send telemetry messages via Bluetooth or WiFi. At $49, Microsoft’s Raspberry Pi 3 Starter Kit offers the latest single-board computer with Windows IoT Core embedded on a MicroSD card for experimenters. It should make hardware playtime easier for anyone in the Microsoft community.

Useful site: Connect your device to Azure IoT Hub

The ultimate device is still your smart phone. With the release of Xamarin in Visual Studio 2015 Update 2, native mobile app development across Android, iOS, and Windows Phone is much easier.

Message Pipeline

Azure IoT Hub is the key piece of technology. IoT Hub is infrastructure for handling messages across a wide array of devices and software which scales to enterprise dimensions. Security, monitoring, and device management are built in. The value proposition is easy to see if you’ve ever dealt with fleet management or SCADA networks. Instead of writing services on multiple VMs to catch tcp packets and sort to various storage and events, it’s easy to sign up for an Azure IoT Hub and let Azure worry about reliability, scaling, and security.

Fig 3 – Azure IoT Hub with Stream Analytics - Getting Started with the Internet of Things (IoT)

Note that Machine Learning is part of the platform diagram. Satya Nadella’s Build 2016 keynote emphasized “the intelligent cloud” and of course R Project plays a role in predictive intelligence, so we can begin to see Microsoft marshalling services and tools for the next generation of cloud AI.

Thinking of ubiquitous sensors naturally (or unnaturally, depending on your predisposition regarding the depravity of man and machine) brings to mind primitive organism possibilities as well as shades of HAL. Also noteworthy: “IoT Message Queues can be bidirectional,” so the order of Things and Humans can easily be reversed. Perhaps Microsoft’s embrace of artificial intelligence will cycle it back to the preeminent “seat of evil corporate empire” currently occupied by Google.

Azure IoT Hub deployment

Fig 4 – Azure Portal IoT Hub deployment

Once the Azure IoT Hub is deployed, the next step is to add a Stream Analytics job to the pipeline. These jobs process telemetry streams into sinks such as SQL storage or visualizations. A Stream Analytics job connects an input to an output with a processing query filter in between.

Fig 5 – Azure IoT Hub Stream Analytic Job = Input + Query + Output

Fig 6 – Azure IoT Hub Stream Analytic Job Input from the deployed message stream IoT Hub or Blob

Fig 7 – Azure IoT Hub Stream Analytic Job Output to several options including Azure SQL Server

Finally the query connecting Input to Output

Fig 8 – Stream Analytics Query – sample queries
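A minimal pass-through query of the kind shown in Fig 8 might look like the sketch below; the input and output alias names are assumptions that must match the aliases configured for the job, and the fields follow the simulated device telemetry used later in this post:

```sql
-- Hedged sketch of a pass-through Stream Analytics query.
-- [iothub-input] and [sql-output] are illustrative job aliases.
SELECT
    deviceId,
    windSpeed,
    latitude,
    longitude
INTO
    [sql-output]
FROM
    [iothub-input]
```

A WHERE clause or windowed aggregate (e.g. TumblingWindow) can be added here to filter or summarize the telemetry before it reaches the sink.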

Message store or visualization

As seen above, the Azure IoT Hub offers several ways to store or visualize data streams.
This tutorial includes simple test and simulated device code:

Fig 9 – some test code for sending simulated messages to an Azure IoT Hub

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;
using Newtonsoft.Json;

namespace SimulatedDevice
{
    class Program
    {
        static DeviceClient deviceClient;
        static string iotHubUri = "<name of your IoT Hub>.azure-devices.net";
        static string deviceKey = "<your device key>";

        static void Main(string[] args)
        {
            Console.WriteLine("Simulated device\n");
            deviceClient = DeviceClient.Create(iotHubUri,
                new DeviceAuthenticationWithRegistrySymmetricKey("myFirstDevice", deviceKey));

            SendDeviceToCloudMessagesAsync();
            Console.ReadLine();
        }

        private static async void SendDeviceToCloudMessagesAsync()
        {
            double avgWindSpeed = 10; // m/s
            Random rand = new Random();
            double latitude = 39.008208;
            double longitude = -104.797239;

            while (true)
            {
                // simulate wind speed varying +/- 2 m/s around the average
                double currentWindSpeed = avgWindSpeed + rand.NextDouble() * 4 - 2;
                var telemetryDataPoint = new
                {
                    deviceId = "myFirstDevice",
                    windSpeed = currentWindSpeed,
                    latitude = latitude,
                    longitude = longitude
                };
                var messageString = JsonConvert.SerializeObject(telemetryDataPoint);
                var message = new Message(Encoding.ASCII.GetBytes(messageString));

                await deviceClient.SendEventAsync(message);
                Console.WriteLine("{0} > Sending message: {1}", DateTime.Now, messageString);

                await Task.Delay(1000);
            }
        }
    }
}


Fig 10 – Resulting simulated device test records inserted into the gpsTest table by Stream Analytics Job

A much more involved example of an IoT and Mobile App is furnished by Microsoft: My Driving
Microsoft’s complete solution is available on GitHub with details.


Microsoft is forging ahead with Azure, offering numerous infrastructure options that make IoT a real possibility for small and medium businesses. Collecting data from diverse devices is getting easier with the addition of Windows IoT Core, VS2015 Xamarin, and Azure IoT Hubs with Stream Analytics jobs. Fleet management services will never be the same.

Spatial data still plays a big part in telemetry, since every stationary sensor involves a location and every mobile device a GPS stream. Ubiquitous sensor networks imply the need for spatial sorting and visualization, at least while humans are still in the loop. Remove the human and Heidegger’s “abolition of all distances” reappears, but then sadly you and I disappear.

The R Project for Maps

Fig 1 – interactive Leaflet choropleth of Census ACS household income $60k-$75k using Microsoft Open R

The mainstay of web mapping applications for the last couple of decades has been three tiers: Model – SQL, View – web UI, and Controller – server code. There are many variations on this theme: models residing in image tile pyramids, SQL Server, PostGIS, or Oracle; controller server code in Java, C#, or PHP. The visible action is on the viewer side, where HTML5 and ever-expanding JavaScript libraries like jQuery, Bootstrap, and Angular.js make life interesting, while Node.js pushes JavaScript upstream to the controller.

For building end user applications it helps to know all three tiers and have at least one tool in each. With the right tools you can eventually accomplish just about anything spatially interesting. Emphasis is on the word “eventually.” SQL <=> C# <=> html5/JavaScript is very powerful, but extravagant for “one off” analytical work.

For ad hoc spatial work it was usually best to stick to a desktop application, such as one of the big-dollar Arc___ variations or, better yet, something open source like QGIS. In the early days these generally consisted of modular C/C++ functions threaded together with an all-purpose scripting language. If you wanted to get a little closer to the geo engine, knowledge of a scripting language (PHP, TCL, Python, or Ruby) helped to script modular toolkits like GDAL/OGR, OSSIM, GEOS, or GMT. This all works fine except for learning and relearning often arcane syntax, while repeatedly discovering and reading data documentation on various public resources from Census, USGS, NOAA, NASA, JPL … you get the idea.

R changes things in the geospatial world. The R project originated as a modular statistics and graphics toolkit. Unless you happen to be a true math prodigy, statistics are best visualized graphically. With powerful graphics libraries, R has evolved into a useful platform for ad hoc spatial analysis.

Coupled with an IDE such as RStudio, or the new Microsoft R Tools for Visual Studio, R wraps a large stable of component libraries into a script interpreter environment, ideal for “one off” analysis. Although learning arcane syntax is still a prerequisite, there is at least a universal environment with a really large contributor community. You can think of it as open source replacement for Tableau or Power BI but without proprietary limitations.

Example: networkD3 R library for creating D3 JavaScript network graphs.

# only a few lines of script
data(MisLinks, MisNodes)
forceNetwork(Links = MisLinks, Nodes = MisNodes, Source = "source",
             Target = "target", Value = "value", NodeID = "name",
             Group = "group", opacity = 0.4)

Community contributions are found in CRAN, Comprehensive R Archive Network for the R programming language. A search of CRAN or MRAN (Microsoft R Archive Network) for the term “spatial” yields a list of 145 R libraries.

Example: dygraph R library for creating interactive charts.

 # only a few lines of script
  dygraph(nhtemp, main = "New Haven Temperatures") %>%
   dyRangeSelector(dateWindow = c("1920-01-01", "1960-01-01"))

Here are just a few samples of CRAN libraries useful for spatial analysis:

library(rgdal)  # reading spatial files with gdal
library(ggmap)  # simple mapping and more
library(raster)  # defining extents and raster processing
      brick  # raster cube objects useful for multispectral operations
      stack  # multilayer raster manipulation
library(sp)  # working with spatial objects
library(leaflet)  # interactive web mapping using Leaflet
library(rgeos)  # R GEOS wrapper
library(tigris)  # downloading geography spatial census tiger
library(FedData)  # downloading federal data NED, NHD, SSURGO, GHCN
library(acs)  # tabular census data (American Community Survey) ACS, SF1, SF3
library(UScensus2010)  # spatial and demographic Census 2010 data county/tract/blkgrp/blk
library(RColorBrewer)  # color palettes for thematic mapping

For example, tigris is a useful library for reading US Census TIGER files. With just a couple of lines of R scripting you can zoom around a polygonal plot of US Census urban areas. library(tigris) handles all the details of obtaining the TIGER polygons and loading them into local memory; library(leaflet) handles creating the polygons and displaying them over a default Leaflet tile map.

ua <- urban_areas(cb = TRUE)
ua %>% leaflet() %>% addTiles() %>% addPolygons(popup = ~NAME10)

Fig 2 – RStudio Interactive Leaflet plot of Census TIGER urban area extracted with tigris

Fig 3 – RStudio script of Census ACS tract household income percentage for $60k-$75K

These samples follow examples found in Zev Ross’s blog posts which contain a wealth of scripts using R for spatial analytics, including these posts on using FedData and rgdal.

Microsoft R

Microsoft recently entered the R world with several enhanced R tools, including:
Microsoft R Open
RTVS R Tools for Visual Studio
Microsoft R Server
SQL Server R Services
MRAN Microsoft R Application Network
Microsoft Azure R Server
Microsoft R Server for Hadoop

Apparently Data Science is a growth industry and Microsoft has an interest in providing useful tools beyond Power BI.

Microsoft R Open

A free Microsoft version of the R script engine with a couple of enhancements:
• Intel-enhanced math libraries
• multi-core support
• multithreading

Fig 4 – slide from Derek Norton webinar on R Server showing relative performance boost with enhanced Microsoft R Open

RTVS – R Tools for Visual Studio
A Microsoft R IDE inside Visual Studio using the Data Science R settings. Users of Visual Studio will find all the familiar debug stepping, variable explorer, and IntelliSense editing they use for other development languages.

Microsoft R Server
A licensed enterprise R service that scales past in-memory data limitations by using parallel, chunked data streams.

Fig 5 – slide from Derek Norton webinar showing R Server scale enhancements

SQL Server 2016 R Services

SQL Server R Services provides data ETL and visualization tooling inside SQL Server, exposing a T-SQL interface to R code running next to the database on the same server.

The stored procedure sp_execute_external_script embeds R: it receives inputs, passes them to the external R runtime, and returns the R results, so R code can be invoked directly from T-SQL.
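A hedged T-SQL sketch of calling sp_execute_external_script; the table and column names are illustrative (borrowing the gpsTest table from the IoT post above):

```sql
-- Hedged sketch: run R inside SQL Server 2016 via sp_execute_external_script.
-- dbo.gpsTest and windSpeed are illustrative names.
EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- data.frame(mean_speed = mean(InputDataSet$windSpeed))',
    @input_data_1 = N'SELECT windSpeed FROM dbo.gpsTest'
WITH RESULT SETS ((mean_speed float));
```

The query feeds InputDataSet, the R script fills OutputDataSet, and the result set comes back to the T-SQL caller like any other query result.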

MRAN – Microsoft R Application Network

Fixed-date snapshots of CRAN allow shared R code to point at compatible library versions, and the checkpoint package makes those versions reproducible.

Fig 6 – RTVS (R Tools for Visual Studio 2015) Leaflet demographic script

Example R Leaflet demographic script (ref Fig 1 above):

library(tigris)  # TIGER data
library(acs)     # ACS data
library(stringr) # to pad fips codes
library(dplyr)   # data manipulation
library(leaflet) # interactive mapping

#Colorado Front range counties
counties <- c(1, 5, 13, 31, 35, 39, 41, 59)
tracts <- tracts(state = 'CO', county = c(1, 5, 13, 31, 35, 39, 41, 59), cb = TRUE)

api.key.install(key = "<insert your own Census.gov api key here>")
geo <- geo.make(state = c("CO"),
              county = c(1, 5, 13, 31, 35, 39, 41, 59), tract = "*")

income <- acs.fetch(endyear = 2012, span = 5, geography = geo,
                  table.number = "B19001", col.names = "pretty")
attr(income, "acs.colnames")
##  [1] "Household Income: Total:"
## [12] "Household Income: $60,000 to $74,999"  

income_df <- data.frame(paste0(str_pad(income@geography$state, 2, "left", pad = "0"),
                               str_pad(income@geography$county, 3, "left", pad = "0"),
                               str_pad(income@geography$tract, 6, "left", pad = "0")),
                        income@estimate[, c("Household Income: Total:",
                                           "Household Income: $60,000 to $74,999")],
                        stringsAsFactors = FALSE)

income_df <- select(income_df, 1:3)
rownames(income_df) <- 1:nrow(income_df)
names(income_df) <- c("GEOID", "total", "income60kTo75k")
income_df$percent <- 100 * (income_df$income60kTo75k / income_df$total)

income_merged <- geo_join(tracts, income_df, "GEOID", "GEOID")

popup <- paste0("GEOID: ", income_merged$GEOID, "<br>", "Percent of Households $60k-$75k: ", round(income_merged$percent, 2))
pal <- colorNumeric(
  palette = "YlGnBu",
  domain = income_merged$percent)

incomemap <- leaflet() %>%
  addProviderTiles("CartoDB.Positron") %>%
  addPolygons(data = income_merged,
              fillColor = ~pal(percent),
              color = "#b2aeae", # you need to use hex colors
              fillOpacity = 0.7,
              weight = 1,
              smoothFactor = 0.2,
              popup = popup) %>%
  addLegend(pal = pal,
            values = income_merged$percent,
            position = "bottomright",
            title = "Percent of Households<br>$60k-$75k",
            labFormat = labelFormat(suffix = "%"))


Hillshade example using public SRTM 90 data:

    library(raster)  # provides getData, terrain, hillShade
    alt <- getData('alt', country = 'ITA')
    slope <- terrain(alt, opt = 'slope')
    aspect <- terrain(alt, opt = 'aspect')
    hill <- hillShade(slope, aspect, 40, 270)

    leaflet() %>% addProviderTiles("CartoDB.Positron") %>%
      addRasterImage(hill, colors = grey(0:100 / 100), opacity = 0.6)

Fig 7 – RTVS (R Tools for Visual Studio 2015) Leaflet hillshading image


R provides lots of interesting modules that help with spatial analytics. The script engine makes it easy to perform ad hoc visualization and publish the results online. However, there are limitations in performance and extents that make it more of a competitor to desktop GIS products or the newer commercial data visualizers like Tableau or Power BI. For public-facing web applications with generalized extents, three-tier performance using SQL + server code + web UI still makes the most sense.

Microsoft R Server and SQL Server R Services add scaling performance that makes R solutions more competitive with the venerable three-tier approach. It will be interesting to see how developers make use of SQL Server R Services. As a method of adding raster functionality to SQL Server, sp_execute_external_script overlaps somewhat with PostGIS Raster. Exploring SQL Server 2016 R Services must await a future post.

Example: threejs R library with world flight data

Demographic Terrains

Fig 1 – Population skyline of New York - Census SF1 P0010001 and Bing Road Map

Demographic Terrain

My last blog post, Demographic Landscapes, described leveraging new SFCGAL PostGIS 3D functions to display 3D extruded polygons with X3Dom. However, there are other ways to display census demographics with WebGL. WebGL meshes are a good way to handle terrain DEM. In essence the US Census is providing demographic value terrains, so 3D terrain meshes are an intriguing approach to visualization.

Babylon.js is a powerful 3D engine written in javascript for rendering WebGL scenes. In order to use Babylon.js, US Census SF1 data needs to be added as WebGL 3D mesh objects. Babylon.js offers a low effort tool for generating these meshes from grayscale height map images: BABYLON.Mesh.CreateGroundFromHeightMap.

Modifying the Census WMS service to produce grayscale images at the highest SF1 polygon resolution, i.e. tabblock, is the easiest approach to generating these meshes. I added an additional custom request to my WMS service, “request=GetHeightMap,” which returns a PixelFormat.Format32bppPArgb bitmap with demographic values coded as grayscale. This is equivalent to a 256-level (0-255) classifier for population values.
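The grayscale coding amounts to normalizing each demographic value against the US maximum and quantizing to a byte. A minimal sketch of that mapping (the names and sample values here are hypothetical; the real service draws whole polygons server-side in C#):

```javascript
// Encode a demographic value as an 8-bit grayscale "height" pixel.
// usMax is the normalizing US maximum for the selected SF1 variable.
function toGray(value, usMax) {
    // clamp, then normalize into the 0-255 range of the height map
    var v = Math.max(0, Math.min(value, usMax));
    return Math.round((v / usMax) * 255);
}

// e.g. a tabblock population of 1200 against a hypothetical US maximum of 4800
console.log(toGray(1200, 4800)); // 64
```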

Example GetHeightMap request:
http://<server>/CensusSF1/WMSService.svc/<token>/NAD83/USA/SF1QP/WMSLatLon?REQUEST=GetHeightMap&SERVICE=WMS&VERSION=1.3.0&LAYERS=TabBlock&STYLES=P0010001&FORMAT=image/png&BGCOLOR=0xFFFFFF&TRANSPARENT=TRUE&CRS=EPSG:4326&BBOX=39.25212350301827,-105.70779069335937,40.22049698135099,-104.23836930664062

Fig 2 - grayscale heightmap from Census SF1 P001001 at tabblock polygon level for Denver metro area

Once a grayscale image is available, it can be added to the Babylon scene as a heightmap.

/* Name
 * Height map image url
 * mesh Width
 * mesh Height
 * Number of subdivisions (increases the complexity of
 *   this mesh in order to improve its visual quality)
 * Minimum height: the lowest level of the mesh
 * Maximum height: the highest level of the mesh
 * scene
 * Updatable: whether this mesh can be updated dynamically in the future (Boolean)
 */
var ground = BABYLON.Mesh.CreateGroundFromHeightMap("ground", heightmap,
    256, 256, 500, 0, 10, scene, false);

WMS GetMap requests are also useful for adding a variety of textures to the generated demographic terrain.

var groundMaterial = new BABYLON.StandardMaterial("ground", scene);
groundMaterial.diffuseTexture = new BABYLON.Texture(image, scene);

Fig 3 – texture from Bing Maps AerialWithLabels over population terrain

Sample Babylon.js Demographic Terrain Scene using ArcRotateCamera
<!DOCTYPE html>
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <title>Babylon HeightMap</title>
    <script type="text/javascript" src="//code.jquery.com/jquery-1.11.0.min.js"></script>
    <!-- Babylon.js -->
    <script src="http://www.babylonjs.com/hand.minified-1.2.js"></script>
    <script src="http://cdn.babylonjs.com/2-1/babylon.js"></script>
    <style>
        html, body {
            overflow: hidden;
            width: 100%;
            height: 100%;
            margin: 0;
            padding: 0;
        }

        #renderCanvas {
            width: 100%;
            height: 100%;
            touch-action: none;
        }
    </style>
</head>
<body>
    <canvas id="renderCanvas"></canvas>
    <script type="text/javascript">
        if (!BABYLON.Engine.isSupported()) {
            $("body").html('<div id="webglmessage"><h1>Sorry, your browser does not implement WebGL</h1></div>');
        }
        else {
            var canvas = document.getElementById("renderCanvas");
            var engine = new BABYLON.Engine(canvas, true);

            var createScene = function () {
                var scene = new BABYLON.Scene(engine);

                // Light
                var spot = new BABYLON.SpotLight("spot", new BABYLON.Vector3(0, 30, 10), new BABYLON.Vector3(0, -1, 0), 17, 1, scene);
                spot.diffuse = new BABYLON.Color3(1, 1, 1);
                spot.specular = new BABYLON.Color3(0, 0, 0);
                spot.intensity = 0.5;

                //ArcRotateCamera Camera
                var camera = new BABYLON.ArcRotateCamera("Camera", -1.57079633, 1.0, 256, BABYLON.Vector3.Zero(), scene);
                camera.lowerBetaLimit = 0.1;
                camera.upperBetaLimit = (Math.PI / 2) * 0.9;
                camera.lowerRadiusLimit = 30;
                camera.upperRadiusLimit = 256;
                camera.attachControl(canvas, true); // enable mouse/touch navigation
                scene.activeCamera = camera;

                var image = 'textures/NY_Map.jpg';
                var heightmap = 'textures/NY_HeightMap.jpg';

                // Ground
                var groundMaterial = new BABYLON.StandardMaterial("ground", scene);
                groundMaterial.diffuseTexture = new BABYLON.Texture(image, scene);

                var ground = BABYLON.Mesh.CreateGroundFromHeightMap("ground", heightmap, 256, 141, 750, 0, 2.5, scene, false);
                ground.material = groundMaterial;

                // Skybox
                var skybox = BABYLON.Mesh.CreateBox("skyBox", 800.0, scene);
                var skyboxMaterial = new BABYLON.StandardMaterial("skyBox", scene);
                skyboxMaterial.backFaceCulling = false;
                skyboxMaterial.reflectionTexture = new BABYLON.CubeTexture("textures/skybox", scene);
                skyboxMaterial.reflectionTexture.coordinatesMode = BABYLON.Texture.SKYBOX_MODE;
                skyboxMaterial.diffuseColor = new BABYLON.Color3(0, 0, 0);
                skyboxMaterial.specularColor = new BABYLON.Color3(0, 0, 0);
                skybox.material = skyboxMaterial;

                return scene;
            };

            var scene = createScene();

            engine.runRenderLoop(function () {
                scene.render();
            });

            // Resize
            window.addEventListener("resize", function () {
                engine.resize();
            });
        }
    </script>
</body>
</html>
Modifying the experimental Census UI involves adding a link button, scale dropdown, and a camera selector to the Demographics tab.

Fig 4 – “View HeightMap” button with Camera selector added to experimental Census UI

Babylon.js offers a range of cameras, including cameras for use with VR headsets.

  • ArcRotate – rotates around a target pivot using mouse and cursor keys
  • Free – first person shooter camera
  • Touch – includes touch gesture events for camera control
  • WebVRFree – dual eye images for VR head set devices

WebVRFreeCamera is suited for use in Google Cardboard headsets.

Fig 5 – Babylon.js WebVRFreeCamera Census P001001 terrain

I have a $4 Google Cardboard headset on order. Soon I can try it using a Babylon.js WebVRFreeCamera to navigate population terrains.

Fig 6 - $4.00 Google cardboard kit

Microsoft HoloLens will up the coolness but not the hipster factor while improving usability immensely (when released). I’m inclined to the minimalist movement myself, but I’d be willing to write a Windows 10 app with the HoloLens SDK to see how well it performs.

Fig 7 - Microsoft HoloLens is a future possibility for viewing Census Terrains


Viewing performance is fine once loaded as WebGL, but producing the height map involves:

  1. (Server) PostGIS query join to return intersected tabblocks of selected demographic
  2. (Server) Polygon draw to grayscale image with population value normalized by US maximum
  3. (Client) Babylon translates grayscale to webgl mesh

This requires some patience, roughly 5-10 seconds on my laptop, staring at the heavenly but empty skybox.

Future directions for inquiry

  • Compare performance with batch processed SF1 tiles. Tiles could combine a 3D vector mesh with a 2D value array to reduce size of multiple demographic tile pyramids.
  • Explore Babylon.js LOD mesh simplification.
  • Explore Babylon.js octree submeshes with multiple mesh tiles.
  • Use PostGIS MapAlgebra on multi-variate value arrays.

Fig 8 – Population view of Denver - Census SF1 P0010001 scale 3

Increasing the scale exaggerates relative population. My how Denver has grown!

Fig 9 – Population view of Denver - Census SF1 P0010001 scale 20

Demographic Landscapes – X3D/Census

Fig 1 – X3DOM view of tabblock census polygons demographics P0040003 density

PostGIS 2.2

PostGIS 2.2 is due for release sometime in August of 2015. Among other things, PostGIS 2.2 adds some interesting 3D functions via SFCGAL. ST_Extrude in tandem with ST_AsX3D offers a simple way to view a census polygon map in 3D. With these functions built into PostGIS, queries returning x3d text are possible.

Sample PostGIS 2.2 SQL Query:

SELECT ST_AsX3D(
         ST_Extrude(
           ST_SimplifyPreserveTopology(poly.GEOM, 0.00001),
           0, 0, float4(sf1.P0040003)/19352), 10) AS geom,
       ST_Area(geography(poly.geom)) * 0.0000003861 AS area,
       sf1.P0040003 AS demographic
FROM Tract poly
JOIN sf1geo geo ON geo.geoid = poly.geoid
JOIN sf1_00003 sf1 ON geo.stlogrecno = sf1.stlogrecno
WHERE geo.geocomp='00' AND geo.sumlev = '140'
AND ST_Intersects(poly.GEOM, ST_GeometryFromText('POLYGON((-105.188236232418 39.6533019504969,-105.051581442071 39.6533019504969,-105.051581442071 39.7349599496294,-105.188236232418 39.7349599496294,-105.188236232418 39.6533019504969))', 4269))

Sample x3d result:

<IndexedFaceSet  coordIndex='0 1 2 3 -1 4 5 6 7 -1 8 9 10 11 -1 12 13 14 15 -1 16 17 18 19 -1 20 21 22 23'>
<Coordinate point='-105.05323 39.735185 0 -105.053212 39.74398 0 -105.039308 39.743953 0 -105.0393 39.734344 0 -105.05323 39.735185 0.1139417115 -105.0393 39.734344 0.1139417115 -105.039308 39.743953 0.1139417115 -105.053212 39.74398 0.1139417115 -105.05323 39.735185 0 -105.05323 39.735185 0.1139417115 -105.053212 39.74398 0.1139417115 -105.053212 39.74398 0 -105.053212 39.74398 0 -105.053212 39.74398 0.1139417115 -105.039308 39.743953 0.1139417115 -105.039308 39.743953 0 -105.039308 39.743953 0 -105.039308 39.743953 0.1139417115 -105.0393 39.734344 0.1139417115 -105.0393 39.734344 0 -105.0393 39.734344 0 -105.0393 39.734344 0.1139417115 -105.05323 39.735185 0.1139417115 -105.05323 39.735185 0'/>
</IndexedFaceSet>

x3d format is not directly viewable in a browser, but it can be packaged into x3dom for use in any WebGL-enabled browser. Packaging x3d into an x3dom container allows return of an .x3d mime type, model/x3d+xml, for x3dom inline content html.

x3dom: http://www.x3dom.org/

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE X3D PUBLIC "ISO//Web3D//DTD X3D 3.1//EN" "http://www.web3d.org/specifications/x3d-3.1.dtd">
<X3D  profile="Immersive" version="3.1" xsd:noNamespaceSchemaLocation="http://www.web3d.org/specifications/x3d-3.1.xsd" xmlns:xsd="http://www.w3.org/2001/XMLSchema-instance">
				<viewpoint def="cam"
				centerofrotation="-105.0314177 39.73108357 0"
				orientation="-105.0314177 39.73108357 0.025 -0.25"
				position="-105.0314177 39.73108357 0.025"
				zfar="-1" znear="-1"></viewpoint>
<material DEF='color' diffuseColor='0.6 0 0' specularColor='1 1 1'></material>
<IndexedFaceSet  coordIndex='0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 -1 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 -1 38 39 40 41 -1 42 43 44 45 -1 46 47 48 49 -1 50 51 52 53 -1 54 55 56 57 -1 58 59 60 61 -1 62 63 64 65 -1 66 67 68 69 -1 70 71 72 73 -1 74 75 76 77 -1 78 79 80 81 -1 82 83 84 85 -1 86 87 88 89 -1 90 91 92 93 -1 94 95 96 97 -1 98 99 100 101 -1 102 103 104 105 -1 106 107 108 109 -1 110 111 112 113'>
<Coordinate point='-105.036957 39.733962 0 -105.036965 39.733879 0 -105.036797 39.734039 0 -105.036545 39.734181 0 -105.036326 39.734265 0 -105.036075 39.734319 0 -105.035671 39.734368 0 -105.035528 39.734449 0 -105.035493 39.73447 0 -105.035538 39.734786 0 -105.035679 39.73499 0 -105.035703 39.734927 0 -105.035731 39.734902 0 -105.035956 39.734777 0 -105.03643 39.734623 0'/>
</IndexedFaceSet>


Inline x3dom packaged into html:

    <meta http-equiv="X-UA-Compatible" content="IE=edge"/>
    <title>TabBlock Pop X3D</title>
	<script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-2.1.4.min.js"></script>
    <script type='text/javascript' src='http://www.x3dom.org/download/x3dom.js'> </script>
    <link rel='stylesheet' type='text/css' href='http://www.x3dom.org/download/x3dom.css'/>
<h2 id='testTxt'>TabBlock Population</h2>

<div id="content">
<x3d width='1000px' height='600px' showStat="true" showlog="false">
    <scene>
        <navigationInfo id="head" headlight='true' type='"EXAMINE"'></navigationInfo>
        <background skyColor="0 0 0"></background>
        <directionallight id="directional"
                intensity="0.5" on="TRUE" direction="0 -1 0" color="1 1 1" zNear="-1" zFar="-1">
        </directionallight>
        <Inline id="x3dContent" nameSpaceName="blockpop" mapDEFToID="true"></Inline>
    </scene>
</x3d>
</div>



In this case I added a non-standard WMS service adapted to add a new kind of request, GetX3D.

x3d is an open XML standard for leveraging immediate mode WebGL graphics in the browser. x3dom is an open source javascript library for translating x3d xml into WebGL in the browser.

“X3DOM (pronounced X-Freedom) is an open-source framework and runtime for 3D graphics on the Web. It can be freely used for non-commercial and commercial purposes, and is dual-licensed under MIT and GPL license.”

Why X3D?

I’ll admit it’s fun, but novelty may not always be helpful. Adding elevation does show demographic values in finer detail than the coarse classification used by a thematic color range. This experiment did not delve into the bivariate world, but multiple value modelling is possible using color and elevation with perhaps less interpretive misgivings than a bivariate thematic color scheme.

However, pondering the visionary, why should analysts be left out of the upcoming immersive world? If Oculus Rift, HoloLens, or Google Cardboard are part of our future, analysts will want to wander through population landscapes exploring avenues of millennials and valleys of the aged. My primitive experiments show only a bit of demographic landscape, but eventually demographic terrain streams will be layer choices available to web users for exploration.

Demographic landscapes like the census are still grounded, tethered to real objects. The towering polygon on the left recapitulates the geophysical apartment highrise; a looming block of 18-22 year olds reflects a military base. But models potentially float away from geophysical grounding. Facebook networks are less about physical location than network relation. Abstracted models of relationship are also subject to helpful visualization in 3D. Unfortunately we have only a paltry few dimensions to work with, ruling out value landscapes of higher dimensions.

Fig 3 – P0010001 Jenks Population

Some problems

For some reason IE11 always reverts to software rendering instead of using the system’s GPU. Chrome provides hardware rendering with consequent smoother user experience. Obviously the level of GPU support available on the client directly correlates to maximum x3dom complexity and user experience.

IE11 x3dom logs show this error:

ERROR: WebGL version WebGL 0.94 lacks important WebGL/GLSL features needed for shadows, special vertex attribute types, etc.!
INFO: experimental-webgl context found
Vendor: Microsoft Microsoft, Renderer: Internet Explorer Intel(R) HD Graphics 3000, Version: WebGL 0.94, ShadingLangV.:
WebGL GLSL ES 0.94, MSAA samples: 4
Extensions: WEBGL_compressed_texture_s3tc, OES_texture_float, OES_texture_float_linear, EXT_texture_filter_anisotropic,
      OES_standard_derivatives, ANGLE_instanced_arrays, OES_element_index_uint, WEBGL_debug_renderer_info
INFO: Initializing X3DCanvas for [x3dom-1434130193467-canvas]
INFO: Creating canvas for (X)3D element...
INFO: Found 1 X3D and 0 (experimental) WebSG nodes...
INFO: X3DOM version 1.6.2, Revison 8f5655cec1951042e852ee9def292c9e0194186b, Date Sat Dec 20 00:03:52 2014 +0100

In some cases the ST_Extrude result is rendered to odd surfaces with multiple artifacts. Here is an example with low population in eastern Colorado. Perhaps the extrusion surface breaks down due to tessellation issues on complex polygons with zero or near zero height. This warrants further experimentation.

Fig 2 – rendering artifacts at near zero elevations

The complexity threshold for acceptable performance on a typical client is fairly low, so it’s tricky to keep model sizes small enough. IE11 is especially vulnerable to collapse due to software rendering. In this experiment the x3d view is limited to intersections with the extents of the selected territory, computed using turf.js.

var extent = turf.extent(app.territory);
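turf.extent simply folds min/max over the territory coordinates to produce a [minX, minY, maxX, maxY] bounding box. A dependency-free equivalent for a single ring of [lon, lat] pairs, sketched here for illustration:

```javascript
// Fold min/max over a ring of [lon, lat] pairs; returns [minX, minY, maxX, maxY]
function extentOfRing(ring) {
    var ext = [Infinity, Infinity, -Infinity, -Infinity];
    ring.forEach(function (p) {
        ext[0] = Math.min(ext[0], p[0]);
        ext[1] = Math.min(ext[1], p[1]);
        ext[2] = Math.max(ext[2], p[0]);
        ext[3] = Math.max(ext[3], p[1]);
    });
    return ext;
}

console.log(extentOfRing([[-105.2, 39.6], [-105.0, 39.7], [-105.1, 39.8]]));
// [-105.2, 39.6, -105, 39.8]
```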

In addition making use of PostGIS ST_SimplifyPreserveTopology helps reduce polygon complexity.

Xml formats like x3d tend to be verbose, and newer lightweight frameworks prefer a json interchange. Json for WebGL is not officially standardized, but there are a few resources available.

Lighthouse – http://www.lighthouse3d.com/2013/07/webgl-importing-a-json-formatted-3d-model/
Three.js – http://threejs.org/ JSONLoader BabylonLoader
Babylon.js – http://www.babylonjs.com/ SceneLoader BABYLON.SceneLoader.ImportMesh


PostGIS 2.2 SFCGAL functions offer a new window into data landscapes. X3d/x3dom is an easy way to leverage WebGL in the browser.

A future project may look at converting the x3d output of PostGIS into a json mesh. This would enable use of other client libraries like Babylon.js.

Fig 4 – P0040003 Quantile Density

Territorial Turf

Fig 1 – Territories from Census Zip Code Tabulation Areas (ZCTAs) on Bing Maps

An interesting GIS development over the years has been the evolution from monolithic applications to multiple open source plug and play tools. Considering GIS as a three tier system with back end storage, a controller in the middle, and a UI display out front, more and more of the middle tier is migrating to either end.

SQL DBs, such as SQL Server, Oracle Spatial, and especially PostGIS, now implement a multitude of GIS functions originally considered middle tier domain. On the other end, the good folks at turf.js and proj4js continue to push atomic GIS functions out to javascript, where they can fit into the client side UI. The middle tier is getting thinner and thinner as time goes on. Generally the middle is what costs money, so rolling your own middle with less work is a good thing. As a desirable side effect, instead of kitchen sink GIS, very specific user tools can be cobbled together as needed.

Looking for an excuse to experiment with turf.js, I decided to create a Territory Builder utilizing turf.js on the client and some of the US Census Bureau Services on the backend.

US Census Bureau does expose some “useful” services at TigerWeb. I tend to agree with Brian Timoney that .gov should stick to generating data exposed in useful ways. Apps and presentation are fast evolving targets and historically .gov can’t really hope to keep up. Although you can use the census custom TigerWeb applications for some needs, there are many other occasions when you would like to build something less generic. For example a Territory builder over Bing Maps.

Here is the simplified POC work flow:

1. Use Census WMS GetMap to show polygons on a canvas over Bing Maps.
2. Use Census WMS GetFeatureInfo to return the GeoID of selected polygon(s).
3. Use the Census REST service and GeoID to return the selected polygon vertices.
4. Use the turf.js merge function to merge selected polygons into a single territory polygon.
5. Display the territory polygon using a Bing Maps Ajax v7 Microsoft.Maps.AdvancedShapes polygon.

Fig 2 – Territory polygon with a demographic overlay on top of Bing Maps

Note: Because there doesn’t appear to be an efficient way to join demographic data to geography with current TigerWeb service APIs, the demographic tab of this app uses a custom WMS PostGIS backend, hooking SF1 to TIGER polygons.

1. Census WMS GetMap over Bing Maps

WMS GetMap requests simply return an image. To overlay the image on a Bing Map, this Territory app uses an html5 canvas and its 2d context: app.context.drawImage(imageObj, 0, 0). The trick is to stack the canvas between the Bing Map tiles and the Bing Map navigation, scale, and selector tools. TigerWeb WMS conveniently exposes EPSG:3857, which aligns correctly with Bing Map tiles.

    function showCensus() {
        if (app.canvas == null) {
            app.canvas = document.createElement('canvas');
            app.canvas.id = 'censuscanvas';
            app.canvas.style.position = 'relative';
            var mapDiv = app.map.getRootElement();
            //position this canvas under Bing navigation and over map tiles
            mapDiv.insertBefore(app.canvas, mapDiv.childNodes[3]);
            app.context = app.canvas.getContext("2d");
        }
        //match canvas size to map size
        app.canvas.height = app.map.getHeight();
        app.canvas.width = app.map.getWidth();

        var imageObj = new Image();
        //image onload function draws image on context
        imageObj.onload = function () {
            app.canvas.style.opacity = app.alpha / 100;
            app.context.drawImage(imageObj, 0, 0);
        };
        //prepare a TigerWeb WMS GetMap request
        //use proj4.js to transform ll to mercator bbox
        var b = app.map.getBounds();
        var epsg3857 = "+proj=merc +a=6378137 +b=6378137 +lat_ts=0.0 +lon_0=0.0 +x_0=0.0 +y_0=0 +k=1.0 +units=m +nadgrids=@null +wktext +no_defs";
        var sw = proj4(epsg3857).forward([b.getWest(), b.getSouth()]);
        var ne = proj4(epsg3857).forward([b.getEast(), b.getNorth()]);
        bbox = sw[0] + "," + sw[1] + "," + ne[0] + "," + ne[1];

        var url = "";
        if (tab == "Territory") {
            app.alpha = 100;
            // GetMap request
            url = "http://tigerweb.geo.census.gov/arcgis/services/TIGERweb/tigerWMS_Current/MapServer/WmsServer?REQUEST=GetMap&SERVICE=WMS&VERSION=1.3.0&LAYERS=" + app.censusPolygonType + "&STYLES=&FORMAT=image/png&BGCOLOR=0xFFFFFF&TRANSPARENT=TRUE&CRS=EPSG:3857&BBOX=" + bbox + "&WIDTH=" + app.canvas.width + "&HEIGHT=" + app.canvas.height;
        }
        imageObj.src = url;
    }
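proj4 handles the lon/lat to EPSG:3857 transform for the bbox, but the spherical-mercator math behind it is compact enough to sketch standalone (a = 6378137, no proj4 dependency):

```javascript
// Spherical (web) mercator forward transform, the same math proj4
// applies for the EPSG:3857 bbox
var R = 6378137;
function toMercator(lon, lat) {
    var x = R * lon * Math.PI / 180;
    var y = R * Math.log(Math.tan(Math.PI / 4 + (lat * Math.PI / 180) / 2));
    return [x, y];
}

// lon 0, lat 0 maps to the origin (up to floating point rounding)
console.log(toMercator(0, 0));
console.log(toMercator(-95, 40));
```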

2. Census WMS GetFeatureInfo for obtaining GeoID

Unfortunately the WMS GetMap request has no IDs available for polygons, even if the requested format is image/svg+xml. SVG is an xml format and could easily carry associated GeoID values for joining with other data resources, but this is contrary to the spirit of the OGC WMS service specs. Naturally we must obey OGC Law, which is too bad. Adding a GeoID attribute would allow options such as choropleth fills directly on existing svg paths. For example, adding id="80132" would allow fill colors by zipcode with a bit of javascript.

http://tigerweb.geo.census.gov/arcgis/services/TIGERweb/tigerWMS_Current/MapServer/WmsServer?REQUEST=GetMap&SERVICE=WMS&VERSION=1.3.0&LAYERS=2010 Census ZIP Code Tabulation Areas&STYLES=&FORMAT=image/svg+xml&BGCOLOR=0xFFFFFF&TRANSPARENT=TRUE&CRS=EPSG:3857&BBOX=-11679625.942909468,4709198.547476525,-11645573.246808422,4737900.651597611&WIDTH=891&HEIGHT=751

<g transform="matrix(1.3333333 0 0 -1.3333333 445.5 375.5)">
<path id="80132" d="M300.50809 -258.78592 L302.54407 -257.91829 L303.17977 -257.55608 L304.45554 -256.98889
L280.30116 -268.55133 L280.7184 -268.34637 L298.33448 -259.95117 L300.50809 -258.78592 "
  stroke="#99454A" fill="none" stroke-opacity="1" stroke-width="1.5"
 stroke-linecap="round" stroke-linejoin="round" stroke-miterlimit="10" />
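With path ids like the sample above, a zipcode choropleth really would take only a bit of javascript. A sketch that emits CSS rules keyed on the id attribute (zipColors is a hypothetical zipcode-to-color classification; an attribute selector is used because a bare #80132 is not a valid CSS id selector):

```javascript
// Hypothetical zipcode -> fill color classification
var zipColors = { "80132": "#1a9641", "80133": "#fdae61" };

// Emit one CSS rule per zipcode, targeting the svg path id attribute
function choroplethCss(colors) {
    return Object.keys(colors).map(function (zip) {
        return 'path[id="' + zip + '"] { fill: ' + colors[zip] + '; }';
    }).join('\n');
}

console.log(choroplethCss(zipColors));
```

Injecting the returned rules into a style element would fill each zipcode path without touching the path data.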

Instead TigerWeb WMS exposes GetFeatureInfo requests. Using a click event we calculate a lat,lon location and send a GetFeatureInfo request to find a GeoID for the enclosing polygon.

http://tigerweb.geo.census.gov/arcgis/services/TIGERweb/tigerWMS_Current/MapServer/WmsServer?REQUEST=GetFeatureInfo&SERVICE=WMS&VERSION=1.3.0&CRS=EPSG:4326&BBOX=13.6972399985862,-128.010213138288,52.2800368298775,-53.444917258507&WIDTH=1061&HEIGHT=549&INFO_FORMAT=text/xml&QUERY_LAYERS=2010 Census ZIP Code Tabulation Areas&X=388&Y=190

<?xml version="1.0" encoding="UTF-8"?>
<FeatureInfoResponse xmlns="http://www.esri.com/wms" xmlns:esri_wms="http://www.esri.com/wms">
<FIELDS INTPTLON="-101.2099848" INTPTLAT="+38.9263824" CENTLON="-101.2064010" CENTLAT="+38.9226272" STGEOMETRY="Polygon" OBJECTID="16553" AREAWATER="163771" AREALAND="1327709219" FUNCSTAT="S" ZCTA5CC="B5" MTFCC="G6350" NAME="ZCTA5 67764" LSADC="Z5" BASENAME="67764" GEOID="67764" ZCTA5="67764" OID="221404258331221"/>

Since cross-domain browser restrictions come into play, it’s necessary to add a bit of server-side proxy code to actually return the xml.

Some MVC controller magic, starting with the client-side request:

 function GetCensusFeature(e) {
    var x = e.pageX - $("#map").offset().left;
    var y = e.pageY - $("#map").offset().top;
    var FeatureUrl = {
        url: "http://tigerweb.geo.census.gov/arcgis/services/TIGERweb/tigerWMS_Current/MapServer/WmsServer?REQUEST=GetFeatureInfo&SERVICE=WMS&VERSION=1.3.0&LAYERS=" + app.censusPolygonType + "&STYLES=&FORMAT=image/png&BGCOLOR=0xFFFFFF&TRANSPARENT=TRUE&CRS=EPSG:3857&BBOX=" + bbox + "&WIDTH=" + app.canvas.width + "&HEIGHT=" + app.canvas.height + "&INFO_FORMAT=text/xml&QUERY_LAYERS=" + app.censusPolygonType + "&X=" + x + "&Y=" + y
    };
    app.postit('/api/Operation/GetFeatureInfo', {
        data: JSON.stringify(FeatureUrl),
        success: function (response) {
            if (response.length > 0) {
                var parser = new DOMParser();
                var xmlDoc = parser.parseFromString(response, "text/xml");
                //parse xmlDoc for the GeoID (see the success handler below)
            }
        }
    });
 }

Eventually the request reaches a C# controller which proxies GetFeatureInfo:

        /// <summary>
        /// GetFeatureInfo
        /// </summary>
        /// <param name="feature"></param>
        /// <returns>xml doc</returns>
        public async Task<IHttpActionResult> GetFeatureInfo(Models.FeatureUrl feature)
        {
            var wc = new HttpClient();
            var getFeatureInfoRequest = new Uri(feature.url);
            var response = await wc.GetAsync(getFeatureInfoRequest);
            var content = await response.Content.ReadAsStreamAsync();
            var result = "";
            using (var reader = new StreamReader(content))
            {
                while (!reader.EndOfStream)
                {
                    result += reader.ReadLine();
                }
            }
            return Ok(result);
        }

Finally, on success, javascript can parse the xml for GeoID and add a canvas context box with some text information:

     success: function (response) {
         if (response.length > 0) {
             var parser = new DOMParser();
             var xmlDoc = parser.parseFromString(response, "text/xml");
             if (tab == "Territory" && xmlDoc.getElementsByTagName("FeatureInfoResponse")[0].hasChildNodes()) {
                 app.context.rect(x, y, 125, 25);
                 app.context.fillStyle = 'white';
                 app.context.lineWidth = 1;
                 app.context.strokeStyle = 'black';
                 app.context.fillStyle = 'black';
                 app.context.font = '11pt Calibri';
                 var fields = xmlDoc.getElementsByTagName("FIELDS")[0];
                 app.context.fillText(fields.getAttribute('NAME'), x + 5, y + 15);
                 var geoid = fields.getAttribute('GEOID');
             }
         }
     }

Fig 3 – building territories from Census Block Groups on Bing Maps

3. Census REST to obtain vertices

The GEOID retrieved from our proxied GetFeatureInfo request allows us to grab vertices with another TigerWeb service, Census REST.

This spec is a little more proprietary and requires some detective work to unravel.
FeatureUrl.url = "http://tigerweb.geo.census.gov/arcgis/rest/services/TIGERweb/PUMA_TAD_TAZ_UGA_ZCTA/MapServer/1/query?where=GEOID%3D" + geoid + "&geometryPrecision=6&outSR=4326&f=pjson";

There are different endpoint urls for the various polygon types. In this case Zip Codes are found in PUMA_TAD_TAZ_UGA_ZCTA. We don’t need more than 1 meter resolution so precision 6 is good enough and we would like the results in epsg:4326 to avoid a proj4 transform on the client.
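A quick sanity check on geometryPrecision=6: one micro-degree of longitude at the equator works out to roughly 0.11 m, comfortably inside the 1 meter target:

```javascript
// Meters per degree of longitude at the equator (WGS84 semi-major axis)
var metersPerDegree = 6378137 * Math.PI / 180; // ~111319.49

// Resolution of the sixth decimal place, in meters
console.log(metersPerDegree * 1e-6); // ~0.111 m
```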

This follows the same process as previously, sending the REST request through a proxy controller. The geojson geometry result can be seen with a sample request like the one above.

Census REST doesn’t appear to offer a simplify parameter so the coordinates returned are at the highest resolution. Highly detailed polygons can easily return several thousand vertices, which is a problem for performance, but the trade-off is eliminating hosting data ourselves.
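Lacking a server-side simplify parameter, thinning could also happen on the client before display. A crude radial-distance filter is one hedge, sketched below (a sketch only, with tolerance in degrees; turf.js also ships a simplify function that would likely do better):

```javascript
// Keep the first and last vertices; drop any intermediate vertex closer
// than tol (degrees) to the last vertex kept
function thinRing(ring, tol) {
    var kept = [ring[0]];
    for (var i = 1; i < ring.length - 1; i++) {
        var last = kept[kept.length - 1];
        var dx = ring[i][0] - last[0], dy = ring[i][1] - last[1];
        if (Math.sqrt(dx * dx + dy * dy) >= tol) kept.push(ring[i]);
    }
    kept.push(ring[ring.length - 1]); // always keep the closing vertex
    return kept;
}

var ring = [[0, 0], [0.0001, 0], [0.5, 0], [1, 0], [1, 1], [0, 0]];
console.log(thinRing(ring, 0.01)); // drops the near-duplicate second vertex
```

This trades shape fidelity for vertex count, which matters most when several thousand-vertex zipcode polygons are merged into one territory.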

4. turf.js merge function to Build Territory

Finally the interesting part, where we get to use turf.js to handle merging multiple polygons into a single territory polygon.

var FeatureUrl = {
    url: "http://tigerweb.geo.census.gov/arcgis/rest/services/TIGERweb/PUMA_TAD_TAZ_UGA_ZCTA/MapServer/1/query?where=GEOID%3D" + geoid + "&geometryPrecision=6&outSR=4326&f=pjson"
};
app.postit('/api/Operation/GetFeatureGeo', {
    data: JSON.stringify(FeatureUrl),
    success: function (response) {
        var polygon = JSON.parse(response);
        if (polygon.features.length == 0 || polygon.features[0].geometry.rings.length == 0) {
            alert("No Geo Features returned.");
            return;
        }
        if (app.territoryFeatures == null) {
            app.territoryFeatures = new Array();
            app.territory = turf.polygon(polygon.features[0].geometry.rings);
            app.territoryFeatures.push(app.territory);
        }
        else {
            var tpolygon = turf.polygon(polygon.features[0].geometry.rings);
            app.territoryFeatures.push(tpolygon);
            var fc = turf.featurecollection(app.territoryFeatures);
            try {
                app.territory = turf.merge(fc);
            }
            catch (err) {
                alert("turf.js error: " + err.message);
            }
        }
        if (app.territory.geometry.coordinates.length > 0) {
            displayTerritory();
        }
    },
    error: function (response) {
        alert("TigerWeb error:" + response);
    }
});

5. Display Territory Bing Maps Ajax polygon

The final rendering just uses Bing Maps Ajax v7 Microsoft.Maps.AdvancedShapes to add the new territory to the map.

function displayTerritory() {
    var coordLen = app.territory.geometry.coordinates.length;
    if (app.territory.geometry.type == 'Polygon') {
        var rings = new Array();
        for (var i = 0; i < coordLen; i++) {
            var vertices = new Array();
            for (var j = 0; j < app.territory.geometry.coordinates[i].length; j++) {
                vertices.push(new Microsoft.Maps.Location(app.territory.geometry.coordinates[i][j][1], app.territory.geometry.coordinates[i][j][0]));
            }
            rings.push(vertices);
        }
        var polygon = new Microsoft.Maps.Polygon(rings, { fillColor: new Microsoft.Maps.Color(100, 100, 0, 100) });
        app.map.entities.push(polygon);
    }
    else if (app.territory.geometry.type == 'MultiPolygon') {
        var multi = new Array();
        for (var i = 0; i < coordLen; i++) {
            var ringslen = app.territory.geometry.coordinates[i].length;
            for (var j = 0; j < ringslen; j++) {
                var vertices = new Array();
                for (var k = 0; k < app.territory.geometry.coordinates[i][j].length; k++) {
                    vertices.push(new Microsoft.Maps.Location(app.territory.geometry.coordinates[i][j][k][1], app.territory.geometry.coordinates[i][j][k][0]));
                }
                multi.push(vertices);
            }
        }
        var polygon = new Microsoft.Maps.Polygon(multi, { fillColor: new Microsoft.Maps.Color(100, 100, 0, 100) });
        app.map.entities.push(polygon);
    }
    else {
        alert("geometry type is " + app.territory.geometry.type + ". Territory requires a Polygon or MultiPolygon.");
    }
}

Fig 4 – Territory Total for P0040003 Latino Origin – Map: Quantile population classifier Census Block Groups on Bing Map


TigerWeb offers some useful data access. With TigerWeb WMS and REST api, developers can customize apps without hosting and maintaining a backend SQL store. However, there are some drawbacks.

Some potential improvements:
1. Adding an svg id="GeoID" would really improve the usefulness of TigerWeb WMS image/svg+xml, possibly eliminating steps 2 and 3 of the workflow.

2. Technically it’s possible to use the TigerWeb REST api to query geojson by area, but practically speaking the results are too detailed for useful performance. A helpful option for TigerWeb REST would be a parameter to request simplified polygons and avoid lengthy vertex transfers.

turf.js is a great tool box, however, occasionally the merge function had trouble with complex polygons from TigerWeb.

Fig 6 - turf merge had some rare difficulties with complex polygons from TigerWeb

In the end, territories are useful for query aggregates of SF1 demographic data.

Fig 5 – Territory Total for P0120025 Male 85 Years and older – Map: Jenks population classifier Census Block Group on Bing Map

2020 The Last Census?

Fig 1 - SF1QP Quantile Population County P0010001 P1.TOTAL POPULATION Universe: Total population

Preparation for US 2020 Census is underway at this mid-decennial point and we’ll see activity ramping up over the next few years. Will 2020 be the last meaningful decennial demographic data dump? US Census has been a data resource since 1790. It took a couple centuries for Census data to migrate into the digital age, but by Census 2000, data started trickling into the internet community. At first this was simply a primitive ftp data dump, ftp2.census.gov, still very useful for developers, and finally after 2011 exposed as OGC WMS, TigerWeb UI, and ESRI REST.

However, static data in general, and decennial static data in particular, is fast becoming anachronistic in the modern era. Surely the NSA data tree looks something like phone number JOIN Facebook account JOIN Twitter account JOIN social security id JOIN bank records JOIN IRS records JOIN medical records JOIN DNA sequence….. Why should this data access be limited to a few black budget bureaus? Once the data tree is altered a bit to include mobile devices, static demographics are a thing of the past. Queries in 2030 may well ask “how many 34 year old male Hispanic heads of households with greater than 3 dependents with a genetic predisposition to diabetes are in downtown Denver Wed at 10:38AM, at 10:00PM?” For that matter let’s run the location animation at 10 minute intervals for Tuesday and then compare with Sat.

“Privacy? We don’t need no stinking privacy!”

I suppose Men in Black may find location-aware DNA queries useful for weeding out hostile alien grays, but shouldn’t local cancer support groups also be able to ping potential members as they wander by Starbucks? Why not allow soda vending machines to check for your diabetic potential and credit before offering appropriate selections? BTW How’s that veggie smoothie?

Back to Old School

Fig 2 - SF1QD Quantile Density Census Block Group P0050008 P5.HISPANIC OR LATINO ORIGIN BY RACE Universe: Total population

By late 2011 census OGC services began to appear along with some front end data web UIs, and ESRI REST interfaces. [The ESRI connection is a tightly coupled symbiotic relationship as the Census Bureau, like many government bureaucracies, relies on ESRI products for both publishing and consuming data. From the outside ESRI could pass as an agency of the federal government. For better or worse “Arc this and that” are deeply rooted in the .gov GIS community.]

For mapping purposes there are two pillars of Census data, spatial and demographic. The spatial data largely resides as TIGER data while the demographic data is scattered across a large range of products and data formats. In the basement, a primary demographic resource is the SF1, Summary File 1, population data.

“Summary File 1 (SF 1) contains the data compiled from the questions asked of all people and about every housing unit. Population items include sex, age, race, Hispanic or Latino origin, household relationship, household type, household size, family type, family size, and group quarters. Housing items include occupancy status, vacancy status, and tenure (whether a housing unit is owner-occupied or renter-occupied).”

The intersection of SF1 and TIGER is the base level concern of census demographic mapping. There are a variety of rendering options, but the venerable color themed choropleth map is still the most widely recognized. This consists of assigning a value class to a color range and rendering polygons with their associated value color. This then is the root visualization of Census demographics, TIGER polygons colored by SF1 population classification ranges.
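The root operation described above can be sketched in a few lines of javascript (an illustrative helper, not part of any Census API): given ascending class breaks and a matching color ramp, each polygon’s SF1 value picks its fill color.

```javascript
// Choropleth root operation sketch: map an SF1 value to a fill color.
// breaks is an ascending array of class upper bounds; colors has one
// entry per class.
function classColor(value, breaks, colors) {
    for (var i = 0; i < breaks.length; i++) {
        if (value <= breaks[i]) return colors[i];
    }
    return colors[colors.length - 1]; // value above the top break
}
```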

Unfortunately, access to this basic visualization is not part of the 2010 TigerWeb UI.

There are likely a few reasons for this, even aside from the glacially slow adoption of technology at the Bureau of the Census. A couple of obvious reasons are the sheer size of this data resource and the range of the statistics gathered. A PostGIS database with 5 level primary spatial hierarchy, all 48 SF1 population value files, appropriate indices, plus a few helpful functions consumes a reasonable 302.445 GB of a generic Amazon EC2 SSD elastic block storage. But, contained in those 48 SF1 tables are 8912 demographic values which you are welcome to peruse here. A problem for any UI is how to make 8912 plus 5 spatial levels usable.

Fig 3 – 47 SF1 tables plus sf1geo geography join file

Filling a gap

Since the Census Bureau budget did not include public visualization of TIGER/Demographics what does it take to fill in the gap? Census 2010 contains a large number of geographic polygons. The core hierarchy for useful demographic visualization is state, county, tract, block group, and block.

Fig 4 – Census polygon hierarchy

Loading the data into PostGIS affords low cost access to data for SF1 Polygon value queries such as this:

-- block tabblock
SELECT poly.GEOM, geo.stusab, geo.sumlev, geo.geocomp, geo.state, geo.county,
       geo.tract, geo.blkgrp, geo.block, poly.geoid10, sf1.P0010001, geo.stlogrecno
FROM tabblock poly
JOIN sf1geo geo ON geo.geoid10 = poly.geoid10
JOIN sf1_00001 sf1 ON geo.stlogrecno = sf1.stlogrecno
WHERE geo.geocomp = '00'
  AND geo.sumlev = '101'
  AND ST_Intersects(poly.GEOM, ST_GeometryFromText('POLYGON ((-104.878035974004 38.9515291859429,-104.721023973742 38.9515291859429,-104.721023973742 39.063158980149,-104.878035974004 39.063158980149,-104.878035974004 38.9515291859429))', 4269))
ORDER BY geo.state, geo.county, geo.tract, geo.blkgrp, geo.block

Returning 1571 polygons in 1466 ms. Not too bad, but surely there’s room for improvement. Where is Paul Ramsey when you need him?

Fig 5 - PostgreSQL PostGIS Explain Query

Really Old School – WMS

Some considerations on the data:

A. Queries become unwieldy for larger extents with large numbers of polygons

Polygon Counts
county 3,233
tract 74,133
blockgroup 220,740
tabblock 11,166,336

These polygon counts rule out visualizations of the entire USA, or even moderate regions, at tract+ levels of the hierarchy. Vector mapping is not optimal here.

B. The number of possible image tile pyramids for 8912 values over 5 polygon levels is 5 * 8912 = 44,560. This rules out tile pyramids of any substantial depth without some deep Google-like pockets for storage. Tile pyramids are not optimal either.

C. Even though vector grid pyramids would help with these 44,560 demographic variations, they suffer from the same restrictions as A. above.

One possible compromise of performance/visualization is to use an old fashioned OGC WMS GetMap request scheme that treats polygon types as layer parameters and demographic types as style parameters. With appropriate use of WMS <MinScaleDenominator> <MaxScaleDenominator> the layers are only rendered at sufficient zoom to reasonably limit the number of polygons. Using this scheme puts rendering computation right next to the DB on the same EC2 instance, while network latency is reduced to simple jpeg/png image download. Scaling access to public consumption is still problematic, but for in-house it does work.
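A minimal sketch of this request scheme, with a hypothetical endpoint and parameter values (the service described here is custom and in-house): the polygon hierarchy level travels as the WMS LAYERS parameter and the SF1 demographic field as STYLES.

```javascript
// Sketch of a GetMap URL builder for the layer/style scheme above.
// The host name and the particular layer/style values are illustrative.
function sf1GetMapUrl(bbox, width, height) {
    var params = {
        SERVICE: "WMS",
        VERSION: "1.1.1",
        REQUEST: "GetMap",
        LAYERS: "tract",      // polygon hierarchy level as a WMS layer
        STYLES: "P0010001",   // SF1 demographic value as a WMS style
        SRS: "EPSG:4269",
        BBOX: bbox.join(","),
        WIDTH: width,
        HEIGHT: height,
        FORMAT: "image/png"
    };
    return "http://example.com/SF1WMS?" + Object.keys(params).map(function (k) {
        return k + "=" + encodeURIComponent(params[k]);
    }).join("&");
}
```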

Fig 6 – Scale dependent layer rendering for SF1JP - SF1 Jenks P0010001 (not density)

Fig 7 - a few of 8912 demographic style choices

There are still issues with a scale rendering approach. Since population is not very homogeneous over the US coverage extent, scale dependent rendering asks to be variable as well. This is easily visible over population centers. Without some type of pre-calculated density grid, the query is already completed prior to knowledge of the ideal scale dependency. Consequently, static rendering scales have to be tuned to high population urban regions. Since “fly over” US is generally less interesting to analysts, we can likely live with this compromise.

Fig 8 - SF1QD SF1 Quantile Density Census Tract P0010001/geographic Area

Classification schemes

Dividing a value curve to display adequately over a viewport range can be accomplished in a few different ways: equal intervals, equal quantiles, Jenks natural breaks optimization, K-means clustering, or “other.” Leaning toward the simpler, I chose a default quantile classifier (it guarantees some color in every class) with a ten-class single-hue progression, which of course is not recommended by ColorBrewer. However, ten seems an appropriate number for decennial data. I also included a Jenks classifier option, which is considered a better representation. The classifier is based only on visible polygons rather than the entire polygon population. This means region-to-region comparisons are deceptive, but after all this is visualization of statistics.
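A minimal quantile classifier along these lines can be sketched in javascript (an illustrative helper, not the service’s actual code): split the visible polygons’ values into equal-count bins and return the upper break of each bin.

```javascript
// Quantile break sketch: nClasses equal-count bins over the visible
// polygons' values; returns the upper bound of each bin.
function quantileBreaks(values, nClasses) {
    var sorted = values.slice().sort(function (a, b) { return a - b; });
    var breaks = [];
    for (var i = 1; i <= nClasses; i++) {
        var idx = Math.min(sorted.length - 1,
                           Math.ceil(i * sorted.length / nClasses) - 1);
        breaks.push(sorted[idx]);
    }
    return breaks;
}
```

Jenks natural breaks, mentioned above as the better representation, is a separate optimization that minimizes within-class variance rather than equalizing counts.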

“There are three kinds of lies: lies, damned lies, and statistics.” Mark Twain

Fig 9 – SF1JP SF1 Jenks Census Tract P0010001 (not density)

In order to manage Census data on a personal budget these compromises are involved:

1. Only expose SF1 demographic data for 2010 i.e. 8912 population value types
2. Only provide primary level polygon hierarchy – state, county, tract, blockgroup, block
3. Code a custom OGC WMS service – rendering GetMap image on the server
4. Resolution scale rendering to limit polygon counts down the polygon hierarchy
5. Provide only quantile and Jenks classifier options
6. Classifier applied only to viewport polygon selection

This is a workable map service for a small number of users. Exposing it as an OGC WMS service offers some advantages. First, there are already a ton of WMS clients available to actually see the results. Second, the query, geometry parsing, and image computation (including any required re-projection) are all server side on the same instance, reducing network traffic. Unfortunately, the downside is that the computation cost is significant and discouraging for a public-facing service.

Scaling could be accomplished in a few ways:

1. Vertical scaling to a high memory EC2 R3 instance(s) and a memory tuned PostGIS
2. Horizontal auto scaling to multiple instances with a load balancer
3. Storage scaling with pre-populated S3 tile pyramids for upper extents

Because this data happens to be read only for ten years, scaling is not too hard, as long as there is a budget. It would also be interesting to try some reconfiguration of data into NoSQL type key/value documents with perhaps each polygon document containing the 8912 values embedded along with the geometry. This would cost a bit in storage size but could decrease query times. NoSQL also offers some advantages for horizontal scaling.


The Census Bureau and its census are obviously not going away. The census is a bureaucracy with a curious inertial life stretching back to the founding of our country (United States Constitution Article 1, section 2). Although static aggregate data is not going to disappear, dynamic real time data has already arrived on stage in numerous and sundry ways from big data portals like Google, to marketing juggernauts like Coca Cola and the Democratic Party, to even more sinister black budget control regimes like the NSA.

Census data won’t disappear. It will simply be superseded.

The real issue for 2020 and beyond is, how to actually intelligently use the data. Already data overwhelms analytic capabilities. By 2030, will emerging AI manage floods of real time data replacing human analysts? If Wall Street still exists, will HFT algos lock in dynamic data pipelines at unheard of scale with no human intervention? Even with the help of tools like R Project perhaps the human end of data analysis will pass into anachronism along with the decennial Census.

Fig 10 - SF1JP SF1 Jenks Census Blocks P0010001

GeoEpistemology uh where’s that?

Google Arunachal Pradesh

Fig 1 – Google Arunachal Pradesh as represented to .in, not to be confused with Arunachal Pradesh .cn

In the background of the internet lies this ongoing discussion of epistemology. It’s an important discussion with links to crowd source algos, big data, and even AI. Perhaps it’s a stretch to include maps, which after all are meant to represent “exactitude in science,” or JTB, Justified True Belief. On the one hand we have the prescience of Jorge Luis Borges, concisely represented by his single-paragraph short story.

Del rigor en la ciencia

… En aquel Imperio, el Arte de la Cartografía logró tal Perfección que el mapa de una sola Provincia ocupaba toda una Ciudad, y el mapa del Imperio, toda una Provincia. Con el tiempo, esos Mapas Desmesurados no satisficieron y los Colegios de Cartógrafos levantaron un Mapa del Imperio, que tenía el tamaño del Imperio y coincidía puntualmente con él. Menos Adictas al Estudio de la Cartografía, las Generaciones Siguientes entendieron que ese dilatado Mapa era Inútil y no sin Impiedad lo entregaron a las Inclemencias del Sol y de los Inviernos. En los desiertos del Oeste perduran despedazadas Ruinas del Mapa, habitadas por Animales y por Mendigos; en todo el País no hay otra reliquia de las Disciplinas Geográficas.

-Suárez Miranda: Viajes de varones prudentes,
Libro Cuarto, Cap. XLV, Lérida, 1658

translation by Andrew Hurley

On Exactitude in Science

…In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.

-Suarez Miranda,Viajes de varones prudentes,
Libro IV,Cap. XLV, Lerida, 1658

As Jorge Luis Borges so aptly implies, the issue of epistemology swings between scientific exactitude and cultural fondness, an artistic reference to the unsettling observations of Thomas Kuhn’s paradigm shiftiness, The Structure of Scientific Revolutions.

Precession of Simulacra

On the other hand, Jean Baudrillard would prefer an inversion of Borges in his Simulacra and Simulation:

“The territory no longer precedes the map, nor does it survive it. It is nevertheless the map that precedes the territory—precession of simulacra—that engenders the territory”

In a less postmodern sense we can point to the recent spectacle of Nicaraguan sovereignty extending into Costa Rica, provoked by the preceding Google Map error, as a very literal “precession of simulacrum.” See details in Wired.

We now have map border wars and a crafty Google expedient of representing Arunachal Pradesh according to client language: China sees one thing, India another, and all are happy. So maps are not exempt from geopolitical machinations any more than Wikipedia. Of course the secular bias of Google invents an agnostic viewpoint of neither here nor there, presuming a superior vantage and relegating “simplistic” nationalism to a subjected role of global ignorance. Not unexpectedly, global corporations wield power globally, and therefore their interests lie supranationally.

Perhaps in a Jean Baudrillard world the DPRK could disappear for ROK viewers and vice versa resolving a particularly long lived conflict.

Filter Bubbles

We are all more or less familiar with the filter bubble phenomenon. Your every wish is my command.

“The best books, he perceived, are those that tell you what you know already.”
George Orwell, 1984 p185

The consumer is king and this holds true in search and advertising as well as in Aladdin’s tale. Search filters at the behest of advertising money work very well at fencing us into smaller and smaller bubbles of our own desire. The danger of self-referential input is well known as narcissism. We see this at work in contextual map bubbles displaying only relevant points of interest from past searches.

With Google Glass, self-referential virtual objects can literally mask any objectionable reality. Should a business desire to pop a filter bubble, only a bit more money is required. In the end, map POI algorithms dictate desire by limiting context. Are “personalized” maps a hint of precession of simulacra or simply one more example of rampant technical narcissism?


In the political realm elitists such as Cass Sunstein want to nudge us, which is a yearning of all mildly totalitarian states. Although cognitive infiltration will do in a pinch, “a boot stamping on a human face” is reserved for a last resort. How might the precession of simulacra assist the fulfillment of Orwellian dreams?

Naturally, political realities are less interested in our desires than their own. This is apparently a property of organizational ascendancy. Whether corporations or state agencies, at some point of critical mass organizations gain a life of their own. The organization eventually becomes predatory, preying on those they serve for survival. Political information bubbles are less about individual desires than survival of the state. To be blunt “nudge” is a euphemism for good old propaganda.

propaganda map

Fig 2 - Propaganda Map - more of a shove than a nudge

The line from Sunstein to a Clinton, of either gender, is short. Hillary Clinton has long decried the chaotic democracy of page ranked search algorithms. After noting that any and all ideas, even uncomfortable truths, can surface virally in a Drudge effect, Hillary would insist “we are all going to have to rethink how we deal with the Internet.” At least she seems to have thought creatively about State Dept emails. Truth is more than a bit horrifying to oligarchs of all types, as revealed by the treatment of Edward Snowden, Julian Assange, and Barrett Brown.

Truth Vaults

Enter Google’s aspiration to Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources. In other words a “truth page ranking” to supplant the venerable but messily democratic “link page ranking.” Why, after all, leave discretion or critical thought to the unqualified masses? For the history minded, this is rather reminiscent of pre-reformation exercise of Rome’s magisterium. We may soon see a Google Magisterium defining internet truth, albeit subject to FCC review.

“The net may be “neutral” but the FCC is most certainly not.”

According to Google: “Nothing but the truth.” I mean who could object? Well there seem to be some doubters among the hoi polloi. How then does this Google epistemology actually work? What exactly is Justified True Belief in Google’s Magisterium and how much does it effectively overlap with the politically powerful?

Leaving aside marginal Gettier-cases there are some pressing questions about the mechanics of KBT. In Google’s KBT basement is this thing called Knowledge Vault – Knowledge Graph.

“The fact extraction process we use is based on the Knowledge Vault (KV) project.”

“Knowledge Vault has pulled in 1.6 billion facts to date. Of these, 271 million are rated as “confident facts”, to which Google’s model ascribes a more than 90 per cent chance of being true. It does this by cross-referencing new facts with what it already knows.”

“Google’s Knowledge Graph is currently bigger than the Knowledge Vault, but it only includes manually integrated sources such as the CIA Factbook.”

“This is the most visionary thing,” says Suchanek. “The Knowledge Vault can model history and society.”

Per Jean Baudrillard read “model” as a verb rather than a thing. Google (is it possible to do this unwittingly?) arrogates a means to condition the present, in order to model the past, to control our future, to paraphrase the Orwellian syllogism.

“Who controls the past controls the future. Who controls the present controls the past.”
George Orwell, 1984

Not to be left behind, MSNBC’s owner Microsoft harbors similar aspirations:

“LazyTruth developer Matt Stempeck, now the director of civic media at Microsoft New York, wants to develop software that exports the knowledge found in fact-checking services such as Snopes, PolitiFact, and FactCheck.org so that everyone has easy access to them.”

And National Geographic too, all in for a new science: The Consensus of “Experts”

“Everybody should be questioning,” says McNutt. “That’s a hallmark of a scientist. But then they should use the scientific method, or trust people using the scientific method, to decide which way they fall on those questions.”

Ah yes the consensus of “Experts,” naturally leading to the JTB question, whose experts? IPCC may do well to reflect on Copernicus in regards to ancien régime and scientific consensus.

Snopes duo

Fig 3 - Snopes duo in the Truth Vault at Google and Microsoft? Does the cat break tie votes?

Google’s penchant for metrics and algorithmic “neutrality” neatly papers over the Mechanical Turk or two in the vault so to speak.

Future of simulacra

In a pre-digital Soviet era, map propaganda was an expensive proposition. Interestingly today Potemkin maps are an anachronistic cash cow with only marginal propaganda value. Tomorrow’s Potemkin maps according to Microsoft will be much more entertaining but also a bit creepy if coupled to brain interfaces. Brain controls are inevitably a two way street.

“Microsoft HoloLens understands your movements, vision, and voice, enabling you to interact with content and information in the most natural way possible.”

The only question is, who is interacting with content in the most un-natural way possible in the Truth Vault?

Will InfoCrafting at the brain interface be the next step for precession of simulacra?


Fig 5 – Leonie and $145 million worth of “InfoCrafting” COINTEL with paid trolls and sock puppet armies


Is our cultural fondness leaning toward globally agnostic maps of infinite plasticity, one world per person? Jean Baudrillard would likely presume the Google relativistic map is the order of the day, where precession of simulacra induces a customized world generated in some kind of propagandistic nirvana, tailored for each individual.

But just perhaps, the subtle art of Jorge Luis Borges would speak to a future of less exactitude:

“still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.”

I suppose to be human is to straddle exactitude and art, never sure whether to land on truth or on beauty. Either way, we do well to Beware of Truth Vaults!

WebGL with a little help from Babylon.js

Most modern browsers now support the HTML5 WebGL standard: Internet Explorer 11+, Firefox 4+, Google Chrome 9+, Opera 12+. One of the latest to the party is IE 11.


Fig 2 – html5 test site showing WebGL support for IE11

WebGL support means that GPU power is available to javascript developers in supporting browsers. GPU technology fuels the $46.5 billion “vicarious life” industry. Video gaming surpasses even Hollywood movie tickets in annual revenue, but this projection shows a falling revenue curve by 2019. Hard to say why the decline, but is it possibly an economic side effect of too much vicarious living? The relative merits of passive versus active forms of “vicarious living” are debatable, but as long as technology chases these vast sums of money, GPU geometry pipeline performance will continue to improve year over year.

WebGL exposes immediate mode graphics pipelines for fast 3D transforms, lighting, shading, animations, and other amazing stuff. GPU induced endorphin bursts do have their social consequences. Apparently, Huxley’s futuristic vision has won out over Orwell’s, at least in internet culture.

“In short, Orwell feared that what we fear will ruin us. Huxley feared that our desire will ruin us.”

Neil Postman Amusing Ourselves to Death.

Aside from the Soma like addictive qualities of game playing, game creation is actually a lot of work. Setting up WebGL scenes with objects, textures, shaders, transforms … is not a trivial task, which is where Dave Catuhe’s Babylon.js framework comes in. Dave has been building 3D engines for a long time. In fact I’ve played with some of Dave’s earlier efforts in Ye olde Silverlight days of yore.

“I am a real fan of 3D development. Since I was 16, I spent all my spare time creating 3d engines with various technologies (DirectX, OpenGL, Silverlight 5, pure software, etc.). My happiness was complete when I discovered that Internet Explorer 11 has native support for WebGL. So I decided to write once again a new 3D engine but this time using WebGL and my beloved JavaScript.”

Dave Catuhe Eternal Coding

Dave’s efforts improve with each iteration, and Babylon.js is a wonderfully powerful yet simple-to-use javascript WebGL engine whose usefulness/complexity curve keeps trending up. To be sure, a full-fledged gaming environment is still a lot of work; with babylon.js much of the heavy lifting falls to the art design guys. From a mapping perspective I’m happy to forego the gaming but still enjoy some impressive 3D map building with low effort.

In order to try out babylon.js I went back to an old standby, NASA Earth Observation data. NASA has kindly provided an OGC WMS server for their earth data. Brushing off some old code I made use of babylon.js to display NEO data on a rotating globe.

Babylon.js has innumerable samples and tutorials which makes learning easy for those of us less inclined to read manuals. This playground is an easy way to experiment: Babylon playground

The Babylon.js engine is used to create a scene, which is then handed off to engine.runRenderLoop. From a mapping perspective, most of the interesting stuff happens in createScene.

Here is a very basic globe:

<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Babylon.js Globe</title>
    <script src="http://www.babylonjs.com/babylon.js"></script>
    <style>
        html, body {
            overflow: hidden;
            width: 100%;
            height: 100%;
            margin: 0;
            padding: 0;
        }
        #renderCanvas {
            width: 100%;
            height: 100%;
            touch-action: none;
        }
    </style>
</head>
<body>
    <canvas id="renderCanvas"></canvas>
    <script>
        var canvas = document.getElementById("renderCanvas");
        var engine = new BABYLON.Engine(canvas, true);

        var createScene = function () {
            var scene = new BABYLON.Scene(engine);

            // Light
            var light = new BABYLON.HemisphericLight("HemiLight", new BABYLON.Vector3(-2, 0, 0), scene);

            // Camera
            var camera = new BABYLON.ArcRotateCamera("Camera", -1.57, 1.0, 200, BABYLON.Vector3.Zero(), scene);
            camera.attachControl(canvas, true);

            // Creation of a sphere
            // (name of the sphere, segments, diameter, scene)
            var sphere = BABYLON.Mesh.CreateSphere("sphere", 100.0, 100.0, scene);
            sphere.position = new BABYLON.Vector3(0, 0, 0);
            sphere.rotation.x = Math.PI;

            // Add material to sphere
            var groundMaterial = new BABYLON.StandardMaterial("mat", scene);
            groundMaterial.diffuseTexture = new BABYLON.Texture('textures/earth2.jpg', scene);
            sphere.material = groundMaterial;

            // Animations - rotate earth
            var alpha = 0;
            scene.beforeRender = function () {
                sphere.rotation.y = alpha;
                alpha -= 0.01;
            };

            return scene;
        };

        var scene = createScene();

        // Register a render loop to repeatedly render the scene
        engine.runRenderLoop(function () {
            scene.render();
        });

        // Watch for browser/canvas resize events
        window.addEventListener("resize", function () {
            engine.resize();
        });
    </script>
</body>
</html>

Fig 3 – rotating Babylon.js globe

Add one line for a 3D effect using a normal (bump) map texture.

groundMaterial.bumpTexture = new BABYLON.Texture('textures/earthnormal2.jpg', scene);

Fig 4 – rotating Babylon.js globe with normal (bump) map texture

The textures applied to BABYLON.Mesh.CreateSphere required some transforms to map correctly.


Fig 5 – texture images require img.RotateFlip(RotateFlipType.Rotate90FlipY);

Without this image transform the resulting globe is more than a bit warped. It reminds me of a Pangea timeline gone mad.


Fig 6 – globe with no texture image transform

Updating our globe texture skin requires a simple proxy that performs the img.RotateFlip after getting the requested NEO WMS image.

public Stream GetMapFlip(string wmsurl)
{
    string message = "";
    try
    {
        HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(new Uri(wmsurl));
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            if (response.StatusDescription.Equals("OK"))
            {
                using (Image img = Image.FromStream(response.GetResponseStream()))
                {
                    //rotate image 90 degrees, flip on Y axis
                    img.RotateFlip(RotateFlipType.Rotate90FlipY);
                    using (MemoryStream memoryStream = new MemoryStream())
                    {
                        img.Save(memoryStream, System.Drawing.Imaging.ImageFormat.Png);
                        WebOperationContext.Current.OutgoingResponse.ContentType = "image/png";
                        return new MemoryStream(memoryStream.ToArray());
                    }
                }
            }
            else message = response.StatusDescription;
        }
    }
    catch (Exception e)
    {
        message = e.Message;
    }
    ASCIIEncoding encoding = new ASCIIEncoding();
    Byte[] errbytes = encoding.GetBytes("Err: " + message);
    return new MemoryStream(errbytes);
}

With texture in hand the globe can be updated adding hasAlpha true:

var overlayMaterial = new BABYLON.StandardMaterial("mat0", nasa.scene);
var nasaImageSrc = Constants.ServiceUrlOnline + "/GetMapFlip?url=http://neowms.sci.gsfc.nasa.gov/wms/wms?Service=WMS%26version=1.1.1%26Request=GetMap%26Layers=" + nasa.image + "%26BGCOLOR=0xFFFFFF%26TRANSPARENT=TRUE%26SRS=EPSG:4326%26BBOX=-180.0,-90,180,90%26width=" + nasa.width + "%26height=" + nasa.height + "%26format=image/png%26Exceptions=text/xml";
overlayMaterial.diffuseTexture = new BABYLON.Texture(nasaImageSrc, nasa.scene);
overlayMaterial.bumpTexture = new BABYLON.Texture('textures/earthnormal2.jpg', nasa.scene);
overlayMaterial.diffuseTexture.hasAlpha = true;
nasa.sphere.material = overlayMaterial;

Setting hasAlpha to true lets us show a secondary earth texture through the NEO overlay where data was not collected. For example, Bathymetry (GEBCO_BATHY) leaves transparent holes for the continental masses, making the underlying earth texture visible. Alpha sliders could also be added to stack several NEO layers, but that’s another project.


Fig 7 – alpha bathymetry texture over earth texture

Since a rotating globe can be annoying it’s worthwhile adding a toggle switch for the rotation weary. One simple method is to make use of a Babylon pick event:

        window.addEventListener("click", function (evt) {
            var pickResult = nasa.scene.pick(evt.clientX, evt.clientY);
            if (pickResult.pickedMesh.id != "skyBox") {
                if (nasa.rotationRate < 0.0) nasa.rotationRate = 0.0;
                else nasa.rotationRate = -0.005;
            }
        });
In this case any click ray that intersects the globe will toggle globe rotation on and off. Click picking is a kind of collision checking for object intersection in the scene, which could be very handy for adding globe interaction. In addition to pickedMesh.id, pickResult gives a pickedPoint location, which could be reverse transformed to a latitude, longitude.
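As a sketch, assuming the sphere is centered at the origin and ignoring the globe’s current animation rotation (which a real handler would subtract from the longitude), the reverse transform might look like:

```javascript
// Sketch: convert a pickResult.pickedPoint on the globe into
// latitude/longitude degrees. Assumes a sphere centered at the origin;
// the running y-rotation of the animation is not accounted for here.
function pickedPointToLatLon(p) {
    var r = Math.sqrt(p.x * p.x + p.y * p.y + p.z * p.z); // sphere radius
    var lat = Math.asin(p.y / r) * 180 / Math.PI;         // -90 .. 90
    var lon = Math.atan2(p.z, p.x) * 180 / Math.PI;       // -180 .. 180
    return { lat: lat, lon: lon };
}
```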

Starbox (no coffee involved) is a quick way to add a surrounding background in 3D. It’s really just a BABYLON.Mesh.CreateBox big enough to engulf the earth sphere, a very limited kind of cosmos. The stars are not astronomically accurate, just added for some mood setting.

Another handy BABYLON Feature is BABYLON.Mesh.CreateGroundFromHeightMap

/* Name
 * Height map picture url
 * mesh Width
 * mesh Height
 * Number of subdivisions (increase the complexity of this mesh)
 * Minimum height : The lowest level of the mesh
 * Maximum height : the highest level of the mesh
 * scene
 * Updatable: say if this mesh can be updated dynamically in the future (Boolean)
 */

var height = BABYLON.Mesh.CreateGroundFromHeightMap("height", "textures/" + heightmap, 200, 100, 200, 0, 2, scene, false);

For example using a grayscale elevation image as a HeightMap will add exaggerated elevation values to a ground map:


Fig 8 – elevation grayscale jpeg for use in BABYLON HeightMap


Fig 9 – HeightMap applied

The HeightMap can be any value; for example, NEO monthly fires converted to grayscale will show fire density over the surface.
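The grayscale conversion itself is simple; here is a sketch of the per-pixel luminance step as it might be applied to an RGBA buffer of the kind returned by a 2D canvas getImageData call (an illustrative helper, not the code used for the figures). Brighter pixels then become higher elevations in CreateGroundFromHeightMap.

```javascript
// Sketch: convert an RGBA pixel buffer to grayscale in place using
// standard luminance weights, so pixel brightness can drive
// CreateGroundFromHeightMap elevation.
function toGrayscalePixels(px) {
    for (var i = 0; i < px.length; i += 4) {
        var y = Math.round(0.299 * px[i] + 0.587 * px[i + 1] + 0.114 * px[i + 2]);
        px[i] = px[i + 1] = px[i + 2] = y; // alpha channel untouched
    }
    return px;
}
```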


Fig 10 – NEO monthly fires as heightmap

In this case a first-person shooter (FPS) camera was substituted for a generic ArcRotate camera so users can stalk around the earth looking at fire spikes.

“FreeCamera – This is a ‘first person shooter’ (FPS) type of camera where you control the camera with the mouse and the cursors keys.”

Lots of camera choices are listed here, including Oculus Rift, which promises some truly immersive map opportunities. I assume this note indicates Babylon is waiting on the retail release of Oculus to finalize a camera controller.

“The OculusCamera works closely with our Babylon.js OculusController class. More will be written about that, soon, and nearby.

Another Note: In newer versions of Babylon.js, the OculusOrientedCamera constructor is no longer available, nor is its .BuildOculusStereoCamera function. Stay tuned for more information.”

So it may be only a bit longer before “vicarious life” downhill skiing opportunities are added to FreshyMap.



Fig 11 - NEO Land Surface average night temperature