Five Reasons You Should Be Excited About the 10.1 ArcGIS Runtime SDK

With ArcGIS version 10.1, Esri will introduce the ArcGIS Runtime and associated SDKs. There’s already a lot of buzz about the Runtime in the developer community, and for good reason. The Runtime has been architected from the ground up with an eye on addressing some familiar challenges for GIS developers: exceedingly complex, fine-grained object models; complicated deployment; poor performance; large and memory-intensive applications; lack of native 64-bit support … the list goes on. Esri hopes to eliminate many of these “pain points” with the new Runtime SDK, and I think most of us are also ready to let the healing begin. With its powerful yet coarse-grained architecture, it might be tempting to view the new ArcGIS Runtime as MapObjects and ArcGIS Engine’s love child, but there are (at least) five reasons why it’s more than that.

1)      Simplified deployment

You may have already seen the now almost legendary demo where an Esri dev drags his Runtime project onto a thumb drive, walks it to another machine, and fires it up without an install. It never fails to get gasps from the crowd, and for good reason. How many times have you had to sheepishly tell your client, “hmm, it works on my machine”? The ArcGIS Runtime eliminates complicated deployments by packaging everything required by the application into a (relatively) small deployment package. A simple tracking application I built with a pre-beta version of the Runtime weighed in at about 175 megs. Since the Runtime is split into several “functionality sets”, you can keep your deployment lean by eliminating functionality your application doesn’t require. As a bonus, a Runtime application doesn’t care about other versions of ArcGIS you might have installed, and will happily run side-by-side with its progenitors.

2)      Performance

The ArcGIS Runtime has been architected to take advantage of available CPU resources, including multiple processors and cores. It provides true multithreading and native 32 and 64-bit support. It also supports an asynchronous programming pattern (made possible by its multithreaded architecture) that contributes to an improved user experience. The ArcGIS Runtime display architecture has also been optimized for speed.

3)     Connected and disconnected modes

Both local and Web data sources for the Runtime use the same programming model, which means it’s easy to build connected and disconnected modes into your application. Since your application’s data source can be based on map (or tile, locator, geoprocessing, etc.) packages, these data can be downloaded on the initial load and then used locally thereafter. A common and effective approach is to use online data for your base map, while deploying a map package with your application to contain operational data. Editing can also be performed in a disconnected manner, using the geodatabase check-in/check-out replication model.

4)     Intuitive object model

While functionally more powerful than MapObjects, the ArcGIS Runtime SDK is considerably more coarse-grained than ArcGIS Engine. In other words, it delivers all the functionality that most GIS applications need without an overly complex object model. Under the hood, the Runtime uses REST for communication with both Web and local data sources. Developers who are familiar with any of the ArcGIS Web APIs should find the Runtime API very intuitive to work with right out of the gate.
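For a flavor of the Web API pattern the Runtime mirrors, here’s a rough sketch using the ArcGIS API for Flex, one of the existing Web APIs (the service URL is a placeholder, and the exact constructor signature may differ slightly between API versions):

```actionscript
import com.esri.ags.Map;
import com.esri.ags.layers.ArcGISTiledMapServiceLayer;

// Create a map and add a tiled map service by its REST endpoint
var map:Map = new Map();
var basemap:ArcGISTiledMapServiceLayer = new ArcGISTiledMapServiceLayer(
    "http://myserver/ArcGIS/rest/services/MyMap/MapServer"); // placeholder URL
map.addLayer(basemap);
```

The Runtime API follows the same coarse-grained, REST-backed idea: point a layer at a source (local package or online service) and add it to the map.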

5)     Multi-platform support

The ArcGIS Runtime supports applications for 32-bit Windows, 64-bit Windows, and 64-bit Linux. Windows applications can be built using the WPF, Java, or Qt SDKs. Linux applications can be built using Java or Qt.

While your applications still won’t be writing themselves anytime soon, the ArcGIS Runtime SDK should make GIS development a little more enjoyable. The time you may have had to spend smoothing out your deployment or poring over a couple dozen object model diagrams can now be spent doing something fun (or at least productive), like fine-tuning your cartography or making tweaks to your application’s UI.

Applying 50% More D-Factor to Your GIS Projects

If you’re a Flash buff, it’s hard to escape the direction Adobe is taking with its newest Flash Player, which introduces the Stage 3D capabilities (previously codenamed “Molehill”).  Molehill is a new set of low-level GPU-accelerated APIs that will enable support across multiple screens and devices.  For the Flash 3D aficionados out there, what does this mean?  It means our world is about to change… cube textures, z-buffering, fragment and vertex shaders… And for the layman, here are some numbers that just make you say wow:

  • Previous Flash Players supported 4–8 thousand polygons
  • Molehill has been stress-tested with millions of polygons

Talk about holy poly-count Batman!

But diving into the guts of the API isn’t for the faint-hearted.  Coming from the development world, we’re all familiar with the ‘Hello World’ examples… so how about a ‘Hello Triangle’ example?  The following code snippet was authored by Ryan Speets.  This is an incomplete example that draws a simple triangle and square on the screen.

public function myContext3DHandler(event:Event):void {
    // In a full application the context, buffers, and program would be
    // class members so the enterFrame render handler can reach them;
    // they're shown inline here for brevity
    var stage3D:Stage3D = event.target as Stage3D;
    var context:Context3D = stage3D.context3D;
    context.configureBackBuffer(640, 480, 4, true);

    // Set up triangle's buffers: 3 vertices, 6 Numbers each (x,y,z + r,g,b)
    var triangleVertexBuffer:VertexBuffer3D = context.createVertexBuffer(3, 6);
    triangleVertexBuffer.uploadFromVector(Vector.<Number>([
         0,  1, 0,   1, 0, 0,
        -1, -1, 0,   0, 1, 0,
         1, -1, 0,   0, 0, 1
    ]), 0, 3);
    var triangleIndexBuffer:IndexBuffer3D = context.createIndexBuffer(3);
    triangleIndexBuffer.uploadFromVector(Vector.<uint>([0, 1, 2]), 0, 3);

    // Set up square's buffers: 4 vertices, drawn as two triangles
    var squareVertexBuffer:VertexBuffer3D = context.createVertexBuffer(4, 6);
    squareVertexBuffer.uploadFromVector(Vector.<Number>([
        -1,  1, 0,   0.5, 0.5, 1.0,
         1,  1, 0,   0.5, 0.5, 1.0,
         1, -1, 0,   0.5, 0.5, 1.0,
        -1, -1, 0,   0.5, 0.5, 1.0
    ]), 0, 4);
    var squareIndexBuffer:IndexBuffer3D = context.createIndexBuffer(6);
    squareIndexBuffer.uploadFromVector(Vector.<uint>([0, 1, 2, 0, 2, 3]), 0, 6);

    // Assemble the shaders
    var vertexShaderAssembler:AGALMiniAssembler = new AGALMiniAssembler();
    vertexShaderAssembler.assemble(Context3DProgramType.VERTEX,
        "m44 vt0, va0, vc0 \n" +  // transform the vertex by the model matrix
        "m44 op, vt0, vc4 \n" +   // project into clip space
        "mov v0, va1"             // pass the vertex color to the fragment shader
    );

    var fragmentShaderAssembler:AGALMiniAssembler = new AGALMiniAssembler();
    fragmentShaderAssembler.assemble(Context3DProgramType.FRAGMENT,
        "mov oc, v0\n"            // output the interpolated color
    );

    // Upload and set active the shaders
    var program:Program3D = context.createProgram();
    program.upload(vertexShaderAssembler.agalcode, fragmentShaderAssembler.agalcode);

    this.addEventListener(Event.ENTER_FRAME, enterFrame);
}


Whatever happened to var triangle:Triangle = new Triangle()?  As mentioned earlier, Molehill is a low-level API (and they mean really low).  Fortunately there are a number of robust frameworks freely available that are compatible with Molehill and abstract much of the complexity of Stage3D away from us so we can focus on doing “funner” stuff.  What are some of these frameworks?

For the following demonstrations, I leveraged Away3D.  Away3D is an open-source framework created by the same folks who put together (the now dead?) Papervision.  Away3D features a great developer community, significant samples, responsive contributors, and open-source accessibility.  Seemed like a good choice.  Alternativa3D was a close second and was the shop responsible for putting together the MAX3D racer demo available from Adobe’s website.

But we’re a GIS technology company, and so we started asking ourselves how these new advances in web-based 3D visualization can help the geospatial community.  Really, the possibilities are endless, but here are some of the low-hanging fruit:

  • Terrain visualization
  • Flight simulation
  • 3D based COP
  • Utility/Pipelines
  • Physics simulations
  • Development pre-visualization
  • Resource geolocation/tracking
  • Etc.

The demonstrations range from simple primitive tests, elevation and ESRI basemaps, to global geolocation and AVL tracking.

Caption:  Who would have thought that simple polygons would get me so excited?

Once the polygons were in place, I thought it might be beneficial if we could interact with the 3D objects.  A simple property change (mouseEnabled = true) and an extra EventListener, and voila: we now have interactive 3D objects.

Caption:  Imagine, if you will, something cool!
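In Away3D that interaction boils down to something like the sketch below (class names from Away3D’s 4.x branch; the mesh variable and handler are illustrative, and exact event properties vary between releases):

```actionscript
import away3d.events.MouseEvent3D;

// Enable mouse picking on an existing mesh and listen for clicks
mesh.mouseEnabled = true;
mesh.addEventListener(MouseEvent3D.CLICK, onMeshClick);

private function onMeshClick(event:MouseEvent3D):void {
    // React to the pick; here we just spin the clicked object
    event.object.rotationY += 45;
}
```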

Then I asked myself: what would a 3D scene be without a little mood lighting and textures?  Away3D supports a number of light and material options, including DirectionalLight, PointLight, BitmapMaterial, ColorMaterial, VideoMaterial, etc.  Notice the poly count listed in the statistics in the upper-left corner… 19,802 polygons.  That sphere has waaay too many divisions, but take my word for it, manipulating the 3D scene is as smooth as butter.

Caption:  Adding a little mood makes everything seem more dramatic.
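Setting that mood takes only a few lines.  A rough sketch using the class names listed above (the light-assignment API changed between Away3D releases, and the sphere and view variables are illustrative):

```actionscript
import away3d.lights.DirectionalLight;
import away3d.materials.ColorMaterial;
import flash.geom.Vector3D;

// One directional light shining down into the scene
var light:DirectionalLight = new DirectionalLight();
light.direction = new Vector3D(1, -1, 1);
view.scene.addChild(light);

// A flat-colored, lit material applied to an existing sphere mesh
var material:ColorMaterial = new ColorMaterial(0x3366ff);
material.lights = [light];
sphere.material = material;
```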

Next up was the manipulation of 3D objects using embedded audio assets.  We’re all familiar with audio equalizers, but what about representing audio frequencies in real-time 3D?  The Flash sound API offers useful utilities for introspecting audio playback, and tying them to 3D objects makes an interesting demonstration.  Not much to see by way of a screenshot, but visualize each bar bumping up and down to the rhythm.

Caption:  3D equalizer gives a new meaning to audio visualizations
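A sketch of the technique using flash.media.SoundMixer.computeSpectrum(), presumably the utility in question (the bars collection of 3D meshes and the scale factor are illustrative):

```actionscript
import flash.events.Event;
import flash.media.SoundMixer;
import flash.utils.ByteArray;

// Sample the current audio output each frame and drive the bar heights.
// With FFTMode set to true, computeSpectrum() writes 512 Numbers in the
// 0..1 range: 256 left-channel frequency values followed by 256 right.
private function onEnterFrame(event:Event):void {
    var spectrum:ByteArray = new ByteArray();
    SoundMixer.computeSpectrum(spectrum, true); // true = FFT (frequency) mode
    for (var i:int = 0; i < bars.length; i++) {
        var level:Number = spectrum.readFloat();
        bars[i].scaleY = 1 + level * 20; // bump the bar to the rhythm
    }
}
```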

At this point I was ready to begin delving into more geospatial experimentation.  What’s the first thing that many of us GIS professionals think of when we think of 3D?  TINs, DEMs, elevation.  Right?  Unfortunately, the following demonstrations don’t represent true elevations (rather, pixel values are converted into relative extrusion heights), but from a visualization perspective the results are impressive (if not accurate).  Notice the poly count on this screenshot… 80,000 polygons without a single hiccup in framerate as I spin the map.

Caption:  Pixel colors are converted into relative heights to create the 3D extrusion.

Caption:  A closer look
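The pixel-to-height conversion is straightforward: sample each pixel’s brightness and scale it to a relative extrusion height.  A sketch of the idea (the bitmap source and maxHeight scale are illustrative, and the result is a visualization value, not a true elevation):

```actionscript
import flash.display.BitmapData;

// Convert a pixel's brightness into a relative extrusion height.
// 'bitmap' is the exported DEM image; maxHeight is an arbitrary scale.
private function heightAt(bitmap:BitmapData, x:int, y:int,
                          maxHeight:Number):Number {
    var rgb:uint = bitmap.getPixel(x, y);
    // Average the red, green, and blue channels for a 0..255 brightness
    var brightness:Number = (((rgb >> 16) & 0xFF) +
                             ((rgb >> 8) & 0xFF) +
                             (rgb & 0xFF)) / 3;
    return (brightness / 255) * maxHeight; // 0..maxHeight
}
```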

As developers, we’re always looking for ways to create value for our stakeholders.  Those familiar with the ESRI development APIs are probably familiar with the ESRI basemaps and the manipulation of graphics via GraphicsLayers… now if only this could be extruded into 3D…

Caption:  Starting sample application.  User may begin sketching their waypoints.

Caption:  Ready for 3D visualization

Caption:  Elevation and basemap data mashed together to create 3D flight simulation.  Red box could easily be swapped out for camera.

In the above concepts, I use three different map services published with ESRI’s ArcGIS Server.  The first map service uses the ESRI street basemap as a simple reference while sketching the waypoints.  Once the waypoints are drawn, the application analyzes the extent of the points and exports images from a map service rendering DEM data, as well as from ESRI’s topographic basemap.  These are mashed together to create the final scene.  The red box represents an object that could just as well have been a camera to create a “pilot’s perspective” flight simulation.  Again, the terrain is only a simulation/visualization and should not be considered representative of true elevation.

Now that I had location on a local level, I started thinking international/global.  Could I use the concepts learned while generating local 3D scenes at a much larger scope?  You bet!  These examples not only demonstrate the ability to load data from external web services dynamically into a 3D scene, but also the mapping of geographic coordinates to pixel coordinates.  We can also attach event listeners to the markers to provide click or tooltip information.

Caption:  There’s something about “don’t click me” buttons that drives people crazy

Caption:  Data loaded from an external web service and converted to pixel space, with tooltip interactivity
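For a world texture in an equirectangular (plate carrée) projection, mapping geographic coordinates to pixel coordinates is a simple linear transform; something like:

```actionscript
import flash.geom.Point;

// Map lon/lat (in degrees) onto a world texture of mapWidth x mapHeight
// pixels, assuming an equirectangular projection
private function geoToPixel(lon:Number, lat:Number,
                            mapWidth:Number, mapHeight:Number):Point {
    var x:Number = (lon + 180) / 360 * mapWidth;  // -180..180 -> 0..width
    var y:Number = (90 - lat) / 180 * mapHeight;  //  90..-90 -> 0..height
    return new Point(x, y);
}
```

A Web Mercator basemap would need the corresponding Mercator math for the y-axis instead, but the idea is the same.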

The mapping of lat/lon data in real-time 3D reminded me of AVL situations in COP applications.  What if you were managing a wildfire in rough terrain and wanted to visualize on-the-ground resources in the context of the terrain?  In the next experiment, I created a simulated web service that the 3D scene polls every XX seconds to update the locations of resources on the ground.  The blue resources move over time as their coordinates are updated via a web-service call, all the while I’m zooming in and out of canyons, panning around, etc., within the 3D scene.   How about real-time FAA flight visualizations?

Caption:  Select area to analyze

Caption:  See on the ground resource locations in near realtime (simulated from web-service).
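The polling loop itself is just a Timer plus a URLLoader.  A sketch (the service URL, the 5-second interval, and the handler names are all placeholders):

```actionscript
import flash.events.Event;
import flash.events.TimerEvent;
import flash.net.URLLoader;
import flash.net.URLRequest;
import flash.utils.Timer;

// Poll the (simulated) web service on a fixed interval
var pollTimer:Timer = new Timer(5000);
pollTimer.addEventListener(TimerEvent.TIMER, onPoll);
pollTimer.start();

private function onPoll(event:TimerEvent):void {
    var loader:URLLoader = new URLLoader();
    loader.addEventListener(Event.COMPLETE, onPositionsLoaded);
    loader.load(new URLRequest("http://myserver/resources.json")); // placeholder
}

private function onPositionsLoaded(event:Event):void {
    // Parse the response and move each resource marker in the 3D scene
}
```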

Finally (and just for the fun of it), I wanted to start testing 3D physics.  Adobe has another great project underway, codenamed “Alchemy”, that allows users to compile raw C and C++ code into SWF or SWC files that can be embedded in your applications.  Someone in the community was kind enough to compile the Bullet physics engine into a consumable SWC file used in the following demonstration.  Not much use for it yet, but it certainly opens the door of possibilities.

Caption:  Shooting balls at a wall of cubes could be packaged into a game

Caption:  Resulting chaos

Hopefully we can post a demo video up soon, but in the meantime enjoy the provided screenshots/discussion to get you thinking about adding 50% more D to your world.