
How We Made the 3-D New York City Flood Map

We used features only available in the most modern web browsers to create the interactive map of the city's flood zones.

We used WebGL to create the 3-D map of FEMA's new flood zones.

Earlier this year we published a story and an interactive graphic about the evolving Federal Emergency Management Agency flood maps in New York City in the year after Hurricane Sandy.

FEMA had advisory maps in the works when Sandy hit. The agency rushed them out in the days afterward as a first sketch for those looking to rebuild.

Our story found that while the maps continued to be revised over the course of a year, homeowners had little guidance on how much their home's value, as well as its required elevation, was changing as they struggled to rebuild after the storm. To complicate matters, Congress had recently passed legislation that threatened to dramatically raise flood insurance premiums for those remapped into high-risk flood zones.

In the midst of all of this, New York City Mayor Michael Bloomberg announced an ambitious $20 billion plan to protect the city from storms, a plan with at least a $4.5 billion funding gap and no clear timeline.

With these advisory maps as a guide, and knowing there would be another revision in the coming months, we wanted to create a visualization that would show readers the impact Sandy had, how much impact a potential flood could have, and how the measures laid out in Bloomberg’s plan, if implemented, might protect the city.

We were inspired by graphics like this Times-Picayune 3-D map of the New Orleans levee system which shows how bowl-like that city is, as well as the U.S. Army Corps of Engineers' scale model of the San Francisco Bay. Mapping in three dimensions helps readers see ground elevation and building height in a much more intuitive way than a traditional flat map, and one which matches their mental model of the city.

We set out to find the right technology to render our map in a browser. Software like Maya would have let us make an animated motion graphic of the map, which would have been beautiful, but we wanted to let readers explore it and find locations that are important to them. So even though it only works in the newest browsers, we decided to use WebGL, a kind of experimental bridge between JavaScript and OpenGL that lets web developers talk directly to the user's graphics card.

Aside from creating what we believe is one of the first maps of its kind on the web, we also persuaded New York City to release, for the first time, its 26 gigabyte 2010 digital elevation model, which is now available on the NYC DataMine.

The Data

To make our 3-D maps we needed accurate geographic data. We needed the GIS files for two different iterations of the flood-risk zones: the 2007 zones (mostly based on 1983 data) and the new 2013 advisory ones. We also needed the footprints for every building in New York City (there are more than a million), including their heights, as well as the amount of damage each building sustained during Sandy. Finally, we needed elevation data showing how high the land is all over the city, and a base layer of things like streets and parks.

Some of that data was easy to get. FEMA will ship you free shapefiles for every flood insurance map in the country through its Map Service Center, and it posts the new New York City flood zone files on a regional site.

Getting the building data wasn't so easy.

At the time we were making the map, the city's building footprints shapefile did not include building heights, so we needed to join it to two other databases, the Property Assessment Roll and the Property Address Directory, to get the number of stories (which we used as a proxy for height). Since then, the city has updated the building footprints file with a field containing the actual height of each building.

The last step was associating FEMA's estimate of damage level with each building. Since FEMA's data is stored as a collection of points, we needed to do a spatial query to find which of the building footprints intersected with FEMA's points.
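
At its core, that query is a point-in-polygon test: does a given FEMA damage point fall inside a given building footprint? Here's a minimal ray-casting sketch of that test in JavaScript. It's an illustration of the underlying idea, not the spatial query we actually ran, and the function and argument names are ours.

// Minimal ray-casting point-in-polygon test (illustrative only).
// point is [x, y]; polygon is an array of [x, y] vertices.
function pointInPolygon(point, polygon) {
  var inside = false;
  for (var i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    var xi = polygon[i][0], yi = polygon[i][1];
    var xj = polygon[j][0], yj = polygon[j][1];
    // Count how many polygon edges a horizontal ray from the point crosses;
    // an odd number of crossings means the point is inside.
    var crosses = ((yi > point[1]) !== (yj > point[1])) &&
      (point[0] < (xj - xi) * (point[1] - yi) / (yj - yi) + xi);
    if (crosses) inside = !inside;
  }
  return inside;
}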

With all of the data in hand, we were able to make a new shapefile combining building footprints, height (which we approximated by setting a story to ten feet high), and level of damage.

To make a 3-D map, you need information on the height of the ground. But finding a good dataset of ground elevations in a crowded city like New York is difficult. In 2010, the City University of New York mapped the topography of New York City using a technology called lidar in order to find rooftops that would be good locations for solar installations. Thankfully, we were able to persuade New York City’s Department of Information Technology and Telecommunications to give us the dataset. The department also posted the data on the NYC DataMine for anyone to download.

Inventing a Format

Once we assembled the data, we needed to convert it into a form WebGL could understand. Shapefiles store polygons as a simple array of points, but WebGL doesn't handle polygons, only triangles, so we needed to transform our polygons into a collection of triangles. In order to do so, we implemented an “ear clipping” algorithm to turn polygons like this:

into a collection of triangles that looks like this:
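
Here's a minimal sketch of the ear-clipping idea in JavaScript, assuming a simple counter-clockwise polygon with no holes; our production code handled more edge cases, and the names here are ours.

// Twice the signed area of triangle abc; positive when abc is counter-clockwise.
function area2(a, b, c) {
  return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]);
}

// Is point p inside (or on the edge of) the counter-clockwise triangle abc?
function pointInTriangle(p, a, b, c) {
  return area2(a, b, p) >= 0 && area2(b, c, p) >= 0 && area2(c, a, p) >= 0;
}

// Turn a simple counter-clockwise polygon (array of [x, y] pairs) into
// triangles by repeatedly clipping "ears": convex corners whose triangle
// contains no other vertex.
function earClip(points) {
  var indices = points.map(function(_, i) { return i; });
  var triangles = [];
  while (indices.length > 3) {
    var clipped = false;
    for (var i = 0; i < indices.length; i++) {
      var i0 = indices[(i + indices.length - 1) % indices.length];
      var i1 = indices[i];
      var i2 = indices[(i + 1) % indices.length];
      var a = points[i0], b = points[i1], c = points[i2];
      if (area2(a, b, c) <= 0) continue;      // reflex corner: not an ear
      var blocked = indices.some(function(j) {
        return j !== i0 && j !== i1 && j !== i2 &&
               pointInTriangle(points[j], a, b, c);
      });
      if (blocked) continue;                  // another vertex sits inside
      triangles.push([i0, i1, i2]);           // clip the ear
      indices.splice(i, 1);
      clipped = true;
      break;
    }
    if (!clipped) break;                      // degenerate input; give up
  }
  if (indices.length === 3) {
    triangles.push([indices[0], indices[1], indices[2]]);
  }
  return triangles;                           // triples of vertex indices
}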

We then packed these triangles into a buffer of binary data in a format we called a ".jeff" file. Jeff files are very simple binary files. They consist of three length-prefixed buffers: a 32-bit float array of the vertices in a particular shape, a 32-bit integer array of triangle vertex indices, and JSON-encoded metadata. The record layout looks like this:

length of vertices | vertex x1 | vertex y1 | vertex x2 | vertex y2 | ...
length of triangles | index of the first point of a triangle | index of its second point | index of its third point | ...
length of metadata | JSON-encoded metadata
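
For concreteness, here is a sketch of how one shape could be packed into that layout using typed arrays. It's our illustration, not the actual baking tool; it assumes ASCII metadata and little-endian byte order, which is what the reader below effectively expects on today's hardware.

// Pack one shape into the .jeff record layout (illustrative only).
// vertices: array of x,y floats; triangles: array of vertex indices;
// meta: a plain object that gets JSON-encoded.
function packJeffRecord(vertices, triangles, meta) {
  var json = JSON.stringify(meta);
  var metaBytes = [];
  for (var i = 0; i < json.length; i++) {
    metaBytes.push(json.charCodeAt(i));       // assumes ASCII metadata
  }

  var byteLength = 4 + vertices.length * 4 +  // Int32 prefix + Float32 vertices
                   4 + triangles.length * 4 + // Int32 prefix + Int32 indices
                   4 + metaBytes.length;      // Int32 prefix + JSON bytes
  var buf = new ArrayBuffer(byteLength);
  var view = new DataView(buf);
  var offset = 0;

  view.setInt32(offset, vertices.length, true); offset += 4;
  new Float32Array(buf, offset, vertices.length).set(vertices);
  offset += vertices.length * 4;

  view.setInt32(offset, triangles.length, true); offset += 4;
  new Int32Array(buf, offset, triangles.length).set(triangles);
  offset += triangles.length * 4;

  view.setInt32(offset, metaBytes.length, true); offset += 4;
  new Uint8Array(buf, offset, metaBytes.length).set(metaBytes);

  return buf;
}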

It turns out to be simple and fast for browsers that support WebGL to read this binary format in JavaScript, because they already have to implement fast binary data buffers and typed arrays. To get and read binary data from the server, you create an XMLHttpRequest whose responseType is set to "arraybuffer". The data sent back will be an ArrayBuffer object, which is simply an array of bytes. After the request completes we parse the .jeff files with the readJeff function of this class:

var Buffer = propublica.utils.Buffer = function(buf){
  this.offset = 0;
  this.buf = buf;
};

Buffer.prototype.read = function(amt, type){
  var ret = new type(this.buf.slice(this.offset, this.offset + amt));
  this.offset += amt;
  return ret;
};

Buffer.prototype.readI32 = function(){
  return this.read(Int32Array.BYTES_PER_ELEMENT, Int32Array)[0];
};

Buffer.prototype.readI32array = function(amt){
  return this.read(amt*Int32Array.BYTES_PER_ELEMENT, Int32Array);
};

Buffer.prototype.readF32array = function(amt){
  return this.read(amt*Float32Array.BYTES_PER_ELEMENT, Float32Array);
};

Buffer.prototype.readStr = function(amt) {
   return JSON.parse(String.fromCharCode.apply(null, this.read(amt*Uint8Array.BYTES_PER_ELEMENT, Uint8Array)));
};

Buffer.prototype.more = function() {
  return this.offset < this.buf.byteLength;
};

Buffer.prototype.readJeff = function(cb){
  while(this.more()) {
    var points     = this.readF32array(this.readI32());
    var triangles  = this.readI32array(this.readI32());
    var meta       = this.readStr(this.readI32());
    cb(points, triangles, meta);
  }
};
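
A minimal usage sketch, with a made-up URL and variable names of ours: fetch a .jeff file as an ArrayBuffer and hand each record to a callback.

// Fetch a .jeff file as raw bytes and parse it (illustrative names only).
var xhr = new XMLHttpRequest();
xhr.open('GET', '/data/coney-island-buildings.jeff', true);
xhr.responseType = 'arraybuffer';            // ask for an ArrayBuffer, not text
xhr.onload = function() {
  var buffer = new propublica.utils.Buffer(xhr.response);
  buffer.readJeff(function(points, triangles, meta) {
    // points: Float32Array of x,y pairs; triangles: Int32Array of indices;
    // meta: parsed JSON metadata for the shape.
    console.log(meta, points.length / 2 + ' vertices');
  });
};
xhr.send();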

You'll notice that we are only sending x and y coordinates over the wire. Along with the triangle and point arrays we send to the client, we also send a bit of metadata that defines the height for the shape (either flood zones or building footprints) as a whole. Once the data arrives on the client, a web worker extrudes the buildings into the 3-D shapes displayed in the browser. The final view shows a lot of data. For example, Coney Island alone has almost 200,000 triangles.
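
Here's a rough sketch of the wall-building part of that extrusion step, assuming a footprint given as a flat array of x,y pairs and a height pulled from the metadata. The real worker also built roofs (triangulated as described above) and handled more detail; the names here are ours.

// Build wall triangles for a footprint extruded from the ground to `height`.
// footprint is a Float32Array of x,y pairs; returns a flat array of x,y,z
// vertex positions, two triangles (six vertices) per footprint edge.
function extrudeWalls(footprint, groundElevation, height) {
  var walls = [];
  var count = footprint.length / 2;
  for (var i = 0; i < count; i++) {
    var j = (i + 1) % count;                        // next vertex, wrapping around
    var x1 = footprint[2 * i], y1 = footprint[2 * i + 1];
    var x2 = footprint[2 * j], y2 = footprint[2 * j + 1];
    var bottom = groundElevation, top = groundElevation + height;
    // First triangle of the wall quad.
    walls.push(x1, y1, bottom,  x2, y2, bottom,  x2, y2, top);
    // Second triangle of the wall quad.
    walls.push(x1, y1, bottom,  x2, y2, top,     x1, y1, top);
  }
  return walls;
}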

A Neighborhood Bakery

In order to slice up the map into neighborhoods, we wrote a script to iterate through all of our files and clip them to the same bounds, so they would stack up on the map like a layer cake. New zones, old zones, the city's boundary, all five categories of damaged buildings, coastline and street map data needed to be clipped, reprojected and turned into .jeff files at once. With 11 layers and seven neighborhoods, we baked out 77 files every time we tweaked the map. Because Postgres's PostGIS extension is the best way to dice shapes into squares, our script created temporary tables for each layer of each neighborhood, and ran them through a query like so:

SELECT *,
  ST_Intersection(
    ST_Transform(the_geom, $NJ_PROJ),
    ST_SetSRID(
      ST_MakeBox2D(
        ST_Point($ENVELOPE[0], $ENVELOPE[1]),
        ST_Point($ENVELOPE[2], $ENVELOPE[3])
      ), $NJ_PROJ)
  ) AS clipped_geom
FROM
  $TMP_TABLE
WHERE
  ST_Intersects(
    ST_Transform(the_geom, $NJ_PROJ),
    ST_SetSRID(
      ST_MakeBox2D(
        ST_Point($ENVELOPE[0], $ENVELOPE[1]),
        ST_Point($ENVELOPE[2], $ENVELOPE[3])
      ), $NJ_PROJ)
  )
AND
  ST_GeometryType(
    ST_Intersection(
      ST_Transform(the_geom, $NJ_PROJ),
      ST_SetSRID(
        ST_MakeBox2D(
          ST_Point($ENVELOPE[0], $ENVELOPE[1]),
          ST_Point($ENVELOPE[2], $ENVELOPE[3])
        ), $NJ_PROJ)
    )
  ) != 'ST_GeometryCollection';

In the above, $NJ_PROJ is EPSG:32011, a New Jersey state plane projection well suited to coastal New York City, and $ENVELOPE[0] through $ENVELOPE[3] are the bounds of each neighborhood.

The query grabs all the geographical data inside a box of coordinates. We took the result of that query and used the pgsql2shp tool to create a shapefile of each one, 77 in all. We then ran each of those through the script that baked out .jeff files. When that was done, we had 353 files, including all of the additional files that come along with the .shp format. To speed up the process, we used a Ruby gem called Parallel to spread the tasks over our iMacs' eight cores. To make sure the parallel tasks didn't stomp on each other's temporary tables, our baker script created a unique, random table name for each shape in each neighborhood and dropped the table after it finished baking.

For the building shapes, we needed to use our elevation raster to record the ground elevation at each building's centroid. Fortunately, GDAL has a command line tool that makes that trivial. For any raster GDAL can read, you can query the values encoded within the image with:

gdallocationinfo raster.tif [x] [y] -geoloc

We issued that command while the baker was running and stored the result in the metadata we sent to the browser.

Making the Map

In order to display all this data on the web, we relied on a fairly new web standard called WebGL. WebGL is an extension to the canvas tag that is supported by certain browsers and allows JavaScript to access OpenGL-like APIs to process 3-D information directly on a computer's graphics card. The API is very complex, so we used lightgl.js, which provides a very nice API that is closer to raw WebGL than something like three.js.

To organize things a bit, we created individual objects we called Scenes for each of the neighborhoods we wanted to show on the map. Each scene had what we called Shapes for Buildings, Flood, Flood Zones, Terrain, and Earth.

For Buildings and Flood Zones, we used the binary data format described above to build a 3-D representation that we uploaded to the graphics card.

But for the Terrain and Flood scenes, the elevation of the earth and the storm surge extent, we sent a specially encoded image to the browser that contained height information encoded as a network-byte-order float in each pixel's red, green, blue and alpha values. We wrote a little tool to encode images this way. Here's an example:
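
One way to read such an image back on the CPU looks like the sketch below, assuming each pixel's four channels hold the bytes of one big-endian ("network order") 32-bit float. The map itself sampled these textures on the GPU, and the helper name is ours.

// Decode one height value from the RGBA bytes of pixel (x, y) in an image
// that has been drawn to a canvas. Assumes big-endian byte order.
function heightAtPixel(ctx, x, y) {
  var rgba = ctx.getImageData(x, y, 1, 1).data;      // [r, g, b, a]
  var view = new DataView(new ArrayBuffer(4));
  view.setUint8(0, rgba[0]);
  view.setUint8(1, rgba[1]);
  view.setUint8(2, rgba[2]);
  view.setUint8(3, rgba[3]);
  return view.getFloat32(0, false);                  // false = big-endian
}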

In WebGL, you don't actually manipulate 3-D models as a whole; instead, you upload to the graphics card small programs called “shaders” that operate in parallel on the 3-D data you've previously sent to the graphics card. We implemented both kinds of shaders, vertex and fragment. When a browser couldn't compile one of our shaders (for instance, not all browsers and video cards support reading textures in vertex shaders), we redirected to a fallback version of the map we called “lo-fi.”
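
The compile check itself is standard WebGL. Here's a sketch of the kind of guard we mean; the fallback URL is a made-up placeholder.

// Compile a shader and fall back to the "lo-fi" page if the browser or
// graphics card can't handle it. `gl` is a WebGL rendering context and
// `type` is gl.VERTEX_SHADER or gl.FRAGMENT_SHADER.
function compileShaderOrFallback(gl, type, source) {
  var shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    console.log(gl.getShaderInfoLog(shader));        // why the compile failed
    window.location = '/flood-map/lofi/';            // hypothetical lo-fi URL
    return null;
  }
  return shader;
}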

After the geometric data was processed in the graphics pipeline, we did a bit of post-processing to anti-alias harsh lines, and we added a bit of shading to make the buildings stand out from one another using a technique called Screen Space Ambient Occlusion. You can play with the settings of our shaders by visiting the maps in debug mode.

In order to make the little flags for landmarks like Nathan's Hot Dogs and the Cyclone, we built up an object mapping the New York City BINs (building identification numbers) of interesting buildings to their descriptions. For these BINs, we added bounds to the metadata section of the buildings .jeff file. Once we had those bounds we could use lightgl's project method to attach an HTML element (the flag) to the DOM near where the building was shown in canvasland. Whenever the user moved the map, we would reproject the flags so they would move along with the underlying map.
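
Roughly, that reprojection looks like the sketch below, assuming lightgl's project returns gluProject-style window coordinates with the origin at the bottom-left; the element and helper names are ours.

// Position an absolutely-positioned flag element over the spot where a
// building sits in the WebGL canvas. `gl` is lightgl's context.
function positionFlag(gl, flagElement, buildingCenter) {
  // Project world coordinates into window coordinates.
  var screen = gl.project(buildingCenter.x, buildingCenter.y, buildingCenter.z);
  flagElement.style.left = Math.round(screen.x) + 'px';
  // Window coordinates start at the bottom-left; CSS starts at the top-left.
  flagElement.style.top = Math.round(gl.canvas.height - screen.y) + 'px';
}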

Maintaining State

The user interface for the maps is pretty complicated; we have seven different areas, each with three different views. Originally we had set up an ad hoc way of tracking state through a lot of if statements, but when this became unwieldy we wrote a small state machine implementation we called the KeyMaster.

State machines are one of the best tools in a programmer's toolbox. As their name implies, they maintain state, and they work by defining transitions between those states. So, for example, KeyMaster could define these states and transitions:

KeyMaster.add('warn').from('green').to('yellow');
KeyMaster.add('panic').from('yellow').to('red');
KeyMaster.add('calm').from('red').to('yellow');
KeyMaster.add('clear').from('yellow').to('green');
KeyMaster.initial('green');

and transition between states like this:

KeyMaster.warn();
>> 'yellow'
KeyMaster.panic();
>> 'red'
KeyMaster.calm();
>> 'yellow'
KeyMaster.clear();
>> 'green'

KeyMaster also has events that fire before and after transitions, which we used to clean up and remove the current state.
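
For readers who want to try the pattern, here's a minimal sketch of a KeyMaster-style object in the spirit of the API above. It's our illustration, not ProPublica's actual implementation.

// A toy KeyMaster-style state machine (our sketch, not the original code).
var KeyMaster = {
  transitions: {},
  callbacks: { before: {}, after: {} },
  current: null,

  // KeyMaster.add('warn').from('green').to('yellow') defines a transition
  // and a KeyMaster.warn() method that performs it.
  add: function(name) {
    var self = this;
    var t = this.transitions[name] = { from: null, to: null };
    this[name] = function() {
      if (self.current !== t.from) return self.current;   // illegal transition
      if (self.callbacks.before[name]) self.callbacks.before[name]();
      self.current = t.to;
      if (self.callbacks.after[name]) self.callbacks.after[name]();
      return self.current;
    };
    return {
      from: function(state) { t.from = state; return this; },
      to:   function(state) { t.to = state;   return this; }
    };
  },

  initial: function(state) { this.current = state; },
  before:  function(name, fn) { this.callbacks.before[name] = fn; },
  after:   function(name, fn) { this.callbacks.after[name] = fn; }
};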

The "Lo-Fi" Version

For users who aren't on WebGL-capable browsers, we also made a "lo-fi" version of the map, which is simply a series of images for each neighborhood. To generate the images, we wrote a little tool to automatically take and save snapshots of whatever the current map view was. This was easy thanks to the Canvas API's toDataURL method. In the end, this version looked exactly like the main view, except you couldn't change the zoom or angle. We left the snapshotter in the secret debug version of the app: you can save your own snapshots of the current map view by hitting the "take snapshot" button in the bottom left corner.
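
The snapshot itself boils down to a couple of lines. A sketch with names of ours; note that the WebGL context needs preserveDrawingBuffer set to true, or the capture has to happen right after a draw, for the pixels to survive.

// Save the current WebGL canvas as a PNG (illustrative names only).
function takeSnapshot(canvas) {
  var uri = canvas.toDataURL('image/png');   // serialize the canvas pixels
  var link = document.createElement('a');
  link.href = uri;
  link.download = 'flood-map-snapshot.png';
  link.click();                              // prompt the browser to save it
}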

Our 3-D map of New York City is a taste of what we’ll be able to build once all browsers support it, and we think it helped us tell a story in a way that a 2-D map wouldn't have done as well. If you end up building a 3-D news graphic, let us know!
