Leopard Map Disassembly

Friends, it’s been a long while since I last wrote up a walkthrough of one of my mapping projects. So, today, let’s break down a piece that I made earlier this year for Scientific American magazine.

This is actually the first of three pieces that I’ve made for Scientific American this year. Much of my freelance work consists of making small, monochrome reference maps for academic books. SciAm, on the other hand, gives me an opportunity to design thematic maps, in color, for a large audience, and it’s a real treat to be able to work with them. Placing a map in such a prestigious publication does, though, make me even more self-critical and worried about the quality of my output than normal.

Data

To accompany an article about the interactions between humans and leopards in India, I was asked to show the audience two datasets: population density, and where leopards can be found. To start out, they were able to point me toward leopard range data. They also provided me with an early draft of the article, so that I could read through and find places that were mentioned and that might be worth highlighting. For showing human population density, I remembered the Kontur population dataset, which I learned about last year when it was the theme for Day 21 of the 30DayMapChallenge.

Failed Attempts

From there, I began to mess around with how best to combine these two datasets. The problem is that these are both areal datasets. For each place on the map, I had to show two pieces of data. That can be a real challenge, and I spent hours in a trial-and-error struggle to find something that worked cleanly. Here are a few screenshots I took of some of my failed solutions.

These colors, it’s important to note, are all placeholders. They were just there so I could test out the general idea of how to join these two datasets, and were not meant to represent what I thought would be good final color choices.

One thing I learned early on in my career: it’s a lot easier to encode data than to decode it. Technically, each of the above attempts encodes both datasets in a way that keeps them from occluding each other. But, that doesn’t mean they’re easy to read. It’s important that the two symbologies be separable, letting the reader easily follow patterns in one dataset without interference from the other. And these prototypes mostly fail in that regard. In the first one, for example, the color shows population, while the halftone pattern shows leopard status. But a reader would be hard pressed to perceive every red hexagon as symbolizing the same level of population density, given that the leopard data causes some to be solid red, and others to be a field of sparse red dots.

Consider also the third example. The color of the hexagons indicates whether leopards live in an area, but their size depends on how many humans live in that area. By combining both datasets into dimensions of a single symbol, we’re tempting the reader into misinterpretation. When someone looks at the yellow hexes (which show areas where leopards currently live), they might instinctively interpret the changing hexagon sizes as indicating the leopard population. Even if that person has read the map legend and knows that hexagon size does not relate to leopard population, they might still get tripped up sometimes. Our brains tend to want to interpret symbols in certain ways, and while we can certainly try to override that instinct, it’s best to go along with a “natural” reading if possible.

After a lot of attempts, I finally ended up with something much more workable. The yellow population density dots can easily be seen as their own layer, but they still leave the underlying leopard data layer visible and clearly separate.

Generalization

One important part of making this all work is the extent to which I generalized the data. The population data, for example, originally came in a 400m hexagon grid that made it, at my map’s scale, practically a raster. I aggregated it to a much, much larger one of 50km.
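
I handled this step with QGIS’s built-in tools rather than code (more on that in the comments below), but if you wanted to script the same idea, a rough sketch in Python with geopandas might look like the following. The file and column names are placeholders, and it assumes both layers are in the same projected CRS.

    import geopandas as gpd

    # Placeholder file names; I'm assuming the fine Kontur hexagons carry a
    # "population" column and the 50 km grid has a unique "grid_id" column.
    fine = gpd.read_file("kontur_population_400m.gpkg")
    coarse = gpd.read_file("hex_grid_50km.gpkg")

    # Represent each fine hexagon by its centroid so it lands in exactly one
    # coarse cell, then sum the population within each 50 km hexagon.
    fine_pts = fine.copy()
    fine_pts["geometry"] = fine_pts.geometry.centroid

    joined = gpd.sjoin(fine_pts, coarse[["grid_id", "geometry"]],
                       how="inner", predicate="within")
    pop_per_hex = joined.groupby("grid_id")["population"].sum()

    coarse["population"] = coarse["grid_id"].map(pop_per_hex).fillna(0)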

In order to get the population layer to play well with the leopard layer, I had chosen to show population using a grid of graduated symbols. But, a tiny grid would have left me with tiny circles, too small for anyone to discern differences. The variation in circle sizes needed to be large enough for a reader to see multiple size distinctions, and leave room enough for them to see the leopard data color underneath. So, the population had to be generalized into fairly large hexagons.
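
As for how population values become circle sizes: one common approach for graduated symbols is to scale the radius with the square root of the value (so circle area roughly tracks the data), interpolated between a minimum and maximum radius so the smallest dots stay visible. Here’s a quick sketch of that idea; it’s a generic technique, not necessarily the exact scaling on the finished map.

    import numpy as np

    def symbol_radii(values, r_min=1.5, r_max=14.0):
        """Map data values to circle radii (e.g. in points).

        Square-root scaling, interpolated between a minimum and a maximum
        radius so that small values remain visible.
        """
        values = np.asarray(values, dtype=float)
        scaled = np.sqrt(values / values.max())
        return r_min + scaled * (r_max - r_min)

    # Example with made-up population totals for three hexagons
    print(symbol_radii([20_000, 400_000, 2_500_000]))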

I also chose to align the leopard data to those same hexagons. The original dataset came in the form of smooth polygons, but it looked cleaner, visually, to align everything to the same grid as the circles. This also ensured that the circles wouldn’t obscure any boundaries between classes in the leopard data.

Finally, I decided to reduce the leopard data from four classes to three. The original polygons that I downloaded were tagged as showing where leopards were (1) extant, (2) possibly extant, (3) possibly extinct, and (4) extinct. The middle two categories carry nuance that might be useful to an expert, but for a general audience, I thought it best to merge them into a single area marked “uncertain.” This makes designing the map easier: I don’t have to worry about finding a way to clearly fit yet another color or pattern into an already busy piece. It also means less cognitive load on the reader, as they don’t have to busy themselves paying attention to a data distinction that doesn’t support the main subject of the map.
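
For the scripting-inclined, the reclassification and grid-alignment steps could be sketched roughly like this (geopandas again, with placeholder file, column, and class names rather than the exact values in the source data): merge the two “possibly” categories into “uncertain,” then give each hexagon whichever status covers the most of its area.

    import geopandas as gpd

    leopard = gpd.read_file("leopard_range.gpkg")   # polygons with a "status" column
    coarse = gpd.read_file("hex_grid_50km.gpkg")    # the same 50 km grid as the dots

    # Four source classes collapsed into three
    merge_classes = {
        "extant": "extant",
        "possibly extant": "uncertain",
        "possibly extinct": "uncertain",
        "extinct": "extinct",
    }
    leopard["status3"] = leopard["status"].map(merge_classes)

    # Intersect the grid with the range polygons, then keep whichever class
    # has the largest overlapping area within each hexagon.
    pieces = gpd.overlay(coarse[["grid_id", "geometry"]],
                         leopard[["status3", "geometry"]],
                         how="intersection")
    pieces["overlap"] = pieces.geometry.area
    biggest = pieces.loc[pieces.groupby("grid_id")["overlap"].idxmax()]
    coarse["leopard_status"] = coarse["grid_id"].map(
        biggest.set_index("grid_id")["status3"])    # NaN = no leopard data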

The real world is messy, and generalizing it into apprehensible conclusions is the point of cartography. Not all complexity is bad, but every added bit of information makes the map slower and more difficult to interpret, so it’s important to strike a balance that leaves the main conclusions clear. And, again, this is for a general audience, not scientists planning the details of their next field trip, so this is a good situation to prioritize communicating clear, broad patterns.

Other notes

A few other odds and ends come to mind as far as the overall design goes:

  • I’ve lately become fond of preparing orthographic locator maps (i.e., ones that look like a globe). This was something I was taught about as a student at the University of Wisconsin Cartography Lab, but in the intervening years, I never really made much use of it for whatever reason. Tastes change, and presently mine is aligned in that direction.
  • Layout-wise, I originally had planned to put the locator map on the bottom. Here’s the layout plan I initially sent, vs. how things turned out:
  • But, it was pointed out to me that the globe should be up top, so that a reader proceeding from top-to-bottom (as they especially would on a mobile device) can see it first and orient themselves before engaging with the main map.
  • I decided to go with a dark color scheme for this one, mostly because I don’t usually get to play in that space too much. Map printing tends to lend itself to thinking in terms of dark data upon a light page, and I probably stick to that as much as anyone. It’s nice to have a change of pace.
  • As is default for me, I made sure that the colors on this map were legible to people with color vision impairments.

Layer-by-Layer

Let’s break down the Illustrator file for the main map. We start with a simple background fill of 20C 90K.

Then we add a layer for the various colored polygons indicating leopard status (plus a no-data background that acts as a default fill color for the land). Everything is clipped against the land boundary. Note also that the whole layer has a dark outer glow applied to it, just to help further separate it from the background.

Atop that we have a couple of lakes. I probably could have saved myself a layer and incorporated them into the clipping mask for the leopard status layer. I think this is a remnant left over from when I was still experimenting with what I wanted the style to be.

Then we have boundaries for a few states that were mentioned in the draft article. These are likewise clipped to the land, to prevent the line ends from poking out. They’re also at 90% opacity, just to push them a little bit more into the background.

Atop all that we have the population dots, with a subtle glow. This bit of shadowing helps make them seem more in the foreground against the leopard data.

And then I have a couple of markers for points of interest mentioned in the article. Simple black-and-white diamonds. They are point symbols attempting to distinguish themselves amidst a sea of population dots, and so they don’t stand out very well. They would probably do better if I had a different population symbology.

But, the type layer, the final piece of the map, helps with that. Word bubbles featuring high-contrast labels draw attention to the points of interest.

Note also that the other labels have a subtle glow, and the thinner state labels also knock out the surrounding dots, to make everything cleaner and more legible.

Final

Once everything was together, I handed the main map to Scientific American, along with a separate file for the locator map, and another for the legend. Then they took over and finalized things:

They added the title, annotations, and chart in the lower right, and also made some adjustments to the map content (removing a couple of states and un-curving my leader lines). This is pretty normal; the client has final editorial control over the map and sometimes makes changes themselves based on their style and also the current state of the accompanying article. This is why I provided files for each page element separately, to give them flexibility in laying everything out.

I also made sure to provide a healthy amount of extra map content, so that they could frame things a little differently if they chose.

Thanks for journeying with me through this breakdown of a fun project. I love doing magazine work, and it’s always an interesting challenge. It’s also an honor to be able to put my maps in front of such a large audience.


Thanks to all my patrons, whose support helps keep this blog going! Having this outlet is especially important in a time where my ability to reach an audience via other media (Twitter, mainly) is fading. If you’d like to support my efforts, you can donate via the links below. Also, just spreading the word about my work is a big help! Finally, consider subscribing, to make sure that you catch every post here.

9 thoughts on “Leopard Map Disassembly”

  1. I always find it interesting and inspiring to read your posts!
    As a programmer and not at all a designer, I always wonder how the use of vector tile maps where we can define our own styling, like Mapbox and OpenLayers, will change the world of mapmaking (or perhaps it has already changed it?).

    My thought is that exporting layers to SVG, then importing them into Illustrator or Photoshop, leads to static, single-use maps which are hard/time-consuming to change/update if the dataset changes. Defining all these styles in OpenLayers/Mapbox instead gives an interactive, generic map that can be used anywhere in the world with any dataset, even live-updating.
    – This is what I ended up doing when I wanted to create a skiing map for the family cabin -> using Mapbox styles, where the end goal is to be able to use the same style everywhere in the world and then export to a printable PNG: https://mathiash98.github.io/posts/hjemmelaget-turkart/ . This map is very simple, so the styling did not require advanced aggregates and labels like your maps usually do.

    What are your thoughts on doing advanced map making and styling in code (vector tile styling) versus using Photoshop? Is it possible, practical, slower/quicker?

    1. Thanks for the kind words! Tile maps (originally raster, now vector) caused major changes in the professional world of cartography as they started to become possible about 10–15 years ago. Many cartographers now produce interactive maps, or otherwise use Mapbox/Carto/similar tools. However, many cartographers still produce static maps, as each workflow is valuable in different circumstances.

      Static maps are still very common in cartography, as most maps do not need to be updated regularly on the web. Instead, they appear in newspapers, books, etc. Or, if they appear on websites, they might be simple maps that do not need regular updates that would make it worth writing code. The tools I use make it easy to create something customized to the needs of each of my clients; most of the maps I make are unlike previous maps that I have made, and so there’s not much I could re-use.

      Tile maps are designed to be generic, as you said, and that makes them useful for a lot of purposes. But, there are tradeoffs. They tend to be harder to customize to the same level as a static map. If I need to re-shape a feature, or adjust the curvature of a label, those can be time-consuming, or maybe not even possible in a tile map. Whereas in Illustrator, I can move things, re-draw them, quickly recolor them, etc. I am not sure it would even be possible to make the India map in this post, using any tile map services. At least not in the same way.

      Tile maps, presently, are simply less flexible, because they are programmatic. They are built to make it easy to handle a lot of data, map the whole world at multiple scales, and allow quick updates. But, they can do this because they impose constraints. It’s the classic trade-off between efficiency and customization. We build tools that make it possible to do common things quickly, and then uncommon things get left to artisans ;). I think tile maps will continue to evolve and improve, but there’s not likely enough demand for someone to go to the trouble of designing them to be able to make almost anything that exists in my portfolio. You could make versions of the things you see in my portfolio, but there would likely have to be sacrifices to adapt them to the constraints of what the tile map system allows.

      I hope that makes sense!

      1. Thanks for the thorough response, this will ease my mind when I see data analysis and cartography being done in Photoshop :)

  2. Thanks for the content. I always learn a lot from your posts.
    By the way, do you have any suggestions on how to learn how to make hexagonal maps combining QGIS/ArcGIS and Illustrator?

    1. Glad my content has been helpful! QGIS/Arc both have tools to generate hexagon grids. In QGIS you can find that under Vector –> Research Tools –> Create Grid. In Arc it’s the Generate Tessellation tool.

      Getting your dataset aggregated into the hexagon grid depends on the data. You can have vector hexagons sample a raster dataset using Zonal Statistics in Arc or QGIS, assigning each hexagon the mean/median/majority/etc. of the parts of the underlying raster that it overlaps. To join to a vector instead, you could do a spatial join (or convert your vector to a raster first).
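
      If you’d rather script it, here’s a very rough Python sketch of both cases: zonal statistics via the rasterstats package, and a point-in-polygon spatial join with geopandas. File and column names are just placeholders.

          import geopandas as gpd
          from rasterstats import zonal_stats

          hexes = gpd.read_file("hex_grid.gpkg")

          # Raster case: mean of the raster cells that each hexagon overlaps.
          # zonal_stats returns one dict per feature, in file order.
          stats = zonal_stats("hex_grid.gpkg", "population_density.tif", stats="mean")
          hexes["density"] = [s["mean"] for s in stats]

          # Vector case: join points to the hexagons they fall in, then aggregate.
          points = gpd.read_file("population_points.gpkg")
          joined = gpd.sjoin(points, hexes, how="inner", predicate="within")
          hexes["pop_sum"] = joined.groupby("index_right")["population"].sum()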

      From there you’ll have values assigned to your grid, and you can generate a choropleth or proportional symbol map as you normally would. I did my final color choices in Illustrator after I brought everything in, but I got the data prepared and classified in QGIS.

      Hope that helps!

  3. These steps were done in QGIS, actually. I used the “Create Grid” tool to generate the hexagons. There is a similar tool in ArcGIS, as well. It’s been a while, but I believe I took my population data (a raster), and performed Zonal Statistics on it. The zones were defined by the hexagons, and QGIS calculated the average pixel value for each individual hexagon, so that I had an overall density calculation. I then found centroids for each hexagon, and sampled the raster at those points to get the values. Then I could use that to adjust the dot sizes.
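
    For anyone who wants a scripted equivalent of that centroid-sampling step, a rough Python sketch (rasterio plus geopandas, with placeholder file names, and assuming the grid and the raster share a CRS) would be:

        import geopandas as gpd
        import rasterio

        hexes = gpd.read_file("hex_grid_50km.gpkg")
        centroids = hexes.geometry.centroid

        # Sample the density raster at each hexagon centroid; those sampled
        # values can then drive the sizes of the population dots.
        with rasterio.open("population_density.tif") as src:
            coords = [(pt.x, pt.y) for pt in centroids]
            hexes["density"] = [v[0] for v in src.sample(coords)]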
