I had a whimsical idea a few months ago: map projection trading cards. Something nerdy and map-related that you could collect and exchange at conferences. I poked around at some design ideas for a while, and here’s what I’ve come up with so far. This is the front side of the card:
Most of the card will be taken up by a map that shows off the given projection (the stripes are not part of the design). And if you notice down at the bottom, it says “Cartographer Name” — because I don’t want to do all of these myself. Instead, I’m hoping some of you out there will make the maps (more on that below). This is not entirely laziness on my part: it’s also very much in the spirit of collectible card games like Magic: The Gathering, which feature art in a variety of styles from a host of contributors.
Another thing I borrowed from collectible card games: icons in the upper right, which tell you a projection’s class or notable properties. I could have just used words to tell you if a projection is conformal or conic. That would be clearer, and better in most other circumstances. But icons fit the spirit of what I’m going for, and are more fun. The back side of the cards will have a uniform design, and a guide to what each icon means. I have that planned out, but I’ll keep it secret for now, as it’s still in flux.
These designs are 2.5×3.5 inches, which is fairly standard for trading cards. Not-so-standard is the landscape orientation. But, it’s probably best for map-related content.
As I said, I’d like your help! I want to show off work from a variety of mappers. If you’d like to design a map for the project, here’s a sign-up sheet! I’ll contact you with details to get you started. I’m hoping to receive designs from everyone by about May 15.
I have no idea how many I’m going to print, or how I’ll get them out to the world. I may carry them around and distribute them to lucky people at conferences. Or maybe I’ll set up something where I can do a one-time sale of packs to people (at cost). Stay tuned!
While many people use my Blender shaded relief tutorial, and associate me with the software, I’m really not much of a 3D cartographer. I make oblique-view maps only rarely, and when I do, it’s usually in a simplified, abstract style, rather than the detailed, naturalistic representations that people like Tom Patterson produce. Here’s a not-quite-finalized draft that I’ve been working on for a client’s book.
I make oblique views in this style mostly because they’re much easier and faster to produce than a more realistic terrain view (one with trees and shimmering water and clouds) — they fit well in projects where a client needs a 3D view, but there isn’t the time or budget to produce something very detailed. Also, I make these because I’m not yet particularly skilled in making the fancier realistic maps, though I’ve made some attempts.
So, let’s walk through how I make one of these. There’s a lot to cover, but once you get the hang of it, they’re not too complicated. And, if you want to work your way toward eventually making more photorealistic terrain renders, much of what we discuss here will be useful to you.
This tutorial presumes that you’re familiar with the basics of using Blender and that you’ve gone through my shaded relief tutorial or something similar. It also presumes some familiarity with the basics of Photoshop or other raster image editing software.
To keep things simple, we’re just going to pick right up where we left off in my previous tutorial. You could start from scratch, but I find it easier to just load up an old shaded relief Blender file, change out the DEM, and make use of some of the work I’ve already done. So, go ahead and set up everything like you were preparing a shaded relief: load in the DEM you’d like, get your plane resized, etc.
If we want to make an oblique view, the first thing we’ll need to do is set up the camera; the settings that we were left with from the shaded relief tutorial aren’t quite right for this job. The camera is in the wrong position (looking straight down at our terrain), and it’s got the wrong lens. If you remember from Part 4 of the previous tutorial, we had to switch our camera from a perspective to an orthographic view, to avoid perspective distortions that would make our shaded relief look weird. Now it’s time to switch it back. We need the more realistic perspective view here; we want our camera to operate more like an ordinary real-life camera that most of us would use.
So, select your camera, then head on over to the Object Data Properties panel by clicking the green icon of an old movie camera on the left of the panel. Then change the lens type to Perspective.
Next up, we need to reposition our camera. Right now, you’ll see it’s pointing straight down at the terrain, because that’s what we needed for our shaded relief. But, for an oblique view, we want to be looking at the terrain from an angle.
Getting the right view takes some trial and error, and some exploration. You’re going to reposition the camera and change its orientation, eventually finding the view you’d like. If you select your camera, then choose the Object Properties panel (the orange square), you can adjust the Location and Rotation of your camera by plugging in whatever numbers you like, or by clicking on a number and dragging your mouse left/right. Play around with those for a moment to get comfortable. As you change the numbers, look at the 3D Viewport to see how the camera moves around.
But how do you know if it’s in the position you want? We can’t see the terrain, just a flat plane. You could make test renders over and over again, but that takes a lot of time. Instead, let’s look at a rough version of the terrain. At the bottom of the 3D Viewport (or maybe the top; it seems to move around when I look at Blender on other people’s computers), you should see some icons of spheres. If you mouse over them, they’ll probably say something about “Viewport Shading.” Click the last of the four icons, the one that looks like a sphere with little shadow lines.
Don’t see the icons? No problem. Click on that bottom toolbar with your middle mouse button (or alt + left button, if you’ve turned on 3-button emulation in the last tutorial), and drag it until you find the icons.
Now in your 3D viewport, you should see a rough rendered version of your terrain. It may look comically exaggerated — the level of vertical exaggeration needed for shaded relief is usually much higher than will look realistic in an oblique view. So, you may want to tone that down until it looks right (remember that’s by adjusting the Scale property of the Displacement node on your plane’s material).
Now that we can see our terrain, figuring out where to put the camera gets a little easier. You may remember from the last tutorial that we have a shortcut for looking through the camera’s lens to see what it sees. To do this, find the View menu, choose Cameras, then Active Camera. You can also hit 0 on the numeric keypad.
Now you see what the camera sees, and as you change its position, you can start to home in on the area you want to look at. Take a minute to play around and get closer to the terrain (but don’t go to the trouble of finalizing your position yet; we’ll talk about some tools that will help make that easier).
In the 3D viewport, you’re only going to see a simplified terrain, since Blender is trying to quickly show this to you on the fly, rather than doing a fully detailed render. Blender is also simplifying your DEM, cutting out a lot of details to make it render faster. But, as we zoom in, it doesn’t get more detailed. It would be nice to see a less-simplified terrain so that we can fine-tune our camera position. An easy way to do that is to go back to those Viewport Shading buttons (the four spheres) and click on another sphere to switch to another mode (any of them will do), and then switch back to Rendered mode (the 4th sphere). Notice how more detail comes in.
Why does turning the Rendered view mode off and on again help? Well, when we first turned Rendered mode on, your camera was probably far from the terrain. And Blender said, “I’ll load in just enough of your DEM data to make this quick preview look fine at that distance.” As we zoomed in, it didn’t update the level of detail to fit our new camera position. So, by turning the mode off and on, we force it to re-evaluate how much DEM detail it needs to load in. Note that, no matter where your camera is, Blender will always use the full detail when you’re doing a final render by going to Render→Render Image as we normally do. What I’m talking about here is just Blender’s attempt to show you a quick preview in the 3D Viewport with as little data as possible.
Now, there’s one other way to move the camera around which you will probably find useful. Instead of plugging in numbers, you can click and drag around the scene. If you press G (for “grab”), and move the mouse, you’ll start moving the camera around the scene (make sure you’ve got the camera selected and not the plane; I often make that mistake). This can help you position things. Hit Enter to finish grabbing, or hit Escape to go back to your old position. If you stop moving for a moment, Blender will catch up with you and fill in a more detailed rendering.
Another useful tip: you can constrain movements when you grab things. Let’s say you want to move the camera to the left. It’s hard to do that when grabbing, because your mouse might slip and you might shift up or down. It’s also not really possible by adjusting the Location values for your camera, because shifting your view left/right would require adjusting both the x and y position of the camera (since your camera is pointing at the terrain at an angle). The solution for this is to use shortcut keys that constrain our camera movement. Hit G to grab, and then hit X. Notice near the bottom of the screen you’ll see a message that says “along global X” — this means that now, you can only move the camera along the X direction of our entire scene. You may even see a line show up on screen showing you the X-axis of the scene, and as you drag, you’ll only move along that line.
Hit X a second time. Now the message says “along local X.” Now our grab movement is constrained to the X-axis of our camera’s view, not the entire scene. A line will probably show up, showing you where you’re constrained to. As you drag the mouse, note that you can only move left/right.
Constraints like this are really handy to position your view right where you want it. There are also constraints for the Y and Z directions. Moving along the global Z would be raising/lowering your camera relative to the terrain. Moving along the local Z would be pushing your camera forward or pulling it back along the direction it’s facing, effectively zooming into the terrain or zooming out.
There are still more constraints you can use, if those aren’t enough. While grabbing, if you hit Shift-X (or Shift-Y/Shift-Z), you can lock a direction. So, for example, you could allow your grab action to move the camera, but lock your global Z, so that it never gets any higher or lower in its elevation above the terrain. And you can hit Shift-X (or -Y or -Z) twice to lock the local direction, rather than the global one.
Finally, besides grab, you may want to also try pressing R to rotate the camera. This operates just like grab, letting you drag the mouse around to change the camera’s orientation, and you can likewise constrain yourself to (or lock) any axis you want. I don’t use this one as often, but it can be handy. Usually I rotate the camera by plugging in specific numbers, though, as it’s easier to wrap my brain around.
Ok, that’s a lot to take in, so I encourage you to spend a little bit of time messing around with all those options and getting comfortable moving around.
Now, what if you’re unhappy with the shape of your camera’s window on the terrain? Well, that’s tied directly to your final render size. When we were doing shaded relief, we set it the same size as our plane. But here, it can be anything you want! Head on over to the Output Properties panel and adjust the X & Y resolution of your image. Remember, that’s the icon that looks like a photo coming out of a printer. Your camera view will change as you adjust your output dimensions.
Ok, now that you’ve got that set, it’s finally time to set up our actual terrain view. Using the grab/rotate tools, and/or plugging in numbers on the camera’s Object Properties panel, get your camera positioned where you’d like it. You might also decide at this point to tweak your terrain’s vertical exaggeration, as it might look too high or low as you get close to it. It’s an iterative process to get things looking right.
Here’s where I ended up with my own terrain: a 16:9 aspect ratio, with some mountains on the left side and valleys on the right.
If you’d like, you can do a render of this right now to see it at full detail (or as much detail as your DEM supports — since you’ve zoomed in so much on the terrain, your DEM will need greater resolution than if you were viewing the whole thing from far above). Mine’s looking pretty smooth — about as smooth as the quick preview in the 3D viewport, really. But, for my tutorial purposes, that’s going to be just fine. And, for a lot of cartographic purposes, simplified terrain is preferable to detailed terrain.
Now, what we’ve just done here — getting the camera set up — is the starting point for basically any oblique-view map you want to make, whether it’s the simplified look that I showed you at the start of the tutorial, or it’s a photorealistic render with beautifully detailed trees, clouds, and streams. But we’re going to stick to the simplified look.
The main reason I use Blender to render shaded relief is that it lights the terrain realistically. To do this, it calculates the path of light rays coming into the terrain and bouncing around, the way real light does. That’s something 3D software can do that a more traditional hillshade algorithm can’t. Likewise, you can use Blender (and similar programs) to create highly-detailed, lifelike oblique terrain maps, because it has this capability of simulating realistic lighting and materials, down to the level of giving each individual tree its own shadow.
Now we’re going to take all those powerful capabilities and completely ignore them. We won’t end up caring at all what our final render looks like. If you felt like it, you could even go delete your light source right now from this file, because we won’t need it (though it would be hard to position your camera correctly in the dark).
The main thing that we need from Blender, instead, is a depth map. Put simply, this is an image that encodes how far from the camera our terrain (or other object) is. Here’s the example from that linked Wikipedia article, by user Dominicos:
On the left is an ordinary 3D view of a cube, with lighting and shadows and such. On the right is a depth map. Darker areas mean that the cube is closer to the camera. Lighter is farther away. Here’s an example with my chosen terrain view:
In the depth map on the right, you can see how the closest mountains are darker, and the scene fades away into white as it gets farther back. This, by itself, is pretty visually interesting and fun to play around with. But it’s just our starting point.
Now, since the depth map just shows us what’s near and what’s far away, our lighting and materials don’t really matter. If you removed the light from this scene, or changed the color of the terrain, that wouldn’t change how far away the mountains are from the camera.
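That’s really the whole idea: depth is just distance from the camera, computed without any reference to lights or materials. Here’s a toy sketch of the concept (the camera position and surface points are invented for illustration, not taken from Blender):

```python
import math

def depth_map(camera, points):
    """Toy depth map: each value is just the straight-line distance
    from the camera to a surface point. Lighting and color never
    enter the calculation."""
    return [math.dist(camera, p) for p in points]

# Hypothetical camera 10 units up, looking at three surface points:
print(depth_map((0, 0, 10), [(0, 0, 0), (0, 0, 5), (30, 40, 10)]))  # → [10.0, 5.0, 50.0]
```

Change the scene’s colors or delete its lights and these numbers stay exactly the same, which is why we can ignore all of Blender’s lighting machinery from here on.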
So, let’s get Blender to give us a depth map. First off, we’re going to tell it to start calculating the depth every time we render things. It doesn’t necessarily do this by default. Head on over to the View Layer Properties panel (the icon that looks like a stack of images), and go to Passes→Data, then check Z. A “Z Pass” is another name for a depth map.
Next, you’re going to want to go to the Compositor window. To get there, click down in the bottom corner, in the same place you’ve previously used to change between the Shader Editor and the 3D Viewport. Once you’re there, check the box for Use Nodes.
Here, you should see something that looks somewhat familiar. The Compositor looks a bit like the Shader Editor, in that we can plug various nodes into each other, which instructs Blender to carry out certain operations. The difference here is that the Shader Editor was all about setting up the material our plane was made from, and adding terrain to it. The Compositor is a window used for post-processing. It acts much like Photoshop and other raster image editing programs. Once Blender is done making a rendered image, you can use this window to have it do all sorts of things, like adjusting colors, mixing images together, adding filters, and so on.
The big box on the left represents our rendered image, and right now the image output is connected to the image input of the Composite node. This just means “take the image of the render, and then send it directly into the final image output location, without any changes.” It just makes sure your rendered image, with all its fancy lighting, shows up at the end.
You may notice the Depth output on the Render Layers node. This is our depth map, exactly what we’re looking for. When Blender renders a scene, it also calculates the depth map for that scene. But how can we see it? Well, let’s plug that Depth output right into the Image input, so that instead of seeing our terrain render at the end, we see our depth map. Now render your scene; you should see the render at first (while Blender calculates all that realistic lighting that we no longer care about), but at the end it will get replaced with the Depth map, because we’ve told Blender that’s what we want as our final image.
Before we go any further, let’s take a moment to make this whole process go faster. You might remember that render time is related to the number of render samples — more samples look better, but take more time. It can take several minutes to render a final shaded relief image. A photorealistic terrain map might take hours to render. But, we don’t care about a good-looking output, and it turns out that Blender only needs 1 render sample to generate the depth map. So, head on over to your Render Properties tab, and turn down your render samples to 1. From now on, all our test renders will go much faster.
Now, you may notice that my initial depth map doesn’t darken all the way to black. The nearest mountains are kinda grey. We’ll need to fix that, because it represents lost information. If we used the entire black-to-white range, we’d be able to encode a lot of different depths, because there are a lot of different shades of grey in between black and white. If we instead had a depth map that was only grey to white, there are fewer shades of grey in between, and so we can’t encode as many differences in depth.
To fix this, we need to add a new node. Hit Shift-A, and use the search bar to look for Normalize. Stick that new node in between our two existing nodes, like so:
This tells Blender to take all the depth information that it’s calculated, and then map it onto a black-to-white scale (I believe that, behind the scenes, it’s really a 0 to 1 scale that eventually gets mapped onto a black-to-white raster once it knows the bit depth of your output, but that’s more detail than we need to go in to). If we render again, we get the full greyscale range.
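In spirit, the Normalize node’s arithmetic looks like this (a minimal sketch with invented depth values, ignoring Blender’s internal precision details):

```python
def normalize(depths):
    """Rescale depth values so the nearest point maps to 0.0 (black)
    and the farthest to 1.0 (white), like Blender's Normalize node."""
    lo, hi = min(depths), max(depths)
    if hi == lo:
        return [0.0 for _ in depths]  # flat scene: everything at one depth
    return [(d - lo) / (hi - lo) for d in depths]

print(normalize([120.0, 180.0, 300.0]))  # → [0.0, 0.3333333333333333, 1.0]
```

The nearest surface now lands exactly on black and the farthest exactly on white, so no part of the greyscale range goes to waste.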
Now let’s turn this into something that looks more like terrain. To do that, we’re going to find edges in our image. If you’ve played around in Photoshop, you may have seen a tool that does that: it looks for places where the colors in your image shift suddenly. Blender has tools like this, too. In our depth map, a sudden shift in color means a sudden shift from something close to something far. On the lower left of the image above, you can see we have a dark foreground mountain, with a mid-grey mountain some distance behind it, so that’s a sharp transition from foreground to background. If you imagine drawing lines that trace along those transitions, you’d start to outline the shape of each mountain or ridge as it occludes the terrain farther behind it.
To get Blender to do this for us, we’re going to set up a Filter node. Just like with Normalize, hit Shift-A and then search for Filter. When you place the node it will have a different name, like “Soften”; that’s fine. Plug it in after Normalize.
Notice that on the Soften node, there’s a dropdown menu that also says Soften. Click that and choose Sobel. Notice that the name of the node now changes to Sobel. So, this is really a Filter node, but it happens to name itself after whichever particular filter you’ve chosen. Just like Photoshop or GIMP or other raster image editing programs, Blender has several filters to choose from that can modify how your image looks. The Sobel filter is one that detects edges, and it’s named after one of the people who invented it. Now that you’ve got it plugged in, do another render.
Those light areas are our image edges. The brighter they are, the bigger the shift in color in our image (which means, in our case, the bigger the shift in depth). You can see how it effectively outlines the tops of ridges, and even filled in some details on the sides of mountains.
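Under the hood, a filter like this is a small convolution: two 3×3 kernels measure horizontal and vertical change at each pixel, and the edge strength is the magnitude of that gradient. Here’s a rough Python sketch run on an invented miniature depth image (Blender’s exact scaling and border handling may differ):

```python
def sobel(img):
    """Gradient-magnitude edge detection with the two Sobel kernels.
    `img` is a 2D list of greyscale values; border pixels are left
    at 0 for simplicity."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal change
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical change
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A sharp near-to-far step in a tiny depth map lights up as an edge:
step = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]
print(sobel(step)[1])  # → [0.0, 4.0, 4.0, 0.0]
```

Flat areas of the depth map produce zeros (black), and the abrupt near/far transition produces a bright response, which is exactly the ridge-outlining behavior we’re after.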
While it has a cool, glowing, light-on-dark effect, let’s reverse that and have the edges be black and the background white. Add an Invert node after the filter:
Now it’s switched. Some details are perhaps a little harder to see, in the areas with weaker edges, but we can work on that later.
The Sobel filter is only one way of detecting edges. Another one on that list, the Kirsch filter, also does so. Switch to that one and have a look at your rendered output.
It’s pretty similar, though to my eye it seems maybe a little darker and more emphatic about each edge. But I think there’s also a little less distinction among the hardest edges. I’m no expert on these, and I’m sure a specialist could do better in pointing out the aesthetic differences and their relative merits.
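Part of that heavier, more emphatic look has a mechanical explanation: where Sobel combines two directional kernels, Kirsch convolves with eight rotated compass kernels and keeps the strongest single response. A rough Python sketch on an invented miniature depth image (Blender’s exact scaling may differ):

```python
def kirsch(img):
    """Kirsch edge detection: take the strongest response among eight
    rotated compass kernels (weights 5,5,5 on three neighboring border
    cells of the 3x3 window, -3 on the other five, 0 at the center).
    Border pixels are left at 0 for simplicity."""
    # The 8 border cells of a 3x3 window, clockwise from top-left:
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            vals = [img[y + dy][x + dx] for dy, dx in ring]
            out[y][x] = max(
                abs(sum((5 if (i - r) % 8 < 3 else -3) * v
                        for i, v in enumerate(vals)))
                for r in range(8))
    return out

step = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]
print(kirsch(step)[1])  # same step edge, noticeably stronger response
```

Taking the maximum over eight directions tends to saturate edges more readily than Sobel’s gradient magnitude, which fits the darker, bolder look described above.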
Save your filtered depth map. You can pick your favorite, Sobel or Kirsch, or even choose both. When you save, make sure to choose a Color Depth of 16. This will store our images with 16 bits of data per pixel, which lets us encode way more information (basically, way more levels of grey). That matters because we have a lot of faint areas we’ll want to work with. With so many levels of grey available, even those faint areas are still made up of many separate (if barely distinct) shades. So later on we can darken them and accentuate their differences without losing detail.
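To see why the bit depth matters, here’s a small sketch that quantizes a batch of invented near-white values at each bit depth and counts how many distinct shades survive:

```python
def distinct_levels(values, bit_depth):
    """Quantize 0..1 greyscale values to the given bit depth and count
    how many distinct shades survive."""
    steps = 2 ** bit_depth - 1
    return len({round(v * steps) for v in values})

# 1000 faint, nearly-white shades squeezed into the top 1% of the range:
faint = [0.99 + i * (0.01 / 999) for i in range(1000)]
print(distinct_levels(faint, 8))   # a handful of shades: detail is crushed
print(distinct_levels(faint, 16))  # hundreds survive, ready to be stretched
```

At 8 bits, those thousand subtly different edge strengths collapse into just a few shades, and no later Levels adjustment can separate them again; at 16 bits, the differences are preserved for us to stretch out in Photoshop.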
Now we start doing some manipulation on these images in Photoshop. Other programs will do this stuff, too, so feel free to translate these concepts into GIMP or whatever other software you’re comfortable with.
It’s important to note that the steps I’m going to follow are sort of like a recipe — everyone who uses Photoshop for cartography does things a little differently, because they have their own taste and their own habits. I highly encourage you to play around and try doing things differently than I do, once you get the hang of it. Experiment to find your own style. I’m just showing you mine (which actually changed during the writing of this tutorial).
I’m going to be a little less detailed here as far as telling you what to click, because this is going to presume some Photoshop familiarity. This is partly because this tutorial is already getting really long, and also because, if you’re new to Photoshop, this is probably not the best tutorial to start with. I’d encourage you to first get comfortable with the software by watching courses on YouTube or the like. I was more detailed with the Blender portion because I know that’s much less familiar to most of my readers, and that’s where I think it’s much easier to get lost.
I picked the Kirsch filter for my image. At this point in the tutorial, I’m going to switch to a totally different terrain, too, because as I was writing, I realized that I should have chosen something a bit more detailed to better show off the method, so I called up an old file where I’d done this before (the one used in my initial example at the top of the post). Here’s my new, more interesting terrain. Still just made entirely of edges, though.
Next I’m going to soften the Kirsch image a bit with a simple gaussian blur (Filter→Blur→Gaussian Blur). I’m going to do a radius of 2 pixels, but your preference may vary. This gives the Kirsch a softer look that I prefer.
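If you’re curious what the blur is actually doing, it’s a weighted average with bell-curve weights, applied along rows and then columns. A rough 1-D sketch (treating Photoshop’s radius as the gaussian’s sigma is an assumption on my part; the real mapping is more involved):

```python
import math

def gaussian_kernel(radius, sigma=None):
    """Normalized 1-D gaussian weights over [-radius, radius]."""
    if sigma is None:
        sigma = max(radius, 1)  # assumption: treat the radius as sigma
    w = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    total = sum(w)
    return [x / total for x in w]

def blur_row(row, kernel):
    """Blur one row, clamping samples at the edges. Running this over
    every row, then every column, gives the full 2-D blur."""
    r = len(kernel) // 2
    return [sum(kernel[j + r] * row[min(max(x + j, 0), len(row) - 1)]
                for j in range(-r, r + 1))
            for x in range(len(row))]

# A single bright pixel spreads out symmetrically:
print(blur_row([0.0, 0.0, 1.0, 0.0, 0.0], gaussian_kernel(2)))
```

A hard one-pixel edge becomes a gentle ramp of greys, which is exactly the softer look we want for the Kirsch lines.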
Next up, I’m going to add a Levels adjustment layer (Layer→New Adjustment Layer→Levels) just above the Kirsch, and use that to darken things a bit by dragging the left slider over. This will discard the lighter pixels and redistribute the remaining pixels along a black-white gradient, thus darkening things and bringing out some detail in the lighter areas (this is where having a 16-bit image helps a lot).
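Numerically, dragging that left slider amounts to clipping everything below a new black point and re-stretching the rest over the full range. A minimal sketch with invented pixel values:

```python
def levels(pixels, black_point, white_point=1.0):
    """Mimic dragging the Levels black slider: values at or below
    `black_point` clip to 0, and the rest stretch back over 0..1."""
    span = white_point - black_point
    return [min(1.0, max(0.0, (p - black_point) / span)) for p in pixels]

# Invented, mostly-light edge pixels get darkened and re-spread:
print(levels([0.2, 0.5, 0.8, 1.0], black_point=0.2))
```

Every shade at or below the black point becomes pure black, and the surviving shades spread out to fill the gap, which is what pulls detail out of those faint areas.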
We’re already almost there. I’d like to take some of those faint foothills in the bottom of the image and darken them a bit more, though. So, I’m going to add another Levels layer, and I’m going to darken things again, until just the foothills are how I want them.
Now, of course, everything else in the main mountain range is too dark. So, let’s constrain our second levels adjustment so that it only affects some areas. I’m going to go into its opacity mask, and paint in some black to mask it out of the higher mountains, so that it only darkens parts of my image.
Notice that I feathered the mask quite a bit, for a smooth transition. Now my levels adjustment fades out as it gets into the mountains, and fades back in among the foothills. Remember, here’s what it looked like before:
You can keep going around making adjustments like that and tweaking as you like. In a previous version of this method, I used to stack Kirsch and/or Sobel filters on top of each other with various blend modes. You could also try using Curves instead of Levels for finer control. I leave this all as an exercise to you. Experiment!
Now, that’s our really basic, abstract terrain. But, there are a few other niceties we can apply if we like.
To add the first of these niceties, we’ll need to go back to Blender and grab another layer. In your Compositor, disconnect the invert and filter nodes, so that you’re just outputting the original depth map. Then, save that.
Now, bring it into Photoshop and drop it into your file. This depth map can do a couple of things for us. If you think about looking at a real-life mountain scene, as things get farther from you they might get obscured by mist if the weather conditions are right. We can use the depth map to simulate that. This is something people do in photorealistic terrain renders all the time.
To add mist (or fog), we just need a new layer that’s solid white, and then we can use our depth map as the mask for that layer, making that white layer transparent near the front of our scene, but more opaque near the far end.
So now, as our scene recedes away, it gets fainter and mistier. You could adjust the opacity of the layer to tone down this effect. Note that I have the Depth layer turned off right now — it’s not visible, we’re just using it in the white layer as a mask.
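The mist trick reduces to a per-pixel blend toward white, weighted by the normalized depth. A minimal sketch (the pixel values are invented):

```python
def add_mist(pixel, depth, strength=1.0):
    """Blend a 0..1 greyscale pixel toward white, using normalized depth
    (0 = nearest, 1 = farthest) as the mask; `strength` plays the role
    of the white layer's opacity."""
    m = depth * strength
    return pixel * (1.0 - m) + 1.0 * m

print(add_mist(0.1, 0.0))  # foreground: the dark pixel is untouched
print(add_mist(0.1, 0.9))  # background: almost fully white
```

Lowering `strength` is the same as lowering the white layer’s opacity: the far mountains still fade, just less dramatically.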
I can also use this mask to simulate depth of field — making things in the background fuzzier, and focusing our attention on the foreground. In Photoshop, I select my Kirsch layer and go to Filter→Convert for Smart Filters. This lets me apply filters to it, but also mask them. Next, apply a gaussian blur to the layer, this time much stronger — say, 10 pixels or so.
Notice a new layer popped up, the Smart Filters layer. There’s one smart filter applied: the Gaussian Blur, which you can turn on and off. Copy your depth map into the Smart Filters mask.
Now the blur applies more strongly to the faraway places, and is less apparent nearby. You could once again tweak the strength of the blur, or adjust the darkness of the mask to further control this. This is another trick that you’ll find useful if you move beyond this tutorial toward more photorealistic renders.
In my case, I’m actually going to turn both the fog and blurring off, as that’s not the look I’m going for, but I wanted to show them to you.
Adding Other Data Layers
Now we’re all done with depth maps and their edges. You can get quite a lot out of a terrain image without even having to render lighting, it turns out. But, what if you want to stack some other data on top of this? Rivers, points of interest, and so on? To do that we’ll go back to Blender.
In your Compositor, reconnect the Image connectors of the Render Layers and the Composite nodes. In other words, put things back to the way they were — we want to look at renders again, rather than depth maps.
Now head into your Shader Editor. To add more data, you’re going to need another image that lines up correctly with your DEM. So, here’s my DEM, and here are some rivers that I’ve exported as a raster image (I was in Illustrator and saved them as a .png).
The spatial boundaries of each image are the same — they layer perfectly atop each other in space (they can be different resolutions, though). You may need to do a little preparatory work to make sure everything is aligned.
Now, let’s go into our Shader Editor for our plane. This is where, in the past, we’ve done things like set our vertical exaggeration and such.
You may recall that we talked a fair bit about the Principled BSDF in the last tutorial. This is the node that tells Blender how light will interact with the surface of our terrain — how it bounces, how it absorbs, etc. You may have played around with adjusting the Base Color parameter, or even plugged an image into that connection point, as mentioned near the end of the tutorial. Let’s give that a try: hit Shift-A to add a new node, and search for the Image Texture node. This is something you’ve done once before: you added the DEM as a displacement texture long ago when you set up your original shaded relief that we based this all on. Load your rivers (or points of interest, or whatever other layer you want to add in) into the image texture and plug it in to the Base Color.
If I set things up like above, I’ll get this as an output, showing where my rivers are in the oblique view.
This is somewhat useful, but there’s a problem: while it has the rivers that we want, it’s also got a shaded representation of the terrain, too. I want to add just the rivers to my existing simplified terrain that I’ve created from those edge filters. However, there’s no easy, clean way to pull the rivers out of the image above and ignore the rest (you might try selecting by color, but it’ll be a pain in many cases). Most of the time, the best we could do is bring it into Photoshop, then use it as a reference to manually trace the rivers in a new layer, then throw it out. Now, that’s a perfectly serviceable way of doing things (and not uncommon), but let me show you a better way that’s faster and more useful.
Go back to your Shader Editor, and get rid of the Principled BSDF, and replace it with an Emission shader. (Shift-A, then you can search for it). Plug it in exactly the same way as the Principled BSDF was connected.
So, what does all this mean? Well, an Emission node changes your terrain plane. Instead of having a solid material surface that light can bounce off of, it now emits light into your scene, light that is unaffected by shadows or other such complexities. And the light it emits is going to look like my rivers layer, since that’s the image that’s plugged into the color node. And, best of all: you can keep your render samples at 1. Emission nodes only require one pass. So, this will speed us up again, vs. the slower method of using the Principled BSDF to make a reference layer. Here’s what I get when I render:
Just the rivers, with nothing else. All the colors are exactly the same as in my input file, too, and I don’t have to worry about the lighting settings changing them, as I would if we’d used the Principled BSDF. Now I can take this over to Photoshop, drop it on top, and either select/delete the white pixels, or just set the blending mode to Multiply (white pixels disappear in Multiply).
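If you’re curious why white pixels vanish under Multiply, the arithmetic is simple: each channel of the result is the product of the two layers, rescaled back to the 0–255 range. Here’s a minimal NumPy sketch of that math (not Photoshop’s actual code, just the same idea, with made-up sample pixel values):

```python
import numpy as np

def multiply_blend(base, overlay):
    """Multiply blend mode: per-channel product, rescaled to 0-255.
    White (255) overlay pixels leave the base unchanged; darker pixels darken it."""
    b = base.astype(np.float64)
    o = overlay.astype(np.float64)
    return np.rint(b * o / 255.0).astype(np.uint8)

# A mostly-white rivers render with one blue "river" pixel,
# dropped over a flat grey terrain image:
terrain = np.full((2, 2, 3), 128, dtype=np.uint8)
rivers = np.full((2, 2, 3), 255, dtype=np.uint8)
rivers[0, 0] = [40, 90, 200]  # the lone river pixel
blended = multiply_blend(terrain, rivers)
# Everywhere the rivers layer is white, the terrain shows through untouched;
# the river pixel darkens and tints the terrain beneath it.
```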
Again, everything lines up perfectly, because the original layers I used in Blender (my DEM and my rivers image) both covered the exact same spatial extent.
The downside is that it’s not easily editable — any adjustments that you make to your data layer have to be done to the image you’re feeding into Blender, and then you need to re-render. Fine-tuning can be a pain, and things can look pixellated (since this is all based on raster images). As such, even using this Emission method, it’s still common to just treat this as a reference layer and trace out the data, just like with the earlier method using the Principled BSDF. Nonetheless, the Emission method is quicker to render and at least lets you add your data cleanly to the map to make tracing easier, if you have to trace.
Here, for example, is that same image with a manual vector tracing in Illustrator. So now I have vector rivers atop the terrain we made earlier, which are easier to fine-tune if the style needs to change.
Instead of adding in vector data, what if we did raster data? The Emission method enables us to easily add a color layer, like hypsometric tints, land cover, or imagery, to our map. Instead of plugging my rivers in as my color layer, let’s plug in my DEM.
Remember, your DEM is represented in Blender as a simple black and white image, with black being low and white being high. Here’s what that renders:
Again, it’s just emitting the color of the DEM, and nothing more. But this is a pretty useful layer. Let’s bring that into Photoshop and drop it atop everything else. Now I can blend it into the rest of the layers. Here, I’ve set it to Multiply, with a 70% opacity.
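A Multiply layer at reduced opacity just fades the full-strength Multiply result back toward the base. Here’s a small NumPy sketch of what that 70%-opacity setting is doing under the hood (an approximation of Photoshop’s behavior, with invented sample values):

```python
import numpy as np

def multiply_at_opacity(base, layer, opacity):
    """Blend `layer` over `base` in Multiply mode at the given opacity (0-1),
    mimicking a Photoshop layer set to Multiply with reduced opacity."""
    b = base.astype(np.float64)
    l = layer.astype(np.float64)
    multiplied = b * l / 255.0                        # full-strength Multiply
    out = b * (1.0 - opacity) + multiplied * opacity  # fade toward the base
    return np.rint(out).astype(np.uint8)

# Greyscale DEM (0 = low, 255 = high) dropped over a mid-grey relief at 70%:
relief = np.full((2, 2), 200, dtype=np.uint8)
dem = np.array([[0, 128], [255, 64]], dtype=np.uint8)
result = multiply_at_opacity(relief, dem, 0.7)
# Low elevations darken the relief strongly; high elevations leave it alone.
```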
Obviously, there are lots of options for blending here, and a lot of different directions you could take it. I could, for example, colorize my hypsometric tint first using the Gradient Map adjustment layer:
Gradient Map converts a black and white image into whatever color ramp you’d like. Note that I made sure to apply the gradient map only to the hypsometric layer, by Alt-clicking the space between the adjustment layer and the hypso layer to get that little arrow to turn on. Now when I blend that in with Multiply, I get this:
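Under the hood, a Gradient Map is just a per-channel lookup along a color ramp: each grey value picks a color by interpolating between the ramp’s stops. Here’s a hedged NumPy sketch of that idea, with a made-up green-to-tan-to-white hypsometric ramp standing in for whatever gradient you’d actually design:

```python
import numpy as np

def gradient_map(gray, stops, colors):
    """Map greyscale values (0-255) through a color ramp, in the spirit of
    Photoshop's Gradient Map adjustment: `stops` are the input grey levels,
    `colors` the RGB color assigned at each stop; values in between interpolate."""
    colors = np.asarray(colors, dtype=np.float64)
    # Interpolate each RGB channel separately along the ramp.
    channels = [np.interp(gray, stops, colors[:, c]) for c in range(3)]
    return np.rint(np.stack(channels, axis=-1)).astype(np.uint8)

# Hypothetical ramp: dark green (low) -> tan (middle) -> white (high).
stops = [0, 128, 255]
colors = [(60, 110, 60), (210, 180, 140), (255, 255, 255)]
dem = np.array([[0, 64], [128, 255]], dtype=np.uint8)
tinted = gradient_map(dem, stops, colors)
# A grey value of 64 lands halfway between the green and tan stops.
```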
Instead of feeding your DEM into the Emission node in Blender, you could use a land cover layer, or satellite imagery. Whatever you load into the color slot of that node yields something you can drop in with Photoshop, whether it’s raster data or (rasterized) vector data.
There are a lot of directions you can take this, and I could keep going on, but this is already running pretty long and there’s a lot to digest here, so I’ll wrap up for now. Thanks for playing! And thanks to all my patrons who keep content like this going! If you’d like to support my efforts, you can donate via the links below. Also, just spreading the word about my work is a big help!
I mentioned earlier that the Compositor gave you a lot of options such as you might find in Photoshop. Just to demonstrate that further, I spent a while poking around Blender (after releasing this tutorial) and found a node setup that would give you a similar result without using Photoshop.
I won’t go into detail here, but it largely puts the image through the same stack of processes as our Photoshop layers. I will leave this as an exercise for you to explore on your own, if you like. I would say that the process felt clunkier to me than doing it in Photoshop, though that’s likely just because I’m not as comfortable in Blender.
As is now my annual tradition, it’s time for me to tell everyone how much money I make!
Why? Well, I find the financial opacity of the freelance world a bit intimidating, and I suspect that some others do, too—particularly those who are interested in freelancing, but haven’t yet jumped in. So I’d like to do my part to lend transparency by laying out my financial picture for all of you. And if you’re interested in more stuff like this, check out the results of the 2020 Cartographic Freelancer Survey.
My business income comes from a few different sources:
Freelance Cartography: $44,176.52
I mostly make my living by doing freelance mapping for clients. This number, like others here, represents my gross earnings, before taking out business expenses, etc.
Other Freelancing: $47,944.44
I also earn money from some other non-mapping freelance work. This year I did a lot of freelance GIS that I’m counting here (rather than as “cartography”). I also got a $5,000 coronavirus relief grant that’s counted here.
Instead of making maps for clients, I sometimes (or often) spend time making maps for no one in particular. And then I’ll put them up for sale in case anyone wants to buy them. This year’s value is super-high because I ran a Kickstarter to print up Landforms of Michigan.
My expenses include things like software, utilities, shipping, cyanotype materials, the fees I pay to receive donations, etc. They also include the cost of producing prints: for example, I raised nearly $6000 in my Kickstarter, but most of that money went into paying to have the posters printed and shipped. Cartography doesn’t have a huge overhead, but there is definitely some cost to doing business.
So, that works out to a net business income of $93,703.55 (which is a little different than a salary — see below). My pre-expense gross business income was $105,909.9. Here’s how that compares to the last several years:
My Income is not a Salary
If you’re only familiar with earning money as a salaried employee, my income might seem higher than it really feels. After business expenses, I earned roughly $94,000 in self-employment income. For my personal tax situation (single, living in WI), my take-home pay would have been similar if I had worked a salaried job (with benefits) that earned about $74,000. Actually probably less, because I didn’t count the fact that I don’t get any vacation or other paid time off.
This difference is because self-employed people in the United States pay a much higher tax rate, and have to cover their own health insurance, as well as retirement savings contributions.
This was an unusually good year, though primarily due to non-cartography work, as I’ve been doing a lot of freelance GIS work for a particular client. That will end at some point. I’ve stored away a lot of that income as a safety net for the next lean year (you can see that some past years I’ve earned much, much less). There’s no real guarantee about my income from one year to the next.
As mentioned above, I earn income from prints of my work. Most of the things in my storefront have never sold more than a few copies. Historically, it’s pretty much all income from River Transit maps. But, it doesn’t cost me anything to offer things for sale, so if I can earn $10 from one of the less popular items every once in a while, I might as well take it. I want to do a survey at some point to look at print sales of map posters. I suspect that lots of us have designs that get a lot of social media attention, but not a lot of actual sales, so that each of us looks like we’re selling much more than we really are.
I greatly appreciate the kindness people have shown me over the years through donating to support the unpaid parts of my work. It’s becoming an increasing fraction of my overall income. If you’d like to add your own support, here are some handy buttons:
Finally, I hope all this stuff above offers some useful insight as to one freelancer’s life. I’m sure some others earn more, and some others earn less. I’d encourage others who are comfortable doing so to share their own financial information, to make the picture a little broader.
It’s that time again! If you’ve been following me for a while, you know that every other year, Aly Ollivierre and I conduct a survey of freelance mapmakers. The 2022 survey is now out and awaiting your data!
I hope you’ll share this widely, so that we can reach enough people to have a meaningful dataset. And please consider filling it out yourself — whether you freelance a little, or a lot. It helps our community grow stronger through a better understanding of our value.
If you want to get notified when the results are available, you can sign up for an email list here: bit.ly/cart-list.
Also, if you’ve already seen me & Aly spam this announcement across various Slack channels, social media, etc., thanks for your patience. We really want to reach as many people as we can with this important work.
A few weeks ago I wrote about the potential downsides of making an overly-detailed shaded relief image. It’s easy for readers to miss the major landforms, as they are hidden under the highly-detailed noise of the individual hills and bumps.
I truly understand the urge to show all that detail. It can feel wrong to hide it, to smooth it away, to present a gross reduction of nature’s amazing complexity. We are awash in so much great data nowadays, so let’s show off all of the exquisite detail that we finally have about land and water and populations and incomes — information that previous generations would have loved to access, but couldn’t.
But more importantly, it feels safer.
Putting every possible detail on the map means we can avoid difficult choices about what to show and what not to show. We can tell ourselves that we’re just showing what is, without applying influence. We can say to the reader “I’m just putting everything out there and you can decide; I’m not trying to interpose myself.” It lets us imagine that our maps are objective, even if that’s not actually possible; nothing authored by humans (or authored by human-authored software) can be anything but subjective.
If we don’t make too many choices, hopefully it also means no one will disagree with us. Map generalization is a form of artistic vulnerability. If I reduce the detail in a coastline, I am keenly aware that another person might have done it a bit differently. Did I do it wrong? Should I have used different parameters? How much is too much? Does mine look stupid, or confusing? I grapple with this constantly.
We all commit sins against the landscape when we fix it into a cartographic form. I know that it can feel uncomfortable; it does to me. But it’s necessary if we want to serve our readers well. Think again to the example above, of the broad landform structures vs. the tiny details. The “right” answer will depend on the map that you are making, but in many, many cases, it’s better for the reader to focus not on the thousands of individual peaks and valleys, but the great ridges that they form — the large structures that affect the weather, determine the course of streams, and constrain the movement of animals and people.
Here’s an example I often share with students. Fictional cities, in a fictional country (but based on a real-life example I encountered as a student).
Notice how Border City appears to be just inside the country of Pineland. But, if we make the dots semitransparent, you can see that, due to a dip in the border, Border City is actually in Utopia.
One easy solution is to just move Border City. I mean, you could restyle the dots, too, but if you don’t feel like doing that, just move things.
When I tell students this, I wonder if some of them find it shocking to hear. I think a lot of us, early on in our careers (certainly my own), are very attached to the idea that we must make maps “accurate” and “objective,” and so this sort of generalization feels very wrong. But sometimes the lie is more accurate. The reader now will correctly understand what country Border City is in. Just like, with the terrain example above, they will now correctly understand the big shapes of the earth, rather than getting lost in details that are technically accurate but serve no greater purpose.
I tell my students to contrast “spatial accuracy” with “narrative accuracy.” Instead of obsessing about the exact proper position of everything, the latter focuses on asking: What will the reader actually learn from the map? What will they remember five minutes after they’ve put it down? We’re generally not landing spacecraft with these maps, so it’s OK if something is gently smoothed or nudged (or even made up!), so long as it’s not so extreme as to leave people with the wrong impression.
It’s been a long journey, but I mostly do not hesitate anymore to move or reshape cartographically inconvenient features. Many times, though certainly not all of them, it doesn’t matter if you get it “wrong.”
This post ended up being a bit of ramble, much less structured than I usually try to present my thoughts here. Thanks for coming along!
In the years since I stumbled across the idea of creating shaded relief in Blender, I’ve been amazed at the extent to which the cartographic community has adopted this technique. This was wholly unexpected: I’ve seen plenty of relief tricks come and go without achieving widespread adoption. This makes sense, given that there’s no “right” way to do terrain — it’s all down to your personal taste. But, for whatever reason, Blender has stuck, and I’m gratified to see that.
And now I’m here to push back on Blender relief. Or, rather, I’d like to make an appeal: consider making your Blender relief less Blender-y.
As more and more people adopt Blender, I have begun to notice that many relief images made using this technique have a fairly common look. Here are a few examples I found by looking at Reddit posts that linked to my Blender tutorial (worth clicking to see in detail):
Notice the dark, dramatic shadows, and the highly detailed, very bumpy land surface in which each individual peak and hill sticks out. They all have what I might call the “Blender Look”: exaggerated terrain that almost looks cinematic. I’m not calling any of these particular authors out — they’re just handy examples of a widespread style trend. Interestingly, though, there’s nothing intrinsic to the Blender method that requires reliefs to look this way. Here’s one that I made in Blender that doesn’t have The Look (it’s also got a subtle hypsometric gradient blended in):
I’m not wholly sure how The Look came about. My suspicion is that Blender’s particular strengths just steer cartographers in that direction by default. Blender is amazing at generating realistic shadows, and if you’ve just come from years of making more rudimentary hillshades in GIS software, that’s really attractive and interesting to play with.
Now, maybe you like The Look. It’s quite trendy in present-day terrain mapping, and I’m assuming that’s because many people find it attractive. Maybe that’s also why there are countless shops out there selling prints in which someone has attached a Blender relief to an out-of-copyright reference map.
But I’m not really a fan of the Blender Look. If you are, that’s fine: keep making what you enjoy, and what your clients/customers/employers like. I may have helped to popularize the Blender technique, but it’s not my place to tell you how to use it. However, I’d like to offer some of my thoughts about how to get different-looking outcomes from Blender. Maybe you’ll like them, maybe not. The more alternatives you know about, the more variety you’ll be able to bring to your work.
Smooth It Out
My first concern about the Blender Look is that I often find it too detailed. Shaded relief of any sort can fall into this trap, whether made in Blender or not. Here’s a screenshot from the Shaded Relief Archive comparing a manual relief and one made using a standard hillshade algorithm from a GIS program.
According to the website, the manual relief “provides a clearer picture,” but really, that’s not because it’s manual, it’s because it’s simply more generalized. In the image on the right, you see many small mountain bumps — every detail of the terrain is captured. On the left, everything’s been smoothed out and you see the larger ridges that those bumps form. Oftentimes (whether we’re dealing with terrain or other datasets) the small details can obscure the bigger picture, and cartographers can help readers by aggregating those details.
Here’s a Blender-based example. Two reliefs of the same area, made using the same program and settings, but with different levels of detail in the underlying elevation data:
Notice, again, how the big terrain structures are more apparent in the image on the left. I’ve circled an area that exemplifies what I mean by that — you can click the image to see a larger version. On the right, we have just a bumpy texture. On the left, we can see the specific ridges and mountains that those bumps belong to.
For fun, here’s a snazzier version of the comparison, which I made to promote this post on Twitter.
I think that the less detailed relief is more suitable for a lot of purposes. It’s probably better, for example, for communicating the basic landforms to a reader — letting them see the big picture, without the details obscuring it. It’s also less noisy: without all those little bumps, it takes up less attention, meaning it works better as a background, where we usually don’t want many distracting, rapid changes in contrast (i.e., edges). Backgrounds are also areas where fine details aren’t noticed as well, anyway, so it’s probably best to keep things general.
I suspect that highly detailed terrain became part of the Blender Look because many mappers switching to Blender were used to using detailed DEMs as part of their process. The standard GIS hillshade algorithm doesn’t produce dramatic shadows, and it consequently doesn’t produce nearly as much noise when you’re feeding it a high-resolution DEM. Here’s the same DEM rendered in a hillshade, vs. in Blender:
Obviously you could tune the parameters in each of those to get slightly different looks, but you get the idea: Blender can make details stand out more, so if you’re not used to that, it can lead to things looking more chaotic, in my opinion. Of course, even GIS hillshades regularly need some simplification, but I think Blender often needs it more.
Here’s the simplest method to reduce detail: blurring. If you take your DEM and apply a blur filter in Photoshop/GIMP/etc., or you use a mean filter via a neighborhood statistics tool in GIS, you’ll start to merge those tiny bumps into the bigger structures. Do this to your DEM before you make your relief, not to the relief you’ve already made.
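If you’d rather script the smoothing than do it in an image editor, a mean filter is easy enough to write yourself. This is a minimal sketch of a plain box (mean) filter with edge padding, not any particular GIS tool’s implementation:

```python
import numpy as np

def mean_filter(dem, size=3):
    """Mean (box) filter: each cell becomes the average of its size x size
    neighborhood. Edges are handled by repeating the border values.
    Smoothing a DEM this way merges small bumps into broader landforms."""
    pad = size // 2
    padded = np.pad(dem.astype(np.float64), pad, mode="edge")
    out = np.zeros(dem.shape, dtype=np.float64)
    # Sum the size*size shifted copies of the padded DEM, then average.
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + dem.shape[0], dx:dx + dem.shape[1]]
    return out / (size * size)

# A lone 90m spike in flat terrain gets spread into a gentle 10m bump:
dem = np.zeros((5, 5))
dem[2, 2] = 90.0
smoothed = mean_filter(dem, size=3)
```

On a real DEM you’d likely use a larger window (and perhaps repeat the filter) to merge whole hills into ridges, then render the smoothed raster in Blender as usual.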
Now, that look can be a little too soft for some tastes and some mapping situations. There are ways around that. One option is to play around with other raster generalizations, such as median filters in GIS or image editing software, or a combination of mean/median filters, or other noise reduction tools available in programs like Photoshop. This isn’t a tutorial per se, so I’ll leave that idea for you to explore.
But I will give a plug for the method that I use most often to get around the soft look: resolution bumping. This is a technique by Tom Patterson with a basic idea that is simple but powerful: smooth out your DEM, then add back in just a little bit of the original, detailed DEM. This gives you the clear landforms of the smoothed version, while also giving you some visual texture from the detailed version. It’s definitely worth trying. Below, I took 90% of the smoothed DEM, and added in 10% of the full-detail DEM, to get something that looked good to my eye.
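The arithmetic of resolution bumping is just a weighted average of the two rasters. A tiny sketch, using the 90/10 split mentioned above (the elevation values here are invented for illustration):

```python
import numpy as np

def resolution_bump(smoothed, detailed, detail_weight=0.1):
    """Resolution bumping in the spirit of Tom Patterson's technique:
    mostly the smoothed DEM, with a small fraction of the full-detail
    DEM added back in to restore some visual texture."""
    return (1.0 - detail_weight) * smoothed + detail_weight * detailed

# 90% smoothed + 10% detailed:
smoothed = np.array([100.0, 100.0, 100.0])
detailed = np.array([80.0, 150.0, 100.0])
bumped = resolution_bump(smoothed, detailed, 0.1)
```

The result stays close to the smoothed surface, but the detailed DEM’s small deviations nudge each cell just enough to read as texture in the rendered relief.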
Again, my personal issue with the Blender Look being too detailed isn’t intrinsic to Blender as a program or a technique. Cartographers have long struggled with how to make digital relief that looks neither too detailed nor too soft, and many mappers have their own recipes for combinations of mean/median filters, or mixing different levels of smoothing, or other software trickery, to get things just right. And all of these ideas predate Blender relief. However, I think Blender relief is particularly in need of the application of these kinds of smoothing techniques, because it is simply more likely to produce a noisy result from a high-resolution DEM.
The other aspect of the Blender Look, besides high detail, is its dramatic shadows. Every peak is separated by a deep black chasm, and every mountain stands impossibly tall, blocking the sun for large swathes of terrain behind it. Again, this is all very cinematic. But it’s not my preference, and maybe that’s simply because I’ve seen a lot of it over these recent years.
I think that, if you’re used to the somewhat more pedestrian GIS hillshade algorithm, the shadows that Blender can generate are exciting, and there’s a temptation to make them front-and-center in terrain work. I definitely get that — it’s much like any time we learn about a new tool: we want to lean into it and show it to our audience.
If your terrain needs to play nicely with other things (labels, water, roads, etc.), consider toning down the vertical exaggeration some. This is just a matter of changing the displacement scale in Blender, as seen in my tutorial. A little exaggeration can go a long way, but too much can blow out the rest of your map. Even if you’ve adjusted the particulars of the light-to-dark gradient used, to ensure that it doesn’t make your map too dark, an overly exaggerated relief can still make it seem more like you’re drawing text/lines on top of an aerial photo, rather than embedding them in the map.
Getting vector and raster to play together can be tough, and the more exaggerated and shadowy your terrain, the farther apart the vectors and rasters can feel. I speculate that this is because vector components are unaffected by the shadows — the terrain has shadows cast upon it, but the vectors don’t, and so the more shadowing you have, the more the vectors float atop the landscape rather than feeling integrated with it.
Even if your relief is standing by itself, not having to work with any other layers, extreme exaggeration can still be detrimental. Those dark shadows are hiding parts of the terrain.
What’s going on in the circled area? The version on the right makes it much harder to tell.
I think terrain relief is much like other cartographic effects such as glows, haloes, and drop shadows: they often work best when they’re subtle enough that people can’t tell they’re there. I think the urge to crank these effects up comes from a concern that readers won’t perceive the effect. But they will: it takes less than you think for the effect to work.
Make What Looks Good to You
The Blender Look isn’t to my taste, and if you love it, or you love it under certain circumstances, go for it! I’m not here to shame you for your preferences. I wrote this because I thought that maybe some people have been steered into The Look without thinking about the alternatives. So I hope you’ll take it that way: just a few thoughts on other ways you can make your relief look, if you want to mix things up.
Shaded relief doesn’t always need to be dramatic. You might like that look, and it can be serviceable in some contexts, but I think it’s valuable to have a few other styles in your toolkit. Blender can help you achieve those, too. There’s a lot of amazing terrain on this planet (and others), and I understand the urge to try and throw a lot of detail at the reader. All I ask is that you take a moment to weigh how much they, and the rest of your map, can handle.
If you’d like to support essays like this one, one of the easiest ways you can do that is to spread the word — tell your friends and colleagues about my tutorials, YouTube videos, or whatever else you think they may like. And, if you’re interested in lending financial support to my effort to remain an independent teaching cartographer, I have these two handy buttons for you!
Friends, colleagues, and patrons, it’s time for my Annual Report. You’re all so kind as to support my work each year with donations and with spreading the word, and I try to be transparent with you about what exactly you’re supporting.
This year it felt like I had a lot less time to devote to side projects and tutorials. Maybe that’s just my perception, and it’s not actually true — it’s been a confusing year for many of us. But in whatever quantity I was able to, I’m glad to have been able to give back to the community that has taught me so much, and to repay your faith in me. With your support in 2021 I was able to (in no particular order):
Organize a mapping exhibit at the Overture Center, here in Madison. My colleague Tanya Andersen and I assembled a group of maps to help show the people of our area just how much cartography is connected back to our community. This is actually something that’s been in the works for a couple of years now, and we’re so pleased that we could finally make it happen. It’s up through January 16th if you happen to be in Madison.
Organize a special NACIS Map Gallery exhibit in Oklahoma City, Map Where your Heart is. It was great to see people share maps of places dear to them!
Answer a lot of questions via email, Twitter DMs, YouTube comments, Slack, etc. I spend a lot of my pro bono time out of the public eye. People write to me each year asking for software help (especially Blender), career advice, interviews for school projects, etc. I try to take the time to write back to everyone. It’s not really something I’d considered, when writing tutorials: the more resources you put out there, the more of these kinds of interactions they generate. I’m happy to help, and will try to keep up as long as the volume remains manageable.
Volunteer to teach a class at UW. Tanya Andersen and I teamed up to run an independent study class at the University of Wisconsin. We met weekly (remotely) with our three students, advising them as they developed projects and carried them through to completion. This is some of my favorite work — being able to show people, hands-on (virtually) how to accomplish practical cartographic steps, and then watching them make great things with those tools.
Continue my “Live Carto” series from 2019–20, with four new broadcasts. It’s been fun to be able to work on a project and have people there alongside me to keep me company, especially during this pandemic isolation. I’ve appreciated being able to meet people from all over the world, and to be able to share knowledge with them in this format. I’ll continue doing this on occasion in the coming year!
Keep my tutorials updated. As software changes, often these resources need updating to stay useful (it can be frustrating and confusing to read through a tutorial that doesn’t quite match what you’re seeing on screen). I recently upgraded my Blender tutorial with new screenshots and workflow adjustments in response to that program being updated.
Create non-tutorial musings. That background blurring tutorial pushed me to consider some larger concepts that ended up as both their own blog post and a NACIS presentation.
Release short, ephemeral tutorials. It occurred to me recently that I also post mini-tutorials on Slack groups and on Twitter — things that don’t make it onto my blog or YouTube. These usually happen because someone asks a question about how to get something done, or because I discovered a neat trick I want to share, but it’s too short to be worth a formal writeup. Maybe someday I’ll collect these into a blog post.
Create a bunch of random one-off mappy things that get released to the winds of Twitter. Last year, I collected some of them into a free PDF book, An Atlas of Minor Projects. I haven’t done enough of these projects for a second book yet, but I am keeping track of them better this time around, so that maybe in 2022 or 2023 I’ll be able to assemble a second volume. Here’s one of those minor projects for you: a Halloween-themed map of the Murderkill River, in Delaware.
Make some non-live video content. I’m continuing to try and grow my tiny corner of YouTube. Besides the dot pattern tutorial, I also put together a walkthrough of how I make my cyanotypes, since it’s prompted a lot of questions from curious folks on Twitter.
Continue my long tradition of pitching in to help NACIS, the main professional society for mappers in North America. It’s an all-volunteer organization, so the more we all help out, the better it becomes. In the last year I:
Served on the Diversity & Inclusion subcommittee
Served on the Nominations Committee
Helped oversee logistics details for getting reprints of the first three volumes of the Atlas of Design to customers
Served as guardian of our projectors. I keep all four of them in Madison, and make sure they get to the conference site and back, without losing too many cords and connectors.
Co-led a special presentation with NACIS Past President Leo Dillon, in which we helped people better understand how the organization functions and how to get involved.
Present to various conferences and classes. Besides sharing some of my thoughts with the NACIS crowd as usual, I was also invited to present to WLIA, as well as a colleague’s class.
So, maybe it wasn’t as quiet a year as I thought!
Your patronage helps me justify taking time away from my freelance work in order to write, design, and help others. It also pays for things like conference fees, the fees I pay to keep ads off my blog, domain names, and other direct costs associated with all these side projects. Thank you for making this list possible!
As we move into 2022, I hope to continue to merit the support you have shown me. I never know exactly how much I’ll be able to do so in a given year, but I do know that I fully intend to keep up my efforts to contribute to the cartographic community. You have all taught me so much, and I will continue to repay the favor as best I can.
If you’d like to support my ongoing work, one of the easiest ways you can do that is to spread the word — tell your friends and colleagues about my tutorials, YouTube videos, or whatever else you think they may like. And, if you’re interested in lending financial support to my efforts in the coming year, I have these two handy buttons for you!
Today I’m going to steal an idea from Anton Thomas. A while back, he released North America: Portrait of a Continent, which is a masterpiece well worth looking at. As part of that release, he also used pieces of the map to take people on virtual tours of the map, and the landscape it depicts.
My Landforms of Michigan map is not nearly as detailed as Anton’s work, but there’s still a lot going on within it, and I’d like to try the same thing: taking you on a tour of a few interesting features of my homeland, as seen on the map. That’s one major reason I made the map: to understand and express the complex geography of a place that most people (including plenty of Michiganders) dismiss as flat and monotonous.
First, here’s an overview of the places we’ll be visiting.
1. Beaver Archipelago
There are tens of thousands of islands in the Great Lakes. Most of them are near the shoreline, but a small number of them require crossing miles of open water to reach. Several of these comprise the Beaver Archipelago, located in the midst of Lake Michigan.
I suppose I grew up thinking of archipelagoes as collections of islands out in the ocean, and so it’s always been fascinating to me to realize that, in the middle of the United States, there’s a freshwater archipelago, with small bits of green scattered amidst the great expanses of water.
This is, incidentally, probably one of my favorite labels I’ve ever placed on a map. I spent a very long time tweaking it to get the spacing, curvature, and placement just right — something that would build a strong visual relationship between the islands and the text. I’m still so pleased with how it turned out.
2. Fruit Ridge
West Michigan is a fruit growing powerhouse. This is owed largely to the unique microclimates that Lake Michigan creates to its immediate east.
Fruit Ridge is a low range of hills just east of Lake Michigan that’s dedicated largely to apple production. It’s in just the right spot, with just the right soil, to form an “agricultural mecca” (according to the people who live there).
That slight grey-tinged mass just to the south is Grand Rapids, the state’s second-largest city, sitting on the Grand River.
Given the state’s agricultural output, I’m sometimes a little surprised that more places in the state aren’t named “fruit hill” or “fruit plains,” etc.
3. Copper Range
Extending for over a hundred miles along the shore of Lake Superior, from the Wisconsin border up through the Keweenaw Peninsula, you’ll find the Copper Range — which, you might correctly guess, is known for its copper deposits.
The first known metalworking in North America began in this area, about 7,000 years ago. Indigenous peoples made use of the deposits here to craft tools and jewelry, and traded them throughout the continent. Much of the copper in the area was in “native copper” form, meaning it was not in an ore, and didn’t need to be smelted. It could be taken right from the ground and used.
Commercial mining in the nineteenth and twentieth centuries depleted the reserves, and the area is now dotted with abandoned mines and ghost towns.
4. Maumee Plain
Moving from the northwest corner of the state to the southeast, we come to the Maumee Plain, upon which Detroit sits.
Maumee Plain is probably not a name that anyone in the area would know, unless they were a physical geographer. It’s named for Lake Maumee, which doesn’t exist anymore. Fourteen thousand years ago, the Detroit area was underwater. But as glaciers and rivers moved and evolved, the more familiar present-day Lakes Erie and St. Clair formed, and the land was uncovered.
This area is sometimes referred to in sources as the Maumee Lake Plain. But that’s a name with a physical geographer’s perspective, rather than a colloquial one: it calls attention to the glacial past and the process that formed the plain. Here (as in a few other places on the map), I edited the name to something closer to what someone without a physical geography background would call such a place. Except, of course, mostly no one calls it anything at all. Which is a shame. I think giving names to these features helps reify them. It makes us more aware of their amazing histories and the powerful forces that gave them their shape.
If you want to know more, I’ve documented my naming sources and rationale for the entire map here.
5. St. Marys River
Along the eastern edge of the Upper Peninsula, Lake Superior flows into Lake Huron via the St. Marys River.
The St. Marys river flows along multiple channels, which widen and pool into a series of lakes, such as Munuscong Lake and Lake George. Amidst the river’s lakes and channels are a series of islands divided between the United States and Canada.
The St. Marys Falls give their name (in French) to two identically-named cities on either side of the US–Canada border: Sault Ste. Marie. The falls are bypassed by the Soo Locks, allowing ships to transit along the river between the lakes. By cargo tonnage, they are the busiest locks in the world.
Also, it is indeed St. Marys River and not St. Mary’s River, because the US Board on Geographic Names doesn’t like apostrophes in names.
Ancient glacial lakes created sand bars that eventually rose up as the dunes we are familiar with. There are many along the shore of Lake Michigan in particular, including these three above. These unique micro-ecosystems along the state’s fringes tend to fall within various protected areas (such as Ludington State Park), and they draw tourists. While you shouldn’t climb just any dune you find (these are fragile ecosystems), some are open for people to walk up.
I have climbed a sand dune before, and I will tell you it is a lot of work. It may look like a low hill, but sand is very difficult to trudge up.
Also do yourself a favor and do a Google image search for “michigan sand dunes snow.” You’ll see desert-like terrain covered in snowfall. It’s such a wonderful mix.
So there you have it — a quick tour of a few of the state’s landforms. There are plenty more to be found on Landforms of Michigan. If you’d like to browse through a larger image, click here. And if you’d like to take a copy of the map home, click the button below!
Friends, I wanted to share with you a project that I recently completed: Continental Divides, a series of six 42 × 51cm (16.5 × 20.1in) cyanotype posters.
We’ll dive into the details, below, but first: you can indeed buy copies of any (or all) of these if you’d like. Each one is hand-printed, so there will be some variations from print to print, and yours won’t look exactly like the ones above.
Update: I’ve also made a series of smaller (23 × 19cm) versions as well!
Roughly, continental divides are lines that separate the rivers that flow into one ocean/sea, from the rivers that flow into another. So, for North America, there are the lines demarcating which waters flow into the Pacific, the Gulf of Mexico, the Atlantic, etc. There are also various endorheic basins — places where the rivers do not reach the sea.
The choice of which divides to show on a map can be somewhat arbitrary. The water that flows into the Gulf of Mexico eventually flows into the Atlantic Ocean, for example, so is it really fair to separate them? And, of course, all seas and oceans flow together in the end. The lines I’ve drawn are simply one interpretation, and not the only way it can be done.
While North America’s continental divides are depicted on many maps, I was surprised to find that this concept, as near as I can tell, isn’t often applied to other continents. Searches for the term “continental divide” mostly turn up results related to North America. Maybe that’s just because I’m performing my searches from an IP address in the United States. Or maybe people on other continents just don’t find this concept as interesting to map out.
For some years, I’d had an idea of how I wanted to depict continental divides: as uniform ridges, sloping evenly down to the sea. That’s an abstraction; in reality, divides can be great mountains, or they can be barely-noticeable bumps, all depending on where you are.
It took a few intermittent failed experiments before I happened upon a way to make the process work, and it turned out to be simpler than I had expected. Here’s how it works:
First, I grabbed vector data for the divide lines — these came mostly from HydroSHEDS, though it took a fair bit of manual selection and adjustment to get the lines that I wanted. These are the high points for the final map. For the low points, I grabbed ocean vectors from Natural Earth, to which I added some hand-drawn lines that went roughly through the centers of the endorheic basins.
I rasterized those two layers and created a proximity raster for each. That left me with two datasets: distance to the high areas (the divides) and distance to the low areas (the seas).
Then I divided the sea proximity raster by the sum of the two proximity rasters.
This yields slopes running from the high points of the ridges down to the low points of the seas (or endorheic basin centers). From there, I could use the result as a DEM and generate shaded relief in Blender, clipped to the shape of the land.
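The proximity-and-divide arithmetic above can be sketched in a few lines. Here’s a minimal, hedged illustration using NumPy and SciPy’s Euclidean distance transform as a stand-in for GIS proximity rasters — the grids below are toy data, not the actual HydroSHEDS/Natural Earth layers:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy rasters: True marks a divide cell / a sea cell.
# A single ridge runs down the middle column, with "seas" at both edges.
divides = np.zeros((5, 9), dtype=bool)
divides[:, 4] = True
seas = np.zeros((5, 9), dtype=bool)
seas[:, 0] = True
seas[:, -1] = True

# distance_transform_edt gives each cell's distance to the nearest
# zero-valued cell, so we invert the masks to measure distance *to* them.
dist_to_divide = distance_transform_edt(~divides)
dist_to_sea = distance_transform_edt(~seas)

# The key step: sea distance divided by the sum of both distances.
# Divide cells come out at 1.0 (high), sea cells at 0.0 (low),
# with an even slope in between -- ready to treat as a synthetic DEM.
dem = dist_to_sea / (dist_to_sea + dist_to_divide)

print(dem[2])  # one row: 0.0 at the sea edges, rising to 1.0 at the ridge
```

With real data, you’d first rasterize the divide lines and ocean/basin vectors onto a shared grid (e.g. with `gdal_rasterize`), then apply the same arithmetic.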
And finally, I apply some gratuitous halftones. They’re sized so that you can’t see them from a distance, but when you get in close, they’re large enough to be obvious. It’s an effect that I quite enjoy.
I think the end result is a fun and interesting way to think about the relationship between the land and the sea. Partly art, partly educational. And if you’d like one to take home, here is that button again.
I have long been interested in the intersection between the cartographic and the personal. While we make maps for clients or employers, many of us also use our cartographic skills as an outlet for self-expression. At this year’s NACIS Annual Meeting, I’d like to assemble an exhibition of more personal pieces. I invite you to engage in the joy and vulnerability of sharing, by mapping where your heart is.
How it Works
You make a map that answers the call to “Map Where Your Heart Is.” There are a lot of directions you can take that prompt, and it’s up to you.
You submit the map file, and I’ll print it out and bring it to NACIS.
Or, if you’d like to print it yourself, or you want to draw/paint your map, you’re welcome to do so and bring it to NACIS yourself for inclusion in the exhibition.
All maps will be on 8½ × 11 inch paper, so make sure your design fits, and, if you’re having me print it, please leave a margin of ½-inch all around.
Maps can be in color. These will probably be printed on a laser printer; the resolution won’t be bad, but it won’t be amazing, either, so plan accordingly.
Everything should fit on the piece of paper; there won’t be space for a separate title or credit or explanation.
You don’t need to attend the conference to send in your work.
Deadline: September 30th, 2021
I hope you’ll consider joining in! I look forward to seeing everyone’s personal expressions of place and of love, and to seeing the conversations that they spark when these are all exhibited in Oklahoma City.