On Generalization Blending for Shaded Relief

I have nearly recovered sufficiently from an amazing NACIS conference, and I think I’m ready to get back to a little blogging. This time around, I’d like to present you all with an unfinished concept, and to ask you for your help in carrying it to completion. Specifically, I’d like to show you some attempts I’ve made at improving digital hillshades (I’ll be randomly switching terminology between ‘hillshade’ and ‘shaded relief’ throughout).

Automated, straight-out-of-your-GIS hillshades are usually terrible, and it generally takes some extra cleanup work to get them to the point where they aren’t embarrassing or won’t burst into flames simply by being put next to a well-executed manual shaded relief. Here’s an example I stole from shadedreliefarchive.com which illustrates the problem:

Digital vs. manual relief

The computer doesn’t see the big picture — that every little bump in elevation can sum to a large mountain, or that some bumps are more critical than others. It treats everything the same, because it can’t generalize. What we’re left with is noise, rather than an image. But most of us, including myself, haven’t the talent to do a manual hillshade. We are left with two options: steal one from shadedreliefarchive.com, or do a digital one and try to find ways to make it look not terrible. In this post, I’m going to talk about some new (or, at least new to me) ways of doing the latter.

To begin, here’s a bit of Mars, from a project I’m doing about Olympus Mons, given an automated hillshade through ArcMap’s Spatial Analyst tools.

As in the earlier example, this image is way too noisy and detailed, especially in the rough area west of the mountain, Lycus Sulci. The common answer to these problems is to find ways of reducing the detail in the DEM so that those annoying little bumps go away, but the big stuff remains. Usually this is done by downsampling, blurring, median filters, and a few other more sophisticated methods that I don’t have time to explain in detail. For starters, check out Tom Patterson’s excellent tutorials at shadedrelief.com, and Bernhard Jenny’s gasp-inducing tools at terraincartography.com — both of these resources can take you a long way toward improving a digital hillshade.
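If you'd like to experiment with this outside of a GIS, here's a rough sketch of that downsample-and-median-filter idea in Python with SciPy. This isn't exactly what I did in ArcMap, and the parameter values are arbitrary starting points; treat it as one way to produce a generalized DEM for the blending step later on.

```python
from scipy import ndimage

def generalize_dem(dem, downsample=4, median_size=5):
    """Generalize a DEM (a 2D array): downsample, median filter,
    then resample back up. Parameter values here are guesses."""
    h, w = dem.shape
    h2, w2 = h // downsample, w // downsample

    # Block-average down to a coarser grid, dropping the finest detail.
    small = dem[:h2 * downsample, :w2 * downsample]
    small = small.reshape(h2, downsample, w2, downsample).mean(axis=(1, 3))

    # A median filter knocks out small bumps while keeping edges
    # sharper than a plain blur would.
    small = ndimage.median_filter(small, size=median_size)

    # Resample back to the original grid so it can be blended
    # against the detailed DEM later.
    return ndimage.zoom(small, (h / small.shape[0], w / small.shape[1]), order=1)
```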

Downsample and median filter

Run through Bernhard Jenny's Terrain Equalizer

Both of these are an improvement over the original. The major valleys in Lycus Sulci become more apparent, and the flatter plateau regions there are no longer obscured by a myriad of tiny bumps. At the same time, though, while we’re losing unwanted details in the Sulci, we’re also losing desirable details elsewhere, especially along the escarpment of Olympus Mons and the gently sloping mountain face. In places like these, where the terrain is not so rough, we can support a finer level of detail than in the Sulci.

Details of the three above images, in order. Note the loss of detail along the bottom edge and along the cliff face.

What we need is a way to keep 100% of the original detail in the smooth places that can support it, and to generalize the terrain where it's too rough. That requires figuring out where the terrain is rough and where it isn't. I originally started by looking at variations in terrain aspect (which way things are facing), since the rough areas show a lot of variation in aspect, while the smooth areas face relatively constantly in one direction. But that's a somewhat complicated path to go down (though it works well), so instead I'm going with a simpler method that's probably just as effective: looking at the variation in my initial hillshade, above. If I do some analysis to find out where the hillshade shows a lot of variation (many dark and light pixels in close proximity), that will give me a mathematical way of separating the smooth areas from the rough ones.

Here, I've calculated the standard deviation of the hillshade (using a circular window 12 pixels in diameter), and also blurred it a bit just to smooth things. The darker areas correspond to the smoothest terrain, and the bright areas are where we find a lot of jagged changes, such as in the rugged Sulci. Notice that even though the escarpment is steep, and the plain at the top center of the image is flat, both are dark because they're relatively smooth, which makes them good places to keep lots of detail in our final image. In the end, what I've really done here is take my initial, poor hillshade and find out where its noisiest sections are. I think the analogy to image noise reduction is valuable here: we're trying to reduce noise in our image so that the major features become clear.
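For those who'd rather script it, here's a sketch of that noisiness calculation in Python. Note two simplifications that differ from what I described: it uses a square window rather than a circular one, and the blur amount is an arbitrary choice.

```python
import numpy as np
from scipy import ndimage

def noisiness(hillshade, size=12, blur_sigma=3):
    """Windowed standard deviation of a hillshade (2D array),
    rescaled to a 0-100 'noisiness' index."""
    hs = hillshade.astype(float)

    # Focal mean and mean-of-squares give a fast windowed standard
    # deviation: std = sqrt(E[x^2] - E[x]^2).
    mean = ndimage.uniform_filter(hs, size=size)
    mean_sq = ndimage.uniform_filter(hs ** 2, size=size)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0))

    # Blur slightly so the weight surface has no hard edges of its own.
    std = ndimage.gaussian_filter(std, sigma=blur_sigma)

    # Rescale to the 0-100 index used later in the post.
    return 100 * (std - std.min()) / (std.max() - std.min())
```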

So now I’ve got a data set which tells me a degree of ruggedness or noisiness for different parts of the terrain. There are other ways to get the same effect — you could do a high-pass filter, or the aspect analysis I mentioned above, or perhaps look at curvature. This is just my way of measuring things.
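As a sketch of one of those alternatives, a high-pass roughness measure might look something like this (parameter values again invented):

```python
import numpy as np
from scipy import ndimage

def noisiness_highpass(hillshade, sigma=6, size=12):
    # High-pass filter: subtract a blurred copy, leaving only the fine
    # detail, then average its magnitude over a window so that areas
    # full of fine detail score high.
    hs = hillshade.astype(float)
    detail = hs - ndimage.gaussian_filter(hs, sigma=sigma)
    return ndimage.uniform_filter(np.abs(detail), size=size)
```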

Once I have this data set, I can move on to the fun part: using it to decide where to keep details and where to lose them. Specifically, it will serve as the weight in a weighted average of my original, high-detail DEM and a much more generalized DEM. Where the terrain is very rough, I want the resulting data set to draw from the generalized DEM. Where it's very smooth, I want it to use the detailed DEM. Where it's in between, I want to mix the two together, adjusting the level of detail in the final product based on the roughness of the terrain.

The general formula looks kind of like this:

((Generalized DEM * Weight) + (Detailed DEM * (WeightMax – Weight))) / WeightMax

where Weight is the value of our noisiness data set at each pixel, and WeightMax is its maximum possible value. Each pixel in the final output is a mix of the original DEM and the generalized one. Where there's a lot of variation in the terrain, Weight is very high, so we get a result that's mostly the generalized DEM and very little of the detailed one. Where terrain is smooth, Weight is low, and we see mostly our detailed DEM.
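If the two DEMs and the weight are loaded as arrays on the same grid, the blend itself is only a couple of lines; here's a sketch:

```python
def blend_dems(detailed, generalized, weight, weight_max=100.0):
    # Pixel-by-pixel weighted average: a high weight (rough terrain)
    # pulls from the generalized DEM, a low weight (smooth terrain)
    # pulls from the detailed one.
    w = weight / weight_max
    return generalized * w + detailed * (1.0 - w)
```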

Here’s the output, once it’s been hillshaded:

The smoothest areas retain all of their original detail, and the roughest areas are much more generalized. It's a combination of the first two hillshades near the top of this post, with the best of both worlds. It could still use some tweaking, though. In the Lycus Sulci, for example, some of the detailed DEM is still being blended into the generalized one, so I could adjust my setup by requiring the noisiness index to fall below a certain threshold before we even begin to blend in the detailed DEM. Right now the index runs from 0 to 100, so an area with a noisiness of 80 gets a blend of 20% detailed DEM and 80% generalized. If I cap the data set at a new maximum of 40 (replacing all values above 40 with 40), then more of my terrain will get the highest level of generalization: any place at 40 (or that was higher and has been capped at 40) will get 100% of the generalized DEM and 0% of the detailed one.
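In terms of the sketches above, that capping step might look like this (the variable names are just placeholders):

```python
import numpy as np

# Cap the noisiness index at 40 so that anything at or above that level
# gets 100% of the generalized DEM, then blend with the reduced maximum.
capped = np.minimum(noise, 40.0)  # 'noise' from the noisiness() step
blended = blend_dems(detailed_dem, generalized_dem, capped, weight_max=40.0)
```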

Here’s what we get:

And here it is compared to the original relief:

Improved hillshade on top, original on the bottom.

Notice how seamlessly the two images blend together along the mountain slope — each of them has the same high level of detail. But in the Sulci, where we need more generalization, the improvement is manifest. For comparison, here’s my generalized DEM vs. the original before blending the two. The loss of fine texture detail on the mountain slope and especially along the cliff face becomes apparent here:

So, there you have it. I feel like this is still a work in progress, and that there are some other places it could go. Is this the best way to figure out how to blend the two DEMs together? Should I even be blending at all? Is this even a problem that needs solving? I am a bit unhappy with the median filter, I will say: it's a classic of noise reduction, but it tends to leave things a bit…geometric. Here's a more extreme example:

There's still a balance to strike between cutting out detail and avoiding the artificial look of the median filter. I have also tried blurring, but then everything looks blurry, unsurprisingly. I'd like something that can cut out details but keep sharpness. I may go back and use Terrain Equalizer some more to generate the blending base. But all of this falls more on the side of "things you can use to blend into your detailed DEM"; the main point I'm writing about here is the blending concept itself.

So, I invite you, gentle reader, to give me your input on where this can go, if it has any potential, and how to improve it. I think, after some weeks of work on this and a number of dead ends, my brain can take this no further without a break.

On Projection Videos

I’ve been meaning for some time to share these videos that I produced last year to assist in teaching projections to my students. Specifically, I wanted to use them to emphasize the importance of choosing projection parameters carefully to reduce distortions in the subject area, and to show how two different-looking maps can really be the same projection.

The first video is of an Azimuthal Equidistant projection. The standard point moves around the map, beginning in the central US and ending near the southern end of Africa. I try to point out, when showing it, that the pattern of distortion remains the same because it’s the same projection, but that the location of those distortions on the earth changes as the standard point moves, and how the map at the beginning and the map at the end are appropriate for showing different locations.


The second is of an Albers Equal Area Conic. First the central meridian moves, then the two standard parallels. Here I point out that the areas of the land features never change throughout the movie. Their shapes shift around significantly, but area is always preserved. The angle distortion moves with the standard parallels, and we can choose a set of standard parallels to best depict each area. We begin with a projection best suited for India and end with one adjusted for Sweden.


By the time I show these videos, I've already gone over all of these projection concepts; the videos are just a nice way to reinforce what we've already discussed. Student responses suggest that they have been helpful in teaching distortions and the importance of choosing projection parameters. It can be a tough thing to get your head around, and I like to approach it from several different angles to make sure I'm reaching as many students as I can.

I made these using GeoCart (and Tom Patterson's lovely Natural Earth raster), in a painstaking process which consisted of:

1. Adjust the projection parameters by a small amount (I think it was 0.25 degrees).
2. Export the image.
3. Repeat steps 1 and 2 several hundred times.
4. Use some Photoshop automation to mark the standard point/central meridian (though I had to add the standard parallels manually).
5. Stitch it all together with FrameByFrame.

It took many hours. Soon thereafter daan Strebe, GeoCart’s author, pointed out at the 2010 NACIS meeting that he’d added an animation feature to the program, which probably would have saved me a lot of time.

If you’d like the originals (each a bit under 40 MB, in .mov format), drop me a line.
