I have nearly recovered from an amazing NACIS conference, and I think I’m ready to get back to a little blogging. This time around, I’d like to present you all with an unfinished concept, and to ask for your help in carrying it to completion. Specifically, I’d like to show you some attempts I’ve made at improving digital hillshades (I’ll be switching freely between ‘hillshade’ and ‘shaded relief’ throughout).
Automated, straight-out-of-your-GIS hillshades are usually terrible, and it generally takes some extra cleanup work to get them to the point where they aren’t embarrassing or won’t burst into flames simply by being put next to a well-executed manual shaded relief. Here’s an example I stole from shadedreliefarchive.com which illustrates the problem:

The computer doesn’t see the big picture — that every little bump in elevation can sum to a large mountain, or that some bumps are more critical than others. It treats everything the same, because it can’t generalize. What we’re left with is noise, rather than an image. But most of us, including myself, haven’t the talent to do a manual hillshade. We are left with two options: steal one from shadedreliefarchive.com, or do a digital one and try to find ways to make it look not terrible. In this post, I’m going to talk about some new (or, at least new to me) ways of doing the latter.
To begin, here’s a bit of Mars, from a project I’m doing about Olympus Mons, given an automated hillshade through ArcMap’s Spatial Analyst tools.
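For anyone who wants to poke at this outside of a GIS: here’s a minimal sketch of the textbook hillshade calculation in Python with numpy. This is not the exact algorithm or settings Spatial Analyst uses (ArcMap applies Horn’s 3×3 weighted kernel, while np.gradient uses simple central differences), and the cell size, sun azimuth, and altitude are illustrative defaults.

```python
import numpy as np

def hillshade(dem, cellsize=1.0, azimuth=315.0, altitude=45.0):
    """Textbook hillshade: illumination of each cell by a distant light
    source at the given azimuth (degrees clockwise from north) and
    altitude (degrees above the horizon). Assumes `dem` is a 2-D array
    with north at the top; parameters are illustrative, not ArcMap's."""
    az = np.radians(360.0 - azimuth + 90.0)   # compass angle -> math angle
    alt = np.radians(altitude)
    grad_row, grad_col = np.gradient(dem, cellsize)
    dzdx = grad_col                            # rate of change toward the east
    dzdy = -grad_row                           # toward the north (rows run south)
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(dzdy, -dzdx)           # downslope direction
    shaded = (np.sin(alt) * np.cos(slope) +
              np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(255.0 * shaded, 0, 255).astype(np.uint8)
```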
As in the earlier example, this image is way too noisy and detailed, especially in the rough area west of the mountain, Lycus Sulci. The common answer to these problems is to find ways of reducing the detail in the DEM so that those annoying little bumps go away, but the big stuff remains. Usually this is done by downsampling, blurring, median filters, and a few other more sophisticated methods that I don’t have time to explain in detail. For starters, check out Tom Patterson’s excellent tutorials at shadedrelief.com, and Bernhard Jenny’s gasp-inducing tools at terraincartography.com — both of these resources can take you a long way toward improving a digital hillshade.
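As a concrete (and hedged) illustration of that kind of generalization, here’s how a median filter and a Gaussian blur might be applied to a DEM with scipy. The window size and sigma are guesses for illustration, not the values behind the figures below.

```python
from scipy import ndimage

# Median filter: knocks out small bumps while keeping larger edges.
dem_generalized = ndimage.median_filter(dem, size=15)

# Gaussian blur: smooths everything, including edges we might want to keep.
dem_blurred = ndimage.gaussian_filter(dem, sigma=5.0)
```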


Both of these are an improvement over the original. The major valleys in Lycus Sulci become more apparent, and the flatter plateau regions there are no longer obscured by a myriad of tiny bumps. At the same time, though, while we’re losing unwanted details in the Sulci, we’re also losing desirable details elsewhere, especially along the escarpment of Olympus Mons and the gently sloping mountain face. In places like these, where the terrain is not so rough, we can support a finer level of detail than in the Sulci.

What we need is a way to keep 100% of the original detail in the smooth places that can support it, and to generalize the terrain where it’s too rough. That means we first need a way of figuring out where the terrain is rough and where it isn’t. I originally started looking at variation in terrain aspect (which way things are facing), since rough areas have a lot of variation in aspect, while smooth areas face relatively consistently in one direction. But that’s a somewhat complicated path to go down (though it works well), so instead I’m going with a simpler method that’s probably just as effective: looking at the variation in my initial hillshade, above. If I do some analysis to find out where the hillshade has a lot of variation (many dark and light pixels in close proximity), that will give me a mathematical way of separating the smooth areas from the rough ones.
Here, I’ve calculated the standard deviation of the hillshade (using a 12 px diameter circular window), and also blurred it a bit just to smooth things. The darker areas correspond to the smoothest terrain, and the bright areas are where we find a lot of jagged changes, such as in the rugged Sulci. Notice that even though the escarpment is steep, and the plain at the top center of the image is flat, both are dark because they’re relatively smooth, and so both would be good places to keep lots of detail in our final image. In the end, what I’ve really done here is take my initial, poor hillshade and find out where its noisiest sections are. I think the analogy to image noise reduction is valuable here: we’re trying to reduce noise in our image so that the major features become clear.
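In scipy terms, that roughness measure might look something like the sketch below: a focal standard deviation of the hillshade over a circular window (the post uses a 12 px diameter), followed by a light blur and a rescale to a 0–100 index. The blur sigma and the exact rescaling are my assumptions.

```python
import numpy as np
from scipy import ndimage

def circular_footprint(radius):
    """Boolean mask of a filled circle, used as the moving window."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    return x**2 + y**2 <= radius**2

hs = hillshade(dem).astype(float)   # the noisy hillshade from the first sketch
fp = circular_footprint(6)          # ~12 px diameter window

# Focal standard deviation, then a light blur to smooth the result.
roughness = ndimage.generic_filter(hs, np.std, footprint=fp)
roughness = ndimage.gaussian_filter(roughness, sigma=3.0)

# Rescale to the 0-100 'noisiness index' used in the rest of the post.
weight = 100.0 * (roughness - roughness.min()) / (roughness.max() - roughness.min())
```

(generic_filter with np.std is slow on large rasters; a faster route is computing E[x²] − E[x]² with two convolutions, but the version above reads more clearly.)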
So now I’ve got a data set which tells me a degree of ruggedness or noisiness for different parts of the terrain. There are other ways to get the same effect — you could do a high-pass filter, or the aspect analysis I mentioned above, or perhaps look at curvature. This is just my way of measuring things.
Once I have this data set, I can move on to the fun part: using it to figure out where to keep details and where to lose them. I’m going to use it as the weight in a weighted average of my original, high-detail DEM and a much more generalized DEM. Where the terrain is very rough, I want the resulting data set to draw from the generalized DEM. Where it’s very smooth, I want it to use the detailed DEM. Where it’s in between, I want it to mix the two together, adjusting the level of detail in the final product based on the roughness of the terrain.
The general formula looks like this:

(Generalized DEM * Weight + Detailed DEM * (WeightMax - Weight)) / WeightMax

where Weight is the value of our noisiness data set. Each pixel in the final output is a mix of the original DEM and the generalized one. Where there’s a lot of variation in the terrain, Weight is very high, so we get a result that’s mostly the generalized DEM and very little of the detailed one. Where terrain is smooth, Weight is low, and we see mostly our detailed DEM.
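In code, the whole blend is one line of array arithmetic. A sketch, assuming the 0–100 weight raster from above and two co-registered DEM arrays (the variable names are mine):

```python
WEIGHT_MAX = 100.0

# High weight (rough, noisy terrain) -> mostly the generalized DEM;
# low weight (smooth terrain) -> mostly the detailed original.
blended_dem = (dem_generalized * weight +
               dem * (WEIGHT_MAX - weight)) / WEIGHT_MAX
```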
Here’s the output, once it’s been hillshaded:
The smoothest areas retain all of their original detail, and the roughest areas are much more generalized. It’s a combination of the first two hillshades near the top of this post, with the best of both worlds. It could still use some tweaking, though. In the Lycus Sulci, for example, it’s still blending some of the detailed DEM into the generalized one, so I could adjust my setup by requiring the noisiness index to fall below a certain number before we even begin to blend in the detailed DEM. Right now the index runs from 0 to 100, so an area with a noisiness of 80 gets a blend of 20% detailed DEM and 80% generalized. If I clamp the data set so that the new maximum is 40 (with all values above 40 replaced by 40), then more of my terrain will get the highest level of generalization. Any place that’s at 40 (or was higher than 40 and has become 40) will get 100% of the generalized DEM and 0% of the detailed one.
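That clamping tweak might look like this, reusing the arrays from the earlier sketches; the threshold of 40 comes from the post, the rest is illustrative:

```python
NEW_MAX = 40.0

# Anything at or above 40 on the noisiness index becomes exactly 40,
# which after the blend means 100% generalized DEM in those areas.
weight_clamped = np.clip(weight, 0.0, NEW_MAX)
blended_dem = (dem_generalized * weight_clamped +
               dem * (NEW_MAX - weight_clamped)) / NEW_MAX
```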
Here’s what we get:
And here it is compared to the original relief:

Notice how seamlessly the two images blend together along the mountain slope — each of them has the same high level of detail. But in the Sulci, where we need more generalization, the improvement is manifest. For comparison, here’s my generalized DEM vs. the original before blending the two. The loss of fine texture detail on the mountain slope and especially along the cliff face becomes apparent here:
So, there you have it. I feel like this is still a work in progress, and that there are some other places it could go. Is this the best way to figure out how to blend the two DEMs together? Should I even be blending at all? Is this even a problem that needs solving? I am a bit unhappy with the median filter, I will say. It’s a classic of noise reduction, but it tends to leave things a bit…geometric. Here’s a more extreme example:
There’s still a balance to strike between cutting out detail and the artificial look of the median filter. I have also tried blurring, but then everything looks blurry, unsurprisingly. I’d like something that can cut out details but keep sharpness. I may go back and use Terrain Equalizer some more to generate the blending base. But all of this falls more on the side of “things you can use to blend into your detailed DEM,” and the main point I’m writing about here is the blending concept.
So, I invite you, gentle reader, to give me your input on where this can go, if it has any potential, and how to improve it. I think, after some weeks of work on this and a number of dead ends, my brain can take this no further without a break.
Good post. I do a lot of terrain shading and have the same issues. I use a Natural Scene Designer to Photoshop to Illustrator workflow most often, and rely heavily on the median filter as well as the surface and Gaussian blur tools. The trick is in applying the right amount of each. When I need less simplistic hillshades, I follow some of Patterson’s excellent advice on blending multiple output layers from NSD, one set at normal relief exaggeration and the other set to 1/4 vertical exaggeration. Once blended in PS, you get the simplification of the median and blur filters, with some of the finer detail added back in, but more subdued, from the lower-exaggeration layer.
I have not used Terrain Equalizer yet, but I have tried Terrain Sculptor. Very promising concept, and the results can look fairly close to manual methods IMO. The issue for me is that it requires a GRID format, which is a hassle to incorporate into my workflow. It also seems to be limited in the size of the area it will work with, much smaller than NSD will handle for me. AND… the output is not a georeferenced image, which makes placing it in AI using MAPublisher a bit more of a hassle. As far as the algorithm goes, though, it may be exactly what you’re after in terms of control over detail balance between valleys, ridges, and flat areas. I’d be curious to hear your impressions of the program.
I initially started to use Terrain Sculptor for this project, but I thought it generalized things rather more than I was looking for. It looked amazing, but just not at the scale I was hoping to use it at. It might have worked out better if I could have used a more detailed DEM (but 128 px per degree is the best you can get for Mars). Since these tools are all still in the early stages, they’re sometimes a bit of a challenge to work with, and I was having trouble getting my data sets to go through them reliably. It’s possible there were some settings in the program to get things more like I wanted, but it sometimes slowed to a crawl, so it was hard to move the sliders around and see things change on the fly. As you say, they can’t take a large file just yet. Terrain Equalizer seemed to make things blurrier than I was hoping to see, though maybe some of that was the downsampling I had to do in order to get the file size small enough. I was hoping to bug Bernie about all of this at NACIS, but he couldn’t make it.
I’ve often blended in relief at two different exaggeration levels, as Tom Patterson suggests, and it may indeed be possible to take this processed DEM and apply that technique again for further enhancement. Or to apply a number of other standard improvement techniques. Perhaps this can simply be a pre-processing step.
Good post. I’ve been studying Patterson and Jenny’s techniques recently, so this has been on my mind. I think in this situation (once you have the mask you need… probably not from the shaded version), blur it a bit more and use this conditional “Blend If” technique:
http://www.cgtextures.com/content.php?action=tutorial&name=blendif
I have a 3D CG / game-art background, so I discovered Blend If in my environment shader work… I was about to bring it into my terrain workflow when I discovered your post (while reading Jenny’s PDFs this weekend).
Thanks for that! Coincidentally, I just read about Blend If in another context, but hadn’t thought yet how to apply it. I have a moderate proficiency with Photoshop — enough to do the basics, but I still have plenty of things like this to learn.