Creating Shaded Relief in Blender

Welcome! This is the long-awaited text version of my Blender relief tutorial, following on the video series I did a few years back. If you’ve already seen the videos and are returning for a refresher, note that I use a somewhat different method now, so don’t be surprised if you encounter unfamiliar settings.

This tutorial will take you an hour or two to get through — but I think the results are quite worth it. More importantly, note that your second relief will take much less time than this first one, since most of the work you’ll be doing can be saved and simply reloaded for future relief projects. Once you’ve invested the time to get comfortable with it, this technique can fit within ordinary production timelines.

Tutorials like this take a surprising amount of time to develop and maintain. This tutorial is, and will remain, free, but if you derive some value from it, you are welcome to make a donation to support my continued work.

Version History

Version 2.3 (Jan 13, 2022) — Edits to the last page, including a link to a discussion of how to make Blender relief look less Blender-y, and an updated oblique map.

Version 2.2 (Dec 05, 2021) — Blender 3.0 is now out. Significant revisions to screenshots. Discussed render performance earlier (as it bogged down for me earlier in the process). Denoising is now default, so I adjusted that section and removed a good chunk of it.

Version 2.1 (Oct 30, 2019) — Added step in Chapter 6 to change the heightmap’s color space, to avoid lowlands being washed out.

Version 2.0 (Sep 29, 2019) — Major revision for Blender 2.80. All screenshots replaced to reflect the new UI. Many steps rewritten to reflect new interface elements and new names for tools/features/menu items. Removed some no-longer-needed material, such as UV unwrapping. Thanks to Diane Fritz for her notes on changes, which helped me double-check my work.

Version 1.2 (May 14, 2018) — Added new section in Chapter 7, pointing readers toward the idea of rendering relief on a pre-colored plane. Suggested by Anton van Tetering.

Version 1.1 (Jan 29, 2018) — Changes to Chapter 6: Added section on denoising, and altered render settings to suggest using Limited Global Illumination. Both of these tips are courtesy of Dunstan Orchard.

Version 1.0 (Nov 16, 2017) — Initial release of text version.


Why Blender? In short: Blender makes better-looking relief. Most of the cartographers I know do their shaded relief in ArcMap or another GIS program, or sometimes they use Photoshop or Natural Scene Designer. All of these programs use basically the same algorithm, and you get pretty similar results, as seen below. This standard GIS hillshade looks OK, but it’s rather noisy and harsh.

As Leland Brown has put it, this looks sort of like wrinkled tinfoil: full of sharp edges.

Blender, on the other hand, is designed specifically for 3D modeling. People use it for CGI, animations, and plenty more cool stuff. It’s intended to simulate the complexities of how light really works: the way it scatters, the way it reflects from one mountain to the next, and the way its absence creates shadows. Here’s Blender’s version of the same area:


Notice how it’s softer and more natural. The peaks cast shadows, and then those shadowed areas are gently lit by light scattered off of nearby mountain faces. Notice also how the structure of the terrain becomes more apparent. In a standard hillshade, I think you lose the forest for the trees. Here’s a side-by-side comparison of the two methods:


Blender’s result not only looks more attractive and realistic, it’s also more intelligible, I think. Certain features of the landscape become more apparent — look at the valley below, running northeast-southwest. It’s hard to tell how wide it is, or that it’s a valley at all, when looking at the standard hillshade. But the Blender relief makes its structure clear, thanks to the improved modeling of lighting.


Whereas the standard hillshade algorithm makes pixels lighter or darker based solely on which direction they’re facing, Blender looks at the scene’s context, and whether that pixel is in a mountain shadow or is in a position to catch scattered light. The result is a more attractive, more understandable relief.

Table of Contents

This is a fairly long tutorial, as I mentioned, so for your convenience I’ve split it up into multiple chapters.

  1. Getting Set Up: We begin by downloading Blender and preparing a heightmap
  2. Blender Basics: Here, we’ll learn to navigate the software
  3. The Plane: We shall set up a plane mesh and apply a heightmap texture
  4. The Camera: Let us prepare to image the plane correctly
  5. The Sun: In which we cast light upon the plane
  6. Final Adjustments: Here, lingering settings are finally adjusted
  7. Advanced Thoughts: For your consideration on future days

Please enjoy, and if you see any errors (either typographical or of fact), please do let me know. I hope that this tutorial empowers you to produce work you can be proud of!

104 thoughts on “Creating Shaded Relief in Blender”

  1. Got any advice for using this/your same workflow on a DEM that isn’t square or rectangular? I have a county DEM that has an irregular shape, and the resulting TIFF export contains white space in the outside portions of the extent. The portions that have “No Data” alter the render. Would a PNG with transparent background solve the problem of using a DEM that has an irregular shape?

    1. I’ve dealt with this situation when using DEMs that leave the ocean as “no data.” I would recommend filling those empty areas with black, rather than white. Transparency won’t do it, I don’t think. Even in a transparent raster, every pixel is filled in with some number, just a number that certain programs choose to ignore and depict as invisible. Blender likely wouldn’t use that to make the plane vanish, since the DEM is being read only to modify the plane’s height. With those areas being white, as you’ve seen, the plane gets very high and creates shadows over the “real” area. If you make it black, they’ll be very low and stay more out of the way, not casting shadows.

  2. Super nice tutorial series Daniel, kudos! Just did the whole series and rendered the IBCAO polar dataset with 15% resolution. Turned out quite nice, see Twitter mention. Would NEVER have been able to do that without your extensive input. Written for humans, thanks for that!

    However, in the final “Advanced thoughts” tutorial, are you sure about replacing the “Displacement” node with a “Math Multiply” node? I’ve seen this in another tutorial which is built on top of yours as well:

    When I did that, it seemed like it shifted the camera perspective considerably towards southwest, more or less proportional to the multiplication value. Only when I switched back to “Displacement” as for the ordinary BW render, it started to behave again.

    Did you not encounter that behavior? Maybe a Blender version issue? I’m on the latest, v2.81. The rest I configured mostly as you did, with the exception of some minor unrelated settings.

    1. Thanks for the kind words! That screenshot was older, thanks for noticing. It was provided by someone else for an earlier version; I’ll get a new one in there today.

  3. Thanks for the awesome tutorial Daniel! Have you had success with large renders (something like 12000×7200)? Wondering if you have any tips and tricks!

    1. I have definitely done some large renders successfully. I am not sure offhand how large they were, but several thousand pixels per side. It’s possible to run into memory limits depending on your hardware, though. You could split the DEM into chunks, too (I used to do this in an older workflow). Make sure to leave some overlap, so that shadows from one chunk don’t abruptly stop at the edge of another chunk. If you’re willing to spend a bit, you can also get things done faster and avoid tying up your computer by using a render farm.

  4. Thank you so much for this useful tutorial! But I have one question: at step six (Rescale) you have the following formula: (Pixel Value – 120) ÷ (1500 – 120) * 65,535

    What do I have to fill in for Pixel Value? My raster information says that my Pixel Depth is 32 Bit.

    1. This is a formula run on a pixel-by-pixel basis in your raster calculator. So in the example, it would take the value of each pixel, regardless of what that value is, and subtract 120 from it, before dividing it by 1500-120, and multiplying by 65,535. The correct syntax will vary based on the GIS program you are using, so you’ll need to read up on the details of your GIS program’s raster calculator.
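To make the arithmetic concrete, here is a minimal sketch of that rescale in plain Python (the 120 and 1500 are just the example's minimum and maximum elevations; substitute your own DEM's range, and note that in practice your GIS raster calculator applies this to every pixel of the grid at once):

```python
# Example elevation range from the tutorial's formula (in meters).
# Replace these with your own DEM's minimum and maximum.
DEM_MIN, DEM_MAX = 120, 1500

def rescale(pixel_value, lo=DEM_MIN, hi=DEM_MAX):
    """Stretch an elevation from [lo, hi] onto the full 16-bit range [0, 65535]."""
    return round((pixel_value - lo) / (hi - lo) * 65535)

# A few sample elevations: the minimum maps to 0, the maximum to 65535.
for z in (120, 810, 1500):
    print(z, "->", rescale(z))
```

The same expression, adapted to your raster calculator's syntax, runs over the whole raster rather than one value at a time.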

        1. That is correct — negative numbers cannot be read by most graphics software. That’s why there’s the Rescale step in the data preparation. The goal of using the raster calculator is to ensure that there are no negative numbers (and that you have reasonable vertical resolution in your final output, if you have non-integer values in your DEM). I’m not sure what particular formula you might have used if you got a negative value after you used the raster calculator, but the goal is to simply plug in whatever formula is needed to ensure that you have all positive values. So in this case you would add 28 to each of your values so that the lowest is 0. I’m sorry if the tutorial was unclear, and if there is something missing from it, please let me know.

          1. Hello Daniel,
            Thank you for the explanation. Could you take the time to check the DEM results that I have made?

            1. This is the DEM file I got from USGS and intersect according to AOI

            2. This is the DEM result that I have rescaled on the Arc Map

            Thank you for sharing this knowledge :)

              1. Hello Daniel,
                How are you? I hope all is well. I’ve tried to make the map, and the result still looks noisy when zooming in, different from the results other people get, even though I have followed each of the steps mentioned above. Could you please advise me on this?

                My Result:

                Other Result/ People:

                Best Regards,

                1. I’m not sure what you might mean by “noisy” in this case — the Iceland one actually looks worse off to me, as it has some striations and terracing (probably due to not rescaling the data, or just an un-ideal data source). Yours looks fairly free of artifacts. I suppose I see some noise along the edges, where the shadows are cast, but that’s just a matter of increasing the # of samples (I think that’s in Part 6). Or perhaps you mean that the terrain itself simply has too much detail? In which case you could smooth it out first. I usually do that in Photoshop with a Gaussian blur, though you can also use a variety of other tools, like neighborhood statistics in your GIS.

  5. You are very important in spreading this methodology. I am particularly grateful. But many people have difficulties mainly in the preparation of the DEM in QGIS or ArcGIS. Why not make this tutorial a video? Thankful!

  6. Hello Daniel. First off, thank you so much for creating this detailed tutorial aimed at enhancing understanding in your readers instead of giving them random numbers and formulas to use. It’s a great approach and very, very helpful!
    I’ve been having fun adding 3D terrain to old maps or to topographic maps. I have a question for you relating to a problem I’m having that I haven’t been able to track down and fix. When I pull a grayscale image from Tangram Heightmapper and plug it into my node setup, it creates a layered look in my mesh. It literally looks like a 3D print instead of having smooth vertical slopes.
    A broader angle for the sun softens this, as does increasing the Dicing Scale on the adaptive subsurf. I’m just wondering if I’ve missed a setting somewhere. I’ve turned on Shade Smooth for the mesh as well.
    Some of my previous files don’t seem to have this issue, but the ones I’ve done recently are very noticeably layered-looking. I’ve used THeightmapper with auto-exposure on and off, I’ve created my grayscale maps by zooming in to export multiple files to stitch together in Photoshop, and I’ve tried it by capturing the entire area in one export. None of these things seem to affect it. I can slightly mitigate the layering by adding noise to the grayscale image and then blurring it. But this adds a lot of other texture to the mesh.
    Any insights you can give me would be greatly appreciated!
    Again, thank you for this detailed tutorial.

    1. Glad you enjoy the tutorial and are putting it to fun uses! It sounds like you’re having a vertical resolution problem. I’m not familiar with Tangram Heightmapper, but perhaps it’s outputting 8-bit images? The tutorial talks about this a little, in its discussion of rescaling the DEM. The basic idea is that an 8-bit image has 256 levels of elevation, so you’ll notice the “steps” between levels. With a 16-bit image (if you have the data to support it), you’ll have 65,000 steps, so things will look smooth. But, that requires going back to the original elevation dataset. If Tangram is just outputting a simple 8-bit heightmap, there’s no way to process that to get more detail. You can smooth it in the ways you’ve been doing, which can help some (and I’ve done the same, in a pinch), but to really avoid terracing, you’ll need original DEM images that you then process yourself to preserve as much detail as possible.
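A quick toy demonstration of why this terracing happens (not from the tutorial; just an illustration of quantization): take a smooth slope from 0 to 1000 m and count how many distinct gray levels survive at each bit depth.

```python
# A smooth ramp of elevations from 0 to 1000 m, sampled every 0.1 m.
elevations = [i * 0.1 for i in range(10001)]
LO, HI = 0.0, 1000.0

def quantize(z, levels):
    """Map an elevation onto one of `levels` integer gray values."""
    return round((z - LO) / (HI - LO) * (levels - 1))

steps_8bit = len({quantize(z, 256) for z in elevations})      # 8-bit heightmap
steps_16bit = len({quantize(z, 65536) for z in elevations})   # 16-bit heightmap
print(steps_8bit, steps_16bit)  # 256 vs. 10001
```

At 8 bits, thousands of distinct elevations collapse into only 256 steps, which is exactly the staircase look described above; at 16 bits every sample keeps its own level.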

      1. Thanks Daniel! I think you are correct that the source I’ve been using is merely 8-bit. I’ll try working with better DEMs in the future. Thanks again for your tutorials!

  7. I have been trying so hard with this tutorial, but as soon as I get to adding the material to the plane, nothing happens. I am sure I have followed all the steps; the only thing that I’ve done slightly differently is that instead of saving through Photoshop after I’ve completed resizing my raster, I have been using GIMP. I have tried adding the image texture before and after this step and no luck. Before I process it through GIMP, I am able to make an elevation map in ArcScene, so I know that the data is available. I was hoping I could get some pointers, if possible.


    1. I’m sorry you’ve been having trouble. I am not familiar with ArcScene’s tools, and in any case I am not certain what might be wrong. Blender can take any image as a heightmap, so if you load in any old random thing, it should produce a result. If not, then something is wrong in your Blender setup. Otherwise, there’s something in the way you’ve prepared your DEM into a heightmap (which should, prior to Blender, look like a reasonable black-to-white elevation gradient in GIMP/Photoshop/etc.).



  8. Hi Daniel,
    Appreciate your tutorial, I was able to make some good looking heightmaps! While my renders look great, I am curious as to a workflow to combine the render with a scanned topo map from the USGS (or anywhere). Can this combination be done in photoshop? Or do I need to mosaic the two rasters in a GIS? If you could point me in the right direction to do this that would be awesome.


    1. I’m glad it worked out for you! I usually combine any additional color layers in Photoshop, but you can also do it directly in Blender via another image texture node, plugging that into your Principled BSDF shader as the color. The last chapter of the tutorial goes into these options in a little more detail.

  9. Hi Daniel,
    Happy New Year!
    My girlfriend is a GIS Analyst and she recently introduced me to this tutorial. I am now hooked on rendering maps in Blender to the point where I actually have a clue about what I’m doing in programs like QGIS…
    While my renders look great, I am disappointed by the lack of detail/image quality that I achieve in my final product, with Render Sampling at 300 and Resolution at 100%.
    I am wondering to what extent the resolution of my DEM impacts the quality of the final render. I have been overlaying colour layers, via another image texture node as per your tutorial, at high dpi (even 1000!) but my renders remain grainy when zooming in and not as sharp as I’d like.

    What advice would you give regarding processing/preparation of raw DEMs as well as blending in order to produce a high quality final render? Is my image quality issue attributed to DEM resolution, and if so, what could I do to improve/maintain it throughout processing (clipping/stretching/translating)?

    For reference, the Iceland map that Sara Vieira shared in the comments is what I’d hope to achieve in terms of image quality, where the image remains sharp despite generously zooming in…

    1. I’m glad the tutorial has been of use to you, and that you’re gaining some GIS experience as well! Looking at your image, I don’t see a lot of noise (it all looks pretty good to me), so I’m not quite sure what you might mean, but increasing the sampling will definitely reduce that. Sometimes people set it as high as 1000, though for most everyday purposes I keep it at 200-ish, as I almost exclusively use relief as a light background layer where such noise won’t matter.

      DEM resolution will have a significant impact on the final results. I encourage you to experiment with this by using either different DEMs, or taking a single DEM and just purposefully reducing its size (or blurring it) to see the effect. It’s often advantageous to do so — a lot of relief is run with datasets that are too detailed, when smoother representations (which help readers understand the forest, as it were, rather than the trees) would do better.

      Also I find that many people who begin to do shaded relief are sometimes just surprised by how smooth the earth can be in some areas. At some scales, it is jagged; the more you zoom in, though, the smoother it becomes.

  10. Hi Daniel

    These are some of the most beautiful hillshade reliefs I’ve found on the web. I never would have thought the attention to detail in Blender would make a difference. But yes, the lighting difference compared to the original algorithm is a nice touch.

    I’ve been looking for DEM data, in an attempt to do some of these tutorials myself. I thought some of these digital elevation models might be of some use to others checking here –

  11. Hi there! Thank you, Daniel, for this awesome tutorial!

    I really appreciate it; I followed along and you made it really neat!
    Also, depending on the map I used, and the SRTM, sometimes I struggle with some misalignment in Blender when I try to mix it with an overlay… It would be great if you added some tricks to your QGIS step, maybe (rescale, warp…)! I speak as a non-QGIS expert.

    I don’t understand why my relief shade is correctly aligned on the left of the photo but totally out of frame on the right… probably due to the earth’s spherical shape… I’ve had a hard time fixing it… even with the warp tool (maybe it’s not the right format). Do you see a solution?

    Have a good day !

    1. I’m glad that you have found the method useful. Raster GIS can definitely be complicated, and there is often a lot to troubleshoot! I’m sorry that I can’t say, specifically, what might be going wrong — there are many possibilities, and QGIS can also be a little more wonky than something like ArcMap, introducing its own problems.

      I would suggest taking some time to get comfortable with QGIS tutorials and raster methods before coming back to this Blender tutorial — it’s definitely written with the intent that readers have some background in the subject. I have thought of trying to write up more detail on raster methods, but that’s a big undertaking (basically an entire university course), so it may not happen for some time.

      Good luck!

      1. Thank you for your support ;)

        I’ll be right back when I succeed!

        By the way, congratulations on your work and this amazing website!

    1. If you’re adding an old map as a color layer (which seems to have become quite popular lately), you’ll almost certainly need to georeference it with GIS. It’s unlikely you can get it to line up via any other means, at least with the precision needed for the relief to look good. Your example seems to line up pretty well at first glance, so it looks like the final image would just need to be cropped and rotated in Photoshop later on.



      1. Is there no chance I can warp the DEM file and maintain the original shape of the map in GIS?
        If I transform it later in Photoshop, I apply two warps to the image, and that can affect the details.

        1. Oh, I see what you mean. You want to make the DEM match the map, rather than the other way around. That’s very tricky, because you’d need to figure out the original projection the map was made in. Sometimes it’s written on the map, but often times, it’s not described, and then it’s a matter of trying to guess, unfortunately!

          The only other thing I can think of would be: bring the old map into GIS, then try to identify enough landmarks to georeference the DEM to the old map. I think finding enough landmarks could be hard, though, depending on the area you’re working in. It would look better to warp the DEM this way, though, than to warp the final relief.

  12. Hi Daniel,
    I went down the rabbit hole of trying to make these maps a few months ago, and with your excellent tutorial I’ve finally pulled it off! Thank you so much.
    I have one question regarding image resolution. 300 dpi seems to be the required resolution for posters etc.
    Would a lower-res DEM with a high-res Texture node (old map) work?

    1. I’m glad you were able to get things working! As to your question, there are a lot of variables upon which it depends. I would recommend trying it out and seeing how it looks. I can imagine defects in the DEM being sufficiently masked by a detailed color layer. However, it depends on how low a resolution you mean, and in any case, it’s usually wise to have a low-ish resolution/smoother DEM anyway, because major terrain structures become clearer. A common rule of thumb is to have a DEM that’s 100–150 dpi.



  13. Hi Daniel,
    This tutorial was super helpful. I have been working on a map of the Dalmatian Coast of Croatia and used this tutorial and the one about your airline routes map for some of my features. Anyway, since I was working with an area that shows part of the Adriatic Sea I needed to find a way to clip the terrain to the coastline. The problem was that when I convert the DEM to UInt16 and set the NoData values to 0, Blender renders those anyway. I finally managed to mask the NoData values in Blender and thought your readers might like to know how.
    Here’s the trick in a few steps for anyone interested:
    1. Create an Alpha channel for the image in Photoshop, using the color picker to select pixels with a value of 0.
    2. In Blender, add a Transparent BSDF node setting the RGB values to 0
    3. Add a Mix Shader node.
    4. Connect the nodes this way: Image Texture node [output Alpha] –> Mix Shader [input Factor]; Principled BSDF –> Mix Shader [input Shader 1]; Transparent BSDF –> Mix Shader [input Shader 2]; Mix Shader [output Shader] –> Material Output [Surface].
    This looks cleaner in a diagram but the result makes your masked areas (in my case the Adriatic Sea) black in the rendered image.
    Thanks again!

  14. This tutorial is so great!! Thank you so much:) Just a quick question: Do you have any advice for working with NAs in the heightmap? I have a non-rectangular map and so I have lots of NAs in the corners when I look at it in QGIS. I’m just in the middle of your tutorial, but I don’t know if this might become an issue more towards the end. When I render it currently, the NAs are just way higher than the rest of the data:/
    In any case. Thank you so so much!!

    1. The specific numerical value that gets assigned to the N/A areas will vary based on your raster, but I usually make sure they’re assigned to the value of 0 — in this way, they’re read as low areas and don’t cast any shadows. You could also select those areas by color in Photoshop/GIMP and recolor them to black.

      1. Thank you so much Daniel!! I just tried it with setting the NoData-value with gdalwarp in the destination file to 0 and it looks like it worked:)
        Can’t wait to go on with the tutorial after work today!:)

  15. I’m really sorry Daniel for all the questions:/ But do you have any advice on how to handle the rasters so that my final rendered image can be loaded directly into, let’s say, QGIS as a georeferenced image? One approach I tried was to create a .tfw file of the image right before pulling it into Blender and then giving the rendered output from Blender the same name as the .tfw file. For some reason, this works for me when I use the BlenderGIS add-on. When I try to do it “manually” as in your tutorial, I don’t manage to get a georeferenced image. I think I could just georeference it in QGIS manually, but maybe I’m just missing something very basic here:)
    Again, thank you so so much and have a hopefully nice Sunday,

    1. I have tried this as well without success. Must be something to do with how different programs store/remove information from the tiff headers. I usually just do a quick georeference back onto the position of the DEM.

  16. Hello,

    I followed this tutorial in the macOS Blender version and I can’t find “The Displacement option will be set to Bump Only. Change it to Displacement Only, instead.”

    1. Did you set the rendering engine to Cycles and enable experimental features? Sometimes those parts get missed by some people.

  17. Hi Daniel, thanks for this amazing tutorial… I followed the entire process with the DEM you provided and it worked fine. But when I started working with my area of interest, I couldn’t get rid of the NoData values in my file. When I export my DEM as GeoTIFF, all I get is this, while what I want is this (except, of course, the OSM background). Can you help with how I can work around it?

    1. It’s important to understand that NoData is just a fiction. In reality, all NoData pixels in your DEM have a number assigned to them, and then GIS software just ignores all pixels with that particular number. So, in your example, maybe all the NoData pixels have a value of 12,000 (I just made that up as an example). And then when you load it in GIS, the software just turns anything that has a value of 12,000 invisible. But, those numbers are still there in the file and other programs, like Photoshop or Blender, will see them as numbers, rather than deciding that they should be invisible.

      I would take your file and do a reclassification of your NoData values. In QGIS the tool is r.reclass, and in ArcMap there’s Reclassify. Turn all your NoData values to 0. In this way, they’ll be very low values on your final relief. They will be flat, and at the bottom. So, they won’t cast shadows over anything else, and then later on you can clip them out when you are processing things in Photoshop or GIMP or whatever. You could do that by simply overlaying your original DEM and selecting all the pixels that had a value of 0, and that would give you the boundaries of your old NoData area.

      Hope this helps!
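The same reclassification is easy to sketch outside of GIS, too. Here's a minimal example in plain Python (the -9999 sentinel is just a common NoData convention, not necessarily what your file uses; check your raster's metadata):

```python
# A tiny toy "raster" whose NoData sentinel is -9999.
NODATA = -9999
raster = [
    [NODATA, 150, 300],
    [200, NODATA, 450],
]

# Replace every NoData pixel with 0 so those areas sit flat at the bottom
# of the relief and cast no shadows.
cleaned = [[0 if px == NODATA else px for px in row] for row in raster]

print(cleaned)  # [[0, 150, 300], [200, 0, 450]]
```

In a real workflow you'd run r.reclass / Reclassify (or a raster calculator expression) instead, but the logic is the same.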

  18. Hi, thanks so much for this Daniel! This is just what I have been looking for! <3

    I am still confused about Step 6 (Rescale).

    I have a 32-bit DEM from USGS, with elevation values 1473–4345 and a cell size of 27 x 27. What value would I use for "pixel value"?

    I appreciate your time and attention to this matter,


  19. What an incredible tutorial. Thanks for putting this together. I do have a question. I’ve technically gotten this to work, but it’s insanely exaggerated, even with the displacement scale set to 0.3. I have to set mine to 0.0001 just to get it to look ok, and there are a bunch of horizontal and vertical lines outside the national boundary. Any idea what’s going on here?

    1. Not sure what might be going on. I suppose it might have something to do with the horizontal and vertical scales of the image, and whether the Z scale is figured relative to them or not. But that’s just a guess. Did you find the default settings with the test DEM to work for you? In any case, it’s good that it’s not something that’s blocking your progress, but I’m afraid I’ve not seen anything like this before!

  20. Hi Daniel, thanks very much for the tutorial, it’s a great guide for us the Blender newbies!

    Would you have any ideas on how to integrate your tutorial with the BlenderGIS add-on?

    I usually work on subsurface maps that come as TIFFs and I’m trying to find a way to combine your tutorial with the add-on so that it maintains the georeferencing info. Any ideas or links to training or tutorials would be welcome! Thanks!

    1. You’re welcome! Happy to be able to provide something of service to people.

      Offhand, the first thing I’d check is whether this pattern is in the DEM; some of them have various artifacts. You might try running a simple hillshade in ArcMap/QGIS to see if it shows a similar pattern, then you’ll know if it’s in the DEM.

      1. Wow, nice hunch! When I reprojected my source heightmap, gdalwarp used the default resampling method ‘near’ and that introduced the artifacting. I used cubic instead and they’re gone. Thanks again Daniel, you have a great intuition.

  21. Daniel, superb tutorial. I just wondered how / if you can combine bathymetry with topography in Blender. I’m sure I read somewhere that Blender can’t handle negative values?

  22. Thanks! If you have a DEM with both topography & bathymetry, this is where the rescaling step is needed, to offset those negative values. If you have those DEMs separately, you could composite them later in Photoshop after producing separate reliefs.

  23. Hi Daniel,
    Thank you so much for this wonderful tutorial!

    As a regular Blender user I would like to make a suggestion, as I feel that you have omitted an important step in the preparation of the plane. A raster DEM is composed of square pixels – in the case of your tutorial 2000×2800 pixels. Your plane has been scaled to have the matching proportions 2000:2800 (i.e. 2:2.8, or 1:1.4), and is thus a rectangle. When the Subdivision Surface modifier is applied to this plane it will be divided down into thousands of tiny rectangles with those same proportions, so as we get down into the finely subdivided mesh we have tiny 1:1.4 rectangles trying to match up with the 1:1 pixel data of the displacement map.

    My personal preference would be to first manually add divisions into the plane so that it is composed of squares. This way, as the mesh is divided down by the Subdivision Surface modifier, it becomes thousands of tiny squares rather than rectangles, which could stand a better chance of interpolating the square pixel data. In Edit Mode we can add loop cuts to the plane using Ctrl-r and then scrolling the mouse wheel, or else using the Loop Cut tool in the tool bar. I would add 9 loop cuts to the X axis, giving 10 divisions, and 13 loop cuts to the Y axis, giving 14 divisions, which results in a mesh composed of 10×14 perfect 1:1 squares.

    (sorry, I prepared a screenshot but have no idea how to paste it here)

    This initial manual division might not make much visual difference when using adaptive subdivision (assuming that the adaptive subdivision settings take the final mesh resolution sufficiently beyond the 2000×2800 pixel resolution), but it is a step that I would make without questioning (especially as I would not use adaptive subdivision for an overhead orthographic view, but would rather dial in the subdivision level to match or slightly exceed the pixel resolution).

    Kind regards,

    1. Thanks for the note; I hear what you are saying. The older version of this tutorial used to entirely be based on subdivision of the plane. Once adaptive subsurf became an option, I switched over because the computational expense was high in the old method; I often had to split a project into multiple reliefs.

      I would be curious to see side-by-side comparisons of the results of using the tutorial method vs. adding the loop cut step. I haven’t noticed any stretching when I look in detail at the final relief outputs, though I understand what you are saying about the possibility. Perhaps adaptive subsurf is dicing things finely enough to overcome such issues.
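For what it's worth, the loop-cut counts in the comment above fall out of simple arithmetic on the pixel dimensions. Here's a hypothetical helper (my own sketch, not part of the tutorial) that finds a grid of square cells for a given raster:

```python
from math import gcd

def square_divisions(width_px, height_px, scale=1):
    """Smallest grid of square cells matching the raster's aspect ratio,
    optionally multiplied up by `scale`. Loop cuts = divisions - 1."""
    g = gcd(width_px, height_px)
    return (width_px // g * scale, height_px // g * scale)

# The 2000x2800 plane discussed above, scaled up to the commenter's 10x14 grid:
div_x, div_y = square_divisions(2000, 2800, scale=2)
print(div_x, div_y)          # 10 14
print(div_x - 1, div_y - 1)  # 9 loop cuts on X, 13 on Y
```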

  24. Alas, I’m afraid that I haven’t used it (or done much scripting for GIS / Blender in general), so I can’t say. I just use the more manual preparation methods laid out in the tutorial.

    1. I would say that looks like artifacts in your source DEM, rather than an issue with Blender. Some blurring in GIS might clear them up if you can’t get a better set of source data.
