93
\$\begingroup\$

I'm new to both gamedev and Blender, and there's something I can't shake:

In Blender, a single render (even with the more advanced Cycles renderer) can take up to 45 seconds on my machine. Yet games can have amazing graphics, so rendering must be happening continuously, many times a second, in real time.

So I'm wondering what explains the disconnect: how can Blender's renders be so "slow", while game engines achieve real-time (or near-real-time) rendering?

\$\endgroup\$
15
  • 3
    \$\begingroup\$ Real-time rendering is a huge topic in itself, there's a lot of books written about it (including "Real-Time Rendering"). And renderers like Cycles work completely differently than 3D renderers in game engines - you can't really compare them \$\endgroup\$ Commented Feb 3, 2017 at 12:53
  • 43
    \$\begingroup\$ @UnholySheep Of course you can compare them. How else would anyone explain the difference, to answer the question? \$\endgroup\$
    – user985366
    Commented Feb 3, 2017 at 22:12
  • 2
    \$\begingroup\$ @10Replies But this question would not be topical on that site. \$\endgroup\$ Commented Feb 4, 2017 at 22:29
  • 3
    \$\begingroup\$ @10Replies: While the OP does mention Blender, the question essentially boils down to why real-time game engines seem to render 3D scenes faster than approximately-photo-realistic 3D renderers (such as Blender, but also many others). Note that this is also the question answered by the accepted answer. With that in mind, I agree the question is more on-topic here on Game Development, where questions about general game development technology can be asked, rather than on Blender, where questions are more specific to Blender in particular. \$\endgroup\$ Commented Feb 4, 2017 at 23:22
  • 3
    \$\begingroup\$ I guess the secret here is that amazing doesn't have to be precise. There are fast approximations for math used in 3D rendering, like InvSqrt \$\endgroup\$ Commented Feb 6, 2017 at 17:37

2 Answers

119
\$\begingroup\$

Real-time rendering, even modern real-time rendering, is a grab-bag of tricks, shortcuts, hacks and approximations.

Take shadows for example.

We still don't have a completely accurate and robust mechanism for rendering real-time shadows from an arbitrary number of lights and arbitrarily complex objects. We do have multiple variants of shadow-mapping techniques, but they all suffer from the well-known problems with shadow maps, and even the "fixes" for those are really just a collection of work-arounds and trade-offs (as a rule of thumb, if you see the terms "depth bias" or "polygon offset" in anything, then it's not a robust technique).
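To make the "depth bias" remark concrete, here is a minimal sketch (illustrative Python with made-up numbers; a real renderer does this per-fragment on the GPU) of the shadow-map depth test and the self-shadowing artifact the bias papers over:

```python
def in_shadow(fragment_depth, shadow_map_depth, depth_bias=0.005):
    """Return True if the fragment is farther from the light than the
    occluder recorded in the shadow map, plus a bias that hides
    'shadow acne' caused by limited depth precision."""
    return fragment_depth > shadow_map_depth + depth_bias

# Without a bias, a surface can shadow *itself* because the depth
# stored in the shadow map is quantized:
surface = 0.5003   # fragment's depth as seen from the light
stored  = 0.5000   # the same surface, quantized in the shadow map

print(in_shadow(surface, stored, depth_bias=0.0))    # True  (false self-shadowing)
print(in_shadow(surface, stored, depth_bias=0.005))  # False (bias hides the acne)
```

The trade-off is exactly the kind described above: too small a bias brings the acne back, too large a bias makes shadows visibly detach from their casters ("peter-panning").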

Another example of a technique used by real-time renderers is precalculation. If something (e.g. lighting) is too slow to calculate in real time (and this can depend on the lighting system you use), we can pre-calculate it, store it, and then use the pre-calculated data at run time for a performance boost, one that often comes at the expense of dynamic effects. This is a straight-up memory-versus-compute tradeoff: memory is often cheap and plentiful while compute is often not, so we burn the extra memory in exchange for a saving on compute.
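As a hypothetical illustration of that tradeoff (the "lighting term" here is a made-up stand-in, not any real engine's code): bake an expensive computation into a table offline, so the real-time path reduces to a lookup.

```python
import math

def expensive_lighting(x):
    # Stand-in for a slow lighting term evaluated at parameter x in [0, 1).
    return sum(math.sin(x * k) / k for k in range(1, 200))

# Offline "bake": pre-calculate the term over a fixed domain once.
TABLE = [expensive_lighting(i / 1024) for i in range(1024)]

def baked_lighting(x):
    # Real-time path: one table lookup instead of ~200 trig calls.
    return TABLE[min(int(x * 1024), 1023)]
```

The run-time cost drops to an index into 1024 stored floats, and the price is exactly what the answer describes: extra memory, plus the loss of any dynamism in the baked term (if the lights move, the table is stale).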

Offline renderers and modelling tools, on the other hand, tend to focus more on correctness and quality. Also, because they work with dynamically changing geometry (such as a model while you're building it), they must often recalculate things, whereas a real-time renderer works with a final version that does not have this requirement.

\$\endgroup\$
6
  • 15
    \$\begingroup\$ Another point to mention is that the amount of computation used to generate all the data a game will need to render views of an area quickly may be orders of magnitude greater than the amount of computation that would be required to render one view. If rendering views of an area would take one second without any precalculation, but some precalculated data could cut that to 1/100 second, spending 20 minutes on the precalculations could be useful if views will be needed in a real-time game, but if one just wants a ten-second 24fps movie it would have been much faster to spend four minutes... \$\endgroup\$
    – supercat
    Commented Feb 3, 2017 at 16:53
  • 9
    \$\begingroup\$ ...generating the 240 required views at a rate of one per second. \$\endgroup\$
    – supercat
    Commented Feb 3, 2017 at 16:53
    \$\begingroup\$ @supercat and because of this your renders are pretty much free of hassle, and you gain much control over the process. You could use a game engine to render... if you were ready to sacrifice some features. But as you said, it's not worth it. \$\endgroup\$
    – joojaa
    Commented Feb 5, 2017 at 7:18
  • \$\begingroup\$ One striking example of this that I can recall is the original Quake engine (~1996), which was able to achieve relatively mind-blowing real-time 3D graphics on very limited machines using combinations of extremely time-consuming pre-calculation techniques. BSP trees and pre-rendered lighting effects were generated ahead of time; designing a level for that engine typically involved hours (usually overnight) of waiting for map compilation tools to finish. The trade-off was, essentially, decreased rendering times at the expense of authoring time. \$\endgroup\$
    – Jason C
    Commented Feb 7, 2017 at 5:54
  • \$\begingroup\$ (The original Doom engine [1993] had similar precalculations. Marathon may have as well, but I don't recall, I remember building Marathon levels but I can't remember what was involved.) \$\endgroup\$
    – Jason C
    Commented Feb 7, 2017 at 6:02
114
\$\begingroup\$

The current answer has done a very good job of explaining the general issues involved, but I feel it misses an important technical detail: Blender's "Cycles" render engine is a different type of engine from the one most games use.

Typically, games are rendered by iterating through all the polygons in a scene and drawing them individually. This is done by 'projecting' the polygon coordinates through a virtual camera to produce a flat image. This technique is used for games because modern hardware is designed around it and it can be done in real time at relatively high levels of detail. Out of interest, this was also the technique employed by Blender's previous render engine, before the Blender Foundation dropped it in favour of Cycles.

[Image: polygon rendering]
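For the curious, that 'projection' step can be sketched as a toy pinhole-camera perspective divide (illustrative Python; real rasterizers use 4×4 matrices and run this on the GPU for every vertex):

```python
def project(x, y, z, focal_length=1.0):
    """Project a camera-space point (z > 0 is in front of the camera)
    onto a flat image plane: the perspective divide at the heart of
    polygon rendering."""
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (focal_length * x / z, focal_length * y / z)

# A point twice as far away lands half as far from the screen centre:
print(project(1.0, 1.0, 1.0))  # (1.0, 1.0)
print(project(1.0, 1.0, 2.0))  # (0.5, 0.5)
```

Each polygon needs only a handful of these projections (one per vertex), which is why hardware can push millions of them per frame.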

Cycles, on the other hand, is what is known as a raytracing engine. Instead of looking at the polygons and rendering them individually, it casts virtual rays of light out into the scene (one for every pixel in the final image), bounces each ray off several surfaces, and then uses that data to decide what colour the pixel should be. Raytracing is a very computationally expensive technique, which makes it impractical for real-time rendering, but it is used for rendering images and videos because it provides extra levels of detail and realism.

[Image: raytraced rendering]
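A toy version of that per-pixel ray cast (illustrative Python: a single sphere and no light bounces, so far simpler than what Cycles actually does):

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest ray-sphere
    intersection, or None if the ray misses (standard quadratic
    solution of |origin + t*direction - center|^2 = radius^2)."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None          # this pixel's ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0 else None

# One "pixel": a ray down the z-axis toward a sphere centred at z = 5.
print(hit_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

A real path tracer repeats this for every pixel, against every object, for many bounces and many samples per pixel, which is where the minutes-long render times come from.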


Please note that my brief descriptions of raytracing and polygon rendering are highly stripped down for the sake of brevity. If you wish to know more about the techniques I recommend that you seek out an in-depth tutorial or book as I suspect there are a great many people who have written better explanations than I could muster.

Also note that there are a variety of techniques involved in 3D rendering and some games do actually use variations of raytracing for certain purposes.

\$\endgroup\$
10
  • 3
    \$\begingroup\$ +1 for a very good point; I deliberately didn't go down the rabbit hole of raytracing vs rasterization, so it's great to have this as a supplemental. \$\endgroup\$ Commented Feb 3, 2017 at 19:28
  • 17
    \$\begingroup\$ This answer gets more to the heart of the difference. Game engines perform rasterization (forward or deferred) while offline renderers (like Blender, Renderman, etc.) perform ray-tracing. Two completely different approaches to drawing an image. \$\endgroup\$
    – ssell
    Commented Feb 3, 2017 at 19:31
  • 4
    \$\begingroup\$ @LeComteduMerde-fou As gamedev is aimed at game developers I felt a supplemental technical explanation would be of benefit to the more technically inclined reader. \$\endgroup\$
    – Pharap
    Commented Feb 3, 2017 at 19:33
  • 1
    \$\begingroup\$ @ssell True, but it's not just about ray-tracing - even without ray-tracing, even with GPU rendering, Blender's rendering is usually much more detailed and slower. This mostly has to do with the focus on correctness - better texture filtering and resolution, anti-aliasing, lighting, shadow mapping, Z-accuracy, quads, bi-directional surfaces, large polygon counts, higher-resolution output, accurate bump-mapping, lack of pre-calculated maps, morphing, accurate kinematics... it's a long list of features that game engines lack or fake their way through. \$\endgroup\$
    – Luaan
    Commented Feb 6, 2017 at 12:33
  • 1
    \$\begingroup\$ @Chii I misremembered. I was thinking of ART VPS, it was just acceleration, not real-time. \$\endgroup\$
    – Jason C
    Commented Feb 7, 2017 at 6:12

