
When working with Blender, some models seem quite slow when animating.

What should one look for? What generally slows down animation?

  • shaders?
  • subdivs (most likely, but disabling them doesn't always change a lot)
  • complex meta-rigs?
  • bone modifiers?
  • alternate bone display types?
  • something else?

My question is how to improve animation speed: what should I look for and change?
I'm asking this as a general question, not tied to a specific .blend file.
In other words: how can one determine what slows down an animation, and improve the rate of animation playback?

  • If the object is high-poly it will be hard to animate; is that the case?
    – moonboots
    Commented Dec 6, 2021 at 12:45
  • It depends: are you referring to animation in solid mode or rendered preview? If we are talking only about solid mode, then performance is based on how many vertices need to be moved from one place to another, which your CPU has to calculate. Because of this vertex count, subdivs are usually the first thing to slow you down; in my case it is always a subdiv or another modifier active on my object, other things never seem to slow it down.
    – MikoCG
    Commented Dec 6, 2021 at 12:46
  • @moonboots, it's not always the case; I wondered what affects it. Complex shaders? Are certain modifiers slow? Does one need to be aware of specific combinations?
    – Peter
    Commented Dec 6, 2021 at 12:53
  • I don't think there are special combinations, but the more complex the scene, the slower it gets. The framerate setting is very often not achieved in 3D Viewport playback if the scene is a little complex. To test the motions of your animation at the correct speed you should do viewport renders and play back the rendered frames with Ctrl+F11.
    Commented Dec 6, 2021 at 13:10
  • Maybe a better way to phrase the question: "How to measure..." and then a list of shader compilation time, modifier evaluation time, etc.
    Commented Dec 6, 2021 at 13:30

2 Answers


So there are a lot of different ways we divide up the operations Blender does to show us a frame (and it's always showing us a frame, whether that be in a viewport or via an explicit render.)

One way we could divide that up would be:

  1. Blender figures out where all the vertices should be. We could call this the "animation" part of the frame's evaluation time.

  2. Blender figures out how to draw all the faces for us onto our screen-- what color to use for every pixel. We could call this the "render" part of the frame.

The costs of those two things are different, and it's a useful distinction to make. Especially in the midst of animation, we need to see our animation at a reasonable rate to judge timing, and we can do that without spending as much time rendering the frame as we would for the final render, by using faster rendering techniques like solid view (Workbench renderer.)

So with that division, yeah, we can divide models up into characteristics that animate faster and characteristics that animate slower. But not all of those characteristics can just be added up: some of them are multiplicative costs, and some of them are permutative costs. Basically, what that means is that they cause slowdown at different rates as complexity goes up.

1) Physics.

The way that they're typically used, the thing that is going to slow down your model more than anything else is the use of physics. Here, the main cost is calculation of collision, and that's a permutative cost, related to the permutation of the number of colliders. So dropping a single cubic rigid body is not slow, but at the point that we start looking at the interactions of thousands of rigid bodies, or a single "mesh" type rigid body with a vertex-dense mesh, or, say, thousands of vertices in a soft body simulation, it gets really slow. With the way that they're typically used, any use of physics is going to be the biggest reason for animation slowness. (We can bake the physics to eliminate that cost, but then they will no longer react to changes we make, so that's not a reasonable thing to do in circumstances where we're specifically trying to minimize animation time.)
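The "permutative" cost described above can be made concrete with some back-of-envelope arithmetic: a naive collision check has to consider every unordered pair of bodies, so candidate pairs grow quadratically with the body count. (Real physics engines prune pairs with broad-phase acceleration structures, but dense scenes still trend this way.) A minimal sketch:

```python
# Naive collision checking considers every pair of bodies, so candidate
# pairs grow quadratically: n bodies -> n * (n - 1) / 2 pairs.

def candidate_pairs(n_bodies: int) -> int:
    """Number of unordered body pairs a naive broad phase would test."""
    return n_bodies * (n_bodies - 1) // 2

for n in (2, 10, 1_000, 10_000):
    print(f"{n:>6} bodies -> {candidate_pairs(n):>12,} pairs")
```

Two bodies give 1 pair; a thousand bodies already give 499,500 — which is why a single falling cube is cheap but a pile of rigid bodies is not.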

2) Vertex count.

The next biggest thing that's going to cause animation slowdown is vertex count, but this is something where, like I mentioned before, it's not a simple additive cost. To repurpose words used for a different field, asking whether vertex count or modifiers are more important to animation speed is like asking whether the length or the width of a rectangle contributes more to its area. Most of the operations that will cause animation slowdown need to operate on every vertex. So whenever we increase our vertex count, we make everything else slower. You can get great performance from a static object with a lot of vertices; you can get great performance from a single vertex object with a lot of modifiers; but once you put a lot of modifiers on an object with a lot of vertices, you'll see significant slowdown.

3) Modifiers.

The next thing that's going to cause slowdown is modifiers. Again, the cost of modifiers is generally proportional to the number of vertices, because most modifiers have to act on every vertex. Every modifier is going to cause slowdown, but some of them are faster than others. A few modifiers need to look at the relationship between two different meshes, and when large vertex counts are used, these modifiers can get very, very slow. Good examples of this are the Shrinkwrap modifier and the Data Transfer modifier. Obviously, we can't go into detail on every single modifier; just recognize that the time it takes to calculate a modifier depends on the modifier in question, and there can be large differences in the time it takes to, for example, evaluate a Cast modifier as compared to a 100-iteration Laplacian Smooth.

A few modifiers deserve special attention. Subdivision surface is probably the most frequently used modifier in Blender. In addition to requiring time to calculate, time that is proportional to the vertex count, it also creates new vertices, which increase the time it takes to calculate further modifiers. Subdivision has special settings (render/preview iteration count, "Simplify" settings) to help manage the cost of this modifier in different contexts, but those aren't always usable. One of the fastest things that can be done to improve subdivision speed is to simply disable "use limit surface" in its options, which increases its performance tremendously on some meshes, with very little visual difference.
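Both of those settings are scriptable. As a sketch (the cap of 1 viewport level is just a guess; `levels`, `render_levels`, and `use_limit_surface` are the actual Subdivision Surface modifier properties):

```python
# Cap Subdivision Surface cost in the viewport without touching the final
# render: lower the viewport level and disable "use limit surface".

def capped_viewport_level(render_level: int, cap: int = 1) -> int:
    """Viewport subdivisions: never more than `cap`, never more than render."""
    return min(render_level, cap)

try:
    import bpy  # only available inside Blender

    for obj in bpy.context.scene.objects:
        for mod in obj.modifiers:
            if mod.type == 'SUBSURF':
                mod.levels = capped_viewport_level(mod.render_levels)
                mod.use_limit_surface = False  # big speedup, small visual change
except ImportError:
    pass  # running outside Blender
```

The scene-wide "Simplify" panel (Render Properties) achieves a similar viewport cap without editing each modifier, when it's usable in your setup.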

An armature modifier is very important as a typical source of animation slowdown. An armature-deformed mesh needs to perform the armature calculation for every single vertex, for every single bone to which that vertex is weighted. So this can be slow, and the use of very diffuse weights, with lots of per-vertex vertex groups, can make it even slower (in game engines, you'll often see hard limits on the number of vertex groups per vertex). Compare that to bone parenting or armature constraints: we only need to do the operation once, rather than for each vertex, so those are very, very fast in comparison.
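Trimming diffuse weights is exactly what Blender's Weights > Limit Total operator does (`bpy.ops.object.vertex_group_limit_total(limit=4)` in Weight Paint mode, typically followed by a normalize pass). A pure-Python sketch of the idea, with made-up group names:

```python
# What "Limit Total" does, in essence: keep each vertex's N largest deform
# weights and renormalize so they still sum to 1. Fewer groups per vertex
# means fewer armature-deform operations per frame.

def limit_weights(weights: dict, limit: int = 4) -> dict:
    """Keep the `limit` largest group weights and renormalize."""
    kept = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)[:limit]
    total = sum(w for _, w in kept)
    return {group: w / total for group, w in kept}

# A vertex weighted to six bones becomes a vertex weighted to four:
print(limit_weights(
    {"spine": 0.4, "chest": 0.3, "neck": 0.15, "head": 0.1,
     "shoulder.L": 0.03, "shoulder.R": 0.02}))
```

The tiny trailing weights contribute almost nothing visually but still cost a deform operation per vertex per frame.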

Geometry nodes modifiers probably deserve special mention here. Other modifiers can often be emulated with GN, but those modifiers can use optimizations that GN cannot. Animating via a GN modifier is always going to be slower than using some other technique.

A special note on particles

I don't know whether particles should get their own section. It is very easy to create a huge number of vertices using particles. And particles can be created with their own logic, including physics logic, that isn't necessarily very fast, depending on settings. Particles can be responsible for incredible slowdown, but they do so mostly because of the number of vertices they create. Vertices created by particle systems are likely to cost less than other vertices, but that's not much consolation if you still need millions of them.

4) Everything else.

The other things you mention are almost negligible in their cost. We'll go over them anyways:

Shaders do not contribute whatsoever to the animation time. They can contribute very significantly to the render time. Animating in a solid-view viewport eliminates their contribution to the time required to see an animation.

A "meta-rig" as it's usually considered in Blender is not evaluated except on special instructions. It contributes exactly zero to the time it takes to run an animation. Its complexity doesn't matter at all.

If by "bone modifiers" you're referring to f-curve modifiers, they cost almost nothing to evaluate. Again, the reason here is due to the number of things they're affecting: when you armature deform an object, you're doing hundreds of thousands of operations (vertex count times per-vertex group membership). You almost certainly don't have as many bones as you have vertices, by a factor of 100s or 1000s, and so evaluating this takes almost no time in real world situations.
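The orders of magnitude here are worth spelling out with some hypothetical but representative numbers (taken from the same range of assumptions used later in this answer):

```python
# Rough per-frame operation counts: armature deform is per-vertex,
# per-group; bone constraints and f-curves are per-bone only.

verts = 50_000            # armature-deformed mesh
groups_per_vert = 4       # average deform groups per vertex
bones = 200
constraints_per_bone = 2

deform_ops = verts * groups_per_vert      # per-vertex cost
rig_ops = bones * constraints_per_bone    # per-bone cost

print(f"armature deform: {deform_ops:,} ops/frame")
print(f"bone constraints: {rig_ops:,} ops/frame")
print(f"ratio: {deform_ops // rig_ops}x")
```

With these numbers the deform pass does 200,000 operations per frame against 400 for the whole rig's constraints — a 500x difference, which is why the per-bone costs are noise in practice.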

It's the same with bone constraints, btw. Bone constraint operations are not all that slow, but the big thing is, you're just not doing them hundreds of thousands of times a frame. Even in a very complicated armature, you're evaluating maybe a hundred constraints a frame. (There may be exceptions: a shrinkwrap constraint depends on another mesh, which can have a lot of vertices; a spline IK may depend on an arbitrarily complicated curve.)

Bone view contributes almost nothing as well. The transforms of all of those bones need to be calculated anyway, so there's no marginal cost there. The calculation of the actual view of the bone is a per-object calculation, rather than a per-vertex calculation, so again, you're just not doing the massive number of calculations you'd need to see before this was a significant part of the cost of animating a frame.

I've tried to offer reasonable advice, based on what I consider to be reasonable expectations. For that, I imagine that an armature will have between 50 and 500 bones, that an armature-deformed mesh will have between 5000 and 150000 vertices, that a scene with a physics system will involve at least 50 different colliding entities, that a single object will average greater than 1000 vertices. I believe those to be reasonable assumptions, but there are different ways to do things. If at any point you have more bones than vertices, and all those bones have constraints, then yes, bone constraints would matter more than many modifiers. But it's hard for me to imagine anybody making a file that looked like that.

  • Great info, useful.
    – Peter
    Commented Dec 8, 2021 at 13:18


My understanding is that it is usually the mesh that slows things down but only if it is a very dense mesh. One way to get around that is by using two versions of the mesh, a high poly mesh and a low poly mesh, and parenting them both to the same armature. Then you hide the high poly mesh until you are ready to render the scene.
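The swap can be set up once per file; `hide_viewport` and `hide_render` are the actual Object properties involved. A sketch with made-up object names:

```python
# High/low-poly swap: both meshes stay parented to the same armature; the
# high-poly copy is hidden in the viewport but still renders, and vice versa.

def visibility_flags(role: str):
    """(hide_viewport, hide_render) for each copy of the mesh."""
    return {"high": (True, False),    # render only
            "low":  (False, True)}[role]   # viewport only

try:
    import bpy  # only available inside Blender

    hi = bpy.data.objects["Character_hi"]  # hypothetical object names
    lo = bpy.data.objects["Character_lo"]
    hi.hide_viewport, hi.hide_render = visibility_flags("high")
    lo.hide_viewport, lo.hide_render = visibility_flags("low")
except (ImportError, KeyError):
    pass  # running outside Blender, or objects not named this way
```

Because `hide_render` handles the final output automatically, nothing needs to be toggled back at render time.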

I would not think that shaders are the problem, but you can get around that using the same method, hide the high poly mesh with the complicated shaders until it is time to render the scene. Or just animate with viewport shading set to solid mode.

Animation playback performance can be helped by reducing the size of your viewport window, I usually keep my viewport reduced to a quarter screen view when watching animation playback. Having multiple viewport windows open, or very large viewport windows will slow down playback speed.

  • Also very handy; I could just duplicate the mesh and merge vertices in those cases. I had never given it a try, but will do next time with armatures.
    – Peter
    Commented Dec 8, 2021 at 13:20