
In my script, I have a for loop over many cube objects (~1000) and the processing is very slow. Looking into it in more detail, I noticed that over the same number of loop iterations:

  1. if I use Python operations or simple Blender data access like

    obj = bpy.data.objects[obj_name]
    

    or

    obj.select = True
    

    it takes less than 0.08s (for all 1000 objects)

  2. but as soon as I start using Blender operators like:

    bpy.ops.object.select_pattern() # or
    bpy.ops.object.duplicate() # or
    bpy.ops.object.location_clear() # or
    bpy.ops.object.transform_apply() # etc.
    

    then performance drops sharply, to more than 6s for the same number of objects.

And one last piece of information: if I reduce the number of objects from 1000 to 50, the same set of operations with bpy.ops takes 0.03s - extrapolating linearly, that would be 0.6s for 1000 objects, not 6s. It is as if, with 1000 objects, we lose a factor of 10 in speed compared to 50 objects.
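The extrapolation above can be checked with quick arithmetic (using the timings quoted in this question):

```python
# Back-of-envelope check of the scaling described above.
t_50 = 0.03                            # measured: 50 objects with bpy.ops
linear_estimate = t_50 * (1000 / 50)   # 0.6 s if the cost scaled linearly
observed = 6.0                         # measured: 1000 objects with bpy.ops
slowdown = observed / linear_estimate
print(slowdown)                        # ~10x worse than linear scaling
```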

I tried reducing the complexity of my mesh by changing the cubes into planes, but it had no effect at all on performance.

Is there something particular to know to improve this performance, or a way to use the bpy.ops methods more efficiently? I am obviously missing something important.

  • Related: blender.stackexchange.com/q/2848/599
    – gandalf3
    Commented Feb 26, 2014 at 21:40
  • @CoDEmanX: Thanks, this is clear, it helps a lot. I will review all my bpy.ops usage.
    – Salvatore
    Commented Feb 27, 2014 at 22:18

2 Answers


Most operators cause implicit scene updates. This means that every object in the scene is checked and updated if necessary. If you add e.g. mesh primitives using bpy.ops.mesh.primitive_cube_add() in a loop, every iteration creates a new cube and triggers a scene update, during which Blender iterates over all objects in the scene and updates those that need it.

If you start with 0 objects, there will be 1 object in the first iteration, so 1 object is checked during the scene update. In the second iteration there are 2 objects and 2 are checked; since the first object was already checked in the first iteration, that makes 1 + 2 = 3 object checks in total. In the third iteration there are 3 objects and 1 + 2 + 3 = 6 checks in total. By iteration 1000, there are 1000 objects and 500,500 checks have been carried out. Here's the formula, where n is the number of objects:

$\displaystyle \sum_{i=1}^{n} i = 1 + 2 + \dots + n = \frac{n(n+1)}{2}$
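A quick sanity check of this formula in plain Python (no Blender required):

```python
# Total object checks after adding n objects one at a time,
# with a full scene update after each addition.
n = 1000
checks = sum(range(1, n + 1))        # 1 + 2 + ... + n
assert checks == n * (n + 1) // 2    # closed form
print(checks)  # 500500
```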

As you can see, the runtime isn't linear - it could only be linear if there were a single update after all objects have been added. To achieve better runtimes, you need to use the "low-level" API - RNA methods and attributes - instead of operators. With this approach, a scene update needs to be triggered manually, e.g. via bpy.context.scene.update().

Many, but not all, operator calls can be replaced by "low-level" code. For example, you can duplicate objects very efficiently like this:

import bpy
from mathutils import Vector

ob = bpy.context.object
obs = []
sce = bpy.context.scene

for i in range(-48, 48, 3):
    for j in range(-48, 48, 3):
        copy = ob.copy()
        copy.location += Vector((i, j, 0))
        copy.data = copy.data.copy() # also duplicate mesh, remove for linked duplicate
        obs.append(copy)

for ob in obs:
    sce.objects.link(ob)

sce.update() # don't place this in either of the above loops!

A good comparison between 4 different ways to do the same thing:

  1. bpy.ops.anim.keyframe_insert_menu() - don't ever use this in a script; it exists solely to show a menu to the user

  2. bpy.ops.anim.keyframe_insert() - this is supposed to be used via the UI, not in scripts. Use operator calls only if there is no lower-level API!

  3. Object.keyframe_insert() - an RNA method that can be called on an object; better

  4. The low-level way - add F-Curves and keyframe_points manually; fastest, but you need to do a lot yourself and handle several conditions (such as the object not having animation_data, or no animation_data.action)


  • According to the Blender Python API docs, bpy.context.scene.objects.link runs scene.update every time it is invoked, so the last line in the code is superfluous, and the whole situation is not very happy in that way. Commented Oct 15, 2014 at 12:55
  • The docs state "Link object to scene, run scene.update() after", so it's telling you that you are supposed to call scene.update() - did you read it as "runs..." by chance? Even if it did cause an update, performance is still great compared to operator calls.
    – CodeManX
    Commented Oct 15, 2014 at 15:55
  • Yes, I did read both "link..." and "run..." as "links..." and "runs..." in the docs. Am I right that the second part of the phrase is ambiguous? I'm not a native English speaker, but I took my reading to be the valid and more consistent one. Anyway, I'd like to correct the last part of my comment: yes, if objects are linked to a scene in a separate loop all at once, performance is really great. I've changed my script so that all the lamps are generated without bpy.ops, and the execution time has been reduced. Commented Oct 17, 2014 at 7:22
  • Have you considered adding this to the Blender Manual?
    – dr. Sybren
    Commented Apr 30, 2018 at 12:20
  • In 2.8, sce.update() becomes dg = bpy.context.evaluated_depsgraph_get(); dg.update(), and sce.objects.link(ob) becomes bpy.context.collection.objects.link(ob). I was quite frankly shocked at the 5 sec to 0.5 sec improvement.
    – Emile
    Commented Jan 24, 2020 at 13:08

Is there something particular to know to improve this performance, or a way to use the bpy.ops methods more efficiently?

The top answer is correct: calling an operator updates the current view layer (generally twice, once before and once after), which takes time proportional to the size of that view layer.

So you can make it go faster by... not... doing that.

Fair warning: I have no idea what this breaks (something, surely), so use at your own risk.

def run_ops_without_view_layer_update(func):
    from bpy.ops import _BPyOpsSubModOp

    view_layer_update = _BPyOpsSubModOp._view_layer_update

    def dummy_view_layer_update(context):
        pass

    try:
        _BPyOpsSubModOp._view_layer_update = dummy_view_layer_update

        func()

    finally:
        _BPyOpsSubModOp._view_layer_update = view_layer_update


# Example usage

import bpy

def add_cubes():
    for i in range(-48, 48, 3):
        for j in range(-48, 48, 3):
            bpy.ops.mesh.primitive_cube_add(location=(i, j, 0))

run_ops_without_view_layer_update(add_cubes)

For adding/importing many objects it doesn't seem to cause any problems, and the speed difference is rather phenomenal.

  • add_cubes() : 21 s
  • run_ops_without_view_layer_update(add_cubes) : 1 s

(Everything in this answer tested with Blender 2.93.)

  • Very interesting, thank you. Do you know if this (so far) unknown way will be maintained in the future by the devs?
    – lemon
    Commented Nov 11, 2021 at 18:47
  • It's monkeypatching private members of a class, so I expect it will break eventually.
    – scurest
    Commented Nov 11, 2021 at 18:50
  • Should we ask them to provide an API like "bpy.ops.begin_ops" / "bpy.ops.end_ops"?
    – lemon
    Commented Nov 11, 2021 at 18:51
  • FYI, the above script works with the 3.0 beta.
    – lemon
    Commented Nov 11, 2021 at 18:58
