
So I have a scene in which I'm using Blender to projection-map content. I'm then rendering that content back out from Blender, correctly mapped to each of the 8 'surfaces', so I have 8 cameras in the scene, each requiring its own resolution. This led me to create 8 scenes (each of which can hold its own resolution settings and camera).

Next I wanted to automate rendering the scenes in sequence. I tried the sequencer, but that doesn't respect each scene's resolution. With a little help from the forums and ChatGPT I solved that, and successfully rendered every scene to the correct directory.

See the code below:

import bpy
import os

# Define the output directory for the renders
output_base_dir = "/Volumes/New Volume/PERSONAL/The Cube/Camera Export Test"

# Ensure the output directory exists
if not os.path.exists(output_base_dir):
    os.makedirs(output_base_dir)

def render_scene(scene):
    # Set the active scene
    bpy.context.window.scene = scene

    # Construct the output path for the scene's animation, ensuring a subfolder for each scene
    output_path = os.path.join(output_base_dir, scene.name)
    if not os.path.exists(output_path):
        os.makedirs(output_path)
    
    # Update the scene's output path for the render
    scene.render.filepath = os.path.join(output_path, scene.name + "_")

    # Perform the viewport render
    try:
        # Attempt to use the current view context for rendering directly
        bpy.ops.render.opengl('INVOKE_DEFAULT', animation=True, view_context=True)
    except Exception as e:
        print(f"Failed to render scene {scene.name} due to: {str(e)}")

# Iterate over all scenes and render them
for scene in bpy.data.scenes:
    render_scene(scene)

print("Viewport rendering attempt complete.")

However, this rendered using the normal EEVEE render mode instead of my preference, 'Viewport Render Animation' in 'Material Preview' mode. For my purposes the difference in quality between the two is indiscernible, but no matter how much I tweak the EEVEE settings I cannot get the render times down to those of 'Viewport Render Animation'.

I have tens of thousands of frames to render, so both the automation and the faster render times are essential. For reference: standard render mode = 3 fps; 'Viewport Render Animation' + 'Material Preview' = 14 fps. I asked ChatGPT to fix the Python and force 'Viewport Render Animation' + 'Material Preview', and it responded with this code:

import bpy
import os

def set_material_preview_mode(screen):
    for area in screen.areas:
        if area.type == 'VIEW_3D':
            for space in area.spaces:
                if space.type == 'VIEW_3D':
                    # Set the viewport shading to 'MATERIAL'
                    space.shading.type = 'MATERIAL'
                    return True
    return False

def render_viewport_animation(scene_name, output_path):
    # Store the current scene to switch back later
    current_scene = bpy.context.window.scene.name
    
    # Switch to the target scene
    bpy.context.window.scene = bpy.data.scenes[scene_name]

    # Update the scene's render output path
    bpy.data.scenes[scene_name].render.filepath = output_path

    rendered = False
    # Attempt to set Material Preview mode and render for each screen
    for screen in bpy.data.screens:
        if set_material_preview_mode(screen):
            # Find the 3D view and set the context for rendering
            for area in screen.areas:
                if area.type == 'VIEW_3D':
                    for region in area.regions:
                        if region.type == 'WINDOW':
                            # legacy dict-style context override (works up to
                            # Blender 3.x; 3.2+ prefers bpy.context.temp_override())
                            override = {'window': bpy.context.window, 'screen': screen, 'area': area, 'region': region, 'scene': bpy.context.scene}
                            bpy.ops.render.opengl(override, animation=True)
                            rendered = True
                            break
                if rendered:
                    break
            if rendered:
                break

    # Switch back to the original scene
    bpy.context.window.scene = bpy.data.scenes[current_scene]

# Define the output directory for the renders
output_base_dir = "/Volumes/New Volume/PERSONAL/The Cube/Camera Export Test"

# Ensure the output directory exists
if not os.path.exists(output_base_dir):
    os.makedirs(output_base_dir)

# Iterate over all scenes and render them using viewport rendering
for scene in bpy.data.scenes:
    scene_name = scene.name
    output_scene_path = os.path.join(output_base_dir, scene_name)
    if not os.path.exists(output_scene_path):
        os.makedirs(output_scene_path)
    render_viewport_animation(scene_name, output_scene_path + '/')

print("Viewport rendering complete.")

Now this worked for the 'active scene', and rendered stupendously quickly, but the other scenes rendered in what I think was 'Workbench', or maybe the viewport's 'Solid' mode. I'm at a bit of an impasse: I think 'Viewport Render Animation' in 'Material Preview' is intrinsically linked to the active scene open in the viewport, and since you cannot have multiple active scenes, I suspect there's no way to get all scenes to render correctly.
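In case it helps, here is an untested sketch of what I imagine the fix would look like: switch the window's scene, force 'Material Preview' shading on that same window's screen, and only then call the viewport render with a context override. The helper names and the `/tmp/renders` path are placeholders of mine, and I'm assuming Blender 3.2+'s `bpy.context.temp_override()`:

```python
import os

try:
    import bpy  # only available inside Blender
except ImportError:
    bpy = None  # lets the helpers below be read/tested outside Blender

def set_viewport_shading(window, shading_type='MATERIAL'):
    """Force the given shading mode on every 3D viewport of the window's screen."""
    changed = False
    for area in window.screen.areas:
        if area.type == 'VIEW_3D':
            for space in area.spaces:
                if space.type == 'VIEW_3D':
                    space.shading.type = shading_type
                    changed = True
    return changed

def render_scene_viewport(window, scene, out_dir):
    """Make `scene` active in `window`, then viewport-render it (hypothetical flow)."""
    window.scene = scene  # switch the active scene of this specific window
    scene.render.filepath = os.path.join(out_dir, scene.name, scene.name + "_")
    if not set_viewport_shading(window, 'MATERIAL'):
        print(f"No 3D viewport found for {scene.name}")
        return
    area = next(a for a in window.screen.areas if a.type == 'VIEW_3D')
    region = next(r for r in area.regions if r.type == 'WINDOW')
    # Blender 3.2+ style override; the old dict-argument form was removed in 4.0
    with bpy.context.temp_override(window=window, area=area, region=region):
        bpy.ops.render.opengl(animation=True, view_context=True)

if bpy is not None:
    for scene in bpy.data.scenes:
        render_scene_viewport(bpy.context.window, scene, "/tmp/renders")
```

The idea being that if the shading really is per-window/screen, setting it on the same window whose scene was just switched might make 'Material Preview' stick for every scene, not just the first.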

So I currently see my options as:

A) Render each scene manually. It's more time-consuming for me, but the time saved by 'Viewport Render Animation' is too big to give up.

B) Optimise my EEVEE render settings. Does anyone know of a way to make EEVEE's normal render mode as quick as 'Material Preview'?
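For option B, here is a rough sketch of how I imagine the heavier EEVEE features could be switched off from Python. The property names come from `bpy.types.SceneEEVEE`, but the specific values in `FAST_EEVEE` are my guesses rather than tested recommendations, and the `hasattr` guard is there because property names differ between Blender versions:

```python
try:
    import bpy  # only available inside Blender
except ImportError:
    bpy = None  # lets the helper below be read/tested outside Blender

# Guessed "fast preview" values -- tune to taste.
FAST_EEVEE = {
    "taa_render_samples": 8,       # default 64; fewer TAA samples = faster frames
    "use_gtao": False,             # ambient occlusion off
    "use_bloom": False,
    "use_ssr": False,              # screen-space reflections off
    "use_motion_blur": False,
    "use_volumetric_lights": False,
}

def apply_fast_eevee(scene, settings):
    """Apply the settings that exist on this Blender version; return the old values."""
    previous = {}
    for name, value in settings.items():
        if hasattr(scene.eevee, name):  # skip properties this version doesn't have
            previous[name] = getattr(scene.eevee, name)
            setattr(scene.eevee, name, value)
    return previous  # keep this if you want to restore the originals later

if bpy is not None:
    for scene in bpy.data.scenes:
        apply_fast_eevee(scene, FAST_EEVEE)
```

Returning the previous values means the speed-up can be reverted after the batch render, which matters since these changes are saved with the .blend file.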

I hope I've explained myself clearly and not asked a stupid question that's already been answered; this feels like a bit of an edge case, since I specifically want 'Material Preview' + 'Viewport Render Animation'. I know ChatGPT isn't a perfect solution, but it's helping me learn programming in Blender and serves as a good sounding board for my process ideas. I'd be happy with either optimisation tips or pointers to efficient batch rendering. Thanks.

Link to scene - https://drive.google.com/file/d/1GRO68BDhmKkrE5ItCF00b3cXzHoOhvfj/view
