
[See Edit 7 at the bottom for the official, final question.]

I know it's possible to take a screenshot of a particular window with Python (take screenshots programmatically in bge?), but is there a better way to get an image (in base64 format) of exactly what's displayed in the 3D view, so I can send it to a websocket?

If not, how can I at least get a screenshot at a better resolution? For example, how do I open a full-screen (or larger) 3D view window in the background so I can get a 1920×1080 screenshot of the camera (or at least of the whole 3D view, so I can crop to the camera frame later)?

So:

  1. Is there another way to capture the 3D view as an image in Blender (Python) besides bpy.ops.screen.screenshot? (I don't want the other buttons in the picture; even with overlays disabled it shows the "View", "Object", etc. menus.)
  2. If not, how can I at least open a background 3D view window (which could possibly be bigger than my own screen resolution) and take a screenshot of that?
  3. Also, when I try to take multiple screenshots by setting frame_current, Blender apparently only redraws the view after the script finishes, so how can I get around this? (I basically just want to stream the 3D view animation to a websocket.)

EDIT:

By the way, I just learned about handlers, which are very useful. Here's my current code (based on the link from batFINGER):

import bpy, os

def coby(scene):
    # Only capture frames inside the scene's frame range.
    frame = scene.frame_current
    if not (scene.frame_start <= frame <= scene.frame_end):
        return None

    # Build e.g. "<render filepath>/0001.png" from the render settings.
    folder = scene.render.filepath
    ext = scene.render.image_settings.file_format.lower()
    filepath = os.path.join(folder, "%04d.%s" % (frame, ext))
    print("render %d" % frame)
    bpy.ops.screen.screenshot(
        filepath=filepath,
        check_existing=False)

bpy.app.handlers.frame_change_pre.clear()
bpy.app.handlers.frame_change_pre.append(coby)
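
To drive the handler, the frame just has to change; for example, both of these standard bpy calls fire it from an interactive session:

import bpy

bpy.context.scene.frame_set(1)       # fires frame_change_pre once
bpy.ops.screen.animation_play()      # fires it on every playback frame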

It's actually very cool the way it works; however, it's still not real time (try playing the timeline and you'll see it lags a bit). That's only because bpy.ops.screen.screenshot takes some time to save the file. If I could theoretically get the base64 data without saving a file, it might be a lot faster, and I could send that to a websocket and save it on the other end. So:

How can I get a base64 data string of the screenshot image without saving it? I imagine it has something to do with the source code of bpy.ops.screen.screenshot, although I have no idea how to modify that. (I'm willing to make my own build of Blender if necessary, but I'm wondering how to even access the source of that function.)
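
(In the meantime, a stopgap sketch: read back the file the operator just wrote and base64-encode it. filepath here is the one built in the handler above; this still pays the disk round trip I'm trying to avoid, but it produces the data URI format I need:)

import base64

with open(filepath, "rb") as f:
    payload = "data:image/png;base64," + base64.b64encode(f.read()).decode()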

EDIT 2: Just found the source of the screenshot code: blender-master\source\blender\editors\screen\screendump.c, function name SCREEN_OT_screenshot. Looking into it now...

EDIT 3: Actually, I think I'm on to something using only native Python. The gpu module docs (https://docs.blender.org/api/blender2.8/gpu.html?highlight=gpu#module-gpu) show how to at least DRAW the 3D camera view as an image. Try running this:

import bpy
import bgl
import gpu
from gpu_extras.presets import draw_texture_2d

WIDTH = 512
HEIGHT = 256

# Render target that lives on the GPU, separate from the viewport.
offscreen = gpu.types.GPUOffScreen(WIDTH, HEIGHT)


def draw():
    context = bpy.context
    scene = context.scene

    # Camera transform and projection for the offscreen render.
    # (context.depsgraph is the early-2.80 API; later builds use
    # context.evaluated_depsgraph_get().)
    view_matrix = scene.camera.matrix_world.inverted()
    projection_matrix = scene.camera.calc_matrix_camera(
        context.depsgraph, x=WIDTH, y=HEIGHT)

    # Render the scene through the camera into the offscreen buffer.
    offscreen.draw_view3d(
        scene,
        context.view_layer,
        context.space_data,
        context.region,
        view_matrix,
        projection_matrix)

    # Blit the offscreen texture into the viewport so it is visible.
    bgl.glDisable(bgl.GL_DEPTH_TEST)
    draw_texture_2d(offscreen.color_texture, (10, 10), WIDTH, HEIGHT)


bpy.types.SpaceView3D.draw_handler_add(draw, (), 'WINDOW', 'POST_PIXEL')
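
(A side note while iterating on this: if you keep the return value of draw_handler_add, you can remove the callback again instead of restarting Blender:)

handle = bpy.types.SpaceView3D.draw_handler_add(draw, (), 'WINDOW', 'POST_PIXEL')
# later, to stop drawing:
# bpy.types.SpaceView3D.draw_handler_remove(handle, 'WINDOW')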

NOW I just have to figure out how to get the data for this as a string that can be sent to a socket server... if anyone knows or can give insight on how to do this, that would be cool.

EDIT 4

OK, so I think I'm getting somewhere. I can now export PNG images in (pretty much) real time (and/or get the data URI / bytes for easy socket communication), WITH an alpha background, and without any toolbars or anything else in the picture! There are still a few glitches, but here's the code so someone can test it out and help. The only dependency is installing PIL (Pillow) into Blender's Python, which I did with these PowerShell commands from the \2.80\python\bin directory:

.\python -m ensurepip --default-pip
.\python -m pip install Pillow

import bpy
import gpu
import bgl
import base64, io, os
import numpy as np
from gpu_extras.presets import draw_texture_2d
from PIL import Image

finalPath = bpy.context.scene.render.filepath + "hithere.png"
WIDTH = 600
HEIGHT = 600

offscreen = gpu.types.GPUOffScreen(WIDTH, HEIGHT)


def draw2():
    global finalPath

    context = bpy.context
    scene = context.scene

    view_matrix = scene.camera.matrix_world.inverted()
    projection_matrix = scene.camera.calc_matrix_camera(
        context.depsgraph, x=WIDTH, y=HEIGHT)

    # Render the camera view into the offscreen buffer, then blit it
    # to the viewport so the pixels land in the back buffer.
    offscreen.draw_view3d(
        scene,
        context.view_layer,
        context.space_data,
        context.region,
        view_matrix,
        projection_matrix)

    bgl.glDisable(bgl.GL_DEPTH_TEST)
    draw_texture_2d(offscreen.color_texture, (0, -125), WIDTH, HEIGHT)

    # Read the RGBA pixels back from the framebuffer.
    buffer = bgl.Buffer(bgl.GL_BYTE, WIDTH * HEIGHT * 4)
    bgl.glReadBuffer(bgl.GL_BACK)
    bgl.glReadPixels(0, 0, WIDTH, HEIGHT, bgl.GL_RGBA, bgl.GL_UNSIGNED_BYTE, buffer)

    # Convert the pixels to an in-memory PNG and base64-encode them.
    array = np.asarray(buffer, dtype=np.uint8)
    myBytes = array.tobytes()
    im = Image.frombytes("RGBA", (WIDTH, HEIGHT), myBytes)
    rawBytes = io.BytesIO()
    im.save(rawBytes, "PNG")
    rawBytes.seek(0)
    base64Encoded = base64.b64encode(rawBytes.read())
    txt = "data:image/png;base64," + base64Encoded.decode()  # websocket payload

    # Also write the PNG to disk for this frame.
    with open(finalPath, "wb") as f:
        f.write(base64.decodebytes(base64Encoded))


def coby(scene):
    # Update the output path on every frame change.
    global finalPath
    frame = scene.frame_current
    folder = scene.render.filepath
    myFormat = "png"  # or scene.render.image_settings.file_format.lower()
    finalPath = os.path.join(folder, "%05d.%s" % (frame, myFormat))


h = bpy.types.SpaceView3D.draw_handler_add(draw2, (), 'WINDOW', 'POST_PIXEL')
bpy.app.handlers.frame_change_pre.clear()
bpy.app.handlers.frame_change_pre.append(coby)

Also, by the way, if you want this to run in real time at more than about 30 FPS, you'd have to make your own build of Blender, following this answer: bgl.Buffer() to bytes.
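
For the websocket side, here's a minimal sketch of how I'd push the txt data URI built in draw2(), assuming the third-party websockets package is installed into Blender's Python and a server is listening at ws://localhost:8765 (note this blocks the draw handler while sending; a background thread or queue would be better):

import asyncio
import websockets

async def send_frame(data_uri):
    async with websockets.connect("ws://localhost:8765") as ws:
        await ws.send(data_uri)

# inside draw2(), after building txt:
# asyncio.get_event_loop().run_until_complete(send_frame(txt))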

If you've tested it with a small animation (try about 150 frames, with an animated camera), you should see that the images are a little distorted at times, like this:

[Screenshot: a smeared capture]

I have a feeling it's something to do with not clearing the bgl state before drawing the next picture; however, I'm not well versed in OpenGL in general, so it might be some other problem, and if someone else knows about it that would be great. As I just noticed, the smearing takes place outside of the camera outline, so it's probably just that anything outside the camera view becomes smeared. So I don't know if there's an easy way to always fit the picture to the camera view, no matter what size (or at least to keep the camera box in the middle of the picture if the aspect ratio isn't the same as the camera's). Also, I'm not sure how to properly change the image size to, let's say, 1920 by 1080 and still fit the camera. Try changing WIDTH and HEIGHT and you should see that the result isn't as expected:

[Screenshot: an offset, smeared 1920×1080 capture]

Besides being very offset from the actual camera position (compared to an actual render), it seems the entire bottom part is smeared downward, probably for the same reason as above, but I just don't know how to fix it. It's also much lower quality if you zoom in at 1920 by 1080, but I want to find a way to keep the best possible quality somehow (it's definitely possible).


EDIT 5


So it pretty much works, albeit with a very low-quality picture; how can I safely increase the resolution while keeping the picture inside the camera view?

Also, I'm trying to implement threads by doing this at the end of the draw2() function, instead of saving inline as above:

    bgl.glReadPixels(0, -125, WIDTH, HEIGHT, bgl.GL_RGBA, bgl.GL_UNSIGNED_BYTE, buffer)
    needle = threading.Thread(target=saveIt, args=[buffer, finalPath, WIDTH, HEIGHT])  # requires: import threading
    needle.daemon = True
    needle.start()
    # thread.start_new_thread(saveIt, (buffer, finalPath, WIDTH, HEIGHT))  # Python 2 only -- see note below

Then, later on, I define saveIt:

def saveIt(buffer, path, width, height):
    array = np.asarray(buffer, dtype=np.uint8)
    myBytes = array.tobytes()
    im = Image.frombytes("RGBA", (width, height), myBytes)
    rawBytes = io.BytesIO()
    im.save(rawBytes, "PNG")
    rawBytes.seek(0)
    base64Encoded = base64.b64encode(rawBytes.read())
    txt = "data:image/png;base64," + base64Encoded.decode()
    # NOTE: this writes to the global finalPath instead of the local
    # `path` argument -- that turns out to be the bug fixed in Edit 7.
    f = open(finalPath, "wb")
    f.write(base64.decodebytes(base64Encoded))
    f.close()
    print("saved image" + finalPath)

I'm mainly doing this to output the images faster, and it almost works, although when I simply play the timeline, some images are skipped, as can be seen here:

[Screenshot: output folder with a mix of old and new frames]

The images with the blue plane are what was output first, without the thread (originally all of the images had the blue plane). Then I played the timeline again using the thread, and almost all of the images were replaced by a plane with a gray background; however, some of the images were missed, and I think that's caused by the thread not completing.

I've been looking around for how to use threads with queues and pooling (https://stackoverflow.com/questions/22582043/how-to-use-python-multiprocessing-pool-map-within-loop, https://stackoverflow.com/questions/6319268/what-happened-to-thread-start-new-thread-in-python-3, https://stackoverflow.com/questions/45039513/how-can-i-make-a-background-non-blocking-input-loop-in-python, https://stackoverflow.com/questions/2905965/creating-threads-in-python), but they all involve an array of input values, or some other method I couldn't figure out how to incorporate here. How do I add work to a queue and make sure each job finishes, when it's just one simple function call at a time?
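
A minimal single-worker-plus-queue sketch of what I mean (the names are mine; jobs are processed in order on one background thread instead of one thread per frame):

import queue, threading

save_queue = queue.Queue()

def worker():
    while True:
        job = save_queue.get()
        if job is None:        # sentinel to shut the worker down
            break
        saveIt(*job)           # job = (buffer, path, width, height)
        save_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

# from draw2(), instead of starting a new thread per frame:
# save_queue.put((buffer, finalPath, WIDTH, HEIGHT))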

(By the way, the last line in the code above was commented out when I ran it; thread.start_new_thread doesn't work in Blender's Python. I get an error because the thread module was removed in Python 3, and I'm not sure whether that old approach would be better than the one I'm using anyway...)

EDIT 6: I think the camera smearing problem is, first of all, because the image is inverted, so that needs to be fixed somehow. Also, I realized that the smeared part is from where the other Blender windows overlap it, so that itself is a problem: how can I simply open a new Blender window displaying the offscreen render, and bgl.glReadPixels off of that? And mainly: I still can't figure out how to get good resolution with higher widths and heights for the offscreen render. When I increase them, the result is just pixelated. How can I fix this? If I need to modify the C/C++ code, then so be it.
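
For the inversion at least, the cause is known: glReadPixels uses a bottom-left origin while PIL assumes top-left, so the rows come out upside down. A one-line fix inside saveIt (sketch):

im = Image.frombytes("RGBA", (width, height), myBytes)
im = im.transpose(Image.FLIP_TOP_BOTTOM)  # undo the bottom-left GL origin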

EDIT 7: OK, I almost have this figured out. I still haven't fixed the quality / image-smearing issues with the camera, but I did fix the thread issue above, where the images weren't saving correctly: I was using the global path variable (which changes on every frame callback) instead of the local "path" variable passed to the thread. My new saveIt function is below (I originally used queues while experimenting, and it works either way for me):

def saveIt(buffer, path, width, height):
    print("now I'm in the actual thread! SO exciting (for picture: " + path + ")")
    array = np.asarray(buffer, dtype=np.uint8)
    myBytes = array.tobytes()
    im = Image.frombytes("RGBA", (width, height), myBytes)
    rawBytes = io.BytesIO()
    im.save(rawBytes, "PNG")
    rawBytes.seek(0)
    base64Encoded = base64.b64encode(rawBytes.read())
    txt = "data:image/png;base64," + base64Encoded.decode()
    filebytes = base64.decodebytes(base64Encoded)
    # myQ.put(filebytes)

    # Use the local `path` argument here, not the global finalPath.
    f = open(path, "wb")
    # while myQ.qsize():
    #     f.write(myQ.get())
    # f.flush()
    f.write(filebytes)
    f.close()
    print("gotmeThis time for picture:" + path)
  • $\begingroup$ "I basically just want to stream the 3D view animation to a websocket" could you explain in more depth what you mean by this? What do you mean by "3D view animation"? Do you want to output an animation? Why not render it with OpenGL? $\endgroup$ Commented Jan 8, 2019 at 8:17
  • $\begingroup$ Better link for screencast: blender.stackexchange.com/questions/49037/… Elaborate on what you mean by in the background. Recommend If you have code post or link to it. From what is posted: Do not change frame with scene.frame_current = f use scene.frame_set(f). The "3D full view" in blender prior 2.8 is in display render only mode no buttons or overlays. . Also would you consider a 3rd party python lib like gtk3 for capturing a screenshot of a window. $\endgroup$
    – batFINGER
    Commented Jan 8, 2019 at 9:58
  • $\begingroup$ Let us continue this discussion in chat. $\endgroup$ Commented Jan 9, 2019 at 0:23
  • 1
    $\begingroup$ Hi. What with all the edits you have done this question is getting quite long. Instead of just continually adding information, I suggest removing any information that may have been superseded by a subsequent edit. If it's all relevant, fine, but I think it's getting less likely that someone will read through the whole thing and some of the info in the early edits might be obsolete now. $\endgroup$ Commented May 26, 2019 at 14:24
  • $\begingroup$ @RayMairlot interesting idea, but do you have any answers to the actual questioN? $\endgroup$ Commented May 26, 2019 at 22:46

1 Answer


The gi lib.

As I mentioned in a comment, I have recently been using the GTK3 toolkit to take screenshots of open windows, to add some window-manager-like functionality to Blender: choosing other open windows (Blender, matplotlib, vim, consoles, etc.) directly from within Blender without falling back to the desktop window manager.

Here is a simple test script. It saves a screenshot of the full screen, of each open window, and of the active window into the folder from which Blender was opened.

You may be able to set a window's geometry to larger than the screen resolution; I haven't tested that.

Most window managers (I like the lightweight ewmh) let you read the window title and class, set geometry, go full screen, remove decorations, etc.

import gi
gi.require_version('Gdk', '3.0')
from gi.repository import Gdk

# full-screen screenshot
window = Gdk.get_default_root_window()
pb = Gdk.pixbuf_get_from_window(window, *window.get_geometry())
pb.savev("full.png", "png", (), ())

# screenshots of all windows
screen = window.get_screen()
for i, w in enumerate(screen.get_window_stack()):
    pb = Gdk.pixbuf_get_from_window(w, *w.get_geometry())
    pb.savev("{}.png".format(i), "png", (), ())

# screenshot of the active window
w = screen.get_active_window()
pb = Gdk.pixbuf_get_from_window(w, *w.get_geometry())
pb.savev("active.png", "png", (), ())

[Image: a crop from "active.png"; the window was active when I chose to run the script.]

This simple example only uses Gdk.pixbuf_get_from_window(...). There are other methods in the toolkit; check out cairo, for example.
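
(Possibly relevant to your base64 goal: I believe GdkPixbuf can also encode to memory rather than to a file via save_to_bufferv — an untested sketch:)

import base64

ok, png_bytes = pb.save_to_bufferv("png", [], [])
data_uri = "data:image/png;base64," + base64.b64encode(png_bytes).decode()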

Notes on installing: https://pygobject.readthedocs.io/en/latest/getting_started.html

  • I'm giving this a try now, but a side question: will this let me take a picture of the 3D view without any of the menu items at the top, and also preserve the alpha channel of the background? I pretty much have that working already with offscreen.draw_view3d; the only real question I still have is how to change the resolution of offscreen.draw_view3d and draw_texture_2d while still preserving quality. I'm still checking out your answer, though. — Commented Jan 10, 2019 at 20:39
  • Thanks for the answer and your effort, although I don't think this is what I'm looking for. This simply takes a screenshot of an active Blender window; I'm trying to get the 3D view as an image, similar to the way the render function works, except by taking a snapshot of the screen (I guess simply using bgl.glReadPixels, since it preserves alpha and it's what the Blender screenshot function actually uses in the source code). The Blender GPU library almost has this working perfectly already; I'm just trying to figure out how to get better resolution. — Commented Jan 10, 2019 at 21:51
  • To clarify: i) PyGObject is more than a GUI library, and ii) the example code takes a screenshot of every window, whether active, on a different workspace, foreground, or background. — batFINGER, Jan 10, 2019 at 22:41
  • Can you please elaborate? I don't understand what you mean by "different workspace", or how taking a screenshot of every Blender window will help me get the 3D view. — Commented Jan 10, 2019 at 22:43
  • Consider this an answer you don't need to accept; it was written with little expectation of that, rather as an alternative way to take screenshots of open windows using a third-party Python module. The code pasted takes a screenshot of the screen and of all open windows, whether active, focused, or in the background. If you open a 2.79 file in 2.8, the full-screen display-render-only screen is available; the settings have moved to overlay or view layer. What I would envisage is: open a 3D view in its own window, grab its handle, then resize and manage it with a window manager (Gdk in this instance). — batFINGER, Jan 11, 2019 at 10:45
