
I'm making a POV fan and would like to display some simple graphics on it in real time. I have a working naive solution with OpenGL, but I'm maxing out at 7 fps rendering a simple cube on a Raspberry Pi 3 B+.

Normal rasterization of an image, as I understand it, happens as illustrated below, where the pixels are processed left-to-right, top-to-bottom:

[Figure: traditional row-by-row rasterization]

However, to be displayed on a rotating fan, the rasterization needs to be radial, like in this image:

[Figure: radial rasterization pattern]

Now, I'm not sure whether this is best achieved with a custom rasterizer or by applying an extra matrix after the projection matrix. My current solution is: for every "frame", render a 128x1 pixel image, rotate the camera slightly about its viewing axis, and repeat until the camera has rotated 360 degrees. This works, but it's abysmally slow and pretty hacky.

Another thing unique to this problem is that the final result never needs to be shown on screen. The colors in the color buffer are sent out over a simplified SPI protocol to the LED fan blade. I'm programming this in OpenGL 2.1, so any relevant code examples would be much appreciated!
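In rough form, my per-slice loop looks something like this (simplified; drawScene() and spiSend() are just stand-ins for my actual cube drawing and SPI output code):

```c
/* Simplified version of my current approach: one 128x1 render per
 * angular slice, rolling the view a little further each time.
 * drawScene() and spiSend() are placeholders for my real code. */
#include <GL/gl.h>
#include <stdint.h>

#define SLICES 360   /* angular slices per revolution */
#define LEDS   128   /* LEDs along one blade          */

extern void drawScene(void);                       /* draws the cube       */
extern void spiSend(const uint8_t *rgb, int len);  /* pushes one slice out */

void renderRevolution(void)
{
    uint8_t row[LEDS * 3];

    glViewport(0, 0, LEDS, 1);
    for (int i = 0; i < SLICES; ++i) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        /* Roll the camera about its viewing axis for this slice. */
        glRotatef(i * 360.0f / SLICES, 0.0f, 0.0f, 1.0f);
        drawScene();
        glPopMatrix();

        /* Read the single rendered row back and send it to the blade. */
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, LEDS, 1, GL_RGB, GL_UNSIGNED_BYTE, row);
        spiSend(row, (int)sizeof row);
    }
}
```

So one displayed "frame" costs hundreds of tiny renders and readbacks, which is presumably where the 7 fps ceiling comes from.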


1 Answer


GPU hardware rasterization can't generate the kind of image you're looking for directly. However, one option would be to render a normal image to an offscreen buffer, and then use a pixel shader to convert it to the format to send out to the LEDs.

In other words, run a full-screen pass on the output render target, with a shader that interprets its input X as radius and Y as time, and samples the calculated location in the previously rendered image as a texture. Then the output render target can be read back, and will contain the successive LED values row by row.
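A minimal sketch of that pass in GLSL 1.20, assuming the normally rendered frame is bound as a texture called uSceneTex and that a trivial vertex shader forwards gl_MultiTexCoord0 into gl_TexCoord[0] (the uniform name and the [0, 1] coordinate conventions are illustrative, not fixed):

```glsl
// Polar-resampling fragment shader (GLSL 1.20, OpenGL 2.1).
// The output target's X axis is interpreted as radius along the blade,
// and its Y axis as the blade angle over one revolution (i.e. time).
uniform sampler2D uSceneTex;  // the previously rendered frame

void main()
{
    float radius = gl_TexCoord[0].x;             // 0 = hub, 1 = blade tip
    float angle  = gl_TexCoord[0].y * 6.2831853; // one full revolution

    // Convert (radius, angle) back to a cartesian sample position
    // centred on the middle of the source image.
    vec2 centre = vec2(0.5, 0.5);
    vec2 src    = centre + 0.5 * radius * vec2(cos(angle), sin(angle));

    gl_FragColor = texture2D(uSceneTex, src);
}
```

Draw a single quad covering the whole output target with this program; each output row then holds the LED colours for one angular position, and each column corresponds to one LED along the blade.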

  • Alright, I think I understand. So I should render as normal using my standard vertex and fragment shaders, then pass the result of that render as a texture to another shader program, along with the desired polar coordinates that would be used by the fan. And in the new fragment shader, do some math to turn those polar coordinates into Cartesian coordinates for sampling the first render? How would I pass the result between programs? Using glReadPixels()? Or is there a better way? Commented Oct 28, 2021 at 5:04
  • Yep, you've got the idea. In OpenGL 2.1, you'll be using the "pixel buffer object" functionality. You can look that up, and also search the phrase "render to texture". Commented Oct 28, 2021 at 5:50
  • You could render to a 128x1 output buffer from your original 2D render image and read that in whole (glReadPixels) to the CPU side so it can be sent to your external device, roughly as in the sketch after these comments.
    – PaulHK
    Commented Oct 29, 2021 at 5:03
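Putting those comments together, a hedged sketch of the readback step in OpenGL 2.1, using a plain synchronous glReadPixels (LED_COUNT, STEPS and spi_send_row() are illustrative names, not from the post):

```c
/* Sketch of reading the polar pass's output back so it can be streamed
 * over SPI.  Assumes the render target the polar shader wrote to
 * (back buffer or FBO attachment) is currently bound for reading. */
#include <GL/gl.h>
#include <stdint.h>

#define LED_COUNT 128   /* LEDs along one blade          */
#define STEPS     360   /* angular slices per revolution */

extern void spi_send_row(const uint8_t *rgb, int bytes);

void send_revolution_to_blade(void)
{
    static uint8_t frame[STEPS][LED_COUNT][3];

    /* Tightly packed RGB so each row maps directly onto one SPI transfer. */
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, LED_COUNT, STEPS, GL_RGB, GL_UNSIGNED_BYTE, frame);

    /* Row y holds the LED colours for angular slice y of the revolution. */
    for (int y = 0; y < STEPS; ++y)
        spi_send_row(&frame[y][0][0], LED_COUNT * 3);
}
```

A pixel buffer object can later be layered on top to make the readback asynchronous, as suggested above, but the synchronous call is enough to get the pipeline working.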
