
I'm rendering the exact same file in Blender on two computers. It's an animation (a frame from it shows Yoda and R2).
I'm using the network rendering placeholder and non-overwrite functions so that each computer renders individual frames. If you're not familiar with how this works, it's not really important to my question, but you can find out about it here.

Anyway, the render is being done in Cycles with GPU acceleration enabled. I am using the Experimental feature set for the GPU renderer, as there is subsurface scattering on the skin and cloth, which is only available with Experimental. So the exact same file/scene is being rendered frame by frame by my two computers (one does, say, frame 275, the other then does 276, so they're rendering almost exactly the same content).
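For reference, the setup described above (Cycles, GPU compute, Experimental feature set) can be configured from Blender's Python console. This is a minimal sketch using the Blender 2.7x API, which matches the version in this thread; it only runs inside Blender:

```python
import bpy

scene = bpy.context.scene

# Use Cycles and the Experimental feature set (required for SSS on GPU in 2.7x).
scene.render.engine = 'CYCLES'
scene.cycles.feature_set = 'EXPERIMENTAL'

# Render on the GPU via CUDA (2.7x user-preferences API; this moved
# to bpy.context.preferences.addons['cycles'] in Blender 2.8+).
scene.cycles.device = 'GPU'
bpy.context.user_preferences.system.compute_device_type = 'CUDA'
```

This just mirrors the settings the question describes; it is not a fix for the slowdown.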

Both computers run Windows 7 and Blender 2.76b. One is an older i7 @ 2.4 GHz with 12 GB RAM and a GTX 670 graphics card. The other is a brand new i7 @ 4 GHz with 32 GB RAM and a GTX 970 graphics card.

My older, slower computer with the GTX 670 is rendering frames in about 2m40s, while my new computer with the GTX 970, which has more memory, more CUDA cores, and a faster GPU clock (and should be way faster), is rendering them in 6-7m.

I have double-checked that both are rendering on the GPU. I turned off GPU rendering on both just to check, and sure enough, both were way slower, as expected. Both have the newest drivers. I have tried restarting, and uninstalling and reinstalling the driver.

I did get a weird error on my newer computer on one render that said there was a CUDA error, so I'm thinking this may have something to do with it, but I haven't been able to recreate the error. Since this computer is almost brand new, I don't have a lot of history to go on.

Bottom line: render speed on these should basically come down to the GPU, and the newer, faster GPU with more CUDA cores and more memory should render faster, unless the newer GPU is somehow missing something the older one has.

I know that unfortunately with this information it will be difficult to diagnose the problem, but I really don't know what else to include. I'm hoping that someone will be able to point me to something to check.

  • Have you tried playing around with the tile sizes for rendering? Depending on the number of CUDA cores, different tile sizes will be optimal.
    – maddin45
    Commented Feb 18, 2016 at 23:16
  • Thank you for your response. I have, unfortunately. My default in the file is 256x256, which I believe Andrew Price recommended for GPU rendering, but I tried 512x512, 128x128, and 64x64. All gave only small differences in render time, unfortunately. But thanks for the suggestion!
    – Blazer003
    Commented Feb 18, 2016 at 23:23
  • Might be worth reporting this to the devs, either via the mailing list or the bug tracker
    – gandalf3
    Commented Feb 19, 2016 at 0:14
  • Thanks for the tips, guys. I disabled the subsurface scattering and turned off the Experimental rendering, and the newer GTX 970 ran faster, like it should. Not as much faster as I was hoping, but at least faster. So there must be something in the experimental Cycles kernel that my old graphics card handled better. New SSS speedups are in the 2.77 release, so I'm going to see how that compares in the 2.77 test build that was just released.
    – Blazer003
    Commented Feb 22, 2016 at 23:54

2 Answers


I think it's a known bug, but no solution has been found yet. There isn't much you can do but wait for a fix.

See https://developer.blender.org/T45093


This probably won't solve this exact problem, but you can increase render speed by enabling the Auto Tile Size add-on; it helps a lot with picking the right tile sizes for different GPUs.
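As a sketch of what this answer and the comments above are suggesting: in Blender 2.7x, tile size is set per scene, and the bundled Auto Tile Size add-on can be enabled from a script. This assumes the 2.7x API and the add-on's bundled module name `render_auto_tile_size`, and only runs inside Blender:

```python
import bpy

scene = bpy.context.scene

# Manually set the Cycles tile size (2.7x API). Larger tiles, e.g. 256x256,
# usually favor GPUs; smaller tiles, e.g. 32x32, usually favor CPUs.
scene.render.tile_x = 256
scene.render.tile_y = 256

# Or enable the bundled Auto Tile Size add-on, which picks a tile size
# suited to the active compute device automatically.
bpy.ops.wm.addon_enable(module="render_auto_tile_size")
```

As the comments note, though, the asker already tried several tile sizes with little effect, so this is a general speedup tip rather than a fix.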

