
I want to enable GPU rendering, but there is no option in User Preferences > System:

[Screenshot: User Preferences > System, with no Compute Device option shown]

Why is this? How can I get cycles to render using my GPU?

  • I'm using openSUSE 13.1 x64 with the official Nvidia repo drivers installed. However, despite having bought an Nvidia GeForce 650GT, I still have no GPU option available in Blender. What am I missing or doing wrong? Commented May 15, 2014 at 13:42
  • @user3305984 Without more info it's hard to say. This site isn't really designed for back-and-forth discussions (as troubleshooting will undoubtedly require), so you'll probably have better luck on a forum like BlenderArtists
    – gandalf3
    Commented May 29, 2014 at 19:19
  • For Linux Mint (and maybe other distros), also read this: blender.stackexchange.com/a/31111/1853
    – user1853
    Commented May 19, 2015 at 17:33

3 Answers


Ensure GPU Support

First, make sure you have a supported graphics card for GPU rendering. Currently Cycles supports:

Nvidia

CUDA or OptiX on Nvidia devices

Cycles only supports CUDA GPUs with a CUDA compute capability of 3.0 or higher. To use CUDA, check that your GPU is on this list of CUDA-capable GPUs and has a compute capability of at least 3.0.

If you are running an older Nvidia card (e.g. older GeForce series cards below compute capability 3.0), support is extremely limited and these cards are not officially supported; see How to enable GPU rendering on older Nvidia GPUs?
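If you are on Linux and unsure of your card's compute capability, recent drivers let nvidia-smi query it directly (the compute_cap query field assumes a reasonably new driver). The helper below is just a sketch with an invented name, showing how to compare the reported value against the 3.0 minimum:

```shell
# Query name and compute capability (requires a recent nvidia-smi):
#   nvidia-smi --query-gpu=name,compute_cap --format=csv,noheader
# prints e.g. "GeForce GTX 650, 3.0"

# Hypothetical helper: succeeds if the given compute capability
# string meets Cycles' minimum of 3.0.
meets_min_cap() {
    awk -v cap="$1" 'BEGIN { exit !(cap + 0 >= 3.0) }'
}

meets_min_cap "3.5" && echo "CUDA supported" || echo "not supported"  # → CUDA supported
meets_min_cap "2.1" && echo "CUDA supported" || echo "not supported"  # → not supported
```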

AMD

Experimental support for OpenCL devices exists as of 2.75 (added in B7f447). If you are using an AMD/ATI graphics card, see the OpenCL section below.

As of Blender 3.0, Cycles X was introduced, and with it rendering with OpenCL on AMD GPUs has been deprecated. If you have an AMD Radeon RX 5000 Series, AMD Radeon RX 6000 Series, or AMD Radeon Pro W6000 Series card, you can use the new HIP device.

Intel

As of Blender 3.3 LTS, initial support for Cycles GPU acceleration on Intel's new Arc line of dedicated graphics cards has been introduced for Windows and Linux, using the new oneAPI backend.

Install Latest Drivers

If your GPU has a CUDA compute capability of 3.0 or higher and you still don't have the option to enable GPU rendering, you can check a couple more things:

Below are instructions for various operating systems. If you are still having issues after trying all the steps listed in this post, try asking for support on BlenderArtists;
this site is not well suited to the localized troubleshooting discussions often needed to untangle unusual hardware/driver issues.

Linux

Run as root
Due to an issue with some versions of the Nvidia drivers, you must run Blender (or any other program which uses CUDA) as root before you can use any CUDA features as a normal user. See this thread for more detail.

Ubuntu-based Distributions

  • Open your driver manager and select the recommended driver and Apply Changes.


  • You can also use the terminal to install the latest stable driver.

      $ sudo apt-add-repository ppa:ubuntu-x-swat/x-updates
      $ sudo apt-get update
      $ sudo apt-get install nvidia-current
    

For Linux Mint, Ubuntu, and Debian variants (and maybe other distributions) you will also need to install the package nvidia-modprobe, which will detect your Nvidia CUDA device and make it available to Blender. Read this answer for further instructions.

Debian Jessie

Please note that these instructions were put together in June 2015 on Debian Jessie. Although Debian is a very stable distribution, it is likely that they will be out of date by Debian Stretch. If you have more up-to-date information, please feel free to edit this.

Before we can install the drivers, we will need to install the kernel headers along with packages from the contrib and non-free repositories. If these repositories haven't been added already, open /etc/apt/sources.list with nano:

$ sudo nano /etc/apt/sources.list

And add:

deb http://http.debian.net/debian/ jessie main contrib non-free 

For Debian to recognize the repository, we will need to refresh the package list:

$ sudo apt-get update

Once this is done, the headers can be installed:

$ sudo apt-get install linux-headers-$(uname -r|sed 's,[^-]*-[^-]*-,,') nvidia-kernel-dkms

With some sed magic, this will install the correct headers for your version of the kernel.
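To see what that sed expression actually does, run it on a sample kernel release string (the version shown is just an example): it strips the leading version and ABI prefix, leaving the flavour suffix that names the headers metapackage:

```shell
# "uname -r" prints something like "3.16.0-4-amd64" on Jessie.
# The sed expression deletes everything up to and including the
# second "-", leaving only the flavour:
echo "3.16.0-4-amd64" | sed 's,[^-]*-[^-]*-,,'
# → amd64   (so the package installed is linux-headers-amd64)
```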

Now we need to make Xorg use the proprietary nvidia driver instead of the open source nouveau driver. To do this, we will create an Xorg configuration file. Note that a plain sudo echo ... > file would fail, because the redirection is performed by the unprivileged shell, so we pipe through sudo tee instead:

$ sudo mkdir -p /etc/X11/xorg.conf.d
$ echo -e 'Section "Device"\n\tIdentifier "My GPU"\n\tDriver "nvidia"\nEndSection' | sudo tee /etc/X11/xorg.conf.d/20-nvidia.conf
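For reference, the resulting /etc/X11/xorg.conf.d/20-nvidia.conf should contain (with the echo escapes expanded):

```
Section "Device"
	Identifier "My GPU"
	Driver "nvidia"
EndSection
```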

And reboot the computer.

$ sudo reboot

All that is required afterwards is to install CUDA:

$ sudo apt-get install nvidia-cuda-toolkit

For more in-depth information, see https://wiki.debian.org/NvidiaGraphicsDrivers (it only covers the drivers, not CUDA). If you are running a GTX 970 or 980, you will need a special build of CUDA, available here.


Arch Linux

Identifying your GPU:

From the Arch wiki:

If you don't know what GPU you have, you can find out by running:

$ lspci -k | grep -A 2 -i "VGA"
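If you're unsure how to read the output, here is the same filter run on a canned sample from a hypothetical machine (the subsystem line is made up). The Kernel driver in use line tells you whether the proprietary nvidia driver or the open source nouveau driver is currently loaded:

```shell
# Hypothetical "lspci -k" output, piped through the same filter as above:
sample='01:00.0 VGA compatible controller: NVIDIA Corporation GK107 [GeForce GTX 650]
	Subsystem: ASUSTeK Computer Inc. Device 8428
	Kernel driver in use: nvidia'

# Prints the matching line plus the 2 lines after it; the last line
# shows which driver is loaded ("nvidia" vs "nouveau").
printf '%s\n' "$sample" | grep -A 2 -i "VGA"
```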

Drivers and CUDA:

For Arch Linux, installing proprietary Nvidia drivers for your GPU can be as simple as installing the nvidia package and then rebooting:

# pacman -S nvidia
# systemctl reboot

If you are compiling Blender from source, you will also need the CUDA toolkit. You can get it by installing the cuda package:

# pacman -S cuda

Windows 7

  1. Find out what GPU you have in the Device Manager (go to Start -> Control Panel -> System and Security -> System -> Device Manager), then open the Display adapters tree.

[Screenshot of Device Manager showing an NVIDIA GeForce GTX 580]

  2. To find out the architecture of your Windows installation, open a command prompt (search for cmd in the start menu) and run wmic os get osarchitecture.


Alternatively, you can get this information from a GUI by going to Start -> Control Panel -> System and Security -> System, or by using the keyboard shortcut Windows Key + Pause.

[Screenshot showing the architecture of Windows]

  3. Go to the Nvidia website and select your driver.


  4. Finally, download and install the proper driver for your architecture. I am assuming you know how to use installers.


Nvidia Optimus:

If you're running Blender on a notebook with Nvidia Optimus, make sure it uses the dedicated GPU. Either configure Blender to always use the dedicated GPU over the integrated one in the Nvidia Control Panel, or right-click Blender.exe (or a shortcut to Blender) and select the Nvidia GPU in the Run with graphics processor menu:

[Screenshot: Run with graphics processor > High-performance Nvidia processor (GPU)]


OS X

Install the latest Nvidia driver for your graphics card. You can download it from the Nvidia website.

  1. Open the CUDADriver.pkg file by double clicking it.


  2. Go through the installer.


  3. If it installed correctly, there should be a new CUDA option in the System Preferences (the only time you need to go here is to install updates):



Finally, after you have installed your drivers:

  1. Restart your computer

  2. Start Blender.

  3. There should now be an option in Blender's settings allowing you to select CUDA and your GPU:


  4. Then select the GPU in Render settings > Render > Device:



OpenCL

As of Blender 2.75, AMD HD 7xxx+ GPUs are officially supported. Other OpenCL devices may work, and can be tested by force-enabling OpenCL with an environment variable:

CYCLES_OPENCL_SPLIT_KERNEL_TEST=1

Also see Is it possible to do OpenCL rendering on Intel processors?

Ubuntu/Debian

On Ubuntu/Debian you may need to install the ocl-icd-opencl-dev package.

ArchLinux

Nvidia OpenCL
To get OpenCL working on Nvidia GPUs, ensure that the opencl-nvidia package is installed:

# pacman -S opencl-nvidia

Then run blender with the environment variable set to 1:

CYCLES_OPENCL_SPLIT_KERNEL_TEST=1 blender

In User Preferences > System there should now be an OpenCL option:


If it's selected, rendering on the GPU will now use OpenCL. Note that the first time you try to render, Blender will first have to compile the necessary kernels, which may take some time.

  • This was on IRC yesterday: kaito: look how 'gandalf' is replying things blender.stackexchange.com/questions/7485/… [11:15am] Severin: that's what I call an answer – Commented Aug 25, 2014 at 10:49
  • @MarcClintDion This was a team answer, credit must also go to Vader, CharlesL, CoDEmanX, and catlover2 :)
    – gandalf3
    Commented Aug 25, 2014 at 17:55
  • @MarcClintDion You can always check this in the revision history.
    – iKlsR
    Commented Sep 21, 2014 at 9:29
  • @JMY1000 I think so, but I wouldn't know for sure. It sounds like there might be some way to get it working on the open source drivers, maybe.
    – gandalf3
    Commented Apr 16, 2016 at 7:53
  • So important to check that list for compatibility. My GeForce GT 525M ranks a 2.1, so there's no CUDA available for it since it is not 3.0 or higher. I was about to try a different Linux distro! Thanks! – Commented Apr 3, 2020 at 19:01

Also note that you need to change two settings to enable GPU rendering. The obvious one is in User Preferences > System. You also need to set it for the blend file (scene): click the camera icon on the left of the Properties window, and under the Render section there is a setting for Device.

  • If this is important, it might have to be included in that other, much more detailed community wiki answer?
    – Samoth
    Commented Apr 16, 2016 at 9:43

Sharing my recent experience with 2.8:

If both the CPU and the GPU are checked in Preferences > System, Blender will prioritize the CPU and the render will be slower.

When I unchecked the CPU, I only saw one "processing square" (render tile) at a time, but it was really fast: about 1/6 of the old total render time.

I hope this helps someone.
