I have Ubuntu Server 20.04 LTS on an Alienware Aurora R6 (i7-7700 CPU with an Nvidia GTX 1080). I am relatively new to type-1 hypervisors and have some experience with type-2 (VirtualBox).

On the host system I have created a Xubuntu 20.04 QEMU VM as a test lab using Cockpit. I configured it to

  • support CPU passthrough
  • support PCI passthrough for the GTX 1080, with a custom vBIOS for this specific GPU also included (patched beforehand using the NVIDIA vBIOS VFIO patcher; a sketch of the dump-and-patch steps follows this list)
  • connect to a bridged network interface that binds it to the host's Ethernet connection for remote (SSH, VNC), internal (institute's network) and Internet access
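
For completeness, this is roughly how such a vBIOS dump and patch can be done. The sysfs dump procedure is standard; the exact patcher invocation below is an assumption on my part, so check the patcher project's README:

# On the host, as root, dump the ROM of the GPU at 01:00.0 (GPU must be idle)
cd /sys/bus/pci/devices/0000:01:00.0
echo 1 > rom                     # make the ROM readable
cat rom > /home/gin/gpu_roms/Dell.GTX1080.8192.170320.rom
echo 0 > rom                     # make it inaccessible again

# Patch the dumped ROM for VFIO use (hypothetical invocation, see the project README)
python3 nvidia_vbios_vfio_patcher.py \
    -i /home/gin/gpu_roms/Dell.GTX1080.8192.170320.rom \
    -o /home/gin/gpu_roms/Dell.GTX1080.8192.170320_PATCHED.rom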

Here are the configurations of the respective devices (the full configuration can be seen at the end of the post):

CPU

<cpu mode='host-passthrough' check='partial'>
  <topology sockets='1' cores='6' threads='1'/>
  <cache mode='passthrough'/>
</cpu>

NETWORK

<interface type='bridge'>
  <mac address='52:54:00:2a:b8:4f'/>
  <source bridge='bridge_vm_eth'/>
  <model type='virtio'/>
  <link state='up'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</interface>

GPU

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <rom file='/home/gin/gpu_roms/Dell.GTX1080.8192.170320.rom'/>
  <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
</hostdev>

The tricky part was the GPU, since I've never done anything like this before. Based on this tutorial I created a passthrough for both the VGA controller and the sound controller, both of which are part of the Nvidia GPU and required for a working driver installation.

I used lspci -nn | grep -i nvidia to get the PCI BDF (bus, device, function) as well as the vendor and device IDs (I used this tutorial for the explanation of which column in the output is what):

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1080] [10de:1b80] (rev a1)
01:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)

and used that to assign the source address for both <hostdev/> entries in my configuration. The guest PCI slots were assigned without too much thought (just looking at the rest of the devices and picking the next available bus/slot number). I have also activated multifunction on the graphics <hostdev/>; I'm not sure whether I need to do that for the sound device too.
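
As a sanity check on the host side I also look at which kernel driver is actually bound to the two functions (the -s selectors assume the 01:00.x addresses from the output above); once the VFIO setup is active, vfio-pci should show up as the driver in use:

lspci -nnk -s 01:00.0
lspci -nnk -s 01:00.1
# expected to contain something like:
#   Kernel driver in use: vfio-pci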

My GRUB configuration looks like this:

GRUB_DEFAULT=0
GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=0
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`

GRUB_CMDLINE_LINUX_DEFAULT="maybe-ubiquity intel_iommu=on vfio-pci.ids=10de:1b80,10de:10f0"
GRUB_CMDLINE_LINUX=""

where I enabled IOMMU for Intel and added the PCI IDs (vendor:device) of the two GPU functions to the list of devices that vfio-pci should claim.
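
After editing /etc/default/grub I regenerate the bootloader configuration and, after a reboot, confirm that the IOMMU is actually active (the grep pattern is just one way to check; any equivalent works):

sudo update-grub
# after a reboot:
sudo dmesg | grep -e DMAR -e IOMMU
ls /sys/kernel/iommu_groups/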

In addition my /etc/modprobe.d/vfio.conf contains only

options vfio-pci ids=10de:1b80,10de:10f0
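
and I regenerate the initramfs afterwards so that the vfio-pci options are already in place during early boot (standard Ubuntu commands; the sysfs path below only exists once the module is loaded):

sudo update-initramfs -u
# after a reboot, confirm the IDs were picked up:
cat /sys/module/vfio_pci/parameters/ids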

Two things I also had to do in order to even be able to install the host OS were to disable Secure Boot and turn on legacy boot mode (perhaps this is also the source of my problem below). In addition, I had to set libvirt's security driver to none due to an issue with AppArmor (I'm currently investigating a fix for that).

After installing Xubuntu 20.04 as my guest VM I did some configuration (proxy, X11VNC, SSH etc.) and immediately checked whether the GPU is detected by running the same lspci command as above:

06:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1080] [10de:1b80] (rev a1)
06:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)

As you can see, both the VGA and sound controllers are detected by the guest OS as PCI devices, and the assigned PCI addresses (see the <hostdev/> XML snippets above) as well as the vendor and device IDs match.

I went on to install the recommended Nvidia drivers

sudo ubuntu-drivers autoinstall

which resulted in a successful installation (as in no error messages during installation) of Nvidia driver 470 (tested, recommended). I also tried to diagnose my issue below with driver versions 390 and 510, which the Ubuntu Additional Drivers tool also lists as available.
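
For what it's worth, these are the generic checks I run in the guest after each driver install to confirm the kernel module is built and loaded (nothing here is specific to this setup):

lsmod | grep nvidia
dkms status
cat /proc/driver/nvidia/version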

Currently my VM is using QXL which, according to the VM's configuration, is also the primary video output device:

<video>
  <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</video>

and is, at least from what I understand, a basic emulated graphics controller that works (on the surface at least) in a very similar manner to integrated graphics, with system RAM being used as VRAM and the CPU providing the rendering.

This is all fine, since I want to use the Nvidia GPU for something else, namely CUDA (of course, I later learned the sad fact that the CUDA 8.0 toolkit, which is the last supported version of the toolkit, is officially supported on Ubuntu 16.04, with some success stories on 18.04 but none on 20.04), so keeping the dedicated GPU free of display duties is perfectly fine.

As a test I installed DaVinci Resolve 17. Upon launch I was greeted with a message that the GPU processing mode is not supported and further configuration is required. Needless to say, I got this:

[Screenshot: no GPU detected inside DaVinci Resolve 17; the list of available GPUs is empty]

The next thing I did was to load Nvidia X Server Settings (first from the GUI, then from a terminal). I was greeted with this:

[Screenshot: a blank Nvidia X Server Settings window]

Upon launching from a terminal I got

ERROR: An internal driver error occurred

ERROR: Unable to load info from any available system

(nvidia-settings:1536): GLib-GObject-CRITICAL **: 11:02:41.340: g_object_unref: assertion 'G_IS_OBJECT (object)' failed
** Message: 11:02:41.343: PRIME: No offloading required. Abort
** Message: 11:02:41.343: PRIME: is it supported? no

Further, of course, nvidia-smi gives

Unable to determine the device handle for GPU 0000:06:00.0: Unknown Error

which, needless to say, is a highly descriptive error message.

Any ideas what I'm missing? I've read about adding iommu=pt for better performance and about the infamous Error 43, which according to my research should not apply to recent drivers, at least on Linux.
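
In case it helps, this is roughly how I collected the kernel messages quoted in the update below (the grep filters are my own choice; any equivalent works):

# inside the guest
journalctl -k | grep -iE 'nvidia|nvrm'
# on the host
sudo dmesg | grep -iE 'vfio|nvidia|BAR'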


UPDATE

Here is a part of the kernel log from the VM that I find interesting:

Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    4.485748] nvidia: module license 'NVIDIA' taints kernel.
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    4.485753] Disabling lock debugging due to kernel taint
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    4.488862] input: HDA NVidia HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:02.5/0000:06:00.1/sound/card1/input5
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    4.488890] input: HDA NVidia HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:02.5/0000:06:00.1/sound/card1/input6
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    4.488911] input: HDA NVidia HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:02.5/0000:06:00.1/sound/card1/input7
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    4.488933] input: HDA NVidia HDMI/DP,pcm=9 as /devices/pci0000:00/0000:00:02.5/0000:06:00.1/sound/card1/input8
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    4.488956] input: HDA NVidia HDMI/DP,pcm=10 as /devices/pci0000:00/0000:00:02.5/0000:06:00.1/sound/card1/input9
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    4.488979] input: HDA NVidia HDMI/DP,pcm=11 as /devices/pci0000:00/0000:00:02.5/0000:06:00.1/sound/card1/input10
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    4.489001] input: HDA NVidia HDMI/DP,pcm=12 as /devices/pci0000:00/0000:00:02.5/0000:06:00.1/sound/card1/input11
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    4.508386] nvidia: module verification failed: signature and/or required key missing - tainting kernel
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    4.530278] nvidia-nvlink: Nvlink Core is being initialized, major device number 511
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    4.530287]
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    4.537228] nvidia 0000:06:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=none
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    4.739718] cryptd: max_cpu_qlen set to 1000
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    4.743814] AVX2 version of gcm_enc/dec engaged.
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    4.743828] AES CTR mode by8 optimization enabled
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    5.015914] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  470.103.01  Thu Jan  6 12:10:04 UTC 2022
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    5.025646] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms  470.103.01  Thu Jan  6 >
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    5.027257] [drm] [nvidia-drm] [GPU ID 0x00000600] Loading driver
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    5.027259] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:06:00.0 on minor 1
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    5.036862] nvidia_uvm: module uses symbols from proprietary module nvidia, inheriting taint.
Mar 30 09:33:04 SZA-DT043-L-VM0 kernel: [    5.038962] nvidia-uvm: Loaded the UVM driver, major device number 509.

...


Mar 30 09:33:11 SZA-DT043-L-VM0 kernel: [   13.548312] NVRM: GPU 0000:06:00.0: RmInitAdapter failed! (0x24:0xffff:1211)
Mar 30 09:33:11 SZA-DT043-L-VM0 kernel: [   13.548440] NVRM: GPU 0000:06:00.0: rm_init_adapter failed, device minor number 0
Mar 30 09:33:11 SZA-DT043-L-VM0 kernel: [   13.715984] NVRM: GPU 0000:06:00.0: RmInitAdapter failed! (0x24:0xffff:1211)
Mar 30 09:33:11 SZA-DT043-L-VM0 kernel: [   13.716118] NVRM: GPU 0000:06:00.0: rm_init_adapter failed, device minor number 0
Mar 30 09:33:18 SZA-DT043-L-VM0 kernel: [   21.252309] input: spice vdagent tablet as /devices/virtual/input/input12
Mar 30 09:33:19 SZA-DT043-L-VM0 kernel: [   21.940404] NVRM: GPU 0000:06:00.0: RmInitAdapter failed! (0x24:0xffff:1211)
Mar 30 09:33:19 SZA-DT043-L-VM0 kernel: [   21.940534] NVRM: GPU 0000:06:00.0: rm_init_adapter failed, device minor number 0
Mar 30 09:33:20 SZA-DT043-L-VM0 kernel: [   22.332615] NVRM: GPU 0000:06:00.0: RmInitAdapter failed! (0x24:0xffff:1211)
Mar 30 09:33:20 SZA-DT043-L-VM0 kernel: [   22.332750] NVRM: GPU 0000:06:00.0: rm_init_adapter failed, device minor number 0

The kernel log from the host contains

...

Mar 30 10:37:54 SZA-DT043-L kernel: [416243.385098] snd_hda_intel 0000:01:00.1: Disabling MSI
Mar 30 10:37:54 SZA-DT043-L kernel: [416243.385117] snd_hda_intel 0000:01:00.1: Handle vga_switcheroo audio client
Mar 30 10:37:56 SZA-DT043-L kernel: [416245.129722] input: HDA NVidia HDMI/DP,pcm=3 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input28
Mar 30 10:37:56 SZA-DT043-L kernel: [416245.129916] input: HDA NVidia HDMI/DP,pcm=7 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input29
Mar 30 10:37:56 SZA-DT043-L kernel: [416245.130094] input: HDA NVidia HDMI/DP,pcm=8 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input30
Mar 30 10:37:56 SZA-DT043-L kernel: [416245.130283] input: HDA NVidia HDMI/DP,pcm=9 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input31
Mar 30 10:37:56 SZA-DT043-L kernel: [416245.130432] input: HDA NVidia HDMI/DP,pcm=10 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input32
Mar 30 10:37:56 SZA-DT043-L kernel: [416245.130582] input: HDA NVidia HDMI/DP,pcm=11 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input33
Mar 30 10:37:56 SZA-DT043-L kernel: [416245.130729] input: HDA NVidia HDMI/DP,pcm=12 as /devices/pci0000:00/0000:00:01.0/0000:01:00.1/sound/card1/input34

...

Mar 30 11:10:54 SZA-DT043-L kernel: [  134.197857] vfio-pci 0000:01:00.0: BAR 3: can't reserve [mem 0xd0000000-0xd1ffffff 64bit pref]

...

Mar 30 11:11:03 SZA-DT043-L kernel: [  143.752631] vfio-pci 0000:01:00.0: BAR 3: can't reserve [mem 0xd0000000-0xd1ffffff 64bit pref]
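
The "can't reserve" messages suggest that something on the host still owns that memory window; one way to check is to look the address range up in /proc/iomem (the address comes straight from the message above):

sudo grep -i -B1 -A2 'd0000000' /proc/iomem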

VM full configuration XML file

<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh edit Xubuntu20.04
or other application using the libvirt API.
-->

<domain type='kvm'>
  <name>Xubuntu20.04</name>
  <uuid>3534d4f4-c899-402c-b82b-ea34b4b9b65e</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://ubuntu.com/ubuntu/20.04"/>
    </libosinfo:libosinfo>
    <cockpit_machines:data xmlns:cockpit_machines="https://github.com/cockpit-project/cockpit/tree/master/pkg/machines">
      <cockpit_machines:has_install_phase>false</cockpit_machines:has_install_phase>
      <cockpit_machines:install_source_type>disk_image</cockpit_machines:install_source_type>
      <cockpit_machines:install_source>/etc/libvirt/qemu/Xubuntu20.04.xml</cockpit_machines:install_source>
      <cockpit_machines:os_variant>ubuntu20.04</cockpit_machines:os_variant>
    </cockpit_machines:data>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <vcpu placement='static' current='1'>6</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <vmport state='off'/>
  </features>
  <cpu mode='host-passthrough' check='partial'>
    <topology sockets='1' cores='6' threads='1'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='volume' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source pool='default' volume='Xubuntu20.04.qcow2'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x2'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0x16'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:1e:cf:8b'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='52:54:00:2a:b8:4f'/>
      <source bridge='bridge_vm_eth'/>
      <model type='virtio'/>
      <link state='up'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
      <image compression='off'/>
    </graphics>
    <graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>
    <sound model='ich9'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
    </sound>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <rom file='/home/gin/gpu_roms/Dell.GTX1080.8192.170320_PATCHED.rom'/>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
    </hostdev>
    <redirdev bus='usb' type='spicevmc'>
      <address type='usb' bus='0' port='2'/>
    </redirdev>
    <redirdev bus='usb' type='spicevmc'>
      <address type='usb' bus='0' port='3'/>
    </redirdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </rng>
  </devices>
</domain>

1 Answer

I removed the custom ROM following advice on Reddit. This didn't help, but it at least eliminated one possible source of the problem at hand.

Based on this article I checked the IOMMU group that my Nvidia GPU belongs to, and it turned out that the group also contained an Intel PCI bridge:

IOMMU Group 1:
                00:01.0 PCI bridge [0604]: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 05)
                01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1080] [10de:1b80] (rev a1)
                01:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)
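
For reference, a listing like the one above can be produced with the commonly used sysfs loop below (not specific to this machine):

#!/bin/bash
# list every IOMMU group together with the devices it contains
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done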

After removing the PCI bridge

echo 1 > /sys/bus/pci/devices/0000\:00\:03.1/remove
echo 1 > /sys/bus/pci/rescan

and rebooting the VM, I was able to fix the issue. To make the fix permanent I added a QEMU hook for the prepare stage containing the code above and stored it as /etc/libvirt/hooks/qemu.d/Xubuntu20.04/prepare/begin/script.sh, so that libvirt runs it automatically before the VM starts.
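
A minimal sketch of what that hook script looks like (the commands are exactly the ones above; the script just needs to be executable, e.g. via chmod +x):

#!/bin/bash
# /etc/libvirt/hooks/qemu.d/Xubuntu20.04/prepare/begin/script.sh
# Detach the offending PCI bridge before the VM starts, then rescan the bus.
echo 1 > /sys/bus/pci/devices/0000\:00\:03.1/remove
echo 1 > /sys/bus/pci/rescan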
