# NVIDIA vGPU with the GRID 14.3 driver
A few days ago, NVIDIA released their latest enterprise GRID driver. I created a patch that allows the use of most consumer GPUs for vGPU. One notable exception is every officially unsupported Ampere GPU.
This guide and all my tests were done on an RTX 2080 Ti, which is based on the Turing architecture.
### This tutorial assumes you are using a clean install of Proxmox 7.3; ymmv when using an existing installation. Make sure to always have backups!
The patch included in this repository should work on other linux systems with kernel versions 5.13 to 5.16 but I have only tested it on the current proxmox version.
If you are not using proxmox, you have to adapt some parts of this tutorial to work for your distribution.
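Before going further, you can quickly check which kernel you are running; the patch targets kernels 5.13 to 5.16:

```shell
# Print the running kernel release; for the patch in this repository it should
# fall somewhere in the 5.13 to 5.16 range (on Proxmox 7.3 it typically does)
uname -r
```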
## Packages
```bash
apt update
apt dist-upgrade
```
We need to install a few more packages like git, a compiler and some other tools.
```bash
apt install -y git build-essential dkms pve-headers mdevctl
```
First, clone this repo to your home folder (in this case `/root/`)
```bash
git clone https://gitlab.com/polloloco/vgpu-proxmox.git
```
You also need the vgpu_unlock-rs repo
```bash
cd /opt
git clone https://github.com/mbilker/vgpu_unlock-rs.git
```
After that, install the rust compiler
```bash
echo -e "[Service]\nEnvironment=LD_PRELOAD=/opt/vgpu_unlock-rs/target/release/libvgpu_unlock_rs.so" > /etc/systemd/system/nvidia-vgpu-mgr.service.d/vgpu_unlock.conf
```
> ### Have a vgpu supported card? Read here!
>
> If you don't have a card like the Tesla P4, or any other gpu from [this list](https://docs.nvidia.com/grid/gpus-supported-by-vgpu.html), please continue reading at [Enabling IOMMU](#enabling-iommu)
>
> Disable the unlock part, as running it on a gpu that already supports vgpu could break things: it introduces unnecessary complexity and more points of possible failure:
> ```bash
> echo "unlock = false" > /etc/vgpu_unlock/config.toml
> ```
## Enabling IOMMU
#### Note: Usually this isn't required for vGPU to work, but it doesn't hurt to enable it. You can skip this section, but if you run into problems later on, make sure to enable IOMMU.
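The parameters involved are the usual IOMMU switches; the line below is an illustrative example for an Intel CPU (use `amd_iommu=on` instead on AMD systems), not text from the repo:

```shell
# Example kernel command line for /etc/default/grub on an Intel CPU;
# substitute amd_iommu=on for intel_iommu=on on AMD systems
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
```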
Depending on which system you are using to boot, you have to choose from the following options:
<details>
<summary>GRUB</summary>
Open the file `/etc/default/grub` in your favorite editor
```bash
nano /etc/default/grub
```
</details>
<details>
<summary>systemd-boot</summary>
The kernel parameters have to be appended to the commandline in the file `/etc/kernel/cmdline`, so open that in your favorite editor:
```bash
nano /etc/kernel/cmdline
```
</details>
## NVIDIA Driver
As of the time of this writing (November 2022), the latest available GRID driver is 14.3 with vGPU driver version 510.108.03. You can check for the latest version [here](https://docs.nvidia.com/grid/). I cannot guarantee that newer versions would work without additional patches, this tutorial only covers 14.3 (510.108.03).
### Obtaining the driver
NVIDIA doesn't let you freely download vGPU drivers like they do with GeForce or normal Quadro drivers, instead you have to download them through the [NVIDIA Licensing Portal](https://nvid.nvidia.com/dashboard/) (see: [https://www.nvidia.com/en-us/drivers/vgpu-software-driver/](https://www.nvidia.com/en-us/drivers/vgpu-software-driver/)). You can sign up for a free evaluation to get access to the download page.
NB: When applying for an eval license, do NOT use your personal email or other email at a free email provider like gmail.com. You will probably have to go through manual review if you use such emails. I have very good experience using a custom domain for my email address, that way the automatic verification usually lets me in after about five minutes.
The file you are looking for is called `NVIDIA-GRID-Linux-KVM-510.108.03-513.91.zip`, you can get it from the download portal by downloading version 14.3 for `Linux KVM`.
For those who want to find the file somewhere else, here are some checksums :)
```
sha1: ec82f7d197a5ea583d7b083dfd3a31fe8748aaa8
md5: 3aea33ebc972a9dfa643d8e5e89ba5ef
```
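If you want to check your download against these checksums, a quick way (the zip filename is the one from the download portal; adjust it if yours differs):

```shell
# Compare the published sha1 with a locally computed one; prints a warning if
# the file is missing or the hashes differ
expected_sha1="ec82f7d197a5ea583d7b083dfd3a31fe8748aaa8"
actual_sha1="$(sha1sum NVIDIA-GRID-Linux-KVM-510.108.03-513.91.zip 2>/dev/null | cut -d' ' -f1)"
if [ "$actual_sha1" = "$expected_sha1" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH (or file not found)"
fi
```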
After downloading, extract that and copy the file `NVIDIA-Linux-x86_64-510.108.03-vgpu-kvm.run` to your Proxmox host into the `/root/` folder
```bash
scp NVIDIA-Linux-x86_64-510.108.03-vgpu-kvm.run root@pve:/root/
```
> ### Have a vgpu supported card? Read here!
>
> If you don't have a card like the Tesla P4, or any other gpu from [this list](https://docs.nvidia.com/grid/gpus-supported-by-vgpu.html), please continue reading at [Patching the driver](#patching-the-driver)
>
> With a supported gpu, patching the driver is not needed, so you should skip the next section. You can simply install the driver package like this:
> ```bash
> chmod +x NVIDIA-Linux-x86_64-510.108.03-vgpu-kvm.run
> ./NVIDIA-Linux-x86_64-510.108.03-vgpu-kvm.run --dkms
> ```
>
> To finish the installation, reboot the system
> ```bash
> reboot
> ```
>
> Now, skip the following two sections and continue at [Finishing touches](#finishing-touches)
### Patching the driver
Now, on the proxmox host, make the driver executable
```bash
chmod +x NVIDIA-Linux-x86_64-510.108.03-vgpu-kvm.run
```
And then patch it
```bash
./NVIDIA-Linux-x86_64-510.108.03-vgpu-kvm.run --apply-patch ~/vgpu-proxmox/510.108.03.patch
```
That should output a lot of lines ending with
```
Self-extractible archive "NVIDIA-Linux-x86_64-510.108.03-vgpu-kvm-custom.run" successfully created.
```
You should now have a file called `NVIDIA-Linux-x86_64-510.108.03-vgpu-kvm-custom.run`: that is your patched driver.
### Installing the driver
Now that the required patch is applied, you can install the driver
```bash
./NVIDIA-Linux-x86_64-510.108.03-vgpu-kvm-custom.run --dkms
```
The installer will ask you `Would you like to register the kernel module sources with DKMS? This will allow DKMS to automatically build a new module, if you install a different kernel later.`, answer with `Yes`.
Depending on your hardware, the installation could take a minute or two.
If everything went right, you will be presented with this message.
```
Installation of the NVIDIA Accelerated Graphics Driver for Linux-x86_64 (version: 510.108.03) is now complete.
```
Click `Ok` to exit the installer.
```bash
nvidia-smi
```
You should get an output similar to this one
```
Thu Nov 24 21:39:42 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.108.03 Driver Version: 510.108.03 CUDA Version: N/A |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:01:00.0 Off | N/A |
| 26% 33C P8 43W / 260W | 85MiB / 11264MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
```
If this command doesn't return any output, vGPU unlock isn't working.
Another command you can try to see if your card is recognized as being vgpu enabled is this one:
```bash
nvidia-smi vgpu
```
If everything worked right with the unlock, the output should be similar to this:
```
Thu Nov 24 21:29:52 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.108.03 Driver Version: 510.108.03 |
|---------------------------------+------------------------------+------------+
| GPU Name | Bus-Id | GPU-Util |
| vGPU ID Name | VM ID VM Name | vGPU-Util |
|=================================+==============================+============|
| 0 NVIDIA GeForce RTX 208... | 00000000:01:00.0 | 0% |
+---------------------------------+------------------------------+------------+
```
However, if you get this output, then something went wrong:
```
No supported devices in vGPU mode
```
## vGPU overrides
Further up we have created the file `/etc/vgpu_unlock/profile_override.toml` and now we are going to make use of it.
If we take a look at the output of `mdevctl types`, we see lots of different types that we can choose from. However, if we, for example, choose `GRID RTX6000-4Q`, which gives us 4GB of vram in a VM, we are locked to that type for all of our VMs. That means we can only have 4GB VMs: it's not possible to mix different types to have one 4GB VM and two 2GB VMs.
> ### Important note
>
> C profiles (for example `GRID RTX6000-4C`) only work on Linux, don't try using those on Windows, it will not work - at all.
All of that changes with the override config file. Technically we are still locked to using only one profile, but now it's possible to change the vram of the profile on a per-VM basis: even though we have three `GRID RTX6000-4Q` instances, one VM can have 4GB of vram while the other two are overridden to only 2GB.
Let's take a look at this example config override file (it's in TOML format):
```toml
framebuffer = 0x76000000 # VRAM size for the VM. In this case it's 2GB
# 4GB: 0xEC000000
# 8GB: 0x1D8000000
# 16GB: 0x3B0000000
# These numbers may not be accurate for you, but you can always calculate the right number like this:
# The amount of VRAM in your VM = `framebuffer` + `framebuffer_reservation`

[mdev.00000000-0000-0000-0000-000000000100]
frl_enabled = 0
max_pixels = 2073600
```
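The calculation rule in the comments above can be sanity-checked with shell arithmetic. The 2GB split here is an inferred example: the `framebuffer_reservation` value is derived from the rule, not taken from the guide.

```shell
# For a 2 GiB profile: framebuffer + framebuffer_reservation = total VRAM
framebuffer=$((0x76000000))
reservation=$((0x80000000 - 0x76000000))    # 0x0A000000, the inferred remainder
total=$((framebuffer + reservation))
echo "$total bytes"
echo "$((total / 1024 / 1024 / 1024)) GiB"
```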
### Spoofing your vGPU instance
#### Note: This only works on Windows guests; don't bother trying on Linux.
You can very easily spoof your virtual GPU to a different card, so that you could install normal quadro drivers instead of the GRID drivers that require licensing.
For that you just have to add two lines to the override config. In this example I'm spoofing my Turing based card to a normal RTX 6000 Quadro card:
```toml
[profile.nvidia-259]
# insert all of your other overrides here too
pci_device_id = 0x1E30
pci_id = 0x1E3012BA
```
`pci_device_id` is the pci id of the card you want to spoof to. In my case it's `0x1E30`, which is the `Quadro RTX 6000/8000`.
You can get the IDs from [here](https://pci-ids.ucw.cz/read/PC/10de/). Just Ctrl+F and search the card you want to spoof to, then copy the id it shows you on the left and use it for `pci_device_id`.
After doing that, click the same id; it should open a new page listing the subsystems. If there are none listed, you must use `0000` as the second value for `pci_id`. But if there are some, you have to select the one you want and use its id as the second value for `pci_id` (see above).
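One way to read the combined value: `pci_id` packs the device id into the upper 16 bits and the subsystem id into the lower 16. A quick check with the values from this example:

```shell
# 0x1E30 (device id) in the high 16 bits, 0x12BA (subsystem id) in the low 16
device_id=$((0x1E30))
subsys_id=$((0x12BA))
pci_id=$(( (device_id << 16) | subsys_id ))
printf '0x%X\n' "$pci_id"
```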
## Important note when spoofing
When I originally wrote this guide, the latest quadro drivers were from the R510 branch, but nvidia has since released multiple drivers in the R515 and R520 branches; those will **NOT WORK** and may even make your VM crash.
If you accidentally installed such a driver, it's best to either remove the driver completely using DDU or just install a fresh windows VM.
The quadro driver for the R510 branch can be found [here (for 512.78)](https://www.nvidia.com/Download/driverResults.aspx/189361/en-us/) or [here (for 513.46)](https://www.nvidia.com/download/driverResults.aspx/191342/en-us/). I've had the best results with 512.78, but the other could work too. Anything newer than that will **NOT WORK**.
## Adding a vGPU to a Proxmox VM
Finish by clicking `Add`, start the VM and install the required drivers.
Enjoy your new vGPU VM :)
## Support
If something isn't working, please create an issue or join the [Discord server](https://discord.gg/5rQsSV3Byq) and ask for help in the `#proxmox-support` channel.
When asking for help, please describe your problem in detail instead of just saying "vgpu doesn't work". Usually a rough overview of your system (gpu, mainboard, proxmox version, kernel version, ...) and the full output of `dmesg` and/or `journalctl --no-pager -b 0 -u nvidia-vgpu-mgr.service` (the latter only after starting the VM that causes trouble) is helpful.
## Further reading
Thanks to all these people (in no particular order) for making this project possible
- [DualCoder](https://github.com/DualCoder) for his original [vgpu_unlock](https://github.com/DualCoder/vgpu_unlock) repo with the kernel hooks
- [rupansh](https://github.com/rupansh) for the original [twelve.patch](https://github.com/rupansh/vgpu_unlock_5.12/blob/master/twelve.patch) to patch the driver on kernels >= 5.12
- mbuchel#1878 on the [GPU Unlocking discord](https://discord.gg/5rQsSV3Byq) for [fourteen.patch](https://gist.github.com/erin-allison/5f8acc33fa1ac2e4c0f77fdc5d0a3ed1) to patch the driver on kernels >= 5.14
- [erin-allison](https://github.com/erin-allison) for the [nvidia-smi wrapper script](https://github.com/erin-allison/nvidia-merged-arch/blob/d2ce752cd38461b53b7e017612410a3348aa86e5/nvidia-smi)
- LIL'pingu#9069 on the [GPU Unlocking discord](https://discord.gg/5rQsSV3Byq) for his patch to nop out code that NVIDIA added to prevent usage of driver versions 460 to 470 with consumer cards
If I forgot to mention someone, please create an issue or let me know some other way.
