WeichertLabs
PCIe Passthrough Proxmox Guide
This guide explains how to configure PCIe passthrough in Proxmox VE using any PCIe device, such as a GPU, USB controller, or network card. As an example, we use an NVIDIA RTX 4080 SUPER. You’ll learn how to enable IOMMU, bind the device to VFIO, and attach it to a virtual machine.
✅ Requirements
- Proxmox VE 7.x or 8.x
- A PCIe device (example: RTX 4080 SUPER)
- Virtualization enabled in BIOS/UEFI
This guide is beginner-friendly and explains where to place the commands and why each step is needed.
Please note: All guides and scripts are provided for educational purposes. Always review and understand any code before running it – especially with administrative privileges. Your system, your responsibility.
Use at your own risk: While every effort is made to ensure accuracy, I cannot take responsibility for issues caused by applying tutorials or scripts. Test in a safe environment before using in production.

Step 1 – Enable IOMMU in GRUB
IOMMU (Input-Output Memory Management Unit) allows your Proxmox host to assign PCIe devices directly to virtual machines. First, we need to enable this feature in the GRUB bootloader.
Open the GRUB config file:
nano /etc/default/grub
Add the correct line based on your CPU:
For Intel CPUs, find the line starting with GRUB_CMDLINE_LINUX_DEFAULT and modify it to:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
For AMD CPUs, use this instead:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
Make sure you don’t remove any other kernel parameters that might already exist in that line.
Save and Exit:
Press CTRL+O to save, then CTRL+X to exit nano.
Apply the changes:
update-grub
Note: if your host boots via systemd-boot instead of GRUB (the default for ZFS-on-root UEFI installs), add the same parameters to /etc/kernel/cmdline and run proxmox-boot-tool refresh instead.
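After the reboot in Step 5, you can confirm that the kernel actually enabled IOMMU. A minimal sketch of the check (the dmesg line shown is a sample of typical Intel output, not from a specific machine):

```shell
# On the real host, run:
#   dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
# Intel prints something like "DMAR: IOMMU enabled",
# AMD prints "AMD-Vi: Interrupt remapping enabled".
sample='[    0.050000] DMAR: IOMMU enabled'
echo "$sample" | grep -q -e 'IOMMU enabled' -e 'Interrupt remapping enabled' \
  && echo "IOMMU active"
```

If the grep finds no match after a reboot, re-check the GRUB line from this step before continuing.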
Step 2 – Load VFIO Kernel Modules
VFIO (Virtual Function I/O) is required to handle PCIe passthrough in a safe and isolated way.
Edit the modules:
nano /etc/modules
Add these lines at the end:
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
On kernels 6.2 and newer (as shipped with Proxmox VE 8), vfio_virqfd has been merged into the core vfio module and can be left out.
Save and exit the file as before.
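The edit above can also be scripted so the modules are appended only if they are not already listed. A sketch, where MODFILE points at a temporary file so it is safe to try anywhere; on the real host you would set MODFILE=/etc/modules:

```shell
# Idempotently append the VFIO modules (skips lines that already exist).
MODFILE=$(mktemp)          # on the Proxmox host: MODFILE=/etc/modules
for m in vfio vfio_iommu_type1 vfio_pci vfio_virqfd; do
  grep -qx "$m" "$MODFILE" || echo "$m" >> "$MODFILE"
done
cat "$MODFILE"
```

Running it twice leaves the file unchanged, which is why the grep guard is there.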
Step 3 – Blacklist Host Drivers (GPU Example)
To prevent the Proxmox host from loading drivers for the PCIe device (especially for GPUs), we need to blacklist them.
Create or edit the blacklist file:
nano /etc/modprobe.d/blacklist.conf
Add:
blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist nvidia_drm
This prevents the Proxmox host from claiming the GPU at boot, leaving it free to be handed to the VM.
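The same entries can be written in one go with a heredoc. A sketch; BLACKLIST points at a temporary file here, while on the real host the target is /etc/modprobe.d/blacklist.conf (the update-initramfs -u in Step 4 then applies these entries):

```shell
# Write all four blacklist entries at once.
BLACKLIST=$(mktemp)   # on the host: BLACKLIST=/etc/modprobe.d/blacklist.conf
cat > "$BLACKLIST" <<'EOF'
blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist nvidia_drm
EOF
grep -c '^blacklist' "$BLACKLIST"
```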
Step 4 – Bind PCIe Device to VFIO
Now, we need to manually assign the PCIe device to VFIO so the VM can use it.
List your PCI devices and find your GPU/device:
lspci -nn
Look for the IDs in square brackets at the end of the lines for your GPU and its audio function, such as:
10de:2704, 10de:22bb
The part before the colon is the vendor ID (10de = NVIDIA); the part after is the device ID. A GPU usually exposes a separate HDMI audio function, so note both ID pairs.
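Picking the IDs out of the lspci output can be automated. A sketch; the sample line below is hypothetical output for the example RTX 4080 SUPER, and on a real host you would pipe `lspci -nn | grep -i nvidia` into the same grep:

```shell
# Extract the [vendor:device] ID pair from an lspci -nn line.
line='01:00.0 VGA compatible controller [0300]: NVIDIA Corporation AD103 [GeForce RTX 4080 SUPER] [10de:2704] (rev a1)'
id=$(echo "$line" | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tr -d '[]')
echo "$id"
```

Note that the pattern requires four hex digits on each side of the colon, so it skips the PCI address (01:00.0) and the class code ([0300]) and matches only the vendor:device pair.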
Create a VFIO config file:
nano /etc/modprobe.d/vfio.conf
Add this line using your IDs:
options vfio-pci ids=10de:2704,10de:22bb
Apply changes:
update-initramfs -u
Step 5 – Reboot
Restart your Proxmox host to apply all changes.
reboot
Step 6 – Verify VFIO Binding
After rebooting, check that your device is now using the vfio-pci driver:
lspci -nnk
In the output for your device, the line "Kernel driver in use: vfio-pci" confirms the binding worked. If it still shows nouveau or nvidia, re-check Steps 3 and 4.
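This check can be wrapped in a tiny helper that succeeds only when the device is bound to vfio-pci. A sketch; the sample text imitates typical lspci output and the 01:00.0 address is the example one from this guide:

```shell
# check_vfio succeeds if the lspci output on stdin shows a vfio-pci binding.
# On the real host: lspci -k -s 01:00.0 | check_vfio && echo OK
check_vfio() { grep -q 'Kernel driver in use: vfio-pci'; }

# Hypothetical sample of lspci -k output after a successful bind:
sample='01:00.0 VGA compatible controller: NVIDIA Corporation AD103
	Kernel driver in use: vfio-pci
	Kernel modules: nouveau'
echo "$sample" | check_vfio && echo OK
```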
Step 7 – Add PCIe Device to VM
Now you can attach the PCIe device to your virtual machine.
Open the VM config file:
nano /etc/pve/qemu-server/<vmid>.conf
Add these lines (replace 01:00.0 and 01:00.1 with your actual device addresses; pcie=1 requires the VM to use the q35 machine type):
hostpci0: 01:00.0,pcie=1,x-vga=1
hostpci1: 01:00.1
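Instead of editing the config file by hand, the same entries can be added with Proxmox's qm CLI. A sketch; 100 is a hypothetical VM ID, and the commands themselves only run on a Proxmox host, so they are shown as comments above a check of the resulting config lines:

```shell
# On the Proxmox host, the equivalent commands would be:
#   qm set 100 -hostpci0 01:00.0,pcie=1,x-vga=1
#   qm set 100 -hostpci1 01:00.1
# which produce these lines in /etc/pve/qemu-server/100.conf:
conf='hostpci0: 01:00.0,pcie=1,x-vga=1
hostpci1: 01:00.1'
echo "$conf" | grep -c '^hostpci'
```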
✅ Done! You have now successfully passed through a PCIe device to your VM.
PCIe Passthrough Proxmox Guide (Video Demo)
In this video, we walk you through configuring your Proxmox server to pass a dedicated GPU (e.g. an NVIDIA RTX 4080 SUPER) through to a virtual machine. This allows the VM to use the GPU directly, enabling advanced workloads like gaming, CUDA, or machine learning.