# Proxmox VE

Using Proxmox VE 7 (based on Debian 11).

Host

Installation

PVE Installer Method

  1. Make sure UEFI and virtualization extensions are enabled in the BIOS settings.
  2. (Optional) Find a mouse.
    • The GUI installer doesn’t require it any more, but it’s still somewhat practical.
  3. Download PVE and boot from the installation medium.
  4. Storage:
    • Note that you can use e.g. ZFS with 2 mirrored SSDs. But a single reliable one with EXT4 is fine too.
    • (ZFS) enable compression and checksums and set the correct ashift for the SSD(s). If in doubt, use ashift=12.
  5. Localization:
    • (Nothing special.)
  6. Administrator user:
    • Set a root password. It should be different from your personal user’s password.
    • Set the email to “root@localhost” or something. It’s not important (yet).
  7. Network:
    • Just set up something temporary that works. You’ll probably change this after installation to set up bonding and VLANs and stuff.
  8. Miscellanea:
    • Make sure you set the correct FQDN during the install. This is a bit messy to change afterwards.

Debian Manual Method

Using Debian 12 (Bookworm).

  1. Install Debian as normal: See Debian Server.
  2. Install PVE on top: See Install Proxmox VE on Debian 12 Bookworm.

Tips:

Ansible Method

See HON95/ansible (Debian role) and lae.proxmox.

Initial Configuration

Follow the instructions for Debian server in addition to the notes and instructions below (read them first).

Warning: Don’t install any of the firmware packages, as doing so will remove the PVE firmware packages.

PVE-specific instructions:

  1. Setup the PVE repos (assuming no subscription):
    1. (Note) More info: Proxmox VE: Package Repositories
    2. Comment out all content from /etc/apt/sources.list.d/pve-enterprise.list to disable the enterprise repo.
    3. Create /etc/apt/sources.list.d/pve-no-subscription.list containing deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription to enable the no-subscription repo.
    4. Run a full upgrade: apt update && apt full-upgrade
  2. Install basics:
    1. apt install sudo vim
  3. (Optional) Update network config using Open vSwitch (OVS):
    • (Note) Do NOT manually modify the configs for DNS, NTP, IPTables, etc. The network config (/etc/network/interfaces) and the PVE configs may be modified manually, but using the GUI or API is still recommended.
    • (Note) Plain Linux networking (at least the way PVE uses it) may break for certain setups, e.g. where PVE has an L3 VLAN interface on the same bridge as a VM’s VLAN interface.
    • Install Open vSwitch: apt install openvswitch-switch
    • If using VLANs and optionally an LACP link (an example /etc/network/interfaces snippet is shown after this list):
      1. (Note) Do this in a way that avoids taking the node offline, e.g. by only adding IPv6 to the new uplink and making sure it works before moving IPv4. Preferably use a separate link for the temporary uplink during the install.
      2. Create the OVS bridge (vmbr<N>). If not using LAG/LACP then add the physical interface. If not using tagged PVE-mgmt, then add the PVE IP addresses here. When adding tagged or untagged VM interfaces later, use this bridge.
      3. If using LAG/LACP: Create the OVS bond (LACP) (bond<N>). Use the created bridge as the “OVS bridge” and the physical interfaces as the “slaves”. Use mode “LACP (balance-tcp)” and add the OVS option other_config:lacp-time=fast.
      4. If using a VLAN for PVE-mgmt, create the OVS IntPort (VLAN interface) (vlan<VID>), which PVE will use to access the network. Use the OVS bridge and specify the VLAN ID. Set the IP addresses for PVE here.
  4. Update MOTD:
    1. Disable the special PVE banner: systemctl disable --now pvebanner.service
    2. Clear or update /etc/issue and /etc/motd.
    3. (Optional) Set up dynamic MOTD: See the Debian guide.
  5. Setup firewall:
    1. (Note) While you should probably put PVE management in a protected network separated from the VMs, you still need to protect PVE from the VMs.
    2. Open an SSH session, as this will prevent full lock-out. If you manage to lock yourself out, open the /etc/pve/firewall/cluster.fw config and set enable: 0 to disable the global firewall.
    3. Under the datacenter firewall top page, add incoming rules for the chosen management network/VLAN for ICMPv4 (icmp), ICMPv6 (ipv6-icmp), SSH (tcp 22) and the web GUI (tcp 8006). (An example cluster.fw is shown after this list.)
    4. Go to the datacenter firewall options page and enable “firewall” and “ebtables”. Make sure the input policy is “DROP” and the output policy is “ACCEPT”.
    5. Go to the host firewall options page and enable it.
    6. Disable NDP on the nodes. (This is because of a vulnerability in Proxmox where it autoconfigures itself on all bridges.)
    7. Enable TCP flags filter to block illegal TCP flag combinations.
    8. Make sure ping, SSH and the web GUI are working over both IPv4 and IPv6.
  6. Set up storage:
    1. Docs: Storage (Proxmox VE)
    2. Create a ZFS pool or something and add it to /etc/pve/storage.cfg. This can also be done in the GUI now, but you may want to do it manually if you want to tweak stuff. See Linux Server Storage: ZFS. (An example storage.cfg is shown after this list.)
    3. Setup backup pruning, e.g. by setting a backup retention policy (“prune-backups”) on the backup storage.
  7. Setup users (PAM realm):
    1. Add a Linux user: adduser <username> etc. (see the Linux notes for adding admin users).
    2. Create a PVE group: In the “groups” menu, create e.g. an admin group.
    3. Give the group permissions: In the “permissions” menu, add a group permission. E.g. path / and role Administrator for full admin access.
    4. Add the user to PVE: In the “users” menu, add the PAM user and add it to the group.
    5. (Optional) Relog as the new admin user and disable the root user.
  8. Setup backups:
    1. Figure it out. You probably want to set up a separate storage for backups.
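
Example OVS network config (for step 3): a minimal /etc/network/interfaces sketch, assuming physical interfaces eno1 and eno2 in an LACP bond, PVE management on VLAN 10 and example addresses. PVE normally generates this through the GUI, so treat it as a reference only and adapt interface names, VLAN ID and addresses.

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

# LACP bond over the two physical interfaces
auto bond0
iface bond0 inet manual
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_bonds eno1 eno2
    ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast

# OVS bridge carrying the bond and the management IntPort
auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 vlan10

# Management IntPort (VLAN 10) holding the PVE addresses (example addresses)
auto vlan10
iface vlan10 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=10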
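
Example datacenter firewall config (for step 5): a sketch of what /etc/pve/firewall/cluster.fw may end up looking like, assuming 192.0.2.0/24 and 2001:db8::/64 as example management networks. Configure this through the GUI (Datacenter → Firewall), which writes this file; NDP and the TCP flags filter are host-level options and not shown here.

[OPTIONS]
enable: 1
ebtables: 1
policy_in: DROP
policy_out: ACCEPT

[RULES]
IN ACCEPT -source 192.0.2.0/24 -p icmp -log nolog # Ping from mgmt net (IPv4)
IN ACCEPT -source 2001:db8::/64 -p ipv6-icmp -log nolog # Ping/ND from mgmt net (IPv6)
IN ACCEPT -source 192.0.2.0/24 -p tcp -dport 22 -log nolog # SSH from mgmt net (IPv4)
IN ACCEPT -source 2001:db8::/64 -p tcp -dport 22 -log nolog # SSH from mgmt net (IPv6)
IN ACCEPT -source 192.0.2.0/24 -p tcp -dport 8006 -log nolog # Web GUI from mgmt net (IPv4)
IN ACCEPT -source 2001:db8::/64 -p tcp -dport 8006 -log nolog # Web GUI from mgmt net (IPv6)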
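
Example storage config (for steps 6 and 8): a sketch of /etc/pve/storage.cfg entries, assuming a ZFS pool named tank with a vmdata dataset for guest disks and a backup dataset mounted at /tank/backup; the retention values are arbitrary examples.

zfspool: vm-zfs
    pool tank/vmdata
    content images,rootdir

dir: backup
    path /tank/backup
    content backup
    prune-backups keep-last=3,keep-daily=7,keep-weekly=4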

Manual Configuration

This is generally not recommended if you want to avoid breaking the system. Most of this stuff may be changed in the GUI. None of this stuff is required for a normal, full setup.

Configure PCI(e) Passthrough

Possibly outdated

Troubleshooting

Failed login:

Make sure /etc/hosts contains both the IPv4 and IPv6 addresses for the management networks.

Cluster

Usage

Creating a Cluster

  1. Setup an internal and preferably isolated management network for the cluster.
  2. Create the cluster on one of the nodes: pvecm create <name>

Joining a Cluster

  1. Add each other host to each host’s hostfile using shortnames and internal management addresses (see the example below).
  2. If firewalling NDP, make sure it’s allowed for the internal management network. This must be fixed BEFORE joining the cluster to avoid loss of quorum.
  3. Join the cluster on the other hosts: pvecm add <name>
  4. Check the status: pvecm status
  5. If a node with the same IP address has been part of the cluster before, run pvecm updatecerts to update its SSH fingerprint to prevent any SSH errors.
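
Example hostfile entries (for step 1), assuming three nodes named pve1–pve3 with example internal management addresses, added to /etc/hosts on every node:

# PVE cluster nodes (internal management network)
10.0.10.1    pve1
10.0.10.2    pve2
10.0.10.3    pve3
fd00:10::1   pve1
fd00:10::2   pve2
fd00:10::3   pve3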

Leaving a Cluster

This is the recommended method to remove a node from a cluster. The removed node must never come back online and must be reinstalled.

  1. Back up the node to be removed.
  2. Log into another node in the cluster.
  3. Run pvecm nodes to find the ID or name of the node to remove.
  4. Power off the node to be removed.
  5. Run pvecm nodes again to check that the node disappeared. If not, wait and try again.
  6. Run pvecm delnode <name> to remove the node.
  7. Check pvecm status to make sure everything is okay.
  8. (Optional) Remove the node from the hostfiles of the other nodes.

High Availability Info

See: Proxmox: High Availability

Troubleshooting

Unable to modify because of lost quorum:

If you lost quorum because of connection problems and need to modify something (e.g. to fix the connection problems), run pvecm expected 1 to set the expected quorum to 1.

VMs

Usage

General VM Setup

The “Cloud-Init” notes can be ignored if you’re not using Cloud-Init. See the separate section below first if you are.

Linux VM Setup (Manual)

  1. Setup the VM (see the general setup section).
  2. (Recommended) Setup the QEMU guest agent: See the section about it.
  3. (Optional) Setup SPICE (for better graphics): See the section about it.
  4. More detailed Debian setup: Debian

Linux VM Cloud-Init Debian Template

Using Debian 11.

Example for creating a Cloud-Init-enabled Debian template using official cloud images.

Resources:

Instructions:

  1. Download the VM image:
    1. (Note) Supported formats: qcow2, vmdk, raw (use qemu-img info <FILE> to check)
    2. Download the image.
    3. (Optional) Verify the image integrity and authenticity: See Debian: Verifying authenticity of Debian CDs.
  2. Create the VM:
    1. (Note) You may want to use a high VMID like 1000+ for templates to visually separate them from the rest of VMs e.g. in the PVE UI.
    2. (Note) Using legacy BIOS and chipset (SeaBIOS and i440fx).
    3. Create: qm create <VMID> --name <NAME> --description "<DESC>" --ostype l26 --numa 1 --cpu cputype=host --sockets <CPU_SOCKETS> --cores <CPU_CORES> --memory <MEM_MB> --scsihw virtio-scsi-pci --ide2 <STORAGE>:vm-<VMID>-cloudinit --net0 virtio,bridge=<NET_BRIDGE>[,tag=<VLAN_ID>][,firewall=1] --serial0 socket [--vga serial0] --boot "order=scsi0;ide2" --onboot no
  3. Import the cloud disk image:
    1. Import as unused disk: qm importdisk <VMID> <FILE> <STORAGE>
    2. Attach the disk: qm set <VMID> --scsi0 <STORAGE>:vm-<VMID>-disk-0 (or whatever disk ID it got)
  4. Make it a template:
    1. (Note) The Cloud-Init disk will not be created automatically before starting the VM, so the template command might complain about it not existing.
    2. Protect it (prevent destruction): qm set <VMID> --protection 1
    3. Convert to template: qm template <VMID>
  5. (Example) Create a VM:
    1. (Note) Only SSH login is enabled, no local credentials. Use user debian with the specified SSH key(s). Sudo is passwordless for that user.
    2. Clone the template: qm clone <TEMPL_VMID> <VMID> --name <NAME> --storage <STORAGE> --full
    3. Set Cloud-Init user and SSH pubkeys: qm set <VMID> --ciuser <USERNAME> --sshkeys <PUBKEYS_FILE>
    4. Update the network interface: qm set <VMID> --net0 virtio,bridge=vmbr1,tag=10,firewall=1 (example)
    5. Set static IP config: qm set <VMID> --ipconfig0 ip=<>,gw=<>,ip6=<>,gw6=<> (for netif 0, using CIDR notation)
      • (Alternative) Set dynamic IP config: qm set <VMID> --ipconfig0 ip=dhcp,ip6=auto
    6. Set DNS server and search domain: qm set <VMID> --nameserver "<DNS_1> <DNS_2> <DNS_3>" --searchdomain <DOMAIN>
    7. (Optional) Disable protection: qm set <VMID> --protection 0
    8. (Optional) Enable auto-start: qm set <VMID> --onboot yes
    9. (Optional) Enable the QEMU agent (must be installed in the guest): qm set <VMID> --agent enabled=1
    10. Resize the volume (Cloud-Init will resize the FS): qm resize <VMID> scsi0 <SIZE> (e.g. 20G)
    11. Set firewall config: See the example file and notes below.
    12. Start the VM: qm start <VMID>
    13. Check the console in the web UI to see the status. Connect using SSH when it’s up.
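
Concrete example of the steps above, assuming VMID 9000 for the template, VMID 101 for the VM, storage local-zfs, bridge vmbr0 and the Debian 11 “genericcloud” image (verify the current image URL/file name on the Debian cloud image page before downloading):

# Download the cloud image (example file name, check the Debian cloud image page)
wget https://cloud.debian.org/images/cloud/bullseye/latest/debian-11-genericcloud-amd64.qcow2

# Create the template VM (legacy BIOS, VirtIO SCSI, serial console, Cloud-Init drive)
qm create 9000 --name debian-11-template --ostype l26 \
    --cpu cputype=host --sockets 1 --cores 2 --memory 2048 \
    --scsihw virtio-scsi-pci --ide2 local-zfs:cloudinit \
    --net0 virtio,bridge=vmbr0,firewall=1 \
    --serial0 socket --vga serial0 --boot "order=scsi0;ide2"

# Import the cloud image and attach it as the system disk
# (check the volume name printed by importdisk, e.g. vm-9000-disk-0)
qm importdisk 9000 debian-11-genericcloud-amd64.qcow2 local-zfs
qm set 9000 --scsi0 local-zfs:vm-9000-disk-0

# Protect the VM and convert it to a template
qm set 9000 --protection 1
qm template 9000

# Clone a VM from the template and configure it through Cloud-Init
qm clone 9000 101 --name vm1 --storage local-zfs --full
qm set 101 --ciuser debian --sshkeys ~/.ssh/id_ed25519.pub
qm set 101 --ipconfig0 ip=dhcp,ip6=auto
qm resize 101 scsi0 20G
qm start 101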

VM firewall example:

File /etc/pve/firewall/<VMID>.fw:

[OPTIONS]
enable: 1
ndp: 1
dhcp: 0
radv: 0
policy_in: ACCEPT
policy_out: REJECT

[RULES]
OUT ACCEPT -source fe80::/10 -log nolog # Allow IPv6 LL local source
OUT ACCEPT -source <IPV4_ADDR> -log nolog # Verify IPv4 local source
OUT ACCEPT -source <IPV6_ADDR> -log nolog # Verify IPv6 GUA/ULA local source

Notes:

Old Notes

Using Debian 10.

Ignore this section. I’m keeping it for future reference only.

  1. Download a cloud-init-ready Linux image to the hypervisor:
    • Debian cloud-init downloads: Debian Official Cloud Images (the genericcloud or generic variant and qcow2 format)
    • TODO: genericcloud or generic? Does the latter fix the missing console?
    • Copy the download link and download it to the host (wget <url>).
  2. (Note) It is a UEFI installation (so the BIOS/UEFI mode must be set accordingly) and the image contains an EFI partition (so you don’t need a separate EFI disk).
  3. Setup a VM as in the general setup section (take note of the specified Cloud-Init notes).
    1. Set the VM up as UEFI with an “EFI disk” added.
    2. Add a serial interface since the GUI console may be broken (it is for me).
  4. Setup the prepared disk:
    1. (GUI) Completely remove the disk from the VM (“detach” then “remove”).
    2. Import the downloaded cloud-init-ready image as the system disk: qm importdisk <vmid> <image-file> <storage>
    3. (GUI) Find the unused disk for the VM, edit it (see the general notes), and add it.
    4. (GUI) Resize the disk to the desired size. Note that it can be expanded further at a later time, but not shrunk. 10GB is typically fine.
    5. (GUI) Make sure the disk (e.g. scsi0) is added in “boot order” in the options tab. Others may be removed.
  5. Setup initial Cloud-Init disk:
    1. (GUI) Add a “CloudInit drive”.
    2. (GUI) In the Cloud-Init tab, set a temporary user and password and set the IP config to DHCPv4 and DHCPv6/SLAAC, such that you can boot the template and install stuff. (You can wipe these settings later to prepare it for templating.)
  6. Start the VM and open its console.
    1. The NoVNC console is broken for me for these VMs for some reason, so use the serial interface you added instead if NoVNC isn’t working (qm terminal <vmid>).
  7. Fix boot order:
    1. It may fail to boot into Linux and instead drop you into a UEFI shell (Shell>). Skip this if it actually boots.
    2. Run reset and prepare to press/spam Esc when it resets so that it drops you into the UEFI menu.
    3. Enter “Boot Maintenance Manager” and “Boot Options”, then delete all options except the harddisk one (no PXE or DVD-ROM). Commit.
    4. Press “continue” so that it attempts to boot using the new boot order. It should boot into Linux.
    5. (Optional) Try logging in (using Cloud-Init credentials), power it off (so the QEMU VM completely stops), and power it on again to check that the boot order is still working.
  8. Log in and configure basic stuff:
    1. Log in using the Cloud-Init credentials. The hostname should automatically have been set to the VM name, as an indication that the initial Cloud-Init setup succeeded.
    2. Setup basics like installing qemu-guest-agent.
  9. Wipe temporary Cloud-Init setup:
    1. (VM) Run cloud-init clean, so that it reruns the initial setup on the next boot.
    2. (GUI) Remove all settings in the Cloud-Init tab (or set appropriate defaults).
  10. (Optional) Create a template of the VM:
    • Rename it as e.g. <something>-template and treat it as a template, but don’t bother converting it to an actual template (which prevents you from changing it later).
    • If you made it a template then clone it and use the clone for the steps below.
  11. Prepare the new VM:
    • Manually: Set up Cloud-Init in the Cloud-Init tab, start the VM, log in using the Cloud-Init credentials and configure it.
    • Ansible: See the proxmox and proxmox_kvm modules.
    • Consider purging the cloud-init package to avoid accidental reconfiguration later.
    • Consider running cloud-init status --wait before configuring it to make sure the Cloud-Init setup has completed.

Windows VM Setup

Using Windows 10.

Proxmox VE Wiki: Windows 10 guest best practices

Before Installation

  1. Setup the VM (see the general setup section).
  2. Add the VirtIO drivers ISO: Fedora Docs: Creating Windows virtual machines using virtIO drivers
  3. Add it as a CDROM using IDE device 3.

During Installation

  1. (Optional) Select “I don’t have a product key” if you don’t have a product key.
  2. In the advanced storage section:
    1. Install storage driver: Open drivers disc dir vioscsi\w10\amd64 and install “Red Hat VirtIO SCSI pass-through controller”.
    2. Install network driver: Open drivers disc dir NetKVM\w10\amd64 and install “Red Hat VirtIO Ethernet Adapter”.
    3. Install memory ballooning driver: Open drivers disc dir Balloon\w10\amd64 and install “VirtIO Balloon Driver”.

After Installation

  1. Install QEMU guest agent:
    1. Open the Device Manager and find “PCI Simple Communications Controller”.
    2. Click “Update driver” and select drivers disc dir vioserial\w10\amd64
    3. Open drivers disc dir guest-agent and install qemu-ga-x86_64.msi.
  2. Install drivers and services:
    1. Download virtio-win-gt-x64.msi (see the wiki for the link).
    2. (Optional) Deselect “Qxl” and “Spice” if you don’t plan to use SPICE.
  3. Install SPICE guest agent:
    1. TODO Find out if this is included in virtio-win-gt-x64.msi.
    2. Download and install spice-guest-tools from spice-space.org.
    3. Set the display type in PVE to “SPICE”.
  4. For SPICE audio, add an ich9-intel-hda audio device.
  5. Restart the VM.
  6. Install missing drivers:
    1. Open the Device Manager and look for missing drivers.
    2. Click “Update driver”, “Browse my computer for driver software” and select the drivers disc with “Include subfolders” checked.

QEMU Guest Agent Setup

Proxmox VE Wiki: Qemu-guest-agent

The QEMU guest agent provides more info about the VM to PVE, allows proper shutdown from PVE and allows PVE to freeze the guest file system when making backups.

  1. Activate the “QEMU Guest Agent” option for the VM in Proxmox and restart if it wasn’t already activated.
  2. Install the guest agent:
    • Linux: apt install qemu-guest-agent
    • Windows: See Windows Setup.
  3. Restart the VM from PVE (not from within the VM).
    • Alternatively, shut it down from inside the VM and then start it from PVE.
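
To quickly check that the agent is working, you can query it from the PVE host (assuming VMID 101):

# Returns without error if the guest agent responds
qm agent 101 ping

# Example query through the agent
qm agent 101 network-get-interfaces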

SPICE Setup

Proxmox VE Wiki: SPICE

SPICE allows interacting with graphical VM desktop environments, including support for keyboard, mouse, audio and video.

SPICE in PVE uses authentication and encryption by default.

  1. Install a SPICE compatible viewer on your client:
    • Linux: virt-viewer
  2. Install the guest agent:
  3. In the VM hardware configuration, set the display to SPICE.

Troubleshooting

VM failed to start, possibly after migration:

Check the host system logs. It may for instance be due to hardware changes or storage that’s no longer available after migration.

Firewall

Special Aliases and IP Sets

PVE Ports

Storage

Ceph

See Storage: Ceph for general notes. The notes below are PVE-specific.

Notes

Setup

  1. Setup a shared network.
    • It should be high-bandwidth and isolated.
    • It can be the same as used for PVE cluster management traffic.
  2. Install (all nodes): pveceph install
  3. Initialize (one node): pveceph init --network <subnet>
  4. Setup a monitor (all nodes): pveceph createmon
  5. Check the status: ceph status
    • Requires at least one monitor.
  6. Add a disk (all nodes, all disks): pveceph createosd <dev>
    • If the disk contains any partitions, run ceph-disk zap <dev> to clean the disk.
    • Can also be done from the dashboard.
  7. Check the disks: ceph osd tree
  8. Create a pool (PVE dashboard).
    • “Size” is the number of replicas.
    • “Minimum size” is the number of replicas that must be written before the write should be considered done.
    • Use at least size 3 and min. size 2 in production.
    • “Add storage” adds the pool to PVE for disk image and container content.
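
“Add storage” writes an RBD entry to /etc/pve/storage.cfg. A rough sketch of what it may look like, assuming a pool named vm-pool on a PVE-managed (internal) Ceph cluster:

rbd: ceph-vm
    pool vm-pool
    content images,rootdir
    krbd 0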

Troubleshooting

“Cannot remove image, a guest with VMID ‘100’ exists!” when trying to remove unused VM disk:

