update-grub
bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/tools/pve/post-pve-install.sh)"
Using an internal CA: see Proxmox SSL
apt-get install libsasl2-modules postfix-pcre
cat >> /etc/postfix/main.cf << EOF
relayhost = [igly.one]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
smtp_header_checks = pcre:/etc/postfix/smtp_header_checks
EOF
systemctl restart postfix.service
echo "Test mail from postfix" | mail -s "$HOSTNAME Test Postfix" some@valid.email
groupadd data -g 2000 && useradd data -u 2000 -g 2000 -m -s /bin/bash
python3 ~/run.py 2000:2000=2000:2000
/etc/subuid:
root:2000:1
/etc/subgid:
root:2000:1
/etc/pve/lxc/<container_id>.conf
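I don't have the exact output of ~/run.py on hand, but an id-map that passes host UID/GID 2000 straight through to the CT usually looks like the lines the sketch below generates (assumes the container otherwise uses the default 100000-offset mapping; the loop is just a hypothetical generator for the lxc.idmap lines that go into the CT's .conf):

```shell
#!/usr/bin/env bash
# Sketch: lxc.idmap lines mapping host id 2000 <-> CT id 2000 while
# keeping the rest of the container ids in the default 100000 range.
id=2000 lower=100000 total=65536
idmap=""
for t in u g; do
    # ids 0..id-1 come from the offset range
    idmap+="lxc.idmap: $t 0 $lower $id"$'\n'
    # the one passed-through id
    idmap+="lxc.idmap: $t $id $id 1"$'\n'
    # the remainder of the 65536-id space, back in the offset range
    idmap+="lxc.idmap: $t $((id + 1)) $((lower + id + 1)) $((total - id - 1))"$'\n'
done
printf '%s' "$idmap"
```

Paste the printed lines into /etc/pve/lxc/<container_id>.conf.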
add pbs namespace to pve from cli:
pvesm add pbs [storage-id] --server pbs.lair.lan --datastore [id] --namespace [namespace] --content backup --username [id] --password
If you need to mount a Samba share from a container/VM running on top of
PVE, you can use x-systemd.automount,x-systemd.idle-timeout=30 in
/etc/fstab to have the share mounted on demand when something accesses
the path.
//10.0.10.20/share /mnt/share cifs _netdev,x-systemd.automount,x-systemd.requires=network-online.target,x-systemd.device-timeout=10,noserverino,rw,users,nounix,password2=fake,user=data,password=<PASSWORD>,uid=2000,gid=2000,file_mode=0644,dir_mode=0755,vers=3 0 0
Test with
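The note doesn't say what to test with; a plausible check (assuming the /mnt/share mount point from the fstab line above, so the unit names are derived, not verified):

```shell
# Accessing the path should trigger the automount
ls /mnt/share
# systemd should then show the automount/mount units active
systemctl status mnt-share.automount mnt-share.mount
# idle-timeout=30 means the share unmounts again ~30s after last use
```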
reload /etc/network/interfaces from cli
ifreload -a
Apply SDN config changes from cli
pvesh set /cluster/sdn
I will list interesting commands here
list which interfaces are members of bridges:
list which interfaces/bridges are members of VLANs:
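Neither command made it into the note; from memory (plain iproute2, not verified on PVE specifically, and vmbr0 is just an example bridge name) these should do it:

```shell
# bridge port membership (which NICs are enslaved to which bridge)
bridge link
# or for a single bridge:
ip link show master vmbr0
# VLAN membership of ports/bridges (when the bridge is VLAN-aware)
bridge vlan
```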
apt install bolt policykit-1
# Change to your ID
boltctl enroll d1030000-0082-8098-2027-f21a6402bb22
echo thunderbolt >> /etc/modules
# Change to your driver
echo atlantic >> /etc/modules
# Probably should rename nic before using its name as input below
echo 'SUBSYSTEM=="net", ACTION=="move", KERNEL=="enp9s0", RUN+="/usr/sbin/ip link set mtu 9000 txqlen 10000 dev '%k'", RUN+="/usr/sbin/ifup %k", RUN+="/usr/sbin/brctl addif vmbr0 %k"' > /etc/udev/rules.d/10-tb-en.rules
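To apply the rule without rebooting, something like the following should work (a sketch; udevadm is standard, but re-triggering "move" events this way and the enp9s0 name are assumptions — replugging the cable also works):

```shell
# reload the rules files
udevadm control --reload
# re-trigger "move" events for net devices so the rule fires
udevadm trigger --action=move --subsystem-match=net
# inspect what udev would do for the interface
udevadm test /sys/class/net/enp9s0 2>&1 | tail
```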
Add a hostpath as mpX device mounted at vmpath inside CT
pct set VMID -mp[0-9] [hostpath],mp=[vmpath]
first scp or otherwise load the disk file to the node
qm importdisk 900 /path/to/file [storage-id]
qm set 901 --scsi1 /dev/disk/by-label/[label]
Creates a new 30 GiB disk image on the storage with ID "local" and mounts it into the CT at /srv/www
pct set [VMID] -mp0 local:30,mp=/srv/www
To be able to use a GPU inside a container add the following to
/etc/pve/lxc/[VMID].conf:
dev0: /dev/dri/card0,gid=984
dev1: /dev/dri/renderD128,gid=988
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
Remember to adjust the GIDs to match the groups inside the CT, and make sure whatever software needs the GPU runs as a member of those groups
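To find the right major:minor numbers and GIDs on a given host/CT (a sketch; card0 is usually 226:0 and renderD128 226:128, but check):

```shell
# major:minor of the DRI nodes on the host
ls -l /dev/dri
# GIDs of the video/render groups inside the CT
pct exec [VMID] -- getent group video render
```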
Only works if root has no passwd
GETTY_OVERRIDE="/etc/systemd/system/container-getty@1.service.d/override.conf"
mkdir -p $(dirname $GETTY_OVERRIDE)
cat <<EOF >$GETTY_OVERRIDE
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin root --noclear --keep-baud tty%I 115200,38400,9600 \$TERM
EOF
systemctl daemon-reload
systemctl restart $(basename $(dirname $GETTY_OVERRIDE) | sed 's/\.d//')
To figure out why a container won't start:
lxc-start -n [VMID] -F -l DEBUG -o /tmp/lxc-ID.log
lxc-start -n [VMID] -F -l TRACE -o /dev/stderr
To see CAPS inside CT use
capsh --print
Remember to copy any [VMID].conf files to the current cluster's
/etc/pve/lxc/ or /etc/pve/qemu-server/ before
deleting
Log output shows
Feb 09 23:37:22 pve01 rrdcached[2038]: handle_request_update: Could not read RRD file.
Feb 09 23:37:22 pve01 pmxcfs[2052]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/311: -1
Feb 09 23:37:22 pve01 pmxcfs[2052]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/311: mmaping file '/var/lib/rrdcached/db/pve2-vm/311': Invalid argument
Feb 09 23:37:32 pve01 rrdcached[2038]: handle_request_update: Could not read RRD file.
Feb 09 23:37:32 pve01 pmxcfs[2052]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/311: -1
Feb 09 23:37:32 pve01 pmxcfs[2052]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/311: mmaping file '/var/lib/rrdcached/db/pve2-vm/311': Invalid argument
Feb 09 23:37:42 pve01 rrdcached[2038]: handle_request_update: Could not read RRD file.
Feb 09 23:37:42 pve01 pmxcfs[2052]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/311: -1
Feb 09 23:37:42 pve01 pmxcfs[2052]: [status] notice: RRD update error /var/lib/rrdcached/db/pve2-vm/311: mmaping file '/var/lib/rrdcached/db/pve2-vm/311': Invalid argument
The solution is simply to delete the rrdcached db dir and restart the service; the files will be regenerated automatically
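A sketch of that fix (paths taken from the log above; moving the dir aside instead of deleting, and restarting pve-cluster as well, are my own additions):

```shell
systemctl stop rrdcached
# keep a copy instead of deleting outright
mv /var/lib/rrdcached/db /var/lib/rrdcached/db.bak
systemctl start rrdcached
systemctl restart pve-cluster   # pmxcfs is what writes the RRD updates
```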
Last modified: Mon Jan 5 10:45:29 2026