
# Storage Configuration

| Pool | Type | Disks | Size | Content |
| --- | --- | --- | --- | --- |
| rpool | ZFS Mirror | nvme0n1p3, nvme1n1p3 | ~930 GB | Proxmox OS |
| storage-vm-zrh-v1 | ZFS Single | nvme2n1 | ~7.5 TB | VM Disks |
```
nvme0n1 (931.5 GB):
├─ nvme0n1p1: 1007K (BIOS Boot)
├─ nvme0n1p2: 1G (EFI System)
└─ nvme0n1p3: 930G (ZFS - rpool)

nvme1n1 (931.5 GB):
├─ nvme1n1p1: 1007K (BIOS Boot)
├─ nvme1n1p2: 1G (EFI System)
└─ nvme1n1p3: 930G (ZFS - rpool mirror)

nvme2n1 (7.5 TB):
└─ Entire disk: ZFS - storage-vm-zrh-v1
```
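
The layout can be cross-checked on either node with `lsblk`; this is a generic sketch that only prints the columns relevant here.

```bash
# Show partitions, sizes, and filesystem signatures for the three disks
lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
```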
```bash
# On srv-pve-zrh-01
# ashift=12 matches 4K physical sectors; lz4 is cheap inline compression.
# relatime=on only takes effect if atime is ever re-enabled.
zpool create \
  -o ashift=12 \
  -O atime=off \
  -O compression=lz4 \
  -O relatime=on \
  storage-vm-zrh-v1 \
  /dev/nvme2n1

# Register the pool as a Proxmox storage backend
pvesm add zfspool storage-vm-zrh-v1 \
  --pool storage-vm-zrh-v1 \
  --content images,rootdir \
  --nodes srv-pve-zrh-01
```

Repeat the `zpool create` on srv-pve-zrh-02. Since `/etc/pve/storage.cfg` is shared across the cluster, do not run `pvesm add` a second time there; instead extend the existing entry to both nodes with `pvesm set storage-vm-zrh-v1 --nodes srv-pve-zrh-01,srv-pve-zrh-02`.
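
For reference, the commands above produce an entry in `/etc/pve/storage.cfg` roughly like the following (shown with both nodes already added):

```
zfspool: storage-vm-zrh-v1
        pool storage-vm-zrh-v1
        content images,rootdir
        nodes srv-pve-zrh-01,srv-pve-zrh-02
```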

```bash
# Pool status
zpool status storage-vm-zrh-v1
zpool list storage-vm-zrh-v1

# Proxmox storage status
pvesm status

# IO stats
zpool iostat -v storage-vm-zrh-v1 1
```
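
Beyond these point-in-time checks, periodic scrubs catch silent corruption; the zfsutils package on Proxmox typically schedules a monthly scrub, and one can also be started on demand:

```bash
# Start a scrub manually and check its progress
zpool scrub storage-vm-zrh-v1
zpool status storage-vm-zrh-v1
```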

Replication between the two nodes is configured in the Proxmox web UI:

Datacenter → Replication → Add
- VM/CT: select the VM to replicate
- Target: srv-pve-zrh-02
- Schedule: `*/15` (every 15 minutes)
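
The same job can also be created from the CLI with `pvesr`; the VM ID `100` and job number below are placeholders for an actual guest.

```bash
# Create a local replication job for VM 100 targeting the second node,
# synced every 15 minutes (job ID format: <vmid>-<job-number>)
pvesr create-local-job 100-0 srv-pve-zrh-02 --schedule "*/15"

# Check job state and the last successful sync
pvesr status
```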

Ceph is planned but not yet implemented. Requirements:

- Additional NVMe SSDs for Ceph OSDs
- Minimum 3 OSDs recommended (2 possible with reduced redundancy)
- Will use bond1 (25G Storage Network)
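
When the additional disks are available, the bring-up on each node would roughly follow the standard `pveceph` workflow. This is a sketch only: the network CIDR and OSD device below are placeholders for the bond1 storage subnet and the future NVMe drives.

```bash
# Sketch only - Ceph is not yet deployed here.
pveceph install                        # install Ceph packages on the node
pveceph init --network 10.0.25.0/24    # once, on the first node (placeholder CIDR)
pveceph mon create                     # create a monitor on each node
pveceph osd create /dev/nvme3n1        # one OSD per additional NVMe disk (placeholder device)
```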