# Storage Configuration
## Current Storage

| Pool | Type | Disks | Size | Content |
|---|---|---|---|---|
| rpool | ZFS Mirror | nvme0n1p3, nvme1n1p3 | ~930 GB | Proxmox OS |
| storage-vm-zrh-v1 | ZFS Single | nvme2n1 | ~7.5 TB | VM Disks |
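Note that the Size column for rpool is usable, not raw, capacity: a two-way ZFS mirror keeps a full copy on each disk, so usable space equals one member, not the sum. A quick sanity check of the arithmetic (values taken from the table above):

```sh
# A two-way mirror stores one complete copy per disk, so usable
# capacity equals a single member; raw capacity is the sum.
mirror_member_gb=930
usable_gb=$mirror_member_gb        # mirror: size of one member
raw_gb=$((mirror_member_gb * 2))   # raw capacity across both disks
echo "usable=${usable_gb}GB raw=${raw_gb}GB"   # → usable=930GB raw=1860GB
```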
## Disk Layout
```
nvme0n1 (931.5 GB):
├─ nvme0n1p1: 1007K (BIOS Boot)
├─ nvme0n1p2: 1G (EFI System)
└─ nvme0n1p3: 930G (ZFS - rpool)

nvme1n1 (931.5 GB):
├─ nvme1n1p1: 1007K (BIOS Boot)
├─ nvme1n1p2: 1G (EFI System)
└─ nvme1n1p3: 930G (ZFS - rpool mirror)

nvme2n1 (7.5 TB):
└─ Entire disk: ZFS - storage-vm-zrh-v1
```
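The layout above can be confirmed on each node with `lsblk`, a standard Linux tool (exact output depends on the hardware):

```sh
# Show device names, sizes, partition types, and filesystems for the
# three NVMe drives; the result should match the layout documented above.
lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
```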
## ZFS Pool Creation (VM Storage)

```sh
# On srv-pve-zrh-01
zpool create \
  -o ashift=12 \
  -O atime=off \
  -O compression=lz4 \
  -O relatime=on \
  storage-vm-zrh-v1 \
  /dev/nvme2n1
```
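The `ashift` value is the base-2 logarithm of the pool's sector size and cannot be changed after creation, so it is worth getting right up front:

```sh
# ashift is log2 of the sector size:
# ashift=12 → 2^12 = 4096-byte (4K) sectors, matching modern NVMe drives.
sector_bytes=$((1 << 12))
echo "$sector_bytes"   # → 4096
```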
```sh
# Add to Proxmox
pvesm add zfspool storage-vm-zrh-v1 \
  --pool storage-vm-zrh-v1 \
  --content images,rootdir \
  --nodes srv-pve-zrh-01
```

Repeat on srv-pve-zrh-02.
## ZFS Status Commands
```sh
# Pool status
zpool status storage-vm-zrh-v1
zpool list storage-vm-zrh-v1

# Proxmox storage status
pvesm status

# IO stats
zpool iostat -v storage-vm-zrh-v1 1
```
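For unattended monitoring, the pool's health field can be checked from a script; a minimal sketch using the pool name from this page (the alerting mechanism is left open):

```sh
# zpool list -H suppresses headers; -o health prints only the health column.
# Warn unless the pool reports ONLINE (e.g. DEGRADED after a disk failure).
health=$(zpool list -H -o health storage-vm-zrh-v1)
[ "$health" = "ONLINE" ] || echo "WARNING: storage-vm-zrh-v1 health is $health"
```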
## Replication (between nodes)

Configure in the Proxmox web UI:
Datacenter → Replication → Add

- VM/CT: select the VM
- Target: srv-pve-zrh-02
- Schedule: `*/15` (every 15 minutes)
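The same job can also be created from the CLI with `pvesr`; a sketch, assuming a guest with VMID 100 (replace with the actual VM ID):

```sh
# Create local replication job 100-0: replicate guest 100 to
# srv-pve-zrh-02 on a 15-minute schedule.
pvesr create-local-job 100-0 srv-pve-zrh-02 --schedule "*/15"

# List configured replication jobs and their last-sync status.
pvesr status
```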
## Future: Ceph (Planned)

Ceph is planned but not yet implemented. Requirements:
- Additional NVMe SSDs for Ceph OSD
- Minimum 3 OSDs recommended (2 possible with reduced redundancy)
- Will use bond1 (25G Storage Network)
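When Ceph is rolled out, Proxmox's `pveceph` tooling covers the basic setup; a rough sketch only, since nothing is provisioned yet (the storage-network CIDR is a placeholder, and the OSD disk path depends on which NVMe SSDs are actually added):

```sh
# Install Ceph packages on each node.
pveceph install

# Initialize Ceph on the 25G storage network (bond1; placeholder CIDR).
pveceph init --network 10.0.0.0/24

# One monitor per node, then one OSD per dedicated NVMe disk.
pveceph mon create
pveceph osd create /dev/nvme3n1
```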