I have a ZFS pool that I made on Proxmox, and I noticed an error today. I think the issue is that the drives got renamed at some point and now it's confused. I have 5 NVMe drives in total: 4 are supposed to be in the ZFS array (the CT1000s), and the 5th, a Samsung drive, is the system/Proxmox install drive, not part of ZFS. It looks like the numbering got changed, so the drive that used to be in the array as nvme1n1p1 is actually the Samsung drive, and the drive that is supposed to be in the array is now called nvme0n1.
root@pve:~# zpool status
  pool: zfspool1
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 00:07:38 with 0 errors on Sun Oct 13 00:31:39 2024
config:

        NAME                     STATE     READ WRITE CKSUM
        zfspool1                 DEGRADED     0     0     0
          raidz1-0               DEGRADED     0     0     0
            7987823070380178441  UNAVAIL      0     0     0  was /dev/nvme1n1p1
            nvme2n1p1            ONLINE       0     0     0
            nvme3n1p1            ONLINE       0     0     0
            nvme4n1p1            ONLINE       0     0     0

errors: No known data errors
Looking at the devices:
nvme list
Node           Generic      SN         Model                          Namespace Usage                 Format        FW Rev
-------------- ------------ ---------- ------------------------------ --------- --------------------- ------------- --------
/dev/nvme4n1   /dev/ng4n1   193xx6A    CT1000P1SSD8                   1         1.00 TB / 1.00 TB     512 B + 0 B   P3CR013
/dev/nvme3n1   /dev/ng3n1   1938xxFF   CT1000P1SSD8                   1         1.00 TB / 1.00 TB     512 B + 0 B   P3CR013
/dev/nvme2n1   /dev/ng2n1   192xx10    CT1000P1SSD8                   1         1.00 TB / 1.00 TB     512 B + 0 B   P3CR010
/dev/nvme1n1   /dev/ng1n1   S5xx3L     Samsung SSD 970 EVO Plus 1TB   1         289.03 GB / 1.00 TB   512 B + 0 B   2B2QEXM7
/dev/nvme0n1   /dev/ng0n1   19xxD6     CT1000P1SSD8                   1         1.00 TB / 1.00 TB     512 B + 0 B   P3CR013
Trying to use the zpool replace command gives this error:
root@pve:~# zpool replace zfspool1 7987823070380178441 nvme0n1p1
invalid vdev specification
use '-f' to override the following errors:
/dev/nvme0n1p1 is part of active pool 'zfspool1'
So it thinks nvme0n1 is still part of the array, even though the zpool status output shows that it's not.
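Presumably the disk still carries the on-disk ZFS label from when it was added to the pool, which would explain why replace refuses without -f. As far as I understand, zdb -l only reads and prints that label, so this should be a safe way to check (a sketch):

# Read-only: dump the ZFS label(s) on the partition.
# If it reports pool 'zfspool1' and a guid of 7987823070380178441, this is
# the original raidz member under a new kernel name, not a fresh disk.
zdb -l /dev/nvme0n1p1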
Can anyone shed some light on what is going on here? I don't want to mess with it too much, since it does work right now and I'd rather not start again from scratch (backups).
I used smartctl -a /dev/nvme0n1 (and likewise for the other drives) and there don't appear to be any SMART errors, so all the drives seem to be working well.
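The same check can be run over all five drives in one go, if anyone wants to reproduce it; the glob below just matches the device names from the nvme list output above:

for d in /dev/nvme[0-4]n1; do
    echo "=== $d ==="
    smartctl -H "$d"    # -H prints only the overall health verdict; use -a for the full report
done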
Any idea on how I can fix the array?
I don't know anything about ZFS, but in the future you might want to address the drives by /dev/disk/by-uuid/... or /dev/disk/by-id/... and not by /dev/nvme...
That is definitely true of ZFS as well. In fact, I have never seen a guide that suggests anything other than using the names found under /dev/disk/by-id/ or /dev/disk/by-uuid/, and that is precisely to prevent this problem. If the proper convention is used, you can plug the drives into any available interface, in any order, and ZFS will happily re-assemble the pool at boot.
So now this raises the question... is Proxmox really using some insane configuration that builds pools from whatever device names the drives happen to come up with at boot?
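For the record, the usual way out of this situation is not zpool replace at all. Since the "missing" disk is the same physical disk under a new kernel name, exporting the pool and re-importing it while pointing ZFS at the stable names lets it re-read the labels off the disks and pick the member back up. A minimal sketch, assuming nothing on the pool is in use while you do it (stop any VMs/containers backed by zfspool1 first):

# Export the pool (it must be idle), then re-import it, telling ZFS to
# scan /dev/disk/by-id rather than the unstable /dev/nvme* names.
zpool export zfspool1
zpool import -d /dev/disk/by-id zfspool1
zpool status zfspool1   # vdevs should now show up under their by-id names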