The original post: /r/datahoarder by /u/ExtemporaneousAbider on 2024-07-16 20:54:03.
I have a home server that runs Proxmox. It has six ~12TB HDDs, which are all part of a single ZFS-backed Proxmox storage called "hoard". Proxmox reports its size as 71.98 TB. [Screenshot: Proxmox storage node]
This storage is backed by a single zpool using RAIDz2, which is also called "hoard". This pool has a size of 47.80 TB. [Screenshot: ZFS zpool]
In Proxmox, I allocated 22TB to one of my virtual machines. This virtual disk is called "vm-100-disk-1", and it has a size of 21.99 TB. I have not allocated any more of this zpool to any other virtual machines. [Screenshot: Virtual machine disk]
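For reference, here's how I think those three sizes relate to each other. This is my own arithmetic, based on my assumption that Proxmox shows decimal TB while zfs/zpool's "T" means binary TiB:

```python
# Units sanity check: I believe Proxmox shows decimal TB while
# zfs/zpool's "T" is binary TiB (my assumption, not from the tools).
TIB = 2**40
TB = 10**12

raw_tb = 6 * 12                     # six 12 TB drives ~= the 71.98 TB storage size
usable_tib = 39.5 + 3.94            # zfs USED + AVAIL on the pool root, in TiB
usable_tb = usable_tib * TIB / TB   # ~47.8 TB, the zpool size Proxmox shows
disk_tb = 20 * TIB / TB             # a 20 TiB volsize is exactly the "21.99 TB" disk

print(raw_tb, round(usable_tb, 2), round(disk_tb, 2))
```

So at least the headline sizes seem internally consistent; it's the usage numbers below that I can't reconcile.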
My Primary Question & Goal: Can you DataHoarders help me understand why my zpool reports that it's 90.95% full if only 22TB of 47.80TB are allocated to virtual machines? I want to allocate a new 10TB chunk of my zpool to another virtual machine; how do I recover/delete/peruse the data in my zpool that isn't my 22TB virtual disk?
Here are the outputs from some commands I've run on my Proxmox server. I hope these provide enough context.
`zpool list -v` results in this output:

```
NAME                                      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
hoard                                    65.5T  59.4T  6.10T        -         -    47%    90%  1.00x  ONLINE  -
  raidz2-0                               65.5T  59.4T  6.10T        -         -    47%  90.7%      -  ONLINE
    ata-WDC_WD120EMFZ-11A6JA0_REDACTED1  10.9T      -      -        -         -      -      -      -  ONLINE
    ata-WDC_WD120EMFZ-11A6JA0_REDACTED2  10.9T      -      -        -         -      -      -      -  ONLINE
    ata-WDC_WD120EMFZ-11A6JA0_REDACTED3  10.9T      -      -        -         -      -      -      -  ONLINE
    ata-WDC_WD120EMFZ-11A6JA0_REDACTED4  10.9T      -      -        -         -      -      -      -  ONLINE
    ata-WDC_WD120EMFZ-11A6JA0_REDACTED5  10.9T      -      -        -         -      -      -      -  ONLINE
    ata-WDC_WD120EMFZ-11A6JA0_REDACTED6  10.9T      -      -        -         -      -      -      -  ONLINE
```
`zfs list` results in this output:

```
NAME                  USED  AVAIL  REFER  MOUNTPOINT
hoard                39.5T  3.94T   192K  /hoard
hoard/ISO             619M  3.94T   619M  /hoard/ISO
hoard/vm-100-disk-1  39.5T  3.94T  39.5T  -
```
`zfs list -o space` results in this output:

```
NAME                 AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
hoard                3.94T  39.5T        0B    192K             0B      39.5T
hoard/ISO            3.94T   619M        0B    619M             0B         0B
hoard/vm-100-disk-1  3.94T  39.5T        0B   39.5T             0B         0B
```
`zfs get used,available,logicalused,usedbychildren,compression,compressratio,reservation,quota hoard` results in this output:

```
NAME   PROPERTY        VALUE  SOURCE
hoard  used            39.5T  -
hoard  available       3.94T  -
hoard  logicalused     19.9T  -
hoard  usedbychildren  39.5T  -
hoard  compression     lz4    local
hoard  compressratio   1.00x  -
hoard  reservation     none   default
hoard  quota           none   default
```
`zfs list -t snapshot` results in this output:

```
no datasets available
```
`zfs list -t volume` results in this output:

```
NAME                  USED  AVAIL  REFER  MOUNTPOINT
hoard/vm-100-disk-1  39.5T  3.94T  39.5T  -
```
`zpool status hoard` results in this output:

```
  pool: hoard
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub in progress since Sun Jul 14 00:24:02 2024
        32.9T / 59.4T scanned at 161M/s, 32.5T / 59.4T issued at 159M/s
        60K repaired, 54.78% done, 2 days 01:07:15 to go
config:

        NAME                                     STATE     READ WRITE CKSUM
        hoard                                    ONLINE       0     0     0
          raidz2-0                               ONLINE       0     0     0
            ata-WDC_WD120EMFZ-11A6JA0_REDACTED1  ONLINE       0     0     7  (repairing)
            ata-WDC_WD120EMFZ-11A6JA0_REDACTED2  ONLINE       0     0    11  (repairing)
            ata-WDC_WD120EMFZ-11A6JA0_REDACTED3  ONLINE       0     0     8  (repairing)
            ata-WDC_WD120EMFZ-11A6JA0_REDACTED4  ONLINE       0     0    11  (repairing)
            ata-WDC_WD120EMFZ-11A6JA0_REDACTED5  ONLINE       0     0     4  (repairing)
            ata-WDC_WD120EMFZ-11A6JA0_REDACTED6  ONLINE       0     0     9  (repairing)
```
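As a quick aside, I did check that the scrub's ETA follows from its own progress numbers, so I don't think the scrub output itself is lying to me. My arithmetic, reading zpool's "T" as TiB and "M/s" as MiB/s:

```python
# Quick check that the scrub ETA follows from its own numbers
# (my arithmetic; I'm reading zpool's T as TiB and M/s as MiB/s).
issued_tib, total_tib = 32.5, 59.4
rate_mib_s = 159

remaining_mib = (total_tib - issued_tib) * 1024 * 1024
eta_days = remaining_mib / rate_mib_s / 86400
print(round(eta_days, 2))   # ~2.05 days, close to the reported "2 days 01:07:15"
```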
Questions:
- Why does my zpool report that it's 90.95% full if only 22TB of 47.80TB are allocated to a virtual machine?
- Similarly, why is my zpool's `used` 39.5T when its `logicalused` is only 19.9T?
- What can I do to free up space in my zpool outside my 22TB virtual disk (i.e. vm-100-disk-1)?
- Will the `zfs scrub` affect any of this when it completes?
- Is the issue of my zpool reporting 19 phantom TB related to the issue of my Proxmox storage node reporting 65TB of 72TB allocated?
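From reading around, one hypothesis I'd like a sanity check on: raidz padding overhead on zvols with a small volblocksize. I haven't confirmed my zvol's `volblocksize`, but 8K was the Proxmox default for a long time, and on a 6-disk RAIDz2 with ashift=12 that would roughly double the reported usage. My back-of-envelope math for that hypothesis (both ashift=12 and volblocksize=8K are assumptions on my part):

```python
import math

SECTOR = 4096                       # assumes ashift=12 (4K sectors)
NDISKS, PARITY = 6, 2               # my 6-disk raidz2
VOLBLOCKSIZE = 8192                 # assumed: the long-time Proxmox default

data_sectors = VOLBLOCKSIZE // SECTOR                     # 2 sectors per block
rows = math.ceil(data_sectors / (NDISKS - PARITY))        # 1 parity row needed
parity_sectors = rows * PARITY                            # 2 parity sectors
total = data_sectors + parity_sectors                     # 4 sectors
# raidz pads each allocation up to a multiple of (parity + 1) sectors
padded = math.ceil(total / (PARITY + 1)) * (PARITY + 1)   # 6 sectors = 24K raw

overhead = padded * SECTOR / VOLBLOCKSIZE                 # 3.0x raw per logical byte

# zfs reports `used` after deflating raw allocation by the *expected*
# parity ratio for the vdev (4 data disks out of 6):
logical_tib = 19.9                                        # the pool's logicalused
raw_tib = logical_tib * overhead                          # ~59.7 TiB (zpool ALLOC: 59.4T)
used_tib = raw_tib * (NDISKS - PARITY) / NDISKS           # ~39.8 TiB (zfs used: 39.5T)

print(round(raw_tib, 1), round(used_tib, 1))
```

Both results land almost exactly on my pool's reported ALLOC and `used`, which makes me suspect this is the answer, but I'd love confirmation (and to hear whether recreating the zvol with a larger volblocksize is the right fix) before I do anything destructive.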
Thank you so much in advance for all the help!