Hello again! Y’all are my last hope for help! I’ve posted to both the Proxmox forums and r/proxmox and nobody’s responding.
Here’s the deal: I built this home media server about a year ago. It took some time to work out the bugs, but I got TrueNAS Scale and Jellyfin running on it and started filling it up.

A few weeks ago TrueNAS started freezing up. It would work for a little while after a restart, but then it stopped working entirely. I poked around and found that some sort of new EFI boot entry needed to be established; I followed a guide for that and it worked. A few days later Jellyfin froze, I couldn’t access the PVE GUI or anything, so I did a hard reset. Now Proxmox can’t launch PVE, let alone the GUI.

So I’ve been poking around and found that the drives are at 100% usage, and inodes are at 100% usage (see pic; disk usage is the same % as the inode usage). Digging deeper, I tried to find the offending folder in /rpool/ROOT/pve-1, but there are no deeper directories listed there. So I drilled down into the other big one, /subvol-100-disk-0; this led me to a Jellyfin metadata library folder with a bunch of small files using up <250 inodes each. I’ve searched all over the place and haven’t been able to figure out what I could delete to at least get PVE up and running, and then work towards… idk, migrating to a new, larger drive? Or setting up something to automatically clear old files?
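(For anyone following along: this is roughly how you’d check that from the shell. I’m paraphrasing from memory, so treat these as a sketch rather than the exact commands behind the pic.)

df -h /    # filesystem space usage on the root dataset
df -i /    # inode usage
zfs list -o name,used,avail rpool    # per-dataset usage on the ZFS pool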
At any rate, I’m running two old 512 GB laptop drives for all the OSes on the server, set up as a ZFS mirror (RAID 1).
PS: Come to think of it, I’ve had to expand the size of the virtual drive for my Jellyfin LXC multiple times now just to get the container to launch. Seems I know just enough to get myself into trouble.
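(For reference, expanding an LXC’s disk is normally done from the Proxmox GUI or with pct resize; the container ID and size below are just placeholders, not my exact setup.)

pct resize 100 rootfs +20G    # 100 = container ID; grows the root disk by 20 GB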
Someone, please help me right my pirate ship!
- ∞ 🏳️⚧️Edie [it/its, she/her, fae/faer, love/loves, ze/hir, des/pair, none/use name, undecided]@hexbear.net:
/rpool/ROOT/pve-1 is in fact not the offending dir, but rather your whole drive, or well, your ZFS mirror.
To find the offending dir, run du, e.g.
du -x -d1 -h /
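If the list is long, sorting it by size makes the culprit obvious, something like

du -x -d1 -h / | sort -h    # biggest directories end up at the bottom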
Thanks for chiming in! I ran the command, but added “*” after the slash and “| less” to get a more readable printout. It looks like /var/lib (FUCKING LIBS!!!) is the culprit, taking up 415 GB. What do?
- ∞ 🏳️⚧️Edie@hexbear.net:
Run du in /var/lib to find the offending dir there
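Same idea, one level down, something like

du -x -d1 -h /var/lib | sort -h | tail -n 10    # ten biggest entries under /var/lib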
And I assume follow that rabbit hole to the bottom? Brb.
- ∞ 🏳️⚧️Edie@hexbear.net:
Yup. It should at least help you figure out what is taking up the space. What to do after that is another question.
Okay, found the problem files. They’re in /var/lib/vz/dump/. I don’t know what vzdump files are, but they have qemu and lxc names in the file names, with dates and times: a whole bunch of .log, .tar, .zst, and .notes files, and combinations of those extensions. A lot of them are taking up multiple GB each, and it’s a long list.
- ∞ 🏳️⚧️Edie@hexbear.net:
Searching for the path usually leads to some good answers.
That’s the Proxmox backup dir; vzdump is the tool Proxmox uses to back up your VMs and containers.
Sooo… what do?
As shitty as it is, I would ask the folks over on the shitty-lib-lemmy, dbzero.
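(Though honestly, those dumps are just files. If you want space back right now, list them by date and delete the old ones you don’t need; the filenames below are made-up examples of the vzdump naming pattern, so check your own before deleting anything.)

ls -lht /var/lib/vz/dump/    # newest first, with sizes
rm /var/lib/vz/dump/vzdump-lxc-101-2024_01_05-02_30_00.tar.zst    # example old container backup
rm /var/lib/vz/dump/vzdump-qemu-200-2024_01_05-02_45_00.vma.zst    # example old VM backup

Longer term, give the backup job a retention limit in the Proxmox GUI (Datacenter > Backup, if memory serves) so the dump dir stops filling the root pool.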
Already solved tho! Honestly, hexbear has been a solid tech help community. People on official help channels couldn’t be bothered. A comrade in this thread got me fixed up in under an hour.
Are you using ZFS snapshots at all? I have seen similar symptoms with automatic snapshotting that fills the pool, which then becomes read-only. This command will show all snapshots:
zfs list -r -t snapshot
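If any do show up, you can see how much space each one holds and destroy the ones you don’t need. The dataset and snapshot names here are examples, not yours:

zfs list -r -t snapshot -o name,used rpool    # space held by each snapshot
zfs destroy rpool/data/subvol-100-disk-0@autosnap_2024-01-15    # example; use a real name from the list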
I… think? Not entirely clear on what snapshots are, but I think I turned them on by following the tutorials.
Just ran that command; it said “no datasets available”.