Well, not a noob, more like an idiot 😂 EDIT: Yes, on the same drive as my Home folder, etc. And yes, technically they’re snapshots, not backups.
93 GB is like one weekend of moderate media piracy for me…
Best thing I’ve ever done was to write a weekly script that takes a ZFS snapshot and then deletes any snapshots that are over a month old.
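For anyone curious, it’s nothing fancy; a rough sketch (the dataset name, the weekly- prefix, and the 35-day cutoff are just placeholders for my setup):

```
#!/usr/bin/env bash
# Weekly ZFS snapshot + prune anything older than roughly a month.
# Dataset name and "weekly-" prefix are placeholders.
set -eu

DATASET="tank/home"
TODAY=$(date +%Y-%m-%d)
CUTOFF=$(date -d "35 days ago" +%s)

# Take this week's snapshot
zfs snapshot "${DATASET}@weekly-${TODAY}"

# Walk existing weekly snapshots (-Hp gives headerless, parseable output
# with creation time as a unix epoch) and destroy anything past the cutoff
zfs list -Hp -t snapshot -o name,creation -r "$DATASET" |
  grep "@weekly-" |
  while read -r name created; do
    if [ "$created" -lt "$CUTOFF" ]; then
      zfs destroy "$name"
    fi
  done
```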
That’s a very good idea. Might wanna keep an additional yearly one too though, in case you don’t use the computer actively for a while and realize you have to go back more than a month at some point.
Ya, I back up the entire zpool offsite at least once a year. I have quarterly and yearly snapshots too.
But the weeklies have saved me on several occasions, the others haven’t been needed yet.
I usually set up a completely separate partition on a different drive for Timeshift. That way it doesn’t gradually eat away at system space on the main drive. And even if it were on the main drive, it would have already claimed all that space up front, so to speak.
Also, I don’t have it backing up my home directory. I do that separately.
But that said, this post has given me the reminder to see if there are any old snapshots that could do with deleting. And there were a few. It’s now back down to roughly the same size as my main OS install again, which is about as big as it needs to be if you think about it.
I recently had 2/3 of my drive space taken up by btrfs snapshots. Still learning to manage them properly :D
The pip cache is another common culprit; I’ve seen it get up to 50 GB.
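If your pip is new enough to have the cache subcommand, checking and clearing it is quick:

```
pip cache info    # where the cache lives and how big it is
pip cache purge   # drop the cached wheels and HTTP downloads
```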
node_modules has entered the chat
There was once a 220 GB log file on my pi-hole server. Probably was a bug though.
400 GB of Timeshift backups… It felt so good removing them XDDD
Same here; I never realize how many btrfs Snapper snapshots I’ve piled up until the end of the year.
I use borg btw.
I’m also running out of disk space (the pacman package cache, Team Fortress 2, a Windows VM, and the Android SDK being the main culprits).
How small is your disk?
512GB.
“dust” is my go-to cli thing for finding what’s taking up hard drive space.
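For anyone who hasn’t tried it, a typical invocation is just (the depth and path are whatever you like):

```
dust -d 3 ~    # biggest directories under your home, three levels deep
```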
Speaking of, I should check my Timeshift settings.
ncdu or gtfo
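Same idea; one command, then you can browse and delete from inside it:

```
ncdu -x /    # scan from root, staying on this filesystem
```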
For me it’s failed AUR builds.
The humble 50GB /var/cache/pacman/pkg on my 256GB drive.
Every. Time.
Filelight: /var/cache/pacman/pkg 👀
Paccache helps, but sometimes a nuke is needed to clean up pkg.
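For anyone following along, the polite option and the nuke look roughly like this:

```
sudo paccache -r       # keep the three most recent versions of each package
sudo paccache -rk1     # keep only the most recent version
sudo pacman -Scc       # the nuke: empty the package cache entirely
```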
lol, you can just set it up to keep the latest snapshots only.
(noob here)
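If it’s Timeshift doing the hoarding, retention lives in its schedule settings, and old snapshots can also be pruned from the CLI; roughly like this (the snapshot name is just an example):

```
sudo timeshift --list
sudo timeshift --delete --snapshot '2024-01-07_12-00-00'
sudo timeshift --delete-all    # remove every snapshot
```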
I’ve been in a similar situation
Holy shit
right? what the hell?
Mmmm, somebody needs some logrotate in their life.
Oh, my production s***'s on point, but for all the dev and QA s*** I need at least one failure before I get around to setting up logrotate. I guess I should spend the time to make the Ansible job.
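A drop-in like this is usually all it takes (the path and name are placeholders):

```
# /etc/logrotate.d/myapp
/var/log/myapp/*.log {
    weekly
    rotate 4
    maxsize 100M
    compress
    missingok
    notifempty
}
```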
This is what confuses me about Linux defaults: why would it let them grow that large?
We can tune logging settings to reasonable values for the max size and everything; it just doesn’t come that way for some reason.
If you don’t use archaic technologies it actually does.
systemd-journald is limited to 10% of the filesystem size (capped at 4 GB) by default.
Well, Linux is also made for servers and supercomputers; just imagine it refusing to keep logs because the file’s too large.
Well it’s better than a server locking up from a full disk.
But I think it’s better for it to fail from expected behavior than from unexpected behavior. A full disk is very transparent and expected, but a log file hitting a max size and getting cut off is unexpected and would surprise a lot of people.
I use supercomputers myself, and the log files can run to many GB; I would hate it if they just got cut off at some point.
I mean that’s fair, but a supercomputer would be heavily customized so disabling log limits would be part of that if needed.
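For anyone who does want a tighter cap than the default, journald’s limit is one line in journald.conf (500M here is just an example):

```
# /etc/systemd/journald.conf
[Journal]
SystemMaxUse=500M
```

Restart systemd-journald afterwards, or run journalctl --vacuum-size=500M to trim what’s already on disk.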
I recently realized I had forgotten to use reflink copies on an XFS filesystem, so I ran duperemove, which freed up ~600 GB.
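For reference, the two halves of that look roughly like this (the paths are just examples):

```
# Share extents up front when copying on a reflink-capable filesystem (XFS, btrfs)
cp --reflink=always big.img big-copy.img

# Or deduplicate after the fact: recurse, actually submit the dedupes, human-readable sizes
duperemove -rdh /data
```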