Well, not a noob, more like an idiot 😂 EDIT: Yes, on the same drive as my Home folder, etc. And yes, technically they’re snapshots, not backups.

  • FauxLiving@lemmy.world · 2 months ago

    Best thing I’ve ever done was to set up an automated weekly script that takes a ZFS snapshot and then deletes any that are over a month old.
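
    A minimal sketch of that kind of job in Python, shelling out to the zfs CLI (the dataset name, the “weekly-” prefix, and the 30-day cutoff are placeholders, not details from the comment above):

    ```python
    #!/usr/bin/env python3
    """Weekly ZFS snapshot plus pruning of weeklies older than ~a month.
    Sketch only: 'tank/home', the 'weekly-' prefix and KEEP_DAYS are placeholders."""
    import subprocess
    from datetime import datetime, timedelta

    DATASET = "tank/home"   # hypothetical dataset
    PREFIX = "weekly-"      # only touch snapshots this script created
    KEEP_DAYS = 30

    def zfs(*args: str) -> str:
        return subprocess.run(["zfs", *args], check=True,
                              capture_output=True, text=True).stdout

    # Take this week's snapshot, e.g. tank/home@weekly-2024-05-12
    zfs("snapshot", f"{DATASET}@{PREFIX}{datetime.now():%Y-%m-%d}")

    # List snapshots of the dataset and destroy weeklies past the cutoff.
    cutoff = datetime.now() - timedelta(days=KEEP_DAYS)
    for name in zfs("list", "-H", "-t", "snapshot", "-o", "name", "-r", DATASET).splitlines():
        snap = name.split("@", 1)[1]
        if snap.startswith(PREFIX):
            taken = datetime.strptime(snap[len(PREFIX):], "%Y-%m-%d")
            if taken < cutoff:
                zfs("destroy", name)   # destroys the snapshot only, never the dataset
    ```

    Wired to a weekly cron entry or systemd timer, that’s the whole job; parsing each snapshot’s creation property instead of its name would be a bit more robust.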

    • Logical@lemmy.world · 2 months ago

      That’s a very good idea. Might wanna keep an additional yearly one too though, in case you don’t use the computer actively for a while and realize you have to go back more than a month at some point.

      • FauxLiving@lemmy.world · 2 months ago

        Ya, I back up the entire zpool offsite at least once a year, and I have quarterly and yearly snapshots too.

        But the weeklies have saved me on several occasions; the others haven’t been needed yet.
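
        The offsite run is basically a replication stream piped over SSH; a rough sketch, where the snapshot, host, and target dataset names are all made up:

        ```python
        #!/usr/bin/env python3
        """Sketch: push a recursive pool snapshot to an offsite machine over SSH.
        'tank@yearly-2024', 'backuphost' and 'backup/tank' are placeholders."""
        import subprocess

        SNAP = "tank@yearly-2024"       # recursive snapshot of the whole pool
        REMOTE = "backuphost"           # offsite box reachable over SSH
        TARGET = "backup/tank"          # dataset it lands in on the remote pool

        # zfs send -R streams the snapshot plus all child datasets/snapshots;
        # zfs receive -F on the far side recreates them under the target dataset.
        send = subprocess.Popen(["zfs", "send", "-R", SNAP], stdout=subprocess.PIPE)
        recv = subprocess.Popen(["ssh", REMOTE, "zfs", "receive", "-F", TARGET],
                                stdin=send.stdout)
        send.stdout.close()             # so send sees a broken pipe if receive dies
        recv.wait()
        send.wait()
        ```

        Later runs would normally be incremental (zfs send -R -I <old> <new>) so only the deltas cross the wire.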

  • palordrolap@fedia.io · 2 months ago

    I usually set up a completely separate partition on a different drive for Timeshift. That way it doesn’t gradually eat away at system space on the main drive. And even if it were on there, it would have already eaten all that space in readiness, so to speak.

    Also, I don’t have it backing up my home directory. I do that separately.

    But that said, this post reminded me to check whether there were any old snapshots that could do with deleting. And there were a few. It’s now back down to roughly the same size as my main OS install, which is about as big as it needs to be if you think about it.
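
    If anyone wants to do that check without clicking through the GUI, something like this lists what’s sitting on the Timeshift partition by age (the mount point is a placeholder; rsync-mode snapshots normally live under timeshift/snapshots/ on the backup device):

    ```python
    #!/usr/bin/env python3
    """Sketch: list Timeshift snapshots on a dedicated partition, oldest first.
    The mount point is a placeholder; adjust to wherever the partition is mounted."""
    from datetime import datetime
    from pathlib import Path

    SNAP_DIR = Path("/mnt/backup/timeshift/snapshots")   # hypothetical mount point

    for snap in sorted(SNAP_DIR.iterdir(), key=lambda p: p.stat().st_mtime):
        when = datetime.fromtimestamp(snap.stat().st_mtime)
        print(f"{when:%Y-%m-%d}  {snap.name}")

    # Deleting is best left to Timeshift itself, since rsync snapshots share
    # hard links and per-directory sizes are misleading.
    ```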

  • Allero@lemmy.today · 2 months ago

    I recently had 2/3 of my drive space taken up by btrfs snapshots. Still learning to manage them properly :D
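
    In case it helps, a reasonable first step is just seeing which snapshot subvolumes exist and (with quotas enabled) how much space they pin; a sketch against the btrfs CLI, with “/” assumed to be the btrfs mount:

    ```python
    #!/usr/bin/env python3
    """Sketch: list btrfs snapshot subvolumes and their space usage.
    Assumes '/' is the mounted btrfs filesystem and quotas are enabled."""
    import subprocess

    def btrfs(*args: str) -> str:
        return subprocess.run(["btrfs", *args], check=True,
                              capture_output=True, text=True).stdout

    # -s limits the listing to snapshot subvolumes.
    print(btrfs("subvolume", "list", "-s", "/"))

    # With quotas on (btrfs quota enable /), qgroup show reports per-subvolume
    # referenced vs. exclusive bytes -- the exclusive column is what deleting frees.
    print(btrfs("qgroup", "show", "/"))

    # A snapshot that's no longer needed goes away with:
    #   btrfs subvolume delete <path-to-snapshot>
    # (or through snapper/Timeshift if one of those created it).
    ```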

  • germtm.@lemmy.world · 2 months ago

    I am also running out of disk space (the pacman package cache, Team Fortress 2, a Windows VM, and the Android SDK being the main culprits).
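
    For the package-cache part, Arch’s pacman-contrib ships paccache for exactly this; a short sketch (the keep-3 policy is just an example):

    ```python
    #!/usr/bin/env python3
    """Sketch: report the pacman cache size, then trim it with paccache
    (from the pacman-contrib package). Keeping 3 versions is an example policy."""
    import subprocess

    CACHE = "/var/cache/pacman/pkg"

    # Current cache size (same as: du -sh /var/cache/pacman/pkg)
    print(subprocess.run(["du", "-sh", CACHE], capture_output=True, text=True).stdout)

    # Keep only the 3 most recent cached versions of each installed package...
    subprocess.run(["paccache", "-rk3"], check=True)
    # ...and drop every cached version of packages that are no longer installed.
    subprocess.run(["paccache", "-ruk0"], check=True)
    ```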

  • Sidhean@lemmy.world · 2 months ago

    “dust” is my go-to CLI tool for finding what’s taking up hard drive space.
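
    For anyone without dust installed, the idea is roughly “du, sorted, biggest first”; a tiny approximation in Python:

    ```python
    #!/usr/bin/env python3
    """Tiny approximation of what dust/du report: total size per top-level
    entry of a directory, biggest first. Pass a path; defaults to $HOME."""
    import os, sys
    from pathlib import Path

    def tree_size(path: Path) -> int:
        """Sum apparent file sizes under path, ignoring anything we can't stat."""
        total = 0
        for root, _dirs, files in os.walk(path, onerror=lambda e: None):
            for name in files:
                try:
                    total += os.lstat(os.path.join(root, name)).st_size
                except OSError:
                    pass
        return total

    base = Path(sys.argv[1]) if len(sys.argv) > 1 else Path.home()
    sizes = [(tree_size(p) if p.is_dir() else p.lstat().st_size, p.name)
             for p in base.iterdir()]
    for size, name in sorted(sizes, reverse=True)[:15]:
        print(f"{size / 1e9:8.2f} GB  {name}")
    ```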

    Speaking of, I should check my Timeshift settings.

    • rumba@lemmy.zip · 2 months ago

      Mmmm, somebody needs some logrotate in their life.

      Oh, my production s***’s on point, but for all the dev and QA s*** I need at least one failure before I get around to setting up logrotate. I guess I should spend the time to write the Ansible job.
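
      For the dev/QA boxes, a drop-in under /etc/logrotate.d/ is usually all it takes, whether Ansible templates it or it’s dropped in by hand; the app name, log path, and policy below are made-up examples:

      ```python
      #!/usr/bin/env python3
      """Sketch: install a logrotate drop-in for an app's logs.
      The app name, log path, and rotation policy are placeholder examples."""
      from pathlib import Path

      CONFIG = """\
      /var/log/myapp/*.log {
          weekly
          rotate 4
          compress
          missingok
          notifempty
      }
      """

      # logrotate picks up every file in /etc/logrotate.d/ on its regular
      # (usually daily) run, so writing the file is the whole job.
      Path("/etc/logrotate.d/myapp").write_text(CONFIG)
      ```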

    • MangoPenguin@lemmy.blahaj.zone · 2 months ago

      This is what confuses me about Linux defaults: why would it let logs grow that large?

      We can tune logging settings to reasonable values for the max size and everything; it just doesn’t come that way for some reason.

      • faerbit@sh.itjust.works · 2 months ago

        If you don’t use archaic technologies, it actually does. By default, systemd-journald is limited to 10% of the filesystem size, capped at 4 GB.
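
        And when that default cap is still too generous (or too stingy), it’s one knob in a journald drop-in plus a vacuum for what’s already on disk; a sketch, with 500M as an example value:

        ```python
        #!/usr/bin/env python3
        """Sketch: cap journald disk usage via a drop-in and trim existing journals.
        The 500M figure is just an example."""
        import subprocess
        from pathlib import Path

        dropin = Path("/etc/systemd/journald.conf.d")
        dropin.mkdir(parents=True, exist_ok=True)
        (dropin / "size.conf").write_text("[Journal]\nSystemMaxUse=500M\n")

        # Apply the new limit, then shrink what's already stored.
        subprocess.run(["systemctl", "restart", "systemd-journald"], check=True)
        subprocess.run(["journalctl", "--vacuum-size=500M"], check=True)
        ```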

      • Chrobin@discuss.tchncs.de · 2 months ago

        Well, Linux is also made for servers and supercomputers; just imagine it refusing to keep logs because the file’s too large.

          • Chrobin@discuss.tchncs.de · 2 months ago

            But I think it’s better for it to fail from expected behavior than from unexpected behavior. Your storage filling up is transparent and expected, but a log file hitting a maximum size and getting cut off is unexpected and would surprise a lot of people.

            I use supercomputers myself, and the log files can grow to many GB; I would hate it if they just got cut off at some point.

            • MangoPenguin@lemmy.blahaj.zone · 2 months ago

              I mean, that’s fair, but a supercomputer would be heavily customized, so disabling log limits would be part of that if needed.

  • mlg@lemmy.world · 2 months ago

    I recently realized I had forgotten to use reflink copies on an XFS filesystem, so I ran duperemove, which freed ~600 GB of data.
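
    For anyone curious, that boils down to two commands; a sketch with placeholder paths:

    ```python
    #!/usr/bin/env python3
    """Sketch: reflink copy plus after-the-fact dedupe on XFS. Paths are placeholders."""
    import subprocess

    # What a reflink-aware copy looks like: the new file shares extents with the
    # original (XFS supports this when the filesystem was created with reflink=1).
    subprocess.run(["cp", "--reflink=always", "/data/big.img", "/data/big-copy.img"],
                   check=True)

    # duperemove finds duplicate extents after the fact and re-shares them:
    # -r recurses into the directory, -d actually submits the dedupe requests
    # (without -d it only reports what it would do).
    subprocess.run(["duperemove", "-rd", "/data"], check=True)
    ```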