• 2 Posts
  • 22 Comments
Joined 2 years ago
Cake day: December 12th, 2023


  • I created my own script/tool using rsync to handle backups and transferring data.

    My needs are fairly modest, with just a computer and two Raspberry Pis, but I’ve found rsync to be really useful overall.

    My backup strategy is to make a complete backup on each local device (computer / RPi4 / RPi5), copy all those backups to a storage partition on my computer, and then back up that whole partition to an externally attached SSD.

    The RPi’s both use docker/podman containers so I make sure any persistent data is in mounted directories. I usually stop all containers before performing a backup, especially things with databases.
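
    As a rough sketch of one stage of that strategy (not my actual script; the container names, paths and hostname here are made up just for illustration), it boils down to ordinary rsync calls wrapped in a little housekeeping:

    # stop containers so databases are quiet, snapshot locally, push to the computer, restart
    podman stop piefed-app piefed-db
    rsync -aAX --delete /srv/containers/ /backup/rpi5-full/
    rsync -aAX --delete -e ssh /backup/rpi5-full/ user@computer:/mnt/storage/backups/rpi5/
    podman start piefed-db piefed-app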

    Restoring what’s in the docker containers is hit or miss. The simple docker images restore as if they were untouched and launch like nothing happened. I have a PieFed instance that must be rebuilt after restoring a backup, but since PieFed’s persistent data is in mount points, everything works perfectly after a fresh build.

    I can send a link to my rsync tool if that’s of any interest to anyone. I’ve found it super useful for backups, and it saves me a lot of headache when transferring files between network-connected devices.


  • Maybe it’s slightly outside no JS/CSS/HTML, but I am curious if there are any super minimal social media sites.

    I want to do something locally within my town and it would be nice to host something simple and tiny with my raspberry pi as the server.

    I’m assuming bulletin boards are quite minimal in comparison to other types of social media but I’ve never been a fan of how they handle previous replies with those boxed quotes.

    I’ve also been nostalgic for IRC lately. Everything on the internet these days has become overwhelming. Over the past 1.5 years I’ve been turning toward simplicity, and it’s a craving that’s hard to ignore.


  • I have a computer and 3 devices I wanted to transfer files between, but every available solution was either too awkward, which made things annoying, or too bulky, with more than what I needed.

    I ended up writing a long script (around 1000 lines but I’m generous with spacing so I can read my own code easily) using rsync to deal with transferring files and whole directories with a single command. I can even chain together multiple rsync commands back to back so that I can quickly transfer multiple files or directories in one command. Instead of trying to refer to a wall of text full of rsync commands, I can make something like this:

    alias rtPHONEmedia="doas rtransfer /home/dell-pc/.sync/phone/.sync-phone_02_playlists /home/dell-pc/.sync/phone/.sync-phone_03_arbeit /home/dell-pc/.sync/phone/.sync-phone_04_albums /home/dell-pc/.sync/phone/.sync-phone_05_soulseek /home/dell-pc/.sync/phone/.sync-phone_06_youtube"

    This will copy everything from specific folders on my phone and store it, neatly organized, in the storage partition on my computer’s SSD. Those files also contain all the necessary connection information, including the SSH username, address and ID keys.
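
    Under the hood there’s nothing exotic going on; each of those .sync files more or less boils down to an rsync call of this shape (the host, key and paths below are made up just to show the idea):

    rsync -avz --progress -e "ssh -i /home/dell-pc/.ssh/id_phone" \
        user@192.168.1.50:/storage/music/playlists/ \
        /mnt/storage/phone/playlists/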

    I can then run rtARCHIVEfull (alias for doas rtransfer /home/dell-pc/.sync/computer/.sync-computer_01_archive-full) to quickly copy that storage partition on my computer to my external backup SSD.

    I use it so often. It’s especially nice because I can work on a file on my computer and immediately push the updated file to the remote location, putting it directly where I need it to be.


  • I’m the same here. I don’t know enough or care to know enough about systemd. I simply enjoy the minimalism of Alpine.

    The downside is that I have to learn a bit more to make it work how I want but as a hobby I enjoy it.

    When I first started with Linux, Mint with systemd just worked for my laptop. For people who are less computer literate, that should be good enough. They don’t want to worry about how to make their computer work; they just wanna do basic computer things without hassle.





  • The automation industry is too dependent on so many other industries. I learned this real quick after the COVID lockdowns of 2020.

    The shipping delays, as well as the overall lack of devices and materials caused huge waves but management made it seem like manageable ripples in a pond.

    I made an educated gamble and got out of that industry in 2021. I couldn’t predict Trump’s tariff wars but I felt a disturbance early on.

    I’m speaking from a North American perspective but the automation industry is global. This doesn’t surprise me but even the headline alone brings a sense of disappointment.


  • I have a small partition that holds a copy of the Linux Mint live USB image. I also have another partition that holds my backups. When I inevitably break my system, I boot into Mint and use an rsync command I keep in a text file to revert to the backup I made.
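
    The command in that text file is nothing fancy; it’s roughly this shape (hypothetical device names and mount points, and the exact exclude list depends on how the partitions are laid out):

    # from the Mint live session: mount both partitions, then mirror the backup onto root
    mount /dev/sda2 /mnt/root
    mount /dev/sda4 /mnt/backups
    rsync -aAXv --delete /mnt/backups/latest/ /mnt/root/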

    Using Mint’s live USB image has multiple benefits. It has GParted for partition management. It has basic apps like LibreOffice and Mozilla in case I need them. It has proper printer support too. And since it’s a live USB image, the environment is the same every time I launch it; no changes are permanent, and everything disappears after a reset.

    My days of using Mint may be over, but it’s too reliable to ever truly leave my system.



  • I was installing Alpine Linux on a Raspberry Pi 5 and was using the kitchen TV as a temporary monitor. My parents thought I was sending encrypted messages. I was just updating the repository list to find the quickest mirror.

    It’s funny to me how some people see text scrolling by on a screen and immediately think witchcraft.


  • This reminds me of when I had apprenticeship classes that got interrupted by the COVID lockdowns. I was forced to do the theory classes online over Zoom. Every morning my wifi connection would drop for a few minutes at a time during class.

    Turns out it was the microwave. Every time someone used the microwave, it would disrupt the wifi/router for the whole house.

    Ended up making a sign to let people know I was in class. My classes were only for 8 weeks total. I had about 4 or 5 weeks remaining by the time I figured it out so it wasn’t too long of an inconvenience.





  • I’ve spent a few hours with Podman and I was able to get my reverse proxy and a couple of smaller services running, which is quite nice. I’m using Alpine Linux so there were some extra steps to follow, but their wiki handles that pretty well. The only issue I still need to figure out is how to auto-start my services after a system restart, since Podman’s tooling seems to focus on systemd. This seems like a good start, but I think I need to figure out how pods and containers work in Podman first.
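
    One thing I might try (untested, and assuming Alpine’s standard OpenRC “local” service) is simply starting the named containers from a local.d script at boot, something like:

    #!/bin/sh
    # /etc/local.d/podman-containers.start
    # (chmod +x this file, then enable it once with: rc-update add local default)
    podman start caddy
    podman start kiwix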

    I’ve only started learning this stuff recently, but I’m surprised how relaxed Docker is with port management. I was under the impression that Docker is more secure because it’s containerized. Even more surprising is how little documentation there is on securing Docker’s ports.


  • A couple weeks ago I stumbled onto the fact that Docker pretty much ignores your firewall and manipulates iptables in the background. The way it sets itself up means the firewall has no idea the changes were made, and they won’t show up when you look at your firewall policies. You can check iptables itself to see what Docker is doing, but iptables isn’t easy or simple to work with.
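
    For anyone who wants to see it for themselves, the rules Docker injects live in its own chains, for example:

    # Docker-created chains in the filter and nat tables
    sudo iptables -L DOCKER -n -v
    sudo iptables -t nat -L DOCKER -n -v
    # or dump everything it has touched
    sudo iptables-save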

    I noticed your list included firewalld, but I have some concerns about that. The first is that the default firewall backend has changed from iptables to nftables, which means the guide you linked is missing a step to change backends. Also, when you go to change backends by editing /etc/firewalld/firewalld.conf, there’s a note saying the iptables backend is deprecated and will be removed in the future:

    # FirewallBackend
    # Selects the firewall backend implementation.
    # Choices are:
    #	- nftables (default)
    #	- iptables (iptables, ip6tables, ebtables and ipset)
    # Note: The iptables backend is deprecated. It will be removed in a future
    # release.
    FirewallBackend=nftables
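
    For reference, the backend switch that guide skips would look roughly like this (only if you actually want to move back to the deprecated iptables backend):

    # change the firewalld backend, then restart it to apply
    sudo sed -i 's/^FirewallBackend=nftables/FirewallBackend=iptables/' /etc/firewalld/firewalld.conf
    sudo systemctl restart firewalld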
    

    If following that guide works for other people, it may be okay for now, although I think finding an alternative firewall for the future is worth strongly considering.

    I did stumble across some ways to help deal with exposed Docker ports. I currently have 3 docker services that all sit behind a reverse proxy, in this case Caddy. The first thing to do is create a docker network. For example, I created one called “reverse_proxy” with the command:

    docker network create reverse_proxy

    After that I add the following lines to each docker-compose.yml file for all three services plus Caddy, nesting the network under the existing service entry:

    services:
      YOUR_SERVICE_NAME:   # the service already defined in that compose file
        networks:
          - reverse_proxy

    networks:
      reverse_proxy:
        external: true
    

    This allows the three services plus Caddy to communicate with each other. Running the following command lists your containers along with their ports. The Name of the container is what gets used in the Caddyfile to set up the reverse proxy.

    docker container ls --format "table {{.ID}}\t{{.Names}}\t{{.Ports}}" -a

    Then you can add the following to the Caddyfile. Replace any capitalized parts with your own domain name and docker container name, and change #### to the internal port number of your docker container. If the ports in your docker-compose.yml look like “5000:8000”, 5000 is the external port and 8000 is the internal port.

    SUBDOMAIN.DOMAINNAME.COM:80 {
            reverse_proxy DOCKER_CONTAINER_NAME:####
    }
    

    After starting the Caddy docker container, things should be working as normal. However, the three services behind the reverse proxy are still reachable from outside by hitting their ports directly, for example subdomain.domainname.com:5000 in your browser.

    You can add 127.0.0.1: to the service’s external port in docker-compose.yml to force those containers’ ports to only be reachable from the localhost machine.

    Before:

        ports:
          - 5000:8000
    

    After:

        ports:
          - 127.0.0.1:5000:8000
    

    After restarting the services, the only port that should be reachable from outside should be Caddy’s. You can check which ports are open with the command:

    netstat -tunpl

    Below I’ll leave a working example for Caddy and Kiwix (offline Wikipedia).

    Caddy: docker-compose.yml

    services:
      caddy:
        container_name: caddy
        image: caddy:latest
        restart: unless-stopped
        ports:
          - 80:80
          - 443:443
        networks:
          - reverse_proxy
        volumes:
          - ./Caddyfile:/etc/caddy/Caddyfile
          - caddy_data:/data
          - caddy_config:/config
    
    volumes:
      caddy_data:
      caddy_config:
    
    networks:
      reverse_proxy:
        external: true
    

    Caddy: Caddyfile

    wiki.Domainname.com:80 {
            reverse_proxy kiwix:8080
    }
    

    Kiwix: docker-compose.yml (if you plan to use this setup, you MUST download a .zim file from the Kiwix library and place it in the /data/ folder, in this case /srv/kiwix/data)

    services:
      kiwix:
        image: ghcr.io/kiwix/kiwix-serve
        container_name: kiwix
        ports:
          - 127.0.0.1:8080:8080
        volumes:
          - /srv/kiwix/data:/data
        command: "*.zim"
        restart: unless-stopped
        networks:
          - reverse_proxy
    
    networks:
      reverse_proxy:
        external: true
    

    What I’m interested in from a firewall is some sort of rate-limiting feature. I would like to set it up as a simple last line of defense against DDoS situations. Even with my current setup with Docker and Caddy, I still have no control over the port Caddy exposes, so anything done by the firewall would still be completely ignored.
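
    For the record, the kind of rule I have in mind is simple enough to express directly in nftables; something along these lines (an untested sketch that assumes an existing inet filter table with an input chain, and in my current setup Docker would bypass it anyway):

    # drop new HTTPS connections past a rough threshold
    nft add rule inet filter input tcp dport 443 ct state new limit rate over 60/second drop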

    I may try out Podman and see if I can get UFW or Awall to work the way I’d like. Hopefully that’s not too deep of a rabbit hole.




  • I’ll give your suggestions a try when I get the motivation to try again. Sort of burnt myself out at the moment and would like to continue with other stuff.

    I am actually using the Cloudflare Tunnel with SSL enabled which is how I was able to achieve that in the first place.

    For the curious here are the steps I took to get that to work:

    This is on a Raspberry Pi 5 (arm64, Raspberry Pi OS/Debian 12)

    # Cloudflared -> Install & Create Tunnel & Run Tunnel
                     -> https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/get-started/create-local-tunnel/
                        -> Select option -> Linux
                        -> Step 4: Change -> credentials-file: /root/.cloudflared/<Tunnel-UUID>.json -> credentials-file: /home/USERNAME/.cloudflared/<Tunnel-UUID>.json
                  -> Run as a service
                     -> Open new terminal
                     -> sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml
                     -> https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/configure-tunnels/local-management/as-a-service/
                  -> Configuration (Optional) -> https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/configure-tunnels/local-management/configuration-file/
                     -> sudo systemctl restart cloudflared
                  -> Enable SSL connections on Cloudflare site
                     -> Main Page -> Websites -> DOMAINNAME.COM -> SSL/TLS -> Configure -> Full -> Save
                        -> SSL/TLS -> Edge Certificates -> Always Use HTTPS: On -> Opportunistic Encryption: On -> Automatic HTTPS Rewrites: On -> Universal SSL: Enabled
    

    Cloudflared complains about ~/.cloudflared/config.yml and /etc/cloudflared/config.yml not matching. Whenever I make changes, I just edit ~/.cloudflared/config.yml, run sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml again, and follow it with sudo systemctl restart cloudflared.

    The configuration step is just there as a reference for myself; it’s not necessary for a simple setup.

    The tunnel is nice and convenient, and it does the job well. I just have a strong personal preference not to depend on large organizations. I’ve installed Timeshift for backup management so I can easily revisit this topic later when my brain is ready.