

I once saw someone in the IRC channel jokingly refer to Alpine as "Alpine Linux pine". Now when I go to the website, all my mind reads is "Alpine Linux Pine Linux"…
I got super lucky: someone created a restart policy for Podman just a week ago. It works without my changing anything in my docker-compose.yml files, as long as the file states restart: always.
I followed Alpine's wiki to install and set up Podman, then the instructions in this GitHub repository, and everything works quite well on Alpine Linux.
I’ll have to play around with Podman some more and give it time to see how it holds up, but so far it seems promising.
I've spent a few hours with Podman and was able to get my reverse proxy and a couple of smaller services running, which is quite nice. I'm using Alpine Linux, so there were some extra steps to follow, but their wiki handles that pretty well. The only issue I still need to figure out is how to auto-start my services after a system restart, since Podman's development seems to focus on systemd. This seems like a good start, but I think I first need to figure out how pods and containers work in Podman.
I've only started learning this stuff recently, but I'm surprised how relaxed Docker is with port management. I was under the impression that Docker was more secure because it's containerized. Even more surprising was how little documentation there is on securing Docker ports.
A couple of weeks ago I stumbled onto the fact that Docker pretty much ignores your firewall and manipulates iptables in the background. The way it sets itself up, the firewall front end has no idea the changes were made, and they won't show up when you list your firewall policies. You can check iptables itself to see what Docker is doing, but iptables isn't easy or simple to work with.
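To see what Docker is doing behind the firewall front end's back, you can query iptables for the chains Docker manages directly. This is a sketch assuming a stock Docker install (the DOCKER and DOCKER-USER chain names are Docker's defaults):

```shell
# NAT rules Docker creates for published ports; these never
# appear in ufw/firewalld front ends.
sudo iptables -t nat -L DOCKER -n -v

# The filter-table DOCKER chain holds per-container accept rules.
sudo iptables -L DOCKER -n -v

# DOCKER-USER is the chain Docker reserves for your own rules;
# it is evaluated before Docker's own rules, so blocks placed
# here actually take effect.
sudo iptables -L DOCKER-USER -n -v
```

DOCKER-USER is the one escape hatch: rules you add there survive Docker's rewrites, which is why it's the usual place to restrict access to published ports.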
I noticed your list included firewalld, but I have some concerns about that. The first is that the default firewall backend has changed from iptables to nftables, which means the guide you linked is missing a step to change backends. Also, when changing backends by editing /etc/firewalld/firewalld.conf, there is a note saying the iptables backend is deprecated and will be removed in the future:
# FirewallBackend
# Selects the firewall backend implementation.
# Choices are:
# - nftables (default)
# - iptables (iptables, ip6tables, ebtables and ipset)
# Note: The iptables backend is deprecated. It will be removed in a future
# release.
FirewallBackend=nftables
If following that guide works for other people, it may be okay for now, although I think looking into alternative firewalls is worth strongly considering for the future.
I did stumble across some ways to help deal with exposed Docker ports. I currently have three Docker services that all sit behind a reverse proxy, in this case Caddy. The first thing to do is create a Docker network; for example, I created one called "reverse_proxy" with the command:
docker network create reverse_proxy
After that I add the following lines to each docker-compose.yml file for all three services plus Caddy.
services:
  SERVICE_NAME:
    networks:
      - reverse_proxy

networks:
  reverse_proxy:
    external: true
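Once the containers are up, you can confirm they actually joined the shared network. A quick check, assuming the network is named reverse_proxy as above:

```shell
# List the names of all containers attached to the reverse_proxy network
docker network inspect reverse_proxy \
  --format '{{range .Containers}}{{.Name}} {{end}}'
```

All three services plus caddy should appear in the output; anything missing won't be reachable by container name from the Caddyfile.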
This will allow the three services plus Caddy to communicate with each other. Running the following command lists all your containers and their ports. The container Name is what you'll use in the Caddyfile to set up the reverse proxy.
docker container ls --format "table {{.ID}}\t{{.Names}}\t{{.Ports}}" -a
Then you can add the following to the Caddyfile. Replace the capitalized parts with your own domain name and Docker container name, and change #### to the internal port number of your Docker container. If the ports in your docker-compose.yml look like "5000:8000", 5000 is the external (host) port and 8000 is the internal (container) port.
SUBDOMAIN.DOMAINNAME.COM:80 {
reverse_proxy DOCKER_CONTAINER_NAME:####
}
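If you edit the Caddyfile later, you don't need to restart the whole container; Caddy can reload its config in place. This assumes the container is named caddy and the Caddyfile is mounted at /etc/caddy/Caddyfile:

```shell
# Ask the running Caddy process to re-read its config without downtime
docker exec caddy caddy reload --config /etc/caddy/Caddyfile
```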
After starting the Caddy container, things should work as normal; however, the three services behind the reverse proxy are still reachable directly through their own ports, for example subdomain.domainname.com:5000 in your browser.
You can prefix the service's external port with 127.0.0.1: in docker-compose.yml to make those container ports accessible only from the localhost machine.
Before:
ports:
  - 5000:8000
After:
ports:
  - 127.0.0.1:5000:8000
After restarting the services, the only externally accessible port should be Caddy's. You can check which ports are open with the command
netstat -tunpl
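As an extra sanity check, you can probe the ports directly. With the 127.0.0.1 binding in place, the service port should answer on the host itself but refuse connections from outside; SERVER_IP and the port numbers here are placeholders for your own values:

```shell
# On the Docker host: should return an HTTP status (port bound to loopback)
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:5000/

# From another machine: should fail to connect
curl -s --max-time 5 http://SERVER_IP:5000/ || echo "refused or timed out"

# Caddy's port should still answer from anywhere
curl -s -o /dev/null -w '%{http_code}\n' http://SERVER_IP:80/
```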
Below I'll leave a working example for Caddy and Kiwix (offline Wikipedia).
Caddy: docker-compose.yml
services:
  caddy:
    container_name: caddy
    image: caddy:latest
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    networks:
      - reverse_proxy
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config

volumes:
  caddy_data:
  caddy_config:

networks:
  reverse_proxy:
    external: true
Caddy: Caddyfile
wiki.Domainname.com:80 {
reverse_proxy kiwix:8080
}
Kiwix: docker-compose.yml (if you plan to use this setup, you MUST download a .zim file and place it in the /data/ folder, in this case /srv/kiwix/data): Kiwix Library .zim Files
services:
  kiwix:
    image: ghcr.io/kiwix/kiwix-serve
    container_name: kiwix
    ports:
      - 127.0.0.1:8080:8080
    volumes:
      - /srv/kiwix/data:/data
    command: "*.zim"
    restart: unless-stopped
    networks:
      - reverse_proxy

networks:
  reverse_proxy:
    external: true
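With both files in place, the stack comes up in a couple of steps. The network has to exist before either compose file is started; the commands below assume Docker Compose v2 and that each docker-compose.yml sits in its own directory:

```shell
# One-time: create the shared network (skip if it already exists)
docker network create reverse_proxy

# Run in the Kiwix directory, then again in the Caddy directory
docker compose up -d

# Verify: only Caddy's 80/443 should be published externally;
# Kiwix should show 127.0.0.1:8080
docker container ls --format "table {{.Names}}\t{{.Ports}}"
```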
What I'm interested in from a firewall is some sort of rate-limiting feature. I would like to set it up as a simple last line of defense against DDoS situations. Even with my current setup with Docker and Caddy, I still have no control over Caddy's exposed port, so anything done by the firewall will still be completely ignored.
I may try out Podman and see if I can get UFW or Awall to work the way I'd like. Hopefully that's not too deep of a rabbit hole.
For verification I used the built-in certificate manager in Nginx Proxy Manager. I generated an API key from Cloudflare (a DNS zone:zone:edit key) for the domain I am using, then chose DNS verification in Proxy Manager and put the API key in the edit box. This has been successful every time.
Do you use Cloudflare Tunnel, or are you using Cloudflare as a dynamic DNS? I've had issues with certbot, but I think I just wasn't using it properly. What process did you use for DNS verification?
I'll give your suggestions a try when I get the motivation to try again. I've sort of burnt myself out at the moment and would like to work on other things.
I am actually using a Cloudflare Tunnel with SSL enabled, which is how I was able to achieve that in the first place.
For the curious here are the steps I took to get that to work:
This is on a Raspberry Pi 5 (arm64, Raspberry Pi OS/Debian 12)
# Cloudflared -> Install & Create Tunnel & Run Tunnel
-> https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/get-started/create-local-tunnel/
-> Select option -> Linux
-> Step 4: Change -> credentials-file: /root/.cloudflared/<Tunnel-UUID>.json -> credentials-file: /home/USERNAME/.cloudflared/<Tunnel-UUID>.json
-> Run as a service
-> Open new terminal
-> sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml
-> https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/configure-tunnels/local-management/as-a-service/
-> Configuration (Optional) -> https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/configure-tunnels/local-management/configuration-file/
-> sudo systemctl restart cloudflared
-> Enable SSL connections on Cloudflare site
-> Main Page -> Websites -> DOMAINNAME.COM -> SSL/TLS -> Configure -> Full -> Save
-> SSL/TLS -> Edge Certificates -> Always Use HTTPS: On -> Opportunistic Encryption: On -> Automatic HTTPS Rewrites: On -> Universal SSL: Enabled
Cloudflared complains about ~/.cloudflared/config.yml and /etc/cloudflared/config.yml not matching. Whenever I make any changes, I just edit ~/.cloudflared/config.yml, run sudo cp ~/.cloudflared/config.yml /etc/cloudflared/config.yml, and then sudo systemctl restart cloudflared.
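Those two commands can be rolled into a small helper script so the copy and the restart always happen together. This is just a convenience sketch; the paths match the setup above, and the script name is made up:

```shell
#!/bin/sh
# sync-cloudflared.sh: copy the user-level config over the system one,
# then restart the service so the change takes effect.
set -e  # stop immediately if the copy fails

sudo cp "$HOME/.cloudflared/config.yml" /etc/cloudflared/config.yml
sudo systemctl restart cloudflared

# Show the service state so a bad config is caught right away
sudo systemctl status cloudflared --no-pager
```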
The configuration step is just there as reference for myself, it’s not necessary for a simple setup.
The tunnel is nice and convenient, and it does the job well. I just have a strong personal preference not to depend on large organizations. I've installed Timeshift as a backup management tool so I can easily revisit this topic later when my brain is ready.
Nginx Proxy Manager has been handling certs for me. I'm not sure how it handles them since it's packaged in a Docker container; I can only assume it does something similar to Caddy, which also automatically handles certificate registration and renewals. So probably certbot.
All I know is that NPM has an option for DNS challenges which is how I got my certs in the first place.
That’s what I thought. NPM is handling the certs just fine.
Could it be that I’m setting up the reverse proxy wrong? Whenever I enable SSL on that reverse proxy, the connection just hangs and drops after a minute. I’m not understanding why it’s doing that.
Baby bush turkeys are so fuzzy and cute. I miss living among the bush turkeys and bearded dragons when I was in Byron Bay.