
  • I’ve been doing this for 30+ years, and it seems like the push lately has been toward oversimplification on the user side, at the cost of resources and hidden complexity on the backend.

    As an Assembly Language programmer I’m used to programming with resource consumption in mind. Did using that extra register just cause a couple of extra PUSH and POP instructions in the loop? What’s the overhead on that?
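
    To make that concrete, here’s a made-up NASM x86-64 fragment: the variable shift needs its count in CL, but RCX is already the loop counter, so every pass through the loop pays for an extra PUSH/POP pair just to juggle registers.

        ; build: nasm -felf64 spill.asm && ld spill.o -o spill
        global _start
        section .data
        shift_amt db 3                  ; shift count loaded each iteration

        section .text
        _start:
            mov  rax, 1                 ; value we keep shifting
            mov  rcx, 10                ; loop counter lives in RCX
        .loop:
            push rcx                    ; extra PUSH: free up CL...
            mov  cl, [shift_amt]        ; the variable form of SHL insists on CL
            shl  rax, cl
            pop  rcx                    ; ...extra POP: get the counter back
            dec  rcx
            jnz  .loop
            mov  rax, 60                ; exit(0) via Linux syscall
            xor  rdi, rdi
            syscall

    Two throwaway stack operations per iteration that nobody asked for. That’s the kind of accounting I mean.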

    But now some people just throw in a JavaScript framework for a single feature and don’t even worry about how it works or what the overhead is, as long as the frontend looks right.

    The same is true with computing generally. We’re abstracting containers inside of VMs on top of base operating systems, which adds so much more resource utilization to the mix (what’s the carbon footprint of that?) behind an extremely complex but hidden backend. Everything’s great until you have to figure out why you’re suddenly losing packets that pass through a virtualized router to linuxbridge or OVS to a Kubernetes pod inside a virtual machine. And if one of those processes fails along the way, BOOM! It’s all gone. But that’s OK; we’ll just tear it down and rebuild it.
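
    And that debugging session is its own adventure. Interface and pod names here are hypothetical, but a first pass looks something like tapping every layer until the packets disappear:

        tcpdump -ni br-ex icmp                  # hypervisor: external bridge
        ovs-vsctl show                          # map the OVS bridges and ports
        tcpdump -ni tap3f2a1b icmp              # the VM's tap device
        kubectl exec -it web-0 -- tcpdump -ni eth0 icmp   # inside the pod, if the image even ships tcpdump

    Four tools, four layers, and that’s before anyone mentions iptables or conntrack.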

    I get it. I understand the draw, and I see the benefits. IaC (infrastructure as code) is awesome, and the speed with which things can be done is amazing. My concern is that I’ve seen a lot of people using these things who don’t know what’s going on under the hood, so they often make assumptions or mistakes that lead to surprises later.

    I’m not sure what the answer is, other than to understand what you’re doing at every step of the way and to always choose the simplest route (but a future-proofed one).


  • I just sent a DMCA takedown last week to get my site removed from their archive. They’ve claimed to follow meta tags and robots.txt since 1998, but no: they had over 1,000,000 of my pages going back that far. They even had my robots.txt, configured to exclude them, archived from 1998.
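
    For reference, the opt-out they’ve historically said they honor is about as simple as config gets. Roughly reconstructed (ia_archiver being the user-agent they documented for their crawler), mine looked like this:

        # robots.txt: the exclusion the Wayback Machine historically said it honored
        User-agent: ia_archiver
        Disallow: /

    Two lines, archived by them since 1998, and apparently ignored the whole time.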

    I’m tired of people linking to archived versions of things that I worked hard to create. Sites like Wikipedia were archiving URLs and then linking to the archived copies, effectively removing branding and blocking user engagement.

    Not to mention that I’m losing advertising revenue when someone views the site through an archive. I have fewer problems with archiving if the original site is gone, but mirroring and republishing active content, with no supported way to prevent it short of legal action, is ridiculous. And I lose control over what’s done with that content: are they going to let Google train AI on it through their new partnership?

    I’m not a fan. They could easily allow people to block archiving, but they choose not to. They offer a way to circumvent artist or owner control, and I’m surprised that they still exist.

    So… That’s what I think is wrong with them.

    From a security perspective, it’s terrible that they were breached. But it is kind of ironic: maybe they can think of it as an archive of their passwords or something.