And that’s mostly the “bullshit IoT” category. It’s not like the demand for phones and laptops exploded in the last years, it’s IoT, AI and other useless crap - regardless of the process node.
We could start by not requiring new chips every few years.
For 90% of users, there hasn’t been any actual gain in the last 5-10 years. Older computers work perfectly fine, but artificial slowdowns and bad software make laptops feel sluggish for most users.
Phones haven’t really advanced either. But apps and OSes are too bloated and the hardware is impossible to repair, so a new phone it is.
Every device nowadays needs WiFi and AI for some reason, so of course a new dishwasher has more computing power than an early Cray, even though none of that is ever used.
What exactly do you think these chips are used for?
Because it’s often enough AI, crypto and bullshit IoT.
Usually ~/devel/
On my work laptop I have separate subdirs for each project and basically try to mirror the GitLab group/project structure, because some fucktards like to split every project into 20 repos.
Ansible is actually pretty nice, if you get the hang of it. Not perfect, but better than triple tunnel ssh.
You could simply automate step by step: each time you change something by hand, you add it to the playbook, and over time you end up with a good setup.
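To make that incremental approach concrete, a playbook can start tiny and grow one task at a time. A minimal sketch (hostnames, paths and package names here are just placeholder assumptions):

```yaml
# site.yml -- add a task every time you change something manually
- hosts: devboxes        # hypothetical inventory group
  become: true
  tasks:
    - name: Install the tools I just set up by hand
      ansible.builtin.package:
        name:
          - git
          - tmux
        state: present

    - name: Deploy my shell config
      ansible.builtin.copy:
        src: files/bashrc          # assumed local file
        dest: "/home/{{ ansible_user }}/.bashrc"
        owner: "{{ ansible_user }}"
```

Run it with `ansible-playbook -i inventory site.yml` whenever you touch a machine; since the modules are idempotent, re-running it is cheap.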
Flaky dev setups are productivity killers.
The real question is why you’re torturing yourself by manually fixing that stuff? Don’t you terraform your Ansibles?
Admittedly, I only ever entered an operating room under anesthesia, but could you just, you know, put the displays somewhere else?
This seems like one of those infomercial “problems”.
You’re oversimplifying things, drastically.
Corporations don’t have one project, they have dozens, maybe hundreds. And those projects need staffing.
It’s not a chair factory where more people equals faster delivery.
And that’s the core of your fallacy - latency versus throughput. Yes, putting 10 new devs on a project won’t magically increase its speed. But 200 developers can get 20 projects done, where 10 devs only finish one.
Though, technically not anyone can access every piece, so I guess we could dismiss it as a thing of the past.
That’s how words work, yes.
The threat of public information for most people is not a data broker, but their neighbor. And unless you have a particularly psychopathic neighbor, they can’t realistically access data from a data broker.
It’s threat modeling, like all of cyber security. My phone’s password protects me from a random thief, but if a state actor really wants my data, they will get it - the chances of them even trying are just very low for me personally.
This is just the peak power. The average power is much less. And batteries can maybe work on a grid scale for smoothing, but not for an individual consumer like a data center.
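The peak-versus-average gap is easy to put numbers on. A back-of-envelope sketch (all figures are illustrative assumptions, not real facility data):

```python
# Illustrative peak-shaving arithmetic -- assumed numbers, not benchmarks.
peak_kw = 1000     # assumed peak draw of a small data center
avg_kw = 600       # assumed average draw
hours = 1.0        # how long the peak needs to be smoothed

# Battery energy needed to shave peak down to average for that window:
battery_kwh = (peak_kw - avg_kw) * hours
print(battery_kwh)  # 400 kWh -- roughly 30 home-battery packs for one hour
```

Even with these modest assumed numbers, smoothing a single hour of peak demand takes an order of magnitude more storage than any individual consumer installs, which is why this only pencils out at grid scale.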
Outsourcing is realistically often a tool to get capacity, not to cut costs.
There’s a reason so many people went to coding boot camps: there was a huge demand for developers. Here in Germany, for quite a while you literally couldn’t get developers unless you paid outrageous salaries. There were none. So if you needed a lot of devs, you had the choice to either outsource or cancel the project.
I actually talked to a manager about our “nearshoring”, and it wasn’t actually that much cheaper once you accounted for all the friction, but you could deliver volume.
BTW: there’s a big difference between hiring the cheapest contractors you can find and opening an office in a low income country. My colleagues from Poland, Estonia and Romania were paid maybe half what I got, but those guys are absolutely solid, no complaints.
I get your point, but have you looked into the power demands of data centers? They already have room-filling batteries for power outages, but those are just enough to keep the lights on while the diesel generators start.
That’s not “readily available”, and it’s certainly not given voluntarily by users - it’s often straight-up illegal. That’s a very different case.
None. There is no model that can output anything even remotely usable in that tiny amount of RAM, and certainly not with the few CPU cycles your VPS has to offer.
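The arithmetic is brutal even before you count CPU. A rough sketch of the memory needed just to hold model weights (sizes and quantization levels are illustrative assumptions):

```python
# Back-of-envelope: RAM to hold model weights alone.
# Real inference also needs KV cache and activations on top of this.
def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """Memory for the weights alone, in GiB."""
    return params_billion * 1e9 * bytes_per_param / 2**30

# A hypothetical 7B-parameter model, aggressively 4-bit quantized
# (~0.5 bytes per parameter):
print(round(weights_gb(7, 0.5), 1))  # -> 3.3 (GiB), before any overhead
```

A cheap VPS typically ships with 1-2 GB of RAM, so even a heavily quantized small model doesn’t fit, let alone run at a usable speed on shared vCPUs.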
Isn’t that pretty much a thing of the past? This meme is maybe true for Facebook, but most people under 40 don’t use that anyway, and the “public diary” days are also pretty much over. Sure, you can stitch together a lot from geolocating Instagram posts and LinkedIn information, but it’s not like it’s the searchable database Facebook was in 2012.
Businesses (at least the larger ones) replace their hardware every few years anyway. They don’t care whether their new Optiplexes run Windows 10 or 11, and most hardware bought since 2022 probably has Windows 11 installed already; probably everything since 2020 supports it. So there’s hardly a problem here. (Btw, I’m taking the management view here - I know that it’s a pain to actually deploy, but that doesn’t matter to management.)
You don’t have to get all philosophical, since the value of art is almost by definition debatable.
These models can’t do basic logic. They already fail at this. And that’s actually relevant to corpos if you can suddenly convince a chatbot to reduce your bill by 60% because “bears don’t eat mangos” or some other nonsensical statement.
Oh come on, are you really that boneheaded not to understand that you’re not the norm?
I have literally never had a single power surge in my entire life. The only power outages I’ve had lasted a few minutes, maybe three times in the last 15 years.
The larping refers to you. Either you are truly an outlier who actually runs a small DC, or you just like the feeling you get from pretending to do so.
Your attitude is roughly the audiophile equivalent of “only gold-plated cables made from solid silver”. Technically maybe correct, practically a self-important waste of money.
But not for us.
That’s what I meant by larping. The vast, vast majority of us here would probably not even notice if their systems went down for an hour. Yes, battery backup has its purpose. In a datacenter.
I mean, what’s on the line here in the worst case? 15 minutes without Jellyfin and Home Assistant? Does that warrant taking risks with old batteries or investing in new ones?
That equation might change if you’re in a place with truly unreliable electricity, but I guess those places have solutions in place already.
I use Karch, btw.