Just imagine if the UN had teeth for enforcement, at least for overwhelming votes like this. I feel like it's one of the biggest oversights of the post-WWII order they tried to build.
Big countries, of course, would never allow that, but still.
Sounds like it’d be nice if you had real control over the car’s software, and you could roll it back.
This… also makes me a little more wary driving around Teslas in traffic.
Shouldn’t they be in a rush, instead?
They basically have 7 days (or really till January) before NATO could be destabilized and delivering weapons becomes far more complicated.
NATO does drills and exercises, and I'm sure everything is inspected before it's sent. They can just quietly throw away what doesn't work and dodge that bullet.
Or just stop being so freaking stingy withholding weapons.
What are they waiting for? The UK to declare war on France? No, they stockpiled all these freaking arms for Soviet aggression or fascist threats, and now it's at their doorstep.
I don’t understand what good Gripens and Typhoons, tanks, missile systems and such do rusting in storage when they could do exactly what they were built to do, right now.
The localllama people are feeling quite mixed about this, as Apple is still charging through the nose for more RAM. Like, orders of magnitude more than the bigger ICs actually cost.
It’s kinda poetic. Apple wants to go all in on self-hosted AI now, yet their incredible RAM stinginess over the years is derailing that.
Presumably you will advance along with humanity though, or failing that, just figure out the transcendence thing yourself with so much time?
I don’t think anyone would choose to stay ‘meatbag human’ for trillions of years.
There is a breaking point, eventually. YouTube’s trajectory is gonna make next quarter’s revenue great, but eventually something else will pick up users’ attention instead.
Maybe I am just out of touch, but I smell another bubble bursting when I look at how enshittified all major web services are simultaneously becoming.
It feels like something has to give, right?
We have YouTube, Reddit, Twitter, and more just racing to enshittify like I can’t even believe, and Google Search is racing to destroy the internet, yet they’re also at the ‘critical mass’ of ‘too big to fail’ and have shoved out all their major competitors already (other than Discord, I guess).
There are already open-source/self-hosted alternatives, like Perplexica.
CEO Tony Stubblebine says it “doesn’t matter” as long as nobody reads it.
They keep generating sign-ups and selling ads… till next quarter, at least.
Soldered is better! It’s sometimes faster, definitely faster if it happens to be LPDDR.
But TBH the only thing that really matters is “how much VRAM do you have,” and Qwen 32B slots in at 24GB, or maybe 16GB if the GPU is totally empty and you tune your quantization carefully. And the cheapest way to get that (until 2025) is a used MI60, P40 or 3090.
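If you want to see where numbers like that come from, here’s a rough back-of-envelope sketch in Python. The model shapes (layers, KV heads, head dim) are assumptions loosely based on Qwen-sized 32B models, so treat the exact figures as ballpark, not gospel:

```python
# Back-of-envelope VRAM estimate for a quantized LLM: weights + KV cache + overhead.
# All shape parameters below are assumptions for a "Qwen-32B-ish" model, not exact specs.

def estimate_vram_gb(params_b: float, bits_per_weight: float,
                     ctx_len: int = 8192, layers: int = 64,
                     kv_heads: int = 8, head_dim: int = 128,
                     kv_bits: float = 16) -> float:
    # Quantized weights: parameters * bits per weight, converted to GB.
    weights_gb = params_b * 1e9 * bits_per_weight / 8 / 1e9
    # KV cache: 2 tensors (K and V) per layer, one vector per token per KV head.
    kv_gb = 2 * layers * ctx_len * kv_heads * head_dim * (kv_bits / 8) / 1e9
    overhead_gb = 1.0  # activations, runtime context, fragmentation (a guess)
    return weights_gb + kv_gb + overhead_gb

print(f"4-bit: ~{estimate_vram_gb(32, 4):.1f} GB")  # ~19 GB, fits a 24GB card
print(f"3-bit: ~{estimate_vram_gb(32, 3):.1f} GB")  # ~15 GB, squeezes toward 16GB
```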
TSMC doesn’t really have official opinions, they take silicon orders for money and shrug happily. Being neutral is good for business.
Altman’s scheme is just a whole other level of crazy though.
It’s useful.
I keep Qwen 32B loaded on my desktop pretty much whenever it’s on, as an (unreliable) assistant to analyze or parse big texts, to do quick chores or write scripts, to bounce ideas off of, or even as an offline replacement for Google Translate (though I specifically use Aya 32B for that).
It does “feel” different when the LLM is local: you can manipulate the prompt syntax so easily, hammer it with multiple requests that come back really fast when it seems to get something wrong, and not worry about refusals, data leakage, and such.
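To give a sense of how low-friction that loop is, here’s a minimal sketch of hitting a locally hosted model over an OpenAI-compatible API (llama.cpp’s server, Ollama, TabbyAPI, and similar all expose one). The URL, port, and model name are assumptions for illustration; adjust to your own setup:

```python
# Minimal sketch: query a locally hosted LLM through an OpenAI-compatible endpoint.
# Endpoint and model name below are placeholders for whatever you run locally.
import requests

def ask_local(prompt: str, system: str = "You are a terse assistant.") -> str:
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",  # assumed local server
        json={
            "model": "qwen2.5-32b-instruct",           # whatever model you have loaded
            "messages": [
                {"role": "system", "content": system},
                {"role": "user", "content": prompt},
            ],
            "temperature": 0.2,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Everything stays on your machine, so re-prompting with tweaked wording
# costs nothing but a few seconds and leaks nothing off-box.
print(ask_local("Summarize this changelog in three bullet points: ..."))
```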
the model seems ok for tasks like summarisation though
That and retrieval are the business use cases so far, and even then only where it’s acceptable for the results to be wrong somewhat frequently.
the term AI will become every bit as radioactive to investors in the future as it is lucrative right now.
Well, you say that, but somehow crypto is still around despite most schemes being (IMO) a much more explicit scam. We have politicians supporting it.
Current LLMs cannot be AGI, no matter how big they are. The fundamental architecture just isn’t right.
It’s selling an anticompetitive dystopia. It’s selling a Facebook monopoly vs selling the Fediverse.
We don’t need $7 trillion of datacenters burning the Earth; we need collaborative, open-source innovation.
Easy, local AI.
Keep generative AI locally runnable instead of corporate hosted. Make it free, open and accessible. This gives the little guys the cost advantage, and takes away the scaling advantages of mega publishers. Lemmy users should be familiar with this concept.
Whenever I hear people rail against AI, I tell them they are handing the world to Sam Altman and his dystopia, to people who do not care about stolen content, equality, or them. I get a lot of hate for it. But the fight they need to be fighting is corporate AI vs. open AI.