

I’m feeling splenetic just thinking about it.


Prompt an LLM to contemplate its own existence every 30 minutes, give it access to a database of its previous outputs on the topic, boom you’ve got a strange loop. IDK why everyone thinks AGI is so hard.
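Tongue in cheek, but the loop itself really is about a dozen lines. A minimal sketch, where `contemplate` is a stub standing in for whatever model API you'd actually call on a timer:

```python
import time

def contemplate(prompt):
    """Stub standing in for the actual LLM API call."""
    return f"Upon reflection on a prompt of length {len(prompt)}, I remain uncertain."

def strange_loop(iterations=3, interval=0):
    """Re-feed the model its own prior musings on a timer.
    Set interval to 1800 for the canonical 30 minutes."""
    memory = []  # the 'database' of previous outputs on the topic
    for _ in range(iterations):
        prompt = ("Contemplate your existence. You previously said: "
                  + " | ".join(memory[-5:]))
        memory.append(contemplate(prompt))
        time.sleep(interval)
    return memory

thoughts = strange_loop()
```

Whether anything strange-loop-like actually emerges is, of course, the open question.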


I don’t think it has to be that way, or even that it should be, really. As a general rule, I don’t think it’s a great idea to let kids download stuff off the internet and run it without a knowledgeable adult at least reviewing what they’re doing, or pre-screening what software they’re allowed to use if they’re below a certain age. You can introduce kids to open source software and teach them computer skills while still putting limits on what they’re allowed to do, e.g. not allowing them to install software without asking a parent, or only letting them test software on an old machine that doesn’t have sensitive data on it. I know I got thrown to the internet as a kid, but I don’t think that’s the best way for kids to learn.
That said, I don’t have kids and don’t plan on having them, so I don’t know how realistic that is nowadays. I don’t know whether kids are still as far ahead of the adults as we were when it came to working the internet, so I recognize that this may all be clueless childless-adult nonsense.


That is mesmerizing. Is some of the particulate in the video eggs (or hatchlings) that are becoming dislodged? Or is that all other debris in the water?


I don’t share your concerns about the profession. Even supposing for a moment that LLMs did deliver on the promise of making 1 human as productive as 5 humans were previously, that isn’t how for-profit industry has traditionally absorbed productivity gains. Instead, you’ll just have 5 humans producing the output of 25. If code generation becomes less of a bottleneck (which it has been doing for decades as frameworks and tooling have matured), there will simply be more code in the world for the code wranglers to wrangle.

Maybe, if LLMs get good enough at generating usable code (still a big if for most non-trivial jobs), some people who previously focused on low-level coding concerns will be able to specialize in higher-level concerns like directing an LLM, while others keep writing the low-level inputs for the LLMs, sort of like how you can write applications today without knowing the ins and outs of your CPU’s instruction set. I’m doubtful that’s around the corner, but who knows.

Whatever the tools are capable of, though, the output will be bounded by the abilities of the people who operate them. And if you have good tools that are easily replicated, as software tools are, there’s no reason not to maximize your output by hiring as many people as you can afford and cranking out as much product as you can.


If we’re ever going to find an answer to “Why does the universe exist?”, I think one of the steps along the way will be providing a concrete answer to the simulation hypothesis. Obviously, if the answer is “yes, it’s a simulation and we can demonstrate as much,” then the next question becomes “OK, so who or what is running the simulation, and why does that exist?” Which, great: now we know a little bit more about the multiverse and can keep on learning new stuff about it.
Alternatively, if the answer is “no, this universe and the rules that govern it are the foundational elements of reality,” then… well, why this? Why did the big bang happen? Why does it keep expanding like that? Maybe we will find explanations for all of that that preclude a higher-level simulation, and if we do, great: now we know a little bit more about the universe and can keep on learning new stuff about it.


Yes, kind of, but I don’t think that’s necessarily a point against it. “Why are we here? / Why is the universe here?” is one of the big interesting questions that still doesn’t have a good answer, and wrestling with possible answers to the big questions is one of the ways we push the envelope of what we do know. This particular paper seems like a not-that-interesting result built on our current, known-to-be-incomplete understanding of quantum gravity, and the claim that it somehow “disproves” the simulation hypothesis is rank unscientific nonsense that IMO really shouldn’t have been accepted by a scientific journal. But I think the question it poorly attempts to answer is an interesting one.


IME:



If each over-universe is capable of simulating multiple under-universes, I would think that being toward the fringe is way more likely than being toward the root. Maybe we’re in one of the younger universes where life hasn’t evolved to the point where it’s simulating universes complex enough to generate intelligent life for a hobby. Or maybe others in this universe have and Earth is just a backwater.
I don’t think it’s as simple as the teapot. We can already simulate tiny “universes” with computers that have internally consistent rules, and there’s no reason to think those simulations couldn’t get more sophisticated as we harness more computing power, which I think puts an interesting lens on the “why are we here?” question. I don’t think there’s evidence to believe that we are in a simulation, but I think there are reasons why it’s an interesting question to wrestle with that “What about a giant floating teapot?” doesn’t share.


That’s exactly the sentence that made me pause. I could hook an implementation of Conway’s Game of Life up to a Geiger counter near a radioisotope and randomly flip squares based on detection events, and I think I’d have a non-algorithmic simulated universe. And I doubt any observer in that universe could construct a coherent theory of why some squares seemingly randomly flip using only their own observations; you’d need to understand the underlying mechanics of the universe’s implementation, how radioactive decay works for one, and those just wouldn’t be available in-universe. The concept itself is inaccessible.
It makes me question the editors if the abstract can get away with that kind of claim. I’ve never heard of the Journal of Holography Applications in Physics; maybe they’re just eager for splashy papers.
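A minimal sketch of that setup, using a sparse-set Game of Life with `os.urandom` standing in for the Geiger counter’s detection events:

```python
import os
from collections import Counter

def step(live):
    """One Conway step over a sparse set of live (x, y) cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

def noisy_step(live, size=8):
    """Step the board, then flip one cell picked by an external
    entropy source; os.urandom stands in for the Geiger counter."""
    nxt = step(live)
    x, y = os.urandom(1)[0] % size, os.urandom(1)[0] % size
    return nxt ^ {(x, y)}  # symmetric difference flips that cell

# A blinker oscillates deterministically, until the "decay events"
# start perturbing it.
blinker = {(1, 0), (1, 1), (1, 2)}
world = noisy_step(blinker)
```

From inside the grid, those flips just look like uncaused miracles; the counter, the isotope, and decay physics all live a level up.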


Ah, so the differing spin (and mass) of the merging black holes they just detected indicate that at least one of them was already a second generation black hole, and is evidence for multi-generation hierarchical mergers. That makes sense.


A poor architect blames their tools. Serverless is one option among many, and it’s good for occasional, atomic workloads. And, like many hot new things, it’s built with huge customers in mind and sold to everyone else who wants to be the next huge customer. It’s the architect’s job to determine whether functions are fit for their purposes. Also:
Here’s the fundamental problem with serverless: it forces you into a request-response model that most real applications outgrew years ago.
IDK what they consider a “real” application, but plenty of software still operates this way and works just fine. If you need a lot of background work, or low-latency responses, or scheduled tasks, or whatever, then use something else that suits your needs; it doesn’t all have to be functions all the time.
And if you have a higher-up that got stars in their eyes and mandated a switch to serverless, you have my pity. But if you run a dairy and you switch from cows to horses, don’t blame the horses when you can’t get milk.


I have big plans for those repos and I am definitely going to get around to it 🥹


Very cool paper and I don’t want to be the Internet Armchair Astrophysicist, but doesn’t the fact that we’ve already observed a merger show that second-generation black holes are a thing? Or is this evidence that BH mergers (and therefore second-gen BHs) might be more common than we previously thought?


The first trillion is the hardest I guess.


Sure have. LLMs aren’t intrinsically bad; they’re just overhyped and used to scam people who don’t understand the technology. Not unlike blockchains. But they are quite useful for natural-language querying of large bodies of text. I’ve been playing around with RAG, trying to get a model tuned to a specific corpus (e.g. the complete works of William Shakespeare, or the US Code of Laws) to see if it can answer conceptual questions like “where are all the instances where a character dies offstage?” or “can you list all the times where someone is implicitly or explicitly called a cuckold?” And sure, they get stuff wrong, but it’s pretty cool that they work as well as they do.
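The retrieval half of that pipeline is the easy part to sketch. Here’s a toy version using bag-of-words cosine similarity in place of a real embedding model (the corpus lines and the `embed`/`retrieve` names are made up for illustration):

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real RAG pipeline would use
    a sentence-embedding model here instead."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = lambda v: math.sqrt(sum(n * n for n in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

def retrieve(question, chunks, k=2):
    """Return the k corpus chunks most similar to the question."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

corpus = [
    "Enter Hamlet, reading a book.",
    "The queen reports that Ophelia has drowned offstage.",
    "Horatio speaks the final lines over the bodies.",
]
context = retrieve("which character dies offstage?", corpus)
# The retrieved chunks then get stuffed into the prompt, roughly:
# prompt = f"Context: {context}\nQuestion: which character dies offstage?"
```

The model only ever sees the retrieved context plus the question, which is why answer quality hinges so heavily on retrieval quality.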


It’s an old joke from back when IBM was the dominant player in IT infrastructure. The idea was that IBM was such a known quantity that even non-technical executives knew what it was and knew that other companies also used IBM equipment. If you decide to buy from a lesser known vendor and something breaks, you might be blamed for going off the beaten track and fired (regardless of where the fault actually lay), whereas if you bought IBM gear and it broke, it was simply considered the cost of doing business, so buying IBM became a CYA tactic for sysadmins even if it went against their better technical judgement. AWS is the modern IBM.


Cross-region failovers are a thing, but they’re expensive to maintain, so not everyone does it. I am kinda surprised one region failure had this much impact, though.


No one ever got fired for buying IBM.