Say what you will about Will Smith, but his movie I, Robot made a good point about this 17 years ago.
(damn I’m old)
What are you going to train it off of since basic algorithms aren’t sufficient? Past committee decisions? If that’s the case you’re hard coding whatever human bias you’re supposedly trying to eliminate. A useless exercise.
Nah bud, you just authorize whatever the doctor orders, because they are more knowledgeable about the situation.
That makes logical sense, but what about the numbers? They can’t go up if we keep spending the money we promised to spend on the 69th most effective and absolutely most expensive healthcare system in the world. What is this, an essential service? Rubes.
A slightly better metric to train it on would be chances of survival/years of life saved thanks to the transplant. However, those also suffer from human bias due to the past decisions that influenced who got a transplant and thus what data we were able to gather.
And we do that with basic algorithms informed by research. But then the score gets tied and we have to decide who has the greatest chance of following through on their regimen based on things like past history and means to acquire the medication/go to the appointments/follow a diet/not drink. An AI model will optimize that based on wild demographic data that is correlative without being causative and end up just being a black-box racist in a way that a committee that has to explain its thinking to other members couldn’t be, you watch.
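To make the “basic algorithms” part concrete, here is a toy sketch of a points-based allocation score; every field and weight below is invented for illustration, not any real allocation policy:

```python
# Toy points-based allocation score; all weights and fields are hypothetical.
def allocation_score(candidate: dict) -> float:
    score = 0.0
    score += candidate["waiting_time_years"] * 1.0           # time on the list
    score += candidate["hla_match_points"]                   # donor compatibility
    score -= candidate["predicted_graft_failure_risk"] * 10  # research-informed risk term
    return score

a = {"waiting_time_years": 3, "hla_match_points": 2, "predicted_graft_failure_risk": 0.2}
b = {"waiting_time_years": 2, "hla_match_points": 3, "predicted_graft_failure_risk": 0.2}
print(allocation_score(a), allocation_score(b))  # 3.0 3.0 -- a tie
# the tie is exactly where the committee's judgment (and the bias worry
# above) comes in: the formula itself has nothing left to say
```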
Let’s get more kidneys out there instead with tax credits for donors.
I don’t mind AI. It is simply a reflection of whoever is in charge of it. Unfortunately, we have monsters who direct humans and AI alike to commit atrocities.
We need to get rid of the demons, else humanity as a whole will continue to suffer.
> a reflection of who is in charge of it
not even that. it’s an inherently more regressive version of whatever data that person feeds it.
there are two arguments for deploying this shit outside of very narrow laboratory uses (where everyone was already using other statistical models):
A. this is one last grasp at fukuyama’s ‘end of history’, one last desperate scream of the liberal order that they want to be regressive shit heads and build the abdication machine as their grand industrial-philosophical project, so they can do whatever horrible shit they want, and claim that they’re still compassionate and only doing it because computer said so.
B. this is a project by literal monarchists. people who wish to kill democracy. to murder truth and collaboration; replace it with blind tribalistic loyalty to a fuhrer/king. the rhetoric coming from a lot of the funders of these things supports this.
this technology is existentially evil, and will be the end of our society either way. it must be stopped. the people who work on it must be stopped. the people who fund it must be hanged.
I mean yes, but it can be VERY useful in these narrow laboratory use cases
im skeptical but open to that. it’s just that these models are being pushed into literally everything, to the point they’re hard to avoid. I can’t think of another kind of specialized lab tool that has had that done. I do not own, nor have I ever owned, a sample centrifuge. I don’t have CRISPR tools. I have never, outside of academic settings, opened wolfram alpha on my home computer. even AUTOCAD and solidworks are specialist tools, and I haven’t touched any version of either in years.
because these models, while not good for anything anyone should ever actually want outside a lab setting, are also very very good for fascism. they do everything a fascist needs to, aside from the actual physical killing.
and I don’t think the level of development and deployment that these tools get, along with the wildly inflated price of the hardware to run them (or anything else) and death of web search, the damage to academic journals, etc, is a net benefit. even to specialized researchers who have uses for specialized versions of them as the statistical tool that they are. certainly not to the fields over the long term.
Why shouldn’t they have long term benefits for researchers?
Reminds me a bit of when CRISPR got big; people were worried to no end about potential dangers, designer babies, bioterrorism (“everybody can make a killer virus in their garage now”), etc. In reality, it has been a huge leap forward for molecular biology and has vastly helped research, cancer treatment, drug development and many other things. I think machine learning could have a similar impact. It’s already being used in development of new drugs, genomics, and detection of tumours, just to name a few.
because murdering truth is not good for science. fascism is not good for science funding. researchers use search engines all the time. academia is struggling with a LLM fraud problem.
If it wasn’t exclusively used for evil it would be a wonderful thing.
Unfortunately we also have capitalism. So everything has to be just the worst all the time so that the worst people alive can have more toys.
Thing is, those terrible people don’t enjoy everything they already own, and don’t understand that they are killing cool things in the crib. People invent and entertain if they can… because it is fun, and they think they have neat things to show the world. Problem is, prosperity is needed to give people the luxury of trying to create.
The wealthy are murdering the golden geese of culture and technology. They won’t be happier for it, and will simply use their chainsaw to keep killing humanity in a desperate wish of finding happiness.
Transplant Candidates:
Black American Man who runs a charity: Denied ❌️
President: Approved ✅️
All Hail President Underwood
I still remember “death panels” from the Obama era.
Now it’s ai.
Whatever.
Everything Republicans complained about can be done under Trump, twice as bad and twice as evil, and they will be “happy” and sing his praises.
That’s not what the article is about. I think putting some more objectivity into the decisions you listed, for example, benefits the majority. Human factors will lean toward minority factions consisting of people with wealth, power, or similar race, judged by how “nice” they might be or how many vocal advocates they might have. This paper just states that current AIs aren’t very good at what we would call moral judgment.
It seems like algorithms would be the most objective way to do this, but I could see AI contributing by looking for more complicated outcome trends. E.g.: hey, it looks like people with this gene mutation and chronically uncontrolled hypertension tend to live less than 5 years after cardiac transplant; consider adjusting the weighting of your existing algorithm by 0.5%.
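Concretely, that kind of AI contribution could enter as nothing more than a reviewable multiplier on the existing score; the names and numbers below are made up to illustrate the shape of it:

```python
# Hypothetical illustration: an ML-flagged trend becomes a small,
# human-reviewed adjustment to a rule-based score, not an opaque decision.
def adjusted_score(base_score: float, candidate: dict, flagged_penalties: dict) -> float:
    for factor, penalty in flagged_penalties.items():
        if candidate.get(factor, False):
            base_score *= (1.0 - penalty)   # small downweight per flagged factor
    return base_score

# the comment's example: gene mutation + uncontrolled hypertension -> 0.5% downweight
flagged = {"mutation_with_uncontrolled_htn": 0.005}
print(adjusted_score(100.0, {"mutation_with_uncontrolled_htn": True}, flagged))  # 99.5
```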
Creatinine was used as a measure of kidney function for literal decades, even though African Americans tend to have higher baseline levels for the same actual kidney function, so creatinine-based formulas made their kidneys look better than other markers showed. Creatinine level is/was a primary determinant of transplant eligibility. Only a few years ago did some hospitals start to use inulin clearance, which is a more race- and gender-neutral measurement of kidney function.
No algorithm matters if the input isn’t comprehensive enough, and cost-effective biological testing is not.
Well yes. Garbage in garbage out of course.
That’s my point: this is real-world data, it’s all garbage, and no amount of LLM rehashing fixes that.
Sure. The goal here is more perfect, not perfect.
Though those complicated outcome trends can have issues with things like minorities having worse health outcomes due to a history of oppression and poorer access to healthcare. It will definitely need humans overseeing it, because health data can be misleading if you look purely at the numbers.
I wouldn’t say definitely. AI is subject to bias of course as well based on training, but humans are very much so, and inconsistently so too. If you are putting a liver in a patient that has poorer access to healthcare, they are less likely to have as many life years as someone that has better access. If that correlates with race, is this the junction where you want to make a symbolic gesture about equality by using that liver in a situation where it is likely to fail? Some people would say yes. I’d argue that those efforts towards improved equality are better spent further upstream. Gets complicated quickly; if you want it to be objective and scientifically successful, I think the less human bias the better.
I agree with you but also
> It seems like algorithms would be the most objective way to do this
Algos are just another tool corpos and owners use to abuse. They are not independent; they represent the interests of their owners, and they oppress the peon class.
Yep, basically. How it’s gonna go: instead of basing the transplant triage on morals, priority and the respect of human life as being priceless and equal, the AI will base it on your occupation within society, age, sex and how much money you make for the rich overlords if you recover. Fuck that noise.
That’s kinda how it already works we just need to optimize it even more to ensure that only the best people get the organs
That is not how it “basically works” where I live; doctors don’t care about what I do for a living or how much money I have, they just treat me like everyone else. The triage is by priority (as in emergency and compatibility of the organ). If they used AI, it wouldn’t be for the choice itself, but for keeping track of the waiting list. The AI itself choosing based on criteria like age, sex, race, work or culture would be unethical.
Everyone likes to think that AI is objective, but it is not. It is biased by its training which includes a lot of human bias.
The kidney would still be transplanted at the end, be the decision made by human or AI, no?
What’s with the Hewlett Packard Enterprises badging at the top?
I don’t really see how a human denying you a kidney is better than an AI doing it.
It’s not like it’s something that makes more or less kidneys available for transplant anyway.
Terrible example.
It would have been better to make an example out of some other treatment that does not depend on finite resources but only on money. Still, a human is already rejecting your needed treatments without the need of an AI, but at least that example would make some sense.
In the end, as always, the people who have chosen AI as the “enemy” have not understood anything about the current state of society and how things work. Another example of how picking the wrong fights is a path to failure.
AI would be fine. we do not have artificial intelligence. full stop. none of the technologies being talked about even approach intelligence. it’s literally just autocorrect. do you know how the autocorrect on your phone’s software keyboard works? then you know how a large language model works. it’s exactly the same formulae, just scaled up and recursed a bunch. I could have endless debates about what ‘intelligence’ is, and I don’t know that there’s a single position I would commit to very hard, but I know, dead certain, that it is not this. turing and minsky agreed when they first threw this garbage away in 1951-too many hazards, too few benefits, and insane unreasonable costs.
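here’s the toy version of that claim, a next-word predictor built from nothing but frequency counts. the real models swap the lookup table for a neural net with billions of trained weights, but the job (predict the next token from the previous ones) is the same shape:

```python
# toy bigram "autocorrect": count which word follows which, then sample
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat the cat ate the rat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1            # how often nxt follows prev

def next_word(prev: str) -> str:
    counts = follows[prev]
    if not counts:                     # dead end: last word of the corpus
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

word, out = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    out.append(word)
print(" ".join(out))                   # e.g. "the cat sat on the mat"
```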
but there’s more to it than that. large (whatever) models are inherently politically conservative. they are made of the past, they do not struggle, they do not innovate, and they do not integrate new concepts, because they don’t integrate any concepts, they just pattern match. you cannot have social progress when decisions are made by large (whatever) models. you cannot have new insights. you cannot have better policies, you cannot improve. you can only cleave closer and closer to the past, and reinforce it by feeding it its own decisions.
It could perhaps be argued, in a society that had once been perfect and was doing pretty well, that this is tolerable in some sectors, as long as someone keeps an eye on it. right now we’re a smouldering sacrifice zone of a society. that means any training data would be toxic horror or toxic horror THAT IS ON FIRE. this is bad. these systems are bad. anyone who advocates for these systems outside extremely niche uses that probably all belong in a lab is a bad person.
and I think, if that isn’t your enemy, your priorities are deeply fucked, to the point you belong in a padded room or a pine box.
Autocorrect what the fuck? Models inherently conservative, wtf?
You show a vast lack of knowledge. Probably your source of information is just propaganda.
I know it’s an easy fight to pick. A trending dogma that is easy to support. You don’t really need to think; you just got pointed at an easy enemy that’s easy to identify and easy to be against, and you follow that.
But the true enemy is not there.
Your heart is probably in the right place. But wasting your strength fighting something useless is an incredible waste of resources and spirit. You’ll achieve nothing, while the true enemy (human beings who don’t care whether AI is a success or not) will keep laughing at you.
They have been oppressing you since before electricity. If you think AI is a tool needed for oppression you are deeply wrong.
WTF
okay then. you tell me what these models are. at a technical level. how they work. what do they actually DO? do you know? are you just shilling? can you cite any sources? I’ll show you mine of slightly better quality, if you show me yours. don’t be shy, babe. hit me with that academic good-good. don’t worry, im currently bedbound and have (some) journal access.
> it’s an easy fight to pick
no it’s not. i’ve been saying this for years, and the constant failure of these systems to have a single god damn utility for the average person while absolutely ruining every part of the internet we used to treasure as they’re shoved into everything in an enshittification push we couldn’t have imagined fifteen years ago is only just starting to convince people. fuck your fascist nonsense.
> but the enemy is not there
they literally are. peter thiel’s founders fund, elon musk, and every other billionaire, even the less explicitly fascist ones, are shoving this wildly unprofitable technology into everything. they literally lose money on every query. massive amounts of our productive capacity have been dedicated to making the hardware these run on. countries are having brownouts and water shortages to run that fucking hardware. I actually used to enjoy video games, but I haven’t been able to buy a new graphics card in like a decade. why, if not this? if peter thiel is not your enemy, then you are either ignorant, or you are my enemy. if elon musk is not your enemy by march of 2025, then you either just woke up from a very long coma, or you are my enemy.
> something useless
then why are we putting so many fucking resources towards it? you don’t see me fighting against minecraft or fucking roblox. im not ready to riot about adobe’s subscription model. I have a closet full of pretty clothes that are wildly impractical in my current climate.
> waste of resources and spirit
feels like it sometimes, when people shill for this crap day and night, but none of them ever seem to know what the shit they’re talking about.
> the true enemy, they have been oppressing you since before electricity
who is ‘they’? because we both agree that people have been largely oppressed since before electricity, but i get the feeling that when you say that, you mean ‘the jews’ and not ‘the parasitic nobility who used them as a middleman to get around a technicality to do something their religion explicitly forbade while giving them a sacrificial firewall they could slaughter every few decades in a good pogrom whenever the peasants got too upset about being exploited, plus their modern counterparts’.
If you talk like that no one is going to want to talk with you.
What the hell did you just write, accusing me of antisemitism?
It’s really hard to even understand what you are talking about, really.
The sacrifices? The slaughter? The jews? The nobility? Shilling? Pogroms? Roblox Minecraft and graphics cards? A supposedly academic level knowledge of LLM but calling them autocorrect?
I’m not going to follow this conversation. That’s just my decision right now.
fun how you didn’t respond to a single one of my points. because you can’t.
I stopped reading almost at ‘full stop’. I really stopped at ‘literally’. At that point I’m making too many assumptions and it’s nothing I can touch objectively.
He did about 4 walls of text after, and it sounds like you went down some serious rabbit holes. So you think you may have drifted a little into the paranoid ‘othering’ lane?
Anyway, cut him some slack and try with someone else. Next time for sure!
it’s a known nazi dog whistle, dude.
I don’t know how much time you’ve spent around or in the milieu of silicon valley, but there’s a LOT of fascist shit there. all the major funders of this technology are literally fascists. many of them sieg-heiling nazis, or assholes who talk openly about the need for a fuhrer/king. this is not secret. one of them, without this technology, has helped multiple far right governments into power, aided multiple genocides, and turned all of your older relatives into nazis. one of them is known as ‘the vampire of silicon valley’; the guy who invented the whole ‘blood boy’ thing (yes that’s real) and has dedicated his vast vast fortune to ending democracy. he likes to name his shit after stuff from LOTR; the first big one was ‘palantir’. look up what those are for in the books. since its modern resurgence, most of the investments from his VC fund have been in this technology.
Responsibility. We’ve yet to decide as a society how we want to handle who is held responsible when the AI messes up and people get hurt.
You’ll start to see AI being used as a defense of plausible deniability as people continue to shirk their responsibilities. Instead of dealing with the tough questions, we’ll lean more and more on these systems to make it feel like it’s outside our control so there’s less guilt. And under the current system, it’ll most certainly be weaponized by some groups to indirectly hurt others.
“Pay no attention to that man behind the curtain”
Software has been involved in decision-making for decades.
Anyway, the person truly responsible for a denial of medical treatment has never been held accountable (except by our angel Luigi), whether AI was used or not.
Hasn’t it been demonstrated that AI is better than doctors at medical diagnostics, and that we don’t use it only because hospitals would have to take the blame if the AI fucks up, whereas they can just fire a doctor who fucks up?
I believe a good doctor, properly focused, will outperform an AI. AI are also still prone to hallucinations, which is extremely bad in medicine. Where they win is against a tired, overworked doctor with too much on his plate.
Where it is useful is as a supplement. An AI can put a lot of seemingly innocuous information together to spot more unusual problems. Rarer conditions can be missed, particularly if they share symptoms with more common problems. An AI that can flag possibilities for the doctor to investigate would be extremely useful.
An AI diagnostic system is a tool for doctors to use, not a replacement.
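A minimal sketch of that flag-for-review pattern; the model object, its interface, and the threshold here are hypothetical placeholders, not any real diagnostic system:

```python
# Hypothetical flag-for-review triage: the model surfaces possibilities,
# the doctor makes the actual call.
REVIEW_THRESHOLD = 0.15   # deliberately low: cheap to review, costly to miss

def triage_for_review(cases, model):
    flagged = []
    for case in cases:
        p = model.predict_proba(case)      # assumed interface of a trained classifier
        if p >= REVIEW_THRESHOLD:
            flagged.append((case, p))      # queued for the doctor, not auto-decided
    return sorted(flagged, key=lambda pair: -pair[1])

class _StubModel:                          # stand-in so the sketch runs
    def predict_proba(self, case):
        return case.get("suspicious_score", 0.0)

cases = [{"id": 1, "suspicious_score": 0.4}, {"id": 2, "suspicious_score": 0.05}]
print(triage_for_review(cases, _StubModel()))   # only case 1 gets flagged
```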
Studies have also shown that doctors using AI don’t do better than doctors alone, but AI on its own does. Although that result is attributed to the doctors not knowing how to use ChatGPT.
Do you have a link to that study? I’d be interested to see what the false positive/negative rates were. Those are the big dangers of LLMs being used, and why a trained doctor would still be needed.
It is better at simple pattern recognition, but much worse at complex diagnoses.
It is useful as a help to doctors but won’t replace them.
As an example, it can give you a good prediction on who likely has lung cancer out of thousands of CT images. It will completely fuck up prognoses and treatment recommendations though.
has it? source?
you’re not gonna get one.
Yeah, I’d much rather have random humans I don’t know anything about making those “moral” decisions.
If you’ve already answered “No,” you may skip to the end.
So the purpose of this article is to convince people of a particular answer, not to actually evaluate the arguments pro and con.
The death panels Republican fascists claim Democrats were doing are now here, and it’s being done by Republicans.
I hate this planet
“Treatment request rejected, insufficient TC level”
A Voyager reference out in the wild! LMAO
Had to be done. It’s just too damn close not to.
Yeah. It’s much more cozy when a human being is the one that tells you you don’t get to live anymore.
Human beings have a soul you can appeal to?
Not every single one, but enough.