Gaywallet (they/it)

I’m gay

  • 19 Posts
  • 9 Comments
Joined 3 years ago
Cake day: January 28th, 2022

  • Ethically speaking, we should not be experimenting on humans, even with their explicit consent. No credible review board (such as an institutional review board, or IRB) would allow it, and in many countries you can be held legally liable for running experiments on humans.

    With that being said, there have been exceptions. Some countries allow unproven treatments to be given to terminal patients (patients who are going to die from a condition). We also generally don’t punish people who experiment on themselves, because they are perhaps the only ones capable of truly weighing the pros and cons, of not being misled by figures of authority (although I do think there is merit in discussing how peer influence complicates this), and they are the only ones for whom consent cannot be misconstrued.

  • This isn’t just about GPT. Of note, one example from the article:

    The AI assistant conducted a Breast Imaging Reporting and Data System (BI-RADS) assessment on each scan. Researchers knew beforehand which mammograms had cancer but set up the AI to provide an incorrect answer for a subset of the scans. When the AI provided an incorrect result, researchers found inexperienced and moderately experienced radiologists dropped their cancer-detecting accuracy from around 80% to about 22%. Very experienced radiologists’ accuracy dropped from nearly 80% to 45%.

    In this case, researchers manually spoiled the results of a non-generative AI designed to highlight areas of interest. Being presented with incorrect information reduced the radiologists’ accuracy. This kind of bias is important to highlight, and it is critically important when we discuss when and how to ethically introduce any form of computerized assistance into healthcare.