The article explains the general problem using the example of software development. But given that AI models are heavily promoted by billion-dollar US companies, and that important actors in that space are not at all friendly to the European Union, I think the relevance may be far broader.

Generally, the article explains that judging the usefulness of AI models, specifically LLMs, by trying them out is very prone to the same psychological traps as astrology, tarot cards, or psychics - the so-called Barnum effect. This is precisely because these models are carefully engineered to produce plausible-sounding answers! And even very intelligent but unaware people can easily fall prey to it.

  • HaraldvonBlauzahn@feddit.org (OP) · 6 days ago

    What caught my attention is that assessments of AI are becoming polarized and something of a matter of belief.

    Some people firmly believe LLMs are helpful. But tasks like programming are logical tasks, and LLMs absolutely can’t think - they only generate statistically plausible patterns (see the toy sketch at the end of this comment).

    The author of the article explains that this creates the same psychological hazards as astrology or tarot cards - traps that have been exploited by psychics for centuries - and even very intelligent people can fall prey to them.

    Finally, what should cause alarm is that, on top of LLMs not being able to think while people behave as if they do, there is no objective, scientifically sound examination of whether AI models help create working software faster. Given the multi-billion-dollar investments, and that there has been more than enough time to carry out controlled experiments, this should set off loud alarm bells.
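    To make “statistically plausible patterns” concrete, here is a toy sketch of my own (an illustration, not how any real model is implemented): a bigram sampler that picks continuations purely by observed word frequency. LLMs do the same thing at enormous scale with learned neural weights instead of count tables, but the output is still a sample from a probability distribution, not the product of reasoning.

    ```python
    # Toy next-word sampler (hypothetical counts, not a real model).
    # Real LLMs replace the count table with a neural network, but the
    # step is the same: sample a statistically plausible continuation.
    import random

    # Hypothetical bigram counts "learned" from some corpus
    bigram_counts = {
        "the": {"code": 7, "bug": 2, "answer": 1},
        "code": {"compiles": 5, "fails": 3, "is": 2},
    }

    def next_word(word: str) -> str:
        """Sample a continuation in proportion to its observed frequency."""
        candidates = bigram_counts[word]
        return random.choices(list(candidates),
                              weights=list(candidates.values()), k=1)[0]

    print("the", next_word("the"))  # plausible, e.g. "the code" - but not reasoned
    ```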

  • Tar_Alcaran@sh.itjust.works · 6 days ago

    Here’s a big important test you can use to see if something is actually useful and effective.

    Pick a random tool and ask all your friends and neighbors what they last used it for. “Hey Bob, I was wondering, what was the last thing you used your belt sander/hammer/paintbrush for?” You’ll probably get a very accurate answer about something that needed doing: “Oh, I had to sand down the windowsill because the paint was cracked” or “I tightened the screws on my coffee table.”

    Now do the same for AI.


    The big problem with asking if AI is useful is that people suck at figuring out how to do someone else’s work, but they’ve got a pretty good idea what their own work is like. As a result, it’s very easy to think that AI can do someone else’s job; but for YOUR job, the one you actually understand, you can easily see what bullshit AI spouts and how it misses all the important bits.

    Sure, if your idea is that “programmers write code”, then yeah, AI can do that. Similarly, “authors write stories” is true, and AI can write stories. But if you know even slightly more, you realize that programmers only write code maybe 10% of the time, and authors probably spend less than 10% of their time putting down words. The job is about structuring and planning and laying out; the typing is just the final details.

    But if you understand fuckall about a job, then yeah AI can definitely generate stuff that looks like other stuff, because it’s a machine specifically designed to make stuff that looks like other stuff.

    • Jesus_666@lemmy.world · 6 days ago

      I actually have a good answer, but in my case it’d be “I wanted to know what that one plant I saw was.” AI-based pattern matching to identify plant or animal species is pretty handy (rough sketch below).

      It’s also way more sensible than trying to use text generation for anything useful.
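      As a rough illustration of that kind of pattern matching, here is a minimal sketch (my own, assuming torchvision’s off-the-shelf ImageNet weights and a hypothetical “photo.jpg”; real plant-ID apps use models trained on dedicated species datasets):

      ```python
      # Classify an image with a pretrained network (torchvision >= 0.13).
      # ImageNet covers only a handful of plant classes - this shows the
      # pattern-matching principle, not a working plant identifier.
      import torch
      from PIL import Image
      from torchvision.models import resnet50, ResNet50_Weights

      weights = ResNet50_Weights.DEFAULT
      model = resnet50(weights=weights).eval()
      preprocess = weights.transforms()

      img = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # add batch dim
      with torch.no_grad():
          probs = model(img).softmax(dim=1)[0]

      top5 = probs.topk(5)
      for p, idx in zip(top5.values.tolist(), top5.indices.tolist()):
          print(f"{weights.meta['categories'][idx]}: {p:.1%}")
      ```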

    • markovs_gun@lemmy.world · 5 days ago

      This is a mischaracterization of how AI is used for coding and how it can lead to job loss. The use case is not “have the AI develop apps entirely on its own”; it’s “allow one programmer to do the work of three programmers by using AI to write or review portions of code” and “allow people with technical knowledge who are not skilled programmers to write code that’s good enough without the need for dedicated programmers.” Some companies are trying to do the first one, but almost everyone is doing the second one, and it actually works. That’s how AI leads to job loss: a team of three programmers can do what used to take a team of ten or so.
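      In practice, the “review portions of code” workflow often looks something like this sketch (illustrative only: it assumes the openai Python SDK v1.x, an OPENAI_API_KEY in the environment, and a model name picked purely for the example):

      ```python
      # Hedged sketch of an LLM-assisted code review step, not a
      # recommended setup - model choice and prompt are assumptions.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      def review_diff(diff: str) -> str:
          """Ask a chat model for review comments on a unified diff."""
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # hypothetical choice
              messages=[
                  {"role": "system",
                   "content": "You are a code reviewer. Point out bugs, "
                              "style issues, and missing edge cases."},
                  {"role": "user", "content": diff},
              ],
          )
          return response.choices[0].message.content
      ```

      A human still has to judge whether the comments are correct, which is exactly where the Barnum effect discussed above comes back in.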