OC below by @HaraldvonBlauzahn@feddit.org

What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.

Some people firmly believe LLMs are helpful. But programming is a logical task, and LLMs can't think - they only generate statistically plausible patterns.

The author of the article explains that this creates the same psychological hazards as astrology or tarot cards - traps that psychics have exploited for centuries, and that even very intelligent people can fall prey to.

Finally, what should cause alarm is that, on top of LLMs not being able to think while people behave as if they can, there is no objective, scientifically sound evidence that AI models help produce working software faster. Given the multi-billion-dollar investments, and that there has been more than enough time to run controlled experiments, this should set off loud alarm bells.

  • MTK@lemmy.world · 4 days ago

    That is actually missing an important issue: hallucinations.

    Copying from SO means you are copying from a human, who might be wrong or lying but rarely spews out plausible-sounding hot garbage (not never, though). And because of other users' votes, reputation, etc., you actually do end up with a decently reliable source.

    With an LLM you could get something made up, based on nothing related to the real world. The LLM might find your question to be outside its knowledge, but instead of realizing that, it would just make up whatever it thinks sounds convincing.

    It would be like if you asked me what that animal that is half horse and half donkey is called, and instead of saying “shit, I’m blanking” I would say “Oh, that is called a Drog” - and I couldn’t even tell you that I just made the word up, because I would now be convinced that it is factual. Btw, it’s “mule”.

    So there is a real difference, at least until we solve hallucinations - which right now doesn't seem solvable, only reducible to insignificance at best (maybe).
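
    (Tangent, but to make the “reduced to insignificance” part concrete: below is a minimal Python sketch of one commonly discussed mitigation, self-consistency sampling - ask the same question several times and refuse to answer when the model disagrees with itself. The `ask_model` stub is hypothetical; a real version would call an actual LLM API.)

    ```python
    # Minimal sketch of self-consistency sampling as a hallucination filter.
    # ask_model is a hypothetical stand-in for a real LLM call; here it just
    # simulates a model that usually answers correctly but sometimes invents
    # a word, like the "Drog" example above.

    import random
    from collections import Counter

    def ask_model(question: str) -> str:
        # Hypothetical stub: answers "mule" most of the time, "Drog" sometimes.
        return random.choices(["mule", "Drog"], weights=[0.8, 0.2])[0]

    def self_consistent_answer(question: str, samples: int = 5) -> str | None:
        # Ask the same question several times and keep the majority answer.
        answers = [ask_model(question) for _ in range(samples)]
        best, count = Counter(answers).most_common(1)[0]
        # If the model can't agree with itself, refuse rather than guess.
        return best if count > samples // 2 else None

    if __name__ == "__main__":
        q = "What is the animal that is half horse and half donkey called?"
        print(self_consistent_answer(q))  # usually "mule"; None on disagreement
    ```

    This reduces how often made-up answers slip through, but it can't catch a model that hallucinates the same wrong answer consistently - which is why it's a reduction, not a solution.)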