• jubilationtcornpone@sh.itjust.works
    10 days ago

    That is one bullshit headline. Forbes is keeping the AI pump-and-dump scheme going.

    TLDR: People correctly discerned that written responses were from an “AI” chatbot slightly less often than they correctly discerned that responses were from a psychotherapist.

    “AI” cannot replace a therapist and hasn’t “won” squat.

    • asap@lemmy.world
      9 days ago

      A bit disingenuous not to mention this part:

      Further, participants in most cases preferred ChatGPT’s take on the matter at hand. That was based on five factors: whether the response understood the speaker, showed empathy, was appropriate for the therapy setting, was relevant for various cultural backgrounds, and was something a good therapist would say.

      • PapstJL4U@lemmy.world
        9 days ago

        Patients saying they liked what they heard - not whether it was correct or relevant to their situation. There isn't even a pipeline for escalation, because AIs don't think.