• 🌞 Alexander Daychilde 🌞@lemmy.world
      4 days ago

      The alternative is Facebook with lies that go unchecked completely. This is actually an area where AI is not bad.

      edit: sigh. Refusing to acknowledge where things can be useful. NO, ALL BAD. BAD BAD BAD! AI BAD! ALWAYS BAD! NO USE! NO GOOD! ONLY BAD! BAD BAD BAD! Such fucking blindness.

      • FreddyNO@lemmy.world
        4 days ago

        The system that is notorious for lying, being used for fact-checking. Yeah, maybe you should write “bad” in caps lock one more time; that will make you right.

        • mimavox@piefed.social
          5 days ago

          If it’s implemented the right way, it could be. AI can be used for good things, even if the knee-jerk reaction of so many people online is to equate it with crap.

          • LuceVendemiaire@lemmy.dbzer0.com
            5 days ago

            Recoiling upon smelling shit is also a knee-jerk reaction.

            It’s always the same bullshit: “if we just implemented this correctly.” Where can an AI participate in fact-checking? It can’t be trusted because of hallucinations, so the solution would be to, uh… manually review everything it does? Just rely on third parties to do it? What ACTUAL USE does this shit have?

          • Nalivai@lemmy.world
            4 days ago

            It couldn’t be. A lying, biased machine that gives people psychosis can’t magically stop being what it is. So it will always be unnecessary at best, and harmful as a rule.

      • LwL@lemmy.world
        4 days ago

        I doubt it, honestly. It’d probably catch a lot of misinfo, yes, but it would likely also classify any new findings that run counter to previous assumptions as misinfo, since LLMs can’t keep up to date. And they still have the same issue that whoever trains them gets to decide what is and isn’t misinfo, which becomes a real problem on a ubiquitous social media site.