• ZILtoid1991@lemmy.world · +8 · 12 hours ago

    AI can do the heavy lifting, but it must not be treated as an infallible machine that can do no wrong unless it outright malfunctions; otherwise we get yet another YouTube, Twitch, etc.

  • Jakeroxs@sh.itjust.works · +13/−1 · 1 day ago

    I think using LLMs to HELP with moderation makes sense. The problem with all these companies is that they appear to think it’ll be perfect, so they lay off all the humans.

      • Pyr_Pressure@lemmy.ca · +1 · 7 hours ago

        I mean, what people refer to as AI today isn’t really synonymous with actual AI

        It’s been cheapened

        • Opinionhaver@feddit.uk · +1 · 7 hours ago

          I don’t think it’s that. LLMs very much are actual AI. Most people just take the term to mean more than it actually does. A simple chess engine is an AI as well.

    • Obelix@feddit.org · +4 · 1 day ago

      Yeah, LLMs could really help. Other tools without AI are also helpful. The problem with all those companies is that they don’t want to do moderation for the public good at all. Reddit could kill a lot of fake news on its platform, prevent reposts of revenge porn, or kick idiots just by implementing a few rules. They don’t want to.

  • Xanza@lemm.ee · +73 · 2 days ago

    Why don’t we get AI to moderate Alexis? He stopped being relevant 10 years ago.

  • regrub@lemmy.world · +46 · 2 days ago

    Only if the company using the AI is held accountable for what it does/doesn’t moderate

  • masterofn001@lemmy.ca · +24/−1 · edited · 2 days ago

    No.

    It is simple enough as it is to confuse AI or to make it forget or work around its directives. Not the least of the concerns would be malicious actors such as Musk censoring our thoughts.

    AI is not something humanity should, in any way, be subjugated by or subordinate to.

    Ever.

    • Ledericas@lemm.ee · +3 · 2 days ago

      Their aggressive autoban is catching everyone, whether you actually ban-evaded or not, though not in large numbers.

  • Opinionhaver@feddit.uk · +12/−5 · 1 day ago

    I couldn’t agree more. Human moderators, especially unpaid ones, simply aren’t the way to go, and Lemmy is a perfect example of this. Blocking users and communities and using content filters works to some extent, but it’s an extremely blunt tool with a ton of collateral damage. I’d much rather tell an AI moderator what I am and am not interested in seeing, and have it analyze the content to decide what gets filtered out.

    Take this thread for example:

    “Cool. I think he should piss on the 3rd rail.”

    “This pukebag is just as bad as Steve. Fuck both of them.”

    “What a cunt.”

    How else is anyone going to filter out zero-value hateful content like this without an intelligent moderation system? People are coming up with new insults faster than I can add them to the filter list. AI could easily filter out 95% of toxic content like this.
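
    For what it’s worth, this kind of filtering is already buildable with off-the-shelf classifiers. A minimal sketch in Python, assuming the open-source Detoxify library (pip install detoxify); the 0.6 cutoff is just an illustrative choice, not a tuned value:

    ```python
    # Minimal sketch: hide comments whose predicted toxicity exceeds a
    # user-chosen threshold. Assumes the open-source Detoxify library;
    # the 0.6 default is illustrative.
    from detoxify import Detoxify

    model = Detoxify("original")  # pretrained multi-label toxicity classifier

    def is_toxic(comment: str, threshold: float = 0.6) -> bool:
        """Flag a comment whose toxicity score exceeds the threshold."""
        scores = model.predict(comment)  # dict of label -> probability
        return scores["toxicity"] >= threshold

    comments = [
        "Thanks, that workaround fixed it for me.",
        "What a cunt.",
    ]
    print([c for c in comments if not is_toxic(c)])  # the insult is filtered out
    ```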

    • Viri4thus@feddit.org · +4/−2 · 1 day ago

      Translation: an AI might let me build an echo chamber, since human moderators won’t work for me for free.

    • MissGutsy@lemmy.blahaj.zone · +3 · 1 day ago

      Interesting fact: many bigger Lemmy instances are already using AI systems to filter out dangerous content in pictures before they even get uploaded.

      Context: Last year there was a big spam attack of CSAM and gore on multiple instances. Some had to shut down temporarily because they couldn’t keep up with moderation. I don’t remember the name of the tool, but some people made a program that uses AI to try and recognize these types of images and filter them out. This heavily reduced the amount of moderation needed during these attacks.
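
      A rough sketch of how such a pre-upload check can work (a generic example, not the specific tool mentioned above; it assumes Hugging Face transformers and the publicly available Falconsai/nsfw_image_detection model, and a real deployment would also hash-match known material):

      ```python
      # Rough sketch: classify an image before accepting the upload.
      # The model name and threshold are illustrative assumptions.
      from transformers import pipeline

      classifier = pipeline("image-classification",
                            model="Falconsai/nsfw_image_detection")

      def allow_upload(image_path: str, threshold: float = 0.8) -> bool:
          """Reject the upload if the classifier flags the image."""
          results = classifier(image_path)  # list of {"label": ..., "score": ...}
          nsfw = next((r["score"] for r in results if r["label"] == "nsfw"), 0.0)
          return nsfw < threshold

      # Hypothetical file name; flagged images never reach human moderators.
      print("accepted" if allow_upload("upload.jpg") else "rejected")
      ```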

      Early AI moderation systems are actually something more platforms should use. Human moderators, even paid ones, shouldn’t have to go through large amounts of violent content every day. Moderators at Facebook have been making these points for a while now; many of them have developed mental health issues through their work and don’t get any medical support. So no matter what you think of AI and whether it’s moral, this is actually one of the few good applications, in my opinion.

      • mPony@lemmy.world · +3 · 1 day ago

        Moderators at Facebook have been making these points for a while now; many of them have developed mental health issues through their work and don’t get any medical support

        How in the actual hell can Facebook not provide medical support to these people, after putting them through actual hell? That is actively evil of them.

        • boonhet@lemm.ee · +1 · 1 day ago

          The real answer? They use people in countries like Nigeria that have fewer laws.

        • MissGutsy@lemmy.blahaj.zone · +1 · 1 day ago

          I agree, but it’s also not surprising. I think somebody else posted the article about Kenyan Facebook moderators in this comment section somewhere if you want to know more.

    • Womble@lemmy.world · +2/−1 · edited · 1 day ago

      Look, Reddit bad, AI bad. Engaging with anything more than the most surface-level reactions is hard, so why bother?

      At a recent conference in Qatar, he said AI could even “unlock” a system where people use “sliders” to “choose their level of tolerance” about certain topics on social media.

      That combined with a level of human review for people who feel they have been unfairly auto-moderated seems entirely reasonable to me.
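
      The slider idea maps fairly directly onto per-category classifier thresholds. A sketch of one way it could work (reusing the Detoxify model from the sketch earlier in the thread; the slider values are invented examples):

      ```python
      # Sketch of per-user "tolerance sliders": hide content when any
      # category score exceeds that user's setting.
      from detoxify import Detoxify

      model = Detoxify("original")

      # 0.0 = hide almost everything in this category, 1.0 = show everything.
      user_sliders = {"toxicity": 0.7, "insult": 0.5, "threat": 0.1}

      def passes_sliders(comment: str, sliders: dict[str, float]) -> bool:
          """Show the comment only if every category stays under its slider."""
          scores = model.predict(comment)
          return all(scores[cat] <= limit for cat, limit in sliders.items())

      print(passes_sliders("I think he should piss on the 3rd rail.", user_sliders))
      ```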

  • billwashere@lemmy.world · +16 · 2 days ago

    Why would anybody even slightly technical ever say this? Has he ever used what passes for AI? I mean, it’s a useful tool with some giant caveats, as long as someone is fact-checking it and holding its hand. I use it daily for certain things. But it gets stuff wrong all the time. And not just a little wrong; I mean batshit crazy wrong.

    Any company that is trying to use this technology to replace actually intelligent people is going to have a really bad time eventually.

    • alcoholic_chipmunk@lemmy.world · +6 · 2 days ago

      “Hey as a social media platform one of your biggest expenses is moderation. Us guys at Business Insider want to give you an opportunity to tell your investors how you plan on lowering that cost.” -Business Insider

      “Oh great thanks. Well AI would make the labor cost basically 0 and it’s super trendy ATM so that.” -Reddit cofounder

      Let’s be real here: the goal was never good results, it was to get the cost down so low that you no longer care. Probably eliminates some liability too, since it’s a machine.