• FauxLiving@lemmy.world · +57/-3 · 5 hours ago (edited)

    This research is good, valuable, and desperately needed. The uproar online is predictable and could help bring attention to the issue of LLM-enabled bots manipulating social media.

    This research isn’t what you should get mad at. It’s pretty common knowledge that Reddit is dominated by bots: advertising bots, scam bots, political bots, etc.

    Intelligence services of nation states and political actors seeking power are all running these kinds of influence operations on social media, using bot posters to dominate the conversations about the topics they care about. This is common knowledge in social media spaces. Go to any politically charged thread on international affairs and you will notice that something seems off. It’s hard to say exactly what it is… but if you’ve been active online for a long time, you can recognize that something is wrong.

    We’ve seen how effective this manipulation is on changing the public view (see: Cambridge Analytica, or if you don’t know what that is watch ‘The Great Hack’ documentary) and so it is only natural to wonder how much more effective online manipulation is now that bad actors can use LLMs.

    This study is by a group of scientists who are trying to figure that out. The only difference is that they’re publishing their findings in order to inform the public, whereas Russia isn’t doing us the same favor.

    Naturally, it is in the interest of everyone using LLMs to manipulate the online conversation that this kind of research is never done. Having this information public could lead to reforms, regulations and effective counter strategies. It is no surprise that you see a bunch of social media ‘users’ creating a huge uproar.


    Most of you who don’t work in tech may not understand just how easy and cheap it is to set something like this up. For a few million dollars and a small staff, you could essentially dominate a multi-million-subscriber subreddit with whatever opinion you wanted to push: bots generate variations of that opinion, the bot accounts (guided by humans) downvote everyone else out of the conversation, and moderation power can be seized, stolen, or bought to further control the discussion.
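    To put rough numbers on “easy and cheap”: every figure below (API price, comment volume, staff size, salary) is an illustrative assumption, not a number from this thread, but the arithmetic shows why the budget lands at or below the “few million dollars” mark.

```python
# Back-of-envelope cost of flooding a large subreddit with
# LLM-generated comments. All prices and volumes are assumptions.

PRICE_PER_1M_TOKENS = 1.00   # assumed blended API price, USD
TOKENS_PER_COMMENT = 300     # prompt plus a short generated reply
COMMENTS_PER_DAY = 50_000    # enough to dominate most discussions
DAYS = 365

tokens_per_year = COMMENTS_PER_DAY * TOKENS_PER_COMMENT * DAYS
api_cost = tokens_per_year / 1_000_000 * PRICE_PER_1M_TOKENS

STAFF = 5                    # small team steering accounts and narratives
SALARY = 100_000             # assumed annual cost per person, USD
total = api_cost + STAFF * SALARY

print(f"API cost/year:  ${api_cost:,.0f}")   # ≈ $5,475
print(f"Total w/ staff: ${total:,.0f}")      # ≈ $505,475
```

    Even with these assumptions inflated tenfold, the annual bill stays in single-digit millions, dominated by staff rather than compute.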

    Or, wholly fabricated subreddits can be created. A few months prior to the US election there were several new subreddits which were created and catapulted to popularity despite just being a bunch of bots reposting news. Now those subreddits are high in the /all and /popular feeds, despite their moderators and a huge portion of the users being bots.

    We desperately need this kind of study to keep from drowning in a sea of fake people who will tirelessly work to convince you of all manner of nonsense.

  • VampirePenguin@midwest.social · +32/-4 · 7 hours ago

    AI is a fucking curse upon humanity. The tiny morsels of good it can do is FAR outweighed by the destruction it causes. Fuck anyone involved with perpetuating this nightmare.

    • Tja@programming.dev · +8/-1 · 4 hours ago

      Damn this AI, posting and doing all this mayhem all by itself on poor unsuspecting humans…

    • 13igTyme@lemmy.world · +15/-3 · 6 hours ago (edited)

      Today’s “AI” is just machine learning code. It’s been around for decades and does a lot of good. It’s most often used for predictive analytics: in healthcare it facilitates patient flow and digests large volumes of data fast to assist providers, case managers, and social workers. It’s also used in other industries that receive little attention.

      Even some large language models can do good; it’s the shitty people who use them for shitty purposes that ruin it.
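      For a concrete sense of the patient-flow use case mentioned above, here is a minimal sketch (with made-up admission counts, and deliberately the simplest possible model) of the kind of forecast such systems automate:

```python
# Minimal patient-flow forecast: predict tomorrow's admissions from a
# trailing average and flag days that need extra staffing.
# The admission counts below are made-up illustrative data.

daily_admissions = [42, 51, 47, 55, 60, 58, 63]  # last 7 days

def forecast_next(history, window=3):
    """Trailing-mean forecast of the next day's admissions."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def needs_extra_staff(predicted, capacity=55):
    """Flag a surge when the forecast exceeds normal capacity."""
    return predicted > capacity

pred = forecast_next(daily_admissions)
print(f"forecast: {pred:.1f} admissions, surge={needs_extra_staff(pred)}")
```

      Real systems replace the trailing mean with trained models, but the shape is the same: historical volumes in, a staffing decision out.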

      • VampirePenguin@midwest.social · +7/-2 · 5 hours ago

        Sure, I know what it is and what it’s good for; I just don’t think the juice is worth the squeeze. The companies developing AI HAVE to shove it everywhere to make it feasible, and doing so is destructive to our entire civilization. The theft of folks’ work, the scamming, the deep fakes, the social media propaganda bots, the climate-raping energy consumption, the loss of skill and knowledge, the enshittification of writing and the arts; the list goes on and on. It’s a dead end that humanity will regret pursuing if we survive this century. The fact that we get a paltry handful of positives is cold comfort for our ruin.

        • 13igTyme@lemmy.world · +5/-3 · 4 hours ago

          The fact that we get a paltry handful of positives is cold comfort for our ruin.

          This statement tells me you don’t understand how many industries are using machine learning and how many lives it saves.

      • Dagwood222@lemm.ee · +1/-1 · 4 hours ago

        They are just harmless fireworks. They are even useful for signaling ships at sea of dangerous tides.

    • sugar_in_your_tea@sh.itjust.works · +2/-3 · 6 hours ago (edited)

      I disagree. It may seem that way if that’s all you look at and/or you buy the BS coming from the LLM hype machine, but IMO it’s really no different than the leap to the internet or search engines. Yes, we open ourselves up to a ton of misinformation, shifting job market etc, but we also get a suite of interesting tools that’ll shake themselves out over the coming years to help improve productivity.

      It’s a big change, for sure, but it’s one we’ll navigate, probably in similar ways that we’ve navigated other challenges, like scams involving spoofed webpages or fake calls. We’ll figure out who to trust and how to verify that we’re getting the right info from them.

      • zbyte64@awful.systems · +3 · 3 hours ago

        LLMs are not like the birth of the internet. LLMs are more like what came after when marketing took over the roadmap. We had AI before LLMs, and it delivered high quality search results. Now we have search powered by LLMs and the quality is dramatically lower.

        • sugar_in_your_tea@sh.itjust.works · +2/-2 · 2 hours ago

          Sure, and we had an internet before the world wide web (ARPANET). But that wasn’t hugely influential until it was expanded into what’s now the Internet. And that evolved into the world wide web after 20-ish years. Each step was a pretty monumental change, and built on concepts from before.

          LLMs are no different. Yes they’re built on older tech, but that doesn’t change the fact that they’re a monumental shift from what we had before.

          Let’s look at access to information and misinformation. The process was something like this:

          1. Physical encyclopedias, newspapers, etc
          2. Digital, offline encyclopedias and physical newspapers
          3. Online encyclopedias and news
          4. SEO and the rise of blog/news spam - misinformation is intentional or negligent
          5. Early AI tools - misinformation from hallucinations is largely also accidental
          6. Misinformation in AI tools becomes intentional

          We’re in the transition from 5 to 6, which is similar to the transition from 3 to 4. I’m old enough to have seen each of these transitions.

          The way people interact with the world is fundamentally different now than it was before LLMs came out, just like the transition from offline to online computing. And just like people navigated the transition to SEO nonsense, people need to navigate the transition to LLM nonsense. It’s quite literally a paradigm shift.

    • acosmichippo@lemmy.world · +23 · 7 hours ago

      Why wouldn’t that be the case? All the most persuasive humans are liars too. Fantasy sells better than the truth.

  • justdoitlater@lemmy.world · +50/-5 · 9 hours ago

    Reddit: Ban the Russian/Chinese/Israeli/American bots? Nope. Ban the Swiss researchers that are trying to study useful things? Yep

    • Ilandar@lemm.ee · +35/-2 · 8 hours ago

      Bots attempting to manipulate humans by impersonating trauma counselors or rape survivors isn’t useful. It’s dangerous.

      • endeavor@sopuli.xyz · +16/-4 · 7 hours ago

        Humans pretend to be experts in front of each other and constantly lie on the internet every day.

        Say what you want about 4chan, but the disclaimer it had on top of its page should be common sense to everyone on social media.

          • endeavor@sopuli.xyz · +4/-6 · 6 hours ago

            If fake experts on the internet get their jobs taken by the ai, it would be tragic indeed.

            Don’t worry tho, popular sites on the internet are dead since they’re all bots anyway. It’s over.

            • Chulk@lemmy.ml · +1/-1 · 2 hours ago

              If fake experts on the internet get their jobs taken by the ai, it would be tragic indeed.

              These two groups are not mutually exclusive

      • justdoitlater@lemmy.world · +8/-1 · 8 hours ago

        Sure, but still less dangerous than bots undermining our democracies and trying to destroy our social fabric.

    • conicalscientist@lemmy.world · +3 · 4 hours ago (edited)

      I don’t know what you have in mind, but the founders originally used bots to generate activity to make the site look popular. Which raises the question: what was really the root of Reddit’s culture? Was it bots following human activity to bolster it, or were the humans merely following what the founders programmed the bots to post?

      One thing’s for sure: Reddit has always been a platform of questionable integrity.

      • FourWaveforms@lemm.ee · +1 · 17 seconds ago

        They’re banning 10+ year accounts over trifling things and it’s got noticeably worse this year. The widespread practice of shadowbanning makes it clear that they see users as things devoid of any inherent value, and that unlike most corporations, they’re not concerned with trying to hide it.

  • nodiratime@lemmy.world · +36/-1 · 10 hours ago

    Reddit’s chief legal officer, Ben Lee, wrote that the company intends to “ensure that the researchers are held accountable for their misdeeds.”

    What are they going to do? Ban the last humans on there having a differing opinion?

    Next step for those fucks is verification that you are an AI when signing up.

  • SolNine@lemmy.ml · +36 · 12 hours ago

    Not remotely surprised.

    I dabble in conversational AI for work, and am currently studying its capabilities for what are thankfully (IMO, at least) positive and beneficial interactions with a customer base.

    I’ve been telling friends and family recently that for a fairly small investment of money and time, I am fairly certain a highly motivated individual could influence at minimum a local election. Given that, I imagine it would be very easy for nations or political parties to manipulate individuals on a much larger scale. IMO nearly everything on the Internet should be suspect at this point, and Reddit is atop that list.

    • aceshigh@lemmy.world · +23/-1 · 10 hours ago

      This isn’t even a theoretical question. We saw it live in the last US elections. Fox News, TikTok, WaPo, etc. are owned by right-wing media and sane-washed Trump. It was a group effort. You need to be suspicious not only of the internet but of TV and newspapers too. Old-school media isn’t safe either. It never really was.

      But I think the root cause is that people don’t have the time to really dig deep to get to the truth, and they want entertainment, not to be told about the doom and gloom of the actual future (like climate change, loss of the middle class, etc.).

      • DarthKaren@lemmy.world · +2 · 7 hours ago

        I think it’s more that most people don’t want to see views that don’t align with their own or challenge their current ones. There are those of us who are naturally curious. Who want to know how things work, why things are, what the latest real information is. That does require that research and digging. It can get exhausting if you don’t enjoy that. If it isn’t for you, then you just don’t want things to clash with what you “know” now. Others will also not want to admit they were wrong. They’ll push back and look for places that agree with them.

        • aceshigh@lemmy.world · +1 · 5 hours ago

          People are afraid to question their belief systems because it will create an identity crisis, and most people can’t psychologically deal with it. So it’s all self preservation.

  • MonkderVierte@lemmy.ml · +26 · 12 hours ago (edited)

    When Reddit rebranded itself as “the heart of the internet” a couple of years ago, the slogan was meant to evoke the site’s organic character. In an age of social media dominated by algorithms, Reddit took pride in being curated by a community that expressed its feelings in the form of upvotes and downvotes—in other words, being shaped by actual people.

    Not since the APIcalypse at least.

    Aside from that, this is just reheated news (for clicks, I assume) from a week or two ago.

    • ClamDrinker@lemmy.world · +3 · 3 hours ago (edited)

      One likely reason the backlash has been so strong is because, on a platform as close-knit as Reddit, betrayal cuts deep.

      Another laughable quote after the APIcalypse, at least for the people that remained on Reddit after being totally ok with being betrayed.

  • flango@lemmy.eco.br · +27 · 13 hours ago

    […] I read through dozens of the AI comments, and although they weren’t all brilliant, most of them seemed reasonable and genuine enough. They made a lot of good points, and I found myself nodding along more than once. As the Zurich researchers warn, without more robust detection tools, AI bots might “seamlessly blend into online communities”—that is, assuming they haven’t already.

  • thedruid@lemmy.world · +19/-4 · 12 hours ago

    Fucking A.I. and their apologist script kiddies. Worse than fucking Facebook in its disinformation.

  • Ensign_Crab@lemmy.world · +13 · 13 hours ago

    Imagine what the people doing this professionally do, since they know they won’t face the scrutiny of publication.

  • TheObviousSolution@lemm.ee · +64 · 18 hours ago

    The reason this is “The Worst Internet-Research Ethics Violation” is because it has exposed what Cambridge Analytica’s successors already realized and are actively exploiting. Just a few months ago it was literally Meta itself running AI accounts trying to pass themselves off as normal users, and not an f-ing peep. Why do people think they, the ones who enabled Cambridge Analytica, were trying this shit to begin with? The only difference now is that everyone doing it knows to do it as an “unaffiliated” anonymous third party.

    • tauren@lemm.ee · +3/-3 · 16 hours ago

      Just a few months ago it was literally Meta itself…

      Well, it’s Meta. When it comes to science and academic research, they have rather strict rules and committees to ensure that an experiment is ethical.

      • FarceOfWill@infosec.pub · +8 · 14 hours ago

        The headline is that they advertised beauty products to girls after they detected them deleting a selfie. No ethics or morals at all

      • thanksforallthefish@literature.cafe · +6/-1 · 13 hours ago

        You may wish to reword. The unspecified “they” reads like you think Meta have strict ethical rules. Lol.

        Meta have no ethics whatsoever. And yes, I assume you meant universities have strict rules; however, the approval of this study marks even that as questionable.

    • FauxLiving@lemmy.world · +2 · 6 hours ago

      One of the Twitter leaks showed a user database that effectively had more users than there were people on earth with access to the Internet.

      Before Elon bought the company he was trashing them on social media for being mostly bots. He’s obviously stopped that now that he was forced to buy it, but the fact remains that Twitter (and, by extension, all social spaces) is mostly bots.

  • perestroika@lemm.ee · +14 · 14 hours ago (edited)

    The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.

    This seems to be the kind of a situation where, if the researchers truly believe their study is necessary, they have to:

    • accept that negative publicity will result
    • accept that people may stop cooperating with them on this work
    • accept that their reputation will suffer as a result
    • ensure that they won’t do anything illegal

    After that, if they still feel their study is necessary, maybe they should run it and publish the results.

    If then some eager redditors start sending death threats, that’s unfortunate. I would catalogue them, but not report them anywhere unless something actually happens.

    As for the question of whether a tailor-made response considering someone’s background can sway opinions better - that’s been obvious through ages of diplomacy. (If you approach an influential person with a weighty proposal, it has always been worthwhile to know their background, think of several ways of how they might perceive the proposal, and advance your explanation in a way that relates better with their viewpoint.)

    AI bots which take into consideration a person’s background will - if implemented right - indeed be more powerful at swaying opinions.

    As to whether secrecy was really needed - the article points to other studies which apparently managed to prove the persuasive capability of AI bots without deception and secrecy. So maybe it wasn’t needed after all.

    • Djinn_Indigo@lemm.ee · +2 · 7 hours ago

      But those other studies didn’t make the news though, did they? The thing about scientists is that they aren’t just scientists, and the impact of their work goes beyond the papers that they publish. If doing something ‘unethical’ is what it takes to get people to wake up, then maybe the publication status is a lesser concern.

      • CBYX@feddit.org · +11/-2 · 16 hours ago

        Not sure why everyone hasn’t assumed Russia has been doing this on conservative subreddits the whole time…

        • seeigel@feddit.org · +2 · 9 hours ago

          Or somebody else is doing the manipulation and is successfully putting the blame on Russia.

        • skisnow@lemmy.ca · +16/-4 · 15 hours ago

          Russia are every bit as active in leftist groups whipping them up into a frenzy too. There was even a case during BLM where the same Russian troll farm organised both a protest and its counter-protest. Don’t think you’re immune to being manipulated to serve Russia’s long-term interests just because you’re not a conservative.

          They don’t care about promoting right-wing views, they care about sowing division. They support Trump because Trump sows division. Their long-term goal is to break American hegemony.

          • Madzielle@lemmy.dbzer0.com · +3 · 7 hours ago

            There have been a few times over the last few years that my “bullshit, this is an extremist plant/propaganda” meter has gone off for left-leaning individuals.

            Meaning these comments/videos are made to look like they come from leftists, but are meant to make the left look bad/extremist in order to push people away from working-class movements.

            I’m truly a layman, but you just know it’s out there. The goal is indeed to divide us, and everyone should be suspicious of everything they see on the Internet and do proper vetting of their sources.

          • aceshigh@lemmy.world · +2 · 9 hours ago

            Yup. We’re all susceptible to joining a cult. No one willingly joins a cult; their group slowly morphs into one.

          • CBYX@feddit.org · +4 · 14 hours ago

            The difference is in which groups are consequentially making it their identity and giving one political party carte blanche to break American politics and political norms (and national security orgs).

            100% agree though.

        • taladar@sh.itjust.works · +4 · 15 hours ago

          Mainly I didn’t really expect that since the old methods of propaganda before AI use worked so well for the US conservatives’ self-destructive agenda that it didn’t seem necessary.

        • Geetnerd@lemmy.world · +8 · 15 hours ago

          Those of us who are not idiots have known this for a long time.

          They beat the USA without firing a shot.