• Reddit has begun issuing warnings to users who regularly upvote violent content, with a view to taking harsher action in future.
  • The company says that it will consider expanding this action to other forms of content in future.
  • Users are concerned that this moderation tactic could be abused or just improperly implemented.
  • db2@lemmy.world

    Can’t have anyone talking about the abuses going on, it makes Glorious Leader look bad.

    As always, fuck reddit.

    • see_i_did@lemm.ee

      You’ll eat your new reddit and train any AI models that we decide to feed your data to, and you’ll enjoy it. Reddit’s clients are probably pissed that all their models end up turning into incels.

  • Lemmist@lemm.ee

    Reddit’s moderation policies are already too crazy, and moderators already abuse their power however they want. I don’t expect anything to become significantly worse.

  • N3Cr0@lemmy.world

    There are still human users over on reddit? This must change. More punishment on reddit please!

    • doodledup@lemmy.world

      Delusional. There’s like 100 users on Lemmy. Reddit has grown its userbase this year.

      • Otter@lemmy.ca

        I think it’s a joke about dead internet theory, rather than userbase size

        https://en.m.wikipedia.org/wiki/Dead_Internet_theory

        The joke comes from an increase in bot use on Reddit, and the subsequent false positives / false negatives in trying to figure out which accounts are bots

        Lemmy has that problem too, but it’s much smaller in scope. Mostly because there’s less of a reason to try and control the narrative on this smaller platform, but also because the goals are different. Lemmy instances get no benefit from a bunch of fake engagement, and public upvotes make it easier to catch manipulation.

      • kitnaht@lemmy.world

        Most users on Lemmy are delusional, especially here in /c/Technology – turns out, this community isn’t for technology at all, but rather for bitching about Silicon Valley companies.

  • Skullgrid@lemmy.world

    fucking normienet. why not punish users for overthrowing democracy, spreading misinformation, being bigoted, and everything else that got us in this mess?

  • Monstrosity@lemm.ee

    The amount of censorship taking place on that platform every day lately is kind of staggering.

  • Snowstorm@lemmy.ca

    What if I think «guillotine» is the most beautiful word of 2025? I know some would argue Luigi should be first here, but I stand by my conviction.

  • Otter@lemmy.ca

    Users are concerned that this moderation tactic could be abused or just improperly implemented.

    This is the key bit. It’s good to try and make safer online spaces. But Reddit’s automated moderation has been bad for a while, and this might get more users caught up in false positives

    I’ve seen comments tagged as abusive regardless of the context:

    • someone quoting a news article
    • someone making a hyperbolic joke (especially in gen-Z subs)
    • actual abuse

    For well-moderated subs, the vast majority of those reports turned out to be false positives over time. In the mod queue this didn’t affect the end user, since mods could dismiss the false positives. But automated ‘scores’ won’t account for that.

    We’re going to see even more annoying algospeak like “unalive”, only it’s going to be in news quotes as well

  • technocrit@lemmy.dbzer0.com

    Hegemonic violence, state violence, capitalist violence…

    These will continue to not be acknowledged as violence.