• ᴇᴍᴘᴇʀᴏʀ 帝@feddit.uk · 4 points · edited · 3 hours ago

    “We have done this in the past for quarantined communities and found that it did help to reduce exposure to bad content, so we are experimenting with this sitewide,” according to the main post. Reddit “may consider” expanding the warnings in the future to cover repeated upvotes of other kinds of content, as well as taking other types of action beyond warnings.

    Thoughtcrime time.

    Bigger picture - what if Xitter, Meta and Reddit (all run by Trump humpers) started centrally compiling this kind of thing to flag up “persons of interest”?

  • yesman@lemmy.world · 9 points · 15 hours ago

    I mean, when everyone else is jettisoning moderation, Reddit is cracking down on bots and trolls? I don’t hate it.

    • kat@orbi.camp · 8 points · 13 hours ago

      I mean, they’re deciding what counts as violent based on whatever arbitrary classifiers they’re feeling like that day.

  • RightHandOfIkaros@lemmy.world · 1 point · 14 hours ago

    Honestly, I wouldn’t be surprised if this started happening on Lemmy too. It’s a lot easier to control what kind of content is on a platform when you do something like this.

    Now, I don’t particularly think this is a good idea, but I can see the benefit as well. People have the freedom to upvote whatever they choose, even if I think they’re dumb for doing it, and they shouldn’t have to worry about anyone other than law enforcement or lawyers (in extreme edge cases) using that information against them.