!endlesswar@lemmy.ca

Seems to exist purely to post misinformation, with repeated claims that Russia is innocent, that the US caused the Ukraine situation, that they’re stopping Ukraine from agreeing to Russia’s super-amazing peace deals, etc.

This is the sort of garbage one would expect to find on ML or Hex; is CA intended to be the same low-quality instance?

  • Rentlar@lemmy.ca · 7 points · edited · 15 days ago

    The problem is, who is the arbiter of that? There are essentially 3 types of moderation styles here:

    Laissez-faire: Let people do whatever as long as it doesn’t actively hurt anyone. People can govern themselves and serious incidents are expected to be reported and dealt with. Some jerks will tiptoe around the rules but will eventually get caught. Lemm.ee, lemmy.ca and some others follow this.

    Casual enforcement of admin-philosophy: Most topics outside of politically contentious ones are not strictly monitored. Mods/admins will root out communities, comments and posts that actively go against the narrative, particularly on threads on political topics like Ukraine, Palestine, etc… Lemmy.world and lemmy.ml follow this.

    Strict enforcement of admin-philosophy: do not tolerate any potentially harmful statements (to that instance’s narrative or vibe). Any violation will be removed and repeated violations get you banned. This philosophy can be reasonable like Beehaw.org, which I think works very well for them and makes it a welcoming safe space, because there is no tolerance for bigotry and jerks. It can also be unreasonable like lemmygrad.ml, where dissent to the pro-Russian narrative is swiftly dealt with.

    If they follow the latter two styles of moderation, admins of other instances should ban users who go against their philosophy from reaching their servers. That’s how it is with federation; sometimes different instances have conflicting philosophies (the vegan one, for example). It’s up to each admin to decide whether a foreign Fediverse user belongs in their kingdom. The moderation style that lemmy.ca has lets it be a good neutral place to discuss various drama and lore from other servers.

    • Nils@lemmy.ca · 4 points · 14 days ago

      The problem is, who is the arbiter of that?

      Intolerance is well defined in many languages, and, lest anyone think I am talking about lactose intolerance, hate crimes are defined in many legal codes across the globe, including Canada’s. There is no need for a philosophical discussion of what “intolerance” is.

      It does not take a linguistics expert to realize someone’s discourse is ill-intentioned when the semantics of “the victim deserves to suffer” amount to a call to action.

      In common-law terms, the account in question has already been punished on other instances, which creates precedent.

      The modus operandi of these kinds of accounts is also well known and documented. And popularity contests should not be the tool that defines what is right on an online platform where there is no real accountability. How many upvotes do you think a single troll-farm worker can generate in a couple of minutes?

      We should not depend on admins’ moods (philosophies, as you suggest) for results, but I agree that we should help when/where we can; their volunteer work is invaluable to the health of the instance.

      I think the discussions worth having in these kinds of posts are about methods, and about checks and balances to prevent bad decisions by people in power and to ensure people are treated fairly.

      Methods are many, and there are many examples out there.

      • Would Twitter-like Community Notes solve some of these problems, or create more? Would the Lemmy repo accept such a PR?
      • The Twitter-vs-Brazil problem: is it worth locking accounts while an investigation is pending? One of them was instigating machete attacks on schools and nurseries. When would such a lock be okay, and when not?
      • How long should people have to complain/report before something (an investigation, a lock, or a conclusion) happens? The account we both mentioned elsewhere in this post (not in this thread) went on for 2 months before being banned; they did not leave on their own. …
      • Rentlar@lemmy.ca · 2 points · 14 days ago

        Sure, we should not tolerate intolerance; “No Bigotry” is rule #1 here, so if you see that, then please report it. Misinformation, though? That’s the main thing OP is talking about, and they gave a few examples; those are propaganda, but not intolerance.

        • Nils@lemmy.ca · 1 point · edited · 13 days ago

          I feel like you are arguing with me about OP’s points. I am not sure if it is a Lemmy error, but the comment of mine you first replied to was:

          I do not understand people here defending misinformation/intolerance as a topic worthy of debate. The dichotomy is naivety or complicity.

          People spreading misinformation and intolerance are not here for healthy arguments; you just need to check their history to see their dishonesty and ill temper.

          Meanwhile, accounts like the one OP highlighted are just creating trouble for the mods of other instances to solve.

          I don’t think you are here defending that person’s acts or being complicit, nor defending misinformation/intolerance with malicious intent, nor being disingenuous with semantics. So, in the interest of healthy discussion, I continue.

          You don’t need to go far into that person’s history to see examples of their dishonesty and ill temper, if that is the hill you choose to defend. You might need special privileges to see their removed content on other instances.

          Rereading your message, sorry if I mistook your words the first time, but I now imagine that you were not saying intolerance, but misinformation, as in:

          who is the arbiter of “misinformation”

          In that case,

          Canada might be a little behind on misinformation laws; it has always been behind when the subject involves technology. But it defines the types very well (MDM, they call it: misinformation, disinformation, and malinformation), qualifies the damages, and campaigns to raise awareness and minimize the effects. https://www.cyber.gc.ca/en/guidance/how-identify-misinformation-disinformation-and-malinformation-itsap00300 https://www.canada.ca/en/campaign/online-disinformation.html

          “Misinformation” is serious and causes harm, and the word should not be used interchangeably with mere “disagreement”.

          That OP is complaining about misinformation does not make it any less severe than intolerance when it is used for the same goal: to cause harm.

          Even before the internet, we had laws and procedures for harmful discourse, be it intolerance or misinformation; technology just changes the mechanics.

          That is why I was suggesting a discussion of well-defined and transparent methods to deal with them, methods that should be constantly reviewed and improved.

          Edit: bold line

          • Rentlar@lemmy.ca · 1 point · edited · 13 days ago

            That’s reasonable. It’s my bad that I was unclear with the use of “that”. It’s fine for you to argue against spreading intolerance, but I’ll point out that the main topic of the post is misinformation. Even though, as you rightly argue, the two often share purpose and goals, and I agree with you that both should have clear boundaries set here as to what is allowed and disallowed, they are distinct concepts.

            To be clear, I’m not making the distinction between MDM and intolerance to excuse either of them. Misinformation is bad too, and I agree that we should inform and root it out where we find it. However, the banhammer is a tool that can make any comment look like a nail, so care should be taken when it is used. Conflating the removal of clearly intolerant takes with the removal of possibly misinformed takes, when it comes to enforcement actions, would be viewed as mod/admin abuse and would lower users’ trust in the admins of that server.

            The main example from the OP is the endlesswar community. The user there is pushing takes that are not fully related to “endless war” but come from other sources, questionable as some of them may be if we were to analyze each one carefully. A separate example is a comrade I have seen around Lemmy since I joined, https://lemmy.ml/u/yogthos. This user has been constantly pushing narratives, to the point that one might think they could be paid to do it. Over the past couple of years, they have become far more careful to avoid getting banned for intolerant takes, and now selectively post articles and graphs that support a specific narrative.

            Do these users, or users who might post a misinformed take within the power-users’ posts, deserve bans? Do we analyze every comment, post, and news source and remove those that meet the criteria for MDM? Do we keep a whitelist/blacklist to permit links only to reputable news sites server-wide (to stop someone from creating a community where they allow themselves to post from wherever)? Lemmy.world’s news communities had a Media Bias Fact Check bot that was rather inaccurate and very unpopular.

            That is why I was suggesting a discussion of well-defined and transparent methods to deal with them, methods that should be constantly reviewed and improved.

            I support a thorough discussion on how best to deal with it, both locally and across the Fediverse. It’s not “not a problem”, but at the moment I don’t see any fair solutions that don’t rely on an undue amount of mod/admin discretion, besides removing intolerant takes and downvoting misinformed ones.

            E: One solution could be something like SlrPnk’s Pleasant Politics, which instituted an AI moderator that checks comments and will issue temp bans for bad behaviour it detects. I’m still a little skeptical of it, as to me it falls under “undue amount of mod/admin discretion”, but at least it takes a lot of the tiring work for admins out of the equation.

            • Nils@lemmy.ca · 2 points · 11 days ago

              I imagine the discussion we are having would be more beneficial over in c/main.

              However the banhammer is a tool that can make any comment look like a nail,

              That is why methods are important: if your only tool is a hammer, the screws will look like nails. And people waiting for a solution will be expecting a “thunk”.

              In real life, people do not get arrested for reposting fake news. Well, maybe in some countries, if you call a war a “war”. Correcting myself: in real life, people should not get arrested for reposting fake news.

              Many people share it because they do not know better, or are afraid, or for many other reasons. I have so many examples in my family. Usually, speaking to them with compassion and understanding, in a language they share, works.

              But there are people who benefit from it: instigators, bad actors. How long do you think it should be allowed to fester before you reach a point of no return?

              From my experience, the places doing it properly, without installing a censorship state, are the ones with a well-defined and transparent process. They are doing proper investigations, working with the community, and taking proper action against bad actors. Canada is not far from achieving it; it needs work, and I wish it were faster.

              You cannot expect an online community that depends on volunteer work to have the same level of scrutiny. I don’t even know if it would be possible to create some sort of committee to oversee lemmy.ca, as is common in some forums and other open-source communities.

              Do these users, or the users that might post a misinformed take within the power-users’ posts deserve bans? Do we analyze every comment, post and news-source and remove those that meet the criteria for MDM? …

              Yeah, those are the questions that need to be discussed! And plans made from the answers.

              I have facts, experience, and opinions. For one, I am averse to mass scanning, even more so without proper methods, but I have been proven wrong many times; if people think it is the right way to go, I might as well understand it better and help where I can.

              Back to the community from OP’s post:

              I imagine you have been on lemmy.ca long enough to remember Geopolitics. It tried to be more neutral, but the mod there would pin all of his own posts to bury other people’s posts, or just delete them. That account also kept accumulating bans across instances until it was fully banned on this one. Endlesswar does not even try to be neutral, and its mod has accumulated an even longer rap sheet, in less than 2 months.

              I understand that a human can read their posts, analyze their actions, and see that they are intentionally acting in bad faith, arguing dishonestly, and being ill-tempered with people. I don’t think an AI can classify this kind of thing consistently yet. But on a quantitative analysis, there are enough bans and enough removed content to warrant at least an investigation and a warning.

              …Over the past couple years, they have become far more careful to avoid…

              Wow, that user has been banned in so many communities and has had a lot of content removed over time, including on ml.

              They post many memes, but their reactions to people’s comments are not the most amicable; they take everything as a direct offence. That could surely be improved.