I made a robot moderator. It models trust flow through a network built from voting patterns, and flags people and posts/comments that are accumulating a large amount of “negative trust,” so to speak.
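To give a rough feel for the idea (this is a toy sketch, not the actual bot — the seed users, damping factor, and function name are all made up for illustration), you can propagate trust through a graph of votes, with downvotes carrying trust into the negatives:

```python
from collections import defaultdict

def propagate_trust(votes, seeds, rounds=20, damping=0.85):
    """votes: list of (voter, author, weight), weight +1 for an upvote, -1 for a downvote.
    seeds: users given initial trust 1.0. Returns {user: trust score}."""
    out = defaultdict(list)  # outgoing votes per voter
    users = set(seeds)
    for voter, author, w in votes:
        out[voter].append((author, w))
        users.update((voter, author))

    trust = {u: (1.0 if u in seeds else 0.0) for u in users}
    for _ in range(rounds):
        # each round, seeds keep a base amount of trust and everyone
        # passes a damped share of their trust along their votes
        nxt = {u: (1 - damping) * (1.0 if u in seeds else 0.0) for u in users}
        for voter, targets in out.items():
            if trust[voter] <= 0:
                continue  # only positively trusted users pass trust along
            share = damping * trust[voter] / len(targets)
            for author, w in targets:
                nxt[author] += share * w  # downvotes push trust negative
        trust = nxt
    return trust

votes = [("alice", "bob", 1), ("alice", "mallory", -1), ("bob", "mallory", -1)]
trust = propagate_trust(votes, seeds={"alice"})
# mallory ends up with negative trust; bob ends up positive
```

The point of iterating rather than just counting votes is that a downvote from someone the network already distrusts shouldn’t count the same as one from someone widely trusted.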
In its current form, it’s supposed to run autonomously. In practice, I have to step in and fix the occasional boo-boo, but that doesn’t happen very often.
I think it’s working well enough at this point that I’d like to experiment with a mode where it acts as an assistant to an existing moderation team, instead of taking its own actions. I’m thinking of having it auto-report suspect comments instead of deleting them autonomously. There are other modes that might be useful, but that seems like a good place to start. Is anyone interested in trying the experiment in one of your communities? I’m pretty confident at this point that it can ease moderation load without causing many problems.
Is there a way of tailoring the moderation to a community’s needs? One problem I can see arising is that it could lead to a monoculture of moderation practices. If there were a way of making the auto-reports relative to each community’s norms, that would be interesting.
Anti Commercial-AI license (CC BY-NC-SA 4.0)
I tried that early on. It does have a “perspective,” in the sense of which communities it treats as trusted. What I found was that more data is simply better: when it looks at the global picture, it can sort out for itself who the jerks are and which social networks are widely trusted. Whenever I tried configuring it to interpret the data a particular way, or to curtail what it looked at, that only increased the chance of error without making it any better tuned to the specific community it’s looking at.
I think giving people some insight into how it works, and the ability to play with the settings, so to speak, so they feel confident that it’s on their side instead of it being a black box, is a really good idea. I tried some things along those lines, but I didn’t get very far.
Maybe it’d be nice to set it up to be more transparent. Instead of auto-banning, it could send auto-reports to the moderators flagging comments it considers bad, with an indication of how bad and why. Then, once a week, it could publish a report of what it’s done and why, with some justification for anyone it took action against, visible to everyone in the community, so there are no surprises or secrets.
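A rough sketch of what that report mode could emit, assuming each flagged comment comes with a trust score (the field names and threshold here are my own invention, not the bot’s actual output):

```python
from dataclasses import dataclass

@dataclass
class Report:
    comment_id: str
    author: str
    trust_score: float  # negative = distrusted by the network
    reason: str

def make_reports(flagged, threshold=-0.25):
    """Turn (comment_id, author, score) tuples into moderator reports,
    keeping only comments below the trust threshold, worst first."""
    reports = [
        Report(cid, author, score,
               f"trust score {score:.2f} is below threshold {threshold}")
        for cid, author, score in flagged
        if score < threshold
    ]
    return sorted(reports, key=lambda r: r.trust_score)

queue = make_reports([("c1", "a", -0.5), ("c2", "b", 0.3), ("c3", "c", -0.9)])
# queue holds c3 then c1; c2 is above the threshold and is left alone
```

The same list of `Report` records, accumulated over a week, could feed the published summary, so the moderators and the community see the same justifications.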
I thought about some other ideas, such as opening an “appeal” community where someone can come in, talk with people, agree not to be a jerk, and get unbanned as long as they aren’t toxic going forward. Coupled with the rule that if you come in for your appeal and yell at everyone that you’re right and everyone else is wrong and this is unfair, your ban stays, I think that could be a good thing. Maybe it would just be a magnet for toxicity. But one reason I really like the idea is that it moves away from one individual deciding what is and isn’t toxic, and outsources it more to the community at large and how they feel about it, which seems more fair.