Masnick's Impossibility Theorem: Content Moderation At Scale Is Impossible To Do Well
(www.techdirt.com)
You know, I've had an idea fermenting for some time now about how content moderation at scale might work. I have no idea whether it's feasible, nor do I have the technical expertise to bring it to fruition, but I think the following points suggest that content moderation at scale is possible:
When all of this combines, it makes you wonder whether content moderation couldn't be handled more the way a small town deals with a problematic individual: lots of small interactions with that person, with some people helping, others chastising, some educating, and their actions being watched a little more closely.

How does this translate to a digital environment? That's the part I'm still trying to figure out. Perhaps problematic comments could be flagged by other users, as in existing systems, but then fall into a queue where regular community members vote on how appropriate the comment was, with some kind of credit system (perhaps influenced by how much each reviewer contributes to, or receives positive feedback in, that particular community) determining the outcome. Much of this community feedback already happens conversationally (people arguing with or pushing back against problematic users, and educating or trying to help them). A system might even link problematic individuals up with self-flagged educators who are willing to talk with them directly and help them learn and grow. Honestly, I don't know all the specifics, but I think it's interesting to think about.
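Purely as a thought experiment, here's a rough sketch of what that flag-and-review queue might look like; the credit formula, field names, and threshold are all invented for illustration, not a real design:

```python
from dataclasses import dataclass, field

@dataclass
class Member:
    """A community member whose review votes are weighted by local standing."""
    name: str
    contributions: int = 0       # posts/comments made in this community
    positive_feedback: int = 0   # upvotes/thanks received in this community

    @property
    def weight(self) -> float:
        # Made-up credit formula: weight grows with participation and
        # positive feedback, with diminishing returns and a hard cap.
        return 1.0 + min(4.0, (self.contributions + 2 * self.positive_feedback) ** 0.5)

@dataclass
class FlaggedComment:
    """A comment flagged by users, waiting in the community review queue."""
    comment_id: str
    votes: list = field(default_factory=list)   # (Member, keep: bool) pairs

    def cast_vote(self, member: Member, keep: bool) -> None:
        self.votes.append((member, keep))

    def outcome(self, threshold: float = 0.5) -> str:
        if not self.votes:
            return "pending"
        total = sum(m.weight for m, _ in self.votes)
        kept = sum(m.weight for m, keep in self.votes if keep)
        return "keep" if kept / total >= threshold else "remove"
```

Under a weighting like this, a handful of long-standing community members would outweigh a brigade of brand-new accounts, which is roughly the "small town" dynamic I'm picturing.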
@EnglishMobster @Gaywallet I think this is an approach that could actually work, since more trusted people essentially have more power. And as long as the people in power don't radically change their opinions, there is very little potential for abuse. Especially if moderator actions, for example, had to go through review, meaning that if one moderator decides to do something, another moderator has to sign off on it.
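Just to illustrate, a sign-off rule like that could be as simple as this sketch; the names and structure here are made up:

```python
from dataclasses import dataclass, field

@dataclass
class ModAction:
    """A proposed moderator action that only takes effect once a second,
    different moderator has signed off on it (hypothetical review rule)."""
    description: str             # e.g. "remove comment 123"
    proposed_by: str
    approved_by: set = field(default_factory=set)
    executed: bool = False

    def sign_off(self, moderator: str) -> None:
        if moderator == self.proposed_by:
            raise ValueError("review must come from a different moderator")
        self.approved_by.add(moderator)
        if not self.executed:
            self.executed = True   # apply the action only after independent review
```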
@EnglishMobster @Gaywallet Some years back I was moderating GMod-communities and we had a similar power structure. You would initially publicly (in a forum) apply for a moderation role, where everyone could comment on the prior experiences they have had with you and if the resonance is good you would become a Trial-Moderator, where you would be coached by a senior-member of staff. If you did well, you would be promoted to higher roles, to eventually teach new staff yourself.
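For what it's worth, that ladder boils down to a simple role progression; the role names below are just an approximation of what we had:

```python
# Hypothetical role ladder approximating the structure described above:
# applicant -> trial moderator -> moderator -> senior staff (coaches trials).
LADDER = ["applicant", "trial moderator", "moderator", "senior staff"]

def promote(role: str) -> str:
    """Move one step up the ladder; senior staff stay where they are."""
    i = LADDER.index(role)
    return LADDER[min(i + 1, len(LADDER) - 1)]
```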