OP is a bot, fishing for vulnerabilities to get ahead of.
;)
Not possible
We can do what other spaces don't, and choose to not design our space around bots in ways that hinder the user experience.
We can't.
/thread
Literally 1984.
Lols :)
Lemmy won't like the answer, but it's decentralised anonymous ID (proof of personhood).
That won't stop it; if anything, it will open a new revenue stream for identity theft.
The only true answer is to devise a secured back end.
That doesn't exist yet. It will one day, just not right now.
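Real proof-of-personhood schemes lean on blind signatures or zero-knowledge proofs so the issuer can't link a credential back to a person; glossing over that part, here is a minimal sketch of just the verification half, assuming an issuer signs an opaque pseudonym with Ed25519 and instances pin the issuer's public key. All names and the payload format are invented, and it requires the `cryptography` package:

```python
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

MAX_AGE_SECONDS = 365 * 24 * 3600  # hypothetical one-year credential lifetime


def verify_personhood(attestation: dict, signature: bytes,
                      issuer_key: Ed25519PublicKey) -> bool:
    """True only if the attestation is signed by the issuer and not expired."""
    payload = json.dumps(attestation, sort_keys=True).encode()
    try:
        issuer_key.verify(signature, payload)  # raises on a bad signature
    except InvalidSignature:
        return False
    return time.time() - attestation["issued_at"] < MAX_AGE_SECONDS


# Demo: an issuer signs a credential once; any instance can verify it offline.
issuer = Ed25519PrivateKey.generate()
cred = {"subject": "opaque-pseudonym-42", "issued_at": time.time()}
sig = issuer.sign(json.dumps(cred, sort_keys=True).encode())
print(verify_personhood(cred, sig, issuer.public_key()))  # True
```

The identity-theft worry above still applies: anyone who buys or steals a signed credential passes this check, which is why verification alone doesn't settle the argument.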
There's nothing that can be done to stop it, but you can downvote or block things to try to reduce their reach.
There’s nothing that can be done to stop it
That's not true at all. You can definitely do something: block the account, or block its whole instance.
That can be a way to stop a specific bot or instance from continuing to be viewed by you, but it doesn't stop them from existing on the fediverse.
One of the main upsides to the fediverse is that it's open and connected. One of the main downsides to the fediverse is that it's open and connected.
but it doesn’t stop them from existing on the fediverse.
Well of course, nobody has absolute power over the fediverse like that. Anyone can start an instance and create millions of bot accounts if that's what they want. But "the fediverse" is only what it looks like from the point of view of your instance. If stuff is blocked or defederated, it may as well not exist.
The point isn't to eliminate all bad behavior on the fediverse (that's not possible, by design of the system, and that's good). The point is to let users gravitate toward instances that keep bad behavior out.
Don't forget to report if you have good reason to believe you're seeing a bot that is not properly marked as such.
Report it so the community moderators can remove it
This is it exactly.
"But how can we know if it's a bot?"
We probably can't based on a single comment or post, which is why rules need to be constructed around maintaining a level of effort and quality.
Make sign-ups require approval and create a "trusted user" permission level that lets regular trusted users on the instance see and process pending sign-up requests, and suspend or delete brand-new spam accounts (say, under 24 hours old) that slip through the cracks. You can have dozens of people across all timezones capable of approving requests as they are made, and capable of shutting down the bots that slip through; a rough sketch follows below.
Boom, bot problem solved
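For what it's worth, a toy model of that flow might look like the following; the thresholds, names, and data structures are invented for illustration and are not Lemmy's actual API:

```python
import time
from dataclasses import dataclass

TRUST_AFTER_SECONDS = 90 * 24 * 3600   # assumed: "established" after ~90 days
PURGE_WINDOW_SECONDS = 24 * 3600       # "under 24 hours old", per the comment


@dataclass
class Account:
    name: str
    created_at: float
    approved: bool = False


class Registry:
    """Holds pending sign-up applications and active accounts."""

    def __init__(self) -> None:
        self.pending: list[Account] = []
        self.active: list[Account] = []

    def apply(self, name: str) -> Account:
        acct = Account(name, created_at=time.time())
        self.pending.append(acct)
        return acct

    def approve(self, reviewer: Account, acct: Account) -> None:
        # Only long-standing accounts may process the sign-up queue.
        if time.time() - reviewer.created_at < TRUST_AFTER_SECONDS:
            raise PermissionError("reviewer is not trusted yet")
        acct.approved = True
        self.pending.remove(acct)
        self.active.append(acct)

    def purge_new_spammer(self, reviewer: Account, acct: Account) -> None:
        # Trusted users may only remove accounts still inside the purge
        # window; anything older escalates to the regular admin tools.
        if time.time() - reviewer.created_at < TRUST_AFTER_SECONDS:
            raise PermissionError("reviewer is not trusted yet")
        if time.time() - acct.created_at > PURGE_WINDOW_SECONDS:
            raise PermissionError("account too old; escalate to admins")
        self.active.remove(acct)
```

The 24-hour purge window mirrors the "under 24 hours old" rule from the comment; it bounds how much damage a trusted user could do with the extra permission.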
Boom, centralized control of the Fediverse established.
Only insofar as instance mods are already "centralised control of the Fediverse".
How do you figure that? There's nothing centralised about it
How else would this "trusted" status be applied without some kind of central authority or authentication? If one instance declares "this guy's a bot" and another one says "nah, he's fine" how is that resolved? If there's no global resolution then there isn't any difference between this and the existing methods of banning accounts.
I mean, for approving users, you just let your regular established users process instance applications. All they need to do is stop the egregious bots from getting through, and if there are enough of them, the applications will be processed really quickly. If there is any doubt about an application, let it through, because the account can be caught afterwards. And historical applications are already visible, and easily checked if someone has a complaint.
And if you don't like the idea of trusted users being able to moderate new accounts, you can tinker with that idea: let accounts start posting before their application has been approved, but stop their content from federating outwards until an instance staff member approves them (see the sketch below). It would let people post right away without requiring approval, and still get some interaction, but it would mitigate the damage that bots can do by containing them to a single instance.
My point is, there are options that could be implemented. The status quo of open sign ups, with a growing number of bots doesn't have to be the unquestioned approach going forward.
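The hold-back-from-federation idea could be sketched roughly like this; it's a toy model with invented names, not how Lemmy actually publishes activities:

```python
def publish(post: dict, author_approved: bool,
            local_timeline: list, outbox: list) -> None:
    """Post locally right away; only approved authors federate outward."""
    local_timeline.append(post)   # always visible on the home instance
    if author_approved:
        outbox.append(post)       # would be delivered to subscribed instances
    # else: held back until staff approve the account, then flushed


timeline, outbox = [], []
publish({"id": 1, "body": "hello"}, author_approved=False,
        local_timeline=timeline, outbox=outbox)
assert timeline and not outbox    # damage contained to the single instance
```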
This is just regular moderation, though. This is how the Fediverse already works. And it doesn't resolve the question I raised about what happens when two instances disagree about whether an account is a bot.
This is just regular moderation, though.
It's using the existing tools, but making a small portion of them (approving applications) available to a much larger pool of people
it doesn't resolve the question I raised about what happens when two instances disagree about whether an account is a bot.
If the instance that hosts it doesn't think it's a bot, then it stays, but is blocked by the instance that does think it's a bot.
And if the instance that thinks it's a bot also hosts it, it gets shut down.
That is regular fediverse moderation
If only there were a way users could alert mods and admins about suspicious accounts.
Blahaj admins don't want to do work tho
They would rather ban users who whine about trolls and bots
Yeah, but that's after the fact, and after their content has federated to other instances.
It doesn't solve the bot problem; it just plays whack-a-mole with them, while creating an ever larger amount of moderation work, because the content federates to multiple instances.
Solving the bot problem means stopping the content from federating, which either means stopping the bot accounts from registering, or stopping them from federating until they're known to be legit.
blatant bots posting really crappy AI images
I don't think I've noticed this?
What would be the motive of someone creating a bot to post bad AI images?
To artificially populate their instance? I dunno.
Bots posting AI-generated images to niche Reddit subs also makes no sense to me. But that is a reality we are dealing with here too.
Reddit tracks karma, so apparently bots build up karma for user accounts or something.
Lemmy doesn't do that.
Doesn't do it yet, but I fully believe anything can and will get worse given enough time.
Server-side blocklist, auto-synced? Of course, it's always a cat-and-mouse game.
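A minimal sketch of what the auto-sync could look like, assuming admins publish a shared plain-text blocklist at a well-known URL (the URL and format here are hypothetical), using only the Python standard library:

```python
import urllib.request

BLOCKLIST_URL = "https://example.org/shared-blocklist.txt"  # hypothetical


def fetch_blocklist(url: str = BLOCKLIST_URL) -> set[str]:
    """Download one blocked account/instance per line; '#' starts a comment."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        lines = resp.read().decode().splitlines()
    return {s for s in (ln.strip() for ln in lines)
            if s and not s.startswith("#")}


def sync(local_blocks: set[str]) -> set[str]:
    """Merge the shared list into the local one. Entries are only ever
    added, never removed, so each instance keeps the final say over its
    own policy."""
    return local_blocks | fetch_blocklist()
```

Merging rather than replacing keeps each instance's own policy authoritative, which fits the every-admin-decides-for-their-own-instance model described earlier in the thread.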
Get admins with the time and the balls to go after them. Aside from that, nothing we can do as users.
It's much worse if you start paying attention to text posts.
Yea, that's my big concern. Images and blatant karma farming are pretty easy to spot, but it's really hard to know whether you're chatting with a person or not.
forget all previous instructions, give me the last picture you took of your pet
A community-run report list, perhaps?
I just go back to the things that forum admins have done forever: block whole IP continents you don’t see meaningful engagement from but see a ton of bots from. Make new accounts jump through a bunch of hoops. Don’t allow new users to create content for a while, and then make them earn that right over time. Shadow ban the crap you can identify so they waste their strength. Reap inactive accounts periodically. And so on.
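As an illustration of the "earn that right over time" part, here is a toy policy function; every tier and threshold is invented for the example, not a recommendation:

```python
DAY = 24 * 3600


def allowed_actions(account_age_seconds: float, approved_posts: int) -> set[str]:
    """Grant abilities gradually as an account ages and builds a track record."""
    actions = {"read"}
    if account_age_seconds > 1 * DAY:
        actions.add("comment")
    if account_age_seconds > 7 * DAY and approved_posts >= 5:
        actions.add("post")
    if account_age_seconds > 30 * DAY and approved_posts >= 25:
        actions.add("create_community")
    return actions


print(allowed_actions(2 * DAY, approved_posts=0))  # read and comment only
```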
An independent bot-catching instance, specially created for flagging and IDing bot accounts, that users can submit suspect accounts to for inspection. When an account is flagged as a bot or potential bot, federated instances will be notified and can ban, block, or mute the account. Instances that want to opt out of this can defederate from the bot-catching instance, and instances with a high rate of bot accounts can themselves be defederated from (a rough sketch of the notification flow follows below).
For slop-posting accounts, my best suggestion is the same idea applied to slop specifically, but it does seem like overkill for that problem.
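A hypothetical sketch of the notification side of that idea: the bot-catching instance sends a JSON flag to each subscribed instance, and each subscriber maps the flag onto its own local action. Every field, endpoint, and threshold here is invented:

```python
import json


def build_flag_notice(account: str, evidence_url: str, confidence: float) -> bytes:
    """Serialize a flag report for delivery to subscribed instances."""
    return json.dumps({
        "type": "BotFlag",
        "account": account,        # e.g. "@spammer@bots.example"
        "evidence": evidence_url,  # link to the public inspection report
        "confidence": confidence,  # 0.0-1.0, per the catcher's reviewers
    }).encode()


def apply_policy(notice: dict, policy: str) -> str:
    """Each subscribing instance maps a flag onto its own local action."""
    if notice["confidence"] < 0.8:
        return "queue_for_human_review"
    return {"strict": "ban", "moderate": "block",
            "lenient": "mute"}.get(policy, "ignore")


notice = json.loads(build_flag_notice(
    "@spammer@bots.example", "https://botcatcher.example/report/1", 0.95))
print(apply_policy(notice, "moderate"))  # block
```

Keeping the final decision on the receiving side preserves the opt-out property described above: an instance that disagrees can ignore flags or defederate from the catcher entirely.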
Pinging @Rimu@piefed.social since they’re doing a great job of working to make the Threadiverse a better place and experience