this post was submitted on 13 Feb 2026
-14 points (20.8% liked)

Technology


I know this topic, like most online topics, is driven more by emotions than facts. But here we are. You probably don't understand how Discord's new policy is intended to help protect children, because you aren't trained to think like a child predator. (That's fine.) I've had to take a lot of child-safety training over the years for professional reasons. Here's how online child predators work.

They start by finding a kid with a secret. Just as a lion will generally go after the weakest gazelle, child predators go after vulnerable kids.

They find the kids with a secret and say, "Hey, want to see some porn?" and of course the kid is curious. They don't start with anything too extreme; this is a gradual process for them. But they will tell the kid, "Be sure you don't tell your parents about this. This is our secret." Then they slowly pull the kid into deeper and deeper secrets and start to blackmail them. They demand that the kid send them nude photos. They trap the kid in deeper and deeper secrets and guilt to get more and more out of them. In the worst cases this ends in in-person meetups with the predator.

The easiest places for predators to start this process are porn communities that kids are already visiting in secret to begin with, especially the ones on Discord, where messaging between users is the main feature. Those are the kids who are most vulnerable.

So how is Discord's policy supposed to protect kids? The goal is to keep vulnerable kids out of the spaces where they would be targeted in the first place.

So there you go. I'm all ears for how to do this better. That's one beef I have with the EFF right now: they offer no alternative solutions to this. They just don't want any kind of protections at all.

top 48 comments
[–] hendrik@palaver.p3x.de 2 points 13 hours ago* (last edited 12 hours ago)

Isn't this an uneducated take as well? I mean, to me it sounds like it's making the situation worse for everyone. If you strip teens of access (and do a half-assed job of it), they're likely going to find the next, more depraved place to do the same thing. Which will be more illegal. And they're going to be more exposed and more vulnerable... We can see this with the newer social media laws in Australia, for example. And they're probably less motivated to talk about this to adults, or get help, once they feel they did something wrong.

And then we have the issue with the adults as well. They can no longer talk about valid topics. And they also have to pull down their pants and upload their biometric data, name and ID to some shady companies. And we know this ends up in some large database which will in turn get leaked and shared with third parties. So they'll be more vulnerable as well, both to hackers and to the dark-enlightenment people like Peter Thiel and his political friends.

We also know from speaking to experts in the field, like police staff, how surveillance measures like Chatcontrol are utterly ineffective against the actual criminals. And not only that: they flood investigators with false positives, so they have less time to do their actual job. It actively takes away from the supposed cause.

I mean, these things frequently sound great on some emotional level, until you think about them for two minutes or look at the hard facts and numbers after implementing them. Why an opaque procedure run by a private company, one that also involves guesswork and shady practices? Wouldn't we instead need effective means to protect minors?

And I don't think the new Discord procedures do what you think they do. I think it's about limiting NSFW groups (and content). There's nothing stopping predators from grooming kids. They can still talk to each other after March. In fact, they call it "teen by default", so the adult groomer will now be indicated as a teen as well unless they verify. So that makes it worse?! How is there something "in defense of" that?

Edit: I mean, I'd love to actually do something for young people. But that would have to be something that actually tackles the issue, not something that makes the situation even worse for them... I've had some ideas on how to make the Fediverse cater more to young people and provide a safe space for them. But that requires an entirely different approach.

[–] baronvonj@piefed.social 8 points 1 day ago (1 children)

When Roblox rolled this out, kids were using AI tools to pass the face scan. Kids can easily sneak their parents' ID and upload it in secret. Predators can use the same tactics. So we haven't gained any security, but some people will end up having their real data exposed.

The only real solution is for the parents/guardians to be engaged and involved in their children's lives. That means I have to occasionally invade their privacy and look at their communications, but more importantly it means I have to tell my children about the predators and the kinds of things predators will say to them. It's impossible to child-proof the world around us; we have to world-proof the children themselves.

[–] 1dalm@lemmings.world 0 points 1 day ago* (last edited 1 day ago) (1 children)

The only real solution is for the parents/guardians to be engaged and involved in their children's lives.

I don't agree with that. It's not all on the parents. It can't be all on the parents.

This is like if the Boy Scouts said "Hey, it's not our responsibility to protect kids. The parents should have been more involved." No, if you are providing the service then it's your responsibility to make sure that service is safe.

And yes, I believe you should be held accountable for the services you provide.

[–] baronvonj@piefed.social 7 points 1 day ago

The problem with your comparison is that you're physically sending your kids off with other people, and there are physical limitations on participation. Of course it's incumbent on the organization employing those people to make sure they're trustworthy (same for the adults who volunteer to be responsible for the children). In the same way, Discord is responsible for making sure that their own employees aren't abusing and preying on the users. The Scouts are expected to investigate any reports of abuse by their employees, scouts, or volunteering adults, just as Discord is expected to investigate any such reports too.

There are absolutely things to hold digital platforms accountable for to make things safer. But face scans and uploading government ID don't accomplish that. We hold platforms to account by auditing their response to abuse reports and any failures in their privacy and security controls, and if they can't manage that, then they should be dissolved.

Even just since I posted my comment, there's now this new link making my front page on piefed

https://www.404media.co/free-tool-says-it-can-bypass-discords-age-verification-check-with-a-3d-model/

Residential IPs aren't static. VPNs exist. Residential ISP proxy services exist. Cloud providers exist on every inhabited continent. Tor exists. Determined predators and bad actors will get onto the platforms and can get verified into whatever age group they want.

If we pretend otherwise instead of educating the children on how to recognize predatory behavior then we haven't protected them at all.

[–] Lembot_0006@programming.dev 12 points 1 day ago* (last edited 1 day ago) (1 children)

Or maybe it would be healthier (and cheaper and more useful) just to teach kids not to share "secrets" with shady people on the Internet?

P.S. And what about non-vulnerable kids? What about adults? All are victims now!

[–] 1dalm@lemmings.world -3 points 1 day ago

It's the victim's fault!

[–] hesh@quokk.au 7 points 1 day ago (1 children)

Instead of removing privacy from the internet, what if parents didn't just shove an iPad in their kid's face and walk away?

[–] 1dalm@lemmings.world 1 points 1 day ago (1 children)

Of course parents also have a responsibility to keep their children safe. But it can't ONLY be on the parents. The platforms need accountability too.

[–] hesh@quokk.au 5 points 1 day ago* (last edited 23 hours ago)

I deserve the right to use the internet without identifying myself, as has always been the case.

[–] irate944@piefed.social 8 points 1 day ago* (last edited 1 day ago)

I'm firmly against what Discord is doing (and what governments like the UK and Australia, and soon others, are doing as well).

The main reason is distrust. I do not trust that they, or anyone, would use this data responsibly and only for its intended purpose.

While I do not doubt that these measures could protect more children, I also do not doubt that they will be abused. Businesses will violate whatever privacy we still have left in order to get more money from info-brokers and ad companies, and governments will use it for control. The US has been proving this with ICE, which has been using Flock to target people.

That's why I always roll my eyes whenever these kinds of measures are introduced. They always come with "think of the children!" right beside them.

There's a reason why Apple, years ago, refused to develop a backdoor for iPhones when the FBI requested/ordered them to. There's just no proper way to prevent abuse of a backdoor. Yesterday they wanted to check a criminal's phone; tomorrow they may want to target an annoying journalist.

Same principle with this tracking. Once Discord (or anyone else) can tell that your account belongs to you (an IRL entity), there's nothing you can do to prevent them from abusing that knowledge. Let's assume that today they use this new system for its intended purpose; who's to say that tomorrow they will?

Not to mention the data breach Discord suffered last year, in which around 70k proof-of-age IDs were leaked. So not only do you have to worry about Discord, you also have to worry about whoever else may get their hands on your info.

Don't get me wrong, we NEED to improve the safety of children on the internet. I fully support doing this via education, improving parental controls, maybe even banning children from social media apps until a certain age, etc.

But abusing our privacy rights is not it.

[–] schwim@piefed.zip 7 points 1 day ago

That's one beef I have with the EFF right now. They offer no alternative solutions to this. They just don't want any kind of protections at all.

That's because public discourse is saturated by the extremes at both ends of the spectrum. The part of society that feels there could be some solution in the middle is drowned out by Stallman zealots who feel that diddling kids is bad, but that you shouldn't be kept from doing it if preventing it requires any information about you, while the corporate sycophants cry that they can't protect THE KIDS unless they have every piece of information on you from birth onwards.

I view it the same as politics. I'll let the others behave as if they have a say in it and argue for or against. I'll just quit using Discord when they require verification, as I will (and have) for any other corporation that does the same. Not because I don't think verification of some kind would help protect everyone from additional risk, but because providing my very sensitive information to corporations that both exploit my data and pass it on to hackers and third parties is not something I am willing to do.

[–] Hond@piefed.social 5 points 1 day ago* (last edited 1 day ago) (1 children)

The main issue I personally see is that I can't trust big tech. Ever. Especially not with my biometric data or my government ID. If my government had an online service where I could verify my age and in return get some kind of official but anonymized hash/string that confirms my age to third parties, I wouldn't mind at all.
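Purely to illustrate what I mean, here's a minimal sketch of that idea. Everything below is an assumption on my part (the issuer, token format and key handling are made up, not any real government API): the authority checks your ID once out of band, hands back a short-lived signed claim that only says "over 18", and a site verifies the signature against the authority's public key without ever learning who you are.

```python
# Illustrative sketch only (hypothetical issuer and token format, not a real API).
# Uses the third-party "cryptography" package for Ed25519 signatures.
import json, os, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

issuer_key = Ed25519PrivateKey.generate()   # held only by the issuing authority
issuer_pub = issuer_key.public_key()        # published so anyone can verify tokens

def issue_age_token() -> dict:
    """Issued only after the authority has checked the user's ID out of band."""
    claim = {
        "over_18": True,
        "nonce": os.urandom(16).hex(),       # random per issuance, so tokens aren't linkable
        "expires": int(time.time()) + 3600,  # short-lived to limit sharing and resale
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "sig": issuer_key.sign(payload).hex()}

def verify_age_token(token: dict) -> bool:
    """What a site would run: it sees no name, face or ID, only a signed yes/no claim."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    try:
        issuer_pub.verify(bytes.fromhex(token["sig"]), payload)
    except InvalidSignature:
        return False
    return bool(token["claim"]["over_18"]) and token["claim"]["expires"] > time.time()

print(verify_age_token(issue_age_token()))  # True
```

The nonce and the short expiry are the point: the site only learns "this person is over 18, says the issuer" and nothing else, which is exactly the property the upload-your-face-and-ID approach doesn't have.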

Instead, the whole world is hell-bent on deanonymizing everyone on the net while the political landscape in most countries leans more and more towards authoritarianism. That's a pretty shitty combination in my book.

With all that said, more safeguards for children would be great. But why not start with the inherent issues of our current online services, where profit stands above all? Most moderation on the biggest services is just algorithms, "AI" and outsourced workforces in the third world hired as sub-sub-sub-sub contractors. Sure, you can't moderate hundreds of millions of users without any automation. But cutting into the profits a bit by employing more actual people would probably help a lot already. Emphasis on probably, idk, I'm just a random asshole on the internet.

/Also, it's not great to start your post by telling me that I'm an emotional dumb-dumb if you want to convince me. While I don't agree with everything you said, there were a few new-to-me insights, which I almost didn't read because you came off as an annoying know-it-all in the very first sentence.

[–] 1dalm@lemmings.world 1 points 1 day ago (1 children)

"Main issue i personally see is that i cant trust big tech. Ever."

Me neither. And a big part of the reason I personally don't trust them is that they advertise all these "services" to parents and kids and then only provide any sort of child protection when governments force them to.

(You want to see me really get heated? Get me started on youth sports!)

[–] Skavau@piefed.social 5 points 1 day ago (1 children)

There are far more reasons not to trust them, but your grievances here would absolutely force all of us onto big tech, as all smaller forums and communities would be forced to shut down.

[–] 1dalm@lemmings.world 1 points 1 day ago (1 children)

I don't believe that is necessarily correct. I think the fediverse community can figure out a way to police itself, and it does so pretty well already. One easy option is that "blocking" people fast and early is generally accepted, and done at the server level. Another is that things are generally more public here; there is less opportunity to pull people into quiet corners alone.

[–] Skavau@piefed.social 1 points 1 day ago (1 children)

How can the fediverse police itself here?

Let us know what you propose.

[–] 1dalm@lemmings.world 1 points 1 day ago (1 children)

The easiest is to keep everything public. Just don't allow 1-to-1 communication at all.

That would be enough to scare off most predators.

[–] Skavau@piefed.social 1 points 1 day ago (1 children)

The easiest is to keep everything public. Just don’t allow 1-to-1 communication at all.

I think removing private messaging would be very unpopular on here. So that's not going to happen.

[–] 1dalm@lemmings.world 0 points 1 day ago (1 children)

Probably. So the community needs to figure out how to offer the services they want to have while also protecting children.

[–] Skavau@piefed.social 1 points 1 day ago (1 children)

So what do you propose then exactly? Private messaging isn't going anywhere.

[–] 1dalm@lemmings.world 0 points 1 day ago (1 children)

Well, first I would recommend that server hosts who "can't afford to protect children" be much more careful about who they let onto their personal network.

Second, I would recommend the developer community start treating this problem seriously and use the power of the open-source development process (which is really good at finding creative solutions to problems) to make this a development priority.

[–] Skavau@piefed.social 1 points 1 day ago* (last edited 1 day ago) (1 children)

I don't think the powers-that-be care about the Fediverse at all bro. Legitimately.

It's far too small. Moreover, in the terms you refer to, it is actually fairly well moderated.

[–] 1dalm@lemmings.world 1 points 1 day ago (1 children)

I'm only going to say this one more time.

They 100% will care.

[–] Skavau@piefed.social 1 points 1 day ago

Only in some unique situation where child porn is shared and the instance it's on does nothing about it.

But they don't do that.

[–] Skavau@piefed.social 3 points 1 day ago (1 children)

Should every single platform online be compelled to implement age-ID?

[–] 1dalm@lemmings.world 0 points 1 day ago (1 children)

I'm open to better alternative ideas, but I really haven't heard any.

But yes. Every single platform that offers kids the opportunity to interact with other users, especially strangers, should have some kind of protections. I think the platforms themselves should be held accountable for what happens on their platforms, just like the courts have held that the Boy Scouts and the Catholic Church are responsible for protecting the kids they serve. Discord doesn't get a pass.

[–] Skavau@piefed.social 4 points 1 day ago (1 children)

I’m open to better alternative ideas, but I really haven’t heard any.

Can you tell me how it is logistically viable to conscript tens of thousands of websites into implementing age-ID?

You do realise you're interacting on a platform that would shut down if they had to do this because they can't afford it.

[–] 1dalm@lemmings.world 0 points 1 day ago (1 children)

"You do realise you’re interacting on a platform that would shut down if they had to do this because they can’t afford it."

Yes. And I also believe the fediverse community should take this problem more seriously than it currently does, and not just wait until the government forces them to take it seriously.

One big difference is that the fediverse generally isn't marketing itself to kids to bring them onto the network, as opposed to other networks that directly market themselves to kids to get them locked in at young ages.

[–] Skavau@piefed.social 2 points 1 day ago (1 children)

Yes. And I also believe the fediverse community should take this problem more seriously than it currently does, and not just wait until the government forces them to take it seriously.

How would the government do that? The Forumverse has 40k members (which is tiny) and it's split up into over 100 instances.

Who do they try and talk to?

How can the Fediverse "take it seriously" when they simply can't afford to?

[–] 1dalm@lemmings.world 0 points 1 day ago (1 children)

Honestly, saying "we can't afford to take it seriously" is exactly what gets organizations in trouble.

You can't afford not to.

[–] Skavau@piefed.social 3 points 1 day ago (1 children)

Honestly, saying “we can’t afford to take it seriously” is exactly what gets organizations in trouble.

The fediverse isn't an organisation.

As I asked: How would the government do that? The Forumverse has 40k members (which is tiny) and it’s split up into over 100 instances.

[–] 1dalm@lemmings.world 0 points 1 day ago (1 children)

You wouldn't have to treat it like an organization. Go after individual hosts. If a police investigation found that a forumverse host was providing an opportunity for child predators to use their system to blackmail kids into sending nude photos of themselves, then I think the host, the individual, should be held responsible for what happens on their server, just like they would be held responsible if it happened in their house.

[–] Skavau@piefed.social 1 points 1 day ago (1 children)

You unironically think that governments are going after hosts that have, in many cases, less than 1000 active monthly users purely because they don't have age-ID services on their platform?

[–] 1dalm@lemmings.world 0 points 1 day ago (1 children)

100% they would. Yeah.

If child pornography was found to be stored on a host's server by one of their 1000 users, "I didn't think you guys would care about a platform with less than 1000 monthly users" isn't going to be a great argument in courts.

[–] Skavau@piefed.social 2 points 1 day ago (1 children)

100% they would. Yeah.

How would they know?

If child pornography was found to be stored on a host’s server by one of their 1000 users, “I didn’t think you guys would care about a platform with less than 1000 monthly users” isn’t going to be a great argument in courts.

You're talking here specifically about child pornography. Not just not age-verifying users to access 'adult' content. No server, to my knowledge, allows this.

[–] 1dalm@lemmings.world 1 points 1 day ago (1 children)

How would they know?

Well, if, and really when, a predator is caught by the police, that police department will do a full investigation and find all the places they were having communications with kids. Sooner or later, one will be found to be using Lemmy. On that day, the host is going to need a good lawyer.

It's not enough to "not allow this". A person that allows anonymous strangers to use their servers to store information in secret is asking for trouble. They need to take much more care than that.

And I never said that age-verification is the only solution to this problem. >>

[–] Skavau@piefed.social 2 points 1 day ago (1 children)

It’s not enough to “not allow this”. A person that allows anonymous strangers to use their servers to store information in secret is asking for trouble. They need to take much more care than that.

What extra care should they take beyond deleting it when they find it? Which they do.

And I never said that age-verification is the only solution to this problem. >>

Remember, I originally started this chain by asking you if every single site online should be forced to implement age-ID and you said yes.

[–] 1dalm@lemmings.world 1 points 1 day ago (1 children)

Remember, I originally started this chain by asking you if every single site online should be forced to implement age-ID and you said yes.

Fair. But I really meant that every network should have policies in place, with age verification being one option. Elsewhere in this thread you'll see that I offer alternative solutions, such as simply keeping everything public and not allowing 1-to-1 messaging.

[–] Skavau@piefed.social 1 points 1 day ago (1 children)

Everything on the fediverse is publicly viewable (although Piefed has private-community capability now), but banning DMs is pretty unacceptable, really.

[–] 1dalm@lemmings.world 1 points 1 day ago (1 children)

Not exactly, and not for long. Mastodon, for example, is working on end-to-end encryption in messages. Matrix is also private by design.

And again, it's not that I think end-to-end encrypted one-to-one messaging is bad. But if you are going to offer it then you need to be held responsible for it.

[–] Skavau@piefed.social 1 points 1 day ago (1 children)

Not exactly, and not for long. Mastodon, for example, is working on end-to-end encryption in messages. Matrix is also private by design.

I meant publicly viewable in the sense of being viewable by the wider audience. Excluding private messages specifically here.

And again, it’s not that I think end-to-end encrypted one-to-one messaging is bad. But if you are going to offer it then you need to be held responsible for it.

So what do you propose then?

[–] 1dalm@lemmings.world 0 points 1 day ago (2 children)

ID and age verification for users.

That's not the only solution, and I've offered several others. And I'm also not the only one with ideas. But completely frictionless encrypted anonymous one-to-one communication is probably not going to last much longer. And shouldn't.

[–] Skavau@piefed.social 3 points 1 day ago

ID and age verification for users.

Unaffordable. Not going to happen. This is what would kill the independent internet.

[–] Skavau@piefed.social 2 points 1 day ago (1 children)

Also everyone here unironically thinks you're a shill. You're coming to a federated platform promoting big-tech, big-government controls.

[–] 1dalm@lemmings.world 1 points 1 day ago (1 children)

I don't care.

If I can get some people on this site to start thinking about child safety in new ways that will be a win for me today.

[–] Skavau@piefed.social 1 points 1 day ago (1 children)

So you admit you came here to shill for this

[–] 1dalm@lemmings.world 1 points 1 day ago (1 children)

Do I admit that I came here to shill for child safety policies?

Guilty as charged.

[–] Skavau@piefed.social 1 points 1 day ago

Yeah, so people are going to be kinda hostile due to that. You have no real interest in the fediverse.