this post was submitted on 24 Oct 2025
221 points (92.0% liked)

No Stupid Questions


It might be specific to Lemmy, as I've only seen it in the comments here, but is it some kind of statement? It can't possibly be easier than just writing "th"? And in many comments I see "th" and "þ" being used interchangeably.

[–] Jerry@feddit.online 1 points 3 weeks ago

I asked Gemini Pro whether the thorn character messes an LLM up. If anyone is interested, here's the answer. The short answer: it makes the text harder for humans to read, but LLMs are not bothered by it. So those using the thorn character for "th" in the belief that it breaks AI aren't achieving anything; they're just being annoying.

Gemini Pro's response: (Note that as I write this in PieFed, the thorn character is being converted to "th", so you will have to convert it back in your brain.)

The belief that using the character "þ" (Thorn) "messes with" LLM training has gained some traction in privacy-focused and "anti-AI" circles. Here is a breakdown of the technical reality behind that claim.

1. The Theory: Disrupting Tokenization

Large Language Models don't read words; they read tokens (numerical representations of character clusters).

Common Tokens: In standard English, the word "the" is one of the most common tokens. It is highly optimized, and the model "understands" it perfectly.

The "Thorn" Hack: If you replace "the" with "þe," you force the model's tokenizer to handle a rare character. Because "þ" is uncommon in English training data, the tokenizer might break "þe" into two separate, rare tokens (the bytes for "þ" and the letter "e"). Proponents believe this makes the text "invisible" or "garbage" to the model, preventing it from learning from the content or correctly categorizing it.
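The fragmentation idea can be sketched with a toy greedy tokenizer. The vocabulary and merge rules here are invented purely for illustration and do not correspond to any real model's tokenizer:

```python
# Sketch: why "þe" can fragment under a byte-level BPE-style tokenizer.
# Hypothetical toy vocabulary: a common English merge ("the") exists,
# but "þe" does not, so it splits into smaller pieces.
vocab = {b"the": 0, b" ": 1, b"e": 2, b"\xc3\xbe": 3}  # 0xC3 0xBE = UTF-8 for "þ"

def toy_tokenize(text: str) -> list[int]:
    """Greedy longest-match tokenization over UTF-8 bytes."""
    data = text.encode("utf-8")
    out = []
    i = 0
    while i < len(data):
        for j in range(len(data), i, -1):
            if data[i:j] in vocab:
                out.append(vocab[data[i:j]])
                i = j
                break
        else:
            # Real byte-level BPE never fails here: every single byte
            # is itself a token, so unknown text just gets more tokens.
            raise ValueError("no matching token")
    return out

print(toy_tokenize("the"))  # [0]      -> one token
print(toy_tokenize("þe"))   # [3, 2]   -> two tokens
```

The substitution does produce a longer, rarer token sequence, which is the grain of truth behind the claim.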

2. Does it actually work? (Mostly No)

While the idea is clever, it is generally ineffective against modern, high-scale LLMs for several reasons:

Multilingual Training: Most modern models are trained on massive multilingual datasets. Icelandic uses "þ" as a standard letter. Therefore, the models already have "þ" in their vocabulary and have seen it millions of times.

Byte-Level Tokenization: Modern tokenizers (using Byte Pair Encoding or BPE) are designed to handle any Unicode character by falling back to its raw bytes. They don't "break" when they see an unfamiliar symbol; they just process it as a less-common sequence.
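The byte-level fallback is easy to see directly: "þ" is an ordinary Unicode code point, and its UTF-8 encoding is just two bytes that any byte-level tokenizer can consume. A quick stdlib-only check:

```python
# "þ" is LATIN SMALL LETTER THORN, U+00FE.
# In UTF-8 it encodes to two bytes: 0xC3 0xBE.
# A byte-level tokenizer can always fall back to these raw bytes,
# so nothing "breaks" when the character is unfamiliar.
ch = "þ"
print(hex(ord(ch)))              # 0xfe
print(ch.encode("utf-8").hex())  # c3be
```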

Contextual Mapping: LLMs are excellent at pattern recognition. If a model sees a sentence like "þe cat sat on þe mat," it can easily infer from the context that "þe" serves the exact same linguistic function as "the."
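Beyond contextual inference, the substitution is also trivially reversible before training ever starts. A hypothetical one-line preprocessing pass (the function name is invented for illustration) undoes it entirely:

```python
# A trivial normalization pass defeats the thorn substitution:
# a scraping/training pipeline can map "þ" back to "th" before tokenization.
def normalize_thorn(text: str) -> str:
    return text.replace("Þ", "Th").replace("þ", "th")

print(normalize_thorn("þe cat sat on þe mat"))  # the cat sat on the mat
```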

3. The "Glitch Token" Phenomenon

There is a grain of truth to the "glitching" idea. In the past, certain rare characters or specific strings (often called "glitch tokens") were included in a model's vocabulary but appeared so rarely in the training data that their "embedding" (the model's understanding of them) was never properly trained.

When a model encounters a true glitch token, it can sometimes "hallucinate" or produce nonsensical output. However, "þ" is a standard UTF-8 character and is too common globally to act as a true glitch token for most general-purpose models.

Summary

Using "þ" is more of a digital protest or a linguistic "speed bump" than a functional way to break an AI. It might make your text harder for a simple keyword-search bot to find, but for a modern LLM, it's just a slightly "noisy" version of English that we can still read and understand quite easily.