That’s the great thing about open models. Censorship? Once identified, all it takes is one person and a bit of cash to get rid of it, though it seems Perplexity did a particularly good job (unlike some “abliterated” models that are pretty dumbed down).
Can't wait to try a distillation. The full model is huge.
In the 32B range? I think we have plenty of uncensored thinking models there, maybe try fusion 32B.
I'm not an expert though; models trained from base Qwen have been sufficient for that, in my experience.
I just want to mess with this one too. I had a hard time finding an abliterated one before that didn't fail the Tiananmen Square question regularly.
Great. Has it also removed American censorship and propaganda?
I believe this is what was added
Why would a Chinese-made AI have American censorship and propaganda in it?
They can add stuff too. At least it seems so; this model still gives biased answers, just more in favor of the US... So who knows?
Not removed, replaced.
Also, stop calling the release of binary blobs of weights "open source".
It's honestly not that big a deal, as it's not like knowing anything about how it was trained (beyond the config) would help you modify it. It's still highly modifiable. It's not like anyone can afford to replicate it.
It would be nice to publish the hyperparameters for research purposes, but... shrug.
I think a subset of the exact training data/hyperparameters would help with quantization-aware-training, maybe, but that's all I got.
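For context, the core op in quantization-aware training is "fake quantization": rounding weights to a low-bit grid during training so the model learns to tolerate the rounding error. Here's a minimal sketch of symmetric int8 fake-quantization in plain Python; the function name and values are illustrative, not from any real training pipeline.

```python
def fake_quantize(weights, num_bits=8):
    """Round each weight to a symmetric num_bits grid, then back to float.

    The result is still a float, but it only takes values the quantized
    model could represent, so training "sees" the quantization error.
    """
    qmax = 2 ** (num_bits - 1) - 1            # 127 for int8
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) * scale for w in weights]

# Weights near the grid survive almost unchanged; the max sets the scale.
print(fake_quantize([0.8, -0.31, 0.02, -1.27]))
```

The point of the comment above is that doing this well depends on seeing data and hyperparameters close to what the original training used, which is why even a subset would help.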
Ctrl + F
Find: Chinese
Replace: God-damned Chinese
New model's ready!
I've been running an uncensored version on my PC for weeks; there are multiple ones on HuggingFace.
Not full R1, which is a different model from any of the distillations.
IDK, but this seems like wankery to me. Just google it if you want to know about it; the AI isn't an "all-knowing being" nor "the arbiter of truth".
I have a feeling that a new logical fallacy will soon emerge (if it isn't already widespread in certain corners of the internet): "X is true because the LLM said so."
It's really an extension of "Would someone really do that? Just lie on the Internet?" But now it's "Would AI, which is built to create content like what people post on the Internet, really just lie?"
Seems like almost everyone understands that it hallucinates.