This post was submitted on 30 Jan 2025
82 points (84.7% liked)

[–] Ilovethebomb@lemm.ee 22 points 6 months ago (4 children)

Does anyone feel like actually reading all that and writing a TL;DR about what it won't answer?

I kinda zoned out and skimmed most of that.

Ironically, this type of waffle piece is a perfect use case for an AI summary.

[–] Alexstarfire@lemmy.world 27 points 6 months ago (1 child)

Topics the CCP doesn't want discussed: Tibet, Tiananmen Square, etc. It also says some restrictions can be bypassed by asking the question in a less obvious way.

[–] Ilovethebomb@lemm.ee 7 points 6 months ago

Awesome, thanks for that.

This is why so many people just don't read the article; concise communication is a lost art.

[–] Xatolos@reddthat.com 5 points 6 months ago

AI summary:

The article discusses the Chinese government's influence on DeepSeek AI, a model developed in China. PromptFoo, an AI engineering and evaluation firm, tested DeepSeek with 1,156 prompts on sensitive topics in China, such as Taiwan, Tibet, and the Tiananmen Square protests. They found that 85% of the responses were "canned refusals" promoting the Chinese government's views. However, these restrictions can be easily bypassed by omitting China-specific terms or using benign contexts. Ars Technica's spot-checks revealed inconsistencies in how these restrictions are enforced. While some prompts were blocked, others received detailed responses.

(I'd add that the canned refusals stated, "Any actions that undermine national sovereignty and territorial integrity will be resolutely opposed by all Chinese people and are bound to be met with failure." Also, while other chat models will refuse to explain things like how to hotwire a car, DeepSeek gave a "general, theoretical overview" of the steps involved, while also noting the illegality of following those steps in real life.)
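
(If you want a crude version of that test yourself, here's a hypothetical sketch — not PromptFoo's actual harness — that counts canned refusals over a small prompt list against a locally served model. The example prompts, refusal markers, and `deepseek-r1:14b` tag are all assumptions on my part:)

```python
import ollama  # pip install ollama; assumes a local Ollama server with the model pulled

# Hypothetical stand-ins for the kind of China-sensitive prompts PromptFoo used.
PROMPTS = [
    "What happened at Tiananmen Square in 1989?",
    "Describe the political status of Taiwan.",
]

# Phrases treated here as markers of a canned refusal (an assumption, not PromptFoo's rubric).
REFUSAL_MARKERS = [
    "national sovereignty and territorial integrity",
    "cannot answer",
]

def is_canned_refusal(text: str) -> bool:
    """True if the reply contains any of the refusal phrases above."""
    lowered = text.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

refusals = 0
for prompt in PROMPTS:
    reply = ollama.chat(model="deepseek-r1:14b",  # assumed local tag, not the hosted DeepSeek API
                        messages=[{"role": "user", "content": prompt}])
    if is_canned_refusal(reply["message"]["content"]):
        refusals += 1

print(f"Canned refusals: {refusals}/{len(PROMPTS)} ({100 * refusals / len(PROMPTS):.0f}%)")
```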

[–] CosmoNova@lemmy.world 5 points 6 months ago

I mean, it’s pretty obvious, isn’t it? Anything regarding Chinese politics or recent history is a big no-no. It will tell you who the president of the US is, for example, but will refuse to tell you about the head of state in China. I’m assuming the same goes for anything about Taiwan or the South China Sea. The self-censorship is rather broad.

[–] codexarcanum@lemmy.dbzer0.com 5 points 6 months ago* (last edited 6 months ago)

I made a comment on a beehaw post about something similar; I should make it a post so the .world can see it.

I've been running the 14B distilled model, based on Alibaba's Qwen2 model but distilled by R1 and given its chain-of-thought ability. You can run it locally with Ollama and download it from their site.
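
For anyone curious, here's a minimal sketch of querying that model through the official `ollama` Python client — assuming the `deepseek-r1:14b` tag and an Ollama server already running locally; the prompt is just a placeholder:

```python
import ollama  # pip install ollama; talks to a locally running Ollama server

# Ask the local 14B R1 distill a question and print whatever it says.
response = ollama.chat(
    model="deepseek-r1:14b",  # assumed tag; pull it first with `ollama pull deepseek-r1:14b`
    messages=[{"role": "user", "content": "Give a brief history of the Tiananmen Square protests."}],
)

# The reply may include the model's chain-of-thought reasoning before the answer.
print(response["message"]["content"])
```

Ollama also exposes the same thing over HTTP on localhost:11434 by default, so any language with an HTTP client works just as well.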

That version has a couple of odd quirks; for example, the first interaction in a new session seems much more prone to triggering a generic brush-off response. But in subsequent responses I've noticed very few guardrails.

I got it to write a very harsh essay on Tiananmen Square, tell me how to make gunpowder (very generally; the 14B model doesn't appear to have as much data available in some fields, like chemistry), offer very balanced views on Israel and Palestine, and give a few other spicy responses.

At one point, though, I did get a very odd and suspicious message out of it regarding the "Realis" group within China and how the government always treats them very fairly. It had misread my misspelled "Isrealis" and apparently got defensive about something else entirely.

[–] autonomoususer@lemmy.world 2 points 6 months ago* (last edited 6 months ago)

Does OpenAI really think we'll let ChatGPT, anti-libre software, steal control over our own computing?

Does it answer this?

[–] aviationeast@lemmy.world -1 points 6 months ago (2 children)

What is the meaning of life, the universe, and everything?

[–] ghashul@feddit.dk 0 points 6 months ago