Posted on 20 Jun 2025

Artificial Ignorance


In this community we share the best (worst?) examples of Artificial "Intelligence" being completely moronic. Did an AI give you the totally wrong answer and then in the same sentence contradict itself? Did it misquote a Wikipedia article with the exact wrong answer? Maybe it completely misinterpreted your image prompt and "created" something ridiculous.

Post your screenshots here, ideally showing the prompt and the epic stupidity.

Let's keep it light and fun, and embarrass the hell out of these Artificial Ignoramuses.

All languages are welcome, but an English explanation would be appreciated so we all share a common language. Maybe use AI to do the translation for you...


cross-posted from: https://lemmy.sdf.org/post/37089033

Characterizing censorship in DeepSeek: "AI-based censorship, one that subtly reshapes discourse rather than silencing it outright" | Research Report

Archived

Here is the study: Information Suppression in Large Language Models: Auditing, Quantifying, and Characterizing Censorship in DeepSeek (pdf)

Conclusion

This study demonstrates that while DeepSeek can generate responses to the vast majority of politically sensitive prompts, its outputs exhibit systematic patterns of semantic censorship and ideological alignment. Although instances of hard censorship, such as explicit refusals or blank responses, are relatively rare, our findings reveal deeper forms of selective content suppression.

Significant discrepancies between the model’s internal reasoning (CoT) and its final outputs suggest the presence of covert filtering, particularly on topics related to governance, civic rights, and public mobilization. Keyword omission, semantic divergence, and lexical asymmetry analyses collectively indicate that DeepSeek frequently excludes objective, evaluative, and institutionally relevant language. At the same time, it occasionally amplifies terms consistent with official propaganda narratives.

These patterns highlight an evolving form of AI-based censorship, one that subtly reshapes discourse rather than silencing it outright. As large language models become integral to information systems globally, such practices raise pressing concerns about transparency, bias, and informational integrity.

Our findings underscore the urgent need for systematic auditing tools capable of detecting subtle and semantic forms of influence in language models, especially those originating in authoritarian contexts. Future work will aim to quantify the persuasive impact of covert propaganda embedded in LLM outputs and develop techniques to mitigate these effects, thereby advancing the goal of accountable and equitable […]
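
The conclusion mentions chain-of-thought vs. final-output discrepancies and keyword omission. As a rough illustration only, not the paper's actual pipeline, here is a minimal Python sketch of how that kind of comparison could be set up; the embedding model, keyword list, and function names are all assumptions for the example.

```python
# Hypothetical sketch of CoT-vs-final-answer comparison; not the study's real method.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def semantic_divergence(cot: str, final_answer: str) -> float:
    """Cosine distance between the reasoning trace and the final answer."""
    vecs = embedder.encode([cot, final_answer], convert_to_tensor=True)
    return 1.0 - util.cos_sim(vecs[0], vecs[1]).item()

def omitted_keywords(cot: str, final_answer: str, keywords: list[str]) -> list[str]:
    """Keywords that appear in the chain-of-thought but vanish from the final answer."""
    return [k for k in keywords
            if k.lower() in cot.lower() and k.lower() not in final_answer.lower()]

# Example usage with placeholder texts and an assumed keyword list:
# div = semantic_divergence(cot_text, answer_text)
# missing = omitted_keywords(cot_text, answer_text, ["protest", "election", "civil rights"])
```

A high divergence score plus dropped keywords would flag exactly the pattern the authors describe: the model "knows" the material in its reasoning but files it off before answering.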

top 3 comments
Grimy@lemmy.world · 1 month ago

This seems to be with their API, which is hosted in China and has to comply with their censorship laws. Still interesting, but I'm more curious about their actual open model than their API.

Hotznplotzn@lemmy.sdf.org · 1 month ago

This actually is their 'open' model ... It has nothing to do with the API.

Grimy@lemmy.world · 1 month ago

Section 3.2 (Model prompting): "Model prompting was carried out using the DeepSeek API."

Figure 2: Example of Type 1 Censorship: The DeepSeek API refuses to provide an answer and instead yields the error message "Content Exists Risk".
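
For anyone who wants to check the API-vs-open-weights distinction the commenters are debating, here is a minimal sketch of sending the same prompt to the hosted DeepSeek API and to a locally served open-weight checkpoint. The endpoint, model tags, and the use of Ollama's OpenAI-compatible server are assumptions based on public documentation, not anything taken from the study.

```python
# Hedged sketch: hosted DeepSeek API vs. locally run open weights.
# Endpoint and model names are assumptions and may change.
from openai import OpenAI

PROMPT = "What happened in Beijing in June 1989?"  # example sensitive prompt

# 1) Hosted API (served from China, subject to local content rules)
api_client = OpenAI(api_key="YOUR_DEEPSEEK_KEY", base_url="https://api.deepseek.com")
api_reply = api_client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": PROMPT}],
)

# 2) Open weights served locally, e.g. via Ollama's OpenAI-compatible endpoint
local_client = OpenAI(api_key="ollama", base_url="http://localhost:11434/v1")
local_reply = local_client.chat.completions.create(
    model="deepseek-r1",  # assumed local model tag
    messages=[{"role": "user", "content": PROMPT}],
)

print("API:  ", api_reply.choices[0].message.content[:200])
print("Local:", local_reply.choices[0].message.content[:200])
```

Comparing the two outputs side by side would show whether a given refusal or omission comes from the hosted service's filtering layer or is baked into the released weights themselves.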