this post was submitted on 07 Mar 2025
276 points (97.6% liked)

Buy European


The community to discuss buying European goods and services.


top 12 comments
[–] 30p87@feddit.org 95 points 5 months ago* (last edited 5 months ago)

> Boycott US Products
> Uses ChatGPT

[–] adam_y@lemmy.world 34 points 5 months ago (2 children)

> Prompt to hallucinating?

Do you mean "Prone"?

That is the sort of mistake an LLM would make.

[–] TheEntity@lemmy.world 45 points 5 months ago

This is precisely the sort of mistake an LLM wouldn't make.

[–] Blaze@lemmy.dbzer0.com 18 points 5 months ago* (last edited 5 months ago) (1 children)

Just got distracted (also English isn't my first language)

[–] musubibreakfast@lemm.ee 14 points 5 months ago (1 children)

Your native tongue is Python, you're an LLM. Sorry you had to find out this way.

[–] finitebanjo@lemmy.world 9 points 5 months ago (1 children)

I hope you're not installing these on your phone...?

[–] Blaze@lemmy.dbzer0.com 7 points 5 months ago

Definitely not

[–] deczzz@lemmy.dbzer0.com -2 points 5 months ago (1 children)

The devs are aware. This was a quick-and-dirty prototype, and they already knew the issues with using ChatGPT; they did it to get something working ASAP. In an interview (in Danish), the devs acknowledged this and said they're moving toward an LLM developed in France (I forget the name, but it's irrelevant; the point is that they will drop ChatGPT).

[–] MartianSands@sh.itjust.works 38 points 5 months ago (1 children)

If that's their solution, then they have absolutely no understanding of the systems they're using.

ChatGPT isn't prone to hallucination because it's ChatGPT; it's prone because it's an LLM. That's a fundamental problem common to all LLMs.
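
A quick way to see why it's baked in: an autoregressive LLM's decoding loop always samples *some* token from a softmax distribution over its vocabulary, so there is no built-in "I don't know" outcome unless one is explicitly trained in. Here's a toy sketch of that loop; the vocabulary and probabilities are entirely made up, not a real model API:

```python
import random

# Toy vocabulary and a fake "model" that returns a probability
# distribution over the next token. In a real LLM these numbers come
# from a softmax over logits; here they are simply invented.
VOCAB = ["Paris", "Berlin", "Madrid", "<eos>"]

def fake_next_token_probs(prompt: str) -> list[float]:
    # Even when the model "knows" nothing about the question, the
    # softmax still assigns every token a nonzero probability; the
    # distribution is just flatter. There is no "abstain" outcome.
    if "France" in prompt:
        return [0.90, 0.04, 0.04, 0.02]  # confident and correct
    return [0.30, 0.28, 0.27, 0.15]      # clueless, but still answers

def decode(prompt: str, max_tokens: int = 3) -> str:
    out: list[str] = []
    for _ in range(max_tokens):
        probs = fake_next_token_probs(prompt)
        token = random.choices(VOCAB, weights=probs)[0]
        if token == "<eos>":
            break
        out.append(token)
    return " ".join(out)

print(decode("The capital of France is"))   # almost always "Paris"
print(decode("The capital of Wakanda is"))  # confidently... something
```

The "clueless" branch still returns an answer; it's just sampled from a flatter distribution, which is roughly what a hallucination is.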

[–] DavidGarcia@feddit.nl 0 points 5 months ago (1 children)

Phi-4 is the only one I'm aware of that was deliberately trained to refuse instead of hallucinate. It's mind-blowing to me that that isn't standard; everyone is trying to maximize benchmarks at all costs.

I wonder if diffusion LLMs will hallucinate less, since they inherently have error correction built into their inference process.
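
The intuition can be sketched in a few lines: a diffusion-style decoder keeps a complete draft and revisits every position on each denoising step, so an early mistake can be overwritten later, whereas an autoregressive decoder commits to each emitted token permanently. A toy illustration follows; the vocabulary, the "plausibility" check, and the convergence target are all invented stand-ins, where a real diffusion LM would score positions jointly with a learned model:

```python
import random

# Toy illustration of "error correction at inference time": the
# decoder holds a full draft and revises it step by step, so no
# single sampling mistake is permanent.
VOCAB = ["the", "cat", "sat", "on", "mat"]
TARGET = ["the", "cat", "sat", "on", "mat"]  # stand-in for "high-likelihood text"

def denoise_step(draft: list[str]) -> list[str]:
    # Resample the first implausible position. Here "plausible" simply
    # means "matches TARGET"; a real model would score this itself.
    for i, tok in enumerate(draft):
        if tok != TARGET[i]:
            fixed = draft.copy()
            fixed[i] = random.choice(VOCAB)  # a revision, not a commitment
            return fixed
    return draft

draft = [random.choice(VOCAB) for _ in TARGET]  # start from pure noise
for _ in range(100):
    draft = denoise_step(draft)
print(draft)  # usually equals TARGET: errors keep getting revised away
```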

[–] MartianSands@sh.itjust.works 3 points 5 months ago

Even that won't be truly effective. It's all marketing at this point.

The problem of hallucination really is fundamental to the technology. If there's a way to prevent it, it won't be as simple as training the model differently.