this post was submitted on 08 Aug 2025
271 points (94.1% liked)

196


Community Rules

You must post before you leave

Be nice. Assume others have good intent (within reason).

Block or ignore posts, comments, and users that irritate you in some way rather than engaging. Report if they are actually breaking community rules.

Use content warnings and/or mark as NSFW when appropriate. Most posts with content warnings likely need to be marked NSFW.

Most 196 posts are memes, shitposts, cute images, or even just recent things that happened, etc. There is no real theme, but try to avoid posts that are very inflammatory, offensive, very low quality, or very "off topic".

Bigotry is not allowed, this includes (but is not limited to): Homophobia, Transphobia, Racism, Sexism, Ableism, Classism, or discrimination based on things like Ethnicity, Nationality, Language, or Religion.

Avoid shilling for corporations, posting advertisements, or promoting exploitation of workers.

Proselytization, support, or defense of authoritarianism is not welcome. This includes but is not limited to: imperialism, nationalism, genocide denial, ethnic or racial supremacy, fascism, Nazism, Marxism-Leninism, Maoism, etc.

Avoid AI generated content.

Avoid misinformation.

Avoid incomprehensible posts.

No threats or personal attacks.

No spam.

Moderator Guidelines

  • Don’t be mean to users. Be gentle or neutral.
  • Most moderator actions which have a modlog message should include your username.
  • When in doubt about whether or not a user is problematic, send them a DM.
  • Don’t waste time debating/arguing with problematic users.
  • Assume the best, but don’t tolerate sealioning/just asking questions/concern trolling.
  • Ask another mod to take over cases you struggle with, if you get tired, or when things get personal.
  • Ask the other mods for advice when things get complicated.
  • Share everything you do in the mod matrix, both so several mods aren't unknowingly handling the same issues and so you can receive feedback on what you intend to do.
  • Don't rush mod actions. If a case doesn't need to be handled right away, consider taking a short break before getting to it. This is to say, cool down and make room for feedback.
  • Don’t perform too much moderation in the comments, except if you want a verdict to be public or to ask people to dial a convo down/stop. Single comment warnings are okay.
  • Send users concise DMs about verdicts that concern them, such as bans, except in cases where it is clear we don’t want them at all, such as obvious transphobes. There is, of course, no need to notify someone that they haven’t been banned.
  • Explain to a user why their behavior is problematic and how it is distressing others rather than engage with whatever they are saying. Ask them to avoid this in the future and send them packing if they do not comply.
  • First warn users, then temp ban them, then finally perma ban them when they break the rules or act inappropriately. Skip steps if necessary.
  • Use neutral statements like “this statement can be considered transphobic” rather than “you are being transphobic”.
  • No large decisions or actions without community input (polls or meta posts, for example).
  • Large internal decisions (such as ousting a mod) might require a vote, needing more than 50% of the votes to pass. Also consider asking the community for feedback.
  • Remember you are a voluntary moderator. You don’t get paid. Take a break when you need one. Perhaps ask another moderator to step in if necessary.

all 27 comments
[–] LostXOR@fedia.io 30 points 3 days ago (1 children)
>>> "blueberry".count('B')
0
[–] haungack@lemmy.dbzer0.com 1 points 1 day ago

Likewise, if you instruct the AI to break the word down into letters, one per line, first, it gets the count right more often. I think that's the point the post is trying to make.

The letter-counting issue is actually a fundamental problem of whole-word or subword tokenization that's had an obvious solution since ~2016, and I don't get why commercial AI won't implement it. Probably because it's a lot of training code complexity (but not much compute) for solving a very small problem.
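
For a concrete picture of the tokenization issue, here is a minimal sketch; the subword split below is hypothetical rather than the output of any real tokenizer:

>>> word = "blueberry"
>>> tokens = ["blue", "berry"]  # hypothetical subword split a tokenizer might produce
>>> word.count("b")             # character-level view: counting letters is trivial
2
>>> tokens.count("b")           # token-level view: "b" never appears as its own token
0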

[–] HappyFrog@lemmy.blahaj.zone 83 points 4 days ago (3 children)

Are they trying to say that AI haters are equivalent to people who can't code?

[–] pennomi@lemmy.world 95 points 4 days ago (2 children)

I suspect it’s more like “use the tool correctly or it will give bad results”.

Like, LLMs are a marvel of engineering! But they’re also completely unreliable for use cases where you need consistent, logical results. So maybe we shouldn’t use them in places where we need consistent, logical results. That makes them unsafe for use in most business settings.

There are like twelve genuine use cases, and because of the cult of the LLM bro, nine of those are negated by weird blind faith. Two more are crimes against humanity.

[–] StrixUralensis@tarte.nuage-libre.fr 16 points 4 days ago (1 children)

Yeah, they should be used to generate text

[–] Sekoia@lemmy.blahaj.zone 34 points 4 days ago (3 children)

Not even; they should be used to interpret/process natural language and maybe generate some filler things (smart defaults, etc.; a good use is generating titles for things). They're also very good at translation.

The more text an LLM has to generate, the worse it does, and the less it can base itself on real text, the less likely it is to get things right.

[–] Swedneck@discuss.tchncs.de 13 points 4 days ago

LLMs are basically optimized for making newspapers to put in the background of games: put some relevant stuff in the prompt and it'll shit out text that's sensible enough that players can skim things in the world and actually feel immersed.

[–] TriflingToad@sh.itjust.works 8 points 4 days ago* (last edited 4 days ago)

The streamer DougDoug has come up with some really good uses for AI in entertainment; he's really creative in the ways he uses it. For example, he had a dream where he was playing a rage game (a really difficult Mario Maker world) and an AI would process everything he says and punish him if it decided he was being negative.

Or he would send every chat message from the last 30 seconds to an LLM to get the top 3 answers for a game of Family Feud, where he had to guess the top 3 things his chat said for a question like "what do you do in GTA that you wish you could do IRL", and it would summarize all the slightly different answers into 3 categories.

Or simply as entertainment for his chat to play with while he plays a game: a "joke bot" would rate the jokes people told it, decide whether they were funny or not, and ban them for a million seconds if the AI decided a joke was unfunny.

He also just codes and makes stuff that's not AI. He mapped Obama's hand movements from a UN meeting to control Mario Party against his chat, with him trying to control it while only using a voice-to-text program.

I'll admit I once used an LLM to generate a comparison between the specs of three printers. It did a great job, but doing it myself is still faster and doesn't make me feel dirty.

[–] uranibaba@lemmy.world 45 points 4 days ago (1 children)

I understood this as a vibe coder trying Python, the vibe-coding way.

[–] StarvingMartist@sh.itjust.works 29 points 4 days ago (1 children)

I'm not sure of their intent, but I follow this guy on Bluesky; he's super pro open source and worked on a bunch of Google projects back in the day, so I think if anything he might just be making fun of vibe coders.

[–] uranibaba@lemmy.world 4 points 4 days ago

making fun of vibe coders

That was implied, this is 196 after all. :-)

[–] lagoon8622@sh.itjust.works 37 points 4 days ago (1 children)

There are zero 'B's in 'blueberry'

[–] Valmond@lemmy.world 3 points 3 days ago

Have you never coded on a Windows file system?
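
A minimal sketch of the joke: Windows file names compare case-insensitively, so the equivalent case-insensitive count finds the b's that the case-sensitive one misses:

>>> "blueberry".count("B")          # Python string matching is case-sensitive
0
>>> "blueberry".lower().count("b")  # case-insensitive, like a Windows file name comparison
2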

[–] Smorty@lemmy.blahaj.zone 6 points 4 days ago* (last edited 4 days ago) (4 children)

~i~ ~kno~ ~peeps~ ~will~ ~get~ ~mad~ ~but~ ~imma~ ~comment~ ~anyway!!!~ ~u~ ~cnt~ ~stop~ ~me!~

i think dis is kindsa real-... *vine boom* 💥🤨🤨🤨

dis funi got so many - like - sub-layers 🧅

observe — — — the layers — — — (if u care)

  • obvious reference to peeps dismissin llms for not bein able to answer spelling questions
  • funi fake-thing, cuz programmin is actulli preddi useful - one has to kno whad its gud for tho
  • secret funi: python is suuuuupr good at countin lettrs "strawberry".count("r") while current llms r not (largely due to their tokenization step, making them literally unable to count the letters, but they can still count occurrences of words)
  • the funi could also be seen in a way, where the poster got into coding via llms, then realized thad this "coding" is actually not as easy as he thought...

anyway - i believe thad using a rule-based query-interpretation system (like siri or googles query-specific UI) with llms as a fallback gives much improved human-input-handlin-systems.
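
A minimal sketch of that fallback idea, with hypothetical rules and a stand-in call_llm() rather than any real API:

import re

# Hypothetical rule table: (pattern, handler). Purely illustrative.
RULES = [
    (re.compile(r"how many (\w)'?s? (?:are )?in (\w+)", re.I),
     lambda m: str(m.group(2).lower().count(m.group(1).lower()))),
]

def call_llm(query):
    # Stand-in for a real LLM call, not an actual API.
    return "(LLM fallback answer)"

def answer(query):
    # Try cheap, predictable rules first; only fall back to the LLM when nothing matches.
    for pattern, handler in RULES:
        match = pattern.search(query)
        if match:
            return handler(match)
    return call_llm(query)

print(answer("How many b's are in blueberry?"))  # handled by a rule: "2"
print(answer("tell me something nice"))          # no rule matches, goes to the LLM fallback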

besides thad - i dun see much use quite yet

(i hope shareholders-chan is fine with thad >~< )

[–] regdog@lemmy.world 4 points 3 days ago (1 children)
[–] Smorty@lemmy.blahaj.zone 4 points 3 days ago

no dear lemm world user, no i am not having a stroke.

[–] Maxxie@piefed.blahaj.zone 3 points 4 days ago

A bubble doesn't mean it's useless though. It means shareholders think it's much more valuable and profitable than it really is, and the value is mostly propped up by hype.

If there was a trillion dollar investment in python, that would be a bubble.

[–] mfed1122@discuss.tchncs.de 5 points 4 days ago (1 children)

Fully agreed, this is a very clever post and people are just getting antsy about it because it threatens their black-and-white AI opinions

[–] SkyezOpen@lemmy.world 7 points 4 days ago (2 children)

Sure, but those opinions come from the fact that basically everyone is trying to cram AI into everything, regardless of suitability. Naturally there's going to be massive backlash.

[–] turbowafflz@lemmy.world 10 points 4 days ago

Also, note that Python politely tells you it didn't understand, because it isn't a lying machine, it's a programming language; meanwhile, an LLM just makes up a lie.

[–] mfed1122@discuss.tchncs.de 1 points 4 days ago

Trying to cram AI into everything regardless of suitability is indeed bad. However, that does not justify black-and-white opinions. It explains them, but it doesn't justify them.