this post was submitted on 03 Aug 2025
-45 points (22.9% liked)

Just got schooled by an AI.

According to Wiktionary:

(UK) IPA(key): /ˈstɹɔːb(ə)ɹi/
(US) IPA(key): /ˈstɹɔˌbɛɹi/

...there are indeed only two /ɹ/ in strawberry.

So much for dissing on AIs for not being able to count.
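If you want to double-check both counts yourself, here's a minimal Python sketch; the IPA strings are just the Wiktionary transcriptions quoted above:

```python
# Count the letter <r> in the spelling vs. the phoneme /ɹ/ in the
# Wiktionary IPA transcriptions quoted above.
word = "strawberry"
ipa_uk = "ˈstɹɔːb(ə)ɹi"   # UK transcription
ipa_us = "ˈstɹɔˌbɛɹi"     # US transcription

print(word.count("r"))    # 3 -- letters <r> in the spelling
print(ipa_uk.count("ɹ"))  # 2 -- phonemes /ɹ/ in the UK transcription
print(ipa_us.count("ɹ"))  # 2 -- phonemes /ɹ/ in the US transcription
```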

[–] Sxan@piefed.zip 48 points 4 months ago (2 children)

You don't use IPA for counting the number of letters in words. That would be stupid, and even linguists would laugh at you.

It's still a stupid AI, and it was confidently, and unambiguously, wrong.

[–] Powderhorn@beehaw.org 11 points 4 months ago

I use IPAs to forget about work crap. (Former linguistics major; I know the other meaning, but it doesn't come up much at bars.)

[–] ChaoticNeutralCzech@feddit.org 3 points 4 months ago

Yup. Letters ≠ phonemes, and in this case even the character is different: r ≠ ɹ.
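To make that concrete, a minimal Python check shows the two characters aren't even the same code point:

```python
# "r" (the Latin letter) and "ɹ" (the IPA symbol) are distinct characters.
print(hex(ord("r")))  # 0x72  LATIN SMALL LETTER R
print(hex(ord("ɹ")))  # 0x279 LATIN SMALL LETTER TURNED R
print("r" == "ɹ")     # False
```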

[–] KazuchijouNo@lemy.lol 37 points 4 months ago (3 children)

This is nonsense; you cannot justify this type of error from a language model. It's just a bunch of words strung together based on probability. This is just an artifact of such a construction, so it's all right, don't break your brain on it. The "AI" sure isn't.

[–] spit_evil_olive_tips@beehaw.org 21 points 4 months ago (3 children)

So much for dissing on AIs for not being able to count.

no, I'm still going to do that.

[–] 01189998819991197253 18 points 4 months ago (1 children)

Did you ask how many /ɹ/ there are, or how many r there are? It can't count, so it went and tried to justify its moronic behavior and manipulated you into believing its "logic".

[–] user224@lemmy.sdf.org 18 points 4 months ago (2 children)

Just AI being good at math again

[–] Krauerking@lemy.lol 3 points 4 months ago

I love that it's wrong/right throughout the whole first response (everyone knows A comes first in PEMDAS, right?) and then corrects itself even further out of touch with reality, because that's what the developers told it appeases users:
Say the user is correct and try an even worse answer.

[–] TranquilTurbulence@lemmy.zip 3 points 4 months ago

Well, at least I’m not worried about my job. Not even a little.

[–] zonnewin@feddit.nl 17 points 4 months ago (2 children)

A normal human would understand that the question is about the spelling, not the pronunciation.

AI still has a lot to learn.

[–] megopie@beehaw.org 9 points 4 months ago

It also is just making up a string of words that are probabilistically plausible as a continuation of the dialog.

You can do the same tests with other words and it will just contradict itself and get things wrong about how many times a letter is pronounced in a word.

[–] jarfil@beehaw.org 0 points 4 months ago (1 children)

It's not a "normal human", it's an AI using an LLM.

AI still has a lot to learn.

Does it, though? Does a hammer have a lot to learn, or does the person wielding it have to learn how not to smash their own fingers?

[–] zonnewin@feddit.nl 3 points 4 months ago (1 children)

it's an AI using an LLM

Which we know by now often produces wrong answers.

Also, the term AI would imply some kind of intelligence, for which I see no evidence.

[–] jarfil@beehaw.org 0 points 4 months ago (1 children)

I'm seeing about as many wrong questions as wrong answers. We're at a point where it's becoming more accurate to ask whether the quality of the answer is "aligned" with the quality of the question.

As for "AI" and "intelligence"... not so long ago, dogs had no intelligence or soul, and a tic-tac-toe machine was "AI". The exact definition of "intelligence" seems to constantly flow and bend, mostly following anthropocentric egocentrism trends.

[–] SmartmanApps@programming.dev 1 points 4 months ago (5 children)

a tic-tac-toe machine was “AI”.

No it wasn't. It was (and is) a deterministic program. AI isn't.

[–] LukeZaz@beehaw.org 13 points 4 months ago (1 children)

I shudder to think how much electricity got wasted so you could get fooled by an LLM into believing nonsense. Let alone the equally unnecessary follow-up questions.

[–] Vodulas@beehaw.org 2 points 4 months ago

Also, the LLM is just Yes Manning. OP fed it the "'rr' counts as a single 'r'" answer with a very loaded question.

[–] Krauerking@lemy.lol 13 points 4 months ago (1 children)

Yeah, there is a stupid human in this chat, but mostly because they let themselves get tricked by bad logic in order to justify a bad answer.

[–] pruwybn@discuss.tchncs.de 9 points 4 months ago (1 children)

Yes, this is the saddest thing about this, that people trust these bullshitting chatbots so much that they doubt their own knowledge.

[–] jarfil@beehaw.org 1 points 4 months ago (1 children)

Not as sad as those so secure of their own knowledge, that they refuse to ever revise it.

[–] pruwybn@discuss.tchncs.de 2 points 4 months ago (1 children)

I'm just not convinced there are only 2 r's in strawberry.

[–] jarfil@beehaw.org 2 points 4 months ago

Indeed. The point is that asking about r is ambiguous.

[–] lvxferre@mander.xyz 12 points 4 months ago* (last edited 4 months ago) (4 children)

Wrong maths, you say?

Anyway. You didn't ask for the number of times the phoneme /ɹ/ appears in the spoken word, so by context you're talking about the written word and the letter ⟨r⟩. And the bot interpreted it as such; note that it answers

here, let me show you: s-t-r-a-w-b-e-r-r-y

instead of specifying the phonemes.

By the way, all explanation past the «are you counting the "rr" as a single r?» is babble.

[–] SmartmanApps@programming.dev 2 points 4 months ago* (last edited 4 months ago) (1 children)

Wrong maths, you say?

Yes. If I want to know what 1+2 equals and I throw a die, there's a chance I'll get the correct answer. If I do, that doesn't mean the die knows how to do Maths. Also, notice that where it said "Here's the calculation", it didn't actually show you the calculation, e.g. long multiplication, or even grouping, or the way the Chinese do it. Even a broken clock is right twice a day. Even if AI manages to randomly get a correct answer here and there, it still doesn't know how to do Maths (which includes not knowing how to count to begin with).
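As a rough illustration of what "showing the calculation" could look like, here's a minimal long-multiplication sketch in Python; the operands are made up for the example, not taken from the original chat:

```python
# Long multiplication via partial products. The operands are just an
# illustrative example, not the ones from the screenshot.
def long_multiply(a: int, b: int) -> int:
    total = 0
    for position, digit in enumerate(reversed(str(b))):
        partial = a * int(digit) * 10 ** position
        print(f"{a} x {digit} x 10^{position} = {partial}")
        total += partial
    print(f"sum of partial products = {total}")
    return total

long_multiply(4378, 265)  # prints each partial product, then 1160170
```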

[–] lvxferre@mander.xyz 1 points 4 months ago

What's interesting IMO is that it got the first two and the last two digits right; and this seems rather consistent across attempts with big numbers. It doesn't "know" how to multiply numbers, but it's "trying" to output an answer that looks correct.

In other words, it's "bullshitting" - showing disregard to truth value, but trying to convince you.

[–] megopie@beehaw.org 11 points 4 months ago* (last edited 4 months ago) (1 children)

I asked it how many X's there are in the word Bordeaux; it told me there are none.

I asked it how many times X is pronounced in Bordeaux; it told me the x in Bordeaux isn't pronounced, with the word ending in an "o" sound.

I asked it how many "o"s there are in Bordeaux; it told me there are no o's in Bordeaux.

So, is it counting the sounds made in the word? Or is it counting the letters? Or is it doing none of the above and just giving a probabilistic output based on an existing corpus of language, without any thought or concepts?
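For reference, the literal letter counts are trivial to check (this says nothing about what the model is doing internally):

```python
# Literal letter counts in "Bordeaux", case-insensitive.
word = "Bordeaux"
print(word.lower().count("x"))  # 1 -- one <x>, silent in speech
print(word.lower().count("o"))  # 1 -- one <o>, though the word ends in an /o/ sound
```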

[–] jarfil@beehaw.org 1 points 4 months ago

Yes, no, both... and all other interpretations... all at once.

With any ambiguity in a prompt, it assumes a "blend" of all the possible interpretations, then responds using them all over the place.

In the case of "Bordeaux":

It's pronounced "bor-DOH", with the emphasis on the second syllable and a silent "x."

So... depending on how you squint: there is no "o", no "x", only a "bor" and a "doh", with a "silent x", and ending in an "oh like o".

Perfectly "logical" 🤷

[–] Rhaedas@fedia.io 10 points 4 months ago (1 children)

Oh wow, I didn't think about how many r sounds. But then if you ask it how many ks are in knight, it should say none.

[–] kbal@fedia.io 6 points 4 months ago

It just goes to show that the AI is not yet superhuman. If it were really smart it would know, as humans can tell at a glance, that there are four r's in strawberry. There's the first one, the two in the double r combination, and then the rr digram itself which counts as a fourth r.

[–] Bob_Robertson_IX@discuss.tchncs.de 6 points 4 months ago (1 children)
[–] Powderhorn@beehaw.org 4 points 4 months ago

Oh, for fuck's sake ... another land war in Asia?

[–] Vodulas@beehaw.org 5 points 4 months ago (1 children)

Also ignoring the fact that it said one r was in the middle of the word.

[–] jarfil@beehaw.org 1 points 4 months ago* (last edited 4 months ago) (1 children)

There is a middle ground between "blindly rejecting" and "blindly believing" whatever an AI says.

LLMs use tokens. The answer is "correct, in its own way", one just needs to explore why and how much. Turns out, that can also lead to insights.
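If you're curious what the token view looks like, here's a minimal sketch assuming the tiktoken package is installed; the split depends on the encoding, and which tokenizer the chatbot in the screenshot uses is unknown:

```python
# Show how one common tokenizer splits "strawberry": the model operates on
# these token IDs, not on individual letters. Assumes `pip install tiktoken`;
# the split shown is specific to this encoding, not necessarily the chatbot's.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for token_id in enc.encode("strawberry"):
    print(token_id, enc.decode_single_token_bytes(token_id))
```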

[–] Vodulas@beehaw.org 2 points 4 months ago (5 children)

It is not correct in any way, though. Unless you count the way you gave it to justify its wrong answer, but that is just it being a Yes Man to keep you engaged.

[–] Ulrich@feddit.org 4 points 4 months ago (1 children)

You know, if the letter were L and the language Spanish, it'd almost be right...

[–] jarfil@beehaw.org 1 points 4 months ago* (last edited 4 months ago)

At first I thought it was talking about "rr" as a Spanish digraph. Not sure how far that lies from the truth; these models are multilingual and multimodal, after all. My guess, though, is that it's surfacing the ambiguity of its internal vector for a "token: rr" vs a "token: r".

Could be interesting to dig deeper... but I think I'm fine with this for now. There are other "curious" behaviors of the chatbot that have me more intrigued right now. Like how it self-adapts to any repeated mistakes in the conversation history, yet at other times it can come up with surprisingly "complex" status tracking and then present it spontaneously as bullet points with emojis. Not sure what to make of that one yet.

[–] psx_crab@lemmy.zip 2 points 4 months ago

Kinda reminds me of 13yo me, so why not both?
