this post was submitted on 10 Feb 2024
954 points (100.0% liked)

[–] riodoro1@lemmy.world 110 points 2 years ago* (last edited 2 years ago) (2 children)

The future of information ladies and gentlemen

[–] casmael@lemm.ee 30 points 2 years ago

Wow it’s so realistic and smart and easy to use I can feel my knowledge being revolutionised

[–] A_Very_Big_Fan@lemmy.world 5 points 2 years ago* (last edited 2 years ago)

Tbf I'm sure this is an unpaid version of some online LLM, you can only expect so much lol.

When I use GPT3.5 for things like finding specific quotes from famous books, it's excellent... but asking it to play chess gives you blatantly illegal moves. Then GPT4 kicks my ass in chess.

[–] huntrss@feddit.de 100 points 2 years ago* (last edited 2 years ago) (1 children)

It's so human how - instead of admitting its error - it's pulling this bs right out of its ass 🤣

[–] darthfabulous42069@lemm.ee 12 points 2 years ago (1 children)

🤔 I wonder what the hell it is that's so scary about admitting they're wrong to other people.

[–] Duranie@literature.cafe 28 points 2 years ago (1 children)

Growing up in an environment where mistakes were unacceptable sets the stage. Our willingness and ability to understand that that's fucked up and change our attitudes about mistakes takes more growth.

For some people it's easier to dig in their heels and double down.

[–] darthfabulous42069@lemm.ee 11 points 2 years ago* (last edited 2 years ago)

🤔🤔🤔 I guess I can empathize. People are always traumatized by whatever their parents tell them. What a shame.

[–] vox@sopuli.xyz 69 points 2 years ago (1 children)
[–] SpunkyMcGoo@lemmy.world 32 points 1 year ago

"where?" comes across as confrontational, you made it scared :(

[–] hark@lemmy.world 46 points 2 years ago (1 children)

Large Lying Model. This could make politicians and executives obsolete!

[–] fidodo@lemmy.world 16 points 2 years ago (1 children)

More like large guessing models. They have no thought process, they just produce words.

[–] TotallynotJessica@lemmy.world 14 points 2 years ago (1 children)

They don't even guess. Guessing would imply them understanding what you're talking about. They only think about the language, not the concepts. It's the practical embodiment of the Chinese room thought experiment. They generate a response based on the symbols, but not the ideas the symbols represent.

[–] fidodo@lemmy.world 7 points 1 year ago

I'm equating probability with guessing here, but yes there is a nuanced difference.

[–] Viking_Hippie@lemmy.world 39 points 2 years ago

Mayonnaine: mayo with cocaine. The favorite condiment of Wall Street.

[–] unreachable@lemmy.world 36 points 2 years ago (2 children)
[–] FakeGreekGirl@lemmy.blahaj.zone 10 points 2 years ago

HOW BABBY IS FORMED

[–] chetradley@lemmy.world 9 points 2 years ago

PRAGERT SEX. Hurt baby top of head?

[–] megopie@lemmy.blahaj.zone 32 points 2 years ago (2 children)

Yah, people don’t seem to get that LLMs can’t consider the meaning or logic of the answers they give. They’re just assembling bits of language in patterns that are likely to come next based on their training data.

The technology of LLMs is fundamentally incapable of considering choices or doing critical thinking. Maybe new types of models will be able to do that but those models don’t exist yet.
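The "assembling language in patterns likely to come next" idea above can be sketched with a toy bigram model. This is a deliberately crude illustration (nothing like a real transformer): it picks the next word purely from co-occurrence counts in a tiny training text, with no model of what any word means. The corpus here is made up for the example.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; a real model trains on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    # The most frequent continuation wins; meaning never enters into it.
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" -- it simply followed "the" most often
```

The point of the sketch: the model's "answer" is just the statistically likeliest continuation, which is why it can produce fluent text that is confidently wrong.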

[–] CurlyMoustache@lemmy.world 13 points 2 years ago* (last edited 2 years ago) (2 children)

A grown man I work with, in his 50s, tells me he asks ChatGPT stuff all the time, and I can't for the life of me figure out why. It's a copycat designed to beat the Turing test. It's not a search engine or Wikipedia; it just gambles that it can pass the Turing test after every prompt you give it.

[–] megopie@lemmy.blahaj.zone 6 points 2 years ago

People want functioning web search back, but rather than address the industry issues that broke an otherwise functional concept, they want a new fancy technology to make the problem go away.

[–] jkozaka@lemm.ee 32 points 2 years ago* (last edited 2 years ago) (2 children)

You forgot the rest of the posts where the LLM gaslights her afterwards. There are too many images to put here, so I'll link a post to them.
I'm not sure if this is the original post, but it's where I found it initially.

[–] n0clue@lemmy.world 5 points 2 years ago

AI coming for those management jobs.

[–] fidodo@lemmy.world 4 points 2 years ago

Or they are so agreeable that they'll agree with you even when you're wrong and completely drop what they were claiming.

The funniest thing is that even when the answer is correct, asking an LLM to explain its reasoning step by step can produce the dumbest results.

[–] MacStache@sopuli.xyz 28 points 2 years ago

Artificial Intelligencensence.

[–] Bazz@feddit.de 24 points 2 years ago (2 children)
[–] sverit@feddit.de 7 points 2 years ago
[–] Kwakigra@beehaw.org 3 points 2 years ago

Another victory for humanity.

[–] mondoman712@lemmy.ml 24 points 2 years ago (1 children)

I just tried in google gemini

[–] Xanvial@lemmy.world 15 points 2 years ago (1 children)
[–] Shardikprime@lemmy.world 4 points 2 years ago
[–] Wilzax@lemmy.world 21 points 2 years ago (1 children)

The letter n appears twice in the letter m. The count is correct; the reasoning is not.

[–] fidodo@lemmy.world 10 points 2 years ago

That's not what it was doing behind the scenes

[–] fox2263@lemmy.world 20 points 2 years ago (1 children)

Their coming fer are jerbs

[–] sleep_deprived@lemmy.world 19 points 2 years ago (2 children)

If anybody's curious, I tried it with GPT4 and it got it right.

[–] Happybara@lemmy.world 9 points 2 years ago

Bless its heart, it's doing its best.

[–] Waluigi@feddit.de 7 points 2 years ago

That escalated quickly

[–] Daxtron2@startrek.website 6 points 2 years ago (2 children)

Wow another repost of incorrectly prompting an LLM to produce garbage output. What great content!

[–] Umbrias@beehaw.org 10 points 2 years ago (8 children)

This is genuinely great content for demonstrating that AI search engines and chatbots are not in a place where you can trust them implicitly, though many do.

[–] fidodo@lemmy.world 8 points 2 years ago (3 children)

They didn't ask it to produce incorrect output; the prompts are not leading it to an incorrect answer. It does highlight an important limitation of LLMs, which is that they don't think, they just produce words off of probability.

However it's wrong to think that just because it's limited that it's useless. It's important to understand the flaws so we can make them less common through how we use the tool.

For example, you can ask it to think everything through step by step. By producing a more detailed context window for itself it can reduce mistakes. In this case it could write out the letters with the count numbered and that would give it enough context to properly answer the question since it would have the numbers and letters together giving it more context. You could even tell it to write programs to assist itself and have it generate a letter counting program to count it accurately and produce the correct answer.
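The letter-counting helper described above might look something like this. It's a minimal sketch of the kind of program you could have the model generate for itself (the function name and the mayonnaise example are just from this thread, not any actual tool):

```python
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in a word.

    Counting characters directly sidesteps the tokenization problem:
    the model sees tokens, not letters, but this code sees letters.
    """
    return sum(1 for ch in word.lower() if ch == letter.lower())

print(count_letter("mayonnaise", "n"))  # 2
```

Delegating the count to code like this is exactly the "write a program to assist itself" idea: the model only has to produce the program, not perform the character-level arithmetic it's bad at.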

People can point out flaws in the technology all they want but smarter people are going to see the potential and figure out how to work around the flaws.

[–] cupcakezealot@lemmy.blahaj.zone 5 points 2 years ago

can't spell mayonnaise without no

[–] Prethoryn@lemmy.world 4 points 2 years ago

Now ask if it is an instrument.

[–] Pretzilla@lemmy.world 4 points 2 years ago* (last edited 2 years ago)

"How many fingers am I holding up?"

[–] pleb_maximus@feddit.de 4 points 2 years ago

Mhhh... Mayonnaise...
