[–] solsangraal@lemmy.zip 129 points 2 months ago (43 children)

it only takes getting a made-up bullshit answer from chatgpt a couple of times to learn your lesson and just skip asking chatgpt anything altogether

[–] QueenHawlSera@sh.itjust.works 42 points 2 months ago (3 children)

I stopped using it when I asked it who I was and it said I was a prolific author, then proceeded to name various books I absolutely did not write.

[–] miss_demeanour@lemmy.dbzer0.com 23 points 2 months ago

I just read "The Autobiography of QueenHawlSera"!
Have I been duped?

[–] Halosheep@lemm.ee 5 points 2 months ago

Why the fuck would it know who you are?

[–] noodlejetski@piefed.social 5 points 2 months ago

and I'm apparently a famous Tiktoker and Youtuber.

[–] ColeSloth@discuss.tchncs.de 20 points 2 months ago (2 children)

But chatgpt always gives such great answers on topics I know nothing at all about!

[–] julietOscarEcho@sh.itjust.works 3 points 2 months ago

Gell-Mann amnesia. Might have to invent a special name for the AI flavour of it.

[–] Tar_alcaran@sh.itjust.works 1 points 2 months ago

Oh yeah, AI can easily replace all the jobs I don't understand too!

[–] papalonian@lemmy.world 13 points 2 months ago (3 children)

I was using it to blow through an online math course I'd ultimately decided I didn't need but didn't want to drop. One step of a problem I had it solve involved finding the square root of something; it spat out a number that was kind of close, but functionally unusable. I told it three times that it had made a mistake, and it gave a different number each time. When I finally gave it the right answer and asked, "are you running a calculation or just making up a number," it said that if I logged in, it would use real-time calculations. I logged in on a different device and asked the same question; it again made up a number, but when I pointed it out, it corrected itself on the first try. Very janky.

[–] stratoscaster@lemmy.world 11 points 2 months ago (1 children)

ChatGPT doesn't actually do calculations. It can generate code that will compute the answer, or provide a formula, but it cannot do math itself.
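
Roughly what that looks like in practice (a made-up sketch, nothing from the thread): you ask it for code instead of a number, then run the code yourself.

    import math

    # The kind of thing ChatGPT can produce reliably: code that does the math.
    # Python computes the square root; the model never has to "know" the number.
    x = 7.3
    print(math.sqrt(x))  # ~2.7019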

[–] SaharaMaleikuhm@feddit.org 3 points 2 months ago

It's just like me fr fr

[–] OpenStars@piefed.social 4 points 2 months ago

So it forced you to ask it many times? Now imagine that you paid for it each time. For the creator then, mission fucking accomplished.

[–] vivendi@programming.dev 3 points 2 months ago

You need multi-shot prompting when it comes to math. Either the motherfucker gets it right, or in a lot of cases you won't be able to course-correct it. Once a token is in the context, it's in the context and you're fucked.

Alternatively you could edit the context, correct the parameters and then run it again.
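
Something like the following, if you're hitting an OpenAI-compatible API (a rough sketch; the SDK usage, model name and prompts are my own placeholders, not anything from this thread). The point is to drop the bad answer from the message list instead of arguing with it in-context.

    from openai import OpenAI  # assumes the OpenAI Python SDK; any compatible client works

    client = OpenAI()
    messages = [{"role": "user", "content": "What is 37 * 41? Answer with just the number."}]

    first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = first.choices[0].message.content

    if answer.strip() != "1517":
        # Don't append a "no, that's wrong" turn -- the bad token is already in the
        # context and stays there. Edit the context and run it again instead.
        messages = [{"role": "user", "content": "Compute 37 * 41 step by step, then give only the final number."}]
        retry = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        answer = retry.choices[0].message.content

    print(answer)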

On the other side of the shit aisle

Shoutout to my man Mistral Small 24B, who is so insecure it will talk itself out of correct answers. It's so much like me in not having any self-worth or confidence.

[–] stratoscaster@lemmy.world 2 points 2 months ago

I've only really found it useful when you provide the source information/data in your prompt. E.g. say you want to convert one data format to another, like table data into JSON.

It works very consistently in those types of use cases. Otherwise it's a dice roll.
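
For example, something along these lines (a rough sketch; the SDK, model name and the little table are just placeholders):

    from openai import OpenAI  # assumes the OpenAI Python SDK

    client = OpenAI()

    # Put the actual data in the prompt and pin down the output format.
    table = "name,qty,price\nwidget,4,9.99\ngadget,2,14.50"
    prompt = f"Convert this CSV to a JSON array of objects. Output only the JSON.\n\n{table}"

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)

With the data right there in the prompt, it only has to restructure what it was given instead of making something up.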

[–] HollowNaught@lemmy.world 2 points 2 months ago

I feel like a lot of people in this community underestimate the average person's willingness to trust an AI. Over the past few months, whenever I've seen a coworker search something up, not once have they clicked through to a website for the answer. They always take what the AI summary tells them at face value.

Which is very scary

[–] SuperSaiyanSwag@lemmy.zip 1 points 2 months ago

My girlfriend gave me a mini heart attack when she told me that my favorite band broke up. Turns out it was ChatGPT making shit up; it came up with a random name for the final album too.

[–] RickyRigatoni@retrolemmy.com 1 points 2 months ago

That's what people get when they ask me questions too, but they still bother me all the time, so clearly that's not going to work.
