this post was submitted on 21 Aug 2025
1065 points (96.7% liked)

Microblog Memes


A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

RULES:

  1. Your post must be a screen capture of a microblog-type post that includes the UI of the site it came from, preferably also including the avatar and username of the original poster. Including relevant comments made to the original post is encouraged.
  2. Your post, included comments, or your title/comment should include some kind of commentary or remark on the subject of the screen capture. Your title must include at least one word relevant to your post.
  3. You are encouraged to provide a link back to the source of your screen capture in the body of your post.
  4. Current politics and news are allowed, but discouraged. There MUST be some kind of human commentary/reaction included (either by the original poster or you). Just news articles or headlines will be deleted.
  5. Doctored posts/images and AI are allowed, but discouraged. You MUST indicate this in your post (even if you didn't originally know). If an image is found to be fabricated or edited in any way and it is not properly labeled, it will be deleted.
  6. Absolutely no NSFL content.
  7. Be nice. Don't take anything personally. Take political debates to the appropriate communities. Take personal disagreements & arguments to private messages.
  8. No advertising, brand promotion, or guerrilla marketing.

[–] Professorozone@lemmy.world 10 points 7 months ago (4 children)

I have a love/hate relationship with it. Sometimes I'm absolutely blown away by what it can do. But then I asked it a compound interest question. The first answer was AI-generated, so I figured, okay, why not. I should mention I don't know much about the subject. The answer was impressive: it gave the result, a brief explanation of how it arrived at it, and the equation it used. Since I wanted to keep it for future use, I entered the equation into a spreadsheet and got what I thought was the wrong answer. I spent quite a few minutes trying to figure out what I was doing wrong and found a couple of things, but fixing them still didn't give me the correct result. After I had convinced myself I had done it correctly, I looked up the equation. It was the right one. Then I put it into a non-AI calculator online to check my work. Sure enough, the AI had given me the wrong result with the right equation. So the rule is: never accept the AI's answer without verifying it. But you know what, if you have to verify it, what's the point of using it in the first place? You just have to do the same work you would have done without it.

[–] homesweethomeMrL@lemmy.world 9 points 7 months ago

So the rule is: never accept the AI's answer without verifying it. But you know what, if you have to verify it, what's the point of using it in the first place? You just have to do the same work you would have done without it.

Exactly

[–] Mika@sopuli.xyz 3 points 7 months ago

LLMs aren't good at math at all. They know the formulas, but they aren't built to do math; they are built to predict the next token in a stream of text.

What are they good for? Cases where you need to generate a lot of material and it's faster to check the output afterwards than to produce it yourself.

For example, you could have asked it to generate a Python app that solves your math problem; then you could double-check the correctness of the code and run it, knowing that the answer is predictably good.
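To illustrate, here is a minimal sketch of the kind of script an LLM might produce for the compound interest question mentioned above. The function name and the sample figures are hypothetical; the math is just the standard compound-interest formula A = P(1 + r/n)^(nt):

```python
def compound_interest(principal: float, rate: float,
                      times_per_year: int, years: float) -> float:
    """Final amount after compounding: A = P * (1 + r/n) ** (n * t)."""
    return principal * (1 + rate / times_per_year) ** (times_per_year * years)

# Hypothetical example: $1,000 at 5% annual interest,
# compounded monthly for 10 years.
amount = compound_interest(1000, 0.05, 12, 10)
print(round(amount, 2))
```

Because the logic is a single, inspectable line, verifying the code against a spreadsheet or an online calculator is much easier than verifying an opaque numeric answer.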

[–] mojofrododojo@lemmy.world 3 points 7 months ago

So the rule is: never accept the AI's answer without verifying it. But you know what, if you have to verify it, what's the point of using it in the first place?

pfft that ecosystem isn't going to fuck itself, now, is it?

[–] drmoose@lemmy.world -1 points 7 months ago (1 children)

You need to verify all sources, though. I have a lot of points on Stack Exchange, and after contributing for almost a decade I can tell you for a fact that LLMs' hallucination issue is not much worse than people's hallucination issue. Information exchange will never be perfect.

You get an answer incredibly fast, which leaves you plenty of budget to verify it. It's a skill issue.

[–] prole@lemmy.blahaj.zone 3 points 7 months ago (1 children)

LLMs' hallucination issue is not much worse than people's hallucination issue.

Is this supposed to be comforting?

[–] drmoose@lemmy.world 1 points 7 months ago

Yes, if you have the skill to handle it.