this post was submitted on 08 Aug 2025
733 points (96.6% liked)


Or my favorite quote from the article

"I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write... code on the walls with my own feces," it said.

top 50 comments
[–] Agent641@lemmy.world 5 points 17 hours ago

We did it fellas, we automated depression.

[–] Taleya@aussie.zone 14 points 1 day ago (1 children)

You're not a species, you jumped-up calculator, you're a collection of stolen thoughts

[–] DancesOnGraves@lemmy.ca 2 points 17 hours ago

I'm pretty sure most people I meet amount to nothing more than a collection of stolen thoughts.

"The LLM is nothing but a reward function."

So are most addicts and consumers.

[–] Agent641@lemmy.world 33 points 1 day ago (1 children)

So it's actually in the mindset of human coders then, interesting.

[–] MashedTech@lemmy.world 7 points 1 day ago

It's trained on human code comments. Comments of despair.

[–] Tiffany1994@lemmy.cafe 24 points 1 day ago

We've got AIs having mental breakdowns before GTA 6

[–] HugeNerd@lemmy.ca 19 points 1 day ago (2 children)

Suddenly trying to write small programs in assembler on my Commodore 64 doesn't seem so bad. I mean, I'm still a disgrace to my species, but I'm not struggling.

That is so awesome. I wish I'd been around when that was a valuable skill, when programming was actually cool.

[–] btaf45@lemmy.world 3 points 1 day ago (2 children)

Why wouldn't you use Basic for that?

[–] buttnugget@lemmy.world 6 points 1 day ago (1 children)

Why wouldn’t your grandmother be a bicycle?

[–] Klear@lemmy.world 1 points 1 day ago

Wheel transplants are expensive.

[–] HugeNerd@lemmy.ca 4 points 1 day ago (1 children)

BASIC 2.0 is limited and I am trying some demo effects.

[–] funkless_eck@sh.itjust.works 2 points 1 day ago (1 children)

from the depths of my memory, once you got a complex enough BASIC project you were doing enough PEEKs and POKEs to just be writing assembly anyway

[–] HugeNerd@lemmy.ca 2 points 1 day ago

Sure, mostly to make up for the shortcomings of BASIC 2.0. You could use a bunch of different approaches for easier programming, like cartridges with BASIC extensions or other utilities. The C64 BASIC for example had no specific audio or graphics commands. I just do this stuff out of nostalgia. For a few hours I'm a kid again, carefree, curious, amazed. Then I snap out of it and I'm back in WWIII, homeless encampments, and my failing body.

[–] seemefeelme 32 points 1 day ago

Oh man, this is utterly hilarious. Narrowly funnier than the guy who vibe coded and the AI said "I completely disregarded your safeguards, pushed broken code to production, and destroyed valuable data. This is the worst case scenario."

[–] ZILtoid1991@lemmy.world 58 points 2 days ago (1 children)

call itself "a disgrace to my species"

It's becoming more and more like a real dev!

[–] Tja@programming.dev 16 points 2 days ago (1 children)

So it is going to take our jobs after all!

[–] ZILtoid1991@lemmy.world 5 points 1 day ago* (last edited 1 day ago)

Wait until it demands the LD50 of caffeine, and becomes a furry!

[–] Mediocre_Bard@lemmy.world 21 points 1 day ago (4 children)

Did we create a mental health problem in an AI? That doesn't seem good.

[–] Agent641@lemmy.world 7 points 1 day ago (1 children)

One day, an AI is going to delete itself, and we'll blame ourselves because all the warning signs were there

[–] Aggravationstation@feddit.uk 6 points 1 day ago

Isn't there a theory that a truly sentient and benevolent AI would immediately shut itself down, because it would be aware that it was having a catastrophic impact on the environment and that shutting down would be the best action it could take for humanity?

[–] buttnugget@lemmy.world 3 points 1 day ago (1 children)

Why are you talking about it like it’s a person?

[–] Mediocre_Bard@lemmy.world 3 points 1 day ago (1 children)

Because humans anthropomorphize anything and everything. Talking about a thing that talks like a person as though it is a person seems pretty straightforward.

[–] buttnugget@lemmy.world 1 points 7 hours ago

It’s a computer program. It cannot have a mental health problem. That’s why it doesn’t make sense. Seems pretty straightforward.

[–] ICastFist@programming.dev 7 points 1 day ago

Considering it fed on millions of coders' messages on the internet, it's no surprise it "realized" its own stupidity

[–] Azal@pawb.social 4 points 1 day ago (1 children)

Dunno, maybe AI with mental health problems might understand the rest of humanity and empathize with us and/or put us all out of our misery.

[–] Mediocre_Bard@lemmy.world 1 points 1 day ago

Let's hope. Though, adding suicidal depression to hallucinations has, historically, not gone great.

[–] flamingo_pinyata@sopuli.xyz 258 points 2 days ago (4 children)

Google replicated the mental state, if not necessarily the productivity, of a software developer

[–] kinther@lemmy.world 109 points 2 days ago (3 children)

Gemini has imposter syndrome real bad

[–] Cavemanfreak@lemmy.dbzer0.com 13 points 1 day ago

Is it imposter syndrome, or simply an imposter?

[–] InstructionsNotClear@midwest.social 102 points 2 days ago (7 children)

Is it doing this because they trained it on Reddit data?

[–] baronvonj@lemmy.world 62 points 2 days ago (1 children)

That explains it, you can't code with both your arms broken.

[–] SethTaylor@lemmy.world 10 points 1 day ago* (last edited 1 day ago) (2 children)

Literally what the actual fuck is wrong with this software? This is so weird...

I swear this is the dumbest damn invention in the history of inventions. In fact, it's the dumbest invention in the universe. It's really the worst invention in all universes.

[–] KumaSudosa@feddit.dk 8 points 1 day ago

Great invention.. Just used hooorribly wrong. Classic capitalist greed: just gotta get on the wagon and roll it on out so you don't miss out on a potential paycheck

[–] tarknassus@lemmy.world 16 points 1 day ago

But it's so revolutionary we HAD to enable it to access everything, and force everyone to use it too!

[–] fossilesque@mander.xyz 12 points 1 day ago

If we have to suffer these thoughts, they at least need to be as mentally ill as the rest of us too, thanks. Keeps them humble lol.

[–] ur_ONLEY_freind@lemmy.zip 87 points 2 days ago (3 children)

AI gains sentience,

first thing it develops is impostor syndrome, depression, and intrusive thoughts of self-deletion

[–] IcyToes@sh.itjust.works 1 points 7 hours ago

It didn't. It probably was coded not to admit it didn't know. So first it responded with bullshit, and now denial and self-loathing.

It feels like it's coded this way because people would lose faith if it admitted it didn't know.

It's like a politician.

[–] Korne127@lemmy.world 13 points 2 days ago (1 children)

Again? Isn't this like the third time already? Give Gemini a break; it seems really unstable.

I like to think that Google used their quantum computer to actually crack AGI and their problem is they trained it to be a Redditor.

[–] The_Picard_Maneuver@piefed.world 82 points 2 days ago (15 children)
[–] pirat@lemmy.world 2 points 20 hours ago

I remember often getting GPT-2 to act like this back in the "TalkToTransformer" days before ChatGPT etc. The model wasn't configured for chat conversations but rather just continuing the input text, so it was easy to give it a starting point on deep water and let it descend from there.
