this post was submitted on 07 Jan 2025

Technology

[–] kibiz0r@midwest.social 54 points 7 months ago (2 children)

Tim Harford mentioned this in his 2016 book “Messy”.

They just wanna call it AI and make it sound like some mysterious intelligence we can’t comprehend.

[–] frezik@midwest.social 8 points 7 months ago* (last edited 7 months ago)

It sorta is.

A key way that human intelligence works is to break a problem down into smaller components that can be solved individually. This is in part due to the limited computational ability of the human brain; there's not enough there to tackle the complete problem.

However, there's no particular reason AI would need to be limited that way, and it often isn't. Expert Go players see this in AI for that game. The AI tends to make all sorts of moves early on that don't seem to follow the usual logic, and that's because it has laid out the complete game in its "head" and is going directly for the goal. At this point, Go is basically impossible for humans to win against the best AIs.

This is a different kind of intelligence than we're used to, but there's no reason to discount it as invalid.

See the paper Understanding Human Intelligence through Human Limitations

[–] rottingleaf@lemmy.world 2 points 7 months ago (1 children)

Except we can't build something we don't comprehend and still have it work.

The problem here is that people with power to direct funds are, more often than not, utterly ignorant in building anything.

I think where all this is generally headed is a society, like in Asimov's Foundation or Plato's Republic (with an additional step), where the people competent at building things are reduced to a small caste, most of them with local rather than professional competencies, like priests, with a techno-religion centered on that "AI". That's a hierarchical structure very vulnerable to, well, that kind of powerful people.

The majority will work non-essential jobs (like in Heinlein's Door Into Summer), which give them no power; the soldier caste will work the military, the builder caste will work the technology, and the philosopher caste will be those powerful people. The difference from Plato is in having that first group, which doesn't fit into any main caste. In Plato's scheme they would all be the builder (worker) caste, but that would undermine the attempt to make it a religion and a hierarchical, monopolized structure. The builder caste should be small.

You might see a whole lot of problems with that idea (which still seems to be attempted); that's because the people it comes from don't understand how civilization works, or that instruments change the rules constantly, not just up to the point they can understand.

Recommend reading: Jodorowsky’s Technopriests

[–] RedWeasel@lemmy.world 41 points 7 months ago (6 children)

This isn’t exactly new. I heard a few years ago about a situation where the AI-designed chip had wires that shouldn't have done anything, since they didn't go anywhere, but if they were removed the chip stopped working correctly.

[–] drosophila@lemmy.blahaj.zone 50 points 7 months ago (1 children)

That was a different technique, using simulated evolution in an FPGA.

An algorithm would create a series of random circuit designs, program the FPGA with them, then evaluate how well each one accomplished a task. It would then take the best design, create a series of random variations on it, and select the best one. Rinse and repeat until the circuit is really good at performing the task.
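That mutate-and-select loop can be sketched in a few lines. This is a toy stand-in, assuming a bitstring "design" and a made-up fitness function; a real run like the FPGA experiments programmed actual hardware and measured how well the circuit performed the task:

```python
import random

def evolve(fitness, genome_len=64, pop_size=20, generations=200, seed=0):
    """Toy evolutionary loop: mutate the best design, keep the best variant."""
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(genome_len)]
    for _ in range(generations):
        # Create random variations of the current best design (each bit
        # has a small chance of flipping), keeping the original as well.
        variants = [best] + [
            [bit ^ (rng.random() < 0.05) for bit in best]
            for _ in range(pop_size - 1)
        ]
        # Select the variant that performs the task best; because the
        # current best is kept, fitness never decreases.
        best = max(variants, key=fitness)
    return best

# Stand-in task: maximize the number of 1-bits. A real evaluation would
# load the bitstream onto an FPGA and score the circuit's behavior.
result = evolve(fitness=sum)
print(sum(result))
```

Nothing in the loop knows *why* a design works, which is exactly how the unexplainable-but-essential circuitry in these stories comes about.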

[–] RedWeasel@lemmy.world 7 points 7 months ago

I think this is what I am thinking of. Kind of a predecessor of modern machine learning.

[–] CandleTiger@programming.dev 25 points 7 months ago (2 children)

I don’t know about AI involvement but this story in general is very very old.

http://www.catb.org/jargon/html/magic-story.html

[–] massive_bereavement@fedia.io 11 points 7 months ago (1 children)

I thought of this as well. In fact, as a bit of fun, I added a switch to a rack at our lab in a similar way, with the same labels. That one does nothing, though. But people did push the "turbo" button on old PC boxes despite how often those buttons weren't connected.

[–] Gormadt@lemmy.blahaj.zone 10 points 7 months ago

My turbo button was connected to an LED but that was it

[–] RedWeasel@lemmy.world 4 points 7 months ago* (last edited 7 months ago)

I remember that as well.

Edit; moved comment to correct reply.

[–] db2@lemmy.world 10 points 7 months ago (2 children)

Sounds like RF reflection used like a data capacitor or something.

[–] GreyEyedGhost@lemmy.ca 11 points 7 months ago

The particular example was getting clock-like behavior without a clock. It had an incomplete circuit that used RF reflection or something very similar to simulate a clock. Of course, removing this dead-end circuit broke the design.

[–] piecat@lemmy.world 3 points 7 months ago

Yeah, that probably sounds so unintuitive and weird to anyone who has never worked with RF.

[–] rezifon@lemmy.world 7 points 7 months ago* (last edited 7 months ago) (1 children)
[–] buffalobuffalo@lemmy.blahaj.zone 3 points 7 months ago

It may interest you to know that the switch still exists. https://github.com/PDP-10/its/issues/1232

[–] FourPacketsOfPeanuts@lemmy.world 4 points 7 months ago (2 children)

I remember this too, it was years and years ago (I almost want to say 2010-2015). Can't find anything searching for it

[–] GreyEyedGhost@lemmy.ca 3 points 7 months ago (1 children)

You helped me narrow it down. I expect Adrian Thompson's research from the 90s, referenced in this Wikipedia article, is what you're thinking of.

[–] FourPacketsOfPeanuts@lemmy.world 2 points 7 months ago

Yes! Exactly this, thank you.

For example, one group of gates has no logical connection to the rest of the circuit, yet is crucial to its function

(I should have gone with my gut, I knew it was ages ago. 30ish years by the sound of it!)

[–] ShepherdPie@midwest.social 2 points 7 months ago (1 children)

Perhaps you're an AI who only hallucinated a circuit design.

[–] FourPacketsOfPeanuts@lemmy.world 2 points 7 months ago

:)

It's been found. Adrian Thompson's research from almost 30 years ago..

https://en.m.wikipedia.org/wiki/Evolvable_hardware

[–] intensely_human@lemm.ee 2 points 7 months ago

So the wires did something

[–] Flaqueman@sh.itjust.works 14 points 7 months ago (5 children)

See? I want this kind of AI. Not a word-dreaming algorithm that spews misinformation.

[–] FourPacketsOfPeanuts@lemmy.world 16 points 7 months ago (2 children)

Read the article: it's still 'dreaming' and spewing garbage, it's just that in some iterations it's gotten lucky. "Human oversight needed," they say. The AI has no idea what it's doing.

[–] Flaqueman@sh.itjust.works 15 points 7 months ago

Yeah I got that. But I still prefer "AI doing science under a scientist's supervision" over "average Joe can now make a deepfake and publish it for millions to see and believe"

[–] BrianTheeBiscuiteer@lemmy.world 3 points 7 months ago* (last edited 7 months ago) (1 children)

I wonder how well it could work to use AI in developing an algorithm to generate chip designs. My annoyance with all of this stuff is how much people say, "Look! AI invented something new! It only took a few hours and 100x the resources!"

AI is mainly the capitalist dream of a drinking bird toy keeping a nuclear reactor online and paying a layman slave wages to make sure the bird does its job (obligatory "Simpsons did it").

[–] FourPacketsOfPeanuts@lemmy.world 1 points 7 months ago

Maybe, but remember generative AI isn't any kind of deductive or methodical reasoning. It's literally "mash up the publicly available info and give a crowd-sourced version of what to add next". This works for art because that kind of random harmony appeals to us aesthetically, and art is an area where people seek fewer constraints. But in engineering it's the opposite. Maybe it's useful to get engineers out of a rut and imagine new possibilities, but that's it. Generative AI has no idea whether what it's smushed together is garbage or randomly insightful.
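The "what to add next" step can be sketched with a toy bigram sampler. The word table below is invented for illustration (real models learn vastly richer statistics), but the point carries over: it emits statistically plausible continuations with no check on whether the result is true, useful, or garbage:

```python
import random

# Hypothetical bigram counts from a tiny made-up "corpus": for each word,
# how often each other word followed it.
NEXT_WORD = {
    "the":    {"chip": 3, "design": 2},
    "chip":   {"works": 2, "design": 1},
    "design": {"works": 1, "looks": 2},
    "works":  {"well": 3},
    "looks":  {"plausible": 3},
}

def generate(start, length=5, seed=1):
    """Repeatedly sample a plausible next word; never validate the output."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = NEXT_WORD.get(words[-1])
        if not options:
            break
        # Sample in proportion to how often each word followed the current
        # one in the "corpus"; nothing here models truth or correctness.
        choices, weights = zip(*options.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Every output is locally plausible by construction, which is why spotting the occasional genuinely good result still takes a human (or an external evaluator).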

[–] Dkarma@lemmy.world 11 points 7 months ago (1 children)

This is what most AI is. GPT models are a tiny subsect.

[–] db2@lemmy.world 7 points 7 months ago (1 children)
[–] prex@aussie.zone 5 points 7 months ago (1 children)

You are correct but I like subsect better.

[–] db2@lemmy.world 3 points 7 months ago

I like the subtlety of it tbh.

[–] riskable@programming.dev 6 points 7 months ago

You want AI that makes chips that run AI faster and better?

You've fallen into its trap!

[–] brlemworld@lemmy.world 6 points 7 months ago

I want AI that takes a foreign language movie, and augments their face and mouth so it looks like they are speaking my language, and also changes their voice (not a voice over) to be in my language.

[–] KeenFlame@feddit.nu 2 points 7 months ago

They are all of the same breed, and it's an ongoing field of study. The megacorps have soiled the use of them, but they are still extremely strong support tools for some things, like detecting cancer on X-rays and stuff.

[–] A_A@lemmy.world -1 points 7 months ago* (last edited 7 months ago) (1 children)

What used to take weeks of highly skilled work can now be accomplished in hours.
(...) delivers stunning high-performance devices that run counter to the usual rules of thumb and human intuition (...)

Eventually, AI-created circuits will power better AI. The singularity may happen soon. This is unpredictable.

[–] Realitaetsverlust@lemmy.zip 6 points 7 months ago (1 children)

Lmao calm down AI can't even reliably differentiate cats from dogs

[–] graff@lemm.ee 1 points 7 months ago* (last edited 7 months ago) (1 children)

Cat is when meow

Dog is when woof

There, I solved it 😂

[–] Nfamwap@lemmy.world 1 points 7 months ago

What a narc