this post was submitted on 01 Nov 2025
330 points (89.3% liked)


"The new device is built from arrays of resistive random-access memory (RRAM) cells.... The team was able to combine the speed of analog computation with the accuracy normally associated with digital processing. Crucially, the chip was manufactured using a commercial production process, meaning it could potentially be mass-produced."

Article is based on this paper: https://www.nature.com/articles/s41928-025-01477-0

[–] TomasEkeli@programming.dev 7 points 1 day ago (2 children)

Wouldn't analog be a lot more precise?

Accurate, though, that's a different story...

[–] Limonene@lemmy.world 13 points 1 day ago (1 children)

The maximum theoretical precision of an analog computer is limited by the charge of an electron, about 1.6 × 10^-19 coulombs. A normal analog computer runs at a few milliamps, for a second at most, so a few millicoulombs of total charge. Divide that by the electron charge and you get about 10^16 distinguishable levels, or roughly 53 bits. That's the same as the significand of a double-precision (64-bit) float. I believe 80-bit extended-precision floats are also standard on desktop CPUs.
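Back-of-the-envelope version of that arithmetic, as a sketch (the 1 mA and 1 second figures are just the assumed "few milliamps for a second"):

```python
import math

electron_charge = 1.602e-19   # coulombs per electron
current = 1e-3                # assume ~1 mA (low end of "a few milliamps")
duration = 1.0                # integrate for one second

total_charge = current * duration            # ~1e-3 C
levels = total_charge / electron_charge      # distinguishable charge levels
bits = math.log2(levels)

print(f"{levels:.1e} levels ~ {bits:.1f} bits")   # ~6.2e15 levels ~ 52.5 bits
```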

In practice, just getting a good 24-bit ADC is expensive, and 12-bit or 16-bit ADCs are way more common. Analog computers aren't solving anything that can't be done faster by digitally simulating an analog computer.

[–] ivanafterall@lemmy.world 2 points 1 day ago (2 children)

What does this mean, in practice? In what application does that precision show its benefit? Crazy math?

[–] turmacar@lemmy.world 5 points 1 day ago

Every operation your computer does. From displaying images on a screen to securely connecting to your bank.

It's an interesting advancement and it will be neat if something comes of it down the line. The chances of it producing a meaningful product in the next decade are close to zero.

[–] Limonene@lemmy.world 3 points 1 day ago

They used to use analog computers to solve differential equations, back when every transistor was expensive (relays and tubes even more so) and clock rates were measured in kilohertz. There's no practical purpose for them now.
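For a sense of what "digitally simulating an analog computer" looks like, here's a minimal sketch: the integrators of an analog differential-equation solver replaced by forward-Euler steps (all constants illustrative):

```python
# Damped oscillator x'' = -k*x - c*x', integrated the way an
# analog computer would, but with discrete time steps.
k, c = 4.0, 0.5          # illustrative spring constant and damping
x, v = 1.0, 0.0          # initial position and velocity
dt = 1e-3                # time step

for _ in range(5000):    # 5 seconds of simulated time
    a = -k * x - c * v   # what the summing amplifier would compute
    v += a * dt          # first integrator
    x += v * dt          # second integrator

print(f"x(5s) ~ {x:.4f}")
```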

For number theory and RSA cryptography you need even more precision: multiple machine integers are combined to get 4096-bit precision.
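A quick illustration of that multi-integer trick: Python's built-in ints do the multi-word combining internally, so 4096-bit modular arithmetic is trivial to try (the numbers below are random stand-ins, not a real RSA key):

```python
import secrets

n = secrets.randbits(4096) | 1   # stand-in for a 4096-bit modulus (made odd)
m = secrets.randbits(256)        # stand-in "message"
e = 65537                        # common RSA public exponent

c = pow(m, e, n)                 # modular exponentiation on 4096-bit integers
print(n.bit_length(), c.bit_length())   # ~4096 bits each
```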

If you're asking about the 24-bit ADC, I think that's usually high-end audio recording.

[–] Treczoks@lemmy.world 7 points 1 day ago (1 children)

No, it wouldn't, because you cannot make it reproducible at that scale.

Normal analog hardware, e.g. audio, tops out at about 16 bits of precision. If you go individually tuned, high-end and expensive (studio equipment), you get maybe 24 bits. That is eons away from the 52-bit mantissa precision of a double float.
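For reference, the usual rule of thumb tying bit depth to analog dynamic range is about 6.02 dB per bit, e.g.:

```python
# Ideal quantization SNR for a full-scale sine: ~6.02 dB per bit + 1.76 dB.
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (16, 24, 53):
    print(f"{bits:2d} bits ~ {dynamic_range_db(bits):6.1f} dB")
# 16 bits ~  98.1 dB, 24 bits ~ 146.2 dB, 53 bits ~ 320.8 dB
```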

[–] floquant@lemmy.dbzer0.com 2 points 13 hours ago* (last edited 13 hours ago) (1 children)

Analog audio hardware has no resolution or bit depth. An analog signal (voltage on a wire/trace) is something physical, so its exact value is only limited by the precision of the instrument you're using to measure it. In a microphone-amp-speaker chain there are no bits, only waves. It's when you sample it into a digital system that it gains those properties. You have this the wrong way around. Digital audio (sampling of any analog/"real" signal) will always be an approximation of the real thing, by nature, no matter how many bits you throw at it.
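A tiny sketch of that approximation effect, quantizing a 440 Hz sine at a few bit depths (NumPy just for convenience; the numbers are illustrative):

```python
import numpy as np

t = np.linspace(0, 1, 48_000)         # one second at 48 kHz
analog = np.sin(2 * np.pi * 440 * t)  # the "real" continuous-valued signal

for bits in (8, 16, 24):
    levels = 2 ** (bits - 1)
    digital = np.round(analog * levels) / levels   # quantize to N bits
    err = np.max(np.abs(analog - digital))
    print(f"{bits} bits: worst-case error ~ {err:.1e}")
```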

[–] Treczoks@lemmy.world 1 points 12 hours ago (1 children)

The problem is that both the generation and the sampling are imprecise, so there are losses at every conversion between the digital and analog domains. On top of that come the analog losses through the on-chip circuits themselves.

All in all this might be sufficient for some LLMs, but they are worthless junk producers anyway, so imprecision does not matter that much.

[–] floquant@lemmy.dbzer0.com 2 points 11 hours ago* (last edited 11 hours ago)

Not in a completely analog system, because there's no conversion between the analog and digital domains. Sure, a big advantage of digital is that it's much, much less sensitive to signal degradation.

What you're referring to as "analog audio hardware" seems to actually be digital audio hardware, which will always have analog components because that's what sound is. But again, amplifiers, microphones, analog mixers, speakers, etc. have no bit depth or sampling rate. They have gains, resistances, SNRs and power ratings that digital doesn't have, which of course pose their own challenges.