The maximum theoretical precision of an analog computer is limited by the charge of an electron, about 1.6×10^-19 coulombs. A normal analog computer runs at a few milliamps, for a second max. That gives a maximum theoretical precision of about 1 part in 10^16, or roughly 53 bits, the same as the significand of a double-precision (64-bit) float. I believe 80-bit extended-precision floats are also available on desktop CPUs (the old x87 format).
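A quick sanity check of that arithmetic (a minimal sketch; the 3 mA current is an assumed stand-in for "a few milliamps"):

```python
# Back-of-envelope: how many electrons flow at a few mA for one second,
# and how many bits of precision that charge quantization allows.
import math

ELECTRON_CHARGE = 1.602e-19   # coulombs
current = 3e-3                # amps: assumed stand-in for "a few milliamps"
duration = 1.0                # seconds

levels = current * duration / ELECTRON_CHARGE   # distinguishable charge counts
print(f"levels: {levels:.2e}")                  # ~1.9e16
print(f"bits:   {math.log2(levels):.1f}")       # ~54, near a double's 53-bit significand
```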
In practice, just getting a good 24-bit ADC is expensive, and 12-bit or 16-bit ADCs are way more common. Analog computers aren't solving anything that can't be done faster by digitally simulating an analog computer.
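For scale, here's the step size an ideal ADC has to resolve at those bit depths (a quick sketch; the 2 V full-scale range is an assumption):

```python
# Smallest voltage step (1 LSB) for an ideal ADC with a 2 V full-scale range.
full_scale = 2.0  # volts, assumed
for bits in (12, 16, 24):
    lsb_uV = full_scale / 2**bits * 1e6
    print(f"{bits}-bit ADC: 1 LSB = {lsb_uV:.3f} µV")
# 24 bits means resolving ~0.12 µV steps, which is why good parts are expensive.
```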
What does this mean, in practice? In what application does that precision show its benefit? Crazy math?
Every operation your computer does, from displaying images on a screen to securely connecting to your bank.
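One place the precision difference is visible, as a small illustration (assumes NumPy is installed):

```python
# Accumulating a small value a million times: float32 drifts,
# float64 (the 53-bit-significand double) stays close.
import numpy as np

acc32, acc64 = np.float32(0.0), np.float64(0.0)
for _ in range(1_000_000):
    acc32 += np.float32(0.1)
    acc64 += np.float64(0.1)

print(acc32)  # noticeably off from 100000 (around 1% high)
print(acc64)  # ~100000.000001, off by only ~1e-11 relative
```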
It's an interesting advancement, and it will be neat if something comes of it down the line. The chances of it producing a meaningful product in the next decade are close to zero.
They used to use analog computers to solve differential equations, back when every transistor was expensive (relays and tubes even more so) and clock rates were measured in kilohertz. There's no practical purpose for them now.
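For flavor, the kind of problem they were wired up for takes only a few lines digitally. A minimal sketch integrating a damped harmonic oscillator with a fixed-step (semi-implicit Euler) method; all parameters are made up:

```python
# Damped harmonic oscillator: x'' + 2*zeta*omega*x' + omega^2 * x = 0,
# integrated with a fixed 1 ms step (semi-implicit Euler).
omega, zeta = 2.0, 0.1   # natural frequency (rad/s) and damping ratio, assumed
x, v = 1.0, 0.0          # initial position and velocity
dt = 1e-3

for _ in range(10_000):                        # 10 s of simulated time
    a = -2 * zeta * omega * v - omega**2 * x   # acceleration from the ODE
    v += a * dt
    x += v * dt

print(f"x(t=10s) = {x:.4f}")
```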
For number theory and RSA cryptography, you need even more precision: multiple machine-word integers are chained together to represent 4096-bit numbers.
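In some languages the multi-word part is invisible. A minimal sketch with Python's arbitrary-precision ints; the modulus here is just a random odd number, not a real RSA key:

```python
# RSA-style modular exponentiation on 4096-bit integers.
# Python ints are arbitrary precision, so the multi-word arithmetic is built in.
import secrets

n = secrets.randbits(4096) | (1 << 4095) | 1  # random odd 4096-bit "modulus" (NOT a real RSA key)
e = 65537                                     # the common RSA public exponent
m = secrets.randbits(4000)                    # a "message" smaller than n

c = pow(m, e, n)                              # c = m^e mod n
print(c.bit_length())                         # up to 4096 bits
```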
If you're asking about the 24-bit ADC, I think that's usually high-end audio recording.
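The usual back-of-envelope for why 24 bits maps to high-end audio (a sketch using the ~6 dB per bit rule):

```python
# Ideal ADC dynamic range: 20*log10(2^bits), about 6.02 dB per bit.
import math

for bits in (16, 24):
    print(f"{bits}-bit: ~{20 * math.log10(2**bits):.0f} dB dynamic range")
# 16-bit (CD audio): ~96 dB; 24-bit: ~144 dB, beyond most mics and analog electronics.
```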