this post was submitted on 18 Apr 2026
1005 points (98.6% liked)

Technology

[–] r00ty@kbin.life 2 points 1 day ago (1 children)

I'm pretty sure NICs at those speeds would really need more hardware offloading and DMA to stand a chance. With those it should be possible; with the right hardware handling there shouldn't be a problem. SSDs connected over PCIe manage a lot more.

In real terms, who needs it right now, aside from posting speed-test results?

I have symmetric gigabit and can upgrade to 2.5. But I can't imagine we'd need 2.5, let alone 10 or 25, and I'm a fairly heavy user.

[–] stardreamer@lemmy.blahaj.zone 3 points 1 day ago* (last edited 1 day ago) (1 children)
  1. All NICs already use DMA to copy packets to and from host memory. Yes, even your $10 ones. So "would need DMA to stand a chance" doesn't have any technical meaning beyond putting a bunch of words together.

  2. The bottleneck for TCP is sequence number processing, which must be done on a single core for each flow and cannot be parallelized. You also cannot offload sequence number processing without major sacrifices that produce corrupted data in several edge cases (see TCP Chimney offload, which cannot handle the TCP extensions required to run TCP at 1 Gbps). So no, "more offloading" is easy to say but not feasible.

  3. Who needs it: data centers trying to scale legacy software, or those dealing with multi-region data replication (RoCEv2 is terrible over long-distance links). But no, no home user needs it.
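A toy sketch of point 2, for the curious: NICs spread load across cores with receive-side scaling (RSS), which hashes each packet's 5-tuple to pick an RX queue. Real hardware uses a Toeplitz hash; the `rss_queue` helper below is a made-up stand-in. The upshot is that every packet of a *single* flow hashes to the same queue, so that flow's sequence-number processing is stuck on one core no matter how many cores you have; only different flows parallelize.

```python
# Illustrative only: rss_queue is a toy stand-in for the Toeplitz
# hash real NICs compute over the flow's 5-tuple.

def rss_queue(src_ip, src_port, dst_ip, dst_port, n_queues):
    # Same flow -> same hash -> same RX queue (and thus same core).
    return hash((src_ip, src_port, dst_ip, dst_port)) % n_queues

# 1000 packets of ONE flow all land on a single queue:
q = {rss_queue("10.0.0.1", 40000, "10.0.0.2", 443, 8) for _ in range(1000)}
assert len(q) == 1

# Many DIFFERENT flows spread across queues and can use every core:
qs = {rss_queue(f"10.0.0.{i}", 40000, "10.0.0.2", 443, 8) for i in range(100)}
```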

[–] r00ty@kbin.life 1 points 14 hours ago

Well, I was thinking more of home users, since that's what the post was about. Pretty sure data centres already have solutions for this, with price tags that would make us cry.

At home, I can see only edge cases where even going to 2.5 would be useful for us, let alone more.

Now, I'm sure demands will continue to increase as time passes and we'll need more speed. But for now, running 2.5/10 internally and 1 Gbit to the Internet is more than enough.