this post was submitted on 22 Sep 2025
295 points (95.1% liked)
[–] horse@feddit.org 3 points 2 days ago (2 children)

To me it seems like a thing that sounds kinda cool on paper but is not actually that useful in practice. We already have the ability to do real-time translations, or point the camera at something to get more information via AI, with our smartphones — but who actually uses that on the regular? It's just not useful or accurate enough in its current state, and having it always available as a HUD isn't going to change that imo. Being able to point a camera at something and have AI tell me "that's a red bicycle" is a cool novelty the first few times, but I already knew that information just by looking at it. And if I'm trying to communicate with someone in a foreign language using my phone to translate for me, I'll just feel like a dork.

[–] AwesomeLowlander@sh.itjust.works 5 points 2 days ago (1 children)

real time translations or point the camera at something to get more information via AI with our smartphones, but who actually uses that on the regular?

Anybody living in a foreign country with a different language.

[–] Alcoholicorn@mander.xyz 2 points 2 days ago

Being able to read signs and storefronts from a motorbike in real time would be life-changing.

[–] GamingChairModel@lemmy.world 0 points 2 days ago (1 children)

Being able to point a camera at something and have AI tell me "that's a red bicycle" is a cool novelty the first few times, but I already knew that information just by looking at it.

Visual search is already useful. People go through the effort of posting requests to social media or forums asking "what is this thing," "help me ID these shoes and where I can buy them," or "what kind of spider is this" all the time. They're not searching for red bicycles; they're taking pictures of a specific Bianchi model and asking what year it was manufactured. Automating that process and improving the reliability/accuracy of the search will improve day-to-day life.

And I have strong reservations about the fundamental issues of inference engines being used to generate things (LLMs, diffusion models, and the like), but image recognition, speech-to-text, and translation are areas where these tools excel today.

[–] horse@feddit.org 0 points 2 days ago

they're taking pictures of a specific Bianchi model and asking what year it was manufactured

And the answer they get will probably be wrong, or at least wrong often enough that you can't trust it without looking it up yourself. And even if these things do get good enough, people still won't use them frequently enough to want to wear a device on their face for it, when they can already do it better on their phone.