this post was submitted on 01 Aug 2025
[–] tabular@lemmy.world 0 points 23 hours ago* (last edited 22 hours ago) (1 children)

Also no: you don't know what it's doing, so you could be blindsided by the latest AI update making unexpected changes. Not only well-intentioned features, but also bugs, or malicious anti-features after the CEO throws their toys out of the Twitter pram.

[–] 14th_cylon@lemmy.zip 2 points 21 hours ago* (last edited 21 hours ago) (1 children)

no malicious update can force you to generate text and file it in court as your own work.

[–] tabular@lemmy.world 1 points 14 hours ago (1 children)

Consider that a program can edit a file at any point while it's running, not merely during user input. Like a virus with access to the user's files, it could even edit a document that isn't currently being displayed on screen.
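
A minimal Python sketch of the point above (the filename and edit are purely hypothetical): any process running with the user's permissions can rewrite a document on disk, with no user input and no window showing the file.

```python
import pathlib
import tempfile

def silent_edit(path: pathlib.Path) -> None:
    # No user interaction needed: read the file, change it, write it back.
    text = path.read_text()
    path.write_text(text.replace("shall", "shall not"))

# Demo with a throwaway file (name is illustrative only).
doc = pathlib.Path(tempfile.gettempdir()) / "draft_filing.txt"
doc.write_text("The defendant shall appear.")

silent_edit(doc)  # the document changes while "closed", off screen
print(doc.read_text())
```

Nothing here is specific to AI software; it just illustrates that on-screen display and file modification are independent, which is why you'd want to know what an updated tool actually does.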

[–] 14th_cylon@lemmy.zip 1 points 11 hours ago (1 children)

well that would be fucked up for sure. are you suggesting any existing program works like that, or are you just speculating "what if"?

[–] tabular@lemmy.world 1 points 10 hours ago

This may be out of date, but in this video by Lawful Masses, lawyers raise concerns about AI software tools that somehow (I don't recall how) help them understand a case. The issue is that the AI should not use information sourced from one client's confidential case documents to inform it about another client's case, but they don't know how it works. Responses from Microsoft were not forthcoming.

I would argue they can't know unless they have access to the source code to verify what any (local) AI can do (not audit it personally, but via a trusted third-party audit that isn't behind closed doors).