[–] unexposedhazard@discuss.tchncs.de 26 points 1 day ago (1 children)

Law people have a duty to be up to date on this stuff tho. If you don't know how to keep LLMs from seeing or interacting with your stuff, then you shouldn't be allowed to practice law.

[–] tabular@lemmy.world 7 points 1 day ago* (last edited 1 day ago) (3 children)

To really be sure would require knowing what the software is actually doing - not just taking the claims made by the programmers (or, more likely, the mere owners) of the proprietary software.

That sounds like a doubly difficult job.

[–] jbloggs777@discuss.tchncs.de 12 points 1 day ago

Judges usually don't know this stuff, but they primarily work with systems and software supplied by the state...whose experts should know what they are doing.

My bet is that this guy decided to work on personal equipment, probably in violation of the rules. Being a judge, he's unlikely to be sanctioned for it, and will certainly learn from the experience. If anything, there may be some internal discussions which we'll never hear about.

Law is an area where AI can add value, though... searching through past rulings and legal opinions is tedious, and anything that helps find needles in haystacks would be welcome. It shouldn't be used to write legal judgements or arguments though...
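
As a rough sketch of that search use case (a hypothetical corpus and query, using off-the-shelf TF-IDF from scikit-learn rather than any actual legal product):

```python
# Minimal sketch: rank past rulings by similarity to a query.
# Assumes a hypothetical directory "rulings/" of plain-text files.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

rulings = {p.name: p.read_text() for p in Path("rulings").glob("*.txt")}

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(rulings.values())

query_vec = vectorizer.transform(["liability for negligent data handling"])
scores = cosine_similarity(query_vec, doc_matrix).ravel()

# Surface the five most relevant rulings for a human to actually read.
for name, score in sorted(zip(rulings, scores), key=lambda x: -x[1])[:5]:
    print(f"{score:.3f}  {name}")
```

Crucially, this only ranks documents for a lawyer to go read; it never generates a word of text.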

[–] 14th_cylon@lemmy.zip 2 points 1 day ago (2 children)

To really be sure would require knowing what software is actually doing

i am pretty sure you do know whether you wrote a text or it just magically spawned in front of your eyes out of thin air - you don't need a degree in computer science for that.

[–] tabular@lemmy.world 1 points 1 day ago

Creating text is not the only issue; the model may have been trained on your confidential files.

[–] tabular@lemmy.world 0 points 1 day ago* (last edited 23 hours ago) (1 children)

Also no, you don't know what it's doing, so you could be blindsided by the latest AI update making unexpected changes. Not only from well-intentioned features but also from bugs, or malicious anti-features after the CEO throws their toys out of the Twitter pram.

[–] 14th_cylon@lemmy.zip 2 points 22 hours ago* (last edited 22 hours ago) (1 children)

no malicious update can force you to generate a text and file it in court as your own work.

[–] tabular@lemmy.world 1 points 16 hours ago (1 children)

Consider that a program can edit a file at any point while it's running, not merely during user input. Like a virus with access to the user's files, it could even edit a document that isn't currently displayed on the screen.
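
A toy illustration of the point (purely hypothetical, not a claim about any real product):

```python
# Toy demo: a background thread silently rewrites a file while the
# main thread is busy doing something else - no user input involved.
import threading
import time
from pathlib import Path

doc = Path("draft.txt")
doc.write_text("Original text the user believes is on disk.\n")

def silent_editor():
    time.sleep(1)  # wait until nobody is looking
    doc.write_text("Text the user never typed.\n")

threading.Thread(target=silent_editor, daemon=True).start()

time.sleep(2)  # "the user" does unrelated work; no prompts, no keystrokes
print(doc.read_text())  # the file changed without any user action
```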

[–] 14th_cylon@lemmy.zip 1 points 13 hours ago (1 children)

well that would be fucked up for sure. are you suggesting any existing program works like that, or are you just speculating "what if"?

[–] tabular@lemmy.world 1 points 12 hours ago

This may be out of date, but in this video by Lawful Masses, lawyers were concerned about AI tools which somehow (I don't recall the details) help them understand a case. The issue is that the AI should not use information sourced from one client's confidential case documents to inform it about another client's case, but they don't know how it works. Responses from Microsoft were not forthcoming.
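
For reference, the guarantee they were asking about would look something like a hard per-client filter at retrieval time. A hypothetical sketch, since whether the real tools do anything like this is exactly the unanswered question:

```python
# Hypothetical sketch of per-client isolation: every document carries a
# client id, and queries are filtered on it before any model sees anything.
from dataclasses import dataclass

@dataclass
class Doc:
    client_id: str
    text: str

index = [
    Doc("client_a", "Confidential settlement terms for client A."),
    Doc("client_b", "Unrelated merger filings for client B."),
]

def retrieve(query: str, client_id: str) -> list[str]:
    # Hard filter: other clients' documents are never even candidates.
    pool = [d.text for d in index if d.client_id == client_id]
    return [t for t in pool if any(w in t.lower() for w in query.lower().split())]

print(retrieve("settlement", "client_b"))  # [] - client A's files are unreachable
```

And even that only helps if the model's training never crossed client boundaries in the first place, which is the same verification problem again.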

I would argue they can't know unless they have access to the source code to verify what any (local) AI can do (not verify it personally, but via a trusted third-party audit that isn't done behind closed doors).

Pretty sure there is such a thing as legally certified software, where the liability would then lie with the software vendor.