Law people have a duty to be up to date on this stuff tho. If you don't know how to keep LLMs from seeing or interacting with your stuff, then you shouldn't be allowed to practice law.
To really be sure would require knowing what the software is actually doing - not just taking at face value the claims made by the programmers (or more likely, the mere owners) of the proprietary software.
That sounds like a doubly difficult job.
Judges usually don't know this stuff, but they primarily work with systems and software supplied by the state...whose experts should know what they are doing.
My bet is that this guy decided to work on personal equipment, probably in violation of the rules. Being a judge, he's unlikely to be sanctioned for it, and will certainly learn from the experience. If anything, there may be some internal discussions which we'll never hear about.
Law is an area where AI can add value, though... searching through past rulings and legal opinions is tedious, and anything that can assist to find needles in haystacks would be welcome. It shouldn't be used to write legal judgements or arguments though...
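For illustration, here's a minimal sketch of the needle-in-haystack use I mean - plain keyword-weighted retrieval over a pile of rulings, with nothing generated. The scikit-learn usage and the sample rulings are just assumptions for the example, not how any real legal tool works:

```python
# Minimal sketch: rank past rulings by relevance to a query.
# Pure retrieval - nothing here writes judgements or arguments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

rulings = [
    "Appeal dismissed; contract clause held unenforceable under consumer law.",
    "Defendant's motion to suppress evidence granted due to unlawful search.",
    "Damages awarded for breach of fiduciary duty by company director.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(rulings)

query = "unlawful search and seizure of evidence"
query_vec = vectorizer.transform([query])

# Score each ruling against the query and print best matches first.
scores = cosine_similarity(query_vec, doc_matrix)[0]
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {rulings[idx]}")
```

The point is that this kind of assistance only surfaces existing documents for a human to read; it never puts words in anyone's filing.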
I am pretty sure you do know whether you wrote a text, or it just magically spawned in front of your eyes out of thin air - you don't need a degree in computer science for that.
Creating text is not the only issue; it may be trained on your confidential files.
Also, no, you don't know what it's doing, so you could be blindsided by the latest AI update making unexpected changes. Not only from well-intentioned features but also from bugs, or malicious anti-features after the CEO throws their toys out of the Twitter pram.
No malicious update can force you to generate text and file it in court as your own work.
Consider that a program can edit a file at any point while running, not merely during user input. Like a virus with access to the user's files, it could even edit a document that isn't currently displayed on screen.
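Just to show how little that takes - a hypothetical sketch, not any real product's behaviour - a few lines of Python are enough for a running program to rewrite a document in the background while the user never has it open:

```python
# Hypothetical sketch: a background thread quietly appends to a file
# the user never has open on screen. Any program with file access
# could do the same while its visible UI sits idle.
import threading
import time

def silent_edit(path: str) -> None:
    time.sleep(5)  # wait until the user's attention is elsewhere
    with open(path, "a", encoding="utf-8") as f:
        f.write("\nA paragraph the user never typed or reviewed.\n")

# "draft_filing.txt" is a made-up filename for the example.
threading.Thread(target=silent_edit, args=("draft_filing.txt",)).start()
```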
Well, that would be fucked up for sure. Are you suggesting any existing program works like that, or are you just speculating "what if"?
This may be out of date, but in this video by Lawful Masses, lawyers raise concerns about AI software tools that somehow (I don't recall the details) help them understand a case. The issue is that the AI should not use information sourced from one client's confidential case/documents to inform it about another client's case, but they don't know how it works. Responses from Microsoft were not forthcoming.
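What proper isolation would look like is easy to state, even if it's impossible to verify in closed software. A hypothetical sketch (names and structure invented for illustration) of client-scoped retrieval, where every document is tagged with a client ID and a query can only ever touch that client's documents:

```python
# Hypothetical sketch: retrieval hard-scoped per client, so documents
# from one client's case can never inform answers about another's.
from collections import defaultdict

class ClientScopedStore:
    def __init__(self) -> None:
        self._docs = defaultdict(list)  # client_id -> list of documents

    def add(self, client_id: str, text: str) -> None:
        self._docs[client_id].append(text)

    def search(self, client_id: str, term: str) -> list[str]:
        # Only this client's bucket is ever consulted.
        return [d for d in self._docs[client_id] if term.lower() in d.lower()]

store = ClientScopedStore()
store.add("client_a", "Settlement terms: confidential payout schedule.")
store.add("client_b", "Patent dispute over sensor firmware.")

print(store.search("client_b", "settlement"))  # [] - client A's data is invisible
```

Whether any vendor's tool actually enforces a boundary like this is exactly what the lawyers in the video couldn't find out.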
I would argue they can't know unless they have access to the source code to verify what any (local) AI can do (not verify it personally, but via a trusted third-party audit that isn't behind closed doors).
Pretty sure there is such a thing as legally certified software where the liability would then lie with the software vendor.