I am playing with it, sandboxed in an isolated environment, only interacting with a local LLM and only connected to one public service with a burner account. I haven’t even given it any personal info, not even my name.
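For anyone curious what that kind of setup looks like in practice, here is a minimal sketch of one way to do it, assuming Docker and a local Ollama instance on its default port; the image name, service host, and env var names are all hypothetical, and real egress filtering (actually blocking every host except the one allowed service) would need firewall rules beyond this sketch.

```python
# Launch the agent as a disposable, mostly read-only container so a bad run
# can just be thrown away and rebuilt. Assumes Docker + a local Ollama server.
import subprocess

AGENT_IMAGE = "my-agent:latest"                      # hypothetical agent image
LLM_BASE_URL = "http://host.docker.internal:11434"   # local Ollama, default port
ALLOWED_HOST = "example-public-service.com"          # the one burner-account service

def run_sandboxed_agent() -> None:
    """Run the agent in a throwaway container with no persistent state."""
    subprocess.run(
        [
            "docker", "run", "--rm",        # --rm: container is wiped on exit
            "--read-only",                   # image filesystem stays immutable
            "--tmpfs", "/workspace",         # scratch space the agent can trash freely
            "--add-host", "host.docker.internal:host-gateway",  # reach the host's LLM
            "-e", f"LLM_BASE_URL={LLM_BASE_URL}",
            "-e", f"ALLOWED_HOST={ALLOWED_HOST}",
            AGENT_IMAGE,
        ],
        check=True,
    )

if __name__ == "__main__":
    run_sandboxed_agent()
```

With `--rm` and `--tmpfs`, "wipe and rebuild" is just killing the container and starting a fresh one, which is the whole point of keeping the thing in a sandbox.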
It’s super fascinating and fun, but holy shit the danger is outrageous. On multiple occasions it’s misunderstood what I asked and fucked around with its own config files and such. One request I gave it ended in what was essentially suicide: it ate its own settings. I’ve only been running it for about a week and have already had to wipe and rebuild twice (I probably could have fixed it, but that’s what a sandbox is for). I can’t imagine setting it loose on anything important right now.
But it is undeniably cool, and watching the system communicate with the LLM model has been a huge learning opportunity.