Or how about parents regulate their children, so that we don't have government nannies telling full grown adults what they're allowed to do with chatbots?
FaceDeer
Ah, this is that Daenerys bot story again? It keeps making the rounds, always leaving out a lot of rather important information.
The bot actually talked him out of suicide multiple times. The kid was seriously disturbed and his parents were not paying the attention they should have been to his situation. The final chat before he committed suicide was very metaphorical, with the kid saying he wanted to "join" Daenerys in West World or wherever it is she lives, and the AI missed the metaphor and roleplayed Daenerys saying "sure, come on over" (because it's a roleplaying bot and it's doing its job).
This is like those journalists who ask ChatGPT "if you were a scary robot how would you exterminate humanity?" And ChatGPT says "well, poisonous gases with traces of lead, I guess?" And the journalists go "gasp, scary robot!"
My understanding is it's because black holes are the way to maximum entropy. Widely dispersed material has lots of potential energy and lots of possible states, but black holes are "the end" - there's no further change possible once you get there. There is no state of matter or spacetime with more entropy than that.
I love that we're this far along in physics and the question of "what even is gravity anyway?" Is still fundamentally unsolved.
My favourite theory that I've seen so far is "entropy increases, and black holes have maximum entropy of anything in the universe, so everything is always trying to become a black hole." Stuff falls downward just because that's the easiest and most immediate way of making progress towards being a black hole.
Obviously, this is a layman's understanding.
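For what it's worth, there's a real equation behind the "black holes have maximum entropy" claim: the Bekenstein-Hawking formula, which says a black hole's entropy scales with the area of its event horizon rather than its volume (caveat: I'm sketching this from standard textbook form, not deriving it):

```latex
% Bekenstein-Hawking entropy of a black hole with horizon area A
S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 G \hbar}
```

Because the constants work out so that this value is astronomically larger than the entropy of any ordinary matter of the same mass, a black hole really is believed to be the maximum-entropy configuration for a given region, which is the intuition the comments above are gesturing at.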
There's no reason a planet couldn't have a very high inclination.
It's been nearly 20 years. Give it up.
Horses are incredibly expensive to house and maintain. I wouldn't bet on this being more expensive; when you're not using it, you can just stash it in your garage for as long as you want without having to worry about it.
Indeed. I still remember how Trump suspended the TikTok ban before he was actually president just by saying he would do so.
Ah, there's another angle on why Trump has decided to focus so much ire on Harvard. Trump is obsessed with winning a Nobel Peace Prize; since Obama got one, he wants one too.
Given how dramatically LLMs have improved over the past couple of years I think it's pretty clear at this point that AI trainers do know something of what they're doing and aren't just randomly stumbling around.
It's not "anthropomorphic bullshit", it's technical jargon that you're not understanding because you're reading the definitions in the wrong context. AI researchers use terms like "hallucination" to mean specific AI behaviours; the term appears in their scientific papers all the time.
Be that as it may, this particular instance is much more complicated and extreme than the "average," and so makes a poor basis for arguing anything in particular. The details of this specific situation don't back up a simple interpretation.
I would recommend using studies by psychologists as a better basis.