Yes. It's crazy. That's why the vast majority of us don't do it.
It's one thing to be a vegetarian for health or environmental reasons.
When you try to convince people that meat==murder, you come across as a wackadoodle.
nednobbins
That's great news. The other 9 of the 10 biggest protests were extremely successful at effecting change.
Since we made such massive progress on all the others, this is clearly a harbinger of social and political progress.
How would you react if you saw a similar exchange between MAGAs?
MAGA A: .
MAGA B: You don't really mean that, right? It's not all of them.
MAGA A: I'm just joking. Relax.
Would you take that response at face value or would you assume that the joke is a thinly veiled statement of their actual beliefs?
Fuck the whole HP franchise.
The writing was always shitty and the plot was garbage. The whole story was a thinly veiled glorification of British exceptionalism.
The only saving grace of that stinking turd of a franchise is that, in the '90s, it seemed like a good way to get kids to read.
It's a nice idea but the military makes it really hard to do that.
I wouldn't either, but that's exactly what lmsys.org found.
That blog post had ratings between 858 and 1169. Those are slightly higher than the average rating of human users on popular chess sites. Their latest leaderboard shows them doing even better.
https://lmarena.ai/leaderboard has one of the Gemini models with a rating of 1470. That's pretty good.
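For context on what those numbers mean: Elo is a relative scale, so a rating gap maps to an expected score. A rough Python sketch (pitting the 1470 figure above against the 1169 top end of the 2023 range; the specific matchup is just for illustration):

    # Elo expected score: E_A = 1 / (1 + 10 ** ((R_B - R_A) / 400))
    def expected_score(r_a: float, r_b: float) -> float:
        """Expected score (roughly, win probability) for A against B."""
        return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

    # A 1470-rated model vs. an 1169-rated one (top of the 2023 blog-post range):
    print(expected_score(1470, 1169))  # ~0.85, i.e. it scores ~85% against it

So the newer models aren't just nominally higher rated; a 300-point gap means winning the large majority of games.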
I imagine the "author" did something like, "Search http://google.scholar.com/ find a publication where AI failed at something and write a paragraph about it."
It's not even as bad as the article claims.
Atari isn't great at chess. https://chess.stackexchange.com/questions/24952/how-strong-is-each-level-of-atari-2600s-video-chess
Random LLMs were nearly as good 2 years ago. https://lmsys.org/blog/2023-05-03-arena/
LLMs that are actually trained for chess have done much better. https://arxiv.org/abs/2501.17186
Sometimes it seems like most of these AI articles are written by AIs with bad prompts.
Human journalists would hopefully do a little research. A quick search would reveal that researchers have been publishing on this for over a year, so there's no need to sensationalize it. Perhaps the human journalist could have spent a little time explaining why LLMs are bad at chess and how researchers are approaching the problem.
LLMs, on the other hand, are very good at producing clickbait articles with low information content.
That makes sense. Not everything needs to be testable. There are many interesting and important ideas outside of science.
The main problem would be if someone wanted to set policy based on it. That includes the implicit experiment of, "If we adopt policy A we can expect outcome B." If we haven't tested that before turning it into a policy, the policy itself becomes the experiment, and then we need to be very careful about the ethics surrounding such an experiment.
It's kind of like string theory. It has a bunch of interesting conjectures but nobody can figure out a way to test any of it.
Take the "selfish gene" (the idea predates Dawkins). One of the theories states that it may be evolutionarily advantageous for an individual to sacrifice themselves for the group if they share enough DNA. They lose the DNA in their bodies but save the exact same DNA in the bodies of their extended family. That's a nice idea and you can get the math to work out in game theory models but how do we test if that's why ducks sometimes lag behind when a hunter tries to shoot them?
That's not to say it can never be tested. There are other cases where we needed to wait for technological breakthroughs until theories could actually be tested.
Have you looked up the history of the word "moron"?
Over the Iran attack? I'm pretty sure he broke ranks years ago.