Ow god, the bots pretended to be stuff like SA survivors and the like. Also the whole research is invalid just because they cannot tell that the reactions they will get are not also bot generated. What is wrong with these people.
Soyweiser
'a bad person writes a callout post ...'
Ow no, they are going after tracingwoodgrains. ;) (E: ok, I this joke lands differently after reading this: "Crémieux seemed to understand this [this behavior is plagiarism] when former Harvard president Claudine Gay was accused of plagiarism.").
'Here is a ChatGPT legal analysis on ...'
Ow god all sides suck. (Also blog length tweets were a mistake, im not reading all that).
E: read part of the initial complaint
Beefing is a funny thing. Before we invented police and laws as courts, gossip was the only method human beings had to enforce the social compact.
What...
E2: Yes, if you steelman the above statement it makes sense, but that is stilly, you can make almost anything make sense if you steelman it long enough, pigs fly if you count falling as flying, liking romance novels is now a hardcore porn addiction (wonder why he picked her to go after), ignore peer review and blogging can be on the same level as academia. (Wait, I gotta think a bit more about that last one).
Of course there is going to be an ai for every word. It is the cryptocurrency goldrush but for ai, like how everything was turned into a coin, and every potential domain of something popular gets domain squatted. Tech has empowered parasite behaviour.
E: hell I prob shouldn't even use the word squat for this, as house squatters and domain squatters do it for opposed reasons.
Just whole movie praising Peter Weyland and his legacy.
Damn, had missed somebody did an effort ~~post~~ series on all these problems. What I have read so far is pretty good.
Imagine the horrible product they would have created if they had actually followed up on the oppenheimer thing. A soulless vaguely wrong feeling pro technology movie created by altman and musk. The amount of people it would have driven away would have been big.
You, a human can respond like that, a llm, esp a search one with the implied authority it has should admit it doesnt know things. It shouldn't make up things, or use sensational clickbait headlines to make up a story.
I'm glad that messing it up is at least common.
Search engines also must wonder why I'm so interested in couches and coaches because I know which word I mean, I don't always know how it is spelled. There prob is a nice mnemonic involving Vance and Waltz I could think off that would help me with that problem however.
Otoh, this does open up the potential for a headline like 'Chatgpt sanctioned for using stolen cryptocurrency assets.'
I thought of the old sneerclub/ssc poster (def not a regular on the former, while a former regular on the latter) yodatsracist
Allright, let me put my 'warnings for young demonologists' guidebook to the side. ;).
I think unrelated to the attack above, but more about prompt hack security, so while back I heard people in tech mention that the solution to all these prompt hack attacks is have a secondary LLM look at the output of the first and prevent bad output that way. Which is another LLM under the trench coat (drink!), but also doesn't feel like it would secure a thing, it would just require more complex nested prompthacks. I wonder if somebody is just going to eventually generalize how to nest various prompt hacks and just generate a 'prompthack for a LLM protected by N layers of security LLMs'. Just found the 'well protect it with another AI layer' to sound a bit naive, and I was a bit disappointed in the people saying this, who used to be more genAI skeptical (but money).