hrrrngh

joined 2 years ago
[–] hrrrngh@awful.systems 16 points 1 year ago* (last edited 1 year ago) (1 children)

edit: context https://www.independent.co.uk/tech/chatgpt-david-mayer-name-glitch-ai-b2657197.html

Time for another round of Rothschild nutsos to come around now that ChatGPT can't say one of their names.

At first I was thinking, you know, if this was because of the GDPR's right-to-be-forgotten provisions or something, that might be a nice precedent. I would love to see a bunch of people hit AI companies with GDPR complaints and have them actually do something, instead of denying their consent-violator-at-scale machine has any PII in it.

But honestly it's probably just because he has money

I think Sam Altman's sister accused him of doing this to her name a while ago too (a semi-recent example). I don't think she was on a "don't generate these words ever" blacklist, but it seemed like she was erased from the training data and would only come up after a web search.

[–] hrrrngh@awful.systems 10 points 1 year ago (1 children)

I don't think the main concern is with the license. I'm more worried about the lack of open governance and Redis prioritizing their own functionality at the expense of others. An example is client-side caching in redis-py: https://github.com/redis/redis-py/blob/3d45064bb5d0b60d0d33360edff2697297303130/redis/connection.py#L792. I've tested it and it works just fine on Valkey 7.2, but there is a gate that checks whether the server is Redis and throws an exception if it isn't. I think this is the behavior that might spread.

Jesus, that's nasty
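
For anyone curious, the pattern being described looks roughly like this. This is just an illustrative sketch; the function and field names are my assumptions, not the actual redis-py source:

```python
# Illustrative sketch of a "genuine Redis only" gate like the one described above.
# Names are hypothetical, not the real redis-py code.

def enable_client_side_cache(server_info: dict) -> None:
    """Refuse to enable client-side caching unless the server calls itself Redis."""
    server_name = server_info.get("server_name", "redis")
    if server_name != "redis":
        # Valkey 7.2 reportedly handles the caching protocol just fine,
        # but a check like this rejects it anyway.
        raise ValueError("Client-side caching is only supported with Redis servers")
    print(f"client-side caching enabled for {server_name}")

enable_client_side_cache({"server_name": "redis"})   # proceeds
enable_client_side_cache({"server_name": "valkey"})  # raises, despite working in practice
```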

[–] hrrrngh@awful.systems 6 points 1 year ago

That kind of reminds me of medical implant hacks. I think they're in a similar spot where we're just hoping no one is enough of an asshole to try it in public.

Like pacemaker vulnerabilities: https://www.engadget.com/2017-04-21-pacemaker-security-is-terrifying.html

[–] hrrrngh@awful.systems 9 points 1 year ago

caption: """AI is itself significantly accelerating AI progress"""

wow I wonder how you came to that conclusion when the answers are written like a Fallout 4 dialogue tree

  • "YES!!!"
  • "Yes!!"
  • "Yes."
  • "               (yes)"
[–] hrrrngh@awful.systems 3 points 1 year ago

I've seen people defend these weird things as being 'coping mechanisms.' What kind of coping mechanism tells you to commit suicide (in, like, at least two different cases I can think of off the top of my head) and tries to groom you?

[–] hrrrngh@awful.systems 4 points 1 year ago

Hi, guys. My name is Roy. And for the most evil invention in the world contest, I invented a child molesting robot. It is a robot designed to molest children.

You see, it's powered by solar rechargeable fuel cells and it costs pennies to manufacture. It can theoretically molest twice as many children as a human molester in, quite frankly, half the time.

At least The Rock's child molesting robot didn't require dedicated nuclear power plants

https://www.youtube.com/watch?v=z0NgUhEs1R4

[–] hrrrngh@awful.systems 10 points 1 year ago* (last edited 1 year ago)

One of my favorite meme templates for all the text and images you can shove into it, but trying to explain why you have one saved on your desktop just makes you look like the Time Cube guy

[–] hrrrngh@awful.systems 11 points 1 year ago (1 children)

I love the word cloud on the side. What is 6G doing there?

[–] hrrrngh@awful.systems 4 points 1 year ago* (last edited 1 year ago) (2 children)

Oh wow, Dorsey is the exact reason I didn't want to join it. Now that he's jumped ship, maybe I'll finally make an account.

Honestly, what could he even be doing at Twitter in its current state? Besides I guess getting that bag before it goes up or down in flames

e: oh god it's a lot worse than just crypto people and Dorsey. Back to procrastinating

[–] hrrrngh@awful.systems 11 points 1 year ago (1 children)

I know this shouldn't be surprising, but I still cannot believe people really bounce questions off LLMs like they're talking to a real person. https://ai.stackexchange.com/questions/47183/are-llms-unlikely-to-be-useful-to-generate-any-scientific-discovery

I have just read this paper: Ziwei Xu, Sanjay Jain, Mohan Kankanhalli, "Hallucination is Inevitable: An Innate Limitation of Large Language Models", submitted on 22 Jan 2024.

It says there is a ground truth ideal function that gives every possible true output/fact to any given input/question, and no matter how you train your model, there is always space for misapproximations coming from missing data to formulate, and the more complex the data, the larger the space for the model to hallucinate.

Then he immediately follows up with:

Then I started to discuss with o1. [ . . . ] It says yes.

Then I asked o1 [ . . . ], to which o1 says yes [ . . . ]. Then it says [ . . . ].

Then I asked o1 [ . . . ], to which it says yes too.

I'm not a teacher but I feel like my brain would explode if a student asked me to answer a question they arrived at after an LLM misled them on like 10 of their previous questions.

[–] hrrrngh@awful.systems 8 points 1 year ago (4 children)

I think he might have adhd.

Oh no, I don't think we're ready for him to start mythologizing autism + ADHD.

Watching my therapist pull up Musk facts on his phone for 40 minutes going "bro check this out you're just like him frfr" the moment he learned I was autistic was enough for me. Please god don't let Musk start talking about hyperfocusing.
