self

joined 2 years ago
[–] self@awful.systems 8 points 2 years ago (4 children)

(I have the feeling that this comes from the same shithead who pushed to include spicy autocomplete in Firefox.)

it definitely reads like the same shithead, but I’ve had them blocked on Mastodon for some time, so I can’t say for sure whether it was for rampant LLMery or for doing the “without advertising the modern web would die and you don’t want that do you” thing advertisers do constantly

[–] self@awful.systems 21 points 2 years ago

First of all. You could simply prompt the LLM to become conscious, and I bet none of you so-called AI skeptics have noticed that Open-AI has NEVER included text like that in any of their system prompts.

[–] self@awful.systems 15 points 2 years ago

This is a weird kind of assertion. First of all. You could make facts a token value in an LLM if you had some pre-calculated truth value for your data set. That’s not how it works now but it’s a weird assertion to make about an unknown new generation of AI.

fuck almighty where do you people pull this absolute horseshit from

What Strawberry apparently is, is a machine that reasons, which is NOT similar to what Open-AI ever claimed ChatGPT ever was.

well shit, you’ve got a long list of supposed AI researchers to tell that to. here, I’ll make sure you’ve got plenty of time to read their back catalog of utterly fucking stupid claims!

[–] self@awful.systems 4 points 2 years ago (1 children)

what do you mean you don’t like when your package search command is one of several random, probably-unmaintained ecosystem packages that has to very slowly index everything every time nixpkgs updates because it doesn’t have access to the evaluator’s internals?

[–] self@awful.systems 7 points 2 years ago (3 children)

Evaluation is 5-20% faster than 2.18, depending on which benchmark is in use, thanks to eldritch horrors.

this is awesome

nix flake lock --update-input nixpkgs is now the much more reasonable nix flake update nixpkgs.

but this is making me go “fuck yeah” on the inside. it seems like a small change, but I can’t emphasize enough how frequently this command gets used (for every flake dependency, not just nixpkgs), and how long-winded and non-memorable the old form of it was. it’s kind of fucking incredible how many UX warts Nix has just from the old evaluator’s devs digging in their heels on shit like this.
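for anyone who hasn’t hit the new form yet, here’s the before and after side by side, as described in the release notes quoted above (assuming a flake with a `nixpkgs` input; the same pattern applies to any input name):

```shell
# Nix 2.18 and earlier: updating a single flake input meant
# remembering this longwinded subcommand-plus-flag combination
nix flake lock --update-input nixpkgs

# Nix 2.19 and later: same operation, much shorter form —
# just name the input(s) you want updated
nix flake update nixpkgs

# and as before, running it with no input names updates everything
nix flake update
```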

[–] self@awful.systems 22 points 2 years ago

same energy as “your request could not be processed due to the following error: Success”

[–] self@awful.systems 7 points 2 years ago

absolutely!

I do have a question from the Mastodon side of the Fediverse. Is there a cleaner way I can share links to Lemmy communities?

I’ll do a little bit of experimentation tonight on our test instance and see if there’s a cleaner way to do link-only posts from mastodon! this’ll kill two birds with one stone for me — I needed to run some federation tests as part of an infrastructure upgrade I’m looking to deploy to our main instance. unfortunately, the federation between Lemmy and Mastodon is very limited and janky in a lot of ways so there might not be a cleaner way to do it, but the thread you created looks good on our end.

also, I can’t speak to how the experience for Blind users would be in any of the Lemmy apps or its web frontend, but if any of them end up being a more convenient way to interact with our posts, we can definitely get you set up with an account on our instance if desired.

If you want my Javascript prompt injection I made, DM me because I don’t wanna give LLM developers easy ways to put up input and output guardrails against my prompt injection.

definitely! I will reach out on Mastodon when I get the chance; I don’t remember if Lemmy even attempts to federate DMs between us and Mastodon, but I don’t trust it to do it well if it does.

[–] self@awful.systems 17 points 2 years ago (1 children)

Genuine question.

So rude, you didn’t answer my question at all.

yeah find me one single instance of someone doing this “genuine question” shit that doesn’t result in the most bad faith interpretation possible of the answers they get

If I’m missing something obvious I’d love it if you told me.

  • most security vulnerabilities look like they cause the targeted program to spew gibberish, until they’re crafted into a more targeted attack
  • it’s likely that gibberish is the LLM’s training data, where companies are increasingly being encouraged to store sensitive data
  • there’s also a trivial resource exhaustion attack where you have one or more LLMs spew garbage until they’ve either exhausted their paid-for allocation of tokens or cost their hosting organization a relative fuckload of cash
  • either you knew all of the above already and just came here to be a shithead, or you’re the type of shithead who doesn’t know fuck about computer security but still likes to argue about it
  • fuck off
[–] self@awful.systems 8 points 2 years ago

I’m kind of jealous of how readable and usable Robert’s site is compared with a typical React SPA with a tiny enforced font size and awful color contrast and jank everywhere — like the Lemmy frontend, for example

[–] self@awful.systems 12 points 2 years ago (6 children)

this is an excellent post! I really like the examples you’ve given of how to actively resist the discriminatory systems enabled by LLMs, alongside personal examples of how those systems have negatively impacted Blind and marginalized folks. it’s very rare to get this kind of perspective on LLMs and generative AI, and it’s very much appreciated.

this quote towards the end of the post stood out in particular:

(I’m not sure how the quote will be represented by the flawed ActivityPub bridge from Lemmy to Mastodon and then from Mastodon to a screen reader, so I’ll note that the quote starts here:)

To the bafflement of tech people everywhere, books will still be popular even though they work on imaginations and not code. Even though tech people will still not understand books, I, along with others, will still be here providing art because it’s our way of speaking to the world. What’s even better is that people will continue to appreciate and enjoy art instead of mourning the shattering of LLM servers because, well, people are people and people like art. I don’t know what to tell you. Maybe you should learn to art and people instead of code.

(end quote)

this says so much about the extreme lack of imagination we’ve seen among LLM and generative AI boosters; there’s a fundamental flaw in the way they conceptualize and engage with creative work. we’ve seen again and again how those AI boosters will try to appropriate the style and trappings of especially science fiction while demonstrating barely a surface-level understanding of the work — see also our recent posts on awful.systems about how poorly some rather loud voices hyping AI understand Iain M Banks’ Culture novels.

[–] self@awful.systems 8 points 2 years ago

it’s that time in every pet owner’s life where your pet just saunters up to you with this look on their face and you’re like “what did you do. marc! whatever it is spit it out. Marc Andreessen! no you come here and spit it out right now”

[–] self@awful.systems 26 points 2 years ago (1 children)

An AI-driven nagbot will surely fix these systemic issues — Huffington specifically claims the bot will “address growing health inequities” — and revolutionize healthcare, because “behavior change can be a miracle drug, both for preventing disease and for optimizing the treatment of disease.”

fuck off Arianna. there’s so many life-altering, commonplace diseases that behavioral changes don’t do fuck all for, and meanwhile some of the most beneficial behavioral therapy you can do is fucking impossible to get cause insurance won’t cover it in any achievable form. all this horseshit does is, as usual, shift the blame for beyond inadequate healthcare from a thoroughly broken system to the people suffering, because they supposedly didn’t try hard enough to change their behavior and get better.
