chaonaut

joined 8 months ago
[–] chaonaut@lemmy.4d2.org 4 points 2 days ago

It's like supporting those companies and voting for politicians who support them, then denying your responsibility for that.

It really isn't, particularly for those of us who have been getting yelled at for doing exactly not that, and being blamed for not giving full-throated support to Harris even when the campaign specifically told us it didn't need our support and locked us out of speaking up. For those who have been told that our lack of support is why Trump got elected and why Palestinians are being killed. Collapsing the entirety of electoral politics into "we voted for this" is harmfully reductive. We cannot keep telling ourselves that, no matter what we do while working together, the overall result means it is our fault. That literally ignores the actions of political opponents in order to blame ourselves no matter the outcome.

Placing a blanket blame on voters for this is still just electoralism. Voting should be one political expression of many; reducing everything down to the outcome of an election--even if you're blaming just those who voted--doesn't build political movements.

[–] chaonaut@lemmy.4d2.org 8 points 2 days ago (2 children)

Focusing on the culpability of individual voters is just reputation-washing for Israel.

It's like stepping over massive fossil fuel companies to blame someone who put plastic wrap in the trash instead of the recycle bin. This is not how to get people to stay engaged with politics between elections, and actually work together to do something now, which is how individual voters can actually impact the situation. Don't instill hopelessness by focusing on the blame of those so far from direct culpability.

[–] chaonaut@lemmy.4d2.org 39 points 4 days ago

It's not all that much of a conspiracy theory: those pushing this line at the payment processors openly advocate that since LGBTQ+ references sex by way of sexuality and gender, it is therefore sexual content and inappropriate for children. This, of course, conveniently ignores heterosexuality and cisgender identity, because they consider queer people existing to be harmful to children. And trying to get through to them about how important age-appropriate sex education is in combating child abuse is an exercise in frustration.

[–] chaonaut@lemmy.4d2.org 0 points 3 weeks ago

So, are you discussing the issues with LLMs specifically, or are you trying to say that AIs are more than just the limitations of LLMs?

[–] chaonaut@lemmy.4d2.org 0 points 3 weeks ago (2 children)

I mean, I argue that we aren't anywhere near AGI. Maybe we have a better chatbot and autocomplete than we did 20 years ago, but calling that AI? It doesn't really track, does it? With how bad they are at navigating novel situations? With how much time, energy, and data it takes to eke out just a tiny bit more model fitness? Sure, these tools are pretty amazing for what they are, but general intelligences they are not.

[–] chaonaut@lemmy.4d2.org 0 points 3 weeks ago (4 children)

It's questionable to measure these things as being reflective of AI, because what counts as AI changes based on whatever piece of tech is being hawked as AI, and because we're really bad at defining what intelligence is and isn't. You want to claim LLMs as AI? Go ahead, but then you also adopt the problems of LLMs as the problems of AI. Defining AI, and thus its metrics, is a moving target. When we can't agree on what it is, we can't agree on what it can do.

[–] chaonaut@lemmy.4d2.org 0 points 3 weeks ago (6 children)

I mean, sure, in that the expectation is that the article is talking about AI in general, while the cited paper is discussing LLMs and their ability to complete tasks. So we have to agree both that LLMs are what we mean by AI and that their ability to complete tasks is a valid metric for AI. If we accept the marketing hype, then of course LLMs are exactly what we've been talking about with AI, and we've accepted LLMs' features and limitations as what AI is. And if LLMs are prone to filling in whatever best fits the model without regard to accuracy, then by accepting LLMs as what we mean by AI, we accept that AI fits to its model without regard to accuracy.

[–] chaonaut@lemmy.4d2.org 5 points 3 weeks ago (10 children)

Calling AI measurable is somewhat unfounded. Between not having a coherent, agreed-upon definition of what does and does not constitute AI (we are, after all, discussing LLMs as though they were AGI) and the difficulty of pinning down what qualifies as human intelligence, saying that a given metric captures how well a thing is an AI isn't really founded on anything but preference. We could, for example, say that mathematical ability is indicative of intelligence, but claiming FLOPS as a proxy for intelligence falls rather flat. We can measure things about the various algorithms, but that's a long way off from talking about AI itself (unless we've bought into the marketing hype).

[–] chaonaut@lemmy.4d2.org 9 points 3 weeks ago (12 children)

Maybe the marketers should be a bit pickier about what they slap "AI" on, and maybe decision makers should be a little less eager to follow whatever Better Autocomplete spits out. But maybe that's just me, and we really should be pretending that all these algorithms have made humans obsolete and that generating convincing language is better than correspondence with reality.

[–] chaonaut@lemmy.4d2.org 5 points 1 month ago

What I expect is that all the "the FDA doesn't want you to know this" grifters are really excited to have their snake oil supported by the government so they can sell their stuff better. No further thought than "we could make a lot of money doing this," and the same myopic thinking that cares more about next quarter's earnings call than about being in business next year.

[–] chaonaut@lemmy.4d2.org 1 points 1 month ago

No, of course you fall back from your claimed reason, you just want more bloodshed. And I doubt you particularly care whose it is.

[–] chaonaut@lemmy.4d2.org 1 points 1 month ago

Yeah, I already got that you really want people to hurt people with the goal of causing fear, and aren't concerned with the fallout. How are you planning on dealing with the massive industry set up to cultivate and direct fear towards conservative ends? Or is having a theory of change libshit?
