Perspectivist

joined 4 weeks ago
[–] Perspectivist@feddit.uk 1 point 2 weeks ago

No disagreement there. While Trump himself may or may not be guilty of any wrongdoing in this particular case, he sure acts like someone who is. And if he’s not protecting himself, then he’s protecting other powerful people around him who may have dirt on him - leverage they can use to make sure he can’t throw them under the bus without taking himself down in the process.

But that’s a bit beside the point. My original argument was about refraining from accusing him of being a child rapist on insufficient evidence, no matter how much it might serve someone’s political agenda or how satisfying it might feel to finally see him face consequences. If there’s undeniable proof that he is guilty of what he’s being accused of here, then by all means he should be prosecuted. But I’m advocating for due process. These are extremely serious accusations that should not be spread as facts when there’s no way to know - no matter who we’re talking about.

[–] Perspectivist@feddit.uk -2 points 2 weeks ago

It’s actually the opposite of a very specific definition - it’s an extremely broad one. “AI” is the parent category that contains all the different subcategories, from the chess opponent on an old Atari console all the way up to a hypothetical Artificial Superintelligence, even though those systems couldn’t be more different from one another.

[–] Perspectivist@feddit.uk -3 points 2 weeks ago

It’s a system designed to generate natural-sounding language, not to provide factual information. Complaining that it sometimes gets facts wrong is like saying a calculator is “stupid” because it can’t write text. How could it? That was never what it was built for. You’re expecting general intelligence from a narrowly intelligent system. That’s not a failure on the LLM’s part - it’s a failure of your expectations.

[–] Perspectivist@feddit.uk 13 points 2 weeks ago

There are plenty of similarities in the output of both the human brain and LLMs, but overall they’re very different. Unlike LLMs, the human brain is generally intelligent - it can adapt to a huge variety of cognitive tasks. LLMs, on the other hand, can only do one thing: generate language. It’s tempting to anthropomorphize systems like ChatGPT because of how competent they seem, but there’s no actual thinking going on. It’s just generating language based on patterns and probabilities.
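To make “patterns and probabilities” a bit more concrete, here’s a toy sketch in Python - purely illustrative, a made-up bigram table rather than anything resembling a real transformer - showing that the mechanism is “pick a statistically likely next word”, not “look up and verify a fact”:

```python
import random

# Toy "language model": for each word, the observed probabilities of the next word.
# A real LLM learns vastly more of these statistical relationships from its training
# data, but the principle is the same: continue the text plausibly, not truthfully.
bigram_probs = {
    "the":  {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat":  {"sat": 0.6, "ran": 0.4},
    "dog":  {"ran": 0.7, "sat": 0.3},
    "moon": {"rose": 1.0},
    "sat":  {"quietly": 1.0},
    "ran":  {"away": 1.0},
}

def generate(start: str, length: int = 4) -> str:
    words = [start]
    for _ in range(length):
        options = bigram_probs.get(words[-1])
        if not options:
            break
        # Sample the next word by probability - no understanding, no fact-checking.
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat quietly"
```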

[–] Perspectivist@feddit.uk 3 points 2 weeks ago

I have next to zero urge to “keep up with the news.” I’m under no obligation to know what’s going on in the world at all times. If something is important, I’ll hear about it from somewhere anyway - and if I don’t hear about it, it probably wasn’t that important to begin with.

I’d argue the “optimal” amount of news is whatever’s left after you actively take steps to avoid most of it. Unfiltered news consumption in today’s environment is almost certainly way, way too much.

[–] Perspectivist@feddit.uk 56 points 2 weeks ago (7 children)

Large language models aren’t designed to be knowledge machines - they’re designed to generate natural-sounding language, nothing more. The fact that they ever get things right is just a byproduct of their training data containing a lot of correct information. These systems aren’t generally intelligent, and people need to stop treating them as if they are. Complaining that an LLM gives out wrong information isn’t a failure of the model itself - it’s a mismatch of expectations.

[–] Perspectivist@feddit.uk 1 point 2 weeks ago* (last edited 2 weeks ago)

This isn’t a failure of the model - it’s a misunderstanding of what the model is. ChatGPT is a tool, not a licensed practitioner. It has one capability: generating language. That sometimes produces correct information as a side effect of the data it was trained on, but there is no understanding, no professional qualification, and no judgment behind it.

If you ask it whether it’s qualified to act as a therapist, it will tell you no. If you instruct it to role-play as one, it will, however, do that - because following instructions is the thing it’s designed to do. Complaining that a language model behaves like a language model, and then demanding more guardrails to stop people from using it badly, is just outsourcing common sense.

There’s also this odd fixation on Sam Altman as if he’s hand-crafting the bot’s behavior in real time. It’s much closer to an open-ended, organic system that reacts to input than to a curated service. What you get out of it depends entirely on what you put in.

[–] Perspectivist@feddit.uk 0 points 2 weeks ago (3 children)

Trust what? I’m simply pointing out that we don’t know whether he’s actually done anything illegal or not. A lot of people seem convinced that he did - which they couldn’t possibly be certain of - or they’re hoping he did, which is a pretty awful thing to hope for when you actually stop and think about the implications. And then there are those who don’t even care whether he did anything or not - they just want him convicted anyway, which is equally insane.

Also, being “on the list” is not the same thing as being a child rapist. We don’t even know what this list really is or why certain people are on it. Anyone connected to Epstein in any capacity would dread having that list released, regardless of the reason they’re on it, because the result would be total destruction of their reputation.

[–] Perspectivist@feddit.uk 7 points 2 weeks ago

If I meant social media, I would've said so.

[–] Perspectivist@feddit.uk 3 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

> your chatbot shouldn’t be pretending to offer professional services that require a license, Sam.

It generates natural-sounding language. That's all it's designed to do. The rest is up to the user - if a therapy session is what they ask for, then a therapy session is what they get. I don't think it should refuse this request either.
