admin

joined 2 years ago
[–] admin@lemmy.my-box.dev 2 points 2 years ago (2 children)

I agree. That will need to be proven. But when they are better than, say, 90% of all drivers, it would make sense to switch. Waiting until they're "perfect" (which is the requirement I object to) is just needlessly wasting lives.

[–] admin@lemmy.my-box.dev 1 points 2 years ago* (last edited 2 years ago) (1 children)

Yeah, legislation and requirements for a self-driving car to be allowed on the road will have to be updated. But an automated car can't drink and drive, or make the intentional decision to run someone over out of hatred. I don't see how vehicular homicide would apply.

If somebody reprograms a car to murder someone, they are at fault. In all other cases - accidents - the insurance would have to shift from the driver to the car creator.

[–] admin@lemmy.my-box.dev 1 points 2 years ago (1 children)

I never said better than the average driver, I said better than human drivers (preferably by a long shot).

So let's say that means... Better than 90% of all drivers. That isn't going to cost lives, it's going to save them. Not to mention improve traffic flow.

[–] admin@lemmy.my-box.dev 0 points 2 years ago (3 children)

None of those fields have achieved perfection. Airplanes crash, people die in hospitals and space shuttles. If anything, computer assistance has managed to make those safer than before.

If (when) robot cars are safer than human drivers, fewer people will die in traffic accidents. It's not a perfect bar to settle on, but it's better than the current standard.

Again, denying improvements because they're less than perfect is just insane.

[–] admin@lemmy.my-box.dev 8 points 2 years ago (12 children)

As the other guy said. Demanding perfection is insane - we don't demand that from human drivers either. As long as it's better than humans (preferably by a long shot), I'm all in favour.

[–] admin@lemmy.my-box.dev 1 points 2 years ago

You're lucky I can't downvote from my instance (and that you're on reddit)!

[–] admin@lemmy.my-box.dev 2 points 2 years ago

It definitely used to, but I have been using my laptop with dual-boot Ubuntu / Windows 10 since last summer (using either several times per week, and keeping up with all the updates), and not once did the bootloader break.

My biggest problem was chasing down the Windows drivers, but after that it was golden.

[–] admin@lemmy.my-box.dev 4 points 2 years ago

Gotcha, good point.

[–] admin@lemmy.my-box.dev 1 points 2 years ago* (last edited 2 years ago) (1 children)

It's not the same as turning it into a play, but it's doing something with it beyond its intended purpose, specifically with the intention to produce derivatives of it at an enormous scale.

Whether or not a computer needs more or less of it than a human is not a factor, in my opinion. Actually, the fact that more input is required than for a human only makes it worse, since more of the creator's work has to be used without their permission.

Again, the reason why I think it's incomparable is that when a human learns to do this, the damage is relatively limited. Even the best writer can only produce so many pages per day. But when a model learns to do it, the ability to apply it is effectively unlimited. The scale of the infraction is so much more extreme that I don't think it's reasonable to compare them.

Lastly, if I made it sound like that, I apologise; that was not my intention. I don't think it's the model's fault, but that of the people who decided to take somebody's copyrighted work and train an LLM on it (directly, or indirectly by not vetting their input data).

[–] admin@lemmy.my-box.dev 13 points 2 years ago (2 children)

It still records who you talk to, as well as how much and when. That info is held by the biggest peddler of private info out there. There's no way I trust Facebook/Meta as much as any of the other E2E chat clients.
