StopTech

joined 1 month ago
[–] StopTech@lemmy.today 10 points 1 month ago (2 children)
[–] StopTech@lemmy.today 9 points 1 month ago (3 children)

Have you seen Gattaca?

[–] StopTech@lemmy.today 5 points 1 month ago

I wouldn't trust those funds myself. Plenty of oil companies say they're all about reducing CO2, and as I remember, ESG ratings played favorites rather than reflecting actual carbon emissions. Even companies that are trying to reduce emissions can still invade people's privacy, lobby (bribe) for bad legislation, and do other evil things.

[–] StopTech@lemmy.today 16 points 1 month ago* (last edited 1 month ago) (3 children)

It is reality. And unfortunately, "ethical" funds usually just focus on avoiding oil companies or military companies but are fine with AI companies, surveillance companies, eugenics companies and so on. Nobody agrees on what is ethical, I'm afraid. One man's unethical practice is another man's practice it would be unethical to avoid.

[–] StopTech@lemmy.today 0 points 1 month ago (2 children)

I think they will give us the cancer cure, which may even be cheap, but it will come with lots of other downsides for society and for your individual physical and mental health. Technology is like black magic: it solves the problem you asked it to solve, but hands you a thousand new issues that end up worse than the original situation.

[–] StopTech@lemmy.today 5 points 1 month ago* (last edited 1 month ago)

Genetic engineering every little detail could become dirt cheap, but it would still be terrible for humanity: it would reduce genetic diversity, we'd be messing with forces we don't understand in ways that could lead to new diseases or greater population-wide susceptibilities, and the government would also want its say in how your baby is made, so that the child grows up a good little order follower.

[–] StopTech@lemmy.today 29 points 1 month ago (1 children)

As well as the military contractors, insurance companies, big food, big media, big think tanks and consultancy, etc

[–] StopTech@lemmy.today 1 points 1 month ago (1 children)

This is 100% true

No, you appear to be misremembering what you read. The NSA was allegedly concerned that Furbys could record sensitive conversations, and they were banned from Fort Meade. The idea that they recorded sound turned out to be incorrect, but the concern was never about Furbys learning or having artificial intelligence. Besides, bringing this up is a distraction from the verifiable fact that computers can already identify targets in real-time camera feeds and decide whether to pursue and shoot them. You're in denial, my friend.

[–] StopTech@lemmy.today 1 points 1 month ago (4 children)

Someone didn't read the news about the Pentagon pressuring Anthropic because it wants to use AI for fully autonomous weapons.

[–] StopTech@lemmy.today -1 points 1 month ago

People do talk about writing things that "the compiler can understand", so this is nothing new. Also, I think you meant to say that regular expressions understand strings, not patterns, or that regular expression engines understand patterns.
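To make that distinction concrete, here is a minimal Python sketch (the phone-number pattern is just an illustrative example): the engine "understands" the pattern by compiling it into a matcher, which is then applied to input strings.

```python
import re

# The regex engine understands the *pattern*: it compiles it into a matcher.
pattern = re.compile(r"\d{3}-\d{4}")  # hypothetical phone-number pattern

# The compiled matcher is then applied to *strings*.
assert pattern.fullmatch("555-1234") is not None
assert pattern.fullmatch("hello") is None
```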

[–] StopTech@lemmy.today -5 points 1 month ago (2 children)

This depends on the definition of understanding. If by understanding you mean mental processing, then obviously AI can never do that, because it has no mind; it only simulates the behaviors of a mind. But if understanding is instead understood (pun intended) to mean extracting accurate information from something and responding to it in a rational way, then yes, AIs do understand lots of things.

[–] StopTech@lemmy.today -1 points 1 month ago

Arguably, if you give an AI access to the nuclear launch system, then it can cause human extinction "by itself". Every "by itself" extinction scenario requires some pre-existing circumstances, so this qualifies as well as any.

Unlike before, we now have general-purpose AIs that can understand all kinds of scenarios and make decisions in them. That means they can cause extinction with less human guidance. And there's no strong reason to doubt that AI could become as intelligent and autonomous as humans, probably within a decade or two. Then it's pretty much bye-bye, humans.
