this post was submitted on 07 Nov 2023
96 points (100.0% liked)
technology
This is so asinine. ChatGPT-4 does not reason. It does not decide. It does not provide instructions. What it does is write text based on a prompt. That's it. This headline is complete nonsense.
Maybe this is conspiracy-brained, but I am 99% sure that the way people like Hinton talk about this technology being so scary and dangerous is marketing to drive up the hype.
There's no way someone who worked on developing current AI doesn't understand that what he's describing at the end of this article, AI capable of forming its own goals and of basically independent thought, is so radically different from today's probability-based algorithms that it has absolutely zero relevance to something like ChatGPT.
Not that there aren't ways current algorithm-based AI can cause problems, but those are much less marketable than the new, dangerous, sexy sci-fi tech.
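To make the "probability-based" point concrete: here's a toy sketch of text generation as nothing but repeated sampling from conditional word frequencies. This is a deliberately simplified bigram model, not how GPT-4 actually works (real LLMs use neural networks over subword tokens), but the generation loop is the same idea: predict a distribution over the next token, pick one, repeat. No goals, no reasoning anywhere in the loop.

```python
import random
from collections import defaultdict, Counter

def train_bigrams(corpus: str) -> dict[str, Counter]:
    """Count how often each word follows each other word."""
    counts: dict[str, Counter] = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts: dict[str, Counter], start: str, length: int,
             rng: random.Random) -> list[str]:
    """Generate text by repeatedly sampling the next word from the
    conditional frequency distribution. That's the whole 'mind'."""
    out = [start]
    for _ in range(length):
        nxt_counts = counts.get(out[-1])
        if not nxt_counts:  # dead end: no observed continuation
            break
        words, weights = zip(*nxt_counts.items())
        out.append(rng.choices(words, weights=weights)[0])
    return out

corpus = "the model writes text and the model samples text from a distribution"
model = train_bigrams(corpus)
print(" ".join(generate(model, "the", 3, random.Random(0))))
```

Every continuation it produces is just a weighted coin flip over what it has seen before; scale that up by many orders of magnitude and you get something ChatGPT-shaped, not something with goals.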
This is the consensus among AI critics. The people heavily invested in so-called "AI" companies are also the ones pushing the idea that it's super dangerous, because that accomplishes two goals: a) it markets their product, b) it attracts investment into "AI" to solve the problems that other "AI"s create.
AI papers from most of the world: "We noticed a problem with this type of model, so we plugged in this formula here and now it has state-of-the-art performance. No, we don't really know why or how it works."
AI papers from western authors: "If you feed unfiltered data to this model, and ask it to help you do something bad, it will do something bad 😱😱😱"