this post was submitted on 05 Aug 2025

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.


I’m in IT at an upper level and know painfully well what “AI” really is, and that it’s not the disruptor people think it will be. However, I feel like I can’t post that anywhere without being judged for it, as almost every exec I know has bought into it hook, line, and sinker. Even other people I talk to about the issues and limitations look at me like I’m completely weird: “You’re in IT and you don’t embrace AI? WTF is wrong with you?”

So what do you all do? I don’t want to do anything career-limiting, but I feel like I’m screaming in the dark, seeing where things will really go. It reminds me a lot of the move to cloud, when everyone went all in without knowing the real ramifications.

[–] Canconda@lemmy.ca 2 points 1 day ago* (last edited 1 day ago) (1 children)

Specialized, yes, but generative AI is an entirely different subject from cybersecurity applications. These are not general-purpose models... they're specifically trained and tasked to carry out cyberattacks.

  1. Hardware vulnerabilities. There are millions of devices that an AI could easily cross-reference against a database of known hardware vulnerabilities. Consider the Windows 10 or Android 12 situation: IoT devices that are no longer being updated. AI could mass-target devices with any known vulnerability.

  2. Brute-forcing passwords and cross-referencing libraries of leaked credentials. Basically what scammers currently do, but x1000000. Weak passwords below 12-16 characters may become obsolete.

  3. Scalability. One AI agent could be the equivalent of 100 human cybersecurity agents. Meanwhile, the minimum skill level to launch these AI agents will be far, far below the skill requirements of becoming a cybersecurity professional.
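The cross-referencing in point 1 amounts to a simple join between a device inventory and a vulnerability database. A rough sketch (all device data and CVE-style IDs below are made-up placeholders):

```python
# Hypothetical sketch: match a device inventory against a table of
# known vulnerabilities. IDs like "CVE-XXXX-0001" are placeholders.

known_vulns = {
    ("android", 12): ["CVE-XXXX-0001"],
    ("windows", 10): ["CVE-XXXX-0002"],
}

devices = [
    {"id": "cam-01", "os": ("android", 12)},
    {"id": "pc-07", "os": ("windows", 10)},
    {"id": "pc-09", "os": ("windows", 11)},  # patched OS, no match
]

# Cross-reference: keep only devices whose OS has a known vulnerability.
exposed = [
    (d["id"], v)
    for d in devices
    for v in known_vulns.get(d["os"], [])
]
print(exposed)
```

The point of the sketch is that this lookup is trivially automatable; the hard parts in practice are building the inventory and keeping the vulnerability table current.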

AI is better at computer stuff just like humans are better at human stuff.
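On point 2, the 12-16 character threshold follows from simple keyspace math. A rough sketch, assuming truly random passwords drawn from the ~94 printable ASCII characters:

```python
import math

def keyspace_bits(length: int, alphabet_size: int = 94) -> float:
    """Bits of entropy in a random password over a given alphabet."""
    return length * math.log2(alphabet_size)

# Each extra character multiplies the search space by ~94.
for n in (8, 12, 16):
    print(f"{n} chars: ~{keyspace_bits(n):.0f} bits")
```

Real-world passwords are far weaker than this because people don't choose them randomly, which is exactly what cross-referencing leaked credential libraries exploits.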

[–] Opisek@lemmy.world 2 points 1 day ago (1 children)

I agree with the cross-referencing and scalability, but can you explain how an LLM might be faster at password bruteforcing at all? Those models are not known for their speed.

[–] Canconda@lemmy.ca 0 points 21 hours ago* (last edited 16 hours ago)

> LLM might be faster at password bruteforcing at all

AI agents can use automation tools and are not limited to being chatbots. The LLM just gives dumbasses like you and me the ability to communicate with them.
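A minimal sketch of that "tools, not chat" point: the model only chooses which tool to run and with what arguments, while ordinary code does the actual work. Everything here is hypothetical, standing in for a real agent framework:

```python
# Hypothetical agent loop. fake_model stands in for an LLM call that
# returns a structured tool choice; TOOLS holds the actual automation.

def fake_model(observation: str) -> dict:
    """Stand-in for an LLM: picks a tool based on what it last saw."""
    if "start" in observation:
        return {"tool": "scan", "args": {"host": "192.0.2.1"}}
    return {"tool": "done", "args": {}}

TOOLS = {
    "scan": lambda host: f"scanned {host}: 2 open ports",
}

def run_agent(max_steps: int = 5) -> list[str]:
    observation, log = "start", []
    for _ in range(max_steps):
        action = fake_model(observation)
        if action["tool"] == "done":
            break
        # The tool, not the model, performs the operation.
        observation = TOOLS[action["tool"]](**action["args"])
        log.append(observation)
    return log

print(run_agent())
```

The speed concern about LLMs misses this split: the model makes a handful of slow decisions, and fast conventional tooling executes each one.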

An AI agent could triage vast libraries of vulnerable targets, allocate server resources to run multiple attack types in tandem, and carry out cyber warfare on a scale that would require hundreds, possibly thousands, of human agents.

AI could develop innovative malware that simultaneously causes harm and obfuscates its presence. It could coordinate DDoS attacks against rival cybersecurity assets, or attack power stations.

I am far from the only person concerned about this. https://gizmodo.com/get-ready-the-ai-hacks-are-coming-2000639625