BlueMonday1984

joined 2 years ago
[–] BlueMonday1984@awful.systems 11 points 3 weeks ago (1 children)

Found an archive of vibe-coding disasters recently - recommend checking it out.

[–] BlueMonday1984@awful.systems 12 points 3 weeks ago* (last edited 3 weeks ago) (3 children)
[–] BlueMonday1984@awful.systems 13 points 3 weeks ago* (last edited 3 weeks ago)

Found a good security-related sneer in response to a low-skill exploit in Google Gemini (tl;dr: "send Gemini a prompt in white-on-white/0px text"):

I've got time, so I'll fire off a sidenote:

In the immediate term, this bubble's gonna be a goldmine of exploits - chatbots/LLMs are practically impossible to secure in any real way, and will likely be the most vulnerable part of any cybersecurity system under most circumstances. A human can resist being socially engineered, but these chatbots can't really resist being jailbroken.

In the longer term, the one-two punch of vibe-coded programs proliferating in the wild (featuring easy-to-find and easy-to-exploit vulnerabilities) and the large scale brain drain/loss of expertise in the tech industry (from juniors failing to gain experience thanks to using LLMs and seniors getting laid off/retiring) will likely set back cybersecurity significantly, making crackers and cybercriminals' jobs a lot easier for at least a few years.

[–] BlueMonday1984@awful.systems 8 points 3 weeks ago (1 children)

Found a neat tangent whilst going through that thread:

The single most common disciplinary offense on scpwiki for the past year+ has been people posting AI-generated articles, and it is EXTREMELY rare for any of those cases to involve a work that had been positively received

On a personal note, I expect the Foundation to become a reliable source of post-'22 human-made work for the same reasons I stated Newgrounds would recently:

  • An explicit ban on AI slop, which deters AI bros and allow staff to nuke it on sight

  • A complete lack of an ad system, which prevents content farms from setting up shop

  • Dedicated quality control systems (deletion and rewrite policies, in this case) which prevent slop from gaining a foothold and drowning out human-made work

[–] BlueMonday1984@awful.systems 5 points 3 weeks ago

Tangential: I’ve heard that there are 3D printer people that print junk and sell them. This would not be much of a problem if they didn’t pollute the spaces they operate in.

So, essentially AI slop, but with more microplastics. Given the 3D printer bros are much more limited in their ability to pollute their spaces (they have to pay for filament/resin, they're physically limited in where they can pollute, and they produce slop much slower than an LLM), they're hopefully easier to deal with.

[–] BlueMonday1984@awful.systems 9 points 3 weeks ago (2 children)

Similarly, at the chip production facilities, a committee of representatives stands at the end of the production line basically and rolls a ten-sided die for each chip; chips that don’t roll a 1 are destroyed on the spot.

Ah, yes, artificially kneecap chip fabs' yields, I'm sure that will go over well with the capitalist overlords who own them

[–] BlueMonday1984@awful.systems 7 points 3 weeks ago

New post from Matthew Hughes: People Are The Point, effectively a manifesto against gen-AI as a concept.

[–] BlueMonday1984@awful.systems 6 points 3 weeks ago (5 children)

The only complexity theory I know of is the one which tries to work out how resource-intensive certain problems are for computers, so this whole thing sounds iffy right from the get-go.

[–] BlueMonday1984@awful.systems 11 points 3 weeks ago (1 children)

The deluge of fake bug reports is definitely something I should have noted as well, since that directly damages FOSS' capacity to find and fix bugs.

Baldur Bjanason has predicted that FOSS is at risk of being hit by "a vicious cycle leading to collapse", and security is a major part of his hypothesised cycle:

  1. Declining surplus and burnout leads to maintainers increasingly stepping back from their projects.

  2. Many of these projects either bitrot serious bugs or get taken over by malicious actors who are highly motivated because they can’t relay on pervasive memory bugs anymore for exploits.

  3. OSS increasingly gets a reputation (deserved or not) for being unsafe and unreliable.

  4. That decline in users leads to even more maintainers stepping back.

[–] BlueMonday1984@awful.systems 14 points 3 weeks ago (5 children)

Potential hot take: AI is gonna kill open source

Between sucking up a lot of funding that would otherwise go to FOSS projects, DDOSing FOSS infrastructure through mass scraping, and undermining FOSS licenses through mass code theft, the bubble has done plenty of damage to the FOSS movement - damage I'm not sure it can recover from.

[–] BlueMonday1984@awful.systems 10 points 3 weeks ago (1 children)

Reading through some of the examples at the end of the article it’s infuriating when these slop reports have opened and when the patient curl developers try to give them benefit of the doubt the reporter replies with “you have a vulnerability and I cannot explain further since I’m not an expert”

At that point, I feel the team would be justified in telling these slop-porters to go fuck themselves and closing the report - they've made it crystal clear they're beyond saving.

(And on a wider note, I suspect the security team is gonna be a lot less willing to give benefit of the doubt going forward, considering the slop-porters are actively punishing them for doing so)

[–] BlueMonday1984@awful.systems 10 points 3 weeks ago

This is pure speculation, but I suspect machine learning as a field is going to tank in funding and get its name dragged through the mud by the popping of the bubble, chiefly due to its (current) near-inability to separate itself from AI as a concept.

view more: ‹ prev next ›