ChairmanMeow

joined 2 years ago
[–] ChairmanMeow@programming.dev 5 points 2 weeks ago (10 children)

I can think it's a messed up fantasy, but that doesn't mean it should immediately be banned by a payment processor.

Regardless, there are tons of studies showing that consuming this kind of porn actually helps prevent people from acting on these fantasies. The net result is likely less sexual abuse, not more. Because it's fantasy media, it likely helps keep the fantasy a fantasy: it gives people an outlet.

[–] ChairmanMeow@programming.dev 15 points 2 weeks ago (24 children)

"Porn" is extremely broad. There's plenty of perfectly ethical porn around. Most major producers have pretty strong standards these days. It's not the same industry as it was 10 years ago.

But in this specific case they went after a porn game, one not featuring real people. There's basically no real harm here. People occasionally argue that porn addiction is a problem, but that's mostly an addiction problem, as with most addictions: the thing one is addicted to isn't the issue, it's the very nature of being addicted that causes the harm.

It's fine of course to dislike porn, but to effectively ban people from producing and consuming it is an entirely different matter. That does seem like a massive encroachment on individual rights to me.

[–] ChairmanMeow@programming.dev 1 points 2 weeks ago (1 children)

That's applying existing solutions to a different programming language or domain, but ultimately every single technique used already exists. It only applied what it knew; it did not come up with something new. The problem as stated isn't really "new" either: image extraction, conversion and rendering isn't exactly a novel problem.

I'm not disputing that LLMs can speed up some work; I know they occasionally do so for me as well. But what you have to understand is that the LLM only remembered similar problems and their solutions; it did not at any point invent something truly new. I understand the distinction is difficult to make.

[–] ChairmanMeow@programming.dev 1 points 2 weeks ago (3 children)

You're referring to more generic machine learning, not LLMs. These are vastly different technologies.

And I have used them for programming; I know their limitations. They don't really transfer solutions to new problems, not on their own anyway; it usually requires pretty specific prompting. At best they can apply known solutions to similar problems, but even then it's not truly generalised, even if it seems to work for many cases.

That's the trap you're falling into as well: LLMs look like they're doing all this stuff because they're trained on data produced by people who actually do it. But they can't think of something truly novel. LLMs are mathematically unable to truly generalise; if they could, it would prove P=NP (there was a paper from a researcher in IIRC Nijmegen that showed this). She also showed they won't scale, and lo and behold, LLM performance is plateauing hard (except in very synthetic, artificial benchmarks designed to make LLMs look good).

[–] ChairmanMeow@programming.dev 8 points 2 weeks ago (5 children)

Well the thing is, LLMs don't seem to really "solve" complex problems. They remember solutions they've seen before.

The example I saw was asking an LLM to solve "Towers of Hanoi" with 100 disks. This is a common recursive programming problem that takes a human quite a while to write out, and the LLM manages it easily. But when asked to solve the same problem with, say, 79 disks, or 41, or some other oddball number, the LLM fails, despite the problem being simpler(!).
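For reference, here's the standard recursive solution being alluded to, as a minimal Python sketch (function and peg names are my own): the disk count is just a parameter, so a program that solves 100 disks solves 41 for free, which is what makes the failure telling.

```python
# Classic recursive Towers of Hanoi: move n disks from peg `src`
# to peg `dst`, using `aux` as the spare peg.
def hanoi(n, src="A", dst="C", aux="B", moves=None):
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, aux, dst, moves)  # move n-1 disks out of the way
        moves.append((src, dst))            # move the largest disk
        hanoi(n - 1, aux, dst, src, moves)  # stack the n-1 disks back on top
    return moves

# Any n takes exactly 2**n - 1 moves; nothing about the code changes
# between n=41 and n=100.
print(len(hanoi(3)))  # → 7
```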

It can do pattern matching and provide solutions, but it's not able to come up with truly new solutions. It does not "think" in that way. LLMs are amazing data storage formats, but they're not truly 'intelligent' in the way most people think.

[–] ChairmanMeow@programming.dev 16 points 2 weeks ago (9 children)

Completion is not the same as only returning the exact strings in its training set.

LLMs don't really seem to display true inference or abstract thought, even when it seems that way. A recent Apple paper demonstrated this quite clearly.

[–] ChairmanMeow@programming.dev 1 points 2 weeks ago

Oh that's probably right actually.

I don't know anymore. But for me that probably means I shouldn't give it a rewatch; if it was any good, I'd remember it better, I think.

[–] ChairmanMeow@programming.dev 1 points 2 weeks ago (2 children)

IIRC not quite? I think he scolded her for getting near those caves. But I don't quite remember.

[–] ChairmanMeow@programming.dev 4 points 2 weeks ago (5 children)

It's Sith Rey, whom she encounters during a vision while training under Luke.

[–] ChairmanMeow@programming.dev 9 points 2 weeks ago

To lower prices presumably.

[–] ChairmanMeow@programming.dev 2 points 2 weeks ago

I understand the sentiment, but that's an extremely small subset of people who are inconvenienced, in exchange for a significant reduction in the plastic littering rivers, seas and oceans.

[–] ChairmanMeow@programming.dev 4 points 2 weeks ago (3 children)