drhead

joined 5 years ago
[–] drhead@hexbear.net 6 points 2 years ago

$10k sounds a bit too good to be true? Probably not including "installation".

[–] drhead@hexbear.net 9 points 2 years ago

We're just going to uncover a whole DC Capitol cruising scene by the end of this aren't we?

Hope they had fun.

[–] drhead@hexbear.net 1 points 2 years ago

> Meta and Google open sources PyTorch and Tensorflow so people can hopefully make one better than the other.

That's a bit optimistic at this point... PyTorch is basically the Windows of machine learning libraries -- it's not particularly great on its own merits, because a lot of core features (XLA support, JIT compilation) were clearly added as afterthoughts and have very apparent issues, but everyone uses it because everyone uses it. It's a perfect illustration of why "move fast and break things" maybe isn't such a great philosophy for important libraries.

[–] drhead@hexbear.net 1 points 2 years ago (1 children)

> I think of random furries online who just dislike AI art

A few people I know are actually getting harassment, up to and including death threats, from this group. Unfortunately, they're also part of that movement, and they tend to be the ones freshest in my mind at any given time.

[–] drhead@hexbear.net 2 points 2 years ago (1 children)

> and kind of an interesting fractal when you think about how most generative ML models are trained by adjusting their parameters to maximize the likelihood that they fool a so-called discriminator model

This isn't as common anymore: most modern image models are diffusion models, which don't rely on a discriminator at all but instead transform noise into an image through an iterative refinement process. GANs are annoying to train and don't work quite as well for image synthesis, but they are still somewhat used as components (like as an encoder that transforms an image into a latent image so it is easier to process, then decodes it back at the end, e.g. Stable Diffusion's VAE) or as extra models for other processing (like ESRGAN and its derivatives, which are fairly old at this point and often used for image upscaling or sometimes for removing compression noise). The main force pushing AI model output to be less detectable is that AI models are built to represent the distribution of the dataset they are trained on, and over time better-designed models and training regimes will fit that distribution better, which by definition means outputs become harder to distinguish from the dataset.
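The iterative-refinement idea is easier to see in code than in prose. This is a toy sketch only: in a real diffusion model, the noise predictor is a trained neural network, whereas the hypothetical `predict_noise` below just measures the residual from a fixed target value, so only the loop structure is faithful.

```python
import random

# Toy sketch of diffusion-style iterative refinement (NOT a real trained model).
# A real model learns to estimate the noise that was mixed into dataset images;
# this hypothetical stand-in cheats by knowing the target directly.

TARGET = 0.5  # stand-in for "a sample from the data distribution"

def predict_noise(x, t):
    # Hypothetical denoiser: the residual between the current sample and the target.
    return x - TARGET

def sample(steps=50, seed=0):
    rng = random.Random(seed)
    x = rng.gauss(0.0, 1.0)        # start from pure Gaussian noise
    for t in range(steps, 0, -1):  # refine over many small steps
        x = x - predict_noise(x, t) / t
    return x

print(abs(sample() - TARGET))  # the sample converges onto the target
```

The point of the loop is that there is no adversary anywhere in it -- the model is pulled toward the data distribution directly, which is why the discriminator framing doesn't apply.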

As far as I have seen, the AI classifier arms race is already very far behind on the classifier side. I have seen far more cases of things like ZeroGPT returning false positives than I have seen true positives that don't include "As a large language model...". I have seen plenty of instances where people fed a photo from the current conflict in Israel to an AI classifier site and confidently declared a 97% chance of it being AI, when visually the photo shows no signs of being fake and it's far more likely that it's a real photo that simply doesn't show what is claimed. (This shows that people need to learn more about propaganda in general: the base unit of propaganda is not lies but emphasis, so in most cases you should be warier of context than of whether the information is factual.) The fact that people blindly trust AI classifiers is arguably somewhat more damaging right now than generative AI models themselves.

[–] drhead@hexbear.net 1 points 2 years ago (3 children)

That sounds like a very bad faith reading.

I am sure there are plenty of people in the movement who are only looking for that, and I support things like the Writers Guild wanting protections in their contracts. But that is not the dominant theme in the anti-AI movement. By far the most prominent voices are large corporations and a handful of fairly successful independent artists who are interested in strengthening copyright, which will be of little benefit to anyone not already wealthy enough to pursue a copyright infringement case. There are also plenty of people who do actually want to ban the technology outright, or who fantasize about sabotaging it somehow; I don't know how anyone could follow anti-AI discourse and not see any of that.

The likely outcome of strengthening copyright, though, is that large media companies will continue to displace workers using AI tools while also capturing a larger share of the money, either by selling access to datasets built from their internal libraries or by leveraging their exclusive access to that data, none of which actually benefits artists. IP law is not there to protect small artists; it is only capable of protecting those who can afford to go to court over it, and everyone else will get fucked over as usual. But I'm sure the Copyright Alliance, and the handful of independent artists they want to present as a human face, will be pretty happy about it.

The one thing this could restrict is open-source development of these models, which would make them harder to access for any independent artist who wishes to use them, by ensuring they stay reliably behind a paywall and generate profits for either an AI company or a media company. (If use of AI tools becomes the prevailing standard, that access will be necessary; if independent artists will be fine without them, then presumably nothing needs to be done at all.) At best, this leaves independent artists slightly worse off once you account for the effort spent putting the plan into action; at worst, it makes things far more profitable for tech companies and media companies alike.

If a movement claims to be acting in the name of labor, but material analysis shows its plan is obviously DOA and if anything will make the issue worse, I'm going to oppose it. I will keep having heavy disagreements with the anti-AI movement as long as its dominant messaging clings to IP law in the hope that it will somehow magically transform into something that benefits workers, without effort comparable to what it would take to overthrow capitalism outright.

[–] drhead@hexbear.net 2 points 2 years ago (5 children)

Depends on where you look, really. Most of the interesting new developments (and the bulk of what's available only for open-source models and not commercial ones, since commercial services can't adapt these things and make them user-friendly fast enough) have been a bunch of conditioning models, whose only purpose is adding another layer of human input. And they're usually extremely useful, because there's far more that can be expressed spatially than you can express with text.
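The idea behind spatial conditioning can be sketched in miniature. This is a hypothetical illustration, not any real model's API: a ControlNet-style setup injects a user-supplied map (edges, depth, pose) through learned layers, while the toy `make_input` below just pairs the condition with the latent point-by-point to show why spatial input carries information text can't.

```python
# Toy sketch of spatial conditioning (hypothetical, not a real library API).
# The "latent" is the image being generated; the "condition map" is a
# same-sized map the user provides, e.g. an edge sketch. Real conditioning
# models feed this through trained layers; here we just pair the values up
# so each spatial position carries both signals.

def make_input(latent, condition_map):
    """Pair the latent with the spatial condition, position by position."""
    assert len(latent) == len(condition_map), "spatial sizes must match"
    return [(l, c) for l, c in zip(latent, condition_map)]

latent = [0.1, -0.3, 0.7]  # toy 1-D "latent image"
edges  = [1.0,  0.0, 1.0]  # toy 1-D "edge map" the user drew

print(make_input(latent, edges))  # [(0.1, 1.0), (-0.3, 0.0), (0.7, 1.0)]
```

The point is that the condition lives at every spatial position, so the user controls *where* things go, which a text prompt fundamentally can't express.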

Yeah, the instant art button is what gets the most attention (usually in the form of anime girls with anatomically impossible proportions since straight people are boring), but you can also definitely make things more complicated and gain far more control in the process, and I see plenty of people who came for the instant art ending up doing this down the line. Plenty even going as far as picking up a pen tablet and developing conventional drawing skills to use alongside it. At some point along that process, I think it's clear that it starts being used as a tool.

[–] drhead@hexbear.net 4 points 2 years ago (7 children)

> giving every user knobs and tools and making it unintuitive in all the ways art creation software are on purpose because they're necessary for being an actual creation tool.

remove the intent, and you have the current state of open source AI

[–] drhead@hexbear.net 9 points 2 years ago (2 children)

The better (materialist) argument for supporting AI (or at least opposing the current anti-AI movement) would be more along the lines that the Luddites were wrong because they were fighting the means of production, which is pointless because it amounts to fighting the tendency of the rate of profit to fall. The only way to solve the issues with AI and its impacts on labor would be to attack the relations of production, which would remove the need to do anything about the technology itself (good thing, too, because the sheer effort required to remove all generative AI from existence and keep it suppressed indefinitely would make overthrowing an entire social order look easy by comparison).

The linked argument does not cover this, it is instead comparing it to the aesthetics of reaction, which is the least useful thing that could be done unless they're just looking for a talking point.

[–] drhead@hexbear.net 6 points 2 years ago (2 children)

Elite: Dangerous

No motion controls or anything, so it isn't the fullest demonstration of VR, but (with the exception of the on-foot content in Odyssey) everything it does do in VR, from how it handles the UI to the overall feeling of actually being inside your own ship and piloting it, is done extremely well. Paired with its sound design, it should be a very memorable experience. Also, hope you don't have too much motion sickness, because piloting ship-launched fighters in VR is way too fun not to experience at least once.

[–] drhead@hexbear.net 13 points 2 years ago

nah this is the coolest thing anyone has done there most likely. it elevates the space. he is not disrespecting his workplace.

[–] drhead@hexbear.net 30 points 2 years ago (3 children)

Religious people have been doing shit like this for centuries, this is just the same thing in a shiny new package, just like the whole "Fight The New Drug" campaign.
