this post was submitted on 11 Aug 2025
939 points (86.7% liked)

Lemmy Shitpost

33815 readers
3231 users here now

Welcome to Lemmy Shitpost. Here you can shitpost to your heart's content.

Anything and everything goes. Memes, Jokes, Vents and Banter. Though we still have to comply with lemmy.world instance rules. So behave!


Rules:

1. Be Respectful


Refrain from using harmful language pertaining to a protected characteristic: e.g. race, gender, sexuality, disability or religion.

Refrain from being argumentative when responding or commenting to posts/replies. Personal attacks are not welcome here.

...


2. No Illegal Content


Do not post content that violates the law. Any post/comment found to be in breach of the law will be removed and handed over to the authorities if required.

That means:

-No promoting violence/threats against any individuals

-No CSA content or Revenge Porn

-No sharing private/personal information (Doxxing)

...


3. No Spam


Posting the same post, no matter the intent, is against the rules.

-If you have posted content, please refrain from re-posting said content within this community.

-Do not spam posts with intent to harass, annoy, bully, advertise, scam or harm this community.

-No posting Scams/Advertisements/Phishing Links/IP Grabbers

-No Bots, Bots will be banned from the community.

...


4. No Porn/Explicit Content


-Do not post explicit content. Lemmy.World is not the instance for NSFW content.

-Do not post Gore or Shock Content.

...


5. No Inciting Harassment, Brigading, Doxxing or Witch Hunts


-Do not Brigade other Communities

-No calls to action against other communities/users within Lemmy or outside of Lemmy.

-No Witch Hunts against users/communities.

-No content that harasses members within or outside of the community.

...


6. NSFW should be behind NSFW tags.


-Content that is NSFW should be behind NSFW tags.

-Content that might be distressing should be kept behind NSFW tags.

...

If you see content that is a breach of the rules, please flag and report the comment and a moderator will take action where they can.


Also check out:

Partnered Communities:

1. Memes

2. Lemmy Review

3. Mildly Infuriating

4. Lemmy Be Wholesome

5. No Stupid Questions

6. You Should Know

7. Comedy Heaven

8. Credible Defense

9. Ten Forward

10. LinuxMemes (Linux themed memes)


Reach out to Striker.

All communities included on the sidebar are to be made in compliance with the instance rules.

founded 2 years ago
[–] rustydrd@sh.itjust.works 29 points 2 days ago (1 children)

Lots of AI is technologically interesting and has tons of potential, but the kind of chatbot and image/video generation stuff we've got now is just dumb.

[–] MrMcGasion@lemmy.world 26 points 2 days ago* (last edited 2 days ago) (3 children)

I firmly believe we won't get most of the interesting, "good" AI until after this current AI bubble bursts and goes down in flames. Once AI hardware is cheap, interesting people will use it to make cool things. But right now, the big players in the space are drowning out anyone who might do real AI work with genuine potential, by throwing more and more hardware and money at LLMs and generative AI models, because they don't understand the technology and see it as a way to get rich and powerful quickly.

[–] NewDayRocks@lemmy.dbzer0.com 4 points 2 days ago (2 children)

AI is good and cheap now because businesses are funding it at a loss, so I'm not sure what you mean here.

The problem is that it's cheap, so anyone can make whatever they want, and most people make low-quality slop, hence why it's not "good" in your eyes.

Making a cheap or efficient AI doesn't help the end user in any way.

[–] SolarBoy@slrpnk.net 7 points 2 days ago (2 children)

It appears good and cheap, but it's actually burning money, energy and water like crazy. I think somebody mentioned that generating a 10-second video consumes about as much energy as riding a bike for 100 km.

It's not sustainable. I think what the person above you is referring to is us eventually managing to make LLMs and the like that can run locally on a phone or laptop with good results. That would get people to experiment and try things out themselves, instead of depending on paying monthly for some service that can change at any time.

[–] krunklom@lemmy.zip 2 points 2 days ago

I mean, I have a 15-amp fuse in my apartment and a 10-second video takes like 10 minutes to make. I don't know how much energy a 4090 draws, but anyone who has an issue with me using mine to generate a 10-second video better not play PC games.
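For scale, a rough back-of-envelope sketch in Python, assuming a 4090 draws roughly 450 W under full load (an assumption, not a measurement) and using the 10-minute render time mentioned above:

```python
# Rough back-of-envelope for the local-generation case.
# Assumptions: ~450 W full-load draw for an RTX 4090, ~10 minutes per 10-second clip.
gpu_watts = 450
render_minutes = 10

energy_wh = gpu_watts * render_minutes / 60
print(f"~{energy_wh:.0f} Wh per clip")  # ~75 Wh, i.e. about 0.075 kWh

# An hour of demanding PC gaming at a similar draw would be ~450 Wh,
# so one locally generated clip sits well inside an ordinary gaming session.
# Data-center video models running across many accelerators can cost far more per clip.
```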

[–] NewDayRocks@lemmy.dbzer0.com 1 points 2 days ago

You and OP are misunderstanding what is meant by good and cheap.

It's not cheap from a resource perspective, like you say. However, that is irrelevant to the end user. It's "cheap" already because it is either free or costs the user considerably less than the resources it consumes. OpenAI or Meta or Twitter are paying the cost. You do not need to pay for a monthly subscription to use AI.

So the quality of the content created is not limited by cost.

If the AI bubble popped, that wouldn't improve AI quality.

[–] MrMcGasion@lemmy.world 2 points 2 days ago (1 children)

I'm using "good" in almost a moral sense. The quality of output from LLMs and generative AI is already about as good as it can get from a technical standpoint; continuing to throw money and data at it will only yield minimal improvement.

What I mean by "good AI" is the potential of new types of AI models trained for things like diagnosing cancer, and other predictive tasks we haven't thought of yet that actually have the potential to help humanity (and not just put artists and authors out of their jobs).

The work of training new, useful AI models is going to be done by scientists and researchers, probably on limited budgets, because there won't be a clear profit motive, and they won't be able to afford thousands of the $20,000 GPUs being thrown at LLMs and generative AI today. But as the current AI race crashes and burns, today's hardware will become affordable second-hand and hopefully actually get used for useful AI projects.

[–] NewDayRocks@lemmy.dbzer0.com 1 points 2 days ago

Ok. Thanks for clarifying.

Although I am pretty sure AI is already used in the medical field for research and diagnosis. This "AI everywhere" trend you are seeing is the result of everyone trying to stick AI into everything, in every which way.

The thing about the AI boom is that lots of money is being invested into all fields. A bubble pop would result in investment money drying up everywhere, not make access to AI more affordable as you are suggesting.

[–] FauxLiving@lemmy.world -1 points 1 day ago (2 children)

I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames.

I can't imagine that you read much about AI outside of web sources or news media, then. The exciting uses of AI are not LLMs and diffusion models, though that is all the public talks about when they talk about 'AI'.

For example, we have been trying to find a way to predict protein folding for decades. Using machine learning, a team was able to train a model (https://en.wikipedia.org/wiki/AlphaFold) to predict the structure of proteins with high accuracy. Other scientists have used similar techniques to train a diffusion model that will generate a string of amino acids which will fold into a structure with the specified properties (like how image description prompts are used in an image generator).

This is particularly important because, thanks to mRNA technology, we can write arbitrary sequences of mRNA which will co-opt our cells to produce said protein.
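As a toy illustration of that last step: mapping a protein sequence to one possible mRNA coding sequence is mechanical once you pick a codon per amino acid. The sketch below uses a tiny, arbitrary subset of the standard codon table and ignores everything a real mRNA design needs (codon optimization, UTRs, capping and so on):

```python
# Toy reverse-translation: one arbitrary codon chosen per amino acid.
# Real mRNA design also involves codon optimization, 5'/3' UTRs, a cap and more;
# this only shows that peptide -> mRNA coding sequence is a direct mapping.
CODON = {
    "M": "AUG",  # Met (start)
    "A": "GCU",  # Ala
    "G": "GGC",  # Gly
    "E": "GAA",  # Glu
    "L": "CUG",  # Leu
    "S": "UCU",  # Ser
    "K": "AAA",  # Lys
}
STOP = "UAA"

def peptide_to_mrna(peptide: str) -> str:
    """Return one possible mRNA coding sequence for a short peptide."""
    return "".join(CODON[aa] for aa in peptide) + STOP

print(peptide_to_mrna("MAGELSK"))  # AUGGCUGGCGAACUGUCUAAAUAA
```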


Robotics is undergoing similar revolutionary changes. Here is a state of the art robot made by Boston Dynamics using a human programmed feedback control loop: https://www.youtube.com/watch?v=cNZPRsrwumQ

Here is a Boston Dynamics robot "using reinforcement learning with references from human motion capture and animation.": https://www.youtube.com/watch?v=I44_zbEwz_w


Object detection, image processing, logistics, speech recognition, etc. These are all things that required tens of thousands of hours of science and engineering time to develop software for, and the software wasn't great. Now a college freshman with free tools and a consumer graphics card can train a computer vision network that outperforms that human-engineered software.
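A minimal sketch of what that looks like in practice with the free PyTorch/torchvision stack; the "data/train" folder is a placeholder for any labeled image dataset:

```python
# Minimal transfer-learning sketch with free tools (PyTorch + torchvision).
# "data/train" is a placeholder: any folder of images sorted into per-class subfolders.
import torch
from torch import nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),  # ImageNet stats
])
train_ds = datasets.ImageFolder("data/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Start from a network pretrained on ImageNet and retrain only the final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))
model = model.to(device)

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```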

AI isn't LLMs and image generators; those may as well be toys. I'm sure LLMs and image generation will eventually be good, but the only reason they seem amazing is that they're a novel capability computers haven't had before. Their actual impact on the real world will be minimal outside of specific fields.

[–] mojofrododojo@lemmy.world 2 points 1 day ago (1 children)
[–] FauxLiving@lemmy.world 0 points 1 day ago (1 children)

AI isn’t LLMs and image generators

[–] mojofrododojo@lemmy.world 1 points 18 hours ago (1 children)

then pray tell where is it working out great?

again, you have nothing to refute the evidence placed before you except "ah that's a bunch of links" and "not everything is an llm"

so tell us where it's going so well.

Not the mecha-Hitler Swiftie porn, heh, yeah I wouldn't want to be associated with it either. But your aibros don't care.

[–] FauxLiving@lemmy.world -1 points 17 hours ago (1 children)

I was talking about public perception of AI. There is a link to a study by a prestigious US university which supports my claims.

AI is doing well in protein folding and robotics, for example

[–] mojofrododojo@lemmy.world 0 points 12 hours ago (1 children)

ah what great advances has alpha fold delivered?

and that robotics training, where has that improved human lives? because near as I can tell it's simply going to put people out of work. the lowest paid people. so that's just great.

but let's give you some slack: let's leave it to protein folding and robotics and stop sticking it into every fuckin facet of our civilization.

and protein folding and robotics training wouldn't require google, x, meta and your grandmother to be rolling out datacenters EVERYWHERE, driving up the costs of electricity for the average user, while polluting the air and water.

Faux, I get it, you're an aibro, you really are a believer. Evidence isn't going to sway you because this isn't evidence driven. The suffering of others isn't going to bother you, that's their problem. The damage to the ecosystem isn't your problem, you apparently don't need water or air to exist. You got it made bro.

pfft.

[–] FauxLiving@lemmy.world 1 points 11 hours ago* (last edited 11 hours ago) (1 children)

ah what great advances has alpha fold delivered?

The ability to predict, for any sequence of amino acids, what protein it will create and what shape that protein will have. This also led other scientists to create diffusion models that can be prompted with protein properties and generate a sequence of amino acids that will fold into a protein with those properties. We can also write those arbitrary sequences into mRNA and introduce that into a local area of our cells.

But what do I know, I'm just an aibro. So I'll listen to the scientists who write peer-reviewed papers published in scientific journals: AI-Enabled Protein Design: A Strategic Asset for Global Health and Biosecurity

and that robotics training, where has that improved human lives?

Well, Fukushima would be one place.

Now they can use disposable robotic dogs to do clean up and monitoring in high radiation areas. A job that humans were doing at the beginning. I'm sure those humans appreciate not having to die of cancer early.

Faux, I get it, you’re an aibro, you really are a believer. Evidence isn’t going to sway you because this isn’t evidence driven. The suffering of others isn’t going to bother you, that’s their problem. The damage to the ecosystem isn’t your problem, you apparently don’t need water or air to exist. You got it made bro

🙄. If you can't win an argument just switch to insults, the tactic of choice for the ignorant.

[–] mojofrododojo@lemmy.world 1 points 10 hours ago* (last edited 10 hours ago)

Ah I see you read a wiki article and consider yourself an expert, again.

what has it DELIVERED?

In the not-so-distant future, the authors envision

my god man, what has it delivered?

But what do I know, I’m just an aibro

yes yes that's been established.

Now they can use disposable robotic dogs to do clean up and monitoring in high radiation areas. A

now you're just lying. the robots used in fukushima aren't AI trained.

https://apnews.com/article/japan-fukushima-reactor-melted-fuel-robot-9ffc309fb072580bee0161e8a24c8490

you're so fulla shit it's dripping down your beard. gonna block you now, go lie to someone else.

[–] MrMcGasion@lemmy.world 2 points 1 day ago (1 children)

Oh, I have read and heard about all those things; none of them (to my knowledge) are being done by OpenAI, xAI, Google, Anthropic, or any of the large companies fueling the current AI bubble, which is why I call it a bubble. The things you mentioned are where AI has potential, and I think that continuing to throw billions at marginally better LLMs and generative models at this point is hurting the real innovators. And sure, maybe some of those who are innovating end up getting bought by the larger companies, but that's not as good for their start-ups or for humanity at large.

[–] FauxLiving@lemmy.world -1 points 1 day ago

AlphaFold is made by DeepMind, an Alphabet (Google) subsidiary.

Google and OpenAI are also both developing world models.

These are a way to generate realistic environments that behave like the real world, and they are core to generating the volume of synthetic training data that would make training robotics models massively more efficient.

Instead of building an actual physical robot and having it slowly interact with the world while learning from its one physical body, the robot's builder could create a world-model representation of the robot's physical characteristics and attach their control software to the simulation. Now the robot can train in a simulated environment, and you can create multiple parallel copies of that setup to generate training data rapidly.

It would be economically unfeasible to build 10,000 prototype robots to generate training data, but it is easy to see how running 10,000 simulated copies in parallel is possible.
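A toy sketch of that idea; world_model_step() below is a placeholder standing in for a learned world model or physics simulator, not a real library call, and the loop runs sequentially only to keep the example short (in practice the copies run in parallel on accelerators):

```python
# Toy sketch: gathering robot experience from many simulated copies
# instead of from physical prototypes. world_model_step() is a placeholder
# for a learned world model / physics simulator.
import random

N_SIMS = 10_000   # simulated robots; 10,000 physical prototypes would be absurd
STEPS = 100

def world_model_step(state: float, action: float) -> tuple[float, float]:
    """Placeholder dynamics: return (next_state, reward)."""
    next_state = state + action + random.gauss(0, 0.01)
    reward = -abs(next_state)        # e.g. "stay balanced near zero"
    return next_state, reward

def policy(state: float) -> float:
    """Placeholder controller to be improved from the collected data."""
    return -0.5 * state

transitions = []                     # (state, action, reward, next_state)
states = [random.uniform(-1.0, 1.0) for _ in range(N_SIMS)]

for _ in range(STEPS):
    for i, s in enumerate(states):
        a = policy(s)
        s2, r = world_model_step(s, a)
        transitions.append((s, a, r, s2))
        states[i] = s2

print(f"collected {len(transitions):,} transitions without building a single robot")
```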

I think that continuing to throw billions at marginally better LLMs and generative models at this point is hurting the real innovators.

On the other hand, the billions of dollars being thrown at these companies are being used to hire machine learning specialists. The real innovators, who have the knowledge and talent to work on these projects, almost certainly already work for one of these companies or the DoD. This demand for machine learning specialists (and their high salaries) drives students to change their major to the field and creates more innovators over time.

[–] haungack@lemmy.dbzer0.com 0 points 2 days ago

I don't know if the current AI phase is a bubble, but I agree with you that if it were a bubble and it burst, it wouldn't somehow stop or end AI; it would set off a new wave of innovation instead.

I've seen many AI opponents imply otherwise. When the dotcom bubble burst, the internet didn't exactly die.