this post was submitted on 22 Sep 2025
1057 points (99.1% liked)

Microblog Memes


A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

Rules:

  1. Please put at least one word relevant to the post in the post title.
  2. Be nice.
  3. No advertising, brand promotion or guerrilla marketing.
  4. Posters are encouraged to link to the toot or tweet etc. in the description of posts.

[–] scrubbles@poptalk.scrubbles.tech 297 points 3 days ago (10 children)

The majority of "AI Experts" online that I've seen are business majors.

Then there are a ton of junior/mid software engineers who have used the OpenAI API.

Finally, there are the very, very few technical people who have interacted with models directly, maybe even trained some models and coded directly against them. And even then, I don't think many of them truly understand what's going on in there.

Hell, I've been training models and using ML directly for a decade and I barely know what's going on in there. Don't worry, I get the image; I'm just calling out how frighteningly few people actually understand it, yet so many swear they know AI super well.

[–] waigl@lemmy.world 90 points 3 days ago* (last edited 3 days ago) (5 children)

And even then I don’t think many of them truly understand what’s going on in there.

That's just the thing about neural networks: nobody actually understands what's going on in there. We've put an abstraction layer over how we do things, one we know we'll never be able to pierce.

[–] notabot@piefed.social 57 points 3 days ago (1 children)

I'd argue we know exactly what's going on in there; we just don't necessarily know, for any particular model, why it's going on in there.

[–] GreenMartian@lemmy.dbzer0.com 24 points 3 days ago (2 children)

But, more importantly, who is going on in there?

[–] Klear@quokk.au 11 points 3 days ago (3 children)

And how is it going in there?

[–] GreenMartian@lemmy.dbzer0.com 23 points 3 days ago

Not bad. How's it going with you?

[–] jqubed@lemmy.world 8 points 2 days ago (1 children)

That’s what we’re trying to find out! We’re trying to find out who killed him, and where, and with what!

*(Image: Tim Curry in Clue, shouting the above)*

[–] Gigasser@lemmy.world 2 points 2 days ago

The real question is where it's going on?

[–] nightwatch_admin@feddit.nl 1 points 2 days ago

Excellent opportunity for a “that’s what she said” joke.

[–] limelight79@lemmy.world 14 points 2 days ago* (last edited 2 days ago) (1 children)

I have a master's degree in statistics. This comment reminded me of a fellow statistics grad student who could not explain what a p-value was. I have no idea how he qualified for a graduate-level statistics program without knowing, but he was there. I'm not saying I'm God's gift to statistics, but a p-value is a pretty basic concept in the field.

Next semester, he was gone. Transferred to another school and changed his major to Artificial Intelligence.

I wonder how he's doing...
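(For anyone rusty: a p-value is the probability, assuming the null hypothesis is true, of seeing data at least as extreme as what you observed. A quick sketch, assuming SciPy and made-up data:)

```python
# What a p-value is, in a few lines; assumes SciPy, and the data is hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.5, scale=1.0, size=30)  # made-up measurements

# One-sample t-test against the null hypothesis that the true mean is 0.
t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # small p => data unlikely under the null
```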

[–] fushuan@lemmy.blahaj.zone 2 points 2 days ago (1 children)

I have a bachelor's and master's in computer science, specialised in data manipulation and ML.

The problem with AI is that you don't really need to understand the math behind it to work with it, even when training. Who cares how the distribution of the net affects results and information retention? Who cares how stochastic gradient descent really works? You get a network crafted by professionals that takes X input parameters, whose effects on the network's capacity are given to you and explained, and you just press play on the script that trains stuff.

It's the fact that you only need to care about input data quality and quantity, plus some input parameters, that lets freaking anyone work with AI.

All the thinking about the NN is done for you; all the tools to train the NN are given to you.
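(To make that concrete, here's a minimal sketch of what "pressing play" looks like, assuming PyTorch; the data and hyperparameters are hypothetical:)

```python
# Minimal "press play" training script, assuming PyTorch.
# The architecture, optimizer, and loss are all off-the-shelf; the only
# real decisions left are the data and a few hyperparameters.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

X = torch.randn(1000, 20)              # stand-in for your cleaned, labelled data
y = torch.randint(0, 2, (1000,))
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)  # how SGD "really works" stays hidden
loss_fn = nn.CrossEntropyLoss()

for _ in range(5):
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()   # all the calculus happens in here
        opt.step()        # and the update rule in here
```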

I even worked with Darknet and YOLO and did my due diligence to learn YOLOv4, how it condensed info and all that, but I really didn't need to for the given use case. Most of the work was labelling private data and cleaning it thoroughly. Then, playing with some params to see how the final results worked, how the model overfitted...

That's the issue with people building AI models: their work is more technical than that of "prompt engineers" (😫), but not by much.

[–] Poik@pawb.social 2 points 1 day ago

When you're working at the algorithm level, you get funny looks... Even if it reaches state-of-the-art results, who cares, because you can throw more electricity and data at it instead.

I worked specifically on low-data algorithms, so my work was particularly frowned upon by modern AI scientists.

I'm not doxxing myself, but unpublished work of mine was published in parallel as Prototypical Networks in 2017. And everyone laughed (<- exaggeration) at me for researching RBFs, which were considered defunct. (I still think they're an untapped optimization.)
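(For the unfamiliar: an RBF unit just scores inputs by their distance to a learned center, which is the same basic idea Prototypical Networks use with class prototypes. A minimal sketch, assuming NumPy and hypothetical centers:)

```python
# Minimal radial basis function (RBF) featurizer, assuming NumPy.
# Centers and gamma are hypothetical; this is not any particular published model.
import numpy as np

def rbf_features(X, centers, gamma=1.0):
    """Map inputs (n, d) to similarities (n, k) against k centers (k, d)."""
    sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)  # 1.0 at a center, decaying with distance
```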

[–] sp3ctr4l@lemmy.dbzer0.com 24 points 3 days ago* (last edited 3 days ago) (2 children)

Ding ding ding.

It all became basically magic, blind trial and error, a little over a decade ago with AlexNet.

After AlexNet, everything became more and more black-box, opaque even to the actual PhD-level people crafting and testing these things.

Since then, it has basically been "throw all existing information of any kind at the model" to train it better, and then a bunch of basically slapdash optimization attempts which work for largely "I don't know" reasons.

Meanwhile, we could be pouring even 1% of the money going toward LLMs and convolutional-network-derived models... into other paradigms, such as maybe trying to actually emulate real brains and real neuronal networks... but nope, everyone is piling into basically one approach.

That's not to say research on other paradigms is nonexistent, but it is barely existent in comparison.

[–] SkyeStarfall@lemmy.blahaj.zone 7 points 2 days ago* (last edited 2 days ago) (1 children)

I'll give you the point regarding LLMs... but conventional neural networks? Nah. They've been used for a reason, and have generally been very successful where other methods failed. And there very much are investments into stuff with real brains or analog brain-like structures... it's just that it's far more difficult, especially as we have very little idea of how real brains work.

A big issue with digitally emulating real brain structures is that it's very computationally expensive. Real brains work using chemistry, after all, and that's not something that's easy to simulate. There is research in this area, but from what I know it's mostly aimed at understanding brains better, not at any practical purpose. And even then, it won't solve the black-box problem.

Neural networks are great at what they do, being a sort of universal statistics optimization process (to a degree; no free lunch, etc.). They solved problems that had resisted every other method and that are now considered mundane. Like, would anyone 15 years ago really have thought your phone would be able to detect what you took a picture of? That was considered practically impossible. Take this xkcd from a decade ago, for example: https://xkcd.com/1425/
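(For a sense of how mundane it's become: the xkcd task is now a few lines with a stock pretrained model. A sketch assuming torchvision; `photo.jpg` is a hypothetical local file:)

```python
# The xkcd-1425 check with a stock pretrained classifier, assuming torchvision.
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
img = weights.transforms()(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    probs = model(img).softmax(dim=1)
top = probs.argmax(dim=1).item()
print(weights.meta["categories"][top], f"{probs[0, top].item():.2%}")
```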

In addition, there are avenues that are being explored such as "Explainable AI" and so on. The field is more varied and interesting than most people realize. And, yes, genuinely useful. And not every neural network is a massive large scale one, many are small-scale and specialized.

[–] sp3ctr4l@lemmy.dbzer0.com 2 points 2 days ago (1 children)

I take your critiques in stride; yes, you are more correct than I am, I was a bit sloppy.

Corrections appreciated =D

[–] SkyeStarfall@lemmy.blahaj.zone 3 points 2 days ago (1 children)

Hopefully I don't appear as too much of a know-it-all 😭 I often end up rambling too much lmao

It's just always fun to talk about one's field ^^ or stuff adjacent to it

[–] sp3ctr4l@lemmy.dbzer0.com 2 points 2 days ago* (last edited 2 days ago)

Oh no no no, being an actual subject matter expert, or at least having more precise and detailed knowledge and/or explanations, is always welcome imo.

You're talking to an(other?) autist who loves data dumping walls of text about things they actually know something about, lol.

Really, I appreciate constructive critiques or corrections.

How else would one learn things?

Keep oneself in check?

Today you have helped me verify that at least some amount of metacognition is still working inside this particular blob of wetware, hahaha!

EDIT:

One motto I actually do try to live by, from the Matrix:

Temet Nosce.

Know Thyself.

... and a large part of that is knowing 'that I know nothing'.

[–] Aceticon@lemmy.dbzer0.com 2 points 2 days ago

Way back in the 90s, when neural networks were at their very beginning and starting to be used in things like postal-code recognition for automated mail sorting, it was already the case that the experts did not know why they worked, including why certain topologies worked better than others at certain things, and we're talking about networks with fewer than a thousand neurons.

No wonder that "add shit and see what happens" is still the way the area "advances".

[–] catch22@programming.dev 10 points 3 days ago

Feature Visualization: How neural networks build up their understanding of images

https://distill.pub/2017/feature-visualization/
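(The core trick in that article is activation maximization: optimize the input image itself until a chosen unit fires strongly. A bare-bones sketch assuming torchvision; the layer and channel choices are arbitrary, and real feature visualization adds the regularizers the article describes:)

```python
# Bare-bones activation maximization, assuming torchvision.
import torch
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)  # we optimize the image, not the network

img = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    acts = model.features[:10](img)  # activations at an early conv block
    loss = -acts[0, 42].mean()       # push channel 42 to fire strongly
    loss.backward()
    opt.step()
# 'img' is now (a crude version of) what channel 42 responds to.
```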

[–] expr@programming.dev 52 points 3 days ago (1 children)

Yeah, I've trained a number of models (as part of actual CS research, before all of this LLM bullshit), and while I certainly understand the concepts behind training neural networks, I couldn't tell you the first thing about what a model I trained is doing. That's the whole thing about the black box approach.

Also why it's so absurd when "AI" gurus claim they "fixed" an issue in their model that resulted in output they didn't want.

No, no you didn't.

Love this, because I completely agree. "We fixed it and it no longer does the bad thing." Uh, no, incorrect. Unless you literally went through your entire dataset, stripped out every single occurrence of the thing, and retrained the model, there is no way you 100% "fixed" it.

[–] skisnow@lemmy.ca 18 points 2 days ago (1 children)

I’ve given up attending AI conferences, events, and meetups in my city for this exact reason. You show up for a talk called something like “Advances in AI” or “Inside AI” by a supposed guru from an AI company, get a 3-hour PowerPoint telling you to stop making PowerPoints by hand and start using ChatGPT to do it, and it concludes with a sales pitch for their 2-day course on how to get rich creating Kindle ebooks en masse.

Even the dev-oriented ones are painfully like this too. Why would you make your own when you can subscribe to ours instead? Just sign away all of your data and call this API, which will probably change in a month. You'll be so happy!

[–] JandroDelSol@lemmy.world 42 points 3 days ago (2 children)

business majors are the worst i swear to god

[–] SexualPolytope@lemmy.sdf.org 36 points 2 days ago (1 children)

They are literally what's causing the fall of our society.

[–] Dogiedog64@lemmy.world 9 points 2 days ago
[–] scrubbles@poptalk.scrubbles.tech 21 points 3 days ago (1 children)

Didn't you know? Being adept at business immediately makes you an expert in many science and engineering fields!

[–] kboy101222@sh.itjust.works 5 points 2 days ago

adept

I think you're giving them a little too much credit there

[–] GreenShimada@lemmy.world 31 points 3 days ago (1 children)

I have personally told coworkers that if they train a custom GPT, they should put "AI expert" on their resume, as that's more than 99% of people have done - and 99% of those people did nothing more than trick ChatGPT into doing something naughty once, a year ago, and now consider themselves "prompt engineers."

Absolutely agree there

[–] FauxLiving@lemmy.world 10 points 2 days ago (2 children)

Hell, I’ve been training models and using ML directly for a decade and I barely know what’s going on in there.

Outside of low-dimensional toy models, I don't think we're capable of understanding what's happening. Even in academia, work on the ability to reliably understand trained networks is still in its infancy.

[–] sobchak@programming.dev 1 points 2 days ago

I remember studying "Probably Approximately Correct" (PAC) learning and such, and it was a pretty cool way of building axioms, theorems, and proofs to bound and reason about ML models. To my knowledge, there isn't really anything like it for large networks; maybe someday.
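(For reference, the flavor of guarantee PAC learning gives: in the realizable case with a finite hypothesis class H, a consistent learner's true error is at most ε with probability at least 1 - δ once it has seen enough i.i.d. examples. One standard sample-complexity bound:)

```latex
m \;\ge\; \frac{1}{\epsilon}\left(\ln\lvert H\rvert + \ln\frac{1}{\delta}\right)
```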

[–] Aceticon@lemmy.dbzer0.com 1 points 2 days ago (1 children)

Which is funny considering that Neural Networks have been a thing since the 90s.

[–] Poik@pawb.social 2 points 1 day ago

... 1957

Perceptrons. The math dates back to the 40s, but '57 marks the first artificial neural network.

Also, 35 years is infancy in science, or at least its teenage years, as we see from deep learning's growing pains right now. Visualizations of neural-network responses, and reverse-engineering networks to understand how they tick, predate 2010 at least. DeepDream was actually built off an idea from network-inversion visualizations, and that's ten years old now.
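(For the curious, the 1957 algorithm fits in a few lines; a sketch assuming NumPy:)

```python
# Rosenblatt's perceptron learning rule (1957), sketched with NumPy.
import numpy as np

def train_perceptron(X, y, epochs=10, lr=1.0):
    """X: (n, d) inputs; y: (n,) labels in {-1, +1}. Returns weights incl. bias."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a constant bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:    # misclassified (or on the boundary)
                w += lr * yi * xi     # the entire learning rule
    return w
```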

[–] Treczoks@lemmy.world 7 points 3 days ago

NONE of them knows what's going on inside.

We are right back in the age of alchemy, when people speaking Latin and Greek threw things together more or less at random to see what would happen, all the while claiming to be trying to make gold to keep the cash flowing.

[–] stinky@redlemmy.com 3 points 3 days ago

The image feels like "Those who know 😀 Those who don't know 😬"

[–] TropicalDingdong@lemmy.world 2 points 2 days ago

And the number of us who build these models from scratch, from the ground up, is even smaller.

[–] blazeknave@lemmy.world 2 points 3 days ago

I've been selling it for even longer than that, and I refuse to use the word "expert".