this post was submitted on 22 Sep 2025
1058 points (99.1% liked)

Microblog Memes

[–] fushuan@lemmy.blahaj.zone 2 points 2 days ago (1 children)

I have a bachelor's and master's in computer science, specialised in data manipulation and ML.

The problem with AI is that you don't really need to understand the math behind it to work with it, even when training models. Who cares how the distribution of the net affects results and information retention? Who cares how stochastic gradient descent really works? You get a network crafted by professionals that takes X input parameters, which modify the network's capacity in a way that's given to you and explained, and you just press play on the script that trains everything.
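To make the point concrete, here's a hedged toy sketch of what that "press play" loop hides: stochastic gradient descent on a made-up linear model (illustrative NumPy code, not any particular framework's training script):

```python
import numpy as np

# Toy data: a linear target with a little noise (values are arbitrary)
rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])
X = rng.normal(size=(200, 2))
y = X @ true_w + rng.normal(scale=0.1, size=200)

# SGD: visit samples one at a time ("stochastic") and step downhill
w = np.zeros(2)
lr = 0.05
for epoch in range(50):
    for i in rng.permutation(len(X)):
        grad = 2 * (X[i] @ w - y[i]) * X[i]  # gradient of the squared error
        w -= lr * grad

print(w)  # converges near true_w
```

That loop is the entire "thinking" part, and it's exactly what the frameworks hand you pre-built.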

It's the fact that you only need to care about input data quality and quantity, plus a few input parameters, that means freaking anyone can work with AI.

All the thinking about the NN is done for you; all the tools for training the NN are given to you.

I even worked with Darknet and YOLO, and did my due diligence to learn YOLOv4, how it condenses information and all that, but I really didn't need to for the given use case. Most of the work was labelling private data and cleaning it thoroughly, then playing with some parameters to see how the final results changed, how the model overfitted...
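That "playing with parameters" step mostly amounts to comparing training error against held-out error. A hedged toy sketch (made-up polynomial data, nothing to do with the actual YOLO pipeline) of what spotting overfitting looks like:

```python
import numpy as np

# Toy regression data: a sine curve plus noise, split into train / validation
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=40)
y = np.sin(2 * x) + rng.normal(scale=0.1, size=40)
x_tr, y_tr, x_va, y_va = x[:15], y[:15], x[15:], y[15:]

def errors(degree):
    # Fit a polynomial of the given degree on the training split only
    coeffs = np.polyfit(x_tr, y_tr, degree)
    mse = lambda xs, ys: float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))
    return mse(x_tr, y_tr), mse(x_va, y_va)

tr_lo, va_lo = errors(1)    # underfit: both errors stay high
tr_hi, va_hi = errors(12)   # overfit: train error collapses, validation error doesn't
```

The "tuning" is just picking the knob setting where validation error stops improving; no understanding of the model internals required.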

That's the issue with people building AI models: their work is more technical than that of "prompt engineers" (😫), but not by much.

[–] Poik@pawb.social 2 points 2 days ago

When you're working at the algorithm level, you get funny looks... Even if it achieves state-of-the-art results, who cares, because you can throw more electricity and data at the problem instead.

I worked specifically on low-data algorithms, so my work was particularly frowned upon by modern AI scientists.

I'm not doxxing myself, but unpublished work of mine got published in parallel as Prototypical Networks in 2017. And everyone laughed (<- exaggeration) at me for researching RBFs (radial basis functions), which were considered defunct. (I still think they're an untapped optimization.)
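For anyone who hasn't met them, the RBF idea fits in a few lines. A hedged toy sketch (my own illustrative code, not the Prototypical Networks implementation): an RBF unit scores an input by its Gaussian similarity to a learned centre, much like prototypical networks classify by distance to class prototypes.

```python
import numpy as np

def rbf_activations(x, centres, gamma=1.0):
    # Gaussian kernel on the squared Euclidean distance to each centre
    d2 = ((x[None, :] - centres) ** 2).sum(axis=1)
    return np.exp(-gamma * d2)

# Two hypothetical centres; the input sits near the first one
centres = np.array([[0.0, 0.0], [3.0, 3.0]])
x = np.array([0.1, -0.1])
print(rbf_activations(x, centres))  # near 1 for the close centre, near 0 for the far one
```

A final linear layer over those activations gives you a classifier whose decisions are local and interpretable, which is part of why some of us think they're underexplored.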