this post was submitted on 23 Sep 2025
1654 points (98.5% liked)

Science Memes

17767 readers
1529 users here now

Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



[–] Etterra@discuss.online 2 points 2 months ago

People believe enough random bullshit to tickle their memories with their classics list.

[–] homura1650@lemmy.world 2 points 2 months ago (2 children)

China is the most populous country.

[–] missfrizzle@discuss.tchncs.de 2 points 2 months ago* (last edited 2 months ago) (1 children)

I was taught that serious academics favored Support Vector Machines over Neural Networks, and that industry only loved NNs because they lacked a proper education. oops...

also, Computer Vision was considered "AI-complete" and likely decades away. ImageNet dropped a couple of years after I graduated. though I guess it ended up being "AI-complete" in a way...

[–] bluemellophone@lemmy.world 2 points 2 months ago* (last edited 2 months ago) (1 children)

Before AlexNet, SVMs were the best algorithms around. LeNet was the only comparable success case for NNs back then, and it was largely seen as exclusively limited to MNIST digits because deep networks were too hard to train. People used HOG+SVM, SIFT, SURF, ORB, older Haar / Viola-Jones features, template matching, random forests, Hough Transforms, sliding windows, deformable parts models… so many techniques that were made obsolete once the first deep networks became viable.
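For anyone who missed that era, here's a rough sketch of what a typical HOG+SVM pipeline looked like, using scikit-image and scikit-learn on a toy digits dataset (the dataset, feature parameters, and classifier settings are just illustrative assumptions, not anyone's actual setup):

```python
# Minimal sketch of a classic HOG+SVM pipeline (illustrative only).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from skimage.feature import hog

digits = load_digits()  # small 8x8 grayscale digit images

# Hand-crafted features: gradient-orientation histograms pooled over
# small spatial cells, the standard pre-deep-learning descriptor.
features = np.array([
    hog(img, orientations=9, pixels_per_cell=(4, 4), cells_per_block=(2, 2))
    for img in digits.images
])

X_train, X_test, y_train, y_test = train_test_split(
    features, digits.target, test_size=0.25, random_state=0
)

# A linear SVM trained on top of the fixed features.
clf = LinearSVC(C=1.0, max_iter=10000)
clf.fit(X_train, y_train)
print("HOG+SVM test accuracy:", clf.score(X_test, y_test))
```

The point being: the features were designed by hand and stayed fixed; only the classifier on top was learned.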

The problem is that your schooling was correct at the time, but the march of research progress eventually brought 1) large, million-scale supervised datasets (ImageNet) and 2) larger, faster GPUs with more on-card memory.

It was accepted fact back in ~2010 that SVMs were superior to NNs in nearly every respect.

Source: started a PhD on computer vision in 2012

[–] missfrizzle@discuss.tchncs.de 2 points 2 months ago (1 children)

HOG and Hough transforms bring me back. honestly glad that I don't have to mess with them anymore though.

I always found SVMs a little shady because you had to pick a kernel. we spent time talking about the different kernels you could pick but they were all pretty small and/or contrived. I guess with NNs you pick the architecture/activation functions but there didn't seem to be an analogue in SVM land for "stack more layers and fatten the embeddings." though I was only an undergrad.
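something like this scikit-learn sketch is the knob I mean: same model, same data, and the only "architecture" choice you get is which kernel to plug in (toy dataset and hyperparameters are made up for illustration):

```python
# Fit the same SVM with a few standard kernels on a toy nonlinear dataset.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The kernel is essentially the only "architecture" knob an SVM exposes;
# there is no analogue of "stack more layers".
for kernel in ["linear", "poly", "rbf"]:
    clf = SVC(kernel=kernel, C=1.0, gamma="scale")
    clf.fit(X_train, y_train)
    print(f"{kernel:>6} kernel test accuracy: {clf.score(X_test, y_test):.3f}")
```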

do you really think NNs won purely because of large datasets and GPU acceleration? I feel like those could have applied to SVMs too. I thought the real wins were solving vanishing gradients with ReLU, stacking many layers instead of throwing everything into a 3- or 5-layer MLP, better ways of preventing overfitting, a gradient landscape less prone to bad local minima, and letting hierarchical feature extraction be learned organically.
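the vanishing-gradient half of that is easy to see in a quick sketch, e.g. comparing the gradient that reaches the first layer of a deep sigmoid MLP versus a deep ReLU MLP (depth, width, and initialization here are arbitrary assumptions, purely for illustration):

```python
# Compare the gradient reaching the first (deepest-from-the-loss) layer
# of a deep sigmoid MLP vs a deep ReLU MLP. Illustrative sketch only.
import torch
import torch.nn as nn

def deep_mlp(activation, depth=20, width=64):
    layers, in_dim = [], 32
    for _ in range(depth):
        layers += [nn.Linear(in_dim, width), activation()]
        in_dim = width
    layers.append(nn.Linear(in_dim, 1))
    return nn.Sequential(*layers)

torch.manual_seed(0)
x = torch.randn(128, 32)
target = torch.randn(128, 1)

for name, act in [("sigmoid", nn.Sigmoid), ("relu", nn.ReLU)]:
    model = deep_mlp(act)
    loss = nn.functional.mse_loss(model(x), target)
    loss.backward()
    grad_norm = model[0].weight.grad.norm().item()  # first layer's gradient
    print(f"{name:>7}: first-layer gradient norm = {grad_norm:.2e}")
```

with default init and 20 sigmoid layers, the first-layer gradient typically comes out orders of magnitude smaller than the ReLU version, which is roughly why deep stacks only became trainable once saturating activations were dropped.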
