People believe enough random bullshit to tickle their memories with their classics list.
I was taught that serious academics favored Support Vector Machines over Neural Networks, and that industry only loved NNs because it lacked a proper education. oops...
also, Computer Vision was considered "AI-complete" and likely decades away. ImageNet dropped a couple of years after I graduated. though I guess it ended up being "AI-complete" in a way...
Before AlexNet, SVMs were the best algorithms around. LeNet was the only comparable success story for NNs back then, and it was widely seen as limited to MNIST digits because deep networks were too hard to train. People used HOG+SVM, SIFT, SURF, ORB, older Haar / Viola-Jones features, template matching, random forests, Hough transforms, sliding windows, deformable parts models… so many techniques that were made obsolete once the first deep networks became viable.
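To make that concrete, here's a rough sketch of what a HOG+SVM pipeline looked like. This is my own toy example using scikit-image and scikit-learn on the tiny built-in digits set (the library choices, parameters, and dataset are mine, not anything from an actual benchmark of that era):

```python
# Minimal HOG + linear SVM sketch: hand-crafted features, then a linear classifier.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from skimage.feature import hog

digits = load_digits()  # 1797 tiny 8x8 grayscale digit images

# Hand-crafted features: histogram of oriented gradients per image.
features = np.array([
    hog(img, orientations=9, pixels_per_cell=(4, 4), cells_per_block=(1, 1))
    for img in digits.images
])

X_train, X_test, y_train, y_test = train_test_split(
    features, digits.target, test_size=0.3, random_state=0
)

# The classifier half of the classic pipeline: a linear SVM on top of HOG features.
clf = LinearSVC(C=1.0, max_iter=5000)
clf.fit(X_train, y_train)
print("HOG + linear SVM accuracy:", clf.score(X_test, y_test))
```

The whole point of that era was the split you see here: you engineered the features by hand and only the final classifier was learned.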
The problem is that your schooling was correct at the time, but the march of research progress eventually brought 1) large, million-scale supervised datasets (ImageNet) and 2) larger, faster GPUs with more on-card memory.
It was a fact back in ~2010 that SVMs were superior to NNs in nearly every respect.
Source: started a PhD on computer vision in 2012
HOG and Hough transforms bring me back. honestly glad that I don't have to mess with them anymore though.
I always found SVMs a little shady because you had to pick a kernel. we spent time talking about the different kernels you could pick, but they were all pretty small and/or contrived. I guess with NNs you pick the architecture and activation functions, but there didn't seem to be an analogue in SVM land for "stack more layers and fatten the embeddings." though I was only an undergrad.
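Something like this is what I mean about the kernel being a choice you make up front. Purely my own toy scikit-learn example (the dataset and kernel list are mine), but it shows the same SVM behaving very differently depending on which kernel you picked:

```python
# Same data, same SVM, different kernels: the kernel is a modelling decision
# you make before seeing how training goes.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for kernel in ("linear", "poly", "rbf"):
    clf = SVC(kernel=kernel)  # the kernel choice is the "architecture" decision here
    clf.fit(X_train, y_train)
    print(f"{kernel:>6} kernel accuracy: {clf.score(X_test, y_test):.2f}")
```

The linear kernel should struggle on the interleaved moons while the RBF kernel handles them, which is exactly the "you have to guess the right kernel for your data" problem.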
do you really think NNs won purely because of large datasets and GPU acceleration? I feel like those could have applied to SVMs too. I thought the real win was solving vanishing gradients with ReLU and stacking many more layers (rather than throwing everything into a 3- or 5-layer MLP), preventing overfitting, making the loss landscape less prone to bad local minima, and letting hierarchical feature extraction be learned organically.
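Here's the kind of toy numpy experiment I have in mind for the vanishing-gradient part. It's purely illustrative and entirely my own sketch (the helper name, depth, width, and He-style init are assumptions, not from any paper): backprop through a deep stack of random layers and compare how much gradient survives with sigmoid vs. ReLU.

```python
# Toy vanishing-gradient illustration: push a Jacobian back through `depth`
# random layers and measure its norm for sigmoid vs. ReLU activations.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_norm(act, d_act, depth=30, width=64):
    """Norm of the input-gradient after backprop through `depth` random layers."""
    x = rng.normal(size=width)
    jac = np.eye(width)  # Jacobian of the current layer's output w.r.t. the input
    for _ in range(depth):
        W = rng.normal(scale=np.sqrt(2.0 / width), size=(width, width))  # He-style init
        z = W @ x
        x = act(z)
        jac = (d_act(z)[:, None] * W) @ jac  # chain rule: diag(act'(z)) @ W
    return np.linalg.norm(jac)

print("sigmoid:", backprop_norm(sigmoid, lambda z: sigmoid(z) * (1 - sigmoid(z))))
print("relu:   ", backprop_norm(lambda z: np.maximum(z, 0.0),
                                lambda z: (z > 0).astype(float)))
```

Because the sigmoid derivative never exceeds 0.25, its gradient norm should collapse by many orders of magnitude over 30 layers, while the ReLU one stays in a sane range. That's the mechanical reason "just stack more layers" only became practical once saturating activations were dropped.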