this post was submitted on 04 Jun 2024
17 points (100.0% liked)
Hacker News
2171 readers
1 users here now
A mirror of Hacker News' best submissions.
founded 2 years ago
Regarding linguistics, the use of machine "learning" models feels justified in some cases. You often have a huge amount of repetitive data that you need to sort out and generalise, and M"L" is great for that.
For example, let's say that you're studying some specific vowel feature. You'll probably record the same word thrice for each informant, for maybe 15 words. You'll want people from different educational backgrounds, men and women, and different ages, so let's say 10 informants. That already leaves you with 450 recordings that you need to throw into Praat, identify the relevant vowel in, and measure the average F₁ and F₂ values of.
It's an extremely simple task, easy to generalise, but damn laborious to do by hand. That's where machine learning should kick in: you should be able to teach it "this is a vowel, look at those smooth formants, we want this" vs. "this is a fricative, it has static-like noise, disregard it", then feed it the 450 audio files and have it output the F₁ and F₂ values for you in a clean .csv.
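The batch part of that workflow can be sketched in Python. The actual formant measurement is stubbed out here (in practice you'd call Praat, e.g. through the parselmouth library, and that stub is where the M"L" vowel detection would sit); all names and the tuple layout are made up for illustration:

```python
import csv
import statistics

# Hypothetical measurement step. In a real pipeline this would open the
# audio in Praat (e.g. via parselmouth), locate the vowel, and read its
# formants; here each "recording" already carries toy F1/F2 values so
# the batching logic stays self-contained.
def measure_vowel(recording):
    # recording = (informant, word, take, f1_hz, f2_hz) in this sketch
    return recording[3], recording[4]

def summarise(recordings, out_path):
    """Average F1/F2 over the repeated takes of each (informant, word)."""
    by_token = {}
    for rec in recordings:
        f1, f2 = measure_vowel(rec)
        by_token.setdefault((rec[0], rec[1]), []).append((f1, f2))
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["informant", "word", "mean_f1_hz", "mean_f2_hz"])
        for (informant, word), vals in sorted(by_token.items()):
            writer.writerow([
                informant,
                word,
                round(statistics.mean(v[0] for v in vals), 1),
                round(statistics.mean(v[1] for v in vals), 1),
            ])
```

The point of the shape: the tedious per-file step is isolated in one function, so swapping the stub for a real Praat call changes nothing else, and the output is the clean .csv you'd take to analysis.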
And if you're using it to draw the conclusions for you, you probably suck as a researcher.
This picture from the link is amazing:
And it's exemplified by the very discussion in "Reddit LARPs as h4x0rz @ ycombinator dot com". Especially the top right: the sheer number of users there reeking of credulousness is funny.
From here on I'll copypaste a few HN comments. I'll use the same letter for the same user, and numbers to tell their comments apart when necessary.
I feel so too. But every time you state the obvious, you get a crowd of functionally illiterate and irrational people disputing it, and that crowd effectively works like one big moustached, "MRRROOOOOOO"-ing sealion.
That user is playing musical chairs with definitions. If you do provide a definition, users like this are prone to waste your time chain-gunning "ackshyually" statements, eventually evolving into an appeal to ignorance plus ad nauseam.
A is calling C1 a deflection. I'd go further: it's whataboutism plus an extended analogy. They're simply treating the "they're like humans" analogy as if it were more than a simple analogy.
"I demand you to bite my whataboutism!".
A2 actually answers C1, by correctly pointing out that the question is nonsensical.
Here A bites the bait - "All that can be said is that they are not intelligent in the way that humans are." opens room for "ackshyually" and similar idiocies.
It is not shit if you know what to use it for. ...but then even shit can become fertiliser.
Seriously now: the faster you ditch the faith that machine "learning" is intelligent, the faster you find things that it does in a non-shitty way. I listed one example myself.
Whataboutism, again; and the same one, that boils down to "I dun unrrurstand, but wharabout ppl doin mistakes? I is confusion!". Are you noticing the pattern?
Answering the question: AI should not "be allowed" to do that for the same reason a screwdriver should not "be allowed" to let screws slip. Because it's a tool malfunctioning in a way that interferes with what people use it for.
[F, another comment chain] Is "leakage" just another term for overfitting?
Not quite. I'll use an analogy with human learning, but bear in mind that this is solely a didactic example, and that analogies break if pushed too far.
In other words:
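As a generic toy illustration of the difference (dataset and "model" entirely made up): overfitting is a model memorising noise in its training data; leakage is test data sneaking into the training set, which makes even a pure memoriser look perfect when it isn't.

```python
import random

random.seed(0)

# Made-up dataset: feature x in [0, 1], label mostly "x > 0.5",
# but with 30% label noise, so no model can be perfect on new data.
def make_row():
    x = random.random()
    y = int(x > 0.5)
    if random.random() < 0.3:  # flip the label with probability 0.3
        y = 1 - y
    return (x, y)

data = [make_row() for _ in range(200)]

# A 1-nearest-neighbour "model" that simply memorises training rows.
def predict(train, x):
    return min(train, key=lambda row: abs(row[0] - x))[1]

def accuracy(train, test):
    return sum(predict(train, x) == y for x, y in test) / len(test)

# Leakage: the test rows also sit in the training set (e.g. duplicates
# were never removed before splitting). The memoriser scores 1.0,
# which tells you nothing about real performance.
print(accuracy(data, data[:50]))         # 1.0 -- pure memorisation

# Proper split: test rows are genuinely unseen, and the score drops
# to an honest (noise-limited) estimate.
print(accuracy(data[:150], data[150:]))
```

The leaky evaluation is perfect by construction, not by merit; that inflated score, rather than the memorising itself, is what "leakage" names.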