keegomatic

joined 2 years ago
[–] keegomatic@kbin.social 15 points 2 years ago* (last edited 2 years ago) (3 children)

In my experience, this has always been a problem after a forum grows beyond a certain size. It’s not really a Reddit-exclusive thing. It’s also not related to karma/reputation-tracking, IMO.

Early adopters of a small, somewhat empty community are people who want to grow it and encourage posting. Discussion stays upbeat and considerate because it’s usually just a few commenters interacting with each other, all of whom want the same thing.

Once a community grows big enough to support lurkers and a variety of topics, with multifaceted discussion happening naturally, a familiar effect kicks in: you know how people are disproportionately more likely to review a product or business after a negative experience than a positive one? In a similar way, once there’s enough content to lurk (rather than being one of the early enthusiasts who post despite the lack of content, out of a duty to help the community grow), lurkers are far more likely to come out of the woodwork and join a discussion when they see something they disagree with or feel strongly about.

Honestly, though, it has a few silver linings. I grew up learning a lot from arguments online in various places. Sometimes they are handled well and sometimes they are handled poorly by the participants. Learn from both. It’s great to see two sides of an issue, even a petty one. It can teach you a ton about how to behave well, how to actually persuade someone on a topic, and how to avoid conflict in the first place. It can also teach you about a controversial topic you knew little about, and spark your curiosity to learn more (if only to refute something with citations) and sometimes change your opinion altogether.

The healthy/toxic dichotomy starts in your own mind. You can’t control others, but you can control yourself. So find those little positive nuggets where you can.

[–] keegomatic@kbin.social 36 points 2 years ago (1 children)

Ever since Obama beat Clinton 15 years ago

Jesus I thought you were exaggerating and then I did the math
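
A quick check of that arithmetic (assuming the thread was written in 2023 and the parent is counting from the 2008 Democratic primary):

$$2023 - 2008 = 15 \text{ years}$$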

[–] keegomatic@kbin.social 3 points 2 years ago

Hey, this is excellent. I was looking to do something like this a few months ago. Bought a few ESP devices to mess with, but never got around to it. I might try it out now, though, using your guide. Thank you!

[–] keegomatic@kbin.social 6 points 2 years ago* (last edited 2 years ago)

Got a source? When I first read about this people were cautiously optimistic partly because the head researcher was well-respected.

[–] keegomatic@kbin.social 52 points 2 years ago (2 children)

our compound shows greatly consistent x-ray diffraction spectrum with the previously reported structure data

Uhh, doesn’t look like it to me. This paper’s X-ray diffraction spectrum looks pretty noisy compared to the one from the original paper, with some clear additional/different peaks in certain regions. That could affect the result. I was under the impression from the original paper that a subtle compression of the lattice structure was pretty important to the formation of quantum wells for superconductivity, so if the X-ray diffraction isn’t spot on I’ll wait for some more failures before calling it busted.
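
To make “clear additional/different peaks” concrete, here’s a rough sketch (mine, not from either paper) of how you might compare peak positions between the two published patterns, assuming you’ve digitized both as 2θ/intensity columns. The file names, prominence threshold, and 0.2° matching tolerance are all illustrative.

```python
# Rough sketch (not from the papers): compare peak positions between two
# powder XRD patterns, each given as (two_theta, intensity) arrays.
import numpy as np
from scipy.signal import find_peaks

def peak_positions(two_theta, intensity, prominence=0.05):
    """Return the 2-theta positions of peaks, using relative prominence."""
    intensity = intensity / intensity.max()      # normalize so prominence is relative
    idx, _ = find_peaks(intensity, prominence=prominence)
    return two_theta[idx]

# Hypothetical data files: column 0 = 2-theta (degrees), column 1 = counts
ref_tt, ref_i = np.loadtxt("original_lk99_xrd.csv", delimiter=",", unpack=True)
new_tt, new_i = np.loadtxt("replication_xrd.csv", delimiter=",", unpack=True)

ref_peaks = peak_positions(ref_tt, ref_i)
new_peaks = peak_positions(new_tt, new_i)

# Flag replication peaks with no reference peak within 0.2 degrees --
# these are the "additional/different peaks" that would worry me.
tolerance = 0.2
unmatched = [p for p in new_peaks if np.abs(ref_peaks - p).min() > tolerance]
print("peaks with no match in the reference pattern:", unmatched)
```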

[–] keegomatic@kbin.social 8 points 2 years ago

This is a really terrific explanation. The author puts some very technical concepts into accessible terms, but not so far from reality as to cloud the original concepts. Most other attempts I’ve seen at explaining LLMs or any other NN-based pop tech are either waaaay oversimplified, heavily abstracted, or are meant for a technical audience and are dry and opaque. I’m saving this for sure. Great read.

[–] keegomatic@kbin.social 17 points 2 years ago

Down by the bay
Where the watermelons grow
Back to my home
I dare not go
For if I do
My mother will say
“Have you ever seen a deer crushing a beer?”
Down by the bay

[–] keegomatic@kbin.social 28 points 2 years ago (1 children)

I honestly only made it a few minutes in, and there is probably plenty of merit to the rest of her perspective. But… I just couldn’t get past the “AI doesn’t exist” part. I get that you don’t know or care about the difference and you associate the term “AI” with sci-fi-like artificial sentience/AGI, but “AI” has been used for decades to refer to things that mimic intelligence, not just full-on artificial general intelligence. Algorithms governing NPC behavior and pathfinding in video games are AI, and that’s a perfectly accurate description. SmarterChild was AI… even ELIZA was AI. Stuff like GAN models and LLMs are certainly AI. The goal posts for “intelligence” have moved farther and farther back with every innovation. The AI we have now was fantasy just 20 years ago. Even just five years ago, to most people.
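
Since I brought up NPC pathfinding, here’s a minimal sketch (purely illustrative, mine) of the kind of decades-old “game AI” I mean: plain breadth-first search for an NPC on a grid. No statistics, no learning, and nobody twenty years ago would have hesitated to call it AI.

```python
# Breadth-first pathfinding for an NPC on a grid: classic "game AI".
from collections import deque

def find_path(grid, start, goal):
    """Return a list of (row, col) steps from start to goal, or None.
    grid: list of strings, '#' = wall, anything else = walkable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        current = queue.popleft()
        if current == goal:
            # Walk back through came_from to reconstruct the path
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != '#' and (nr, nc) not in came_from:
                came_from[(nr, nc)] = current
                queue.append((nr, nc))
    return None

print(find_path(["....",
                 ".##.",
                 "...."], (0, 0), (2, 3)))
```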

[–] keegomatic@kbin.social 9 points 2 years ago

That’s not really how LLMs work. You’re basically describing Markov chains. The statement “It’s just a statistical prediction model with billions of parameters” also applies to the human brain. An LLM is much more of a black box than you’re implying.
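
To make the distinction concrete, here’s a toy word-level Markov chain (my own sketch, with made-up training text): the next word depends only on the single previous word, which is roughly the kind of model being described. An LLM’s next-token distribution, by contrast, is a function of the entire context window pushed through billions of learned parameters.

```python
# Toy Markov chain text generator: the next word is sampled only from counts
# of what followed the previous word in the training text.
import random
from collections import defaultdict

def train(text):
    """Build a table: word -> list of words observed to follow it."""
    table = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def generate(table, start, length=10):
    word, out = start, [start]
    for _ in range(length):
        followers = table.get(word)
        if not followers:
            break
        word = random.choice(followers)   # next word depends ONLY on the current word
        out.append(word)
    return " ".join(out)

table = train("the cat sat on the mat and the dog sat on the rug")
print(generate(table, "the"))
```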

[–] keegomatic@kbin.social 1 points 2 years ago (1 children)

I’m in my early 30s and I learned metric pretty thoroughly as early as elementary school. Grew up in Massachusetts and went to public school, for what it’s worth.

[–] keegomatic@kbin.social 4 points 2 years ago

This really is true. Experiencing it now, myself.

[–] keegomatic@kbin.social 1 points 2 years ago (1 children)

Oh, interesting! Thanks for pointing that out. Side note: entries… I hope kbin adopts better language for what to call Reddit-like posts (articles), Twitter-like microblog posts (posts), and comments (entries?). I never would have guessed entries == comments. Maybe this is ActivityPub-specific naming? It reminds me of a past job where we surfaced internal technical names as the names of products and features… it just confused customers.
