They haven't released one for the razor I have, but honestly I might try modeling them myself. Doesn't seem impossible, and I've been wanting a deeper comb than they sell.
These aren't relying on gravity, they're relying on maintaining a vacuum, and concrete is extremely porous. They're obviously sealing the inside of the chamber, but basically no coatings have a lifetime of 60 years for holding vacuum.
"Meat's back on the menu, boys" hits different in this timeline...
(Yes, I know that Taking the Chickens to Isengard is noncanonical in other ways...)
There are currently 252 Catholic cardinals, but only 135 are eligible to cast ballots, as those over the age of 80 can take part in debate but cannot vote.
You're telling me the Catholic church has more term limits than the US Supreme Court?
Maybe the graph mode of logseq?
This gives strong "Lovecraft describing things he doesn't understand as non-Euclidean" vibes.
🎶 Saturday night and we in the spot, don't believe me just watch 🎶
Chuck Mangione soothes my soul
WELL ACKSHUALLY it's a clay tablet, you just press into it with a little stick, then it's fired...
Gotta love the low-quality-copper memes
While I agree that publishers charging high open access fees is a bad practice, the ACS journals aren't the kind of bottom-of-the-barrel predatory journals you're describing. ACS Nano in particular is a well-respected journal for nanochem, with a generally well-respected editorial board, and any suspicion of editorial misconduct of the type you're describing would be a three-alarm fire in the community.
I will also note that this article is labelled "free to read" -- when the authors have paid an (as you said, exorbitant) publishing fee to have the paper be open access, the label used by ACS journals is "open access". The "free to read" label would be an editorial decision, typically because the article is relevant outside the journal's typical readership, so it makes sense both practically (and, more cynically, for the journal's PR) to make it available to everyone, not just the community with institutional access.
Also, the fact that the authors had a little fun with the title doesn't mean it's low-effort slop -- this was actually an important critique at the time, because for years people had been adding different modifications to graphene and making a huge deal about how revolutionary their new magic material was.
The point this paper was trying to make is that finding modifications to graphene which make it better for electrocatalysis is not some revolutionary thing, because almost any modification works. It was a useful recalibration of expectations, as well as a good laugh.
Edit: typo
Not somebody who knows a lot about this stuff, as I'm a bit of an AI Luddite, but I know just enough to answer this!
"Tokens" are essentially just a unit of work -- instead of interacting directly with the user's input, the model first "tokenizes" the user's input, simplifying it down into a unit which the actual ML model can process more efficiently. The model then spits out a token or series of tokens as a response, which are then expanded back into text or whatever the output of the model is.
I think tokens are used because most models use them, and use them in a similar way, so they're the lowest-level common unit of work where you can compare across devices and models.
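If it helps to see it concretely, here's a minimal sketch using the tiktoken library (my choice for illustration -- any BPE tokenizer shows the same idea, and the exact IDs depend on which encoding you pick):

    # Minimal tokenization sketch using tiktoken (pip install tiktoken).
    # "cl100k_base" is one of its standard encodings.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    text = "concrete is extremely porous"
    tokens = enc.encode(text)   # text -> list of integer token IDs
    print(tokens)               # the model only ever sees these integers

    # Each ID corresponds to a chunk of text, often a word or word fragment.
    print([enc.decode([t]) for t in tokens])

    # The model's output is also token IDs, decoded back to text the same way.
    print(enc.decode(tokens))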
Well, I doubt they'll release one for my clippers since they're discontinued, so that inspired me to go ahead and model a variable-depth one for myself. Based on some of the comments here, I thickened the comb blades to make them print more easily.