A 1% slowdown is pretty bad; you'd still do better just not using it. 19% is huge!
vrighter
No, they aren't processing high-quality data from multiple sources. They're giving you a statistical average of that data. They will always be wrong by nature. Hallucinations cannot be eliminated, and anyone saying otherwise (regardless of how rich they are) is bullshitting.
Python is a language explicitly designed to resist any form of proper optimization. It just can't be made fast.
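A minimal sketch of the problem (standard CPython semantics, nothing exotic):

    # Why ahead-of-time optimization is hard in Python: almost nothing is fixed.
    # Even the builtin len() can be rebound at runtime, so a compiler can't
    # safely inline or specialize the call inside strlen().
    import builtins

    def strlen(s):
        return len(s)  # which len? The answer can change between calls.

    print(strlen("hello"))        # 5
    builtins.len = lambda x: -1   # perfectly legal monkey-patching
    print(strlen("hello"))        # -1

Every name lookup, attribute access, and operator can be redefined like this, so an optimizer can assume almost nothing.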
Yep. You could of course swap weights in and out, but that would slow things down to a crawl. So they get lots of VRAM (edit: for example, an H100 has 80 GB of VRAM).
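Back-of-envelope, assuming fp16 weights (2 bytes per parameter) and a GPT-3-sized 175B model; the numbers are illustrative:

    # VRAM needed just to hold the weights, before KV cache and activations.
    params = 175e9           # parameter count (GPT-3 scale)
    bytes_per_param = 2      # fp16
    weights_gb = params * bytes_per_param / 1e9
    h100_vram_gb = 80
    print(f"{weights_gb:.0f} GB of weights -> at least "
          f"{weights_gb / h100_vram_gb:.1f} H100s just for the weights")

That comes out to roughly 350 GB, i.e. five H100s minimum, and that's before any inference-time memory.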
That's why they need huge datacenters and thousands of GPUs. And, pretty soon, dedicated power plants. It's insane just how wasteful all of this is.
Yes, but that doesn't help if the software refuses to run on modern Java.
I wasn't born yet. I don't even think half of me was in my dad's balls yet.
Imagine that to type one letter, you had to read through every Unicode code point several thousand times. When you're done, you pick one letter to type.
Then you start rereading all the Unicode code points thousands of times over again, for the next letter.
That's how LLMs work. When they say 175 billion parameters, it means at least that many calculations per token generated.
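A minimal sketch of that loop, with model.forward standing in for a hypothetical dense forward pass:

    # Autoregressive generation: every new token requires a full forward pass,
    # and a dense forward pass touches every parameter at least once, so an
    # N-parameter model does on the order of N multiply-adds per token.
    def generate(model, prompt_tokens, n_new):
        tokens = list(prompt_tokens)
        for _ in range(n_new):
            logits = model.forward(tokens)       # reads all N parameters
            tokens.append(int(logits.argmax()))  # greedy pick, for simplicity
        return tokens

(Greedy decoding and a dense model are simplifying assumptions here; the point is the per-token cost, not the sampling strategy.)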
Funny how everyone who wants to write a new browser (except the Ladybird guys) always skimps on writing the actual browser part.
AI chip demand explodes amongst manufacturers of crap, who hope that demand for AI chips amongst consumers somehow explodes too.
In yes/no-type questions, a 50% success rate is the absolute worst you can do. Any worse and you're just giving the inverted correct answer more than half the time.
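A toy demonstration in Python:

    # A yes/no predictor that is right only 30% of the time becomes a 70%
    # predictor the moment you invert its answers.
    import random

    random.seed(0)
    truth = [random.choice([True, False]) for _ in range(10_000)]
    preds = [t if random.random() < 0.3 else not t for t in truth]  # ~30% correct

    acc = sum(p == t for p, t in zip(preds, truth)) / len(truth)
    inv = sum((not p) == t for p, t in zip(preds, truth)) / len(truth)
    print(f"accuracy: {acc:.2%}, inverted: {inv:.2%}")  # ~30% vs ~70%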
Yes they can; I regularly do. Regexes aren't hard to write; their logic is quite simple. They're hard to read, yes, but they're almost always one-offs (e.g., substitutions in nvim).
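A typical one-off, sketched here with Python's re.sub rather than an nvim :%s command (same idea; the pattern and input are made up):

    # Throwaway substitution: reorder ISO dates (YYYY-MM-DD) into DD/MM/YYYY.
    # Written once, never read again.
    import re

    line = "released 2024-03-15, patched 2024-04-01"
    print(re.sub(r"(\d{4})-(\d{2})-(\d{2})", r"\3/\2/\1", line))
    # -> released 15/03/2024, patched 01/04/2024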