this post was submitted on 09 Nov 2023
17 points (100.0% liked)

Futurology

[–] blackfire@lemmy.world 4 points 2 years ago (1 children)

So it was a perf test of a 1B-token model, not the full 3.7T that GPT-3 is trained with. I mean, great, they're showing improvement, but this is just a headline grabber; they haven't done anything actually useful here.

[–] Oisteink@feddit.nl 1 points 2 years ago

Just checking in to say they're still there; so many rascals showing off rigs these days.