@AutoTLDR
Seems like it isn’t:
the same technology under the hood of Google Translate
This is incredible, thanks for sharing it!
If I remember correctly, the properties the API returns are `comment_score` and `post_score`.
Lemmy does have karma: it is stored in the DB, and the API returns it. It just isn’t displayed in the UI.
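For anyone curious, here’s a rough sketch of how you could pull those scores from the API and add them up yourself. The endpoint path and the response shape are from memory, so treat them as assumptions and double-check against your instance’s API docs:

```python
import requests

def get_karma(instance: str, username: str) -> int:
    """Fetch a user's scores from a Lemmy instance and sum them into 'karma'."""
    resp = requests.get(
        f"https://{instance}/api/v3/user",   # assumed endpoint path
        params={"username": username},
        timeout=10,
    )
    resp.raise_for_status()
    counts = resp.json()["person_view"]["counts"]  # assumed response shape
    # The UI doesn't show karma, but the scores are right there in the payload.
    return counts["post_score"] + counts["comment_score"]

print(get_karma("lemmy.world", "example_user"))
```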
It definitely helps me. It isn’t perfect, but it’s a night-and-day difference.
I’ve found that after using it for a while, I developed a feel for the complexity of the tasks it can handle. If I aim below this level, its output is very good most of the time. But I have to decompose the problem and make it solve the subproblems one by one.
(The complexity ceiling is much higher for GPT-4, so I use it almost exclusively.)
It only handles HTML currently, but I like your idea, thank you! I’ll look into implementing PDF reading as well. One problem with scientific articles, however, is that they are often quite long and don’t fit into the model’s context. I would need to do recursive summarization (sketched below), which would use many more tokens and could become pretty expensive. (Of course, the same problem occurs if a web page is too long; I currently just truncate it, which is a rather barbaric solution.)
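Roughly, the idea would look something like this. It’s only a sketch: `summarize_chunk` stands in for whatever model call the bot actually makes, and the character-based limit is a crude stand-in for a real tokenizer.

```python
from typing import Callable

MAX_CHARS = 12_000  # crude proxy for the model's context window

def recursive_summarize(text: str, summarize_chunk: Callable[[str], str]) -> str:
    """Split text into chunks that fit the context, summarize each chunk,
    then summarize the joined partial summaries until the result fits."""
    if len(text) <= MAX_CHARS:
        return summarize_chunk(text)
    chunks = [text[i:i + MAX_CHARS] for i in range(0, len(text), MAX_CHARS)]
    partials = "\n\n".join(summarize_chunk(chunk) for chunk in chunks)
    return recursive_summarize(partials, summarize_chunk)
```

Each pass multiplies the number of model calls, which is exactly why it gets expensive compared to the current truncation approach.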
someone watching you code in a google doc
I’ve had nightmares less terrifying than this
It may very well be related to it.
Looks like you have a problem extracting just the README from GitHub. Let's see if you can read the raw link: https://raw.githubusercontent.com/0xpayne/gpt-migrate/main/README.md