projectmoon

joined 2 years ago
[–] projectmoon@lemm.ee 24 points 10 months ago

That is exactly the plan.

[–] projectmoon@lemm.ee 2 points 10 months ago

You can right-click the URL bar on sites that support the OpenSearch XML standard, which I guess is what they wanted to replace it with. But I don't really know why they hid the button behind an about:config setting. It could at least be a checkbox or something to enable.
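For reference, the site side of this is just a small XML file linked from the page's head; a minimal sketch of an OpenSearch description (the names and URL here are made up):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>Example Search</ShortName>
  <Description>Search example.com</Description>
  <!-- the browser substitutes {searchTerms} with the query -->
  <Url type="text/html" template="https://example.com/search?q={searchTerms}"/>
</OpenSearchDescription>
```

The page advertises it with a `<link rel="search" type="application/opensearchdescription+xml" href="/opensearch.xml">` tag in its head, which is what makes the right-click option show up at all.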

[–] projectmoon@lemm.ee 17 points 10 months ago (3 children)

Brings back the "add custom search engine" button, which for some reason is hidden by default.

[–] projectmoon@lemm.ee 3 points 10 months ago

Anyone have any suggestions for bulk options in the Netherlands?

[–] projectmoon@lemm.ee 4 points 11 months ago

Is it possible to use ollama or an arbitrary OpenAI-compatible endpoint with the chatbot feature yet? Or only the cloud providers?
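If it helps anyone: Ollama already exposes an OpenAI-compatible endpoint at /v1, so any chatbot feature that lets you set a custom base URL should be able to point at it. A quick sketch with the openai Python client (port is the Ollama default; the model tag is an assumption, and the API key is ignored by Ollama but required by the client):

```python
from openai import OpenAI

# Point the standard OpenAI client at a local Ollama server.
client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    api_key="ollama",                      # ignored by Ollama, but must be non-empty
)

reply = client.chat.completions.create(
    model="mixtral:8x7b",  # assumed: any model you've pulled locally
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(reply.choices[0].message.content)
```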

[–] projectmoon@lemm.ee 8 points 11 months ago (1 children)

That would probably be a task for regular machine learning. Plus, properly encrypted data shouldn't have any discernible pattern in the bytes. Just blobs of garbage.
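To illustrate: a rough sketch comparing the byte entropy of highly structured plaintext with its AES-CTR ciphertext (using the `cryptography` package; the plaintext is made up). Proper ciphertext sits at the ~8 bits/byte maximum, i.e. statistically indistinguishable from random noise, so there's no pattern to learn.

```python
import math
import os
from collections import Counter

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def entropy_bits_per_byte(data: bytes) -> float:
    """Shannon entropy; 8.0 means indistinguishable from uniform random bytes."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

# Highly repetitive plaintext vs. its AES-256-CTR ciphertext.
plaintext = b"attack at dawn " * 1000
encryptor = Cipher(algorithms.AES(os.urandom(32)), modes.CTR(os.urandom(16))).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

print(f"plaintext:  {entropy_bits_per_byte(plaintext):.2f} bits/byte")   # ~2.7
print(f"ciphertext: {entropy_bits_per_byte(ciphertext):.2f} bits/byte")  # ~8.0
```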

[–] projectmoon@lemm.ee 16 points 11 months ago (1 children)

Not to mention the face of the kid.

[–] projectmoon@lemm.ee 3 points 1 year ago

That's being generous.

[–] projectmoon@lemm.ee 1 points 1 year ago

How much speed are you actually getting on Mixtral (I assume that's the 8x7B)? I have 64 GB of RAM and an AMD RX 6800 XT with 16 GB of VRAM, and I get about 4 tokens per second with the Q5_K_M quant.

[–] projectmoon@lemm.ee 53 points 1 year ago (2 children)

Depends on the continuity and who's writing it, but often yes. He was notably portrayed this way in the Justice League cartoon.

[–] projectmoon@lemm.ee 6 points 1 year ago

The only problem I really have is context size. It's hard to go beyond an 8k context and maintain decent generation speed with 16 GB of VRAM and 16 GB of RAM. Gonna get more RAM at some point, though, and I hope ollama/llama.cpp gets better at memory management. Hopefully the distributed inference from llama.cpp ends up in ollama. A sketch of the context tradeoff is below.
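Side note, in case it's useful to anyone: you can at least bump the context per request through Ollama's REST API. A sketch, assuming the default port and a mixtral tag you've already pulled; the catch is exactly the memory tradeoff above, since the KV cache grows with num_ctx:

```python
import requests

# Ask Ollama for a larger context window on a single request.
resp = requests.post(
    "http://localhost:11434/api/generate",  # default Ollama port
    json={
        "model": "mixtral:8x7b",            # assumed model tag
        "prompt": "Explain KV cache memory use in one paragraph.",
        "stream": False,
        "options": {"num_ctx": 8192},       # bigger context = bigger KV cache
    },
    timeout=600,
)
print(resp.json()["response"])
```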

[–] projectmoon@lemm.ee 9 points 1 year ago (2 children)

I do have a local setup. Not powerful enough to run Mixtral 8x22B, but it can run 8x7B (albeit quite slowly). I use it a lot.
