xcjs

joined 2 years ago
[–] xcjs@programming.dev 1 points 1 year ago

They added a video player with version 3, I think.

[–] xcjs@programming.dev 12 points 1 year ago

Now the question is - are they open sourcing the original Winamp, or the awful replacement?

[–] xcjs@programming.dev 1 points 1 year ago

We all mess up! I hope that helps - let me know if you see improvements!

[–] xcjs@programming.dev 1 points 1 year ago* (last edited 1 year ago) (2 children)

I think there was a special process to get Nvidia working in WSL. Let me check... (I'm running natively on Linux, so my experience doing it with WSL is limited.)

https://docs.nvidia.com/cuda/wsl-user-guide/index.html - I'm sure you've followed this already, but according to this guide, you don't want to install the Linux Nvidia drivers inside WSL (the Windows driver already covers it), and only want to install the cuda-toolkit metapackage. I'd follow the instructions from that link closely.

You may also run into performance issues within WSL due to the virtual machine overhead.
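Once the toolkit is installed, a quick sanity check is to confirm CUDA is actually visible from inside WSL. This is just a minimal sketch, assuming you have PyTorch installed in that environment:

```python
# Minimal check that CUDA is visible from inside WSL.
# Assumes PyTorch is installed here (pip install torch).
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    props = torch.cuda.get_device_properties(0)
    print(f"VRAM: {props.total_memory / 1024**3:.1f} GiB")
```

If that prints `False`, the toolkit/driver handoff between Windows and WSL is the first place I'd look.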

[–] xcjs@programming.dev 1 points 1 year ago (4 children)

Good luck! I'm definitely willing to spend a few minutes offering advice/double checking some configuration settings if things go awry again. Let me know how things go. :-)

[–] xcjs@programming.dev 1 points 1 year ago* (last edited 1 year ago)

It should be split between VRAM and regular RAM, at least if it's a GGUF model. Maybe it isn't a GGUF model, and that's what's wrong?
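If you're loading it yourself with llama-cpp-python (just a guess at your stack), the split is controlled by how many layers you offload. A minimal sketch - the file name and layer count are placeholders:

```python
# Sketch: loading a GGUF with a partial GPU offload via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3-70b.Q4_K_M.gguf",  # hypothetical GGUF file
    n_gpu_layers=40,  # layers offloaded to VRAM; the rest stay in system RAM
    n_ctx=4096,
)
out = llm("Q: What does GGUF stand for? A:", max_tokens=48)
print(out["choices"][0]["text"])
```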

[–] xcjs@programming.dev 1 points 1 year ago (6 children)

OK, so using my "older" 2070 Super, I was able to get a response from a 70B parameter model (Llama 3, in this case) in 9-12 minutes.

I'm fairly certain that you're using your CPU or having another issue. Would you like to try and debug your configuration together?
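If it helps to compare apples to apples, here's roughly how I'd time a single response. I'm assuming an Ollama setup on the default port here, which may not match yours:

```python
# Rough timing of one local generation. Assumes an Ollama server on
# its default port (11434) with a pulled llama3:70b tag - adjust to taste.
import json
import time
import urllib.request

payload = json.dumps({
    "model": "llama3:70b",
    "prompt": "Why is the sky blue?",
    "stream": False,
}).encode()
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
start = time.perf_counter()
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(f"{time.perf_counter() - start:.1f}s elapsed")
print(body["response"][:200])
```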

[–] xcjs@programming.dev 2 points 1 year ago

Unfortunately, I don't expect it to remain free forever.

[–] xcjs@programming.dev 5 points 1 year ago (1 children)

No offense intended, but are you sure it's using your GPU? Twenty minutes is about how long my CPU-locked instance takes to run some 70B parameter models.

On my RTX 3060, I generally get responses in seconds.
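An easy way to tell, assuming an Nvidia card: watch utilization while a response is generating. If it sits near zero, inference is happening on the CPU. A rough sketch with the NVML bindings:

```python
# Poll GPU load while a response is generating.
# Assumes an Nvidia GPU and `pip install nvidia-ml-py`.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
for _ in range(10):
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU {util.gpu:3d}% | VRAM {mem.used / 1024**2:6.0f} MiB")
    time.sleep(1)
pynvml.nvmlShutdown()
```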

[–] xcjs@programming.dev 3 points 1 year ago* (last edited 1 year ago) (1 children)

It's a W3C-managed standard, but there's a ton of behavior the specification doesn't spell out, which platforms are free to define for themselves.

The standard doesn't impose a 500-character limit, but nothing in it says a platform can't add one.
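To illustrate (a made-up sketch, not any real server's code): a character limit is just validation a platform layers on top of the spec.

```python
# Hypothetical platform-side check: the spec defines the shape of a Note,
# but a length cap like this is purely local policy (e.g., Mastodon's
# default of 500), not anything the W3C standard requires.
CHARACTER_LIMIT = 500

def validate_note(note: dict) -> None:
    if note.get("type") != "Note":
        raise ValueError("expected an ActivityPub Note")
    if len(note.get("content", "")) > CHARACTER_LIMIT:
        raise ValueError(f"content exceeds {CHARACTER_LIMIT} characters")

validate_note({"type": "Note", "content": "Hello, fediverse!"})
```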

[–] xcjs@programming.dev 4 points 1 year ago

Or maybe just let me focus on who I choose to follow? I'm not there for content discovery, though I know that's why most people are.

[–] xcjs@programming.dev 4 points 1 year ago* (last edited 1 year ago)

I was reflecting on this myself the other day. For all my criticisms of Zuckerberg/Meta (which are very valid), they really didn't have to release anything concerning LLaMA. They're practically the only reason we have viable open-source weights/models and an engine.
