SavvyBeardedFish

joined 2 years ago
[–] SavvyBeardedFish@reddthat.com 89 points 1 month ago (1 children)

... I can't undervolt my card...

People usually use/recommend LACT for undervolting/overclocking on Linux
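As a rough sketch (assuming a systemd-based distro and that LACT is packaged for it; service and binary names may differ per package), it's basically:

# install the lact package from your distro's repos, then enable its daemon
sudo systemctl enable --now lactd
# launch the GUI (command name may vary) and set the voltage/clock offsets per GPU from there
lact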

[–] SavvyBeardedFish@reddthat.com 8 points 2 months ago* (last edited 2 months ago) (1 children)

Have you enabled Southern Islands (SI) support via a kernel parameter? Your generation of GPU was originally handled by the radeon driver, so you need to explicitly enable SI support to use amdgpu.
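For reference (parameters as documented on the ArchWiki; where you add them depends on your bootloader), it typically looks like:

# in e.g. /etc/default/grub, appended to GRUB_CMDLINE_LINUX_DEFAULT
radeon.si_support=0 amdgpu.si_support=1
# then regenerate the config (GRUB example)
sudo grub-mkconfig -o /boot/grub/grub.cfg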

See ArchWiki for more information

[–] SavvyBeardedFish@reddthat.com 7 points 3 months ago* (last edited 3 months ago)

Pretty sure that information is exposed by the driver, so you should be able to query it with monitoring software, e.g. see:

NVML-API

I know tooling like nvtop uses that API, but I'm unsure whether it displays the maximum temperature
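For a quick check (assuming the proprietary driver, which ships nvidia-smi on top of NVML), the thresholds the driver reports can be dumped from the command line:

# prints the shutdown/slowdown/max operating temperature thresholds plus the current temperature
nvidia-smi -q -d TEMPERATURE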

[–] SavvyBeardedFish@reddthat.com 4 points 5 months ago (2 children)

Maybe the LLMs they prompted didn't know about the built-in SSH support, hence still recommend PuTTY? 🤔

[–] SavvyBeardedFish@reddthat.com 26 points 5 months ago* (last edited 5 months ago) (1 children)

Der8auer's video is worth a watch; he got hold of one of the Redditors' cards:

https://youtu.be/Ndmoi1s0ZaY

[–] SavvyBeardedFish@reddthat.com 7 points 6 months ago

Yes, so R&D and finalizing the model weights are done on NVIDIA GPUs (I guess because you need an excessive amount of VRAM).

Inference is probably gonna be offloaded to consumers in the end, with the NPU taking care of the inference cost (see Apple, Qualcomm, etc.)

[–] SavvyBeardedFish@reddthat.com 25 points 6 months ago* (last edited 6 months ago) (4 children)

Not the best with AI/LLM terminology, but I assume that training the models was done on Nvidia, while inference (using the model / getting outputs from it) is done on Huawei chips

To add: Training the model is a huge one-time expense, while inference is a continuous expense.

[–] SavvyBeardedFish@reddthat.com 19 points 7 months ago* (last edited 7 months ago) (5 children)

The whole downside is that not everyone is a data hoarder with space for videos

Some media players allow streaming directly through yt-dlp, e.g.:

mpv <youtube url>

This will use yt-dlp under the hood if it's installed
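As an example (hedged, using mpv's yt-dlp integration flags), you can also cap the quality so it doesn't pull the highest-resolution stream:

# limit the stream to 1080p; mpv passes the format string straight to yt-dlp
mpv --ytdl-format="bestvideo[height<=1080]+bestaudio/best" <youtube url>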

[–] SavvyBeardedFish@reddthat.com 5 points 8 months ago

Sounds like you might just be maxing out the capacity of the coax cable as well (depending on length/signal integrity). E.g. ITGoat (not sure how trustworthy that webpage is, just an example) lists 1 Gbps as the maximum for coax, while you would typically expect less than that, again depending on your situation (cable length, material, etc.)

[–] SavvyBeardedFish@reddthat.com 2 points 8 months ago (2 children)

What does the line into your wall look like? Depending on country/ISP/regulations, they might sell you up to 1000 Mbps under the assumption that it's a single line going to a single user; however, quite often that line is shared with potentially a lot of other customers.

Some countries allow you to buy packages where you have a standalone line going to your wall, though at an additional cost

[–] SavvyBeardedFish@reddthat.com 4 points 8 months ago* (last edited 8 months ago) (1 children)

If all nodes are connected to each other over Ethernet (or at least to one common node), you could go for OpenWRT's 'Dumb AP' setup as well

Edit: Already mentioned here; https://feditown.com/comment/1980836
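Roughly, the 'Dumb AP' recipe boils down to giving the AP a static address in the main router's subnet and disabling its own DHCP/firewall; a sketch (addresses and interface names assumed, adjust to your network):

# on the AP: static IP in the main router's subnet, gateway/DNS pointing at the router
uci set network.lan.ipaddr='192.168.1.2'
uci set network.lan.gateway='192.168.1.1'
uci set network.lan.dns='192.168.1.1'
# don't hand out DHCP leases on the AP
uci set dhcp.lan.ignore='1'
uci commit
# firewall/dnsmasq/odhcpd aren't needed on a bridged AP
/etc/init.d/firewall disable
/etc/init.d/dnsmasq disable
/etc/init.d/odhcpd disable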

[–] SavvyBeardedFish@reddthat.com 8 points 9 months ago* (last edited 9 months ago) (2 children)

The maintainer has been absent for some time, so kernels v6.11 and v6.12 aren't supported OOTB. To get it working with kernel v6.11 you need to pull the fix from: !48
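If you can't wait for it to be merged, something like this usually works (assuming the project is hosted on GitLab, which exposes merge requests as git refs):

# fetch the merge request's branch (!48) and check it out locally
git fetch origin refs/merge-requests/48/head
git checkout -b fix-kernel-6.11 FETCH_HEAD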
