relevants

joined 2 years ago
[–] relevants@feddit.de 40 points 2 years ago (1 children)

Nowhere in the post does OP claim that Discord's privacy issues are China related...

[–] relevants@feddit.de 2 points 2 years ago (1 children)

No, I'm not sure actually, it might very well be! Both would make sense conceptually, but I never actually looked into which one it is.

[–] relevants@feddit.de 4 points 2 years ago (3 children)

We say "das ist mir Wurst" in Hamburg too, so it must be a pretty universal saying.

Is Mietschuldenfreiheitsbescheinigung used in a saying? The only meaning I can think of is the literal one (attestation of no rental debt)

[–] relevants@feddit.de 4 points 2 years ago (3 children)

I'm from Hamburg and I know the majority of these as well, but some are a bit different. Here are some variations on yours:

  • Das macht den Kohl auch nicht fett (that doesn't fatten up the cabbage)
  • Herr, lass Hirn vom Himmel regnen! (lord, let it rain brains!)
  • Wie ein Schluck Wasser in der Kurve (like a sip of water turning a corner) - sitting very lazily/not upright
[–] relevants@feddit.de 1 point 2 years ago

The bear's favor exists in German too (jemandem einen Bärendienst erweisen, literally "to do someone a bear's service", i.e. a well-meant favor that ends up doing harm)

[–] relevants@feddit.de 4 points 2 years ago (1 children)

If it's in the minified front-end code, it's already client side; of course you don't show it to the user, but they could find it if they wanted to. Server-side errors are where you really have to watch out not to give away any details, but then logging them is also easier, since it's already happening on the server.
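For the server-side case, the usual pattern is to log the full exception where only you can see it and hand the client a deliberately generic message. A minimal sketch of that idea, assuming a small Flask app (the framework, route, and messages here are just illustrative):

```python
import logging

from flask import Flask, jsonify

app = Flask(__name__)
log = logging.getLogger(__name__)

@app.route("/crash")
def crash():
    # Internal detail that should never reach the browser.
    raise RuntimeError("db password rejected for user 'admin'")

@app.errorhandler(Exception)
def handle_unexpected_error(exc):
    # Full details, stack trace included, go to the server logs...
    log.exception("Unhandled error while serving request")
    # ...while the client only ever sees a generic message.
    return jsonify({"error": "Something went wrong"}), 500
```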

[–] relevants@feddit.de 2 points 2 years ago (1 children)

When any message with some contents is received - Right now this is limited to a list of contacts. I'd like to have a shortcut where someone could text me, "Where are you?" and it'll just auto-send my location.

You can already do this! Just leave the Sender field on "Choose" and fill out the Message Contains field only.

[–] relevants@feddit.de 13 points 2 years ago

In a socialist system he would still be allowed to sell his own work and profit from his labor. Your point makes no sense.

[–] relevants@feddit.de 30 points 2 years ago (9 children)

anyone who likes logical consistency

That's the neat part, they don't.

[–] relevants@feddit.de 2 points 2 years ago

Do keep in mind that if you upgrade your regular RAM, this will only benefit models running on the CPU, which are far slower than models running on the GPU. So with more RAM you may be able to run bigger models, but when you run them they will also be more than an order of magnitude slower. If you want a response within seconds, you'll want to run the model on the GPU, where only VRAM counts.
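To make that split concrete: with llama.cpp-style runners it basically comes down to one setting. A rough sketch, assuming llama-cpp-python and a local GGUF file (the model path is hypothetical):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-7b.Q4_K_M.gguf",  # hypothetical local model file
    n_gpu_layers=0,  # 0 = weights stay in regular RAM, inference runs on the CPU (slow)
    # n_gpu_layers=-1 would offload all layers to the GPU instead:
    # then VRAM, not RAM, is the limit, but responses come back in seconds.
)

out = llm("Q: Why is CPU-only inference so slow? A:", max_tokens=64)
print(out["choices"][0]["text"])
```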

Models that perform much better at consumer-device scale will probably arrive in the near future, but for now it's unfortunately still a pretty steep tradeoff, especially since large amounts of VRAM haven't really been in high demand and are therefore much harder to come by.

[–] relevants@feddit.de 2 points 2 years ago

How does she feel about the MacBook keyboard? I personally quite like it, now that they're normal again, but especially for an aspiring writer I think that's a pretty significant criterion.

[–] relevants@feddit.de 5 points 2 years ago* (last edited 2 years ago) (2 children)

"Runs locally" is a very different requirement and not one you'll likely be able to find anything for. There are smaller open source LLMs but if you are looking for GPT-4 level performance your device will not be able to handle it. Llama is probably your best bet, but unless you have more VRAM than any consumer gpu currently does , you'll have to go with lower size models which have lower quality output.
