theterrasque

joined 2 years ago
[–] theterrasque 3 points 1 year ago

It’s a watch that says you have no taste.

They know their target demographic

[–] theterrasque 3 points 1 year ago

Careful, if you spend 8 hours playing with your deck you might go blind

[–] theterrasque 5 points 1 year ago (1 children)

Hah. Snake oil vendors will still sell snake oil, CEOs will still be dazzled by fancy dinners and fast-talking salesmen, and IT will still be tasked with keeping the crap running.

[–] theterrasque 10 points 1 year ago (3 children)

This has a lot of "I can use the bus perfectly fine for my needs, so we should outlaw cars" energy to it.

There are several systems, like firewalls, switches, routers, and proprietary appliances, that only have a manual update process and can't easily be automated.

[–] theterrasque 3 points 1 year ago* (last edited 1 year ago)

Most phones these days use randomized MACs

https://www.guidingtech.com/what-is-mac-randomization-and-how-to-use-it-on-your-devices/

Not sure if that applies to Bluetooth too, but it looks like there is some support for it in the standards

https://novelbits.io/how-to-protect-the-privacy-of-your-bluetooth-low-energy-device/

https://novelbits.io/bluetooth-address-privacy-ble/

The recommendation per the Bluetooth specification is to have it change every 15 minutes (this is evident in all iOS devices).

So seems like it is implemented on some phones at least

https://www.bluetooth.com/blog/bluetooth-technology-protecting-your-privacy/

From 2015. So this seems to be a solved problem for a decade now
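To make the idea concrete, here's a toy sketch of what a randomized MAC looks like: a random 6-byte address with the locally-administered bit set and the multicast bit cleared. This is purely illustrative; real randomization happens inside the Wi-Fi/Bluetooth stack, not in userspace code.

```python
import secrets

def random_private_mac() -> str:
    """Generate a randomized, locally administered unicast MAC address.

    Illustrative sketch only: real devices do this in the radio stack.
    """
    octets = bytearray(secrets.token_bytes(6))
    octets[0] &= 0xFE  # clear the multicast bit -> unicast address
    octets[0] |= 0x02  # set the locally-administered bit -> "randomized"
    return ":".join(f"{b:02x}" for b in octets)

print(random_private_mac())
```

The locally-administered bit is also how you can spot a randomized address in the wild: the second hex digit of the first octet will be 2, 6, a, or e.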

[–] theterrasque 6 points 1 year ago

That's because they don't see the letters, but tokens instead. A token can be one letter, but is usually bigger. So what the LLM sees might be something like

  • st
  • raw
  • be
  • r
  • r
  • y

When you see it like that, it's more obvious why LLMs struggle with it

[–] theterrasque 1 points 1 year ago

In many cases the key exchange (kex) for symmetric ciphers is done using slower asymmetric ciphers, many of which are vulnerable to quantum algorithms to various degrees.

So even when attacking AES you'd ideally do it indirectly by targeting the kex.
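As a sketch of why the kex is the weak point: a toy finite-field Diffie-Hellman exchange that produces the shared material a symmetric cipher like AES would be keyed from. The parameters here are tiny and insecure, chosen only for illustration; real deployments use vetted groups or elliptic curves.

```python
import secrets

# INSECURE toy parameters, for illustration only.
P = 0xFFFFFFFB  # a small prime (largest prime below 2**32)
G = 5

def keypair():
    priv = secrets.randbelow(P - 2) + 1  # private exponent
    pub = pow(G, priv, P)                # public value sent over the wire
    return priv, pub

a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

# Each side combines its private key with the other's public value.
a_shared = pow(b_pub, a_priv, P)
b_shared = pow(a_pub, b_priv, P)
assert a_shared == b_shared  # both now hold the same symmetric-key material
```

Shor's algorithm on a large enough quantum computer recovers the private exponent from the public value, handing the attacker the symmetric key without ever touching AES itself.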

[–] theterrasque 12 points 1 year ago (1 children)

I generally agree with your comment, but not on this part:

parroting the responses to questions that already existed in their input.

They're quite capable of following instructions over data where neither the instruction nor the data was anywhere in the training data.

They're completely incapable of critical thought or even basic reasoning.

Critical thought, generally no. Basic reasoning, that they're somewhat capable of. And chain of thought amplifies what little is there.

[–] theterrasque 1 points 1 year ago

No, all sizes of Llama 3.1 should be able to handle the same context size. The difference is in the "smarts" of the model. Bigger models are better at reading between the lines and at higher-level understanding and reasoning.

[–] theterrasque 4 points 1 year ago (5 children)

Wow, that's an old model. Great that it works for you, but have you tried some more modern ones? They're generally considered a lot more capable at the same size

[–] theterrasque 2 points 1 year ago* (last edited 1 year ago) (2 children)

Increase the context length, and probably enable flash attention in ollama too. Llama 3.1 supports up to 128k context length, for example. That's in tokens, and a token is on average a bit under 4 letters.

Note that a higher context length requires more RAM and is slower, so you ideally want to find a sweet spot for your use case and hardware. Flash attention makes this more efficient.

Oh, and the model needs to have been trained on larger contexts, otherwise it tends to handle them poorly. So you should check the max length the model you want to use was trained to handle.
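The "a bit under 4 letters per token" rule of thumb is enough for a back-of-the-envelope check of whether text fits a context window. A minimal sketch, where the 4-chars-per-token ratio and the 8192-token context are assumptions you'd adjust for your model:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate from character count.

    Real counts depend on the model's tokenizer; this is only a
    ballpark for sizing a context window.
    """
    return max(1, round(len(text) / chars_per_token))

def fits_context(text: str, context_tokens: int = 8192) -> bool:
    # 8192 is a hypothetical configured context length, not a model default.
    return estimate_tokens(text) <= context_tokens

doc = "word " * 10000            # ~50,000 characters
print(estimate_tokens(doc))      # 12500
print(fits_context(doc))         # False: too big for an 8k context
```

If the estimate is near the limit, leave headroom: the prompt, system message, and the model's own reply all share the same window.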

[–] theterrasque 3 points 2 years ago

Like when under Arab spring the Egyptian politicians tried to get the military involved to stop the protests, and got back (paraphrased)

"Our primary job is to protect the Egyptian people from violence. You really don't want us involved in this"
