dannym

joined 2 years ago
[–] dannym@lemmy.escapebigtech.info 9 points 2 years ago* (last edited 2 years ago) (11 children)

I will tell you something that most people won't: you don't have to use those websites.

It doesn't matter how important you think they are, you can take a stand by not using them if they don't respect you.

Do you know the reasoning behind the common saying "the United States doesn't engage with terrorists"? Politics aside, it's because engaging with your enemy legitimizes or empowers them. By refusing to negotiate or engage with terrorists, the policy aims to avoid granting them recognition or validation for their methods.

You can take the same stance: when a website stops working with non-Chromium browsers, you stop using it. You IMMEDIATELY stop using it. Even better, if you pay them money, you IMMEDIATELY cancel, citing that they're stealing your intellectual freedom. If the US government does the same and you're required to use a Chromium browser to fill out your taxes, for example, do it on paper. Send them the message that you'd rather not use technology at all than have guns pointed at you.

Fair point!

To be clear, I wasn't arguing that DARE is enough; you are absolutely correct that depending on the situation it isn't. But in my opinion, in this specific case, if the data had been DAREd, sent to the user in its encrypted state, and only decrypted on the user's machine with the user's key (which is not stored on any server), it would have completely fixed this specific issue. Naturally, to your point, with encryption there is no one-size-fits-all solution!
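The flow I'm describing can be sketched like this. To keep it self-contained I'm using a toy SHA-256 keystream instead of real audited crypto (in practice you'd want an AEAD like AES-GCM), and the record contents are made up:

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustration only, NOT audited crypto.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# The user's key is generated client-side and never sent to the server.
user_key = secrets.token_bytes(32)

# The server stores and serves ONLY ciphertext; it cannot read the record.
plaintext = b"rs4680: AA (sample genotype record)"
stored_on_server = xor_crypt(plaintext, user_key)

# Decryption happens locally, on the user's machine, with the user's key.
assert xor_crypt(stored_on_server, user_key) == plaintext
```

The point is the shape of the flow, not the cipher: the server only ever holds `stored_on_server`, so a breach of the server leaks nothing readable.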

[–] dannym@lemmy.escapebigtech.info 17 points 2 years ago (1 children)

Yeah, there is one way to make it better, but it won't happen until they're forced to change: force them to integrate with the Matrix protocol.

Yes, I know that it's possible to use a bridge, and I do, but it still requires a Discord account. It would be great if Discord rooms were just accessible over the Matrix protocol.

Because I'm pretty sure they need some of that data to be unencrypted; records of related customers can improve accuracy drastically.

I don't even think this should be a feature, but if it has to be, then they could have two versions of the data: one that they use for training and improving results, and one that a user can only access from a frontend by decrypting it (locally) with their key.

Also, this "hack" was done by just abusing built-in features (the "DNA relatives" system), not actually breaking any security.

Irrelevant. If you had a key pair, no amount of password guessing would get them in.

[–] dannym@lemmy.escapebigtech.info 21 points 2 years ago* (last edited 2 years ago) (5 children)

It's truly a shame that in this advanced age of technology, encryption remains a distant, unattainable dream! In this archaic age of ours, safeguarding customer data is just not possible, because nobody has invented the concept of public/private key pairs yet, and hackers are having a field day with our data. Clearly, we're still stuck in the digital dark ages where safeguarding sensitive information is just a pipe dream. 🙄

Seriously, how is it possible that they're still not using key pairs for encrypting this data? It would be so simple: just include a flash drive, or a QR code, in the box with the key, and viewing the data on the website would require that key. How is that still not something they're doing?
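To show the idea is genuinely simple, here's a textbook-toy sketch of the key-pair scheme. The primes are deliberately tiny and the record is made up; a real deployment would use 2048+ bit keys from an audited library, not hand-rolled math:

```python
# Textbook-toy RSA key pair (illustration only; real keys are 2048+ bits).
p, q = 61, 53
n = p * q    # 3233: the public modulus
e = 17       # public exponent: the lab uses this to encrypt your data
d = 2753     # private exponent: this is what ships on the QR code in the box

def encrypt(data: bytes) -> list[int]:
    # The lab encrypts each byte with the customer's PUBLIC key before storing it.
    return [pow(b, e, n) for b in data]

def decrypt(cipher: list[int]) -> bytes:
    # Only the customer, holding the private key from the box, can decrypt.
    return bytes(pow(c, d, n) for c in cipher)

record = b"rs53576: GG"            # hypothetical genotype record
stored = encrypt(record)           # this is all the website ever holds
assert decrypt(stored) == record   # readable only with the key from the box
```

With this shape, the website never holds anything it can read itself; a database dump gives attackers only ciphertext.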

#EncryptionPlease

[–] dannym@lemmy.escapebigtech.info 10 points 2 years ago (1 children)

I have thought about this for a long time, basically since the release of ChatGPT, and the problem in my opinion is that certain people have been fooled into believing that LLMs are actual intelligence.

The average person severely underestimates how complex human cognition, intelligence and consciousness are. They equate the ability of LLMs to generate coherent and contextually appropriate responses with true intelligence or understanding, when it's anything but.

In a hypothetical world where you had a die with billions of sides, or a wheel with billions of slots, each shifting its weight with grains of sand depending on the previous roll or spin, the outcome would closely resemble the output of an LLM. In essence, LLMs operate by rapidly sifting through a vast array of pre-learned patterns and associations, much like the shifting sands in the analogy, to generate responses that seem intelligent and coherent.
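The "weighted wheel" analogy can be made concrete with a few lines of Python: each previous token selects a different set of weights for the next spin. The vocabulary and weights here are made up, and real LLMs condition on the whole context through a neural network rather than a lookup table, but the sampling step is the same idea:

```python
import random

# Toy next-token wheel: the weights for the next spin depend on the
# previous token, like sand shifting between slots after each roll.
transitions = {
    "the": {"cat": 0.6, "dog": 0.3, "<end>": 0.1},
    "cat": {"sat": 0.7, "ran": 0.2, "<end>": 0.1},
    "dog": {"ran": 0.8, "<end>": 0.2},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(start: str, seed: int = 0) -> list[str]:
    rng = random.Random(seed)
    tokens = [start]
    while tokens[-1] != "<end>":
        dist = transitions[tokens[-1]]
        # "Spin the wheel": sample the next token from weights
        # determined by the previous token.
        tokens.append(rng.choices(list(dist), weights=list(dist.values()))[0])
    return tokens

print(" ".join(generate("the")))
```

No step in that loop understands anything; it just keeps spinning a conditional wheel, which is why coherent output doesn't imply comprehension.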

Maybe report it as a bug? I'm not experiencing it personally, so maybe it's related to the order of the sources?

For me it's my PeerTube instance -> Odysee -> YouTube.

[–] dannym@lemmy.escapebigtech.info 1 points 2 years ago (2 children)

How so? I watched the video IN Grayjay.

[–] dannym@lemmy.escapebigtech.info 5 points 2 years ago (4 children)

Yep! I prefer not to use products by companies that hate me so I mostly avoid YouTube and other similar platforms.

[–] dannym@lemmy.escapebigtech.info 7 points 2 years ago* (last edited 2 years ago) (1 children)

I was referring to his edit which is:

Edit: Oh god… It’s Rossman. Of course it’s dishonest.

And my argument was that it's fine to disagree with him (especially if you have conflicting evidence), but I don't think it's warranted to call Rossmann dishonest.


By the way, I don't even necessarily disagree with his main opinion; the video title is clickbaity for sure.

[–] dannym@lemmy.escapebigtech.info 14 points 2 years ago (3 children)

How is he dishonest? It's fine if you disagree with his opinions, but saying he's dishonest is very.... well.... dishonest :P
