....it's literally about accusing NSA of trying to implement back-doors for quantum resistant encryption.
I have no idea what you're trying to get at.
I think you vastly underestimate modern encryption. I'd recommend looking up the concepts and math behind encryption; once you do, it makes much more sense why practically unbreakable encryption is very much possible.
It's why governments want to implement back-doors: because they are not actually capable of breaking it directly.
...there very much is practically unbreakable encryption. We use it every day (it's part of the "s" in HTTPS).
And your example is just a very rudimentary form of encryption that is far, far weaker than the encryption methods typically used on the internet today.
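To give a sense of the scale involved, here's a back-of-the-envelope sketch (my own illustrative numbers, not from the thread): even granting an attacker an absurdly generous 10^12 key guesses per second, exhausting a 128-bit key space takes an astronomical amount of time.

```python
# Back-of-the-envelope: why brute-forcing a modern symmetric key is impractical.
# The guess rate below is an assumption for illustration, far beyond real hardware.
keyspace = 2 ** 128               # size of a 128-bit key space (e.g. AES-128)
guesses_per_second = 10 ** 12     # assumed attacker speed: one trillion keys/sec
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace / (guesses_per_second * seconds_per_year)
print(f"~{years:.2e} years to try every 128-bit key")
```

Even under these generous assumptions the answer comes out to more than 10^18 years, vastly longer than the age of the universe, which is why attackers go after implementations and endpoints (or lobby for back-doors) rather than the math itself.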
Please elaborate on why it's a scam
What's wrong with it? It's just a cutesy name for a dog.
I want to note that everything you talk about is happening on the scales of months to single years. That's incredibly rapid pace, and also too short of a timeframe to determine true research trends.
Usually research is considered rapid if there is meaningful progression within a few years, and more realistically about a decade or so. I mean, take something like real time ray tracing, for comparison.
When I'm talking about the future of AI, I'm thinking like 10-20 years. We simply don't know enough about what is possible to say what will happen by then.
People need to remember that chances are they are far too unimportant to use expensive stuff like anthrax on them. And if they are important enough for it, then they would already know and take precautions.
The thing with AI, is that it mostly only produces trash now.
But look back to 5 years ago, what were people saying about AI? Hell, many thought that the kind of art AI can make today would be impossible for it to create! And then it suddenly did. Well, it wasn't actually sudden, and the people in the space probably saw it coming, but still.
The point is, we keep getting better at creating AIs that do things we thought were impossible a few years ago, things we said would show true intelligence if an AI could do them. And yet, every time some new impressive AI gets developed, people say it sucks, is boring, is far from good enough, etc. While it slowly, every time, creeps closer to us, replacing a few jobs here and there at the fringes. Sure, it's not true intelligence, and it still doesn't beat the best humans, but it beats most, on demand, and what happens when inevitably better AIs get created?
Maybe we're in for another decades-long AI winter.. or maybe we're not, and plenty more AI revolutions are just around the corner. I think AI's current capabilities are frighteningly good, and not something I expected to happen this soon. The last decade or so has seen massive progress in this area, and who's to say where the current path stops?
It's also a difference of privilege.
The owl is a predator, it has less to worry about. So from its perspective, it makes sense.
But ADHD doesn't really prevent you from having hobbies or personal projects. It does make it hard to work and survive in the labour system we have today though, or at least hard to not get stressed the hell out.
If I had much more free time and less stuff demanded of me and sapping my energy, I'd happily work away at a bunch of personal projects and other stuff. And in the times when I had such freedom, I did. But now that I'm stressed and bogged down by working every day, I do little, and it sucks.
The whole "took a risk" stuff is so dumb, because in most cases what they risk is ending up with less money.. which is still more than most people earn in a lifetime.
I'd feel totally safe risking 900 million USD if I already had 1 billion USD. What's the worst that can happen, like, really?
We got vibrators and other toys for that. Imagine just using your hand when you're already a cyborg, smh smh