Yeah, I check that it's not .ml first.
Perspectivist
Someone want to explain to a muggle in plain english what this does and how it's different from simply using a VPN?
Looking back, I realize I was pretty immature at 22. It didn’t feel that way at the time, but it sure does now. These days, 18‑year‑olds look like kids to me.
I didn’t want kids back then, and I still don’t - but my perspective has shifted a little. When I see parents now, there’s a slight melancholic feeling that comes with knowing that’s something I’ll probably never experience.
So yeah, if you’re 30 and don’t want kids, that’s probably not going to change. Before that, though, there’s always a chance.
No, this is just you being mean on purpose - and a hypocrite on top of it.
I don’t get what you gain from acting like a total jerk toward complete strangers who haven’t been hostile to you in any way. There was absolutely no reason to make it personal, but that’s where you chose to take it. I hope you’re satisfied with yourself now. Sure showed me.
Upon closer inspection, the chisel end seems to have a lower angle than the knife edge. It’s a relatively thick carbon steel blade, so I’m not worried about durability, and it should be easy to sharpen. I’ll probably just keep the chisel end sharp and leave the blade itself dull.
Seems better that they live within their own community with like-minded people than within the general population. Win-win.
Maybe so, but we already have an example of a generally intelligent system that outperforms our current AI models in its cognitive capabilities while using orders of magnitude less power and memory: the human brain. That alone suggests our current brute‑force approach probably won’t be the path a true AGI takes. It’s entirely conceivable that such a system improves through optimization - getting better while using less power, at least in the beginning.
I personally think the whole concept of AGI is a mirage. In reality, a truly generally intelligent system would almost immediately be superhuman in its capabilities. Even if it were no “smarter” than a human, it could still process information at a vastly higher speed and solve in minutes what would take a team of scientists years or even decades.
And the moment it hits “human level” in coding ability, it starts improving itself - building a slightly better version, which builds an even better version, and so on. I just don’t see any plausible scenario where we create an AI that stays at human-level intelligence. It either stalls far short of that, or it blows right past it.
If AI ends up destroying us, I’d say it’s unlikely to be because it hates us or wants to destroy us per se - more likely it just treats us the way we treat ants. We don’t usually go out of our way to wipe out ant colonies, but if there’s an anthill where we’re putting up a house, we don’t think twice about bulldozing it. Even in the cartoonish “paperclip maximizer” thought experiment, the end of humanity isn’t caused by a malicious AI - it’s caused by a misaligned one.
That would by definition mean it's not superintelligent.
Superintelligence doesn’t imply ethics. It could just as easily be a completely unconscious system that’s simply very, very good at crunching data.
The problem I'm dealing with right now is kind of a first-world one: I can't ride my electric bicycle because the speed sensor is malfunctioning, and the sensor from my other bike is incompatible - it's a 4-pin one, while this bike takes a 3-pin one. I do have a third bike too, but it's not fun to ride due to the lack of electric assist.