I'm not writing code for a medical device. I'm tinkering with a mod for a game. I can't imagine how getting something wrong would do any greater harm than wasting some of my time.
FaceDeer
Argues for the importance of student essays, and then:
"When artificial intelligence is used to diagnose cancer or automate soul-crushing tasks that require vapid toiling, it makes us more human and should be celebrated."
I remember student essays as being soul-crushing vapid toiling, personally.
The author is very fixated on the notion that these essays are a vital part of human education. Is he aware that for much of human history - and even today, in many regions of the world - essay-writing like this wasn't so important? I think one neat element of AI's rise will be the revival of other teaching methods that have fallen by the wayside: Socratic dialogue, debate, personal one-on-one tutoring.
I've been teaching myself some new APIs and programming techniques recently, for example, and I'm finding it far easier to have an AI talk me through them than to grind through the documentation directly.
One of those times it's nice to be a prepper, even if only on a relatively small scale. I bought a couple of months' worth of gasoline last week.
That would require an ever-increasing amount of forested land - a carbon pyramid scheme. As soon as you stop expanding the forest's area, it settles back into an equilibrium where the carbon released by decaying trees equals the carbon captured by growing ones.
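The equilibrium point can be illustrated with a toy model (the numbers and the `forest_stock` function are entirely my own illustrative assumptions, not anything measured): new growth adds a fixed amount of carbon each year, while a fixed fraction of the standing stock decays back to the atmosphere, so the stock saturates at growth/decay no matter how long you wait.

```python
def forest_stock(growth=10.0, decay_rate=0.02, years=500):
    """Toy model: fixed annual growth, proportional annual decay.

    The stock converges to growth / decay_rate (here 10 / 0.02 = 500),
    at which point decay exactly cancels new growth.
    """
    stock = 0.0
    history = []
    for _ in range(years):
        stock += growth - decay_rate * stock
        history.append(stock)
    return history

history = forest_stock()
# Early on the forest absorbs carbon quickly, but the yearly gain
# shrinks toward zero as the stock approaches the 500-unit ceiling.
print(history[0], history[100], history[-1])
```

Once the ceiling is reached, the only way to keep absorbing carbon is to raise it by planting more area (increasing `growth`), which is exactly the pyramid-scheme dynamic: net sequestration continues only while the forest keeps expanding.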
And the rest of them just stay frozen upright forever, I suppose.
The world isn't short of water. I'd be more concerned about phosphorus and other mineral nutrients, which would get pulled out of the soil and never returned.
Frankly, I think the best approach to sequestration is to make plastic and bury it. Plastic has a much more controllable chemical structure, so you can be sure of burying only carbon that way.
You think trees don't die and fall down on their own?
And even if you did do that, where would you store the wood afterwards? You can't let it decay, that'd just put the carbon back into the atmosphere.
I'm interested to see how this turns out. My prediction is that the AI trained from the results will be insane, in the unable-to-reason-effectively sense, because we don't yet have AIs capable of rewriting all that knowledge and keeping it consistent. Each little bit of it considered in isolation will fit the criteria that Musk provides, but taken as a whole it'll be a giant mess of contradictions.
Sure, the existing corpus of knowledge doesn't all say the same thing either, but its contradictions fit into deeper, consistent patterns. An AI trained off of Reddit will learn drastically different outlooks and information from /r/conservative comments than from /r/news comments, but the fact that those are two identifiable communities means it'd see a higher-order consistency in this. If anything, that'll help it understand that there are different views in the world.
It's just predicting when the wars start, not when they end. They can overlap.
My main reasoning behind having a buffer of gasoline stored (I rotate through it, no need for stabilizer) is so that if something absolutely catastrophic happens I can make it to a relative's farm without having to stop for gas along the way. It also happens to be handy for the less-catastrophic-but-still-annoying situation we might be facing now, of a sudden fuel price spike.