I've been active in the field of AI since 2012, the beginning of the GPGPU revolution.
I feel like many, if not most, of the experts and scientists from the early stages of the GPGPU revolution and before shared a sentiment similar to what I'm stating in the title.
If asked by the public or by investors what it was all actually good for, most would respond with something along the lines of "idk, medicine or something? Probably climate change?" when actually, many were really just trying to make Data from TNG a reality, and many others were trying to be first in line for AI immortality and other transhumanist dreams. And these are the S-tier dinosaur savants of AI research I'm talking about, not just the underlings. See e.g. Kurzweil and Schmidhuber.
The moment AI went commercial, it all went to shit. I see AI companies selling dated methods with new compute to badly solve X, Y, Z, and other things that weren't even problems. I see countless people hate and criticize, and I can't even complain, because for the most part, I agree with them.
I see some people vastly overstate, and others trivialize, what it is and what it isn't. There's little in between, and of the people who pursued AI purely for its own sake, virtually none are left, save for mostly vulnerable people who've been manipulated into parasocial relationships with AI, and a handful of experts who face brutal consequences and opposition from all sides the moment they speak openly.
Call me an idiot for ideologically defending a technology that, in the long term, in 999,999 out of 1,000,000 scenarios, will surely harm us. But AI has been inevitable since the invention of the transistor, and all major post-commercialization mindsets steer us away from the one-in-a-million paths where we'd still be fine in 2100.
I was going mad about that and hoping you wouldn't notice. You noticed.
I should play that game. The second quote resonates with something I've been rambling about elsewhere regarding why humanity embraced agriculture and urbanism, where the expert discourse (necessity) contradicts the common assumption (discovery and desire).
Yes, but I think you misunderstood my edit? I meant to say that a strong enough resemblance to humanity should make it worth considering under even human-centric ethics, whichever those ethics are. AKA rationally deserving of ethical consideration.
I believe even that is of material origin. I call it "beauty," but it's really just the analogy used by complexity theorists (as in the study of complex systems) to describe what they study. Yes, that would make "beauty," in the uncommon sense in which I use the term here (the story of literally every philosophical debate and literature), not subjective. Apologies for not stating this more clearly.
Following my clarification: taking a barren planet, terraforming it, seeding it with the beginnings of new multicellular life, and doing the same with every workable world out there, I would say, is spreading or generating beauty. That's just one potential example of all the things humanity will never do, but our inevitable successor might. It might itself be a creature of great complexity (I would say such ability definitely implies it), a seemingly entropy-defying whirl in a current that actually accelerates the increase of entropy, as life itself does. I'm referencing an analogy made in The Physics of Life by PBS Spacetime, if I'm not misremembering. The video has a mild intro to complexity science, as in the study of complex systems.
I'm a bit confused myself right now. Let's backtrack. Originally you stated:
And now
That is a very fair point, but I don't see a logical contradiction anymore. If I understand correctly, you saw the contradiction in me asking ethical questions, and stating ethical opinions, while rejecting the notion of ethics. As I clarified, I do not reject the notion of ethics.
I reduce ethics to the bare bones of basic moral intuition and try to refrain from overcomplicating it. The "ethical authority" I personally add on top (see also pure reason, which failed; or God, which you can't exactly refute; or utility, which is a shitshow; as other ultimate "authorities" proposed in absolute takes on ethics) is the aforementioned concept of "beauty." You may disagree with it being a reasonable basis for ethics, as you do, and you may say it's all philosophically equivalent to faith anyway. But I don't see a strict contradiction?
I think my "ethics" are largely compatible with common human ethics. Add "making ugly/boring/banal things is inherently bad" and "making pretty/interesting/complex things is good," and you get: "Current AI is ugly, that's bad, I wish it weren't so. If we made AI 'for its own sake,' as opposed to as a means to an end, we would be trying to make it pretty, since I see the existence of beauty as an end in itself." I think I'm just vastly overwording the basic sentiment of many designers, creators, gardeners, etc.
Understandable. I should do the same ^^