The main issue with punishment and reward, in the sense you mean them, is that their results depend entirely on the criteria by which you are punished or rewarded. Say the law declares being gay illegal and sets the punishment at execution; does that make it immoral?
Being moral boils down to making certain decisions; the method by which they are reached is irrelevant if the decisions are "correct". Most moral philosophies agree that moral decisions can be made by applying rational reasoning to some basic principles (e.g. the categorical imperative). We reason through language, and these models capture and simulate that. The question is not whether AI can make moral decisions, it's whether it can be better at it than humans, and I believe it can.
I watched the video, and honestly I don't find anything too surprising in it. ChatGPT acknowledges that there are multiple moral traditions (as it should) and that which decision is right for you depends on which tradition you subscribe to. It avoids making clear choices because it is designed that way for legal reasons. When there is a consensus in moral philosophy about the morality of a decision, it doesn't hesitate to express it. The conclusions it reaches aren't inconsistent, because it always states clearly that they follow from a particular line of moral reasoning. Morality isn't objective, and taking a conclusive stance on an issue based on one moral framework (which humans like to do) isn't superior to taking an inconclusive one based on many. Really, this is one of our greatest weaknesses: not being able to admit we aren't always entirely sure about things. If ChatGPT were designed to make conclusive moral decisions, it would likely take the majority stance on any issue, which is about as universally moral as you can get.
The idea that AI could be immoral because it holds the stances of its developers doesn't hold up, because it doesn't hold them. It is trained on a vast corpus of text, which captures popular views, not the views of the developers.
Holding someone accountable doesn't undo their mistakes; once a decision is made, there is often nothing you can do about it. Humans make bad decisions too, whether unknowingly or intentionally. Clearly, accountability isn't some magic catch-all.
I find the idea that punishment and reward are prerequisites of morality rather pessimistic. Do you believe people are entirely incapable of acting morally in the absence of external motivation?
Either way, AI does essentially function on the principle of punishment and reward; you could even say it has been pre-punished and pre-rewarded across millions of iterations during its training.
AI simply has clear advantages in decision making. Without self-interest it can make truly selfless decisions, it is far less prone to biases, and it takes much more information into account.
Try throwing some moral, even political, questions at LLMs; you will find they do surprisingly well, and these are models that aren't even optimized for decision making.
Work on your own projects and it will come naturally; it's the best way to thoroughly learn a language (probably JS in your case). Try to really understand the basics (like OOP): that knowledge will both translate to other languages and help you learn frameworks and libraries. Instead of relying solely on tutorials, try reading the documentation; it will give you a more thorough understanding (if it's good). Also, Stack Overflow isn't cheating, you can't always remember everything. Trust me, you are already way ahead of others if you plan to take CS.
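If it helps, here's a minimal sketch of the kind of OOP basics I mean, in plain JavaScript (the class names are just made up for illustration); the same ideas of inheritance and polymorphism carry over to most other languages.

```js
// A base class whose method is meant to be overridden by subclasses.
class Shape {
  constructor(name) {
    this.name = name;
  }
  area() {
    throw new Error("area() must be implemented by a subclass");
  }
  describe() {
    return `${this.name} with area ${this.area().toFixed(2)}`;
  }
}

// Inheritance: Circle reuses Shape's behaviour and supplies its own area().
class Circle extends Shape {
  constructor(radius) {
    super("circle");
    this.radius = radius;
  }
  area() {
    return Math.PI * this.radius ** 2;
  }
}

// Polymorphism: the same call works for any Shape subclass.
const shapes = [new Circle(1), new Circle(2)];
shapes.forEach((s) => console.log(s.describe()));
```

Once these concepts click, picking up a new framework is mostly about learning its specific API, not relearning how to structure code.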
I wouldn't say Marxism is incompatible with dualism, for example. Yes, Marx focuses heavily on the material struggle, but interpreting the theory in a dualist sense doesn't really change its implications. Wealth matters because of the way it makes us feel and the experiences it enables, not because of some inherent value. If being poor didn't feel bad, nobody would have a problem with it.
The fact that this man is a father is sad
Next step is baking your own!
The cat carrier stays on during sex
Yes, I agree it seems scary, but all it really means is that morality is not universal but specific to humans. You could say everything is inherently morally permissible in the sense that there is no higher power which will punish you for your actions, so essentially there is nothing preventing you from committing them. In short, the universe doesn't give a shit what you do.
Still, your actions do have consequences, and you are inevitably forced to live with them (pretty much Sartre's viewpoint). Because of this, doing things you believe are wrong is often bad for you: it causes emotional pain in the form of guilt and regret, and usually carries negative social repercussions that outweigh whatever the immoral act was worth in the first place. You could say that people are naturally compelled to act in certain ways for completely selfish reasons. In this sense, I prefer to look at morality more as a "deal" between the members of a society to act in a certain mutually beneficial way (fueled by our instincts, a product of evolution) than as something universal and objective.
The reason I doubt our current understanding of consciousness is that the distinction it draws between what is conscious and what isn't seems quite arbitrary and problematic. At which point does an embryo become conscious, and how can something conscious be created from something unconscious? The simplest explanation I can imagine is that consciousness is present everywhere and can be neither created nor destroyed. This view (called panpsychism) is absolutely ancient, but it seems to be gaining some recognition again, even among neuroscientists.
As you mentioned, "cogito, ergo sum" might be the only real objective truth that philosophy has uncovered so far. I am an optimist in that I believe surely more than one such truth must exist. If it was only discovered 400 years ago, surely there is more to be found. Maybe it is possible to collect some of these small fragments and build some larger philosophical theory from them, one that will be grounded in fact and built up using logic. I guess only time will tell.
And yes, of course some abstraction is beneficial in order to make sense of the world, even if it isn't completely correct or objective.
The issue I see with these theories is that the idea of inherent value they all arrive at is quite abstract. What does it even mean for something to have inherent value, and why is it wrong to destroy it?
Another problem is that we talk about destroying life without even fully understanding it in the first place. What if life (in the sense of consciousness) is indestructible?
The way I see it, people accept that life has some inherent value because our self-preservation instinct tells us that we don't want to die, and empathy allows us to extend that instinct to other living beings. Both are easily explained as products of evolution: not rational or objective, but simply evolutionarily favourable. All these theories are attempts to rationally explain this feeling, but they all inevitably fail, because (in my opinion) they're trying to prove something that simply isn't objectively true.
Anyway, I feel like even if you fully accepted any individual theory that seems to confirm our current understanding of morality, you would end up with conclusions that completely conflict with that understanding. In the case of utilitarianism, for example, you could easily conclude that not donating most of your money to charity is immoral, since donating it would be the course of action that results in the largest total amount of pleasure.
You can make ethical arguments based in reason.
Come on, I'd love to hear some. Also, the offer still stands if you can give me a rational argument for why killing is wrong.
Why is killing people wrong, but okay in war? Why do we still kill animals even though we know it's wrong? Why is killing wrong in the first place? I bet you can't find a single rational reason. That's because ethics isn't based on reason but on emotion. Given that, I don't find it very surprising that it's often quite hypocritical.