Become more involved. Do you know your mayor?
If you are not helping pick the candidates on the ballot, then you just need to pick the lesser evil. If you want to do more than that, then be part of the decision of WHO ends up on the ballot, because that process has already started.
There's a significant hurdle to running for even a mayoral office: it doesn't pay well relative to a corporate job, and it doesn't offer the same job security. People can only run for decision-making positions when they already have enough wealth to be comfortable without a "real job." Help find and support people with your values, and we can take this back.
If you are mad at your options, the solution is not to give up, but to make better options.
Sadly, I don't have enough money to turn this shit-hose off.
Gen AI is neat, and I use it personally for code, image gen, and LLM chat; but it is sooooo faaaar awaaaay from being a real game changer (while all the people poised to profit off it claim it is) that it's just insane to call it the next wave. Evidence: all the creative people (photo/art/code/etc.) who are adamantly against it and have laid out their reasoning.
There's another story on my feed about a 10-year-old refactoring a code base with an LLM. Go look at the comments from actual experts, who take into account things like unit tests, readability, maintainability, and security. Humans have more context than any AI ever will.
LLMs are not intelligent. They are patently not. They make shit up constantly, because making shit up (predicting plausible-sounding text) is exactly what they do. Sometimes, maybe even most of the time, the shit they make up is mostly accurate... but do you want to rely on that?
When a doctor prescribes you the wrong drug, you can sue them as a recourse. When a software company has a data breach, there is often a class action (better than nothing) as a recourse. When an AI tells you to put glue on your pizza to hold the toppings, there is no recourse, since the AI is not a legal person and the company disclaims all liability for its output. When an AI denies your health insurance claim for inscrutable reasons, there is no recourse.
In the first two, there is a penalty for being wrong, which is in effect an incentive to be correct -- to be accurate, to be responsible.
In the last two, with an AI llm/agent/fuckingbuzzword, there is no penalty and no incentive. The AI is only as good as its input, and half the world is fucking stupid, so if we average out all the world's input, we get "barely getting by" as a result. A coding AI is at least partially trained on random Stack Overflow posts asking for help. The original code in those posts is wrong by definition!
Sadly, it's not going anywhere. But people who rely on it will find short-term success at the cost of long-term failure, and a society relying on it is doomed. AI depends on the creative works that already exist. If we stop making new things, AI will stagnate and die. Where will we be then?
There are places AI/LLM/machine-learning can be used successfully and helpfully, but they are niche. The AI bros need to figure out how to quickly meet a specific need instead of trying to meet all needs at the same time. Think early-2000s Folding@home, how to convince Republicans to wear a fucking mask during COVID, why we shouldn't just eat the billionaires*.
*Hermes-3 says cannibalism is "barbaric" in most cultures, but otherwise doesn't offer convincing arguments.