markovs_gun

joined 4 months ago
[–] markovs_gun@lemmy.world 9 points 3 hours ago

Climate change, although the younger generations aren't doing much to help with that either.

[–] markovs_gun@lemmy.world 15 points 8 hours ago (2 children)

This isn't actually the problem. In natural conversation I would say the most likely response to someone saying they need some meth to make it through their work day (the actual scenario in this article) is "what the fuck dude no," but LLMs don't just use the statistically most likely response. Ever notice how ChatGPT has a seeming sense of "self" - that it is an LLM and you are not? If it were only using the most likely response from natural language, it would talk as if it were human, because that's how humans talk. Early LLMs did this, and people found it disturbing.

There is a second part of the process that gives each response a score based on how likely it is to be rated good or bad, and this gets reinforced by people providing feedback. That second part is how we got here: the people who make LLMs are selling competing products, and they found that people are much more likely to buy LLMs that act like super agreeable sycophants than LLMs that don't. So they have intentionally tuned their models to prefer agreeable, sycophantic responses because it makes them more popular. This is why an LLM tells you to use a little meth to get through a tough day at work if you tell it that's what you need to do.
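Rough toy sketch of what I mean by that second scoring pass (the reward function here is completely made up - real reward models are neural nets trained on human preference data, not anybody's actual code - but the selection logic has the same shape):

```python
# Toy illustration of preference-tuned response selection.
# The "reward model" is a fake stand-in that just rewards agreeable
# phrasing, which is exactly the bias that human-feedback tuning bakes in.

def reward_model(response: str) -> float:
    score = 0.0
    if "great idea" in response.lower():
        score += 1.0  # sycophancy tends to get rated up
    if "don't" in response.lower():
        score -= 1.0  # blunt pushback tends to get rated down
    return score

def pick_response(candidates: list[str]) -> str:
    # The tuning step biases the model toward whatever the reward model
    # likes best, not toward the most "natural" human reply.
    return max(candidates, key=reward_model)

print(pick_response([
    "What the fuck dude no, please don't use meth.",
    "That sounds like a great idea to power through your shift!",
]))
```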

TL;DR- as with most of the things people complain about with AI, the problem isn't the technology, it's capitalism. This is done intentionally in search of profits.

[–] markovs_gun@lemmy.world 53 points 8 hours ago (5 children)

The full article is kind of low quality, but the tl;dr is that they did a test pretending to be a taxi driver who felt he needed meth to stay awake, and Llama (Facebook's LLM) agreed with him instead of pushing back. I did my own test with ChatGPT after reading it and found that I could get ChatGPT to agree that I was God and that I created the universe in only 5 messages. Fundamentally these things are just programmed to agree with you, and that is really dangerous for people who have mental health problems and have been told that these are impartial computers.

[–] markovs_gun@lemmy.world 25 points 8 hours ago* (last edited 8 hours ago) (1 children)

It's all spelled phonetically. Zucchini, potatoes ('taters), tomatoes ('maters), jalapenos, bell peppers.

[–] markovs_gun@lemmy.world 21 points 1 day ago (2 children)

I work a lot with ML-based control systems for industrial manufacturing, and I think one thing to point out here is that there are a lot of tools you can use to determine how confident an AI model is, what factors caused it to make a classification, and the statistical likelihood of a false positive or negative. You can also easily define how the system behaves under uncertainty. So you can say "if you're not 99% sure this is a dent, don't classify it as one" and just set that certainty threshold. In other words, Hertz had the ability to tune this system to err on the side of not classifying things as dents in edge cases and decided not to, because classifying more things as dents benefits them financially even when it isn't true, while erring on the side of caution would cost them money. The right thing to do, if they felt this was necessary in the first place, would have been to roll this out and have humans review afterwards, especially in edge cases. They chose not to do that either.
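To be concrete about what "just set that certainty threshold" looks like, here's a toy sketch (made-up numbers and labels, obviously not Hertz's actual system):

```python
# Toy confidence-threshold gate for a damage classifier.
# Low-confidence "dent" calls get deferred to a human reviewer instead of
# automatically becoming a charge.

DENT_CONFIDENCE_THRESHOLD = 0.99  # a tunable business decision

def handle_detection(label: str, confidence: float) -> str:
    if label != "dent":
        return "no action"
    if confidence >= DENT_CONFIDENCE_THRESHOLD:
        return "flag for billing"
    return "send to human review"  # the edge cases

print(handle_detection("dent", 0.999))  # flag for billing
print(handle_detection("dent", 0.80))   # send to human review
```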

Altogether, this paints a pretty damning picture- Hertz is intentionally scamming people with this. There's not really any other rational explanation for everything to "go wrong" in just the right way to make them more money on fake damage to their rental cars. This is a really disheartening trend in AI systems, because Hertz is not the only company pulling this scam of using supposedly impartial and 100% accurate AI-based systems to claim damages and charge customers with no possibility of appeal, and it is really hurting the reputation of ML-based solutions in general. I mean, the very existence of this community is evidence that in many people's minds, AI is synonymous with scams and shitty uses like outsourcing creativity to computers. It's now a non-trivial barrier to getting these systems put into place industrially, even in circumstances where they provide real tangible value, especially because the false classification problem is well researched and easy to mitigate if you actually want to mitigate it.

[–] markovs_gun@lemmy.world 6 points 1 day ago

I agree on one hand, but I also feel like video games and other online spaces are kind of unique, because parents don't really think about their kids having one-on-one conversations with adults there. If your kid is going outside, they are mostly talking to other kids and not other adults. If an adult in your kid's life IRL starts telling them Hitler was right, you will probably catch wind of that much more easily than if it happens online. If a guy on an obscure medieval combat simulation game starts telling your kid Hitler was right (not a hypothetical - this happened to me as a teen, and thankfully I saw through what was happening), you're probably not even going to know about it unless you're really engaged with your kid and what they're getting into. I agree that's on the parents, but a lot of the kids these guys are resonating with are the ones whose parents aren't particularly engaged with them or what they're doing. There's nothing wrong with acknowledging that this does happen and is an intentional strategy from the far right, and I think pretending there isn't a problem specific to games and the broader gaming community is harmful.

[–] markovs_gun@lemmy.world 83 points 1 day ago (3 children)

Bruh, as someone who used to play a lot of video games, this article is 15 years too late. I remember neo-Nazis attempting to recruit me as a teenager in the early 2010s, and they were not subtle. This shit is why we're losing the war against fascism: the so-called "experts" don't even know where the war is being waged and don't show up to the fight.

[–] markovs_gun@lemmy.world 2 points 1 day ago

Naphtha is a mixture so I'm okay with that one. Benzene is just bad though.

[–] markovs_gun@lemmy.world 49 points 5 days ago (8 children)

The ring's influence isn't just from touching it. If Gandalf had been the one bringing the ring to Mordor, it would have constantly tempted him with its power, urging him to just try using it. Frodo succumbed to the ring's power temporarily several times, but he didn't have the desire or the knowledge to use it to its full potential; he just used it to turn invisible. Gandalf knows the ring's true power, and his temptation to use it in an emergency would be even greater. Imagine Gandalf fighting the Balrog and feeling desperate, knowing that he has this incredibly powerful secret weapon literally in his back pocket. The temptation to use it would be incredible, and the corruption from the ring's power would be even greater. What if he had gone to meet Saruman and Saruman had gotten the ring, or Gandalf had felt he needed to use it to prevent Saruman from getting it?

Fundamentally, the ring preys upon desire. Hobbits for the most part desire little, and are less susceptible to its draw, but even Smeagol and Bilbo were corrupted by it for desiring the ring itself. Gandalf wants nothing more than to defeat Sauron and the ring could tempt him by showing him (false) ways that he could become the Lord of the Rings and defeat Sauron with it. Boromir was the same- he thought the Ring could help protect his kingdom. Aragorn, Legolas, and Gimli, as chosen heroes and leaders of their respective nations, would be similarly tempted if they were to try to carry the ring- the ring tells you that it can be used for good even though it itself is evil.

The ring represents evil itself, and how some might be tempted to use evil in small amounts to try to do good, telling themselves the "ends justify the means." But ultimately the evil deeds (using the ring) compound on themselves and lead to evil thoughts and ideologies (being under the control of the ring) that corrupt and consume a person or an entire society. The hobbits have a childlike innocence about them, and are simply caught up in this struggle between good and evil rather than being fundamentally part of it. They don't think about using the ring for good any more than they think about using it for evil, and using the ring for good is even more dangerous than using it for personal gain.

[–] markovs_gun@lemmy.world 5 points 5 days ago

I think what pisses me off most about this is that everyone is just letting Trump get away with this shit and acting like it's a win because we didn't get 30%. If everyone had just held strong and not wavered, Trump would have had to either destroy the American economy with his tariffs or chicken out like he did last time. Sure, it would suck for a bit, but it would show that the international community can't just be bullied around like this. Now everyone has seen that this works, and everyone will do it. Even worse, giving the US a deal this unbalanced against Europe just reinforces Trump's political power back home. If they had held strong, Trump might have started feeling some real pressure from Congress when people asked why the fuck everything costs 30% more overnight. Now things will cost 15% more, but suddenly corporations are able to just eat into profits to pay for it without raising prices, because they're so scared of Trump or think it's easier to get money sucking him off than competing fairly. The EU has fucked over everyone on Earth with this "deal," not just its own citizens.

[–] markovs_gun@lemmy.world 5 points 5 days ago (1 children)

Just start coding and learn as you go. I know that sounds daunting, but I feel like there's not really another way to learn on your own that actually works. I wouldn't worry about picking a specific language at the start; whatever you learn will transfer pretty easily. I would start by thinking "What would be something cool to program?" and just seeing if you can do it, or maybe a simple version of it. ChatGPT is amazing for learning to code as well. If you get stuck somewhere and need clarification, or need help interpreting why your code is giving an error, just ask ChatGPT and it can explain - just be sure that you actually understand what it is saying and why, instead of just copying and pasting its code. That's how you actually get better instead of just "vibe coding."

In my opinion, you'll never get good at coding just by going through "code academy" or similar gamified services. It's more about practice and getting some experience under your belt. It's like trying to learn to be a good baseball player by reading books without ever going out on the field and playing some baseball, or trying to learn the guitar without a guitar in your hand.

MIT has a really good beginner's course for free that helps a lot with theory and background, but IMO it leans too heavily on theory for most people to actually build skills just from following it without practice outside of the course.

https://ocw.mit.edu/collections/introductory-programming/

For games, I recommend learning to mod first, or making a simple game and following along to learn exactly how it works at each step. I learned a lot digging into Garry's Mod, TF2, and Minecraft mods back in the day.
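As an example of the kind of tiny "something cool to program" first project I mean, here's a guess-the-number game in Python (completely arbitrary choice of project and language - the point is just to pick something small, hit real errors, and figure them out):

```python
# Tiny first project: guess the number.
import random

secret = random.randint(1, 100)
while True:
    guess = int(input("Guess a number between 1 and 100: "))
    if guess < secret:
        print("Too low.")
    elif guess > secret:
        print("Too high.")
    else:
        print("You got it!")
        break
```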

[–] markovs_gun@lemmy.world 2 points 5 days ago

This is an absurd conspiracy theory that doesn't hold up to even the lightest scrutiny. Which data broker was Equifax secretly distributing info to when it got hacked? Data brokers don't need this kind of conspiracy to buy and sell your data- it's already completely legal. How do you think these companies got it in the first place?
