AI is trained on the Internet. Look at the bullshit on the Internet. AI will take some random schmoe's bullshit opinion and present it as hard fact.
That, and it just re-introduced the problem of being able to see search results without visiting any of the resultant websites. The last time, sites ended up burying answers down the page to keep them from showing up in search previews. Making everything shittier. What kind of response is there going to be to AI summaries? Everything will undoubtedly get even shittier as sites try to get people to visit and not just read the AI summary. Hello, even more obfuscation. We're taking the greatest method of spreading information around the globe ever devised and just absolutely filling it to the brim with bullshit.
This is only the beginning. Soon there will be LLMs trained on other LLMs' garbage. And those LLMs will also post and write crap on the Internet. The true pinnacle of shite posting.
Oh yeah it does. I see Google summarizing; it's just a mash-up of different blog posts presented as truth, and that isn't a source. It's basically asking the LLM for its opinions.
I hate the fact that, thanks to ChatGPT, every twerp out there thinks em-dashes are an automatic sign of something being written by AI...
As a writer and an em-dash enjoyer, hell with that!
I was never pedantic enough to use a real em dash instead of just a regular dash.
The technology is way too resource-intensive for the benefit it gives. By resource, I mean environmental and technological. Have you seen the prices of DDR5 RAM? Microsoft is actually working to bring TMI 1 back online. TMI = Three Mile Island, as in a full-sized nuclear reactor that was retired from service in 2019. The only reason they are not bringing TMI 2 back online is because IT F#$KING MELTED DOWN IN 1979.
Add to that that Micron exited the consumer market to supply memory to the AI market only... What the actual F#$k?
Now the bubble has formed, and the people who shoved tens of billions into it are trying to fill that bubble by any means necessary. Which means the entire population of this country is constantly bombarded with it for purposes it is ill-suited to.
When, not if, this bubble pops, it's going to be a wild ride.
At some point, we should legislate that all non-production tech businesses have to be energy positive, as in: wanna build a data center? It's got to have more solar/wind etc. than it uses, or it's unpermittable.
I've seen it successfully perform exactly one task without causing more harm or creating liability for the people using it:
Misinformation campaigns.
And that's exactly how the AI companies are using it to grow exponentially, lying about both its costs and its capabilities.
It's weird that this is somehow an unpopular opinion these days, but I don't like being lied to.
I've been hearing the claim occasionally for the last several years now that we've moved into the 'post-truth' age. AI has kind of cemented that for me.
I don't entirely hate AI.
I like AI when I want to personally use it for art ideas, because I'm not an artist and, honestly, paying someone to draw for you can be expensive since it's a luxury. I just don't run anything like a DA account to show it off.
I don't like AI when it's used to lazily write scripts for movies/shows, to draft essays, or as a shortcut for someone's work.
I like AI when it can be used as a companion tool.
I don't like AI when it tries being a therapist for serious mental issues.
I don't like AI when it is used as a poor excuse of troubleshooting.
I don't like AI for the damage it is causing to the market of PC memory.
I don't like AI being shoved down my throat, or when the companies you'd least expect to use it suddenly start using it.
So with a score of 2-5, if AI were just erased right now, I wouldn't really miss it. But I don't entirely hate it. It is a completely misused and abused kind of tool that's shoved into everyone's lives and marketed as a catch-all solution to nearly all problems, when there is a mountain of evidence and recorded studies saying otherwise.
I don't hate AI. I hate it being forced down everyone's throat, and I don't trust the companies running it to keep the data they collect safe and private.
I don't hate AI. I just hate the untrustworthy rich fucks who are forcing it down everyone's throats.
I hate AI because it's replacing jobs (a.k.a. salaries) without us having a social safety net to make it painless.
We've replaced you with ai
-CEO
AI is replacing most of the jobs, and there aren't enough open positions to be filled by the now unemployed.
-Economists
I need food stamps, medical care, housing assistance, and unemployment.
-Me
No! Get a job you lazy welfare queen!
-Politicians
Where? There aren't any.
-Me
Not my problem! Now, excuse me while I funnel more money to my donors.
-The same politicians
The good news is that, while automation like robot arms continues to replace humans, the AI side of it has been catastrophic, and early adopters are often seen expressing remorse and reverting their changes.
Fuck Reddit and Fuck Spez.
I don't hate it. I hate how companies are forcing it in regardless of how stupid it is for the task.
what the fuck is this stereotype
It's probably from a redditor who is probably white and male. Y'know, self-deprecating humor is pretty common among redditors, just like it is here.
I don't hate AI (specifically LLMs and image diffusion thingy) as a technology. I don't hate people who use AI (most of the time).
I do hate almost every part of AI business, though. Most of the AI stuff is hyped by the most useless "luminaries" of the tech sector who know a good profitable grift when they see one. They have zero regard for the legal and social and environmental implications of their work. They don't give a damn about the problems they are causing.
And that's the great tragedy, really: It's a whole lot of interesting technology with a lot of great potential applications. And the industry is getting run into the ground by idiots while chasing an economic bubble that's going to end disastrously. It's going to end up with a tech cycle kind of similar to nuclear power: a few prominent disasters, a whole lot of public resentment and backlash, and it'll take decades until we can start having sensible conversations about it again. If only we'd had a little bit of moderation to begin with!
The only upside the AI business has had is that at least it has pretended to give a damn about open source and open access to data, but at this point it's painfully obvious that to AI companies this is just a smokescreen to avoid getting sued over copyright concerns - they'd lock everything up as proprietary trade secrets if they could have their way.
As a software developer, I was at first super excited about the genAI stuff because it obviously cut down the time needed to consult references. Now, a lot of tech bosses tell coders to use AI tools even in cases where it's making everyone less productive.
As an artist and a writer, I find it incredibly sad that genAI didn't hit the brakes a few years ago. I've been saying this for decades: I love a good computerised bullshit generator. Algorithmically generated nonsense is interesting. Great source of inspiration for your ossified brain cells, fertile ground for improvement. Now, however, the AI-generated stuff pretends to be as human-like as possible, and it's doing a terrible job at it. Tech bros are half-assedly marketing it as a "tool" for artists, while the studio bosses who buy the tech chuckle at that and know they've found a replacement for the artists. (Want to make genAI tools for artists? Keep the output patently unusable out of the box.)
The value in LLMs is in the training and the data quality... so it is easy to publish the code and charge for access to the data (DaaS).
I'm hopeful that when the bubble pops it'll be more like the dot-com crash, which is to say that the fallout will be mostly of the economic variety rather than the Superfund variety. Sure, that'll still suck in the short term. But it will ideally lead to the big players and VC firms backing away and leaving behind an oversupply of infrastructure and talent that can be soaked up at fire-sale prices by the smaller, more responsible companies that are willing to stick out the downturn and do the unglamorous work of developing this technology into something that's actually sustainable and beneficial to society.
That's my naive hope. I do recognize that there's an unfortunately high probability that things won't go that way.
Sounds like something someone pretending to be white would say.
I don't hate AI. However, I:
- Am concerned about the labor displacement it may cause, though I am skeptical it will be as widespread as currently feared. I think many of the companies that have cut workers already will end up regretting it in the medium term.
- Am convinced that the massive, circular investment in this technology has produced an economic bubble that will burst in the coming years. Because we have so little insight into private credit markets, we don't know to what degree retail and commercial banks will be exposed, and thus can't anticipate the potential damage to the broader economy.
- Am fatigued (but unsurprised) that the US government is not considering thoughtful regulation that anticipates the disruption that AI is likely to cause.
- Am cognizant of its current limitations.
- Do not currently believe that AGI is imminent or even guaranteed. I think elites peddling this notion may be captured by highly motivated reasoning. In some cases, it seems like a bit of a belief system.
Anything the billionaire cabal pushes on us I automatically hate. Don't even need to know what it is. If they are pushing it you know there is some nefarious shit under the hood.
I'm still waiting for it to appear, and then let's ask them how they like it. It's not like the garbage we have now is really AI.
AI only looks good if you're an outsider to the profession. The moment you're even an amateur, you'll see all of its faults. It's just a plagiarizing machine with a built-in contextual search function (is there any AI model that runs as an actual contextual search instead of a wannabe assistant with a flattering personality?) that can make some crappy-looking and weirdly specific clip art, stock music with funny-sounding gimmicks, and buggy code you'd be better off plagiarizing from public-domain-licensed code on GitHub.
No one has convinced me how it is good for the general public. It seems like it will benefit corpos and governments, to the detriment of the general public.
It's just another thing they'll use to fuck over the average person.