Games

Welcome to the largest gaming community on Lemmy! Discussion for all kinds of games. Video games, tabletop games, card games etc.
Rules
1. Submissions have to be related to games
Video games, tabletop, or otherwise. Posts not related to games will be deleted.
This community is focused on games, of all kinds. Any news item or discussion should be related to gaming in some way.
2. No bigotry or harassment, be civil
No bigotry, hardline stance. Try not to get too heated when entering into a discussion or debate.
We are here to talk and discuss about one of our passions, not fight or be exposed to hate. Posts or responses that are hateful will be deleted to keep the atmosphere good. If repeatedly violated, not only will the comment be deleted but a ban will be handed out as well. We judge each case individually.
3. No excessive self-promotion
Try to keep it to 10% self-promotion / 90% other stuff in your post history.
This is to prevent people from posting for the sole purpose of promoting their own website or social media account.
4. Stay on-topic; no memes, funny videos, giveaways, reposts, or low-effort posts
This community is mostly for discussion and news. Remember to search for the thing you're submitting before posting to see if it's already been posted.
We want to keep the quality of posts high. Therefore, memes, funny videos, low-effort posts and reposts are not allowed. We prohibit giveaways because we cannot be sure that the person holding the giveaway will actually do what they promise.
5. Mark Spoilers and NSFW
Make sure to mark your stuff or it may be removed.
No one wants to be spoiled. Therefore, always mark spoilers. Similarly mark NSFW, in case anyone is browsing in a public space or at work.
6. No linking to piracy
Don't share it here, there are other places to find it. Discussion of piracy is fine.
We don't want us moderators or the admins of lemmy.world to get in trouble for linking to piracy. Therefore, any link to piracy will be removed. Discussion of it is of course allowed.
Authorized Regular Threads
Related communities
PM a mod to add your own
Video games
Generic
- !gaming@Lemmy.world: Our sister community, focused on PC and console gaming. Memes are allowed.
- !photomode@feddit.uk: For all your screenshot needs, to share your love of game graphics.
- !vgmusic@lemmy.world: A community to share your love of video game music.
Help and suggestions
By platform
By type
- !AutomationGames@lemmy.zip
- !Incremental_Games@incremental.social
- !LifeSimulation@lemmy.world
- !CityBuilders@sh.itjust.works
- !CozyGames@Lemmy.world
- !CRPG@lemmy.world
- !OtomeGames@ani.social
- !Shmups@lemmus.org
- !VisualNovels@ani.social
By games
- !Baldurs_Gate_3@lemmy.world
- !Cities_Skylines@lemmy.world
- !CassetteBeasts@Lemmy.world
- !Fallout@lemmy.world
- !FinalFantasyXIV@lemmy.world
- !Minecraft@Lemmy.world
- !NoMansSky@lemmy.world
- !Palia@Lemmy.world
- !Pokemon@lemm.ee
- !Skyrim@lemmy.world
- !StardewValley@lemm.ee
- !Subnautica2@Lemmy.world
- !WorkersAndResources@lemmy.world
Language specific
- !JeuxVideo@jlai.lu: French
There are AIs that are ethically trained. There are AIs that run on local hardware. We'll eventually need AI ratings to distinguish use types, I suppose.
Can you please share examples and criteria?
https://www.swiss-ai.org/apertus
Fully open source, even the training data is provided for download. That being said, this is the only one I know of.
Thanks, a friend did recommend it a few days ago, but unfortunately AFAICT they don't provide the CO2eq in their model card, nor an equivalent analogy that non-technical users could understand.
Sure. My company has a database of all technical papers written by employees in the last 30-ish years. Nearly all of these contain proprietary information from other companies (we deal with tons of other companies and have access to their data), so we can't build a public LLM nor use a public LLM. So we created an internal-only LLM that is only trained on our data.
I'd bet my lunch this internal LLM is a trained open weight model, which has lots of public data in it. Not complaining about what your company has done, as I think that makes sense, just providing a counterpoint.
Are you using solely your own data, or are you refining an existing LLM, or rather using RAG?
I'm not an expert, but AFAIK training an LLM requires, by definition, a vast amount of text, so I'm skeptical that ANY company publishes enough papers to do so. I understand if you can't share more about the process. Maybe me saying "AI" was too broad.
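For what it's worth, "training from scratch" isn't the only option here: a common pattern is RAG, where the model itself isn't retrained at all and the internal papers are only retrieved and pasted into the prompt. Here is a minimal sketch of that idea, assuming the sentence-transformers package and a small public embedding model; the paper snippets and function names are made up for illustration, not how the company above actually does it.

```python
# Hypothetical RAG sketch: embed internal papers, retrieve the most relevant ones,
# and paste them into the prompt of whatever LLM you already run (local or hosted).
# No retraining on a "vast amount of text" is required.
from sentence_transformers import SentenceTransformer
import numpy as np

papers = [
    "Paper A: fatigue analysis of turbine blades ...",   # stand-ins for proprietary docs
    "Paper B: corrosion testing of pipeline welds ...",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")        # small model that runs locally
doc_vecs = embedder.encode(papers, normalize_embeddings=True)

def build_prompt(question: str, top_k: int = 2) -> str:
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vecs @ q_vec                     # cosine similarity (vectors are normalized)
    best = np.argsort(scores)[::-1][:top_k]       # indices of the most relevant papers
    context = "\n\n".join(papers[i] for i in best)
    # The returned prompt is sent to any chat model; the model itself never has to be
    # fine-tuned on the documents.
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

Fine-tuning an existing open-weight model on the papers is the other common route, which would match the "refining an existing LLM" guess above.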
Completely from scratch?
It can use public domain or freely licensed data
Right, and to be clear I'm not saying it's not possible (in fact I have some models in mind, but I'd rather let others share first). This isn't a trick question, it's a genuine request to hopefully be able to rely on such tools.
Adobe's image generator (Firefly) is trained only on images from Adobe Stock.
Does it only use that, or does it also use an LLM too?
The Firefly image generator is a diffusion model, and the Firefly video generator is a diffusion transformer. LLMs aren't involved in either process - rather the models learn image-text relationships from meta tags. I believe there are some ChatGPT integrations with Reader and Acrobat, but that's unrelated to Firefly.
Surprising, I would expect it'd rely at some point on something like CLIP in order to be prompted.
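For what it's worth, that is how the open diffusion pipelines are wired: the text encoder is a CLIP model, not a generative LLM. A hedged sketch using Hugging Face's diffusers library and a public Stable Diffusion checkpoint as a stand-in (Firefly's internals aren't public, so this only illustrates the general architecture):

```python
# Illustration with an open model, not Firefly itself: the prompt is handled by a
# CLIP text encoder whose embeddings condition the denoising steps of the diffusion model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",       # example public checkpoint
    torch_dtype=torch.float16,
).to("cuda")                               # assumes a CUDA GPU; drop fp16/.to() for CPU

print(type(pipe.text_encoder).__name__)    # CLIPTextModel -> a text encoder, not an LLM

image = pipe("a cozy cabin in a snowy forest, watercolor").images[0]
image.save("cabin.png")
```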
It's even more complicated than that: "AI" is not even a well-defined term. Back when Quake 3 was still in beta ("the demo"), id Software held a competition to develop "bot AIs" that could be added to a server so players would have something to play against while they waited for more people to join (or you could have players VS bots style matches).
That was over 25 years ago. What kind of "AI" do you think was used back then? 🤣
The AI hater extremists seem to be in two camps:
The data center haters are the strangest, to me. Because there's this default assumption that data centers can never be powered by renewable energy and that AI will never improve to the point where it can all be run locally on people's PCs (and other, personal hardware).
Yet every day there's news suggesting that local AI is performing better and better. It seems inevitable—to me—that "big AI" will go the same route as mainframes.
Opportunity costs
Power source is only one impact. Water for cooling is even bigger. There are data centers pumping out huge amounts of heat in places like AZ, TX, CA where water is scarce and temps are high.
Is the water "consumed" when used for this purpose? I don't know how data centers do it, but it doesn't seem like they would need to constantly draw water from a local system. They could even source it from elsewhere if necessary.
https://thecurrentga.org/2025/08/26/data-centers-consume-massive-amounts-of-water-companies-rarely-tell-the-public-exactly-how-much/
Some use up the water through evaporation, so they constantly draw more. Others don't "consume" it, meaning they have a closed loop of cooling water, but that uses a lot more electricity than evaporative cooling, and generating that electricity also uses water.
Data centers typically use closed loop cooling systems but those do still lose a bit of water each day that needs to be replaced. It's not much—compared to the size of the data center—but it's still a non-trivial amount.
A study recently came out (it was talked about extensively on the Science VS podcast) that said that a long conversation with an AI chat bot (e.g. ChatGPT) could use up to half a liter of water—in the worst case scenario.
This statistic has been used in the news quite a lot recently but it's a bad statistic: That water usage counts the water used by the power plant (for its own cooling). That's typically water from ponds and similar reservoirs built right alongside the power plant (your classic "cooling pond"). So it's not like the data centers are using 0.5L of fresh water that could be going to people's homes.
For reference, the actual data center water usage is 12% of that 0.5L: 0.06L of water (for a long chat). Also remember: This is the worst-case scenario with a very poorly-engineered data center.
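Spelled out, the arithmetic behind those figures (using the worst-case numbers quoted above):

```python
# Back-of-the-envelope check of the worst-case figures above.
total_water_l = 0.5                # study's worst case for a long chat, incl. power-plant cooling
datacenter_share = 0.12            # fraction attributed to the data center itself
print(total_water_l * datacenter_share)   # 0.06 L actually used on-site per long chat
```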
Another stat from the study that's relevant: Generating images uses much less energy/water than chat. However, generating videos uses up an order of magnitude more than both (combined).
So if you want the lowest possible energy usage of modern, generative AI: Use fast (low parameter count), open source models... To generate images 👍
Colloquially, most people today mean genAI like LLMs when they say "AI", for brevity.
That's not the point at all. The point is, even before AI, our increasing energy needs were outpacing our ability/willingness to switch to green energy. Even then we were using more fossil fuels than at any point in the history of the world. Now AI is just adding a whole other layer of energy demand on top of that.
Sure, maybe, eventually, we will power everything with green energy, but… we aren't actually doing that, and we don't have time to catch up. Every bit longer it takes us to eliminate fossil fuels will add to the negative effects on our climate and ecosystems.
The power use from AI is orthogonal to renewable energy. From the news, you'd think that AI data centers have become the number one cause of global warming. Yet, they're not even in the top 100. Even at the current pace of data center buildouts, they won't make the top 100... ever.
AI data center power utilization is a regional problem specific to certain localities. It's a bad idea to build such a data center in certain places but companies do it anyway (for economic reasons that are easy to fix with regulation). It's not a universal problem across the globe.
Aside: I'd like to point out that the fusion reactor designs currently being built and tested were created using AI. Much of the advancements in that area are thanks to "AI data centers". If fusion power becomes a reality in the next 50 years it'll have more than made up for any emissions from data centers. From all of them, ever.