For instance, this includes minerals for battery and other components to produce EVs and wind turbines – such as iron, lithium, and zinc
I found nothing in the IEA's announcement that indicates a shortage of those three elements. Iron is the fourth most abundant element in the Earth's crust.
In fact, this story frames the whole thing wrong. It's not that there's a shortage; it's that demand from renewables vastly exceeds what we're currently mining. Which, "duh," we knew already. What this report does is quantify it.
That said, the "human rights abuses" part isn't from the IEA report. That comes from the Business and Human Rights Resource Centre (BHRRC).
Specifically, the BHRRC has tracked these for seven key minerals: bauxite, cobalt, copper, lithium, manganese, nickel and zinc. Companies and countries need these for renewable energy technology, and electrification of transport.
These issues aren't limited to the renewables industry. Take copper specifically: you've got a lot of it in your walls and in the device you're reading this comment on. We have always had issues with copper, and solving them has been whack-a-mole. I'm not dismissing BHRRC's claim here; it's completely valid. But it's valid whether or not we do renewables. Either way, we still have to tackle this problem, EVs or not.
Of course, some companies were particularly complicit. Notably, BHRRC found that ten companies were associated with more than 50% of all allegations tracked since 2010.
And these are the usual suspects who routinely look the other way on human rights abuses: China, Mexico, Canada, and Switzerland. These four drive a lot of the abuses, and it's been that way for quite some time. That's not to be dismissive of everyone else (I know everyone is just itching to blame the United States somehow), but these four are usually the ones getting their hands smacked. Now, to be fair, it's really only China and Switzerland that don't care one way or the other. Canada and Mexico are just the folks the US convinced to take the fall for its particular appetite.
For example, Tanzania is extracting manganese and graphite. However, he pointed out that it is producing none of the higher-value green tech items like electric cars or batteries that need these minerals
Third Congo War incoming. But seriously: imperialism might have officially ended after World War II, but Western nations routinely pull this kind of economic fuckery, because "hey, at least they get to self-govern." It's what first-world nations tell themselves so they can sleep better about what they do.
Avan also highlighted the IEA’s advice that companies and countries should shift emphasis to mineral recycling to meet the growing demand.
This really should have happened yesterday. But doing something today would mean actually being proactive about the situation, and many first-world nations, when they see a problem, respond with "come back when it's a catastrophe."
OVERALL: This article is trying to highlight that recycling is very doable if governments actually invested in the infrastructure for it, and that if we actually recycled these minerals, we could save ⅓ of the overall cost for renewables. Recycling is just long-term economic sense. But of course, it's not short-term economic sense. So with shortages against demand on the horizon, new production will be demanded, and that will in turn cause human rights violations.
They worded the whole thing oddly, using "shortage" as if we're running out, when they meant shortage as in "we can't keep up without new production." They got the right idea; I just would have worded it all a bit differently.
Okay, for anyone confused about how a model can come up with something it wasn't trained on, a rough example of this is antialiasing.
In the simplest of terms, antialiasing looks at a vector over a pixel grid, sees what percentage of each pixel it covers, and applies that percentage to shade the pixel and reduce the jaggies.
There's no information for this in the vector itself; the math is what gives us the extra information. We're creating information from a source that didn't originally have it. Now, yeah, this is a really simple approach, and it might make you say "well, technically we didn't create any new information."
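If it helps to see that concretely, here's a toy sketch of coverage-based antialiasing in Python. The line, the grid size, and the supersampling are all just illustrative; real renderers compute coverage more cleverly, but the point stands: the shades of gray come from math, not from the vector.

```python
# Toy coverage-based antialiasing: shade each pixel by the fraction of
# its area that falls under the line y = 0.5 * x + 1. The fractional
# values are "new" information the raw vector never specified.
def coverage(px, py, m=0.5, b=1.0, samples=4):
    """Fraction of the pixel (px, py)..(px+1, py+1) lying under y = m*x + b."""
    hits = 0
    for i in range(samples):
        for j in range(samples):
            # Center of each subpixel sample.
            x = px + (i + 0.5) / samples
            y = py + (j + 0.5) / samples
            if y <= m * x + b:
                hits += 1
    return hits / (samples * samples)

# Render an 8x8 grid: 1.00 = fully covered, 0.00 = untouched, anything
# in between is the smoothing the program itself never drew.
for row in range(7, -1, -1):
    print(" ".join(f"{coverage(col, row):.2f}" for col in range(8)))
```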
At the end of the day, a tensor is a bunch of numbers that weight how pixels should arrange themselves on the canvas. We have weights that show how pixels should fall to form an adult. We have weights that show how pixels should fall to form a child. We have weights that show how pixels should fall to form a nude adult. And there are ways to adapt the lower-rank weights to find new approximations; that's literally what LoRAs do, it's literally their name: Low-Rank Adaptation. As you train toward some new, novel target, you can wrap the result into a textual inversion. That's what a textual inversion does: it gives an ontological handle, a label, to particular weights within a model.
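For a rough feel of what "low-rank adaptation" means mechanically, here's a toy numpy sketch. The shapes, scaling, and initialization are illustrative, not any specific library's implementation; the idea is just that two small matrices nudge a frozen weight matrix toward new behavior.

```python
import numpy as np

d_out, d_in, rank = 512, 512, 4

W = np.random.randn(d_out, d_in) * 0.02   # frozen pretrained weights
A = np.random.randn(rank, d_in) * 0.02    # trainable, tiny
B = np.zeros((d_out, rank))               # trainable, starts at zero
alpha = 8.0                               # scaling hyperparameter

def forward(x):
    # Base model output plus the low-rank "thumb on the scale".
    # Only A and B get trained; W never changes.
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = np.random.randn(d_in)
y = forward(x)
print(y.shape)  # (512,) -- same output shape, steerable via A and B
```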
Another way to think about this: six-fingered people in AI art. I assure you that no model was fed six-fingered subjects, so where do they come from? The answer is that the six-fingered person is a complex "averaging" of the tensors that make up the model's weights. We're getting new information where there originally was none.
We have to remember that these models ARE NOT databases. They are just multidimensional weights that tell pixels, starting from a random seed, where to go in the next step of the diffusion process. If you text2image "hand," there's a set of weights that push pixels around to form the average value of a hand. What it settles into could be a four-fingered hand, five fingers, or six, depending on the seed and how hard the diffuser follows the guidance scale for that particular prompt's weight. But it's distinctly not recalling pixel-for-pixel some image it saw earlier. It just has a bunch of averages of where pixels should go when someone says "hand."
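To make "seed plus guidance scale" less hand-wavy, here's a toy sketch of one guided denoising step. `predict_noise` is a hypothetical stand-in for the trained denoiser, and the numbers are made up; the shape of the computation is the point.

```python
import numpy as np

def predict_noise(x, prompt_embedding):
    # Stand-in for a learned network; a real denoiser returns the noise
    # it thinks was added to x, conditioned on the prompt.
    return 0.1 * x + 0.01 * prompt_embedding

def guided_step(x, cond, uncond, guidance_scale=7.5, step_size=0.1):
    eps_cond = predict_noise(x, cond)      # noise estimate given "hand"
    eps_uncond = predict_noise(x, uncond)  # noise estimate given nothing
    # Classifier-free guidance: push harder in the direction the prompt implies.
    eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)
    return x - step_size * eps             # move pixels toward the "average"

rng = np.random.default_rng(seed=42)       # different seed, different hand
x = rng.standard_normal(16)                # start from pure noise
cond = rng.standard_normal(16)             # pretend embedding for "hand"
uncond = np.zeros(16)                      # empty-prompt embedding

for _ in range(50):
    x = guided_step(x, cond, uncond)
```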
You can generate something new from the average of complex tensors. You can put your thumb on the scale for some of those weights, give it new math to find new averages, and then, once it's getting close to the target you're after, use a textual inversion to give a label to this "new" average you've discovered in the weights.
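As a toy illustration of that last step, here's "learn only an embedding, keep the model frozen" in miniature. Everything here is a stand-in: real textual inversion optimizes a token embedding through a full diffusion model, but the principle is the same, the weights never move, only the label does.

```python
import numpy as np

rng = np.random.default_rng(0)
frozen = rng.standard_normal((16, 8))  # stand-in for frozen model weights
target = rng.standard_normal(16)       # the "new average" we want a name for

embedding = np.zeros(8)                # the only thing we train
lr = 0.01

for _ in range(500):
    out = frozen @ embedding           # frozen model applied to our token
    grad = frozen.T @ (out - target)   # gradient of 0.5 * ||out - target||^2
    embedding -= lr * grad

# The learned embedding now labels a point that steers the frozen
# weights toward the target -- no model layers were retrained.
print(np.linalg.norm(frozen @ embedding - target))
```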
Antialiasing doesn't feel like new information is being added, but it is. That's how we can take the actual pixels a program pushes out and turn them into a smooth line the program never distinctly produced. I get that it feels like a stretch to go from antialiasing to generating completely novel information, but it's just numbers driving where pixels get moved; it's math, and there's not really a lot of magic in these things. And given enough energy, anyone can push numbers to do things they weren't supposed to do in the first place.
The way folks who need their models to be on the up-and-up handle this is to ensure that particular averages can't happen. Say we want to avoid outcome B', but averaging A and C arrives at B'. What you do is add a negative weight to the formula: you basically train A and C to average to something like R' that's really far from the point you want to avoid. But like any number, if we know A and C average out to R', we can add low-rank weights, no new layers in the model required, that say "anything near R' gets a -P' weight." Because of the averaging, we might land on C', but we could also land on A', or on B', our original target. And we never had to recalculate the model's own approximation that A and C give R'.
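Here's the arithmetic of that in miniature. A, C, B', R', and P' are just labeled points I made up; no real model's safety training is this simple, but it shows why the avoided average stays reachable.

```python
import numpy as np

A = np.array([1.0, 0.0])
C = np.array([0.0, 1.0])
B_prime = (A + C) / 2            # the average we want to avoid

# Safety training shifts that average to a "safe" point R' far from B'.
R_prime = np.array([5.0, 5.0])

# A low-rank tweak then adds a push *away* from R' (the -P' weight);
# depending on its scale, the result lands back near A, C, or even B'.
P_prime = R_prime - B_prime
steered = R_prime - 1.0 * P_prime

print(steered)  # [0.5 0.5] -- right back on B', the "avoided" average
```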