backgroundcow

joined 2 years ago
[–] backgroundcow@lemmy.world 57 points 18 hours ago* (last edited 18 hours ago) (1 children)

MasterCard's and Valve's statements seem to point at Stripe and PayPal as the ones who folded to the pressure. These payment processors then cited MasterCard's rules to back up their change in policy.

MasterCard now clarifying that the payment processors are over-interpreting the rules and that anything legal is ok seems like a very good thing here. Valve should be able to go back to Stripe and PayPal with this and say: "Hey, you've misunderstood the rules you are quoting; MasterCard themselves say anything legal is ok, and that is the exact policy we've been using!"

[–] backgroundcow@lemmy.world 2 points 19 hours ago

> Also I'd probably lose 4 of these before giving up outright.

You buy them in bags of 20ish, and keep buying them until you have established an equilibrium in your home where there are always a few around to put on new bags. I'm not joking.

[–] backgroundcow@lemmy.world 20 points 1 day ago* (last edited 1 day ago) (1 children)

I discovered that recent versions of the built-in photo apps on Android flat out refuse to do this. The UI for removing location info is there, but it is intentionally blocked if the EXIF info was added automatically by GPS (i.e., it only works if you have manually set a location). It seems so weird, and outright evil, to block one of the key ways for people to stay safe.
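
For anyone stuck with this: the tags are easy enough to strip off-device once you have a copy of the file. A minimal sketch, assuming the third-party `piexif` Python library and a local JPEG; the filename is just a placeholder:

```python
import piexif

# Read all EXIF blocks from the file
exif_dict = piexif.load("photo.jpg")

# Drop the GPS IFD entirely (latitude, longitude, altitude, timestamps)
exif_dict["GPS"] = {}

# Write the cleaned EXIF back into the same file
piexif.insert(piexif.dump(exif_dict), "photo.jpg")
```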

[–] backgroundcow@lemmy.world -1 points 6 days ago* (last edited 6 days ago)

> The only reason this is "click bait" is because someone chose to do this, rather than their own mental instability bringing this out organically.

This is my point. The case we are discussing now isn't noteworthy, because someone doing it deliberately is about as "impressive" as writing out a disturbing sentence in MS Paint. One cannot create a useful "answer engine" without it being capable of producing something that looks weird/provocative/offensive when taken out of context, any more than one can create a useful drawing program that blocks out all offensive content. Nor is it a worthwhile goal.

The cases to care about are those where the LLM takes a perfectly reasonable conversation off the rails. Clickbait like the one in the OP is actually harmful in that it drowns out such real cases, and is therefore deserving of ridicule.

[–] backgroundcow@lemmy.world -2 points 6 days ago (2 children)

Does the marketing matter when the reason for the offending output is that the user spent significant deliberate effort in coaxing the LLM to output what it did? It still seems like MS Paint with extra steps to me.

I get not wanting LLMs to output "offensive content" unprompted. Just like it would be noteworthy if "Clear canvas" in MS Paint sometimes yielded a violent, bloody photograph. But that isn't what is going on in the OP's clickbait.

[–] backgroundcow@lemmy.world -3 points 6 days ago* (last edited 6 days ago) (4 children)

And, the thing is, LLMs are quite well protected. Look what I coaxed MS Paint to say with almost no effort! Don't get me started on plain pen and paper! Which we put in the hands of TODDLERS!

[–] backgroundcow@lemmy.world 7 points 2 weeks ago* (last edited 2 weeks ago)

If someone is trying to do the most good with their money, it seems logical to give via an organization that distributes the funds according to a plan. Handing out money to whoever happens to be closest at hand instead seems motivated more by trying to make me feel good than by actually making a difference.

Furthermore, there are larger-scale systemic issues. Begging takes up a lot of time. It becomes a problem if it pays someone enough to outcompete more productive uses of that time, which could, in some cases, pay and, in other cases, at least be more useful: childcare/teaching kids, home maintenance, cooking, cleaning, etc. In contrast, state welfare programs and aid organizations usually do not condition help on the recipient sitting idle for long stretches. Add to this that begging really only works in crowded areas, which may limit the possibility of relocating somewhere where living might be more sustainable. Hence, in the worst case, handing out money to those who beg for it could actually make it harder for people stuck in a very difficult situation to get out of it.

This "analysis" of course skips over the many, many individual circumstances that get people into a situation where begging seems the right choice. What we should be doing is investing public funds even heavier in social programs and other aids to (1) avoid as much as possible that people end up in these situations; and (2) get people out of these situations as effectively as possible.

[–] backgroundcow@lemmy.world 11 points 3 weeks ago* (last edited 3 weeks ago)

I don't get this. Why are so many countries willing to play Trump's game? It seems a horrible long-term strategy to allow one country to hold global trade hostage this way. Shouldn't we negotiate between ourselves, i.e., between the affected countries?

The idea should be: for us, exports of X, Y, and Z are taking a hit, and for you, A, B, and C. So, let's lower our tariffs in these respective areas to soften the blow to the affected industries. That way, we would partly make up for, say, lost car exports to the US, at the cost of additional competition on the domestic market for, say, soybeans, and vice versa, evening out the effects as best we can.

With such agreements in place, we can return to Trump from a stronger position and say: we are willing to negotiate, but not under threat. We will do nothing until US tariffs are back to the levels before this started. But, at that point, we will be happy to discuss the issues you appear to see with trade imbalances and tariffs, so that we can find a mutually beneficial agreement going forward.

Something like this would send a message that would do far more good towards trade stability for the future.

[–] backgroundcow@lemmy.world 1 points 3 weeks ago

No shade on people trying to make sustainable choices, but if the solution to the climate crisis is trusting everyone to "get with the program" and make the right choice while unsustainable alternatives sit right there beside them at lower prices, then we are truly doomed.

What the companies behind these foods and products don't want to talk about is that to get anywhere we have to target them. It shouldn't be a controversial standpoint that: (i) all products need to cover their true, full environmental and sustainability costs, with the money going back into investments in the environment that counteract the negative impacts; and (ii) we need to regulate, regulate, and regulate how companies are allowed to interact with the environment and society, and these limits must apply worldwide. There needs to be careful follow-up to ensure these rules are followed, with consequences for the individuals who decide to break them AND "death sentences" (i.e., complete disbandment) for whole companies that repeatedly overstep.

[–] backgroundcow@lemmy.world 10 points 1 month ago (3 children)

> What we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data

Prove to me that this isn't exactly how the human mind -- i.e., "real intelligence" -- works.

The challenge with asserting how "real" the intelligence-mimicking behavior of LLMs is, is not to convince us that it is "just" the result of cold, deterministic statistical algorithms running on silicon. This we know, because we created them that way.

The real challenge is to convince ourselves that the wetware electrochemical neural unit embedded in our skulls, which evolved through a fairly straightforward process of natural selection to improve our odds of survival, isn't relying on statistical models whose inner workings are, essentially, the same.

All these claims that human creativity is so outstanding that it "obviously" will never be recreated by deterministic statistical models that "only" interpolate knowledge picked up from observing human output into new contexts: I just don't see it.

What human invention, art, or idea was so truly, undeniably, completely new that it cannot have sprung out of something coming before it? Even the bloody theory of general relativity, held as one of the pinnacles of human intelligence, has clear connections to what came before. If you read Einstein's works, he is actually very good at explaining how he worked it out in increments from earlier models and ideas ("what happens to a meter stick in space", etc.); i.e., he was very good at using the tools we have to systematically bring our understanding from one domain into another.

To me, the argument in the linked article reads a bit like "LLM AI cannot be 'intelligence' because when I introspect I don't feel like a statistical machine". This seems about as sophisticated as the "I ain't no monkey!" counter-argument against evolution.

All this is NOT to say that we know that LLM AI = human intelligence. It is a genuinely fascinating scientific question. I just don't think we have anything to gain from the "I ain't no statistical machine" line of argument.

[–] backgroundcow@lemmy.world 9 points 1 month ago

That's perfect. You already know your lines!

 
 