Technology

34832 readers
1 user here now

This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in a DM before posting product reviews or ads. Such posts are otherwise subject to removal.


Rules:

1: All Lemmy rules apply

2: Do not make low-effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived versions as sources, NOT screenshots. This helps blind users who rely on screen readers.

5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)

6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: Crypto-related posts, unless essential, are disallowed

founded 6 years ago

We Have No Moat, And Neither Does OpenAI

We’ve done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be?

But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch.

I’m talking, of course, about open source. Plainly put, they are lapping us. Things we consider “major open problems” are solved and in people’s hands today.


The European Commission has made a formal antitrust complaint against Google and its ad business. In a preliminary opinion, the regulator says Google has abused its dominant position in the digital advertising market. It says that forcing Google to sell off parts of its business may be the only remedy, if the company is found guilty of the charges.


Former r/linuxmasterrace members: Feel free to join the newly created !linuxmasterrace@feddit.de community.

Let's make the full transition to the decentralized Fediverse!


Guys, I created a community to discuss anything about Mozilla Thunderbird. Feel free to join: https://lemmy.world/c/thunderbird. Thanks!

!thunderbird@lemmy.world

submitted 2 years ago* (last edited 2 years ago) by 9up999@lemmy.ml to c/technology@lemmy.ml

The algorithm used for the cash relief program is broken, a Human Rights Watch report found.

A program spearheaded by the World Bank that uses algorithmic decision-making to means-test poverty relief money is failing the very people it’s intended to protect, according to a new report by Human Rights Watch. The anti-poverty program in question, known as the Unified Cash Transfer Program, was put in place by the Jordanian government.

Having software systems make important choices is often billed as a means of making those choices more rational, fair, and effective. In the case of the poverty relief program, however, the Human Rights Watch investigation found the algorithm relies on stereotypes and faulty assumptions about poverty.

“The problem is not merely that the algorithm relies on inaccurate and unreliable data about people’s finances,” the report found. “Its formula also flattens the economic complexity of people’s lives into a crude ranking that pits one household against another, fueling social tension and perceptions of unfairness.”

The program, known in Jordan as Takaful, is meant to solve a real problem: The World Bank provided the Jordanian state with a multibillion-dollar poverty relief loan, but it’s impossible for the loan to cover all of Jordan’s needs.

Without enough cash to cut every needy Jordanian a check, Takaful works by analyzing the household income and expenses of every applicant, along with nearly 60 socioeconomic factors like electricity use, car ownership, business licenses, employment history, illness, and gender. These responses are then ranked — using a secret algorithm — to automatically determine who are the poorest and most deserving of relief. The idea is that such a sorting algorithm would direct cash to the most vulnerable Jordanians who are in most dire need of it. According to Human Rights Watch, the algorithm is broken.
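To make the kind of system being described concrete: Takaful's actual formula, indicators, and weights are secret, so the following is only a hypothetical sketch of a weighted means-test ranking, with invented indicator names and weight values. It also illustrates the report's finding that a single binary flag like car ownership can dominate the ranking regardless of actual cash poverty.

```python
# Illustrative sketch only: the real Takaful formula is not public.
# All indicator names and weights below are hypothetical.

def score_household(indicators, weights):
    """Weighted linear score: a lower score means ranked 'poorer' by the model."""
    return sum(weights[k] * indicators.get(k, 0.0) for k in weights)

def rank_applicants(applicants, weights):
    """Sort applicants from lowest to highest score (most to least 'deserving')."""
    return sorted(applicants, key=lambda a: score_household(a["indicators"], weights))

# Hypothetical weights: note how one binary flag (car ownership) can
# outweigh large differences in actual income and expenses.
WEIGHTS = {"monthly_income": 1.0, "electricity_kwh": 0.5, "owns_car": 500.0}

applicants = [
    {"id": "A", "indicators": {"monthly_income": 90, "electricity_kwh": 200, "owns_car": 1}},
    {"id": "B", "indicators": {"monthly_income": 120, "electricity_kwh": 150, "owns_car": 0}},
]

ranked = rank_applicants(applicants, WEIGHTS)
# Household A has less income, but the car flag pushes it behind B in the ranking.
```

A cash cutoff applied to such a ranking then decides who receives relief, which is why a single mis-weighted indicator can disqualify genuinely poor households.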

The rights group’s investigation found that car ownership seems to be a disqualifying factor for many Takaful applicants, even if they are too poor to buy gas to drive the car.

Similarly, applicants are penalized for using electricity and water based on the presumption that their ability to afford utility payments is evidence that they are not as destitute as those who can’t. The Human Rights Watch report, however, explains that sometimes electricity usage is high precisely for poverty-related reasons. “For example, a 2020 study of housing sustainability in Amman found that almost 75 percent of low-to-middle income households surveyed lived in apartments with poor thermal insulation, making them more expensive to heat.”

In other cases, one Jordanian household may be using more electricity than their neighbors because they are stuck with old, energy-inefficient home appliances.

Beyond the technical problems with Takaful itself are the knock-on effects of digital means-testing. The report notes that many people in dire need of relief money lack the internet access to even apply for it, requiring them to find, or pay for, a ride to an internet café, where they are subject to further fees and charges to get online.

“Who needs money?” asked one 29-year-old Jordanian Takaful recipient who spoke to Human Rights Watch. “The people who really don’t know how [to apply] or don’t have internet or computer access.”

Human Rights Watch also faulted Takaful’s insistence that applicants’ self-reported income match up exactly with their self-reported household expenses, which “fails to recognize how people struggle to make ends meet, or their reliance on credit, support from family, and other ad hoc measures to bridge the gap.”

The report found that the rigidity of this step forced people to simply fudge the numbers so that their applications would even be processed, undermining the algorithm’s illusion of objectivity. “Forcing people to mold their hardships to fit the algorithm’s calculus of need,” the report said, “undermines Takaful’s targeting accuracy, and claims by the government and the World Bank that this is the most effective way to maximize limited resources.”
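The rigid balance check the report describes can be sketched as follows; the actual validation logic is not public, so this is only a hypothetical illustration of why truthful applicants get stuck.

```python
# Hypothetical reconstruction of the rigid validation step described in the
# report: an application only proceeds if self-reported income exactly
# matches self-reported expenses.

def application_accepted(income: float, expenses: float) -> bool:
    """Rigid check: no allowance for credit, family support, or other gaps."""
    return income == expenses

# A household honestly reporting that it spends more than it earns
# (bridging the gap with credit and family support) is rejected...
assert not application_accepted(income=200.0, expenses=260.0)

# ...so applicants "fudge the numbers" until the form validates.
assert application_accepted(income=260.0, expenses=260.0)
```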

The report, based on 70 interviews with Takaful applicants, Jordanian government workers, and World Bank personnel, emphasizes that the system is part of a broader trend by the World Bank to popularize algorithmically means-tested social benefits over universal programs throughout the developing economies in the so-called Global South.

Compounding the dysfunction of an algorithmic program like Takaful is the increasingly common naïve assumption that automated decision-making software is so sophisticated that its results are less likely to be faulty. Just as dazzled ChatGPT users often accept nonsense outputs from the chatbot because the concept of a convincing chatbot is so inherently impressive, artificial intelligence ethicists warn that the veneer of automated intelligence surrounding automated welfare distribution leads to a similar myopia.

The Jordanian government’s official statement to Human Rights Watch defending Takaful’s underlying technology provides a perfect example: “The methodology categorizes poor households to 10 layers, starting from the poorest to the least poor, then each layer includes 100 sub-layers, using statistical analysis. Thus, resulting in 1,000 readings that differentiate amongst households’ unique welfare status and needs.”

When Human Rights Watch asked the Distributed AI Research Institute to review these remarks, Alex Hanna, the group’s director of research, concluded, “These are technical words that don’t make any sense together.” DAIR senior researcher Nyalleng Moorosi added, “I think they are using this language as technical obfuscation.”

As is the case with virtually all automated decision-making systems, while the people who designed Takaful insist on its fairness and functionality, they refuse to let anyone look under the hood. Though it’s known Takaful uses 57 different criteria to rank poorness, the report notes that the Jordanian National Aid Fund, which administers the system, “declined to disclose the full list of indicators and the specific weights assigned, saying that these were for internal purposes only and ‘constantly changing.’”

While fantastical visions of “Terminator”-like artificial intelligences have come to dominate public fears around automated decision-making, other technologists argue civil society ought to focus on real, current harms caused by systems like Takaful, not nightmare scenarios drawn from science fiction.

So long as the functionality of Takaful and its ilk remain government and corporate secrets, the extent of those risks will remain unknown.


Yo everyone. I'm trying to understand how USB power works with batteries; hopefully this is the correct community to ask. We have so many devices that can be charged over USB-C, yet all the power bricks and power banks list different voltages, amps, and watts. Can I use any USB-C brick or power bank with any device, and trust that it will never draw more than the power source can supply?

I thought USB-C was made for this, yet I've read reports of people damaging their Nintendo Switch because of a power brick. How careful do we have to be when charging our phones, tablets, laptops, and gaming devices? What exactly should we look out for when deciding what to plug in where? Thank you very much for any replies.
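Conceptually, USB Power Delivery is designed to make this safe through negotiation: the charger advertises the voltage/current profiles it can supply, and a spec-compliant device requests one within its own limits and draws no more than that; the early Switch damage reports reportedly involved out-of-spec hardware, not the protocol itself. A simplified sketch of the negotiation idea (not the real PD message protocol, and with made-up profile values):

```python
# Simplified sketch of USB Power Delivery negotiation. The real protocol
# exchanges PDO/RDO messages over the CC wire; profiles here are made up.

CHARGER_PDOS = [  # (volts, max amps) a hypothetical 45 W charger might advertise
    (5.0, 3.0),
    (9.0, 3.0),
    (15.0, 3.0),
]

def negotiate(pdos, device_max_volts, device_max_amps):
    """Pick the highest-power profile the device can safely accept."""
    usable = [(v, min(a, device_max_amps)) for v, a in pdos if v <= device_max_volts]
    if not usable:
        return (5.0, 0.5)  # fall back to a basic low-power USB default
    return max(usable, key=lambda p: p[0] * p[1])

# A phone that tolerates up to 9 V / 2 A negotiates 9 V and limits its own draw:
print(negotiate(CHARGER_PDOS, device_max_volts=9.0, device_max_amps=2.0))  # → (9.0, 2.0)
```

The key point: the device chooses within the charger's advertised limits, so any compliant charger/device pair should be safe even when their maximum ratings differ.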


TL;DR: LLMs are just mimicking natural language and conversation. Fact checking and healthy skepticism are not part of their model. For example, they can easily be tricked into advocating conspiracy theories, like a fake moon landing. Google Bard even states arithmetic falsehoods, like claiming 5*6 != 30.


" Reddit is a complicated product. Can it really be built from scratch, without much funding? "


Hi! Is there a simple way to push an RSS feed into a Pleroma account, so I can create an account for my website and automatically publish new posts? Thanks!
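One possible approach, sketched under assumptions: Pleroma exposes the Mastodon-compatible client API, so a small script can poll the feed and publish each new entry via `POST /api/v1/statuses` with an app access token. The instance URL, token, and function names below are placeholders, and the sketch keeps only a minimal in-memory record of seen links.

```python
# Minimal sketch: poll an RSS 2.0 feed and push new entries to Pleroma
# via its Mastodon-compatible API. Instance URL and token are placeholders.
import urllib.request
import urllib.parse
import xml.etree.ElementTree as ET

def parse_rss(xml_text):
    """Return (title, link) pairs for each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [
        (item.findtext("title", ""), item.findtext("link", ""))
        for item in root.iter("item")
    ]

def post_status(instance, token, text):
    """Publish one status via the Mastodon-compatible endpoint Pleroma exposes."""
    data = urllib.parse.urlencode({"status": text}).encode()
    req = urllib.request.Request(
        f"{instance}/api/v1/statuses",
        data=data,  # POST body; urllib sends POST when data is given
        headers={"Authorization": f"Bearer {token}"},
    )
    return urllib.request.urlopen(req)

def push_feed(instance, token, feed_url, seen_links):
    """Fetch the feed and post any entry whose link has not been seen yet."""
    with urllib.request.urlopen(feed_url) as resp:
        entries = parse_rss(resp.read())
    for title, link in entries:
        if link not in seen_links:
            post_status(instance, token, f"{title} {link}")
            seen_links.add(link)
```

Run it from cron and persist `seen_links` to a file so entries aren't re-posted; alternatively, existing feed-to-fediverse bridge bots do the same thing without custom code.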


This is all about EPOK LEAGUE, the best esports organization on the planet!
