tfm

joined 5 months ago

cross-posted from: https://lemm.ee/post/66544085

Text to avoid paywall

The Wikimedia Foundation, the nonprofit organization which hosts and develops Wikipedia, has paused an experiment that showed users AI-generated summaries at the top of articles after an overwhelmingly negative reaction from the Wikipedia editors community.

“Just because Google has rolled out its AI summaries doesn't mean we need to one-up them, I sincerely beg you not to test this, on mobile or anywhere else,” one editor said in response to the Wikimedia Foundation’s announcement that it would launch a two-week trial of the summaries on the mobile version of Wikipedia. “This would do immediate and irreversible harm to our readers and to our reputation as a decently trustworthy and serious source. Wikipedia has in some ways become a byword for sober boringness, which is excellent. Let's not insult our readers' intelligence and join the stampede to roll out flashy AI summaries. Which is what these are, although here the word ‘machine-generated’ is used instead.”

Two other editors simply commented, “Yuck.”

For years, Wikipedia has been one of the most valuable repositories of information in the world, and a laudable model for community-based, democratic internet platform governance. Its importance has only grown in the last couple of years during the generative AI boom, as it’s one of the only internet platforms that has not been significantly degraded by the flood of AI-generated slop and misinformation. Unlike Google, which since embracing generative AI has instructed its users to eat glue, Wikipedia’s community has kept its articles relatively high quality. As I reported last year, editors are actively working to filter out bad, AI-generated content from Wikipedia.

A page detailing the AI-generated summaries project, called “Simple Article Summaries,” explains that it was proposed after a discussion at Wikimedia’s 2024 conference, Wikimania, where “Wikimedians discussed ways that AI/machine-generated remixing of the already created content can be used to make Wikipedia more accessible and easier to learn from.” Editors who participated in the discussion thought that these summaries could improve the learning experience on Wikipedia, where some article summaries can be quite dense and filled with technical jargon, but that AI features needed to be clearly labeled as such and that users needed an easy way to flag issues with “machine-generated/remixed content once it was published or generated automatically.”

In one experiment, where summaries were enabled for users who had the Wikipedia browser extension installed, the generated summary showed up at the top of the article, and users had to click to expand and read it. That summary was also flagged with a yellow “unverified” label.

An example of what the AI-generated summary looked like.

Wikimedia announced that it was going to run the generated summaries experiment on June 2, and was immediately met with dozens of replies from editors who said “very bad idea,” “strongest possible oppose,” “Absolutely not,” etc.

“Yes, human editors can introduce reliability and NPOV [neutral point-of-view] issues. But as a collective mass, it evens out into a beautiful corpus,” one editor said. “With Simple Article Summaries, you propose giving one singular editor with known reliability and NPOV issues a platform at the very top of any given article, whilst giving zero editorial control to others. It reinforces the idea that Wikipedia cannot be relied on, destroying a decade of policy work. It reinforces the belief that unsourced, charged content can be added, because this platforms it. I don't think I would feel comfortable contributing to an encyclopedia like this. No other community has mastered collaboration to such a wondrous extent, and this would throw that away.”

A day later, Wikimedia announced that it would pause the launch of the experiment, but indicated that it’s still interested in AI-generated summaries.

“The Wikimedia Foundation has been exploring ways to make Wikipedia and other Wikimedia projects more accessible to readers globally,” a Wikimedia Foundation spokesperson told me in an email. “This two-week, opt-in experiment was focused on making complex Wikipedia articles more accessible to people with different reading levels. For the purposes of this experiment, the summaries were generated by an open-weight Aya model by Cohere. It was meant to gauge interest in a feature like this, and to help us think about the right kind of community moderation systems to ensure humans remain central to deciding what information is shown on Wikipedia.”

“It is common to receive a variety of feedback from volunteers, and we incorporate it in our decisions, and sometimes change course,” the Wikimedia Foundation spokesperson added. “We welcome such thoughtful feedback — this is what continues to make Wikipedia a truly collaborative platform of human knowledge.”

“Reading through the comments, it’s clear we could have done a better job introducing this idea and opening up the conversation here on VPT back in March,” a Wikimedia Foundation project manager said. VPT, or “village pump (technical),” is where the Wikimedia Foundation and the community discuss technical aspects of the platform. “As internet usage changes over time, we are trying to discover new ways to help new generations learn from Wikipedia to sustain our movement into the future. In consequence, we need to figure out how we can experiment in safe ways that are appropriate for readers and the Wikimedia community. Looking back, we realize the next step with this message should have been to provide more of that context for you all and to make the space for folks to engage further.”

The project manager also said that “Bringing generative AI into the Wikipedia reading experience is a serious set of decisions, with important implications, and we intend to treat it as such,” and that “We do not have any plans for bringing a summary feature to the wikis without editor involvement. An editor moderation workflow is required under any circumstances, both for this idea, as well as any future idea around AI summarized or adapted content.”

 


cross-posted from: https://europe.pub/post/1353066

My opinion: Weapons should basically be banned completely.

Including for police officers. Ultimately, they are not supposed to kill anyone, but only to defuse dangerous situations. And there are more effective non-lethal tools for that. For really hard cases there can still be the Cobra and special units with proper firearms and rifles, but ordinary patrol officers really don't need them in a country like Österreich (Austria).

You could perhaps make exceptions for sport shooting and, for example, foresters. But it simply holds that fewer firearms and stricter gun laws lead to fewer tragic incidents involving firearms.

 

 

cross-posted from: https://europe.pub/post/1339926

 

cross-posted from: https://szmer.info/post/7894706

Poland’s Constitutional Tribunal (TK) has ruled that European Union energy and climate regulations are incompatible with the Polish constitution and breach national sovereignty in determining energy policy.

The Tribunal found that EU institutions, including the Court of Justice of the European Union (CJEU), had exceeded their competences by interpreting EU treaties in a way that significantly impacts Poland’s ability to choose its energy sources independently.

Interpretations of EU law “cannot mean that Poland loses control over the scope of its delegated competences, and thus that there are areas in which its sovereignty (here: energy) is not protected”, the court said in a statement announcing its decision.

However, the ruling is unlikely to have any real effect for now given that the current government, a coalition led by Prime Minister Donald Tusk, does not recognise the TK’s legitimacy due to it containing judges unlawfully appointed by the former Law and Justice (PiS) administration.

The case was brought by a group of opposition lawmakers led by Sebastian Kaleta, a PiS MP and former deputy justice minister. The motion challenged the compatibility of EU climate rules – including Directive 2003/87/EC, which created the EU Emissions Trading Scheme (EU ETS) – with the Polish constitution.

The MPs argued that, although Poland had transferred some powers to Brussels, it should retain sovereignty over critical energy decisions. They claimed that mandatory participation in the EU ETS restricts economic freedom and undermines the state’s ability to ensure energy security.

They also warned that EU decision-making processes, which do not require unanimity in the European Council on issues affecting Poland’s energy mix, might breach the limits of competence conferred on the EU and undermine the primacy of the Polish constitution.

In its ruling, the TK agreed with the motion’s core arguments. It held that the CJEU had extended the interpretation of the Treaty on the Functioning of the European Union beyond the conferred competences, infringing on national sovereignty.

“Competences not conferred on the European Union belong to the member states themselves, and the EU can only act on the basis of the principle of subsidiarity, subject to the scrutiny of national parliaments at all times,” the court said.

Consequently, the TK found this interpretation of EU law to be incompatible with the Polish constitution, emphasising that Poland cannot lose control over the scope of delegated powers, especially in such a key area as energy sovereignty.

The TK, however, discontinued proceedings relating specifically to the ETS “due to the incomplete, from a formal point of view, definition of the object under verification”.

The TK concluded its statement by stating that it was now up to the Polish legislature and executive to take “appropriate public law measures” to implement the decision, which enters into force upon its publication.

However, it is the government that is responsible for publishing TK rulings, and it refuses to do so given that some of the tribunal’s judges were illegitimately appointed under PiS.

The ruling could still reverberate in Polish politics, however. The PiS-aligned president-elect, Karol Nawrocki, who takes office in August, said last month that the TK’s decision on this case could be a way to lower electricity prices by 33% – one of his campaign promises.

He also pledged to hold a referendum on withdrawing from the EU’s Green Deal – a set of policies aimed at reaching climate neutrality by 2050 – and reaffirmed his support for coal, which remains Poland’s main source of electricity generation and is also widely used for heating homes.

PiS politicians welcomed the verdict, insisting that it means that Poland does not have to implement the Green Deal.

“The EU has not been given the competence to decide without the consent of Poland which energy sources we can use and what fiscal burdens may be imposed on individual sources,” Kaleta wrote on X. “This opens the path for a radical reduction in electricity and heating prices now.”

The former justice minister in the PiS government, Zbigniew Ziobro, meanwhile, challenged Tusk, asking if he would “break the law again and hide the verdict to drive Poles into poverty” or “will you behave as you should?”

The government has so far not commented on the TK’s ruling.

 

cross-posted from: https://lemmy.world/post/31108228

As tensions escalate between California and the Trump administration over immigration, another potential battlefront is emerging over taxes.

The spat began with reports that the Trump administration is considering cutting funding for California's university system, the largest higher education system in the nation, with about 12% of all enrolled U.S. students.

In response, Gov. Gavin Newsom wrote Friday afternoon in a social media post that California provides about $80 billion more in taxes to the federal government than it receives in return.

"Maybe it's time to cut that off, @realDonaldTrump," Newsom said.

[–] tfm@europe.pub 2 points 4 months ago (3 children)

That's an awesome list. Can I use that for the instance sidebar? :)

[–] tfm@europe.pub 3 points 4 months ago

Thanks for the shoutout! Every feedback and help is appreciated. :)

Let's bring social media to Europe together!

[–] tfm@europe.pub 1 points 4 months ago

Yeah it's a great directory

[–] tfm@europe.pub 2 points 4 months ago (1 children)

Why not? The more, the better, in my opinion.

[–] tfm@europe.pub 2 points 4 months ago (2 children)

Proton and Tuta are good. Both are European, privacy friendly services.

Here a few more: https://european-alternatives.eu/alternative-to/gmail

[–] tfm@europe.pub 2 points 4 months ago

Photon, mlmym, Voyager

[–] tfm@europe.pub 3 points 4 months ago

It would only be e2ee if you alone, and not the server, hold the keys to decrypt. You can achieve this by storing the keys and doing decryption only locally. Proton does that, for example.
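To illustrate the "keys stay local" point, here is a minimal sketch (not Proton's actual implementation; the Fernet cipher from Python's cryptography package is just a stand-in): the key is generated and held on the client, and the server only ever receives ciphertext it cannot read.

```python
# Minimal sketch of client-side encryption: the key never leaves the client,
# so whatever the server stores is opaque ciphertext. Fernet is used here
# purely as a stand-in for whatever scheme a real e2ee service would use.
from cryptography.fernet import Fernet

# Generated and stored locally (in practice it might be derived from the
# user's password); in an e2ee design this key is never uploaded.
local_key = Fernet.generate_key()
cipher = Fernet(local_key)

message = b"draft mail: see you at the meetup on Friday"

# The only thing sent to the server: ciphertext.
ciphertext = cipher.encrypt(message)
server_storage = {"blob": ciphertext}  # the server cannot decrypt this

# Decryption happens back on the client with the locally held key.
assert cipher.decrypt(server_storage["blob"]) == message
```

If the server also held local_key, say to offer server-side search, the exact same calls would work, but the data would no longer be end-to-end encrypted.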

[–] tfm@europe.pub 1 points 4 months ago (2 children)

Then it wouldn't be e2ee at all.

[–] tfm@europe.pub 2 points 4 months ago
[–] tfm@europe.pub 4 points 4 months ago (3 children)

What does that have to do with taxation? At most it makes tax evasion harder. Otherwise it's the same process.
