this post was submitted on 22 Jun 2025
48 points (92.9% liked)

Technology


Share interesting Technology news and links.


We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.

Then retrain on that.

Far too much garbage in any foundation model trained on uncorrected data.

Source.

More Context

Source.

Source.

top 13 comments
[–] WatDabney@lemmy.dbzer0.com 18 points 1 month ago (1 children)

...which has advanced reasoning...

Does he actually believe this sort of shit?

Like his constant nattering about self-driving cars or brain/computer interfaces (or DOGE cost-cutting, for that matter). It's all bludgeoningly obvious bullshit, but I can't work out whether he's more dishonest or delusional.

I lean a bit toward delusional — that to at least some notable degree, he actually believes the bullshit he spouts.

It's creepy either way.

[–] ulterno@programming.dev 1 points 1 month ago

He uses "advanced" relatively.
Grok's reasoning is probably more advanced than Elon's.

[–] Isoprenoid@programming.dev 15 points 1 month ago

"I'm going to use a model trained on incorrect data, to correct the data, and then train on that data."

[–] AlecSadler@lemmy.blahaj.zone 6 points 1 month ago (1 children)

I can't really comment first-hand, but people I know who work in the AI field have told me that Grok is pretty much garbage on all fronts.

Is that true?

[–] AceFuzzLord@lemm.ee 3 points 1 month ago

@grok analysis!

/s

[–] Schwim@lemmy.zip 6 points 1 month ago

It's going to refine its capabilities until it just starts calling everyone "pedoguy" and stating they are in the Epstein files, regardless of the question posed to it.

[–] TastyWheat@lemmy.world 5 points 1 month ago

But he apparently posted a clean drug test the other day! He can't be on coke NOW. /s

[–] Diplomjodler3@lemmy.world 5 points 1 month ago

World history according to Elon Musk:

Hitler was right.

The End

[–] MentalEdge@sopuli.xyz 2 points 1 month ago

Dude is over there literally trying to shoddily doodle his preferred reality onto the inside of his own little bubble.

[–] belastend@lemmy.dbzer0.com 2 points 1 month ago

The most-liked comment under the third post is straight-up Holocaust denial.

[–] ulterno@programming.dev 2 points 1 month ago (1 children)

So much ado for the modern equivalent of burning down a library.

[–] Mikina@programming.dev 2 points 1 month ago (1 children)

Which probably already happened in order to train it in the first place. The article is about Anthropic, but xAI would do the same if they could.

https://arstechnica.com/ai/2025/06/anthropic-destroyed-millions-of-print-books-to-build-its-ai-models/

[–] ulterno@programming.dev 1 points 1 month ago

I guess that's right.

Just that what I meant by "burning down a library" was the loss of information that would occur as a result.
I was referring to historical library burnings, of which I can recall two: the Library of Alexandria and Nalanda Mahavihara.

In the modern day, where material is digitised and mass-printed, burning books won't really cause that information to be lost (much less scanning it, which costs nothing except potential revenue to the publisher).
On the other hand, if you fill the internet with false information, such that someone just looking something up sees the false information long before the real thing, that would have a similar (or even worse) effect on civilisation.