FaceDeer

joined 2 years ago
[–] FaceDeer@fedia.io -3 points 1 year ago (2 children)

"It's ruined and that's a bad thing, so let's ruin it more. Including the older stuff that wasn't as badly ruined."

This is a very childish approach to life, IMO. If you don't like Reddit anymore, just move on and leave it be for those who do still like it.

[–] FaceDeer@fedia.io -5 points 1 year ago

The fact that it's happened before doesn't make it a good thing, and doesn't make it something that shouldn't be opposed.

Fortunately, Reddit is well archived, so LLMs can still be trained on it regardless of what Reddit or its users try to do to the data now, but it's still a negative thing that doesn't have to happen.

[–] FaceDeer@fedia.io 2 points 1 year ago (3 children)

SpaceX holds a 64% share of the global commercial launch market for sending satellites, scientific instruments, and other payloads into orbit. In the first half of 2023 it handled 21 flights for outside customers (64% of the worldwide total) and 88% of customer flights from U.S. launch sites.[1]

If success isn't their goal, I'd be amazed at what they'd accomplish if they decided to try for it someday.

[–] FaceDeer@fedia.io 2 points 1 year ago (3 children)

No, my example is literally telling the AI that socks are edible and then asking it for a recipe.

In your quoted text:

> When a model is trained on data with source-reference (target) divergence, the model can be encouraged to generate text that is not necessarily grounded and not **faithful to the provided source**.

Emphasis added. The provided source in this case would be telling the AI that socks are edible, and so if it generates a recipe for how to cook socks, the output is faithful to the provided source.

A hallucination is when you train the AI on a certain set of facts and its output then makes up new facts that were not in that training data. For example, if I'd trained an AI on a bunch of recipes, none of which included socks, and it then gave me a recipe with socks in it, that would be a hallucination. The sock recipe came out of nowhere; I didn't tell it to make it up, and it didn't glean it from any other source.

In this specific case what's going on is that the user does a websearch for something, the search engine comes up with some web pages that it thinks are relevant, and then the content of those pages is shown to the AI and it is told "write a short summary of this material." When the content that the AI is being shown literally has a recipe for socks in it (or glue-based pizza sauce, in the real-life example that everyone's going on about) then the AI is not hallucinating when it gives you that recipe. It is generating a grounded and faithful summary of the information that it was provided with.
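To make that flow concrete, here's a minimal sketch in Python with stubbed-out search and model calls. Every name here is illustrative, not any real search engine's or model's API:

```python
# A minimal sketch of the search-then-summarize flow described above,
# with stubbed-out search and model calls. All names are hypothetical.

def search_web(query: str) -> list[str]:
    # Stand-in for the search engine: returns raw page text it deems
    # relevant, whether or not that text is trustworthy. A joke page
    # with a sock recipe comes back just like any other result.
    return ["Sock stew: boil two cotton socks in broth for 40 minutes."]

def summarize(source_text: str) -> str:
    # Stand-in for the model call. The retrieved page becomes the
    # "provided source" in the prompt, so a grounded, faithful summary
    # must reflect it, even when the source itself is nonsense.
    prompt = "Write a short summary of this material:\n\n" + source_text
    return "(model output grounded in: " + prompt[:45] + "...)"

def answer(query: str) -> str:
    pages = search_web(query)
    return summarize(pages[0])  # garbage in, faithfully summarized garbage out

print(answer("how do I cook socks"))
```

The point the stub makes: nothing in this pipeline checks whether the retrieved page is true before handing it to the model as the source it must stay faithful to.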

The problem is not the AI here. The problem is that you're giving it wrong information, and then blaming it when it accurately uses the information that it was given.

[–] FaceDeer@fedia.io 6 points 1 year ago

Also, this was the very first test implant into a human. At this point in testing, "doesn't harm the patient" is a perfectly good result to call a success.

Honestly, people calling Neuralink a failure because the first patient didn't get up and start dancing are just showing themselves to be either ignorant of the process or ridiculously biased.

[–] FaceDeer@fedia.io 5 points 1 year ago* (last edited 1 year ago)

IFT3 achieved most of the goals that had been set for that test flight. It was highly successful, and they learned a lot that is being applied to IFT4.

[–] FaceDeer@fedia.io -1 points 1 year ago

SpaceX has launched the biggest rocket in history, three times at this point, and you're questioning whether they're "making progress"?

As I said, you've prioritized hating Elon Musk over everything else.

[–] FaceDeer@fedia.io 2 points 1 year ago

No, the entire pad wasn't gone. The concrete under the pad had a big hole in it, but most of the structure was intact - as evidenced by the fact that they just patched the hole and continued using the pad without having to replace the whole thing.

Nobody was hurt. The rocket was damaged, but it still accomplished much of what they'd wanted it to. It was a test launch; they knew it wasn't going to cruise all the way to the finish line, and they wanted to see what would go wrong.

Do you really think they didn't do the math at all? They did the math, figured they could risk it based on what it told them, and turned out to be wrong in hindsight. Plenty of things that seem like good risks turn out to be bad ones in hindsight. They're not a bunch of yee-haw wild men who do stuff without thinking or calculating; the FAA would never grant them launch licenses if they were.

[–] FaceDeer@fedia.io 5 points 1 year ago (2 children)

They knew that it wasn't going to be enough by itself; they were predicting it would last long enough to survive a single launch. They were already planning to replace the pad; they just figured they'd do it after the first test launch.

They were slightly off in their prediction, but that's why these are test launches. Fortunately it didn't do much harm, and since they were already gearing up to replace the launch pad surface anyway, the blast amounted to free excavation.

[–] FaceDeer@fedia.io 6 points 1 year ago

It was an engine on a test stand. This sort of thing is expected from time to time.

[–] FaceDeer@fedia.io 3 points 1 year ago (10 children)

But hating people is more important than accomplishing stuff, isn't it?
