diz

joined 2 years ago
[–] diz@awful.systems 3 points 6 months ago* (last edited 6 months ago)

Film photography is my hobby, and I think there isn’t anything that would prevent you from exposing a displayed image onto a piece of film, except for the cost.

Glass plates it is, then. Good luck matching the resolution.

In all seriousness though, I think your normal setup would be detectable even on normal 35mm film, due to 1: insufficient resolution (even at 4K, probably even at 8K), and 2: insufficient dynamic range. There would probably also be some effects of spectral response mismatch - reds that are cut off by the film’s spectral response would be converted into film-visible reds by a display.

Detection of forgery may require use of a microscope and maybe some statistical techniques. Even if the pixels are smaller than film grains, pixels are on a regular grid and film grains are not.
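For what the resolution point is worth, here's a rough back-of-envelope check (my numbers, not from the comment: a full 35mm frame is 36 mm wide, and fine-grained film resolves on the order of 100 line pairs per mm; datasheets range from roughly 50 to 200+ lp/mm):

```python
# Back-of-envelope: can a display out-resolve 35mm film?
# Assumed numbers: 36 mm frame width, ~100 lp/mm film resolving power.
FRAME_WIDTH_MM = 36.0
FILM_LP_PER_MM = 100.0

def display_lp_per_mm(horizontal_pixels: int) -> float:
    """Line pairs per mm a display can project onto the frame.
    Two pixels are needed per line pair (Nyquist)."""
    return (horizontal_pixels / 2.0) / FRAME_WIDTH_MM

for name, px in [("4K", 3840), ("8K", 7680)]:
    lp = display_lp_per_mm(px)
    print(f"{name}: {lp:.0f} lp/mm vs film's ~{FILM_LP_PER_MM:.0f} lp/mm")
```

On these assumptions 4K projects only ~53 lp/mm onto the frame, and 8K only just matches a fine-grained emulsion, consistent with the "insufficient resolution even at 4K, probably even at 8K" point.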

Edit: trained eyeballing may also work fine if you are familiar with the look of that specific film.

[–] diz@awful.systems 5 points 6 months ago* (last edited 6 months ago)

Hmm, maybe too premature - chatgpt has history on by default now, so maybe that's where it got the idea it was a classic puzzle?

With history off, it still sounds like it has the problem in the training dataset, but it is much more bizarre:

https://markdownpastebin.com/?id=68b58bd1c4154789a493df964b3618f1

Could also be randomness.

Select snippet:

Example 1: N = 2 boats

Both ferrymen row their two boats across (time = D/v = 1/3 h). One ferryman (say A) swims back alone to the west bank (time = D/u = 1 h). That same ferryman (A) now rows the second boat back across (time = 1/3 h). Meanwhile, the other ferryman (B) has just been waiting on the east bank—but now both are on the east side, and both boats are there.

Total time

$$ T_2 \;=\; \frac{1}{3} \;+\; 1 \;+\; \frac{1}{3} \;=\; \frac{5}{3}\ \mathrm{hours} \approx 1\,\mathrm{h}\,40\,\mathrm{min}. $$

I have to say, with history off it sounds like an even more ambitious moron. I think their history thing may be sort of freezing bot behavior in time, because the bot sees a lot of its own past outputs, and in the past it was a lot less into shitting LaTeX all over the place when doing a puzzle.

[–] diz@awful.systems 10 points 6 months ago (1 children)

Now we need to make a logic puzzle involving two people and one cup. Perhaps they are trying to share a drink equitably. Each time they drink one third of remaining cup’s volume.
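For what it's worth, taken literally that scheme never empties the cup and isn't equitable either: after n sips, (2/3)^n of the volume remains, and the person who sips first converges on 3/5 of the total. A throwaway sketch (mine, not from the comment):

```python
from fractions import Fraction

def remaining_after(sips: int) -> Fraction:
    """Volume left after each sip removes one third of what remains."""
    return Fraction(2, 3) ** sips

def first_drinker_total(sips: int) -> Fraction:
    """Total drunk by whoever goes first, with alternating sips."""
    total = Fraction(0)
    left = Fraction(1)
    for k in range(sips):
        sip = left / 3
        if k % 2 == 0:  # first drinker takes the even-indexed sips
            total += sip
        left -= sip
    return total

print(remaining_after(3))               # 8/27 of the cup still left
print(float(first_drinker_total(100)))  # converges on 3/5: not equitable
```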

[–] diz@awful.systems 15 points 6 months ago* (last edited 6 months ago) (3 children)

Yeah that's the version of the problem that chatgpt itself produced, with no towing etc.

I just find it funny that they would train on some sneer problem like this, to the point of making their chatbot look even more stupid. A "300 billion dollar" business, reacting to being made fun of by a very small number of people.

[–] diz@awful.systems 9 points 6 months ago* (last edited 6 months ago)

Oh wow it is precisely the problem I "predicted" before: there are surprisingly few production grade implementations to plagiarize from.

Even for seemingly simple stuff. You might think parsing floating point numbers from strings would have a gazillion examples. But it is quite tricky to do correctly (a correct implementation lets you convert a floating point number to a string with enough digits, and back, and always obtain precisely the same number you started with). So even for such an omnipresent example, one that has probably been implemented well over 10 000 times by various students, if you start pestering your bot with requests to make it better, and have the bot write the tests and pass them, you could end up plagiarizing something identifiable.
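That round-trip property is easy to state as a test. A minimal sketch in Python, whose own `repr`/`float` pair already guarantees shortest round-tripping output, so it passes by construction:

```python
import math
import random
import struct

def roundtrips(x: float) -> bool:
    """A correct float-to-string conversion must emit enough digits
    that parsing the string back yields the exact same bits."""
    return struct.pack("<d", float(repr(x))) == struct.pack("<d", x)

random.seed(0)
for _ in range(10_000):
    # Sample random finite doubles by reinterpreting random bit patterns.
    (x,) = struct.unpack("<d", random.getrandbits(64).to_bytes(8, "little"))
    if math.isfinite(x):
        assert roundtrips(x)
print("all sampled doubles round-trip")
```

A hand-rolled parser or formatter that drops even one trailing digit fails this test on some inputs, which is exactly why correct implementations are rarer than you'd expect.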

edit: and even suppose there were 2, or 3, or 5 exfat implementations. They would be too different to "blur" together. The deniable plagiarism that they are trying to sell - "it learns the answer in general from many implementations, then writes original code" - is bullshit.

[–] diz@awful.systems 1 points 7 months ago

I'm kind of dubious it's effective in any term whatsoever, unless the term is "nothing works but we got a lot of it".

[–] diz@awful.systems 6 points 7 months ago

I think if people are citing in another 3 months time, they’ll be making a mistake

In 3 months they'll think they're 40% faster while being 38% slower. And sometime in 2026 they will be exactly 100% slower - the moment referred to as "technological singularity".

[–] diz@awful.systems 5 points 7 months ago (2 children)

That philosophy always ends in stepping into dogshit to try to boost stock prices.

[–] diz@awful.systems 7 points 7 months ago* (last edited 7 months ago)

When they tested on bugs not in SWE-Bench, the success rate dropped to 57‑71% on random items, and 50‑68% on fresh issues created after the benchmark snapshot. I’m surprised they did that well.

After the benchmark snapshot. Could still be before the LLM training data cutoff, or available via RAG.

edit: For a fair test you have to use git issues that had not been resolved yet by a human.

This is how these fuckers talk, all of the time. Also see Sam Altman's not-quite-denials of training on Scarlett Johansson's voice: they just asserted that they had hired a voice actor, but didn't deny training on Scarlett Johansson's actual voice. edit: because anyone with half a brain knows that not only did they train on her actual voice, they probably gave it and their other pirated movie soundtracks massively higher weighting, just as they did for books and NYT articles.

Anyhow, I fully expect that by now they just use everything they can to cheat benchmarks, up to and including RAG from solutions past the training data cutoff date. With two of the paper authors being from Microsoft itself, expect that their "fresh issues" are gamed too.

[–] diz@awful.systems 7 points 7 months ago

Yeah, I'm thinking that people who think their brains work like an LLM may be somewhat correct. Still wrong in some ways, since even their brains learn from several orders of magnitude less data than LLMs do, but close enough.

[–] diz@awful.systems 6 points 7 months ago* (last edited 7 months ago) (1 children)

You can film with an actual camera, then use video-to-video to make it look very AI. If you're just grifting, that would be the way to go, I think.

[–] diz@awful.systems 7 points 7 months ago (1 children)

They're also very gleeful about finally having one-upped the experts with one weird trick.

Up until AI they were the people who were inept and late at adopting new technology, and now they get to feel that they're ahead (because this time the new half-assed technology was pushed onto them and they didn't figure out they needed to opt out).
