Sam Altman wants funding right?
Here is an idea. I would pay 1000 dollars to get in a boxing ring with this guy, and probably a lot of other people would love to get a shot at that punchable face, no?
We have solved funding.
Welcome to Programmer Humor!
This is a place where you can post jokes, memes, humor, etc. related to programming!
For sharing awful code there's also Programming Horror.
Dear Scat Altman, just add a timestamp to each response that the LLM can read.
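The fix this comment proposes could be sketched in a few lines. This is a hypothetical illustration (the function name is made up, not any real API): prepend a wall-clock timestamp to each message so the model has something concrete to reason about.

```javascript
// Hypothetical sketch (names invented): prepend a wall-clock timestamp
// to each message so the model can see when it was sent.
function withTimestamp(message) {
  const stamp = new Date().toISOString();
  return `[${stamp}] ${message}`;
}

// Every message the model sees now carries the send time.
console.log(withTimestamp("set a 10 minute timer"));
```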
Okay, so, in case the headline is confusing anyone else, it's literal. Like, you know how there are those cringe-ass Alexa ads about how it does AI language processing and assistant shit? Yeah, ChatGPT can't, I guess.
Just make Codex write the code for it. Should be easy. Don’t even need humans. Right?
Tell me the name of a person who needs an AI to set a timer. It is useless as ****!
Lol. Why don't they ask the AI how to program an AI?
They should just vibe code the feature. They'll have it done in an afternoon, right?
Everyone’s getting their knickers in a twist over nothing here.
Of course an AI can track time, if it’s given access to a timer MCP server.
Can we track time without tools, just in our heads? Certainly not very accurately. We can, however, track it reasonably accurately if given access to a quartz stopwatch (typically +/-15 s/month).
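The "give it a clock" idea above can be sketched in a few lines of JavaScript. This is a minimal, hypothetical tool shape (not a real MCP server; the names are made up): once the model can call a clock, elapsed time is just arithmetic over two readings.

```javascript
// Hypothetical "clock tool" an assistant could be handed.
// The tool name and return shape here are invented for illustration.
function getCurrentTime() {
  return { iso: new Date().toISOString(), epochMs: Date.now() };
}

// With two tool calls, tracking elapsed time is simple arithmetic:
function elapsedSeconds(startMs, endMs) {
  return (endMs - startMs) / 1000;
}

const t0 = getCurrentTime();
const t1 = getCurrentTime();
console.log(elapsedSeconds(t0.epochMs, t1.epochMs)); // close to 0 here
```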
A language model is based around language and reasoning by words/symbols. It’s not a surprise it doesn’t have timing capability.
What Altman SHOULD be embarrassed about is that the model lies about its capabilities. That implies the context is still not right - the model should be trained and given context that prevents the lying. That's a much more worrying issue - and something Anthropic handles far better, IMHO (when asked if it can track time, it says "no, not on my own", and then proceeds to build a JavaScript timer that it offers up to track time).
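The JavaScript timer described above would look something like this. A rough sketch, not Claude's actual output: the model can't track time itself, but it can hand you code that does.

```javascript
// Pure helper: seconds remaining until a deadline, never negative.
function secondsLeft(deadlineMs, nowMs) {
  return Math.max(0, Math.ceil((deadlineMs - nowMs) / 1000));
}

// Countdown timer: polls the real clock and fires a callback when done.
function startTimer(seconds, onDone) {
  const deadline = Date.now() + seconds * 1000;
  const id = setInterval(() => {
    if (secondsLeft(deadline, Date.now()) === 0) {
      clearInterval(id);
      onDone();
    }
  }, 250);
}

// Usage: startTimer(600, () => console.log("time's up"));
```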



I don't use them, but I loosely follow the news about them. The reason for this is epistemic humility. Claude has a pretty good idea of what its capabilities are and where the ceiling is. ChatGPT has no clue what its limits are, so it believes it can do everything. Basically, ChatGPT has a lot of info and no idea where the gaps live, while Claude has a fair idea of when to search or use some external function to handle something. Gemini has less of this than Claude but more than ChatGPT. Grok has little to no epistemic humility, but it did manage to accurately portray Musk as a world champion piss drinker, something none of the others were able to do.
I say that, but it's been a few months since I looked. That could have changed, because shit moves fast. By the looks of what it's trying to do with the timer, ChatGPT has less than it used to. Possibly because of the way the model is trained to be helpful and confident.
Scam Altman sounds like it's a name straight from an hltv comment section, I love it
You would already be doing a great service to the world if you produced a really well-tuned search engine / information digger with LLMs, but no, you had to periodically hype it as AGI because it can memorize entire textbooks with some accuracy. You did this to yourselves, and if you fall, it will be because of these expectations which are not met.
See, it does not understand time, so in order to vibe code in timer functionality, they need to start feeding it clocks.