More flaming dog poop appeared on my doorstep, in the form of this article published in VentureBeat. VB appears to be an online magazine for publishing Silicon Valley propaganda, focused on boosting startups, so it's no surprise that they'd publish this drivel sent in by some guy trying to parlay prompting into writing.
Point:
Apple argues that LRMs must not be able to think; instead, they just perform pattern-matching. The evidence they provided is that LRMs with chain-of-thought (CoT) reasoning are unable to carry out the calculation using a predefined algorithm as the problem grows.
Counterpoint, by the author:
This is a fundamentally flawed argument. If you ask a human who already knows the algorithm for solving the Tower-of-Hanoi problem to solve a Tower-of-Hanoi problem with twenty discs, for instance, he or she would almost certainly fail to do so. By that logic, we must conclude that humans cannot think either.
As someone who already knows the algorithm for solving the ToH problem, I wouldn't "fail" at solving the one with twenty discs so much as I'd know that the number of moves is exponential in the number of discs (2^20 - 1 = 1,048,575 for twenty), and refuse to indulge your shit reasoning.
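For the curious, that count falls straight out of the textbook recursive solution. Here's a quick Python sketch (my own toy, not anything from the article; the name hanoi_moves is just mine) that counts the moves instead of bothering to print a million of them:

```python
def hanoi_moves(n, src="A", dst="C", aux="B"):
    """Textbook recursive Tower of Hanoi; returns the total number of moves."""
    if n == 0:
        return 0
    total = hanoi_moves(n - 1, src, aux, dst)   # park n-1 discs on the spare peg
    total += 1                                  # move the largest disc to the target
    total += hanoi_moves(n - 1, aux, dst, src)  # stack the n-1 discs back on top
    return total

# The move count is 2^n - 1, so twenty discs means 2**20 - 1 moves.
print(hanoi_moves(20))  # 1048575
```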
However, this argument only points to the idea that there is no evidence that LRMs cannot think.
Argument proven stupid, so we're back to square one on this, buddy.
This alone certainly does not mean that LRMs can think — just that we cannot be sure they don’t.
Ah yes, some of my favorite GOP turns of phrase: "no unknown unknowns" + "big if true".
