And Android users are under no obligation to leave a good review after not receiving support.
I have no problem with his actions; if he doesn’t have the resources, energy, or time to support every platform, who can complain about that? But I don’t think he’s very good at the communicating-with-other-humans part of software, which in the OSS world sadly tends to fall on the same devs who do the work. He could have avoided both this comment thread and the angry Android user above with zero extra effort simply by phrasing things better.
The particular phrasing he chose suggests to me that he’s lumping all users of each platform together in his head, with each negative interaction building on the last. That isn’t the healthiest attitude, and it does indeed make him look like an arsehole to anyone who’s just turned up and hasn’t yet done anything wrong.
The difference between LLMs and human intelligence is stark. But the difference between LLMs and other forms of computer intelligence is stark too (e.g. LLMs can’t do fairly basic maths, whereas computers have always been superintelligences in the calculator domain). It’s reasonable to assume someone will figure out how to make an LLM integrate better with the rest of the computer sooner rather than later, and we don’t really know what that will look like. It may not even require many new capabilities.
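One concrete shape this integration already takes is tool calling: instead of guessing at arithmetic, the model emits a structured request and ordinary code does the exact computation. A minimal sketch, with the model side stubbed out and all function names hypothetical:

```python
# Sketch of tool calling: exact arithmetic is delegated to regular code,
# the domain where computers have always been superintelligent.
import ast
import operator

# Map AST operator nodes to their plain-Python implementations.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def calculator(expr: str):
    """Safely evaluate a plain arithmetic expression (no eval())."""
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def fake_llm(prompt: str) -> dict:
    """Stand-in for a model that answers with a tool request
    rather than attempting the maths itself."""
    return {"tool": "calculator", "args": prompt}

def answer(prompt: str):
    """Runtime loop: route the model's tool request to real code."""
    call = fake_llm(prompt)
    if call["tool"] == "calculator":
        return calculator(call["args"])
    raise ValueError("unknown tool")

print(answer("12345 * 6789"))  # exact, unlike a raw completion
```

The interesting part isn’t the calculator, it’s the dispatch: the model only has to recognise *when* to hand off, not how to compute.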
The reality is we don’t know how many steps lie between now and AGI. Before the big LLM hype, some people insisted quality language processing was the key missing piece; that now looks a little naive, but we still don’t know exactly what’s missing. So it’s better to plan ahead, and maybe arrive early at solutions, than to wait until AGI has arrived and done something irreversible before we start planning for it.