I can't stop thinking about this piece from Gary Marcus I read a few days ago, "How o3 and Grok 4 Accidentally Vindicated Neurosymbolic AI." It's a fascinating read on the differences between connectionist and symbolic AI, and the merging of the two into neurosymbolic AI, from someone who understands the topic.
I recommend giving the whole thing a read, but this little nugget at the end is what caught my attention:
Why was the industry so quick to rally around a connectionist-only approach and shut out naysayers? Why were the top companies in the space seemingly shy about their recent neurosymbolic successes?
Nobody knows for sure. But it may well be as simple as money. The message that we can simply scale our way to AGI is incredibly attractive to investors because it puts money as the central (and sufficient) force needed to advance.
AGI is still rather poorly defined, and taking cues from Ed Zitron (another favorite of mine), the goalposts will keep moving. Scaling fast and hard to several gigglefucks of power and claiming you've achieved AGI is the next big maneuver. All of this largely just to treat AI as a black hole for accountability: the super smart computer said we had to take away your healthcare.
Yeah. I really hate it when someone has what I would call a "bad take," I try to engage them in conversation, and they get downvoted to hell. Like, sorry friend, I swear it wasn't me; I'm just trying to have a conversation too ☹️