“AI Doomers” claim that superintelligence/AGI (Artificial General Intelligence) is imminent, and that we should therefore slow or even stop AI research, since all of the following hold:
- Humanity is indeed capable of creating an AI God
- Not only that, but we might do so “accidentally” or “inadvertently” in a hard takeoff scenario, ignoring real-world constraints such as compute and energy
- We can control or “align” that superintelligence, which is akin to claiming an ant could somehow align humans
Meanwhile, LLM (Large Language Model) chatbots still make simple errors like mishandling division by zero, since they can’t do real math (they merely mimic something that looks like math), cannot verify simple facts about the world (factuality remains a major unsolved problem), and generally fail dramatically on tasks outside their training data.

In general, LLM chatbots only “appear” to reason or be intelligent because they produce output in human language.
I like the analogy that AI’s path to superintelligence is like building a ladder to the moon in a pre-modern society. It may “appear” possible that simply adding rungs will get us to the moon, but there are fundamental scientific breakthroughs and massive engineering challenges to overcome, some of which, such as Newtonian mechanics and rocket propulsion, weren’t even conceptualized by those pre-modern civilizations.
Personally, I’m far more worried that science is slowing, AI progress is diminishing, and we hit another AI winter. Humanity’s technological capability as a function of time may appear to be a single exponential, but it’s really composed of overlapping S-curves. LLMs are incredible, but they aren’t really reasoning, and we need at least one more scientific breakthrough to reach superintelligence.
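The overlapping-S-curves point can be sketched numerically: summing a few logistic curves, each standing in for a technology paradigm that eventually saturates, with later paradigms contributing more, yields a trajectory that looks exponential while new curves keep arriving and plateaus once the last one saturates. The midpoints and heights below are illustrative assumptions, not data:

```python
import math

def logistic(t, midpoint, height):
    """One S-curve: a single technology paradigm that saturates at `height`."""
    return height / (1.0 + math.exp(-(t - midpoint)))

def capability(t):
    """Total capability: overlapping paradigms, each arriving later and larger.

    Midpoints (0, 5, 10, 15) and doubling heights are hypothetical choices
    made purely to illustrate the shape of the sum.
    """
    return sum(logistic(t, midpoint=m, height=2.0 ** i)
               for i, m in enumerate([0, 5, 10, 15]))

values = [capability(t) for t in range(25)]

# While new S-curves are still arriving (around t = 10), growth is steep and
# looks exponential; after the last paradigm saturates (t > 20), growth stalls,
# i.e., the "winter" plateau.
</imports>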
As for AI, why hasn’t it already solved Nuclear War, Cancer, Heart Disease, Dementia, Energy, Education, Healthcare, Aging, Ukraine/Russia, Israel/Palestine, China/Taiwan, quantum/relativity unification, and other “real” problems? Because these lie outside the training set and require genuinely novel, complex, and nuanced thinking.
Of course, it’s “possible” we have the hard takeoff and self-improving…