“AI Doomers” claim that superintelligence/AGI (Artificial General Intelligence) is imminent and that we should therefore slow or even stop AI research. This argument rests on all of the following being true:
- Humanity is indeed capable of creating an AI God
- Not only that, but we might do so “accidentally” or “inadvertently” in a hard-takeoff scenario, a claim that waves away real resource constraints such as compute and energy
- We can control or “align” that superintelligence, which is roughly like claiming an ant could somehow align humans
Meanwhile, LLM (Large Language Model) chatbots still make elementary mistakes, such as happily dividing by zero, because they cannot do real math: they only mimic the surface form of mathematics. They cannot verify simple facts about the world (factuality remains a major unsolved problem), and in general they fail dramatically on tasks outside their training distribution.
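To make the “mimics math rather than computes it” point concrete, here is a minimal sketch of how one might probe this claim yourself. The `ask_llm` function is a hypothetical placeholder for whatever chatbot API you use, not a real library call; the contrast is that Python computes the answer exactly, while the model merely predicts plausible-looking text.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chatbot API; wire up a real client here."""
    raise NotImplementedError("replace with an actual chatbot call")

def check_arithmetic(a: int, b: int) -> bool:
    """Ask the model for a product and verify it against exact integer math."""
    reply = ask_llm(f"What is {a} * {b}? Reply with digits only.")
    try:
        # Python evaluates a * b exactly; the model only predicts likely tokens.
        return int(reply.strip().replace(",", "")) == a * b
    except ValueError:
        return False  # a non-numeric reply counts as a failure
```

Run this over large random operands and the failure rate tends to climb, which is the behavior you would expect from pattern imitation rather than computation.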
In general, LLM chatbots only “appear” to reason or be intelligent because they produce output in fluent human language.
I like the analogy of AI’s path to superintelligence as building a ladder to the moon in a pre-modern society. It may “appear” that simply adding rungs to the ladder can get us to the moon, but there remain fundamental scientific breakthroughs and massive engineering challenges to…