True AI Risks: Slowing Progress and Infinite Hype

Kevin Ann
4 min read · Jul 14, 2024

“AI Doomers” claim that superintelligence/AGI (Artificial General Intelligence) is imminent and that we should therefore slow or even stop AI research. That position presumes all of the following:

  1. Humanity is indeed capable of creating an AI God
  2. Not only that, but we might do so “accidentally” or “inadvertently” in a hard-takeoff scenario, ignoring real-world constraints such as compute and energy
  3. We can control or “align” that superintelligence, which is akin to saying an ant can somehow align humans

Meanwhile, LLM chatbots (LLM = Large Language Model) still make simple errors like dividing by zero because they cannot do real math (they merely mimic something that looks like math), they cannot verify simple facts about the world (factuality remains a major unsolved problem), and they generally fail dramatically on tasks outside their training set.
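
To make the “mimicry versus computation” point concrete, here is a toy sketch in Python. It is purely illustrative, not a real LLM, and every name in it (training_set, mimic_answer, real_answer) is hypothetical: a “mimic” that pattern-matches memorized question/answer pairs, contrasted with an actual calculator.

```python
# Toy illustration only: a "mimic" that imitates the surface form of math
# by looking up memorized question/answer pairs, versus real computation.
# All names here (training_set, mimic_answer, real_answer) are hypothetical.

training_set = {
    "2 + 2": "4",
    "10 / 2": "5",
    "7 * 3": "21",
}

def mimic_answer(question: str) -> str:
    """Return the memorized answer whose question looks most similar.
    No arithmetic happens; this only imitates the appearance of math."""
    best = max(training_set, key=lambda q: len(set(q) & set(question)))
    return training_set[best]

def real_answer(question: str) -> str:
    """Actually evaluate the expression, surfacing genuine errors."""
    try:
        return str(eval(question))  # acceptable only in this toy example
    except ZeroDivisionError:
        return "undefined (division by zero)"

for q in ["2 + 2", "9 / 0"]:
    print(q, "| mimic:", mimic_answer(q), "| real:", real_answer(q))
```

The mimic confidently returns a memorized-looking “5” for “9 / 0”, while the calculator correctly reports that the result is undefined. The only point of the sketch is that imitating the surface form of math is not the same as doing math.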

Exponential growth curve in AI Hype. The Singularity is Here.

In general, LLM chatbots only “appear” to be reasoning or intelligent because they produce output in human language.

I like the analogy of AI’s path to superintelligence as building a ladder to the moon in a pre-modern society. It may “appear” possible that simply adding rungs to the ladder can get us to the moon, but there exist fundamental scientific breakthroughs and massive engineering challenges to…
