AI Disruption Will Be Greater Than Nukes Suddenly Appearing to Pre-Civilization Humanity

And it’s coming faster and will have far greater impact than expected

Kevin Ann
6 min read · Apr 1, 2023

The capabilities of GPT-4, the Large Language Model behind ChatGPT, are absolutely incredible and mind-blowing. What’s wild is that:

  • It’s only just appeared for mainstream use, and future versions will be far more capable
  • GPT-4 and ChatGPT are only a subset of the broader landscape in terms of products and companies, the field of Generative AI, and broader Artificial Intelligence
  • All changes are accelerating, so it’s hard to even imagine what’s possible in a few months, let alone years.

I’ve been thinking non-stop, with a sense of wonder but also fear and dread, about the implications of accelerating AI, and this is my first attempt to articulate and summarize the thoughts and conversations I’ve had about it. Since ChatGPT can easily rewrite this for far better organization, readability, and style, my goal is to get this out in the most efficient way possible: just type as fast and as much as possible.

Created by MidJourney. Prompt: “Post-singularity futuristic cityscape.” Will this Utopian version of the future occur?

1. Labor Market and Political/Economic Theories

Universal Basic Income is among the things I’ve changed my mind about now that I see the massive disruptive potential of ChatGPT. What’s interesting is that ChatGPT is only a small application of the huge changes powerful new Generative AI will bring, and those changes will hit higher-income workers faster.

A core part of our economic and political theories is based on the scarcity of resources, most relevantly the scarcity of labor, and in today’s world, of “skilled” labor performed by information workers.

But technology is now rapidly arriving that can replace skilled labor and do it better, faster, and cheaper, almost to the point of being free. In my daily work, I already see how this first version of ChatGPT can do 80% of my programming tasks better, faster, and for free.

White-collar workers will suffer the same fate as blue-collar workers did, with wages dropping rapidly. The disruption is still coming for blue-collar and more manual work, but even those jobs will be gone soon with rapidly improving robotics.

So it’s not that I’m arbitrarily choosing to believe UBI works; I think it’s inevitable.

Many objections to UBI are tied to ideology, which in turn is tied to deeper issues such as “values” like “hard work” and “deservingness.” However, the reality is that no matter how hard you work, you won’t be better than AI at programming and many other tasks.

As for “deservingness,” I see it as an increasingly limited concept, since for example:

1. Some people are simply born with math ability or general intelligence that commands a higher wage in today’s information-based society

2. Some people are born without any genetic illnesses, don’t develop chronic illnesses, and/or are lucky enough to escape other illnesses like cancer that would prevent even the most well-intentioned person from doing more (these are just two of many possible examples)

It’s time to reassess all our political and economic beliefs and theories for the coming age of smarter-than-human AI.

2. Difference Between Weak Narrow AI and Broad AGI (Artificial “General” Intelligence)

Source: Gary Marcus ai-risk-agi-risk

There are two distinct types of risk with AI.

1. “Weak” Narrow AI

This is “just” increasingly advanced versions of ChatGPT that completely change society through massive displacement of the labor market as many jobs become completely obsolete. What happens if 30%, 50%, or 80% of the workforce can no longer fetch a living wage? How about 95%? (This is why I think Universal Basic Income is an inevitability, unless society simply collapses, with people on the streets and no food, shelter, etc.)

This is scary and crazy enough even as the baseline case (at least in the near future), where the crazier AI never arrives.

2. “Strong” Artificial General Intelligence (AGI) — Skynet from Terminator becomes a reality.

I think of AGI as a far more powerful version of nukes.

With nukes, the destructive potential lies in how each country uses them, but they’re “just” powerful weapons that are tools.

With AGI, everything above still applies, but with the extra possibility that the AGI develops its own goals/motivations and becomes more powerful than the countries that created it, which is not possible with nukes.

Created by MidJourney. Prompt: “Dystopian future landscape.” Is this in store for humanity?

3. Infinite Energy Via Nuclear Fusion

Among the many disruptive uses of AI will be speeding up the arrival of nuclear fusion for effectively infinite, free energy. Having energy too cheap to meter is itself a huge change.

After the US Department of Energy announced a proof of concept of nuclear fusion as a viable energy source on December 13, 2022, many people estimate a few more decades before fusion arrives for practical use.

I view these timelines as far too conservative and pessimistic, since they assume the rate of progress on fusion over the last 70 years will continue into the future, along with only incremental, linear advances in science and technology.

I expect rapidly arriving AI will greatly compress these timelines.

Once infinite free energy via fusion arrives, there’ll be even more uncertainty atop the uncertainty introduced by AI. Some examples include:

  • Far greater information processing in the GPU datacenters that power AI, in a glorious feedback loop where that AI makes more powerful and efficient AI, including better theoretical progress in AI algorithms
  • Current problems involving fossil fuels and the implications of climate change are solved — carbon can be scrubbed from the atmosphere to create an abundant supply of carbon nanotubes, which themselves have a large number of uses
  • We can create entire metropolises underground, under the oceans, or in space, powered by fusion, with artificial light to grow crops for the food supply
  • Solving the energy-consumption problems of blockchains and cryptocurrencies will permit all the visions of crypto to be realized

In general, we can map energy to computation, so any issue involving scarcity of computational resources will be resolved by infinite free fusion energy (limited only by theoretical computational issues such as computational complexity).
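To make that energy-to-computation mapping concrete, here’s a rough back-of-envelope sketch using the Landauer limit, the theoretical minimum energy needed to erase one bit of information at a given temperature. The 300 K room temperature and the 1 GW fusion plant are purely illustrative assumptions on my part, and real hardware runs many orders of magnitude above this limit, so treat the numbers as an upper bound rather than a prediction.

```python
# Back-of-envelope: how much irreversible computation could abundant fusion
# energy buy at the thermodynamic (Landauer) limit? Real hardware is many
# orders of magnitude less efficient, so this is an upper bound, not a forecast.
import math

BOLTZMANN_K = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
TEMPERATURE_K = 300.0        # assumed room-temperature operation

# Landauer limit: minimum energy to erase one bit is k * T * ln(2)
energy_per_bit_j = BOLTZMANN_K * TEMPERATURE_K * math.log(2)   # ~2.87e-21 J
bit_erasures_per_joule = 1.0 / energy_per_bit_j                # ~3.5e20

print(f"Landauer limit at {TEMPERATURE_K:.0f} K: {energy_per_bit_j:.3e} J per bit")
print(f"Upper bound: {bit_erasures_per_joule:.3e} bit erasures per joule")

# For scale: one year of output from a hypothetical 1 GW fusion plant
plant_power_w = 1e9
seconds_per_year = 365.25 * 24 * 3600
energy_per_year_j = plant_power_w * seconds_per_year           # ~3.2e16 J
print(f"1 GW-year could pay for {energy_per_year_j * bit_erasures_per_joule:.3e} "
      f"bit erasures at this limit")
```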

Wider implications here involve genomic processing for tailored, personalized treatments at the genome and cellular level for enhanced longevity, i.e., biological human lifespans in the hundreds or thousands of years, assuming we don’t upload our minds or “just” augment them via neural prostheses.

I had a long list of other potential issues to consider, such as what AI implies for education, healthcare, government, military conflict, geopolitics, etc., each of which could span far more than what I’ve written, but I think the three sections above cover most of it concerning:

  1. The immediate disruption of the labor market and society by AI, which will force us to rethink our beliefs and political/economic systems. This alone is hugely disruptive.
  2. The difference between the far less disruptive case of weak/narrow AI and strong AGI (artificial general intelligence), in which case it’s like an advanced alien civilization or even God suddenly arriving on Earth
  3. Implications of infinite free energy via nuclear fusion

Any forecasting or deeper thinking beyond this high level will most likely be wrong since it’s limited by my imagination, and I simply don’t think I’m capable of imagining anything bigger or more precise.

Mental Health and the Alignment Problem

Personally, I find the inevitable arrival of smarter-than-human AGI, one(s) that might not share human goals and thus present an existential risk to humanity, a terrifying prospect that hits me deep. This issue falls under “The Alignment Problem.”

There’s much to write about there, but this post on Less Wrong talks about mental health related to this reality.


Kevin Ann

AI/full-stack software engineer | trader/investor/entrepreneur | physics phd