Kevin Ann
May 18, 2023

--

It helps me to consider two distinct classes of AI benefits and risks: (1) Practical/Immediate vs. (2) Existential.

Class 1: Practical/Immediate risks include job loss, racism/sexism/other *-isms, misinformation, etc. These are indeed very important, but they are tractable through engineering or law, given a concerted effort by the world's nations working together.

Class 2: Compare these to the Existential Risks posed by Artificial Superintelligence (smarter-than-human AI that can improve itself), which will have a far bigger impact on humanity, may be intractable even in principle, and stands to annihilate us. We must distinguish between these two classes of risk.

...

The question I ask is simply:

A. Can Artificial Superintelligence (ASI) cause the extinction of humanity?

Yes.

B. Is ASI impossible?

No.

Because of the sheer magnitude of ASI's Existential Danger, even a small probability, or even mere uncertainty, warrants serious attention.

--


Written by Kevin Ann

AI/full-stack software engineer | trader/investor/entrepreneur | physics phd
