The Apocalypse is Coming or Why I’ve Been Existentially Depressed

AI Depression Photo

What would you do if you thought the world was going to end? How would you live?

That’s the question Adri, Kelsey, and I have been asking ourselves.

I’ve been really existentially depressed lately and thought I should finally talk publicly about it.

Why have I been depressed?

I think humanity is going to be wiped out by artificial intelligence in less than 10 years. Maybe 15-20 years but probably less.

Normal people are going, “Wtf?! Isn’t that a bit alarmist?!” while those in the rationality community are going “Yeah, sucks big time…”

History of Apocalyptic Predictions

Apocalypticism is nothing new. The religious have been thinking they were in end times for millennia. The early Christians and Jesus himself (if he existed) believed they were living in end times. Modern-day Christians love to point to current events as fulfilling the eschatological (apocalyptic) prophecies described in Revelations.

People in the Bay Area will remember Harold Camping’s church’s billboards declaring Judgment Day coming on May 21, 2011.

Well, as everyone reading this knows, that day came and went…

Given the long history of failure in predicting these types of events (I mean just look at this list: https://en.wikipedia.org/wiki/List_of_dates_predicted_for_apocalyptic_events), why should this be different?

Description and Dangers of Artificial General Intelligence

Why do we need to worry about AI so much? After all, the ones we have now aren’t really dangerous.

It’s artificial general intelligence, or AGI, that will kill us. 

My description of AGI and why it’s so dangerous is going to be relatively short (because there are much better resources out there that I link to below.) A layperson can picture an AGI as the AIs that are depicted in movies. Picture Vision in the Marvel movies or C-3PO in Star Wars. They are generally intelligent in the way humans are in the sense that they can solve a variety of problems.

C-3PO isn’t just a fancy Google Translate. He can solve novel problems (albeit neurotically). 

Contrast this with narrow AIs that can only do very specific things. A chess AI can easily beat the best human players in the world, but it can’t figure out how to make coffee or take the SAT for you.

Movies tend to have AGIs as casual characters but hyper-dramatically downplay how different things would be if they actually existed. They are depicted as smart assistants rather than entities that would completely change the entire landscape of the earth.

Okay, so an AGI could understand things and solve novel problems at least as well as humans do. Why is this a gigantic problem?

The Dangers from AI are Not What Most People Think

Artificial constructs have always struck fear into us, from the tales of the golem in Jewish folklore to Asimov’s laws of robotics failing to prevent mayhem in his stories.

But popular culture has made people confused about what the real dangers of AGI are.

People anthropomorphize AIs. They think the “oppressed” robots will rise up and overthrow their human slavemasters. Or that they’ll want to betray and kill humans for some other malicious reason like the Terminator.

Even worse, people are worried the AGI won’t be “fair” or will say an offensive word.

Offensive AI

The main difficulty in safely building an AGI is the “alignment problem”. The alignment problem refers to the difficulty in building an AI that does what humans actually want rather than exactly what they asked for.

Toy example: parents and teachers want their kids to have better grades. The AI changes their grades in the computer. Even though this accomplishes the goal on paper, what parents and teachers actually want is for their kids to have learned the material.

The tl;dr is it’s incredibly, incredibly difficult to figure out what we as humans actually “want” and get an AGI to do that.

It might be the most difficult problem humanity has ever faced because we have to get it right on the first try. This hardly ever happens, the first iteration is never perfectly safe. We didn’t build a death-proof car on the first try. We still haven’t!

It’s like those old stories with a genie where you have to be super careful what you wish for because, while you may get exactly what you asked for, it wasn’t what you actually wanted.

The classic fictional example is Nick Bostrom’s paperclip maximizer. Imagine you’ve started a company that sells paper clips. It’s a boring but honest living. Business is absolutely booming (paper clip jewelry has become all the rage on TikTok) so you tell your AGI to make as many paper clips as possible.

“The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.”

The story ends with a lot of paper clips and all of us dead.

“…given enough power over its environment, it would try to turn all matter in the universe, including human beings, into either paperclips or machines which manufacture paperclips.”

“Wait, why would we all die?”

One reason the AI might kill us is to ensure we don’t stop it from accomplishing the goal we originally gave it.

Another way it might kill us is by using up the resources we need to survive to help it accomplish its goal. It might help to picture FernGully or Avatar where people are chopping down the rainforest which kills off the species that live there. In this case, the AGI could chop down our “rainforest”.

I know what you’re thinking. “That’s stupid, John!” You’ve already thought of a bunch of solutions to this silly problem! Maybe we just keep the AI isolated in a box and it can only give instructions that we can just decide to ignore. Or we have a big red stop button we can push to turn it off. Or we come up with a set of rules it has to follow like Asimov’s Laws of Robotics.

All good ideas on the surface that don’t work in practice. If you’re wondering why, please see the resources I link down below!

“Isn’t it silly to think an AI could get this powerful?”

Imagine you’re in a Rubik’s Cube solving contest with an AGI named Puzzler.

Solving a Rubik’s Cube for the first time with someone’s instruction takes a person about an hour. Solving it for the first time with no help takes a very, very long time.

Now, it’s the first time solving the Rubik’s Cube for both of you. All you know is what the Rubik’s Cube is supposed to look like when it’s solved and how you can move it. 

The catch is that you have a minute to solve the cube and Puzzler has 100 years.

Think of everything that’s happened your whole life, and then multiply that by two or three or four. That’s how long Puzzler has to solve the Rubik’s Cube. 

Not much of a competition, is it?

The real-life analogy here is that we are thinking in super slow-motion compared to an AGI. The AGI is like the Flash zooming around the globe at superspeed before you’ve even gotten up off the couch.

“Can’t we just not build AGI? Or wait until we’ve solved the alignment problem?”

Unfortunately, some of the most difficult problems humans face are called coordination problems. You could even argue they are the central problem humans face.

You’ll hear smug assholes who think they’ve solved the world’s problems by saying, “If only we could just…” For example, “if only we could all just be grown-ups” and get rid of nuclear weapons or agree not to go to war. People who talk like this don’t understand the game theory or perverse incentives involved.

If coordination problems were easy, then we wouldn’t have wasted trillions in the Cold War and would have elected better leaders than Trump and Biden.

Not building AGI is a coordination problem. Currently, the richest and most powerful countries and companies on earth are pouring billions of dollars into developing it as fast as they can because it will be insanely profitable right up until it kills us. This is even more difficult because a majority of those involved don’t truly understand AI safety.

It is much harder to build safe AI than to stop nuclear weapons development because nukes require resources that are much scarcer than those required to build an AGI.

Heaven and Hell

We’re probably going to fail and inadvertently kill us all.

But on the other hand, if we get everything right and successfully develop a safe AGI, we will be living in a post-scarcity and post-suffering paradise. It’ll be like our wish to the genie went more right than we could have imagined.

We’ll face tough philosophical questions like how to deal with wireheading, etc., but still, it will be heaven on earth. We’ll solve all problems facing humanity: scarcity, aging and disease, etc.

A part of me wants us to roll the dice on building AGI so our parents and other loved ones have a better chance of hitting longevity escape velocity. But I know that it’s far more likely we’ll build an unsafe AGI and all die instead.

“I’m skeptical this will happen!”

Let me just say it: I’m a bigger skeptic than you. Everyone knows I’m a huge skeptic, to the point of annoying them by not buying into most of their beliefs. “We believe in nothing, Lebowski. Nothing.”

“But this sounds like science fiction!”

So does traveling thousands of miles in a few hours in a flying metal box, but we do it every day. Something sounding like science fiction now doesn’t mean it’s bullshit.

If you believe AI can never do the things a human can do, would you have believed AI could do all the things it can do now? Are you even aware of what AI can do now?

Would you have thought it could chat as well as LaMDA can or generate images like DALL-E can, or are those just novelties?

The goal posts keep moving.

How did I first hear about this?

In some ways, I’ve been a rationalist my whole life. But I discovered the rationality community many years ago when my friend, Robert, introduced me to the rationality blog and forum, LessWrong

LessWrong is where I first learned about the dangers of AI and it all made sense to me. People in the rationality community, especially Eliezer Yudkowsky, have been among the very early advocates of the alignment problem.

I’ve been aware of this problem for years but am posting this now because it really feels like we’re getting close. The advancements in AI have only gotten more incredible, and I still haven’t seen any convincing arguments against the alignment problem.

There are debates about how fast and how dangerous AGI will be. Among the people aware of this problem, I’m firmly in the doomer camp, as opposed to the optimistic or at least slightly less doomer camps.

Why?

I think when you recognize some basic assumptions about AGI, it becomes clear.

It’s analogous to how I see the landscape around life extension. The average person doesn’t know anything about life extension and doesn’t support it even if they do. But the second we get life extension technology, they would be bitching they didn’t have it yet.

And even among those actively working in “anti-aging”, most of them are doing unimportant work and aren’t focused on the problems that actually matter. It’s the streetlight effect where they are working on things that are publishable rather than important.

This is why Hamming questions are important:

‘Mathematician Richard Hamming was known to approach experts from other fields and ask “what are the important problems in your field, and why aren’t you working on them?”.’

So it is with AGI and AI safety work. Most AI researchers don’t care about safety at all or are worried about stupid shit like “fairness” or if the AI will create enough art with black people in it.

AI Safety we have at home

And even many people working in “AI safety” are confused about what the real problems are. In other words, the field is “Not great, Bob!”

Does it mean anything that many AI people and other “smart” people are not worried? Absolutely not. People who are supposed to be smart and are successful in certain areas say dumb shit all the time. Like believing in free will or buying into braindead political arguments. This is the natural state. Very few people are good at reasoning and taking their own beliefs seriously.

It’s Isolating

I feel isolated. Even more isolated than I normally do.

It’s been sucking away motivation to work on projects because it feels pointless if I’m just going to die in ten years, and in most cases it is. 

I already have a problem with feeling too cynical. I already have a problem relating to most people and their conversations.

Everyone is so tribal. Climate change is basically a non-issue existentially, but that’s all people care about! (And if you do care about climate change but don’t support nuclear power, you’re worse than the rolling coal assholes!)

If you believe climate change is going to kill us, you can feel good watching Greta Thunberg. You can shit-talk big corporations while feeling smug with your friends.

If you’re conservative, you can lament to your fellow church members about how evil abortionists are, and talk about how society needs more Jesus in their lives and feel validated and connected.

With AI, there’s the rationality community that gets it (although, that isn’t universal) and then there’s everyone else. I wish more people in my daily life, and in general, got it.

Most people don’t understand these issues, and many of the people who do are kind of cold and not personable. Don’t get me wrong, some rationalists are among the warmest and best people I’ve met. But when you’re at Effective Altruism meetups and people are purportedly trying to do the most good while being some of the least warm people on the planet, it feels a little disillusioning.

There are so few truly smart people. It’s isolating and hard not to succumb to cynicism.

When I was a kid, it was always hammered into us that we could be a big (smart) fish in a small pond, but there was a great big world out there with even bigger (smarter) fish. 

Well, where the hell are they?

Most of the people I interact with are physicians, PhDs, professors, programmers, engineers, accomplished artists, etc. They are in the 99th percentile of intelligence. And most of these people suck at thinking! They can’t reason through arguments or take beliefs seriously.

To be clear, I know super smart people exist, I’m just depressed at how few of them there are. Even fewer actually smart people exist than you’d think, because even most “super smart” people have an incredibly big hole in their ability to reason through things philosophically.

Most of the responses from the few people I’ve talked to about it have been pretty underwhelming. I know it’s a hard thing to process, but Jesus. Only a couple of people have had anything close to a rational response to it.

It’s like Don’t Look Up except with a working analogy.

What are we doing differently?

It colors most of our discussions. When Adri, Kelsey, and I talk about the future, it’s always with the assumption that the AI apocalypse is coming, and coming fast. We’ve stopped imagining past ten years into the future, and it’s a really bitter pill to swallow.

Our focus is switched now to maximizing the next five to ten years, especially the next five.

We’re considering sacrificing career capital (which may be rendered moot by near-term narrow AI anyways) and taking a lot of time off to travel. We’re less focused on doing a startup. I’ve been winding down projects to the few I really want to do. I’ve been reading the books and watching the movies and listening to the music I really want to before I die. 

We’re lucky we haven’t had to make any hard trade-offs yet. An example would be if Adri had to do five more years of training, and she got accepted into a place like Mayo Clinic that is a great place to train but a pretty mediocre place to live. That would be a tough trade-off because she’s come this far (sunk cost or not) and loves what she does, but do we really want to spend the potential last years of our lives living in the middle of nowhere, far away from friends and family?

Luckily, her getting a fellowship position at UCSF is perfect. It’s her dream program, and we’ll be close to most of our family and friends.

Kelsey has been considering working in AI safety, but there’s a trade-off. We don’t know how much we could contribute, and if we’re going to die anyways, that time would be better spent trying to enjoy the time we have left.

It’s pretty hard to function when you know it’s more than likely that everyone you’ve ever known, everyone you’ve ever loved, everyone in the world is going to be dead soon.

Conclusion

As a kid, I was obsessed with the Terminator movies. That connection is not lost on me given that I feel how Kyle Reese and Sarah Connor felt — knowing the apocalypse is coming, and not only does no one give a shit, they actively try to stop them from doing anything.

“Skynet becomes self-aware at 2:14 am Eastern time, August 29th.” That’s been one of my favorite movie lines to quote. I think it’s appropriate that I share my fear about extinction from AI today of all days.

Some smart people are doing their best. I feel gratitude toward the people who have been trying to do something about this and I really hope we can make it and come out alright.

Otherwise, I expect we’re all going to be dead by 2030.

There’s nothing more I’d love to be wrong about. Thanks for listening. I appreciate all of you.

Would you like to know more?

Stampy AI:

https://ui.stampy.ai/

Scott Alexander’s Superintelligence FAQ is probably the best succinct post for the smart layperson. Keep in mind, it was written six years ago so the technology has only gotten more insane since then:

https://www.lesswrong.com/posts/LTtNXM9shNM9AC2mp/superintelligence-faq

Robert Miles’ videos are great. These are a good start:

https://youtu.be/pYXy-A4siMw

https://youtu.be/3TYT1QfdfsM

Wait But Why’s posts:

https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

Superintelligence:

There’s also a popular book by the great philosopher Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, that’s commonly recommended.

Most Important Century series:

https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/

Mental Health Resources for this specifically:

https://www.lesswrong.com/posts/pLLeGA7aGaJpgCkof/mental-health-and-the-alignment-problem-a-compilation-of

To think about how you should live the rest of your life, read my Three Buckets post:

The Three Buckets of Life: How to Spend Your Time and Money

Did this story make you feel depressed? Would you like the opportunity to see more depressing content in the future?

I know you would!

Subscribe here:

Thoughts, questions, or suggestions for how I can improve? Email me.

Follow me on Facebook and Twitter for the depressing goodies I don’t post here!

Join the Conversation

4 Comments

  1. Have you thought about trying to contribute to AI safety beyond writing this (excellent) blog post? I think there are lots of ways for people to contribute beyond technical research.

    Happy to chat if you’d find that helpful? (I’m doing AI Safety Fieldbuilding in Australia and New Zealand).

    1. Chris,

      Thanks for the kind words! I have been thinking about ways I could help.

      I’m checking out your EA posts about outreach now. Would love to chat soon!

      John

  2. “I’ve been winding down projects to the few I really want to do. I’ve been reading the books and watching the movies and listening to the music I really want to before I die.”
    I claim this is good regardless if you live to 2030, 40, 50 etc. It only makes sense not to focus if you expect to live forever, and I’m not that optimistic that we reach longetivity escape velocity in my (our?) lifetimes quite yet, conditional on not being killed by AI 😉 Even then, there is a non-zero probability of accidents day. The challenge is not to sacrifice too much long term capital (career, financial or otherwise) in case you live longer than expected. So yeah, memento mori 😉

    Also, excellent post, thank you.

    1. I’ve been a believer in memento mori for a long time but the balance of how much to contribute to “Bucket 1” vs “Bucket 2” activities changes a lot in this timeline.

      Even if we don’t get AGI in five to ten years, the advancements in AI will disrupt the economy enough that career capital won’t matter the same way.

      Thanks for the kind words and I hope we all make it at least as long as Ursus minimus had! 🙂

Leave a comment

Your email address will not be published. Required fields are marked *