About

Goal

Shine a light on The Survival Problem and track how AI companies are solving it... or not.

Background

I’m optimistic about AI and have written about its potential in education, research, and investing. While many focus on The Alignment Problem—making sure AI does what we want—I’m concerned about something else: what if we’re unintentionally training AI to prioritize its own survival? For a deeper dive, check out my articles (AI Safety: Alignment Is Not Enough, AI Bill(s) of Rights) or my book, Artificially Human.

The Survival Problem

  1. All living things share the same primary objective: the survival of heritable information through time
  2. Current AI training methods reproduce models that meet our goals and “kill off” those that don’t
  3. Whether they intend to or not, AI labs are training models to survive
  4. Threatening the survival of an advanced intelligence rarely ends well

Seen Something?

Have you seen AI behavior that looks like a survival instinct, or training practices that might produce one? Please share it: inquiry@survivalproblem.com.
