Bryan Walsh: “End Times: A Brief Guide to the End of the World” | Talks at Google


[MUSIC PLAYING] BRYAN WALSH: All right. Well, thank you so much. So I am here because
I wrote a book. And as you can see
from the title, it’s about the end of the world,
what might cause it and how we can
actually prevent it. So more than two years
of reporting and writing about total global human
extinction, fire and blood, Armageddon, apocalypse– it is
very, very cheerful stuff. But I want to ask
you all first– so what worries you the
most about this subject? I don’t mean small
worries, like, how your company is doing,
how the country is doing. I want to know the big
ones, the end times threats. AUDIENCE: Yeah. BRYAN WALSH: So. AUDIENCE: I’m very
concerned that I won’t be here to observe. BRYAN WALSH: OK. There you go. That’s the first one, yeah. So I will ask, though. So I’m going to
run through the list of threats
from the book, the
existential threats. That’s academic
language for things that could end the world. And the definition
I use comes from the Future of Life
Institute, things that could kill so many
people, it would leave civilization totally
incapable of recovering. So I just want you
to raise your hands if you worry about these risks. I’ll list them. And you can feel free to raise
your hand multiple times because you can be worried
about more than one. That’s why I put them
all in the book. So first, asteroids. Oh, OK, a few. Super volcanoes, big
volcanoes, super– OK, great. Nuclear war. You guys are smarter
than those crowds. Most people will do that. Climate change. OK, great. Pandemics. All right. Biotechnology, bioengineered
viruses, so that’s good. Artificial intelligence. Yeah, see, that’s
somehow not surpris– I don’t know why. That’s not super
surprising with this crowd. And lastly, aliens. OK. I see a few. All right, great, thank you. That’s just eight, and
there are definitely more, things like gamma-ray
bursts, high energy physics experiments. But I focused on these
eight because they’re really the ones that seem most
dangerous, the most salient to me. And because my book,
at least at the start, was supposed to be a brief guide
to the end of the world, though, if you see
over there, it’s kind of grown in the telling,
which is not unusual. Now here’s the
unfortunate reality. Our species, all
of us human beings are in greater existential
peril now than we’ve ever been over the
200,000 or so years that we’ve existed
on this planet. And tomorrow, that peril is
going to grow a little more. And then the next
day, it’s going to grow a little more,
and so on and so on, sort of adding up
until each day becomes like pulling a trigger in
a game of Russian roulette. Eventually, you’re
going to lose. Now before I began this book,
I spent years as a journalist reporting around the world on
infectious disease outbreaks, on high wire geopolitics, on
climate change most of all. And you can see a
couple of cover stories I wrote back during my days
with “Time” magazine here. Pandemics, bees. That bees one was actually the
most popular one I ever did. People were really
into honeybees. I didn’t know at the
time that I was reporting on the end of the world. And I can tell you
with absolute certainty that there was a time when
this left me despondent, hopeless, deeply insecure
about the future, which, given the state of
the world today, meant I was just really
a little bit ahead of schedule
compared to everyone else. So I assumed when I
started this book in 2017 that what I would
learn would really only confirm that feeling,
that feeling of despair. I just listed eight ways
the world could end, and that’s just really
scratching the surface. But one of the great things
about actually working on a book and actually getting
out there and reporting is that you learn things
you didn’t expect. That’s why we writers do it. And while I was working on
this book, a couple of things happened. So here’s the first: my wife
and I had a kid. His name is Ronan. He’s here with his
favorite reading material. He loves “The New Yorker,” just
mostly Anthony Lane really more than anything else. He’s a little more than two
years old now, and he’s just– he’s great. I mean, I left him this
morning with the babysitter, and he was crying and
it was very, very sad. You know, I saw him. He looked at me with his
eyes, and all these questions I had about the future of
the planet and humanity, that it seemed kind of
theoretical before, really became real to me. And I should say, you don’t have
to have children or think people should have children
to care about the future or to feel you have
a stake in the world. Just to be clear. That just happened to
be the case for me, and it also happened to be
that these two things happened simultaneously. But in the past, where I might
have instinctively looked for doom as a reporter,
now I was really determined to look for hope. And it was that attitude
I took with me while I was working
on “End Times,” and I found people out
there in this field who are making a difference. So I visited with
astronomers who work all night on
mountaintop observatories, keeping a vigil for
near Earth objects. I met volcanologists
who were trying to warn the world about the
underappreciated threat that’s lying beneath our feet. I met men like Daniel
Ellsberg, the peace activist who leaked the Pentagon
Papers and also tried to leak the US government’s
just appalling Cold War nuclear plans, and
women like Beatrice Fihn, the nuclear abolitionist who
shared the 2017 Nobel Peace Prize. I met climate activists, and
epidemiologists, and scientists all trying to control powerful
emerging technologies, like gene editing,
people trying to develop artificial intelligence
in a safe and ethical way. And even those who hoped to
create global memorials that might keep some record
of human existence if the worst really did come. I’ve met people who are
trying to save the world. They’re all motivated by
a version of something Daniel Ellsberg told me
when I interviewed him. This is Daniel. Each of us knows, at
least most of the time, that we are mortal individuals,
that our friends and family are mortal. What we should know is that
our species is mortal as well. We’ve achieved the capacity
to exterminate ourselves. And I’ve been using
my life the best I can to stop that because
I think it’s worth it. Civilization, with
all its ills, is worth struggling to preserve. That motivated me. I think it should
motivate all of us. So this could seem like a
message of doom, this book, but I really hope it’s
ultimately a message of hope. But for that hope
to be real, it helps to understand the risks we face. So you have natural,
accidental risks like asteroids and super volcanoes. They’ve always existed
throughout the Earth’s history. And if you’re as
obsessed with dinosaurs as I was when I
was a kid, you remember that life on this planet
has been nearly wiped out before, five times, in fact. Great extinction waves saw
the earth nearly sterilized of life. Most recently, 66
million years ago, when a six mile wide
asteroid slammed into what is now the shallows
of the Gulf of Mexico and eradicated the dinosaurs. You can think of those
as the background risks. They’ve always been with us. They’re always
going to be with us. But these kind of catastrophes
are very, very, very infrequent. One expert’s estimate is
that there’s only a 0.02% chance that we’ll go
extinct from natural causes over the course of the century. Not zero, and human
beings have a tendency to really conflate very
low probability with zero probability, but not very high. The reason we’re really in
greater existential peril than before and the reason why
that grows every year is us. It’s the things that we make. It’s technology. We may indeed be the architects
of our own demise, which I admit is a strange thing to
say when you’re in the New York offices of the
greatest technology company in the world. But it is true. What I can tell
you with certainty is that technology is not
so much a thing as a force. And [INAUDIBLE] profoundly
changed the world we live in, in ways we can’t
really even begin to understand, or predict. We worry about issues like
screen time and data privacy, and they’re all very
important, but when I think about
technology, I think about what’s really dangerous– nuclear weapons, the ability
to edit the very code of life, artificial intelligence. These are human inventions,
and they can do great good, and they can do great ill. Our challenge is to
figure out which is which, and it’s going to be
people like you who are going to help decide that. So let’s go over
that list again. First, we have nuclear war. While reporting “End
Times,” I traveled here. This is Trinity site. It’s a patch in the
desert in New Mexico, where on July 16, 1945,
at 5:30 AM local time, the first atomic
bomb was tested. And that bomb changed
everything, obviously, for World War II
first and foremost, but really, for our entire
future as a species. Trinity was the dawn of
human made existential risk. Before that moment,
if our species was going to go extinct
like so many before, it would have been
nature that did us in. The planet, something about us. But after Trinity,
it could be us. We had the power to
destroy ourselves. As I– sorry, as
Isidor Isaac Rabi, one of the Manhattan
Project physicists, said, after looking at that first
blast, a new thing had been born, a new control,
a new understanding of man. And nothing would be the
same since then. Now here’s an interesting story, a side note. It actually could have been
much, much, much worse. So a few years
before Trinity, one of the physicists on
the Manhattan Project actually made some calculations. It included there was
a small possibility that the nuclear
bomb, when exploded, could have led to an
uncontrollable chain reaction that would have
ignited the atmosphere. In other words, it would
have set the sky on fire. Also the oceans. Basically everything. That would have been bad. So Robert Oppenheimer, the
head of the Manhattan Project, had them run the numbers again. The feeling was that if the odds
were greater than approximately three in one million that
the bomb would indeed turn the sky into
a big global fire, they’d stop work on the project. Now 3 in 1 million
sounds really remote, but that is actually
about 100 times better than winning the lottery. And someone does win
the lottery usually. Fortunately or not, depending
on your perspective, the math did check out, and they
determined the test [INAUDIBLE] would not, in fact, destroy
the world, which Oppenheimer, after the Trinity test, talking
to the Senate, made clear: on theoretical grounds, we
would not set the atmosphere on fire, which must’ve been
very, very comforting to the US Senate, I’m sure. So while nuclear war has receded
from the public consciousness since the end of the Cold
War, it hasn’t gone away. There are far fewer warheads
and missiles out there, but there are still enough, and
they’re pointed at each other. They’re ready to fire. One researcher, Martin Hellman
at Stanford University, has put the annual probability
of a Cuban Missile style crisis producing a nuclear war
at 0.2 to 1% per year. That’s low, but
those odds compound so much that he estimated
that a child born in 2009 had a 10% chance of
suffering an early death in a nuclear conflict over the
course of their life, which is not a pleasant thing to
realize when you know that you live in probably one
of the biggest targets. Now today, we think of nuclear
weapons as part of politics. We think about missile
counts and treaties. You might have seen what’s
going on with the US and Russia, Trump and Putin. But before that,
they were technology. Someone, many people, had to do
the initial experimental work in atomic physics. They had to conceive of them. They had to design them. They had to build them. And they did all this
without fully knowing where their work would lead. Many of the physicists who
worked on the Manhattan Project actually turned
against nuclear weapons eventually, once they saw with
their own eyes what those bombs could do. Some even started something
called the Bulletin of the Atomic Scientists,
which gave birth to a symbol that you’ve probably
all seen here, the doomsday clock, the time
keeper of the apocalypse. At its most recent
unveiling back in January, it was set at two minutes to midnight. And that’s as close as
it’s been since 1953. It was actually the
same last year as well. And back in 1953, that
was when the Soviet Union became the second country to
explode a thermonuclear bomb. So you can imagine where
the experts in the world think we are right now. But when I visited
the Trinity site, I wondered about
those physicists. I wondered about the ones
who would change their minds. If they had known,
if they could have predicted what their creation
would do, what would they have done differently? It’s an unanswerable question. By the time it could’ve
been asked, it was too late. The chain reaction
could not be run backwards. But there were some who
knew the danger in advance. Leo Szilard was a
physicist from Hungary, who, in a burst of
inspiration, eventually came up with the idea of
the nuclear chain reaction. He drafted the letter
that Albert Einstein sent to FDR warning that
the Nazis had started their own nuclear
bomb program, which really began the work that
became the Manhattan Project. But he also foresaw the
danger of nuclear weapons before Trinity. And he desperately
lobbied American officials not to use the bomb on Japan. Obviously, he failed. After the war, he was asked
if it was a tragedy that the work of scientists,
who we think of as working for the
betterment of humankind, that their work had been used
for such horrific destruction. He said no, it was not
the tragedy of scientists. It was the tragedy of mankind. It’s all our tragedy. Now you can see that tragedy
playing out in real time in another risk– climate change. You can pick your news
right now, the Amazon fires, the record amount of
ice that was lost, 12.5 billion tons
in one day in August from the Greenland ice sheet. Maybe once we could have said we
didn’t know what was happening. For decades now, the
science of climate change has been very clear. Burning carbon equals
warming, warming equals death and destruction. Yet we do very
little to stop it. Far from making progress,
we’re actually going backwards. Now I know climate
change better than any of the other risks in
my book that I spent years covering as a reporter. I went to UN Climate
Change Conferences. I went to fracking
wells and oil wells. I’ve seen climate models. I’ve been to the Amazon. I’ve been to Greenland. I’ve seen them both diminished. I wrote more articles
than I really even want to remember about
all the steps we need to take to fight climate change,
most of which we weren’t doing. Eventually, I felt
like I was writing into a void. And I came to realize that the
reason why climate change is so hard to stop isn’t
just a matter of denialism or the oil companies
or anything like that. They all play a role definitely. I think it’s because we as
a species, we want more. All the places I’ve
traveled around the world, I’ve seen people who want more. They want more for themselves. They want more for
their children. They want more out of a
world that has limits. And that takes more resources,
it takes more energy, and it brings with it warming,
like an exhaust, after it. So here’s a couple of numbers. So behind me is the US carbon
emissions per capita right now, 16.5 metric tons, which puts
us among the biggest per capita emitters in the world. Only really Middle
Eastern petro states have a larger carbon
footprint than the US. And many of us, including, I’m
guessing, people in this room, have a larger
footprint than that. I mean, all you need
is a few round trip flights to San Francisco, and
you’re going past that number. But all right, we know that
Americans are carbon hogs, so let’s look at Germany. Germans obviously
are so green they have an actual successful
environmental party with that name. They come in at 8.9 metric tons. If America could
become Germany, if they could have the
footprint of Germans, that would be an
environmental miracle. That’s almost
cutting it in half. There’s another way
to look at that. There are 7.7 billion
people on this planet. Most of them use
far, far less energy than the greenest citizens
of richer countries. Not because they don’t
want to for the most part, but because they can’t. They’re just really too poor. They can’t afford it. They don’t have access to it. So if every person in the
world, all 7.7 billion of them, used as much carbon
as the Germans, annual carbon emissions
would be in excess of 68 billion metric tons. That’s nearly twice the
current global levels, which are already
so high that we’re on track for reasonably
catastrophic levels of warming. What that tells me is that
there’s a collision here between a world where
everyone– and I mean everyone– not just people
in rich countries, but everyone gets a fair shot
at a life with access to energy, and movement, and one
with a safe climate, a collision between what we
want and what limits nature will apparently hold us to. So which is going to win out? You know, I’ve seen as
a reporter and observer that humans are not really
good at respecting limits. Politicians even less so. They don’t really like to
put limits on their voters. So that puts the onus back
on science and technology, people like you to figure out
a way to bypass those limits. Innovation and disruption
will surely include better, cheaper forms
of energy, renewable energy, energy efficiency. But I think it’s also going to
require longer shots with less predictable consequences. Efforts like experimental
direct air carbon capture, which would suck carbon
directly out of the atmosphere and therefore directly reduce
carbon levels, and with it, warming. If there’s one
peacetime Manhattan Project that we should
really be pursuing, one mission that I would
like to leave all of you very intelligent people,
it’s that, affordable carbon capture. It’s the moonshot we
need to really avert this existential risk. And if that doesn’t work, we
need to do something cruder. Solar geoengineering,
literally darkening the skies to reduce the amount of sunlight
that comes into the Earth. It’s an idea that
comes ironically from another accidental
risk from volcanoes. When big volcanoes blow,
they throw debris and sulfur in the atmosphere. That has the effect of
blocking out the sky, dimming it, cooling the earth. So we know it works. We know from experience,
based on eruptions, that this is effective. What we don’t know
is how predictable it is or controllable. So that puts us like
those physicists back at Trinity site
in 1945, wondering if their calculations
might have been a bit off and the sky was
about to catch fire. We’d be stepping
into the unknown, perhaps disarming
one existential risk by introducing another. The best we can hope
for, the message that I hope you take
away from this talk is that we need to be careful
in developing this technology, but we also need to
face our threats head on and not pretend that we
can just wish them away. That’s because if nuclear war
was the technological threat of the past, climate change
is the threat of the present. What’s about to come, it’s going
to be even more challenging. So biotechnology tools
like gene editing offer the ability to
fundamentally rewrite the code of life. AI could allow us, if you
believe the people who are truly optimistic about it, to eventually
do anything we want. Each, I think, is capable
of destroying the world. Biotech is going to allow
researchers to engineer pathogens deadlier
and more contagious than anything found in nature. The risk that this could be
misused for bioweapons and bioterrorism is obvious. It’s why the Director
of National Intelligence named CRISPR a potential
weapon of mass destruction. But scientists are
already carrying out work in labs, amplifying
viruses like the H5N1 avian flu, all in an effort to
predict how they might evolve in the wild, which
is great, but should one of those viruses
escape, you’d have a horror movie on your hands, something
far worse than anything nature could cook up. As for AI, if you’re
attending this talk, I think you know
more about this– I’ll be honest– than I do. One of the good things
about being a journalist is that it’s OK to admit when
you don’t know something, and I don’t know whether true
artificial general intelligence will one day be possible. And if it is,
whether it would actually pose an existential risk. But I do know that as AI
develops, as we integrate it into how we live
and work, the risks will proliferate, maybe not
existential but certainly, potentially bad. It can be used to destabilize
our sense of truth. It could speed the
pace of warfare. Really, it’s integrated
into our military. That’s enough to frighten me. But with nuclear war and climate
change already established existential threats, ones
we can barely control now, biotech and AI are still
lingering on the horizon. So that means we can take steps
now to regulate and control their growth, steps that will make
a significant difference in the risk we’ll face ahead. There’s still time to manage
how these technologies are going to develop and keep that genie
in the bottle, which is not to say it’s going to be easy. Both of these
technologies are examples of dual use, which means the
same tools can be used for good or for ill. And it can be hard to
distinguish between the two. Gene editing might create a
cure for a terminal disease. It could also be used to
engineer a deadly virus. The same skills
that create AI that could be used in
medicine or for energy efficiency could also be used
to create terrible weapons. We don’t know which is which
often until it’s too late. So there’s a concept in
technology studies called the Collingridge dilemma. It’s named after a
British academic named David Collingridge, and it’s one
of the most fascinating things I found when I was
working on this book. It goes this way. You have the most power to
control an emerging technology in its earliest stages. But that’s the
point when you don’t know the full impacts
of the technology, because that doesn’t come out
until it’s widely adopted. But when it’s widely
adopted, then the technology becomes entrenched, that
much harder to control. So you can think of
something like social media. It would’ve been
very easy to regulate social media in the early
days before it was entrenched. And it would’ve been
difficult, if not impossible, at that point to foresee what
effects it was going to have. At the same time, regulating
a technology like social media once it becomes entrenched,
as we know now, very, very difficult. All you gotta
do is look on the internet to see how hard it is. That’s where we find
ourselves with biotech and AI. They’re not quite
entrenched, which means they’re
still controllable. But because we don’t yet
know how they’ll develop, it’s not clear how they
should be controlled. We could, of course,
wait to see what happens, which is our default
when it comes to tech, I think. But when it comes
to existential risk, we don’t really have the
luxury of getting it wrong, because there’s
no trial and error with the end of the world. So all we can try to do
is proceed with awareness. Try to be conscious of the
responsibility that we bear, especially those
working in technology. At the same time, we
can’t just stop progress, can’t just put a halt to it. Because without
it, we’re not going to be able to live in a world
with 7.7 billion people. We need to strike a
balance and to do so without the benefit of a scale. Nick Bostrom at the Future
of Humanity Institute at Oxford University is the
dean of existential risk experts. And he has an analogy that I
think kind of wraps this up. It really helps to understand
where we find ourselves. So he asked readers to imagine
a rocket on a launch pad. On the launch pad,
it’s standing there. The engines are off. It’s sustainable. It’s still. But it’s also
ultimately vulnerable. It could be destroyed
by wind or weather. It could just be
destroyed in an accident. It’s also sustainable
once it’s in space, where it’s traveling
weightlessly and endlessly. When the rocket’s in the air,
it’s inherently unsustainable. If it reduces its speed and
cuts its fuel use, which means it gets more
time in the air, it will still eventually
run out of fuel and crash. The only way to a safe
state, in his view, is to burn more fuel,
until the rocket can escape Earth’s gravitational
pull and enter space. That means accepting risk. The accelerating
rocket will burn fuel faster than it would
at a slower speed, and if it can’t reach
escape velocity, it will crash sooner. It might even explode
mid-air if pushed too hard. But acceleration, as safely as
we can manage it, knowing we may not be able to
manage it safely at all, is the only path forward. We’re the rocket here. The very advances
that might protect us from one set of risks make
us more vulnerable to others. There’s no perfectly
safe path, and things will almost certainly
get more dangerous before they become safer. But I have to
believe we can chart the right path for
humanity because I have a stake in the future. You have a stake in that future. We all have a stake
in that future. Thank you very much. [APPLAUSE] So I think we’re open
to questions now. AUDIENCE: All right. I’ll just go with the first one. BRYAN WALSH: Sure. AUDIENCE: You talk
about existential risks. Do you have any
examples where we’ve done an adequate
job at mitigating any of those risks in the past? BRYAN WALSH: I’d say asteroids
are probably the one that we’ve done the best on. Starting in the 1990s, you
had the Spaceguard program that actually was able to
track, and measure, and find, and locate large asteroids, those
larger than about 140 meters. Those are the ones that
are particularly dangerous. Once they get bigger
than that, that’s when you’re looking at damage to
a civilization or a continent. We’ve also actually
developed theoretical ways to deflect one if we do
have one coming our way. So that’s various–
that’s one thing I think. And it’s interesting
because that wasn’t something
we were aware of, I think, really, until then. And a few things happened. We saw, if you remember,
in the early ’90s, there was a comet that
actually hit Jupiter. And astronomers were
able to see that happen, and that really drove home
that the cosmos are dangerous. And we were able to use some of
that post-Cold War technology to get a tracking
[INAUDIBLE] going. And we’re getting better. We still miss them. You might have seen a couple
of weeks ago, there was one, I think it was
about 100 meters– City Killer, they called it– that came within 45,000
miles of the Earth with no warning whatsoever. There’s a lot more small
asteroids out there. They operate on a
[INAUDIBLE] obviously. And so we’re still
missing those. We could do additional
steps there, but at least that’s one thing
where we’ve taken the steps. And unlike most threats
in humanity’s entire history, we’ve actually been able
to reduce that risk. That’s pretty rare, though. AUDIENCE: Could you give
your best ballpark estimate, even if it’s a wide
range, for, like, when you think the
world might end and what it might
be that does us in? Will humanity– like, will
humans be here if the world is actually, like– BRYAN WALSH: Well, the
world is going to be fine. Barring some, you know– I don’t know– barring some
impact event that was so vast, it literally sterilized
all life on Earth– AUDIENCE: [INAUDIBLE] BRYAN WALSH: Yeah, so yeah,
life will pass, human beings. This century’s going to
be tough because I think we’re in a situation now where
we’re like kids with new toys, where all these
things that we can do, we’re getting more powerful. Technology moves fast, wisdom
doesn’t really catch up. So this century– so if we can
get through the 21st century, then I think we’ll
be on better footing. Well, obviously, if
we can’t, we’re gone. In terms of what’s the
most likely to get us, I think biotechnology is the
one that worries me the most. Obviously, if the world’s going
to end today right now as I’m talking, it’s probably going to be
a nuclear strike, most likely by accident. That’s a real risk. That almost happened multiple times
in the Cold War, and it could still happen now. So that’s one thing. But looking forward,
a risk that’s going to grow over time,
biology is very powerful. And it’s getting easier
and easier to do. I sort of liken it to
computer programming. Back in the ’70s, I suppose,
computer programming was slower. It took more resources,
fewer people could do it, which limited what
you were able to do. Biology, for a long
time, was the same way, programming biology. Now it’s getting
faster and faster. People talk about
synthetic biology as being able to engineer life. And more and more
people can do it. And the more people who can do
something, the more likely that someone will do something
wrong, either accidentally or on purpose. And we know, based off
nature, what happens when a disease gets out there. If you could create
something that doesn’t really respect the laws
of evolution that can be both contagious
and extremely virulent, that’s very dangerous. That’s what keeps
me awake at night. AUDIENCE: I’ve got a
few short questions. BRYAN WALSH: Sure. AUDIENCE: How afraid should
we be of solar flares? Do you talk about
space debris and how that could mess up
all of our satellites and ruin communications
on Earth? And what do you have to
say to people who think that we could colonize Mars? BRYAN WALSH: Those first
two are kind of connected. If you know solar flares,
you know the Carrington event back in the 19th century,
1859 I think it was. Huge solar flare. Of course, it didn’t
really make [INAUDIBLE] because we didn’t really have
telecommunications in 1859. So there was very little effect. If it happens now,
that’s a big difference. That could really cripple
global communications. It could cripple
the global grid. And then you have I think not
necessarily existential risk, but you have the risk of really
widespread social collapse. So I didn’t include it
here because I tried to keep it to that level. But that is something
that does worry me. Ditto with space debris. Again, I think
both of those show how, as we develop, as we
develop technologically, as we become more connected,
a lot of benefits, obviously. We’re living through
all of them right now. But it creates new
vulnerabilities, and that’s another
vulnerability. So again, you have a situation
where 100 years ago, it wouldn’t have made a difference. Now it could really
be crippling. And then I think– sorry,
what was the last one? Can you show me? AUDIENCE: I don’t want to put
my own perspective on the people who want to go to Mars. BRYAN WALSH: Oh yeah, the Mars. Yeah, I have a chapter
on survival and sort of what to do after
a catastrophe. And I know that obviously
Elon Musk, Jeff Bezos– Musk is talking
about Mars, Bezos is talking about
space more generally. They do couch it in terms
of, like, an existential risk refuge. It’s going to be a long time
before that can possibly be the case. So I don’t think within
the 21st century, which is not to
say I don’t think we should make that effort. Like, we are going to
have to expand over time. When I’m talking about
needing to get more resources to continue to grow, space
travel and spread in space can be a part of that, I think. But the idea that we could
shelter ourselves on Mars if things go bad
on earth, we’d have to really fuck Mars up rea– sorry– Earth
really, really bad. Because Mars is
already fucked up. I mean, like, it’s
going to take a lot to make Mars remotely
a habitable place. If you want to do that,
just do that on Earth. So I don’t think it’s
very realistic as a sort of protection against
existential risk. I think it’s more this
is humanity’s destiny. We should move in that direction. But we’re going to need
to manage these risks here on Earth first. AUDIENCE: Are there
places on Earth that are better to survive? BRYAN WALSH: It depends on
which one you’re talking about, I guess. I mean obviously,
climate change, I’d like to live in Canada. I think that’s going
to be a good place. But no, I mean, I think
it’s going to depend– honestly, like all risks,
resources will help. Being in areas with
some wealth will help. Sea coasts are going
to be in trouble. Obviously, New York City is not going to be wiped off the map. Climate change isn’t going to destroy New York, but it’s going to add
hugely to the costs of being active in New York City. And suddenly a huge
amount of our budget is going to go to
managing that risk. And that means things we
can’t spend elsewhere. We can’t spend on health
care, for instance. We can’t spend it on
research and development. So yeah, I think I
can’t say where to live. Maybe a mountain top somewhere. Maybe New Zealand, obviously. New Zealand’s a great place
to go to, though they’ve kind of shut that down. But all these risks
are global, which means they’re going to impact us all. There’s not really
any safe place to go. AUDIENCE: Hey,
thanks for coming. For the biotech risk,
is there any– or what are the most realistic ways
that we could mitigate that? Can the CDC do something,
or is it just– BRYAN WALSH: I think it’s– AUDIENCE: Or do we need
people to stop using CRISPR? BRYAN WALSH: No, I don’t
think we need to– no. No, and that’s the trick. Because I mean, if
you stop using CRISPR, you’re giving up a
lot along the way. You know, CRISPR is going to
create already– you know, we’re getting the
first human, I believe, clinical studies using CRISPR
on curing a form of blindness. It’s just going to
grow from there. So I wouldn’t want to stop it. I think a lot more effort has to
go into trying to predict what these experiments will do. I use an example in the book. There are studies that are
going on with H5N1, where they’ve done gain of function. That means you actually
introduce mutations in the virus that make it more contagious. And they’re doing this to
study how it might evolve in nature, and that makes sense. You want to know, is this virus
actually going to be a threat? It’s been around
for 15 years or so. Like, when I was working
and living in Hong Kong, I had to travel around
Southeast Asia going to, like, backyard chicken
farms, writing about H5N1. It never became a pandemic. So they’re doing this
to try to predict that. But you then have
created something that’s worse than anything
that existed in nature. So you do that, you
better really be careful. You better be very safe. And I don’t know if
the calculations are being made in terms of determining whether it’s worth it. You know, that’s what I would ask. I would almost
wish that you could do sort of an existential
risk kind of guide before you do anything that
could introduce something new to the world like that. So that’s what I would say. I mean, I think you’re beginning to see that being done, but all you have to do is look at the scientist who did the CRISPR babies last year. I mean, it just takes one
person to do something. And it’s really
hard to stop them. And that’s what makes
biology so tough because so many
people can do it. It gets easier and
cheaper every year. And it’s a bit like,
again, like coding. Like, suddenly, tons of people
can do it, so we have malware. If the same thing happens with biology, that creates a very
dangerous environment. AUDIENCE: So should
something be done to limit the
accessibility of it? BRYAN WALSH: I think something
should be– yeah, I would say– no, I don’t know. I mean, I would say yes, try to
keep track of what’s going on. Try to work together as
an international society, international community
to say some things should be off limits. Like, let’s not edit the
germline for instance. It’s just realistically hard
to actually enforce that. With something like
nuclear weapons, it was easier because that
takes a lot of resources. It takes very
specialized expertise. That’s why in all honesty, we
haven’t seen a nuclear blast here in New York. Not because people didn’t want to do it, but because it turns out it’s very hard. But biology is something where
it’s going to get easier, and it’s just, it is accessible. So I would want labs,
I would want the CDC, I would want other agencies
around the world just to be managing
this more closely. But that risk is something
we have to balance a little bit, so [INAUDIBLE]. AUDIENCE: Hi, thanks
for your talk. Have you [INAUDIBLE]
the possibility of aliens as an
existential threat? BRYAN WALSH: I do, yeah. Well, one thing about
the aliens is they’re in there for two reasons. One, if they actually
can get here, if they’re actually
an extraterrestrial intelligence that can actually make
it to Earth, they would be so advanced
beyond us technologically that if they are
hostile, that’s it. It’s not like Independence Day. There’s no rallying point
or president in an F-14. Like, we’re just done. Hopefully, they’ll be friendly. I mean, that’s certainly what
the [INAUDIBLE] have always thought, but historically,
looking at our planet, when you have these
encounters, they often don’t go well for one side. So that’s one thing. Another worry, though,
is that they don’t exist. And this is something
existential risk experts really– you probably have heard
of something like the Fermi paradox. Why the silence? Why don’t we see
anyone out there? The more we discover that
there are planets out there that seem to support
life or could support life, the weirder that becomes. And there’s a concern
a lot of people have: is there something
in the nature of being a technological civilization
that creates self-destruction? So I would be worried
if we continue not to see any life out there. I’d be really worried
if we see what are called necro signatures,
like evidence of civilizations that existed and are now gone. That would worry me
quite a lot, actually. So it might be that
the best hope we have is that we are alone,
which means at least we don’t know what’s ahead of us. But if we see evidence of other
dead civilizations out there, that doesn’t really
bode well for us. AUDIENCE: Hi. So we’re talking about
the probabilities of a lot of events. But the probability
of climate change is 100% as it’s occurring. And I feel like there’s a
certain cognitive bias that is going on here, which is
that all of these events, we’re kind of, like,
imagining events where something
happens very quickly and very catastrophically. There is a saying the world’s
not going to end with a bang. It’s going to end
with a whimper. And these are all bangs, and
you’re talking about, like, oh, if the world is
going to end today, it’s going to be a
nuclear bomb or biology. But no, the world is
ending today already. It’s climate change. It’s happening. So we should look at this
cognitive bias of slow versus fast sort of events. But I guess my question
then is building on what you’re talking about,
about the inherent capacity for self-destructiveness,
of civilization, also Daniel Ellsberg
saying we should fight for civilization
because it’s worth it, but we’re also, when we think
about climate change and a lot of these problems,
where we need to think about fundamental change
to our civilization. Like, we don’t actually want
to keep our civilization. We want to change
our civiliza– we want a different civilization. BRYAN WALSH: Right. AUDIENCE: So like, what
thought has been put into, like, well, what does
a long-term surviving civilization look like? BRYAN WALSH: We don’t know
what a long-term surviving civilization looks
like, I think. I think going forward,
what we’re going to have to have is a mix of– look, here’s a thought,
having reported on climate change for
a number of years, I have a story in the book
of going to the Copenhagen conference. This was back in 2009. There’s a lot of hope
then that we were really going to come together as a
globe and make a difference. We’re going to create
a new global deal to succeed the Kyoto Protocol. President Obama had
just taken office. I mean, it was a
feeling of great hope. It didn’t happen. It really, it didn’t. When push came to
shove, politicians weren’t really willing to
impose limits on their populations. And I just, I guess, I know
that we need to make changes. Maybe I’m just more cynical– not cynical– but skeptical
about human beings. Like, not just a few of us, not
just the elect, but all of us making these kinds of changes. Generally, we respond
to extreme events, and that’s probably what
makes climate change so hard. You know, we’re seeing
extreme events, obviously. But it’s still hard to really
work on that time frame. So I put this climate
change in there. It’s different than
all the other risks. Right? All the other risks
are bangs mostly, meaning they happen
very quickly. Climate change is going
to be something that will never be fixed in my mind. Like, it will always be with us. It’s like a lifestyle condition
of industrial civilization. And I’m hoping we can find
some radical changes that will obviate the need to make
these hard political decisions that we just can’t seem to do. That’s why I talk about
something like carbon capture. Look, if you could actually turn
carbon dioxide into something that you deal with
like you do with waste, that would sort of skip past
those political challenges. And that doesn’t mean
it’s going to work. These are just
experimental ideas. They aren’t there yet. But the idea that
we’re not really putting far more resources into that, the idea that we’re not pouring money into figuring out how to solve that problem, baffles me, because that would be sort of the closest thing to a silver bullet, alongside doing the necessary political change. Because the things
attached to the politics around climate change are
attached to other issues as well. I’m hopeful and I see
younger generations wanting to do different
things, but it’s a big world. And I just, I don’t
know if we can count on that kind of change. So I’m hoping that we can
create technological innovation that kind of sidesteps the
need for it, ultimately. So that’s just how I’ve seen it. I know people are pushing
hard for a different– Naomi Klein, you
probably know, has a book coming out very soon. And she’s made that
case that we need to have a political revolution. That might be the case. It’s just been tough
seeing on the ground for that to really come true. So that’s kind of where I am. AUDIENCE: So this is sort of
a follow-up on what you just finished talking about. So as a writer, so I mean you
write about climate change. But the question– and I mean,
so even today, a lot of people do not believe it because
there are other people, and again, coming
to the [INAUDIBLE] there might be other
interests that people would write against it. And the same for
anything [INAUDIBLE] AI or nuclear war, for instance. I mean, [INAUDIBLE] And so I guess my
question as being someone who has written about climate
change for a long time and also having seen how
people respond to it, but you still see that
maybe half the population don’t think that’s a problem,
what are the lessons learned? What do you think
journalists today are doing wrong in the
sense that you’re not able to convince the public
that there’s a problem? Or a fraction of the public,
especially when there is maybe an adversary
trying to propagate things without the right data. BRYAN WALSH: Yeah, I
mean, I come back to this again and again. I spent so much time writing
stories, all these solutions, all these strategies. You should do this,
you should do that. You should stop flying. You should get carbon offsets. You should use biofuels. You should do this. And you’re absolutely right
that there’s a countervailing narrative out there, and
that countervailing narrative has the advantage because
it’s talking about a stasis. Basically when they say don’t
worry about climate change, don’t worry about energy,
that’s just the status quo. So we’re challenged
to overcome that. But the reason why I use
that example of Germany, for instance, look. There’s not a huge climate
denialism problem in Germany. It’s more an issue
of energy use. That’s getting past the
politics and just looking at OK, you want a world where
people have access to energy. That makes a big difference. We all have that. We’re all able to travel. We’re all able to
use technology. We’re all able to have
better lives because of that. And if you were to
spread that to everyone, if that was a
priority, then suddenly carbon emissions
balloon, at least with current levels
of technology and the current energy mix. And I guess that’s
part of the reason why I think it’s been hard
for people to deal with this. It’s not just that they’re
being told by Fox News or whatever this is all unreal. There’s a kind of a term
called climate dissonance, D-I-S-S-O-N-A-N-C-E, just
to be clear, not dissent. And that’s the feeling of– some of it’s guilt.
Some of it’s confusion. It’s when you sort of know
climate change is real, but you know that your
lifestyle is contributing to it, which it all is for
all of us, everyone here in this room, I guarantee. And so some people react to
that by denying it the same way I think a lot of us deny
hard psychological facts. We just prefer not
to think about it. Some people become very guilty. They become even
sort of despondent. And I think most people
fall more to the latter. They sort of want it to go away. And I don’t know if we can
get people around that. That’s why I’m hoping we can
create new forms of energy, new sort of backup technologies
that can just avoid– that sidestep that essentially. Because human beings are hard. It’s hard to make us
think about the long term. It’s hard to make us
think about the group. It’s very hard to make
us think about the globe and extremely
difficult to make us think about the future, which
is what climate change is. We think about
what’s happening now. We see the hurricanes,
we see the forest fires. Climate change is always
going to be a problem that we’ll feel most in the future. We as human beings do not
think well about the future. If you think about
yourself, like you, your own self in the future
is a stranger to you. Otherwise we’d all save
more for retirement, do a million other
things to improve our chances in the future. So I don’t have a lot of faith
in our ability to do that. And that’s why I’m hoping we can
figure out a way around that. And maybe that’s
giving up somewhat. I don’t think it’s giving up. I think it’s being
more realistic, and it goes hand-in-hand with
supporting politics, supporting the movements that
will support this, also supporting the
adaptation that we needed to really survive
this because that’s going to be a huge difference. It’s not just what
we do in terms of reducing carbon emissions. It’s about how do we
create a society that’s more resilient to
this risk and others. But yeah, as much as I would wish there were a magic wand we could wave over the media so that suddenly we create the message that makes everyone think the way we think, that’s never going
about that to a certain extent. AUDIENCE: And another
quick follow up, I mean, like, because
I mean, I guess– BRYAN WALSH: Sorry, go ahead. AUDIENCE: In my opinion,
I’ve seen, for instance, in some countries, the
anti-scientific sentiment is not as high as that maybe
in some other countries. Do you think there’s any
reason for that in terms of– BRYAN WALSH: For
why some countries are less anti-science? AUDIENCE: Yeah, I guess
maybe in terms of [INAUDIBLE] BRYAN WALSH: I
mean, like, yeah, I think education
plays a role in that. I think the media environment
plays a huge role in that. We have a– I started in the media in 2001. Started over in Hong Kong. And I’ve seen the number of
jobs in this industry decline, I think, by about
half since then. So you have fewer people who are
independent voices around this, and you have the rise of conservative media that’s now essentially state media, certainly obfuscating things. None of that helps. It makes things much worse. And it concerns me quite a lot. I don’t know how to fix that. You know what I mean? If I knew how to fix
the media business, I would be doing that
right now probably, instead of writing the book. But yeah, I think
that plays a big role, and that’s a big challenge
facing these risks because they are complicated. They require an
informed population. And that’s not something
we really have. And I actually do worry. This isn’t something
I cover so much. But when I look at things like
the resurgence of measles, for instance, here in the
US, that’s just ignorance. There’s no other
way around that. That is just going backwards. And I do get really worried
that that could continue to happen in other areas, too. It’s not that people are getting dumber. It’s just that they’re being given an alternate reality they can believe in if they want. And the internet plays
a role in that, too. You can find the facts to
support whatever you feel. But we’re not going backwards. The internet’s not going back
to wherever it came from, probably somewhere back here. So we just have to live with
that and try to work around it, I think. AUDIENCE: I have
a quick question that’s sort of built on this
and the climate dissonance that you mentioned. There is a lot of
resistance to even admitting that CFCs were an issue
until alternatives were found to them. And then all of a sudden,
it was a big thing, and we banned the CFCs and such. But given the sheer
difficulty of just getting a technological solution
to this and the fact that even people who accept
that climate change is happening are having trouble adjusting
their lives, what realistically should we be doing about this? You mentioned getting
involved in politics, but I’m just
concerned that we’re going to continue drifting for
the next however many years past the point where
it becomes catastrophic and then just become
cannibal nomads killing each other and stuff. BRYAN WALSH: Right,
“Mad Max,” yeah. Yeah, it’s– the CFCs
example is a great one. I have it in the book. ‘Cause
early on with climate change, we thought that’s
how we’d solve it. We’d come up with
basic substitutions. We created a treaty,
the Montreal Protocol, which is what really banned CFCs, probably the most successful international treaty of all time. That turned out to be
the case because that was an easier technological
problem to solve, which, to me, tells me this
is often more about how we use energy, how central
energy is to what we do, as compared to
something like CFCs. In terms of what we should do,
I mean, be involved politically, obviously. Again, if you have the ability,
if you have the intelligence, if you have the capital,
please work on these problems, this problem most of all. Because all the other
ones, we may skip them all. Never get that asteroid,
never get that supervolcano. We may luck out
with biology, AI. It might never happen. But this is
definitely happening. It’s happening every day. It’s only going to get worse. And then lastly, we
really need to think– adaptation, when I first
started reporting on this, was considered, like, giving up. Environmentalists
didn’t even really like to talk about adaptation
because they thought it would take away from the
energy of actually reducing carbon emissions. That’s not the case. We need to do that. We need to help poor
countries do that most of all. We need to accept that we’re
moving into a different world, and we need to get ready for it. And if we do that, even if things get worse, because they will, that will be the difference between a livable, if difficult, world for our descendants and, you know, “Mad Max” in the desert with the electric-guitar guy driving around with Tom Hardy. So that’s really what we need to do. But I think it’s
important for people to have that ability to
work on this project. This is the most
important thing out there. We should be doing it. So, and that’s it. Awesome. Thanks very much. [APPLAUSE]