Robert Walker

Not a massive meteorite strike. Everyone says this, but if you look into the topic closely, it's not possible. We haven't been hit by anything that big for over three billion years. The big craters on the Moon, on Mars and its moons, on Mercury, and a very old, huge, eroded crater on Earth as well - they all date back more than three billion years, to when the solar system was still settling down into its current state, towards the end of the "late heavy bombardment".

The reason, the simulations suggest, is that Jupiter protects us. It breaks up really big comets, or they hit Jupiter or the Sun, or get ejected from the solar system, before they can get into orbits close enough to ours to be any problem.

They'll say - look at the meteorite strike that ended the dinosaur era!

But we are not dinosaurs. Turtles, crocodiles, alligators, small mammals, flying dinosaurs (the birds), dawn redwood trees, pine trees - many lifeforms survived that impact. And humans, with the barest minimum of our technology, are able to survive anywhere from the Arctic to the hottest of deserts to tropical rainforests. Some of us would survive a giant impact like that.

And it is also extremely unlikely that we would be hit by anything even that big. We have already found all the asteroids of 10 km diameter or more between Jupiter and the Sun, and 90% of the 1 km ones as well. Impacts by 10 km asteroids happen only about once every 100 million years. And since nearly all of those asteroids are found, any we've missed must currently be way beyond Jupiter, which also means we'd get at least a bit of warning.

So - that's both extremely unlikely - perhaps only 1 in 100 million in the next decade, maybe less - and also would not make us extinct.
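The "once per 100 million years" figure above is enough for a back-of-envelope check on the odds. This is a minimal sketch, treating impacts as a Poisson process with that rate (the rate is from the text; everything else is my illustration):

```python
import math

rate_per_year = 1 / 100_000_000   # ~one 10 km impact per 100 million years
years = 10

# Poisson process: P(at least one impact in the window) = 1 - exp(-rate * t)
p_decade = 1 - math.exp(-rate_per_year * years)
print(f"Naive chance of a 10 km impact in the next decade: {p_decade:.1e}")
# about 1 in 10 million, before accounting for surveys
```

That is the naive base rate; since nearly all such asteroids are already catalogued and none is on a collision course, the residual probability is far lower, consistent with the "1 in 100 million, maybe less" above.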

A supernova would not make us extinct either - none is near enough to be deadly to Earth anyway. We are also protected by our atmosphere, and there would be many people on the other side of the world at the time. The same applies to a gamma ray burst. And both are very unlikely: the galaxy is a hundred thousand light years in diameter, so the next supernova is almost certain to be far too far away to harm us, and gamma ray bursts are very tightly focused, so one would need to be pointed directly at us, which again is very unlikely.

And there would probably be at least a fair few people in submarines - it is very hard to think of any disaster that could kill everyone in a submarine. Tsunamis are a surface phenomenon; a hundred meters or so below the surface, chances are you wouldn't even know one was happening.

A giant supervolcano eruption like Yellowstone wouldn't make us extinct either. It's a disaster in the immediate vicinity - an estimated 90,000 would die - but globally the main effect is cooling for about ten years. If it happened without warning, many would die, but we would not go extinct. With a couple of years of warning, we could prevent nearly all the deaths from starvation by planting different crops, and by storing up food that is otherwise used to feed cattle or to produce ethanol, in the year or two before the eruption. For details see my answer to What will really happen when the Yellowstone supervolcano erupts?

As for black holes - well, there can't be many mini black holes in the universe or we'd see stars blinking out, and there are no large ones near the Sun, as we'd spot them by their accretion disks.

And we are not likely to create one ourselves, because the galaxy has natural particle accelerators the size of stars and larger - the fast particles they create, which hit Earth regularly, don't make mini black holes, or if they do, they are harmless - and we are nowhere close to duplicating those energy levels. We'd need to be able to build something like CERN, but larger than a star, before it's a concern.

That leaves diseases. But with a natural disease you generally get a few people who are immune to it.

It would surely be possible to genetically engineer some bug intentionally to make us extinct - but a few people would be immune, and there would also be others who are never contacted. If nothing else, the uncontacted tribes, which still exist on a few islands and in a few forests, would emerge bewildered into an uninhabited world :). That would make a fun sci fi story, though I don't think it is likely in reality.

Climate change won't do this either. Its effects are much exaggerated by a few people who go over the top in the opposite direction from the climate skeptics - climate exaggerators, perhaps?

See for instance, How Guy McPherson gets it wrong

You also hear that we could turn Earth into Venus through climate change. This just can't happen. We would need to release and burn many times the entire global inventory of coal, oil, methane and so on. We couldn't do it even if we never took any precautions.

That leaves things we can do to ourselves.

As for the idea of an AI taking over the world - I don't find that plausible at all. Here I'm voicing my personal view and opinion.

I think we are a long way away from that - indeed, that we will never build computers that can understand truth in the way a human can. And if we do achieve strong AI, I don't think it will be through programming, but rather through genetics, biology, or some such - or some approach that is part biology, part machine. That would involve many ethical issues. E.g., enhancing the intellectual capacity of a whale and giving it the ability to speak like a human - is that an acceptable thing to do? What kind of life would such a creature live - would it not potentially be very unhappy and miserable? Anyway, we aren't close to that capability yet, AFAIK.

Why Strong Artificial Intelligences Need Protection From Us - Not Us From Them

I can't see this happening within ten years for ethical reasons.

As for nanotechnology - we can make tiny nanomachines, and already do. But we are nowhere near able to make a nanite - a self-replicating nanomachine. Nowhere near.

We can't even build a "clanking replicator" - a factory, solar panel, or other large machine or device able to create a copy of itself. The advances in 3D printing take us part of the way there, but printers still need humans to source the materials they use and to assemble the copy of the printer, and they are not quite at the stage of printing out computer chips.

Nanoreplicators could happen eventually - I see no technical reason why not - but surely not in just ten years. I don't think that is even long enough to get to a "clanking replicator"; there are many steps still to be filled in, though there are ideas about how we could do it.

The one thing that I do think needs great care is life itself. Experiments like this one:

Synthetic bug given 'fewest genes' - BBC News

And particularly

First life with 'alien' DNA

That's DNA with six bases instead of four.

The researchers take great care, and I'm sure will continue to do so.

The thing is, we don't know that DNA based life is optimal. We have only the one example, and there is no way that life could have explored the entire solution space when DNA evolved. It could easily be a "local maximum" that seems optimal because evolution had a bit of a blind spot and never explored some particular direction. This can definitely happen with higher animals - for example, Australian marsupials never evolved into placental mammals. Could the same have happened with microbes, so that Earth life has never evolved some more optimal form of life that we could either make in a laboratory or find on another planet?

As an example of how non-DNA life could be better than DNA life: it could get by with smaller cells (DNA life, though it works, is Rube Goldberg in its complexity, and life that works with less complexity could be much smaller), meaning it needs fewer resources and can reproduce more quickly. It could have a faster, more efficient metabolism, or a biochemistry that is in some way more robust to environmental hazards. Or it could be better at photosynthesis - even a few percent improvement, if DNA life can't match it, could let it take over through an exponential process, starting with the green algae in the sea, the basis of much of the marine food chain.

It could produce chemicals poisonous to us as a byproduct - like BMAA, which is possibly implicated in Alzheimer's when misincorporated into proteins in place of L-serine, which it closely resembles. There's no advantage to green algae in causing Alzheimer's in humans, and in the same way there need be no advantage to XNA life in creating chemicals poisonous to humans or other Earth life - it might just be how it works. It could well be invisible to Earth life, not perceived as a threat because it doesn't produce any of the carbohydrates and peptides that our cells' defences respond to, so they respond to nothing except the actual physical trauma. It might, for instance, live in our stomachs, in the linings of our lungs and mouths, or on our skin, with our bodies doing nothing to stop it, and then harm us either by eating us directly or through the chemicals it produces. And it doesn't need to harm humans to be a hazard: if it harms any of the lifeforms we depend on, it could be just as problematic, and either lead to our extinction or severely diminish the habitability of the Earth for humans.

Exponential growth would start slowly, but then continue more and more rapidly. It is surely low probability in the first place; nevertheless, it could potentially, in theory, happen within a decade. E.g., some form of life is created that can survive on and in humans as well as other animals, that our immune systems don't recognize, and that produces chemicals harmful to us. One of the applications of artificial life is the possibility of making implants that our bodies won't reject, so that's a line of research that, if the researchers were careless, could lead directly to a lifeform harmful to humans and hard to protect against. Again, I'm not saying we shouldn't pursue this research, just that it needs care; done very carelessly, it could have the effect described.
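The "starts slowly, then continues more and more rapidly" point is easy to see with numbers. A minimal sketch, with assumed figures (one founding cell, doubling once per day - these are illustrative, not from any real organism):

```python
# Illustrative exponential growth: invisible for weeks, then enormous.
cells = 1
for day in range(1, 61):
    cells *= 2                      # assumed doubling time: one day
    if day % 10 == 0:
        print(f"day {day}: about {cells:.2e} cells")
```

After 10 days there are only about a thousand cells - nothing detectable - yet by day 60 the count is of order 10^18. That is why an exponential process can look harmless for a long time and then overwhelm us quickly.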

For that reason, among others, I think we also need to take great care returning samples from another planet that may have life in them. It's not likely that we can return a sample from Mars before 2025, because it would take at least a decade just to pass all the laws needed to permit such a sample return, and that process has not started yet.

So - though it is a low probability, I think that's one of the few things that could make us extinct. Genetic engineering also. Both could be really positive things, so I'm not saying we must never do them. But they need a great deal of care. Humans have never done such things before, and past experience of doing other things may not be a reliable guide.

Tiny probability. But when existential risks are concerned, we have to consider tiny probabilities.

The good news is that these are things we can do something about by making sure we take care.

There are other possible risks, and some time I'll expand this answer to look into those too and check them all off. See Nick Bostrom's list: Existential Risks

But it is a bit out of date. For instance, he mentions runaway global warming, which is now known not to be possible through just burning fossil fuels and the like. Perhaps he wrote it at a time when newly published research seemed, for a year or two, to show that it was possible.

It gives an idea of some of the things that people have thought need to be looked into. The ones I mention here are the ones that seem closest to possible. His list covers not just human extinction events but anything that could permanently reduce our future prospects.

Also, it is based on the ideas of posthumanism - that in future mind uploading would be possible, that there could be superintelligent programs, and so on. If you don't think any of those are possible, as I don't, then many of the things in the list are things you don't think could happen. If you do think they are possible, his list will suggest other future possibilities. Some transhumanists think we could achieve a runaway technological event they call the "singularity" in the near future - that depends on this idea of superintelligent computer programs. If you believe in this, then you'd think we could become extinct as a result of a badly programmed superintelligence taking over the world. For me that's just science fiction, as explained in Why Strong Artificial Intelligences Need Protection From Us - Not Us From Them, and a couple of other articles linked from that one.

Here is an article I wrote about this: How To Keep Earth Safe - Samples From Mars Sterilized Or Returned To Above Geostationary Orbit - Op Ed

And one about asteroid impacts: Giant Asteroid Headed Your Way? - How We Can Detect And Deflect Them

You might also be interested in my answer to Is it true that a neutron star will hit the Earth in 75 years from now? The answer is no - vastly improbable - but it is fun to look at.

I've now written this up as a rather longer article that goes into more possibilities here:

Could Anything Make Humans Extinct In The Near Future?

About the Author

Robert Walker

Writer of articles on Mars and Space issues - Software Developer of Tune Smithy, Bounce Metronome etc.
Studied at Wolfson College, Oxford
Lives in Isle of Mull
Top Writer 2017, 2016, and 2015
Published Writer: HuffPost, Slate, and 4 more