




Page added on February 15, 2015


Twelve ways the world could end

What are the chances of all human life being destroyed by a supervolcano? Or taken over by robots? A new report from Oxford university assesses the risks of apocalypse

Since the dawn of civilisation people have speculated about apocalyptic bangs and whimpers that could wipe us out. Now a team from Oxford university’s Future of Humanity Institute and the Global Challenges Foundation has come up with the first serious scientific assessment of the gravest risks we face. Although civilisation has ended many times in popular fiction, the issue has been almost entirely ignored by governments. “We were surprised to find that no one else had compiled a list of global risks with impacts that, for all practical purposes, can be called infinite,” says co-author Dennis Pamlin of the Global Challenges Foundation. “We don’t want to be accused of scaremongering but we want to get policy makers talking.”

The report itself says: “This is a scientific assessment about the possibility of oblivion, certainly, but even more it is a call for action based on the assumption that humanity is able to rise to challenges and turn them into opportunities. We are confronted with possibly the greatest challenge ever and our response needs to match this through global collaboration in new and innovative ways.”

There is, of course, room for debate about risks that are included or left out of the list. I would have added an intense blast of radiation from space, either a super-eruption from the sun or a gamma-ray burst from an exploding star in our region of the galaxy. And I would have included a sci-fi-style threat from an alien civilisation either invading or, more likely, sending a catastrophically destabilising message from an extrasolar planet. Both are, I suspect, more probable than a supervolcano.

But the 12 risks in the report are enough to be getting on with. A few of the existential threats are “exogenic”, arising from events beyond our control, such as asteroid impact. Most emerge from human economic and technological development. Three (synthetic biology, nanotechnology and artificial intelligence) result from dual-use technologies, which promise great benefits for society, including reducing other risks such as climate change and pandemics — but could go horribly wrong.

Assessing the risks is very complex because of the interconnections between them, and the probabilities given in the report are deliberately conservative. For instance, extreme global warming could trigger both ecological collapse and a failure of global governance.

The authors do not attempt to pull their 12 risks together and come up with an overall probability of civilisation ending within the next 100 years, but Stuart Armstrong of Oxford’s Future of Humanity Institute says: “Putting the risk of extinction below 5 per cent would be wildly overconfident.”
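As a rough back-of-envelope check on Armstrong’s remark, one can combine the report’s individual century-scale probabilities listed below under the simplifying (and, as the report itself stresses, unrealistic) assumption that the risks are independent. The dictionary keys and function name here are my own shorthand, not the report’s; risks with no stated probability are omitted.

```python
# Report's per-century probabilities as fractions (AI handled separately,
# since the report only gives a 0-10% range for it).
risks = {
    "unknown consequences":   0.001,       # 0.1%
    "asteroid impact":        0.0000013,   # 0.00013%
    "supervolcano":           0.0000003,   # 0.00003%
    "extreme climate change": 0.0001,      # 0.01%
    "nuclear war":            0.00005,     # 0.005%
    "global pandemic":        0.000001,    # 0.0001%
    "synthetic biology":      0.0001,      # 0.01%
    "nanotechnology":         0.0001,      # 0.01%
}

def combined(ai_risk):
    """P(at least one risk occurs), treating all risks as independent."""
    p_none = 1.0 - ai_risk
    for p in risks.values():
        p_none *= 1.0 - p
    return 1.0 - p_none

low = combined(0.0)    # AI at the bottom of its 0-10% range: ~0.14%
high = combined(0.10)  # AI at the top of its range: ~10.1%
```

Under these assumptions the total is dominated by wherever AI falls in its 0–10 per cent range, spanning roughly 0.14 per cent to about 10 per cent — which suggests why Armstrong’s 5 per cent floor is not obviously alarmist.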

Unknown consequences
A catch-all category to cover the unknown unknowns — an amalgamation of all the risks that we have not thought of or that seem ridiculously unlikely in isolation (such as sending signals to extraterrestrial civilisations that attract deadly alien attention). Together they represent a significant apocalyptic threat.
Probability: 0.1%

Asteroid impact
An asteroid at least 5km across — big enough to end civilisation, if not wipe out human life — hits Earth about once every 20 million years. But programs to map hazardous objects are making progress and, given enough warning, a concerted effort by the world’s space powers might succeed in deflecting an incoming asteroid on to a non-collision path.
Probability: 0.00013%

Artificial intelligence
AI is the most discussed apocalyptic threat at the moment. But no one knows whether there is a real risk of extreme machine intelligence taking over the world and sweeping humans out of their way. The study team therefore gives a very wide probability estimate.
Probability: 0-10%

Supervolcano
An eruption ejecting thousands of cubic kilometres of material into the atmosphere — far larger than anything experienced in human history — could lead to a “volcanic winter”, with effects similar to an asteroid impact or nuclear war. Such events are known from the geological record to have caused mass extinctions. And with today’s technology, there is not much we could do to prevent its effects.
Probability: 0.00003%

Ecological collapse
A full collapse of the global ecosystem, so that the planet could no longer sustain a population of billions, is one of the most complex risks in the study. Because many unknown sequences of events would be involved, the team does not even guess at a probability.
Probability: n/a

Bad global governance
This category covers mismanagement of global affairs so serious that it is the primary cause of civilisation collapse (rather than a secondary response to other disasters). One example would be the emergence of an utterly incompetent and corrupt global dictatorship. The probability is impossible to estimate.
Probability: n/a

Global system collapse
This means economic and/or societal collapse, involving civil unrest and a breakdown of law and order that makes the continuation of civilised life impossible anywhere on Earth. There are too many unknowns to give a probability estimate.
Probability: n/a

Extreme climate change
Conventional modelling of climate change induced by human activity (adding carbon dioxide to the atmosphere) has focused on the most likely outcome: global warming by up to 4C. But there is a risk that feedback loops, such as the release of methane from Arctic permafrost, could produce an increase of 6C or more. Mass deaths through starvation and social unrest could then lead to a collapse of civilisation.
Probability: 0.01%

Nuclear war
A nuclear war between the US and Russia was the chief apocalyptic fear of the late 20th century. That threat may have reduced but, with proliferation of nuclear weapons, there is still a risk of a conflict serious enough to cause a “nuclear winter” as a pall of smoke in the stratosphere shuts out sunlight for months. That could put an end to civilised life regardless of the bombs’ material impact.
Probability: 0.005%

Global pandemic
An apocalyptic disease would combine incurability (like Ebola), lethality (like rabies), extreme infectiousness (like the common cold) and a long incubation period (like HIV/Aids). If such a virus spread around the world before people were aware of the danger, the international health system would have to move with unprecedented speed and resources to save mankind.
Probability: 0.0001%

Synthetic biology
Genetic engineering of new super-organisms could be enormously beneficial for humanity. But it might go horribly wrong, with the emergence and release, accidentally or through an act of war, of an engineered pathogen targeting humans or a crucial part of the global ecosystem. The impact could be even worse than any conceivable natural pandemic.
Probability: 0.01%

Nanotechnology
Ultra-precise manufacturing on an atomic scale could create materials with wonderful new properties but they could also be used in frightening new weapons. There is also the “grey goo” scenario of self-replicating nanomachines taking over the planet.
Probability: 0.01%

Illustration by Lucas Varela

FT



35 Comments on "Twelve ways the world could end"

  1. Makati1 on Sun, 15th Feb 2015 6:41 am 

    Most of the above seem to be waiting in line to do the job. The probabilities are pure junk though. Artificial intelligence 0-10%? LMAO! We won’t last long enough for that to ever come close to even existing.

  2. Dredd on Sun, 15th Feb 2015 8:42 am 

    “The Pentagon says that climate change poses immediate risks to our national security. We should act like it.” – State of the Union Speech

    According to the Commander In Chief, the Pentagon says that climate change is the greatest threat to national security.

  3. Rodster on Sun, 15th Feb 2015 8:58 am 

    I’m reminded of a George Carlin skit. “The Earth will always be here, WE WON’T. Pack your shit folks, you’re going away.”

  4. Davy on Sun, 15th Feb 2015 9:08 am 

    Roadstar, where is George when you need him! He died too early.

  5. ghung on Sun, 15th Feb 2015 9:31 am 

    The probability that any of these assessments are correct:

    0.00%

  6. Repent on Sun, 15th Feb 2015 10:09 am 

    2nd that the Earth will exist forever. Even after the sun dies, the planet could float lifeless in space for eternity. Maybe eventually it would float into the habitable zone of another star and life would start back up again?

  7. Plantagenet on Sun, 15th Feb 2015 10:16 am 

    Interesting that peak oil doesn’t even make the list of potential risks

  8. MSN fanboy on Sun, 15th Feb 2015 10:37 am 

    Peak oil is considered a sub category and fits into 9 of the above risk analysis. The issue with peak oil is that it isn’t a probability, its arguably now. Fun Fun.

  9. Dredd on Sun, 15th Feb 2015 10:47 am 

    Repent on Sun, 15th Feb 2015 10:09 am

    2nd that the Earth will exist forever. Even after the sun dies, the planet could float lifeless in space for eternity. Maybe eventually it would float into the habitable zone of another star and life would start back up again?
    ============================
    We the Sun goes through its expansion stage to become a red giant, the Earth will be vaporized:

    “Earth’s fate is precarious. As a red giant, the Sun will have a maximum radius beyond the Earth’s current orbit, 1 AU (1.5×10¹¹ m), 250 times the present radius of the Sun. However, by the time it is an asymptotic giant branch star, the Sun will have lost roughly 30% of its present mass due to a stellar wind, so the orbits of the planets will move outward. If it were only for this, Earth would probably be spared, but new research suggests that Earth will be swallowed by the Sun owing to tidal interactions.”

    (Life According To Science)

  10. Dredd on Sun, 15th Feb 2015 10:49 am 

    … typo in my comment above …

    “We the Sun goes through its expansion stage”

    should be:

    “When the Sun goes through its expansion stage”

  11. Rodster on Sun, 15th Feb 2015 10:51 am 

    “where is George when you need him! He died too early.”

    ———————–
    He even coined the phrase “the upper one percent” (in the mid ’90’s) way before Occupy Wall Street happened.

  12. Apneaman on Sun, 15th Feb 2015 11:10 am 

    What a fucking joke. 10% from something not yet invented, yet only 0.00003% for a super-volcano. The biggest extinction event, the Permian, was caused by volcanism (massive CO2 releases) and so were the other 13 major extinction events. Volcanism and its CO2 were already underway when the Chicxulub meteor struck

  13. shortonoil on Sun, 15th Feb 2015 11:31 am 

    Artificial intelligence 0-10%?

    It is already here! Back in the 90’s I was developing AI apps for medical diagnostics and industrial applications. The day arrives when the machine gives you a result, and you can’t figure out how it came to that conclusion. Any AI developer will tell you similar stories. With billions of machines now interconnected, and with a processing speed millions of times faster than a human brain, it is just a matter of time; if it hasn’t happened already. If it did, it probably is not likely to tell us!

  14. Rodster on Sun, 15th Feb 2015 11:56 am 

    I agree with Short.

    This is Stephen Hawking’s greatest fear. Multiply that by 50 years of exponential technology advancement. Hell Google has been working on AI partners that will become your personal/virtual mate and will know more about you than your wife or girlfriend.

    Then there’s Boston Dynamics’ “Petman” and his best friend the 4-legged militarized robotic dog, another Google company, and of course they are working on driverless vehicles.

    On our current glide slope there could come a time in the not too distant future where the movies “Blade Runner” or “Terminator” become a reality.

  15. Davy on Sun, 15th Feb 2015 11:58 am 

    Short, your ETP of oil predicament will ensure the “HALS 9000’s” of the world have no future. “AI” is like travels to Mars conceivable but written out of the cards by POD & ETP of oil.

  16. Rodster on Sun, 15th Feb 2015 12:14 pm 

    Here are the following robots being worked on. And after watching those videos tell me you’re not a little concerned about where this could go?

    Petman https://www.youtube.com/watch?v=RGZoMPXG0MI

    Petman 2015 https://www.youtube.com/watch?v=NTtAu6VH6HA

    Boston Dynamics “Big Dog https://www.youtube.com/watch?v=ybacz2Y3kw4

    And then there are these Japanese human-like robots https://www.youtube.com/watch?v=MaTfzYDZG8c

  17. Apneaman on Sun, 15th Feb 2015 12:35 pm 

    The end will be human caused because of our inability to handle our technology. It won’t be AI, just plain old CO2 like all the other major extinction events on this planet. Maybe a nuclear war started by accident. All the fancy science fiction doom stories are fun, but they always leave out what is the energy source for the upcoming AI masters? Do the Boston Dynamics toys run on Dilithium crystals? Last time I checked it was fossil fuels in a very loud internal combustion engine; that was invented in 1864 – 151 years ago. Real futuristic stuff. Maybe they will power them with the hydrogen economy that is coming next year………again. People are far too impressed with technology and don’t seem to realize that the backbone of modern civilization is still dirty industry that needs men on tools to build and maintain it. You can automate a factory, but you can’t build refineries, petrochemical plants and power boilers with robots. Can’t extract the raw materials without men either. Maybe someday they would get there if they had the energy, but they don’t, so don’t hold your breath. If one pays close attention you can see industrial civilization falling apart before your very eyes. The national infrastructure has only been emergency band-aided for the last 35 years (Reagan/Deregulation). Maybe the AI robots will fix it up.

  18. Apneaman on Sun, 15th Feb 2015 12:40 pm 

    Americas infrastructure report card 2013

    D+

    Estimated investment needed by 2020
    $3.6 Trillion

    http://www.infrastructurereportcard.org/

    Good luck

  19. Apneaman on Sun, 15th Feb 2015 12:52 pm 

    Institutional/Systematic Breakdowns

    A Record Year of Recalls: Nearly 64 Million Vehicles

    http://www.nytimes.com/2015/02/13/business/auto-safety-recalls-set-record-of-nearly-64-million-vehicles-in-2014.html?_r=0

  20. nubs on Sun, 15th Feb 2015 12:52 pm 

    Here’s a link to the report: http://globalchallenges.org/publications/globalrisks/about-the-project/

    Have not yet read thru the details, but estimating quantitative probabilities of any of these events depends entirely on the assumptions that one makes. In other words, the probabilities are meaningless.

  21. Rodster on Sun, 15th Feb 2015 1:36 pm 

    “The end will be human caused because of our inability to handle out technology. It won’t be AI, just plain old co2 like all the other major extinction events on this planet.”

    You are entitled to your opinion. I have been around computers since the late 70’s and have witnessed first hand the mindblowing advancement in technology. Some of the new tech of 10-15 years ago was mere science fiction in the late 70’s to early 80’s. Anyone who follows the path of where AI development was 5 years ago and where it is today would be stunned if not shocked at what has been achieved in such a short time. And each level of advancement becomes faster and more powerful.

    AI advancement is a real threat and it is here already.

  22. Apneaman on Sun, 15th Feb 2015 1:59 pm 

    Of course it’s a threat. It’s human created. The law of unintended consequences applies to every thing we make.
    Can AI survive the end of the oil age or even the beginning of the end?

  23. ghung on Sun, 15th Feb 2015 3:01 pm 

    The only way AI could become an enduring threat would be if the systems controlled every aspect of their own production; the energy and supply chains from mine/well to final assembly. Not even close to that yet. We still have the power to pull the plug. I majored in AI, and the closest we’ve come is so-called “expert systems”. Anyone who thinks otherwise needs to read “Society of Mind” by Marvin Minsky. Does the Tianhe-2 supercomputer have the ability to cause all of the millions of actions, required for it to exist, to occur?

    The most damage AI systems could cause would be to wreak havoc on the normal controlled operations of our many interconnected systems; crash financial systems, disrupt infrastructure (power grids and such), shut down transportation, etc. By doing so, the ‘rogue’ AI systems would be self-limiting, bringing about their own demise. Integration of everything is as big a threat to artificial systems as it is to humans. No robot is an island, so to speak.

  24. Davy on Sun, 15th Feb 2015 3:54 pm 

    My AI threat fantasy is a human produced AI system of codes, hack engines, and virus engines with a modus operandi of wreaking havoc on the global IT systems at every level. The goal of this systematic AI construction is complete destruction of modern civilized connectivity. This could be programmed by a human and set loose to infect the entire system and once infected impossible to eradicate because it would be self-replicating. This monster would be AI driven so once within the entire world communication system it could turn on when a critical mass of connectivity would have been achieved. This could be similar to the dormancy of a biological virus so it could spread a maximum amount of viruses. It would intelligently hack and cover to achieve its infection. Once the monster has achieved its critical mass of connectivity infections it would then intelligently execute its destructive codes for the optimum effect at the optimum time. Being AI this monster would be able to do all this once the key is turned on its own.

    This key must be turned by a human and the maliciousness of this monster is human. Computer AI is just like a gun in that it is the human that makes a gun a danger. A gun in itself is nothing but steel. A human must load it and then pull the trigger. The important issue with AI is there is zero chance of it replicating and growing beyond the internal workings of the up and running global system. AI has zero chance of surviving the end of modern industrial man without his complexity and energy intensity. AI has zero chance of surviving a grid going down. Eventually when complexity falls below a certain point the digital pulse stops. So in conclusion my vision of an AI monster is a highly effective human produced virus with an IT MIRV on all IT fronts. A highly effective turnkey artificial intelligence technology killer once set loose cannot be stopped.

  25. Rodster on Sun, 15th Feb 2015 4:04 pm 

    “The only way AI could become an enduring threat would be if the systems controlled every aspect of their own production; the energy and supply chains from mine/well to final assembly.”

    BINGO !!!

    That’s what Stephen Hawking was referring to. AI development so advanced that they build themselves. Add the advancement of 3D printing to the mix and you can see red flags in the distance.

  26. Wilton Granger on Sun, 15th Feb 2015 6:42 pm 

    The “Society of Mind” is available as a free PDF at
    http://www.acad.bg/ebook/ml/Society%20of%20Mind.pdf

    Eileen

  27. Makati1 on Sun, 15th Feb 2015 6:54 pm 

    Rodster, I suggest you take a college course on Systems … and you will see the fallacy of your assertions. Any AI would start in the ground as ores and millions of steps later, maybe, become a sentient computer but still reliant on outside sources.

    I have read literally hundreds of SF novels over the last 60 years and a few had some form of AI in the story, long before even desk tops were available. Only those who understood the vast resources necessary to make it happen, were even near believable.

    Much as you techie religionists might love to have an AI god, it ain’t gonna happen. Robots are nothing more than semi-intelligent machines. When they shut down an assembly line and go on strike, independent of their masters, I’ll start to believe. Until then, I’ll just LMAO!

  28. FarQ3 on Sun, 15th Feb 2015 7:49 pm 

    Ghung, I don’t think AI is so outrageous. I think that we are close to there being a robot that can physically replace a man as a miner, technician and manufacturer.

    A robot could be controlled by a host supercomputer and could potentially perform every task that is required to perpetuate the construct of an army of robotic war machines.

    In fact the prototype machines for these armies are being constructed right now by us humans.

  29. Davy on Sun, 15th Feb 2015 8:31 pm 

    Far, your robots will mine asteroids too I assume. AI robots are in the same position as AltE, both being fossil fuel dependent. Not only FF dependent but dependent on a complex global system that is a human system.

    Another way to look at this AI issues is the same as EV cars and their potential as a replacement for the internal combustion engine. It won’t scale friend. Not only will it not scale it will not scale in time. The all important BAU whether it concerns man or machine has a shelf life that is coming due. No BAU no AI.

    All technology is dated. The ingredients that brought us to peak everything have run out. Nature is the only game in town. We have been her tool but all the time we thought she was our tool. What irony to have the tables turned, and so quickly. This AI is just more techno-delusional exceptionalism of a species that has overstepped its place in the grand scheme of planetary Earth.

  30. Apneaman on Mon, 16th Feb 2015 2:20 am 

    Microchip plants will be one of the first victims of collapse. Then futurists will have to switch over to D&D type fantasies. Watch out for that cave troll and those evil wizards.

    Want a real boogie man? Look in the mirror. Rapacious Apes……Suicidal Planet eating species destroyer.

  31. cOcOxxNuSS on Mon, 16th Feb 2015 4:42 am 

    AI is not the same as general intelligence (referred to as strong AI). The AI used today and programmed in languages like LISP is not the type of intelligence as humans or other animals have. It focuses on solving a special problem in little different varieties without direct aid from the coding. I did it myself during my academic studies.

    (strong) AI with consciousness, emotions and real decisions is very far away. First of all it would be necessary to fully understand how our intelligence, consciousness, memories and so forth function. Then you could try to apply that to robots and stuff. Also the different departments have to work together more closely and so on.

    Furthermore there are the issues that others here pointed to. (FF dependency etc.)

    The article is obviously pretty dumb BS. Mixing SF with real threats and the risk assessment is ridiculous.

  32. cOcOxxNuSS on Mon, 16th Feb 2015 4:46 am 

    And the same applies for nanotechnology. It is mostly concerned with nano coating and microfabrication technologies to get some desired properties for manufacturing tools/materials.

  33. Davy on Mon, 16th Feb 2015 6:51 am 

    Great points CoCo from one who sounds like he is an expert.

  34. Rodster on Mon, 16th Feb 2015 9:53 am 

    Mak- “Rodster, I suggest you take a college course on Systems … and you will see the fallacy of your assertions.”

    Those assertions are those of Stephen Hawking as well, but nice try. And I’ll bet he knows a lot more about AI than you do.

  35. Rodster on Mon, 16th Feb 2015 10:02 am 

    Mak- “Rodster, I suggest you take a college course on Systems … and you will see the fallacy of your assertions.”

    Those are pretty much the same assertions that Stephen Hawking has made and he takes it two steps further but nice try anyway.

    I’ll go out on a limb and say he knows a lot more about the threat of AI than you do.
