
What are catastrophic and existential risks? - Part II: Existential Risks

Updated: Apr 15, 2022


In a previous article, we defined and analyzed catastrophic risks, which involve a serious but temporary reduction in the prosperity and well-being of most or all of humanity. These risks include potential nuclear wars, human-made pandemics, biological warfare, extreme climate change scenarios, and some highly destructive natural catastrophes.


In this article, we will analyze existential risks. These risks are even more serious than catastrophic risks, as they threaten all of humanity with the possibility of extinction.


Existential Risks


Definition


Unlike catastrophic risks, existential risks are those that threaten the very existence of humanity; that is, the extinction of our species. These risks are less likely than catastrophic risks, since they would have to wipe out every human being on the planet, reaching every corner of the Earth. However, they represent a new and higher level of danger, as they would end all the potential that humanity could have developed and all the future generations that could have come into existence had we survived.


If 99% of humanity were wiped out, we could still manage to repopulate the planet over many arduous generations. There would be the possibility of developing socially and technologically again, eventually creating the technology for space travel and thus establishing colonies on other planets, which would make us more resilient to the threat of extinction.


However, if 100% of humanity were to go extinct, all the potential that we could have developed as a species would be lost forever, along with all the positive experiences that humanity could have had for the rest of its existence. In the following sections, we discuss the existential risks that present the highest level of danger, since they are more likely to happen than the others.


Uncontrollable nanotechnology and biotechnology


In the near future, we are likely to develop technologies like nanorobots. These tiny programmed robots could be put to both positive and negative uses. On the positive side, they could repair wounds or organs and help prevent or cure diseases, making the human body much more resilient than it is today.


However, we must also consider the possible negative uses of nanotechnology, which unfortunately are not difficult to imagine. For example, nanorobots built to destroy organic matter, or to use it to replicate themselves, could become a new type of virus that attacks or devours every organic being it comes across. If such nanorobots behaved like just another virus, they could perhaps be contained; but if their lethality or rate of reproduction were much higher than that of conventional viruses, they could lead to the extinction of humanity or of all organic life on the planet.


It is not difficult to imagine an analogous scenario with biotechnology. In this case, instead of a nanorobot, the agent could be a virus or bacterium modified for an experiment or for military use. If the pathogen were highly contagious and lethal, it could wipe out much or all of humanity.


Misaligned Artificial Intelligence


Another existential risk is what some experts call misaligned Strong Artificial Intelligence. Until now, we have only developed Weak Artificial Intelligence, which surpasses human beings only at very narrow, specific tasks, such as mathematical calculation, playing chess, or facial recognition.


However, as artificial intelligence gains greater pattern-recognition capabilities for its own learning, Strong Artificial Intelligence may eventually be reached: a system able to perform all the activities that human beings can perform.


If the problem stopped here, perhaps nothing worrying would happen. The problem, as some artificial intelligence experts have already warned, is that a strong artificial intelligence might be capable of programming itself to a level that surpasses human programmers. This would allow it to enter a self-reinforcing cycle of exponential improvement. The machine programs itself and becomes more intelligent, becoming Machine+1. Machine+1 programs itself and becomes more intelligent, becoming Machine+2, then Machine+3, Machine+4... and so on, at an extremely rapid pace. And it is not possible to know, a priori, what limits such an artificial intelligence will have. Artificial intelligence theorists call this a "Superintelligence". [i]
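As a purely illustrative toy model (our own sketch, not something taken from the sources cited): if each self-improvement cycle multiplied the machine's capability by some fixed factor greater than one, capability would grow geometrically with the number of cycles,

$$
I_{n+1} = (1 + \rho)\, I_n \quad\Longrightarrow\quad I_n = (1 + \rho)^n \, I_0, \qquad \rho > 0,
$$

where $I_0$ stands for the system's initial capability and $\rho$ for the assumed gain per cycle. Nothing in the argument tells us the value of $\rho$, whether it stays constant, or how many cycles are possible, which is exactly the "unknown limits" problem described above.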


If this happens, the intelligence of such a machine could surpass ours, leaving human beings behind at a level comparable to that of a chimpanzee relative to a human. Although a chimpanzee has a level of intelligence similar to that of a three-year-old human child, chimpanzees are not capable of creating cities, airplanes, or the internet. In the same way, a Superintelligence could develop new technologies that we cannot even imagine yet. This is why some say that a Superintelligence would be the last invention humanity will ever need to create. [ii]


But how do we know that a Superintelligence will have acceptable ethical values when it programs itself and chooses its own courses of action? Or that it will do things, or develop technologies, in line with what humanity wants? A machine does not share a human nature with us, nor a process of cultural learning similar to ours. If the machine follows moral rules (in the machine-relevant sense: rules of conduct, what to do and what not to do, what to pay attention to and what to ignore), it will be because:

  • We programmed them in advance, in a way the Superintelligence cannot access, modify, or delete, in which case the rules will be applied dogmatically, with consequences our more limited intelligence cannot foresee.

  • Or, thanks to its superior intelligence, the Superintelligence tricks its inventors into letting it program its own behavioral rules, which it then rewrites for ends completely unknown to us.

  • Or, if we allow it, it creates its own moral rules from the start, which need not resemble human or animal behavior at all, given its non-biological nature, completely different from ours.

This is what is called misaligned artificial intelligence: the machine is not aligned with human values, or with the values we want it to have. This, coupled with its superior intelligence, could make the Superintelligence the new "dominant species" on our planet.


In any case, programming ethical values and acceptable behavioral patterns into machines is an extremely important discipline for the future of humanity. The sooner we think these issues through in depth and make people aware of the urgent problems they entail, the better. Currently, large sums of money are invested in artificial intelligence, but very little is invested in making such technologies safe.


Irreversible dystopias


Existential risks also include those that, without leading to extinction, would permanently damage the future potential of humanity and our possibility of improving as a civilization. This amounts to a permanent block on the development of civilization: what we could call an irreversible, permanent dystopia.


This is a situation in which, for some reason, such as the use of some still-unknown technology, humanity cannot escape a totalitarian arrangement that perpetuates itself indefinitely. A government or military organization could use one of these existential or catastrophic risks as leverage to impose some kind of dictatorial "world government." If a country or an organization possesses nuclear weapons, highly dangerous nanorobots or biotechnology, or misaligned Artificial Intelligence, it could use such technology selectively, or as a threat, to impose itself on the rest of humanity for an indefinite and, in principle, unlimited time.


However, even though a government of this type could maintain itself for decades, centuries, or even thousands of years, this kind of totalitarianism would have to enjoy truly extraordinary stability to count as a permanent existential risk.


Unpredictable risks


Finally, we can include a further series of risks beyond our control. Chief among them is the possibility of developing technologies that are entirely unknown at present but that could pose great risks to humanity in the future.


Another such risk is the possibility of a technologically more advanced and hostile alien species destroying humanity. We cannot estimate the probability of this risk without knowing how common life is in outer space and, in particular, how many species develop into civilizations capable of intergalactic travel and find us within the window of humanity's existence, which is extremely brief compared to the history of the universe. Extrapolating from Earth, such species would be a very small minority.


However, the possibility of such a risk should make us cautious about announcing our presence in outer space. We have already sent radio signals into space that an alien civilization could detect. From the perspective of existential risks, this is irresponsible, since we do not know what values and behavior an extraterrestrial civilization more advanced than ours might have.


Not all risks are the same


In the third part of this article, we will discuss which existential risks are the most important, since these events differ vastly in their probability of happening and, in the case of catastrophic risks, in their degree of severity.


We will also list some recommended websites for reading more about these risks, and mention some research institutions that have recently emerged to study and analyze global risks, make forecasts, and develop protocols for dealing with them.


Read Part 3.

 

[i] Bostrom, Nick (2014) Superintelligence: Paths, Dangers, Strategies. Oxford University Press.


Russell, Stuart (2019) Human Compatible: Artificial Intelligence and the Problem of Control. Penguin Books.



[ii] This is what gives the title to the book Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat.
