Superintelligence: Paths, Dangers, Strategies - Chapter 14

The strategic picture 

In this chapter, Bostrom seeks to analyse the general direction in which humanity should be heading.


Accordingly, the author outlines two perspectives. The person-affecting perspective asks whether a proposed change would be in 'our interest', that is, in the interest of people who already exist. The impersonal perspective, by contrast, gives no special consideration to currently existing people: “The impersonal perspective sees great value in bringing new people into existence, provided they have lives worth living: the more happy lives created, the better.”


Science and technology strategy


Differential technological development


The principle of differential technological development holds that we should retard the development of dangerous and harmful technologies while accelerating the development of beneficial ones, especially those that reduce the risks posed by nature or by other technologies.


Preferred order of arrival


This subsection considers the preferred order in which disruptive technologies should emerge.


“Risks from nature—such as asteroid impacts, supervolcanoes, and natural pandemics—would be virtually eliminated, since superintelligence could deploy countermeasures against most such hazards, or at least demote them to the non-existential category (for instance, via space colonization). These existential risks from nature are comparatively small over the relevant timescales. But superintelligence would also eliminate or reduce many anthropogenic risks. In particular, it would reduce risks of accidental destruction, including risk of accidents related to new technologies. Being generally more capable than humans, a superintelligence would be less likely to make mistakes, and more likely to recognize when precautions are needed, and to implement precautions competently.”


Bostrom therefore advocates that superintelligence arrive before other dangerous technologies, such as advanced nanotechnology. The rationale is asymmetric: superintelligence would diminish the existential risks from nanotechnology, but nanotechnology could not do the same for superintelligence.


Rates of change and cognitive enhancement 


Any improvement in human cognitive ability would most probably accelerate technological progress across the board, “including progress toward various forms of machine intelligence, progress on the control problem, and progress on a wide swath of other technical and economic objectives.”


Pathways and enablers


Effects of hardware progress


Faster computers make machine intelligence easier to develop. Moreover, better hardware can substitute for software quality: the more powerful the hardware, the less skill is required to code a seed AI.


Correspondingly, rapid hardware progress appears undesirable from the impersonal perspective, though this verdict depends on how great the associated existential risks turn out to be.


Should whole brain emulation research be promoted? 


A hazard of pursuing whole brain emulation is that neuromorphic AI could emerge instead, a type of machine intelligence that Bostrom regards as unsafe.


However, at least three benefits of whole brain emulation (relative to AI) are put forward:

  1. “Its performance characteristics would be better understood than those of AI; 

  2. It would inherit human motives; 

  3. It would result in a slower takeoff.” 


Collaboration 


As noted in an earlier chapter, worldwide collaboration on superintelligence could provide numerous advantages.


The race dynamic and its perils 


A race dynamic in the quest for superintelligence is highly likely. It could have the benefit of faster innovation, since each competing project would strive to outdo the others.


Nevertheless, according to the author, a race dynamic would diminish investment in safety and make collaboration less likely, fostering mistrust between states.


On the benefits of collaboration 


If projects or states chose to collaborate, however, the advantages could be numerous: more investment in safety, the prevention of violent conflict, and the sharing of ideas on how to solve the control problem, which might lead to important breakthroughs.


Working together


Lastly, this subsection can be summarised by citing the 'common good principle', which Bostrom regards as crucial to the future of humanity and defines as follows:

“Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals.”




Comments