The kinetics of an intelligence explosion
Once machines attain human-level intelligence, how long will it take them to develop strong superintelligence? This Chapter considers whether the transition is likely to be slow, moderate, or fast.
Timing and speed of the takeoff
Unlike Chapter 1, which asked when human-level machine intelligence might arrive, this Chapter asks: “if and when such a machine is developed, how long will it be until such a machine becomes radically superintelligent?”
“Takeoff” refers to the period during which a machine’s intelligence rises from roughly human level to radical superintelligence.
Accordingly, there are three possible types of transition:
Slow. A slow takeoff takes decades or centuries. Slow takeoff scenarios give humanity ample time to adapt and respond to the emerging superintelligence.
Fast. A fast takeoff takes minutes, hours, or days, leaving humanity little opportunity to adapt or respond. “Nobody need even notice anything unusual before the game is already lost.” If a fast takeoff occurs, humanity’s fate rests on whatever preparations are already in place.
Moderate. A moderate takeoff takes months or years. Moderate takeoff scenarios offer some opportunity to adapt and respond, but not enough for humanity to be confident in those responses.
To answer this question, one must consider the ‘optimization power’ applied to enhance the system’s intelligence, and “the responsiveness of the system to the application of a given amount of such optimization power” — its recalcitrance.
Therefore, we are left with the following equation:
Rate of change of intelligence = optimization power / recalcitrance
A system’s intelligence will grow rapidly if a great amount of optimization power is applied, or if the system’s recalcitrance is low — that is, if its intelligence is easy to amplify.
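The equation above can be made concrete with a toy simulation. The functional forms and numbers below are illustrative assumptions, not anything from Bostrom — the point is only that, for a fixed amount of optimization power, lower recalcitrance means a shorter takeoff:

```python
def takeoff_time(optimization_power, recalcitrance, start=1.0, target=10.0, dt=0.01):
    """Euler-integrate dI/dt = optimization_power / recalcitrance
    until intelligence I reaches an (arbitrary) superintelligence threshold."""
    intelligence, t = start, 0.0
    while intelligence < target:
        intelligence += (optimization_power / recalcitrance) * dt
        t += dt
    return t

# Same optimization power, different recalcitrance:
fast = takeoff_time(optimization_power=5.0, recalcitrance=0.5)
slow = takeoff_time(optimization_power=5.0, recalcitrance=5.0)
assert fast < slow  # a tenfold drop in recalcitrance shortens the takeoff tenfold
```

In this linear sketch the takeoff time scales directly with recalcitrance; the feedback effects Bostrom discusses later make the real picture more explosive.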
Recalcitrance
Non-machine intelligence paths
On the cognitive enhancement path, the easiest gains — such as “eliminating severe nutritional deficiencies” — have already been captured, since “the most severe deficiencies have already been largely eliminated in all but the poorest countries.” Further progress along this path therefore faces high recalcitrance. The brain-computer interface path also likely has high recalcitrance, and because initial experiments would be conducted on animals, humanity would be able to predict when a breakthrough was coming.
Emulation and AI paths
Similarly, the recalcitrance of the whole brain emulation path will likely be high, since “biological supporters will organize to support regulations restricting the use of emulation workers, limiting emulation copying, prohibiting certain kinds of experimentation with digital minds, instituting workers’ rights and a minimum wage for emulations, and so forth.”
On the AI path, by contrast, recalcitrance could be incredibly low. For instance, “if human-level AI is delayed because one key insight long eludes programmers, then when the final breakthrough occurs, the AI might leapfrog from below to radically above human level without even touching the intermediary rungs.” This is the scenario behind the AI ‘singularity’ hypothesis.
Human nature also inclines us to view intelligence from an anthropocentric standpoint, which may lead us to downplay the intelligence of machine systems and thus to overestimate their recalcitrance.
Optimization power and explosivity
That said, low recalcitrance alone does not guarantee a fast takeoff; the amount of optimization power applied matters just as much.
According to Bostrom, the applied optimization power will increase during the takeoff, “at least in the absence of deliberate measures to prevent this from happening.”
Accordingly, once the system attains the human baseline for individual intelligence, its intelligence will continue to grow, and with its capacity to learn it will most likely improve itself further. At some point its capabilities may become so great that most of the optimization power applied to it comes from the system itself. Because the system could expand itself rapidly and “incorporate vast amounts of content by digesting the Internet” (in the case of AI), or further biological brains could be scanned (in the case of whole brain emulation), among other routes, a slow, high-recalcitrance takeoff seems unlikely at that stage.
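The feedback loop described above — the system supplying a growing share of its own optimization power — can also be sketched with assumed, purely illustrative parameters. With no self-contribution, intelligence grows linearly; once the system's own contribution scales with its intelligence, growth becomes roughly exponential:

```python
def simulate(external_power=1.0, self_contribution=0.0,
             recalcitrance=1.0, steps=1000, dt=0.01):
    """Euler-integrate dI/dt = (external_power + self_contribution * I) / recalcitrance.
    self_contribution models the fraction of optimization power the
    system itself supplies per unit of its own intelligence I."""
    intelligence = 1.0
    for _ in range(steps):
        power = external_power + self_contribution * intelligence
        intelligence += (power / recalcitrance) * dt
    return intelligence

no_feedback = simulate(self_contribution=0.0)    # linear growth
with_feedback = simulate(self_contribution=1.0)  # explosive growth
assert with_feedback > no_feedback
```

Under these toy assumptions the feedback run ends orders of magnitude ahead of the feedback-free run over the same interval, which is the qualitative point of Bostrom's argument, not a quantitative prediction.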
*Any comments made by the author of this blog are written in Italics*