64. E.g. Warwick (2002). Stephen Hawking even suggested that taking this step might be necessary in order to keep up with advances in machine intelligence: "We must develop as quickly as possible technologies that make possible a direct connection between brain and computer, so that artificial brains contribute to human intelligence rather than opposing it" (reported in Walsh [2001]). Ray Kurzweil concurs: "As far as Hawking's … recommendation is concerned, namely direct connection between the brain and computers, I agree that this is both reasonable, desirable and inevitable. [sic] It's been my recommendation for years" (Kurzweil 2001).
65. See Lebedev and Nicolelis (2006); Birbaumer et al. (2008); Mak and Wolpaw (2009); and Nicolelis and Lebedev (2009). A more personal outlook on the problem of enhancement through implants can be found in Chorost (2005, Chap. 11).
66. Smeding et al. (2006).
67. Degnan et al. (2002).
68. Dagnelie (2012); Shannon (2012).
69. Perlmutter and Mink (2006); Lyons (2011).
70. Koch et al. (2006).
71. Schalk (2008). For a general review of the current state of the art, see Berger et al. (2008). For the case that this would help lead to enhanced intelligence, see Warwick (2002).
72. Some examples: Bartels et al. (2008); Simeral et al. (2011); Krusienski and Shih (2011); and Pasqualotto et al. (2012).
73. E.g. Hinke et al. (1993).
74. There are partial exceptions to this, especially in early sensory processing. For example, the primary visual cortex uses a retinotopic mapping, which means roughly that adjacent neural assemblies receive inputs from adjacent areas of the retinas (though ocular dominance columns somewhat complicate the mapping).
75. Berger et al. (2012); Hampson et al. (2012).
76. Some brain implants require two forms of learning: the device learning to interpret the organism's neural representations and the organism learning to use the system by generating appropriate neural firing patterns (Carmena et al. 2003).
77. It has been suggested that we should regard corporate entities (corporations, unions, governments, churches, and so forth) as artificial intelligent agents: entities with sensors and effectors, able to represent knowledge, perform inference, and take action (e.g. Kuipers [2012]; cf. Huebner [2008] for a discussion on whether collective representations can exist). They are clearly powerful and ecologically successful, although their capabilities and internal states are different from those of humans.
78. Hanson (1995, 2000); Berg and Rietz (2003).
79. In the workplace, for instance, employers might use lie detectors to crack down on employee theft and shirking, by asking the employee at the end of each business day whether she has stolen anything and whether she has worked as hard as she could. Political and business leaders could likewise be asked whether they were wholeheartedly pursuing the interests of their shareholders or constituents. Dictators could use them to target seditious generals within the regime or suspected troublemakers in the wider population.
80. One could imagine neuroimaging techniques making it possible to detect neural signatures of motivated cognition. Without self-deception detection, lie detection would favor individuals who believe their own propaganda. Better tests for self-deception could also be used to train rationality and to study the effectiveness of interventions aimed at reducing biases.
81. Bell and Gemmell (2009). An early example is found in the work of MIT's Deb Roy, who recorded every moment of his son's first three years of life. Analysis of this audiovisual data is yielding information on language development; see Roy (2012).
82. Growth in the total world population of biological human beings will contribute only a small factor. Scenarios involving machine intelligence could see the world population (including digital minds) explode by many orders of magnitude in a brief period of time. But that road to superintelligence involves artificial intelligence or whole brain emulation, so we need not consider it in this subsection.
83. Vinge (1993).
CHAPTER 3: FORMS OF SUPERINTELLIGENCE
1. Vernor Vinge has used the term "weak superintelligence" to refer to such sped-up human minds (Vinge 1993).
2. For example, if a very fast system could do everything that any human could do except dance a mazurka, we should still call it a speed superintelligence. Our interest lies in those core cognitive capabilities that have economic or strategic significance.
3. At least a millionfold speedup compared to human brains is physically possible, as can be seen by comparing the speed and energy of the relevant brain processes with those of more efficient forms of information processing. The speed of light is more than a million times greater than that of neural transmission, synaptic spikes dissipate more than a million times more heat than is thermodynamically necessary, and current transistor frequencies are more than a million times faster than neuron spiking frequencies (Yudkowsky [2008a]; see also Drexler [1992]). The ultimate limits of speed superintelligence are bounded by light-speed communication delays, quantum limits on the speed of state transitions, and the volume needed to contain the mind (Lloyd 2000). The "ultimate laptop" described by Lloyd (2000) would run a 1.4×10^21 FLOPS brain emulation at a speedup of 3.8×10^29× (assuming the emulation could be sufficiently parallelized). Lloyd's construction, however, is not intended to be technologically plausible; it is only meant to illustrate those constraints on computation that are readily derivable from basic physical laws.
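As a quick check on that speedup figure (my own back-of-envelope arithmetic, using the roughly 5.4×10^50 logical operations per second that Lloyd [2000] derives for his one-kilogram ultimate laptop, a figure not restated in this note):

```latex
\mathrm{speedup} \approx \frac{5.4 \times 10^{50}\ \mathrm{ops/s}}{1.4 \times 10^{21}\ \mathrm{ops/s}} \approx 3.8 \times 10^{29}
```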
4. With emulations, there is also the issue of how long a human-like mind can keep working on something before going mad or falling into a rut. Even with task variety and regular holidays, it is not certain that a human-like mind could live for thousands of subjective years without developing psychological problems. Furthermore, if total memory capacity is limited—a consequence of having a limited neuron population—then cumulative learning cannot continue indefinitely: beyond some point, the mind must start forgetting one thing for each new thing it learns. (Artificial intelligence could be designed so as to ameliorate these potential problems.)
5. Accordingly, nanomechanisms moving at a modest 1 m/s have typical timescales of nanoseconds. See section 2.3.2 of Drexler (1992). Robin Hanson mentions 7-mm "tinkerbell" robot bodies moving at 260 times normal speed (Hanson 1994).
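The scaling behind both figures can be checked on the back of an envelope (my own restatement, not Drexler's or Hanson's derivation: it assumes a mechanism's characteristic timescale is its characteristic length divided by its speed, and that the 260× figure reflects linear scaling down from a roughly 1.8-m human body):

```latex
t \approx \frac{L}{v} \approx \frac{10^{-9}\ \mathrm{m}}{1\ \mathrm{m/s}} = 1\ \mathrm{ns}, \qquad \frac{1.8\ \mathrm{m}}{7\ \mathrm{mm}} \approx 260
```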
6. Hanson (2012).
7. "Collective intelligence" does not refer to low-level parallelization of computing hardware but to parallelization at the level of intelligent autonomous agents such as human beings. Implementing a single emulation on a massively parallel machine might result in speed superintelligence if the parallel computer is sufficiently fast, but it would not produce a collective intelligence.
8. Improvements to the speed or the quality of the individual components could also indirectly affect the performance of collective intelligence, but here we mainly consider such improvements under the other two forms of superintelligence in our classification.
9. It has been argued that a higher population density triggered the Upper Paleolithic Revolution and that beyond a certain threshold accumulation of cultural complexity became much easier (Powell et al. 2009).
10. What about the Internet? It seems not yet to have amounted to a super-sized boost. Maybe it will do so eventually. It took centuries or millennia for the other examples listed here to reveal their full potential.
11. This is, obviously, not meant to be a realistic thought experiment. A planet large enough to sustain seven quadrillion human organisms with present technology would implode, unless it were made of very light matter or were hollow and held up by pressure or other artificial means. (A Dyson sphere or a Shellworld might be a better solution.) History would have unfolded differently on such a vast surface. Set all this aside.
12. Our focus here is on the functional properties of a unified intellect, not on the question of whether such an intellect would have qualia or whether it would be a mind in the sense of having subjective conscious experience. (One might ponder, though, what kinds of conscious experience might arise from intellects that are more or less integrated than those of human brains. On some views of consciousness, such as the global workspace theory, it seems one might expect more integrated brains to have more capacious consciousness. Cf. Baars (1997), Shanahan (2010), and Schwitzgebel (2013).)
13. Even small groups of humans that have remained isolated for some time might still benefit from the intellectual outputs of a larger collective intelligence. For example, the language they use might have been developed by a much larger linguistic community, and the tools they use might have been invented in a much larger population before the small group became isolated. But even if a small group had always been isolated, it might still be part of a larger collective intelligence than meets the eye—namely, the collective intelligence consisting of not only the present but all ancestral generations as well, an aggregate that can function as a feed-forward information processing system.
14. By the Church–Turing thesis, all computable functions are computable by a Turing machine. Since any of the three forms of superintelligence could simulate a Turing machine (if given access to unlimited memory and allowed to operate indefinitely), they are by this formal criterion computationally equivalent. Indeed, an average human being (provided with unlimited scrap paper and unlimited time) could also implement a Turing machine, and thus is also equivalent by the same criterion. What matters for our purposes, however, is what these different systems can achieve in practice, with finite memory and in reasonable time. And the efficiency variations are so great that one can readily make some distinctions. For example, a typical individual with an IQ of 85 could be taught to implement a Turing machine. (Conceivably, it might even be possible to train some particularly gifted and docile chimpanzee to do this.) Yet, for all practical intents and purposes, such an individual is presumably incapable of, say, independently developing general relativity theory or of winning a Fields medal.
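To make vivid how undemanding the formal criterion is, here is a minimal sketch (my own illustration in Python; the text specifies no implementation) of the rote table-lookup that implementing a Turing machine requires:

```python
# A minimal Turing machine simulator: formal universality demands nothing
# more than mechanically applying a transition table, step after step.
# (Illustrative sketch; this particular table increments a binary number.)

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)
        write, move, state = rules[(state, symbol)]  # the table lookup
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Transition table for binary increment: scan right to the end of the
# number, then propagate a carry leftward.
rules = {
    ("start", "0"): ("0", "R", "start"),  # scan right over the digits
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),  # step back onto the last digit
    ("carry", "1"): ("0", "L", "carry"),  # 1 plus carry: write 0, carry on
    ("carry", "0"): ("1", "L", "done"),   # 0 plus carry: write 1, finished
    ("carry", "_"): ("1", "L", "done"),   # overflow: new leading 1
    ("done", "0"): ("0", "L", "done"),    # walk left off the number
    ("done", "1"): ("1", "L", "done"),
    ("done", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(rules, "1011"))  # 11 + 1 -> "1100"
```

A patient human executing this loop with pencil and paper counts as "computationally equivalent" to the machine in the formal sense, which is precisely why the formal equivalence implies next to nothing about practical capability.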
15. Oral storytelling traditions can produce great works (such as the Homeric epics), but perhaps some of the contributing authors possessed uncommon gifts.
16. Unless it contains as components intellects that have speed or quality superintelligence.
17. Our inability to specify what all these problems are may in part be due to a lack of trying: there is little point in spending time detailing intellectual jobs that no individual and no currently feasible organization can perform. But it is also possible that even conceptualizing some of these jobs is itself one of those jobs that we currently lack the brains to perform.
18. Cf. Boswell (1917); see also Walker (2002).
19. This mainly occurs in short bursts in a subset of neurons—most have more sedate firing rates (Gray and McCormick 1996; Steriade et al. 1998). There are some neurons ("chattering neurons," also known as "fast rhythmically bursting" cells) that may reach firing frequencies as high as 750 Hz, but these seem to be extreme outliers.
20. Feldman and Ballard (1982).
21. The conduction velocity depends on axon diameter (thicker axons are faster) and on whether the axon is myelinated. Within the central nervous system, transmission delays can range from less than a millisecond up to 100 ms (Kandel et al. 2000). Transmission in optical fibers is around 68% of c (because of the refractive index of the material). Electrical cables are roughly the same speed, at 59–77% of c.
22. This assumes a signal velocity of 70% of c. Assuming 100% of c ups the estimate to 1.8×10^18 m^3.
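The relation between the two estimates is cubic (my own arithmetic, assuming that for a fixed round-trip delay the attainable radius grows linearly with signal velocity, so the bounding volume grows with its cube):

```latex
V_{0.7c} \approx (0.7)^3 \times 1.8 \times 10^{18}\ \mathrm{m}^3 \approx 6.2 \times 10^{17}\ \mathrm{m}^3
```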
23. The number of neurons in an adult human male brain has been estimated at 86.1 ± 8.1 billion, a number arrived at by dissolving brains and fractionating out the cell nuclei, counting the ones stained with a neuron-specific marker. In the past, estimates in the neighborhood of 75–125 billion neurons were common. These were typically based on manual counting of cell densities in representative small regions (Azevedo et al. 2009).
24. Whitehead (2003).
25. Information processing systems can very likely use molecular-scale processes for computing and data storage and reach at least planetary size in extent. The ultimate physical limits to computation set by quantum mechanics, general relativity, and thermodynamics are, however, far beyond this "Jupiter brain" level (Sandberg 1999; Lloyd 2000).