149. For example, the fifth annual BIOMEMS conference, June 2003, San Jose, http://www.knowledgepress.com/events/11201717.htm.
150. First two volumes of a planned four-volume series: Robert A. Freitas Jr., Nanomedicine, vol. I, Basic Capabilities (Georgetown, Tex.: Landes Bioscience, 1999); Nanomedicine, vol. IIA, Biocompatibility (Georgetown, Tex.: Landes Bioscience, 2003); http://www.nanomedicine.com.
151. Robert A. Freitas Jr., “Exploratory Design in Medical Nanotechnology: A Mechanical Artificial Red Cell,” Artificial Cells, Blood Substitutes, and Immobilization Biotechnology 26 (1998): 411–30, http://www.foresight.org/Nanomedicine/Respirocytes.html.
152. Robert A. Freitas Jr., “Microbivores: Artificial Mechanical Phagocytes Using Digest and Discharge Protocol,” Zyvex preprint, March 2001, http://www.rfreitas.com/Nano/Microbivores.htm; Robert A. Freitas Jr., “Microbivores: Artificial Mechanical Phagocytes,” Foresight Update no. 44, March 31, 2001, pp. 11–13, http://www.imm.org/Reports/Rep025.html; see also microbivore images at the Nanomedicine Art Gallery, http://www.foresight.org/Nanomedicine/Gallery/Species/Microbivores.html.
153. Robert A. Freitas Jr., Nanomedicine, vol. I, Basic Capabilities, section 9.4.2.5, “Nanomechanisms for Natation” (Georgetown, Tex.: Landes Bioscience, 1999), pp. 309–12, http://www.nanomedicine.com/NMI/9.4.2.5.htm.
154. George Whitesides, “Nanoinspiration: The Once and Future Nanomachine,” Scientific American 285.3 (September 16, 2001): 78–83.
155. “According to Einstein’s approximation for Brownian motion, after 1 second has elapsed at room temperature a fluidic water molecule has, on average, diffused a distance of ~50 microns (~400,000 molecular diameters) whereas a 1-micron nanorobot immersed in that same fluid has displaced by only ~0.7 microns (only ~0.7 device diameter) during the same time period. Thus Brownian motion is at most a minor source of navigational error for motile medical nanorobots.” See K. Eric Drexler et al., “Many Future Nanomachines: A Rebuttal to Whitesides’ Assertion That Mechanical Molecular Assemblers Are Not Workable and Not a Concern,” a Debate about Assemblers, Institute for Molecular Manufacturing, 2001, http://www.imm.org/SciAmDebate2/whitesides.html.
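These figures can be sanity-checked in a few lines. The sketch below is only an order-of-magnitude estimate under stated assumptions: Einstein’s one-dimensional RMS displacement √(2Dt), a measured self-diffusion coefficient for water, and the Stokes–Einstein relation D = kT/(6πηr) for the nanorobot, with “1-micron” read as the device radius; all constants are approximate.

```python
import math

# Approximate constants and assumed parameters (room temperature).
K_B = 1.380649e-23    # Boltzmann constant, J/K
T = 298.0             # temperature, K
ETA = 8.9e-4          # viscosity of water, Pa*s
D_WATER = 2.3e-9      # measured self-diffusion coefficient of water, m^2/s
R_ROBOT = 1.0e-6      # assumed nanorobot radius, m
ONE_SECOND = 1.0

def stokes_einstein(radius: float) -> float:
    """Diffusion coefficient of a sphere in a viscous fluid."""
    return K_B * T / (6.0 * math.pi * ETA * radius)

def rms_displacement_1d(diffusion: float, time: float) -> float:
    """One-dimensional RMS displacement under Einstein's relation."""
    return math.sqrt(2.0 * diffusion * time)

print(f"water molecule: ~{rms_displacement_1d(D_WATER, ONE_SECOND) * 1e6:.0f} microns")
print(f"1-micron robot: ~{rms_displacement_1d(stokes_einstein(R_ROBOT), ONE_SECOND) * 1e6:.2f} microns")
# Prints roughly 68 and 0.70 microns, the same order as the ~50-micron
# and ~0.7-micron figures quoted above.
```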
156. Tejal A. Desai, “MEMS-Based Technologies for Cellular Encapsulation,” American Journal of Drug Delivery 1.1 (2003): 3–11, abstract available at http://www.ingentaconnect.com/search/expand?pub=infobike://adis/add/2003/00000001/00000001/art00001.
157. As quoted by Douglas Hofstadter in Gödel, Escher, Bach: An Eternal Golden Braid (New York: Basic Books, 1979).
158. The author runs a company, FATKAT (Financial Accelerating Transactions by Kurzweil Adaptive Technologies), which applies computerized pattern recognition to financial data to make stock-market investment decisions, http://www.FatKat.com.
159. See discussion in chapter 2 on price-performance improvements in computer memory and electronics in general.
160. Runaway AI refers to a scenario where, as Max More describes, “superintelligent machines, initially harnessed for human benefit, soon leave us behind.” Max More, “Embrace, Don’t Relinquish, the Future,” http://www.KurzweilAI.net/articles/art0106.html?printable=1. See also Damien Broderick’s description of the “Seed AI”: “A self-improving seed AI could run glacially slowly on a limited machine substrate. The point is, so long as it has the capacity to improve itself, at some point it will do so convulsively, bursting through any architectural bottlenecks to design its own improved hardware, maybe even build it (if it’s allowed control of tools in a fabrication plant).” Damien Broderick, “Tearing Toward the Spike,” presented at “Australia at the Crossroads? Scenarios and Strategies for the Future” (April 31–May 2, 2000), published on KurzweilAI.net May 7, 2001, http://www.KurzweilAI.net/meme/frame.html?main=/articles/art0173.html.
161. David Talbot, “Lord of the Robots,” Technology Review (April 2002).
162. Heather Havenstein writes that the “inflated notions spawned by science fiction writers about the convergence of humans and machines tarnished the image of AI in the 1980s because AI was perceived as failing to live up to its potential.” Heather Havenstein, “Spring Comes to AI Winter: A Thousand Applications Bloom in Medicine, Customer Service, Education and Manufacturing,” Computerworld, February 14, 2005, http://www.computerworld.com/softwaretopics/software/story/0,10801,99691,00.html. This tarnished image led to “AI Winter,” defined as “a term coined by Richard Gabriel for the (circa 1990–94?) crash of the wave of enthusiasm for the AI language Lisp and AI itself, following a boom in the 1980s.” Duane Rettig wrote: “. . . companies rode the great AI wave in the early 80’s, when large corporations poured billions of dollars into the AI hype that promised thinking machines in 10 years. When the promises turned out to be harder than originally thought, the AI wave crashed, and Lisp crashed with it because of its association with AI. We refer to it as the AI Winter.” Duane Rettig quoted in “AI Winter,” http://c2.com/cgi/wiki?AiWinter.
163. The General Problem Solver (GPS) computer program, written in 1957, was able to solve problems through rules that allowed the GPS to divide a problem’s goals into subgoals, and then check if obtaining a particular subgoal would bring the GPS closer to solving the overall goal. In the early 1960s Thomas Evans wrote ANALOGY, a “program [that] solves geometric-analogy problems of the form A:B::C:? taken from IQ tests and college entrance exams.” Boicho Kokinov and Robert M. French, “Computational Models of Analogy-Making,” in L. Nadel, ed., Encyclopedia of Cognitive Science, vol. 1 (London: Nature Publishing Group, 2003), pp. 113–18. See also A. Newell, J. C. Shaw, and H. A. Simon, “Report on a General Problem-Solving Program,” Proceedings of the International Conference on Information Processing (Paris: UNESCO House, 1959), pp. 256–64; Thomas Evans, “A Heuristic Program to Solve Geometric-Analogy Problems,” in M. Minsky, ed., Semantic Information Processing (Cambridge, Mass.: MIT Press, 1968).
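The subgoal mechanism is easy to illustrate. Below is a toy means-ends sketch in the spirit of GPS, not the Newell-Shaw-Simon implementation; the facts and operators are invented for the example.

```python
# Toy means-ends analysis: an operator achieves a goal fact, but its
# unmet preconditions become subgoals that must be achieved first.
OPERATORS = {
    "drive to store": {"pre": {"car works"},  "add": {"at store"}},
    "repair car":     {"pre": {"have tools"}, "add": {"car works"}},
    "borrow tools":   {"pre": set(),          "add": {"have tools"}},
}

def achieve(goal: str, state: set, plan: list) -> bool:
    """Satisfy one goal fact, recursively achieving operator preconditions."""
    if goal in state:
        return True
    for name, op in OPERATORS.items():
        if goal in op["add"]:
            # Each unmet precondition is a subgoal on the way to the overall goal.
            if all(achieve(pre, state, plan) for pre in op["pre"]):
                state |= op["add"]
                plan.append(name)
                return True
    return False

plan: list = []
achieve("at store", set(), plan)
print(plan)  # ['borrow tools', 'repair car', 'drive to store']
```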
164. Sir Arthur Conan Doyle, “The Red-Headed League,” 1891, available at http://www.eastoftheweb.com/short-stories/UBooks/RedHead.shtml.
165. V. Yu et al., “Antimicrobial Selection by a Computer: A Blinded Evaluation by Infectious Diseases Experts,” JAMA 242.12 (1979): 1279–82.
166. Gary H. Anthes, “Computerizing Common Sense,” Computerworld, April 8, 2002, http://www.computerworld.com/news/2002/story/0,11280,69881,00.html.
167. Kristen Philipkoski, “Now Here’s a Really Big Idea,” Wired News, November 25, 2002, http://www.wired.com/news/technology/0,1282,56374,00.html, reporting on Darryl Macer, “The Next Challenge Is to Map the Human Mind,” Nature 420 (November 14, 2002): 121; see also a description of the project at http://www.biol.tsukuba.ac.jp/~macer/index.html.
168. Thomas Bayes, “An Essay Towards Solving a Problem in the Doctrine of Chances,” published in 1763, two years after his death in 1761.
169. SpamBayes spam filter, http://spambayes.sourceforge.net.
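SpamBayes itself combines per-word probabilities with a chi-squared test, so the sketch below shows only the plain naive-Bayes core of the idea, with invented word counts: Bayes’s theorem turns per-class word frequencies into a probability that a message is spam.

```python
import math

# Invented training counts: how often each word appeared in spam and ham.
N_SPAM, N_HAM = 100, 100
SPAM_COUNTS = {"free": 60, "money": 45, "meeting": 5}
HAM_COUNTS  = {"free": 5,  "money": 10, "meeting": 40}

def spam_probability(words: list) -> float:
    """P(spam | words) via Bayes's theorem with a naive independence
    assumption, add-one smoothing, and log-space sums to avoid underflow."""
    log_spam = math.log(0.5)   # assumed prior P(spam)
    log_ham = math.log(0.5)
    for w in words:
        log_spam += math.log((SPAM_COUNTS.get(w, 0) + 1) / (N_SPAM + 2))
        log_ham += math.log((HAM_COUNTS.get(w, 0) + 1) / (N_HAM + 2))
    m = max(log_spam, log_ham)
    p_spam, p_ham = math.exp(log_spam - m), math.exp(log_ham - m)
    return p_spam / (p_spam + p_ham)

print(f"{spam_probability(['free', 'money']):.2f}")  # high, ~0.98
print(f"{spam_probability(['meeting']):.2f}")        # low,  ~0.13
```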
170. Lawrence R. Rabiner, “A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition,” Proceedings of the IEEE 77 (1989): 257–86. For a mathematical treatment of Markov models, see http://jedlik.phy.bme.hu/~gerjanos/HMM/node2.html.
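As a minimal sketch of the machinery Rabiner’s tutorial covers, the forward algorithm below computes the probability that a hidden Markov model generated an observation sequence; the two-state model and all of its probabilities are invented for illustration (a speech recognizer would use states per phoneme and acoustic-feature observations).

```python
# A tiny two-state HMM with invented parameters.
STATES = ["vowel", "consonant"]
INITIAL = {"vowel": 0.6, "consonant": 0.4}
TRANSITION = {
    "vowel":     {"vowel": 0.3, "consonant": 0.7},
    "consonant": {"vowel": 0.8, "consonant": 0.2},
}
EMISSION = {
    "vowel":     {"a": 0.7, "t": 0.1, "s": 0.2},
    "consonant": {"a": 0.1, "t": 0.5, "s": 0.4},
}

def sequence_probability(observations: list) -> float:
    """Forward algorithm: P(observations | model), summed over all
    hidden-state paths without enumerating them."""
    # alpha[s] = P(observations so far, current hidden state = s)
    alpha = {s: INITIAL[s] * EMISSION[s][observations[0]] for s in STATES}
    for obs in observations[1:]:
        alpha = {
            s: sum(alpha[prev] * TRANSITION[prev][s] for prev in STATES)
               * EMISSION[s][obs]
            for s in STATES
        }
    return sum(alpha.values())

print(sequence_probability(["t", "a", "s"]))  # ~0.044
```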
171. Kurzweil Applied Intelligence (KAI), founded by the author in 1982, was sold in 1997 for $100 million and is now part of ScanSoft (formerly called Kurzweil Computer Products, the author’s first company, which was sold to Xerox in 1980), now a public company. KAI introduced the first commercially marketed large-vocabulary speech-recognition system in 1987 (Kurzweil Voice Report, with a ten-thousand-word vocabulary).
172. Here is the basic schema for a neural net algorithm. Many variations are possible, and the designer of the system needs to provide certain critical parameters and methods, detailed below.
Creating a neural-net solution to a problem involves the following steps:
- Define the input.
- Define the topology of the neural net (i.e., the layers of neurons and the connections between the neurons).
- Train the neural net on examples of the problem.
- Run the trained neural net to solve new examples of the problem.
- Take your neural-net company public.
These steps (except for the last one) are detailed below:
The Problem Input
The problem input to the neural net consists of a series of numbers. This input can be:
- In a visual pattern-recognition system, a two-dimensional array of numbers representing the pixels of an image; or
- In an auditory (e.g., speech) recognition system, a two-dimensional array of numbers representing a sound, in which the first dimension represents parameters of the sound (e.g., frequency components) and the second dimension represents different points in time; or
- In an arbitrary pattern-recognition system, an n-dimensional array of numbers representing the input pattern.
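For instance, the first case might look like the following minimal sketch (the image and its 3×3 size are invented):

```python
# A toy 3x3 "image" as a two-dimensional array of pixel values
# (0 = white, 1 = black); the pattern itself is invented.
image = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 1, 0],
]
# Flattened to the series of numbers the neural net consumes,
# one number per input "point."
problem_input = [pixel for row in image for pixel in row]
print(problem_input)  # [0, 1, 0, 0, 1, 0, 0, 1, 0]
```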
Defining the Topology
To set up the neural net, the architecture of each neuron consists of:
- Multiple inputs in which each input is “connected” to either the output of another neuron, or one of the input numbers.
- Generally, a single output, which is connected either to the input of another neuron (which is usually in a higher layer), or to the final output.
Set Up the First Layer of Neurons
- Create N₀ neurons in the first layer. For each of these neurons, “connect” each of the multiple inputs of the neuron to “points” (i.e., numbers) in the problem input. These connections can be determined randomly or using an evolutionary algorithm (see below).
- Assign an initial “synaptic strength” to each connection created. These weights can start out all the same, can be assigned randomly, or can be determined in another way (see below).
Set Up the Additional Layers of Neurons
Set up a total of M layers of neurons. For each layer, set up the neurons in that layer.
For layer i:
- Create Nᵢ neurons in layer i. For each of these neurons, “connect” each of the multiple inputs of the neuron to the outputs of the neurons in layer i–1 (see variations below).
- Assign an initial “synaptic strength” to each connection created. These weights can start out all the same, can be assigned randomly, or can be determined in another way (see below).
- The outputs of the neurons in layer M are the outputs of the neural net (see variations below).
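A minimal sketch of this setup, assuming random connections and random initial weights; the layer sizes, fan-in, and threshold value are invented, since the schema deliberately leaves these design choices open.

```python
import random

LAYER_SIZES = [6, 4, 1]   # neurons per layer; the first entry is layer 0
INPUTS_PER_NEURON = 3     # fan-in of each neuron

def build_network(input_size: int, seed: int = 0) -> list:
    """Create the layers of neurons with random connections and weights."""
    rng = random.Random(seed)
    network = []
    n_sources = input_size            # layer 0 connects to the problem input
    for size in LAYER_SIZES:
        layer = [{
            # Indices into the previous layer's outputs (or the input points).
            "connections": [rng.randrange(n_sources) for _ in range(INPUTS_PER_NEURON)],
            # Initial synaptic strengths; these could also start out equal.
            "weights": [rng.uniform(-1.0, 1.0) for _ in range(INPUTS_PER_NEURON)],
            "threshold": 0.5,
        } for _ in range(size)]
        network.append(layer)
        n_sources = size              # the next layer connects to this one
    return network
```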
The Recognition Trials
How Each Neuron Works
Once the neuron is set up, it does the following for each recognition trial:
- Each weighted input to the neuron is computed by multiplying the output of the other neuron (or the initial input) to which this input is connected by the synaptic strength of that connection.
- All of these weighted inputs to the neuron are summed.
- If this sum is greater than the firing threshold of this neuron, then this neuron is considered to fire and its output is 1. Otherwise, its output is 0 (see variations below).
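In code, continuing the sketch above with the same invented neuron structure:

```python
def neuron_output(neuron: dict, prev_outputs: list) -> int:
    """One neuron's step in a recognition trial: weighted sum, then threshold."""
    weighted_sum = sum(
        prev_outputs[source] * weight
        for source, weight in zip(neuron["connections"], neuron["weights"])
    )
    # Fire (output 1) only if the weighted sum exceeds the firing threshold.
    return 1 if weighted_sum > neuron["threshold"] else 0
```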
Do the Following for Each Recognition Trial
For each layer, from layer 0 to layer M:
For each neuron in the layer:
- Sum its weighted inputs (each weighted input = the output of the other neuron [or the initial input] to which this input is connected, multiplied by the synaptic strength of that connection).
- If this sum of weighted inputs is greater than the firing threshold for this neuron, set the output of this neuron = 1, otherwise set it to 0.
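Assembled from the sketches above, one recognition trial is then a single pass over the layers:

```python
def recognition_trial(network: list, problem_input: list) -> list:
    """Propagate the input through every layer; the last layer's outputs
    are the net's answer for this trial."""
    outputs = problem_input           # layer 0 reads the problem input
    for layer in network:
        outputs = [neuron_output(neuron, outputs) for neuron in layer]
    return outputs

network = build_network(input_size=len(problem_input))
print(recognition_trial(network, problem_input))  # e.g. [0] or [1]
```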