John Markoff has written yet another essay on the rise of the machines. This time Markoff is reporting on an Association for the Advancement of Artificial Intelligence conference organized by Eric Horvitz, a Microsoft researcher and president of the association. The conference took place at the Asilomar Conference Grounds on 2/25, but the report isn’t due out until late 2009. Supposedly they weren’t looking at longer-term superhuman AIs, but rather near-term issues … (emphases mine)
Scientists Worry Machines May Outsmart Man - NYTimes.com - John Markoff
… They focused particular attention on the specter that criminals could exploit artificial intelligence systems as soon as they were developed... also discussed possible threats to human jobs, like self-driving cars...
… Dr. Horvitz said he believed computer scientists must respond to the notions of superintelligent machines and artificial intelligence systems run amok.
The idea of an “intelligence explosion” in which smart machines would design even more intelligent machines was proposed by the mathematician I. J. Good in 1965. Later, in lectures and science fiction novels, the computer scientist Vernor Vinge popularized the notion of a moment when humans will create smarter-than-human machines, causing such rapid change that the “human era will be ended.” He called this shift the Singularity.
This vision, embraced in movies and literature, is seen as plausible and unnerving by some scientists like William Joy, co-founder of Sun Microsystems. Other technologists, notably Raymond Kurzweil, have extolled the coming of ultrasmart machines, saying they will offer huge advances in life extension and wealth creation...
... Tom Mitchell, a professor of artificial intelligence and machine learning at Carnegie Mellon University, said the February meeting had changed his thinking. “I went in very optimistic about the future of A.I. and thinking that Bill Joy and Ray Kurzweil were far off in their predictions,” he said. But, he added, “The meeting made me want to be more outspoken about these issues and in particular be outspoken about the vast amounts of data collected about our personal lives...
I was pleased to see that Bill Joy isn't being mocked as much; the poor guy took a terrible beating for stating the obvious. (Personally I'm expecting that, while it’s true that we're screwed, the end-times of superhuman intelligence will be pushed out beyond 2100.)
So the conference doesn’t sound terribly interesting, but I was interested in Markoff’s reference to I. J. Good. This pushes the basic idea of the Singularity, exponential recursion, back another thirty years. I suspected Markoff got the reference from this Wikipedia article (but he didn't; see update) …
… Irving John ("I.J."; "Jack") Good (9 December 1916 – 5 April 2009)[1][2] was a British statistician who worked as a cryptologist at Bletchley Park.
He was born Isidore Jacob Gudak to a Polish-Jewish family in London. He later anglicized his name to Irving John Good and signed his publications "I. J. Good."
An originator of the concept now known as "technological singularity," Good served as consultant on supercomputers to Stanley Kubrick, director of the 1968 film 2001: A Space Odyssey…
Yes, he was alive until a few months ago. I don’t need to remind any of my readers that the main character of 2001 was an AI named HAL (though HAL came from Arthur C. Clarke’s book, not just the movie). The article concludes with the story of Good’s Singularity premise …
… In 1965 he originated the concept now known as "technological singularity," which anticipates the eventual advent of superhuman intelligence:
“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make…”
Good's authorship of treatises such as "Speculations Concerning the First Ultraintelligent Machine" and "Logic of Man and Machine" (both 1965)…
I gather he was still in decent shape when Vinge’s “Singularity” materials made news over 10 years ago. He must have read them and recognized the ideas from his earlier papers. The book The Spike: How Our Lives Are Being Transformed by Rapidly Advancing Technologies by Damien Broderick (Amazon) provides some additional historical context …
… Nor is the idea altogether new. The important mathematician Stanislaw Ulam mentioned it in his “Tribute to John von Neumann,” the founding genius of the computer age, in Bulletin of the American Mathematical Society in 1958. Another notable scientific gadfly, Dr. I. J. Good, advanced “Speculations Concerning the First Ultraintelligent Machine,” in Advances in Computers, in 1965. Vinge himself hinted at it in a short story, “Bookworm, Run!,” in 1966, as had sf writer Poul Anderson in a 1962 tale, “Kings Must Die.” And in 1970, Polish polymath Stanislaw Lem, in a striking argument, put his finger directly on this almost inevitable prospect of immense discontinuity. Discussing Olaf Stapledon’s magisterial 1930 novel Last and First Men, in which civilizations repeatedly crash and revive for two billion years before humanity is finally snuffed out in the death of the sun, he notes…
It’s a shame Professor Good isn’t around to do an interview; he gave quite an impressive one in 1992 (in which, by the way, he tells us Turing claimed to have only an above-average IQ, which is rather curious).
Update 7/29/09: Per comments, John Markoff tells me he learned about the I. J. Good story from an interview with Eric Horvitz.
Update 8/8/09: Per comments, a description of the panel's mission is now on the AAAI website main page. There's no persistent address, so it won't stay in its current spot; for the record, here's a copy. The official mission is more ambitious than the impression left by John Markoff's article ... (emphases mine)
The AAAI President has commissioned a study to explore and address potential long-term societal influences of AI research and development. The panel will consider the nature and timing of potential AI successes, and will define and address societal challenges and opportunities in light of these potential successes. On reflecting about the long term, panelists will review expectations and uncertainties about the development of increasingly competent machine intelligences, including the prospect that computational systems will achieve "human-level" abilities along a variety of dimensions, or surpass human intelligence in a variety of ways. The panel will appraise societal and technical issues that would likely come to the fore with the rise of competent machine intelligence. For example, how might AI successes in multiple realms and venues lead to significant or perhaps even disruptive societal changes?
The committee's deliberation will include a review and response to concerns about the potential for loss of human control of computer-based intelligences and, more generally, the possibility for foundational changes in the world stemming from developments in AI. Beyond concerns about control, the committee will reflect about potential socioeconomic, legal, and ethical issues that may come with the rise of competent intelligent computation, the changes in perceptions about machine intelligence, and likely changes in human-computer relationships.
In addition to projecting forward and making predictions about outcomes, the panel will deliberate about actions that might be taken proactively over time in the realms of preparatory analysis, practices, or machinery so as to enhance long-term societal outcomes.
On issues of control and, more generally, on the evolving human-computer relationship, writings, such as those by statistician I. J. Good on the prospects of an "intelligence explosion" followed up by mathematician and science fiction author Vernor Vinge's writings on the inevitable march towards an AI "singularity," propose that major changes might flow from the unstoppable rise of powerful computational intelligences. Popular movies have portrayed computer-based intelligence to the public with attention-catching plots centering on the loss of control of intelligent machines. Well-known science fiction stories have included reflections (such as the "Laws of Robotics" described in Asimov's Robot Series) on the need for and value of establishing behavioral rules for autonomous systems. Discussion, media, and anxieties about AI in the public and scientific realms highlight the value of investing more thought as a scientific community on perceptions, expectations, and concerns about long-term futures for AI.
The committee will study and discuss these issues and will address in their report the myths and potential realities of anxieties about long-term futures. Beyond reflection about the validity of such concerns by scientists and lay public about disruptive futures, the panel will reflect about the value of formulating guidelines for guiding research and of creating policies that might constrain or bias the behaviors of autonomous and semiautonomous systems so as to address concerns.
They're taking this seriously. I'm impressed.