Futurist forecasting has long been the object of strange fascination. Specific doomsday predictions — whether derived from the Mayan calendar or calculated by Christian radio host Harold Camping — have earned widespread media coverage, prompting no small amount of ridicule and what almost seemed like disappointment the day after.
As the Amazing Criswell dramatically intoned at the beginning of Plan 9 From Outer Space, "We are all interested in the future, for that is where you and I are going to spend the rest of our lives."
And those lives — in one form or another — may extend further into the future than most of us imagine. Even science fiction writers have had trouble keeping up with exponentially increasing scientific breakthroughs — from nanotechnology to artificial intelligence — that can be both hopeful and troubling.
Ray Kurzweil, on the other hand, has had no problem staying ahead of the curve. Microsoft co-founder Bill Gates called him "the best person I know at predicting the future of artificial intelligence." Time magazine published a 2011 cover story on him called "2045: The Year Man Becomes Immortal."
And just last month, Kurzweil joined forces with Google as its director of engineering, a high-level position in which he'll essentially be prepping computers to pass the Turing Test, at which point they'll be intellectually indistinguishable from human beings. In 2002, Kurzweil bet Lotus founder Mitchell Kapor $20,000 that it would happen by 2029. And in his 2005 bestseller The Singularity Is Near, he went a step further, predicting a merging point between humans and machines in, yes, the year 2045.
Kurzweil's predictions might seem all the more far-fetched were it not for his real-life track record as an inventor. Back in the '80s, after betting Stevie Wonder that a synthesizer could replicate the sound of a piano — anybody see a pattern here? — he invented one that did. Kurzweil and Wonder had become friends in the mid-'70s, when the musician purchased the first Kurzweil Reading Machine, a washing-machine-sized invention that scanned and spoke printed text. And in 2008, Kurzweil replicated the accomplishment with what Gizmodo dubbed "the first seeing-eye cellphone."
In advance of a Jan. 23 speaking engagement at Colorado State University-Pueblo, Ray Kurzweil spoke with the Indy last week about many of the above topics, along with the practical impact and ethical implications of a potentially post-human future.
Indy: I'd like to begin by asking about your talk in Pueblo. I assume that, in light of your latest book [How to Create a Mind], progress in reverse-engineering the human brain will be a significant part of it. But I'm also guessing there'll be more to it than that?
Ray Kurzweil: Well, it's kind of a culmination of my thinking. I've been interested in the evolution of technology. Early in my youth, I was timing my own technology projects, and with that I developed the theory of the Law of Accelerating Returns, that information technology progresses in an exponential manner.
But our intuition is linear, not exponential. Exponential and linear progressions start out similarly, but end up radically different. That's why my predictions very often seem startling, you know, 10 or 20 years out, but then people very quickly get used to changed circumstances when they actually do happen.
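Kurzweil's linear-versus-exponential point can be made concrete with a toy calculation (a sketch with arbitrary growth rates, not his actual model):

```python
# Toy comparison of linear vs. exponential growth. The rates here
# (one unit per year vs. doubling per year) are arbitrary
# illustrations, not Kurzweil's figures.

def linear(start, step, years):
    """Add a fixed amount each year."""
    return start + step * years

def exponential(start, factor, years):
    """Multiply by a fixed factor each year (e.g. doubling)."""
    return start * factor ** years

# After one year the two forecasts agree; after 20 years the
# exponential one is larger by a factor of tens of thousands.
for years in (1, 5, 10, 20):
    print(years, linear(1.0, 1.0, years), exponential(1.0, 2.0, years))
```

The early agreement is why exponential forecasts look unremarkable at first, and the later divergence is why they look startling 10 or 20 years out.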
I wrote a paper exactly 50 years ago when I was 14, as a Westinghouse Talent Search submission, about how I thought the brain worked. I had this theory that actually was quite compatible with the theory in the recent book, that human thinking is based on pattern recognition, not logical thinking.
I also did a project where a computer used pattern recognition to recognize melodies in music and then to write original melodies based on the patterns it found. And so I got to meet President Johnson and appear on national television. And that was 50 years ago.
And it's actually only recently that the advances in information technology, such as brain scanning, are enabling us to see inside a living brain with enough resolution and specificity to actually see what's going on. So some of the best evidence of my theory that I present in the book really came out in this past year, while I was writing the book.
So I'll be presenting that whole thesis: what the empirical evidence is, how that applies to the brain, how exponential gains in both hardware and software will lead to more intelligent machines, and what impact that will have on society.
Indy: There are a lot of dystopian depictions of technology in popular culture, from William Gibson's cyberpunk novels to HAL 9000 refusing to open the pod bay door in 2001. And then there are people like Buckminster Fuller, who saw technology as the solution to our biggest problems. Would you say you fall into the latter camp, or do you see technology as essentially neutral?
RK: Well, I've written about both the promise and the peril. In fact, Chapter 8 of The Singularity Is Near is called "The Deeply Intertwined Promise and Peril of GNR." The "GNR" refers to the three technology revolutions: Genetics, Nanotechnology and Robotics. Robotics is another word for artificial intelligence.
And technology has been a double-edged sword. The fire that keeps us warm can also burn down our villages. That being said, the evidence is quite clear that the quality of human life has steadily improved, even though the technology of, let's say, violence has certainly grown enormously in sophistication. Yet the average person's risk of dying from interpersonal violence has come down dramatically, by a factor of hundreds, over the last few centuries. And that's because of the relative prosperity and abundance that our technology has provided, whereas people who lived in an economy of extreme scarcity were quick to resolve disputes in a violent manner.
You can look at any indicator you want. Look at health. Your life expectancy was 20 a thousand years ago, it was 37 in 1800, it was 48 in 1900, pushing 80 today. I have an analysis that shows that within 15 years we'll be adding more than a year, every year, to your remaining life expectancy, which is kind of the tipping point.
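The "tipping point" he describes is simple arithmetic: once each calendar year of medical progress adds more than one year to your remaining life expectancy, that remaining expectancy stops shrinking. A minimal sketch (the numbers are illustrative, not Kurzweil's data):

```python
# Illustrative only: how remaining life expectancy evolves if each
# calendar year consumes one year of life while research adds
# `gain_per_year` years back.

def remaining_after(years, start_remaining, gain_per_year):
    remaining = start_remaining
    for _ in range(years):
        remaining += gain_per_year - 1  # spend 1 year, gain some back
    return remaining

# gain < 1: remaining expectancy still declines over a decade
print(remaining_after(10, 20, 0.5))
# gain > 1: past the tipping point, remaining expectancy grows
print(remaining_after(10, 20, 1.2))
```

Below a gain of one year per year, longevity gains only slow the countdown; above it, the countdown reverses.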
Or look at democracy. There were relatively few democracies a century ago, even a half-century ago. I wrote that the Soviet Union would be swept away by the then-emerging decentralized electronic communication. I wrote that in the '80s, when the Soviet Union was going strong. People thought that was crazy — that the Soviet Union could be swept away by a few teletype machines and e-mail over phone lines and so on — but that's exactly what happened.
So these technologies have clearly benefited more than they have harmed us. I felt like part of my own brain went on strike during that one-day SOPA strike [a 2012 blackout by Google, Wikipedia and others in protest of federal anti-piracy bills], because I'm so dependent on these brain extenders. Yet they didn't even exist, you know, about a decade ago.
Indy: It's amazing how quickly your individual consciousness can integrate with these technologies to the point where they become second nature.
RK: Right. But that's not to say there aren't real dangers, like privacy. There are also existential risks.
For example, we are now in the process of reprogramming the information processes of our own biology, to program biology away from cancer, away from heart disease, and so on. But those same technologies could be used by a bioterrorist to reprogram a relatively benign biological virus — like a cold or flu virus — and make it very deadly and very communicable.
I'm not saying that we're defenseless. In fact, I've personally worked for the U.S. Army on this issue. Fifteen years ago, when biotechnology was fairly new, they had a bioterrorism-protection program and they took me around and said, "OK, here's our section that protects people from anthrax. And here's where we do research on protecting people from smallpox." They said, "Here, we've got some great smallpox samples, would you like to see them?" I said, "Maybe some other time."
Indy: And this was before the anthrax threats surrounding 9/11?
RK: This was before that. So I said, "Well, where's the section that deals with a brand new virus you haven't seen before, which somebody cooks up in a bioengineering laboratory?" And they said, "Is that possible?" And I said, "Well it's actually not easy, but it's going to get easier. And it is possible."
So today, it's still not easy, but it's actually a lot easier than it was 15 years ago. But there is now a rapid-response system. Sequencing the virus in question used to be very slow; in fact, it was not feasible at all. HIV took five years to sequence, SARS took 31 days, and we can now sequence a virus in one day. That's another example of the acceleration I talk about.
So this system will sequence the virus, create either an RNA interference medication that deactivates it or an antigen-based vaccine, and a lot of these can be tested in simulators. So there is a rapid-response system, but we can't just cross that off our list of things to worry about, any more than we can cross software viruses off that list. If we did, our whole system would be obsolete tomorrow.
Indy: So can an organization like the CDC [Centers for Disease Control and Prevention] keep up to speed on all that?
RK: I think we are now keeping up to speed, but we're gonna have to keep at it. We have a technological immune system for software viruses, and we now have one for biological viruses. And both need to be updated constantly, because the attacks keep getting more sophisticated.
So my point is that there are things to worry about. There are also things we can do about them. We don't just sit back and worry in an idle manner.
Indy: Or stop investing in it, right? Which I imagine is what some people want to do.
RK: Well, I mean, everybody supports it in theory. But, you know, whether it actually gets the resources it needs, with everybody trying to make cuts, is another matter.
I try to articulate the need to make investments in this. The problem is you don't see the danger — I mean nobody's been killed in a biological attack — but, you know, that doesn't mean we shouldn't invest in it. Because you can't just measure the danger based on past experiences.
Indy: Returning to the less lethal subject of privacy, I saw an interview where you mentioned how, on your first day at Google, it was clear privacy is a "sacred trust" that every employee must maintain. But I'm curious about policy statements Google posted back in 2004, where they acknowledged that they'd be picking up on keywords from within your Gmail messages and then presenting you with ads from companies with related products. Are there privacy issues involved with mining that kind of personal information? And will the technology surrounding that be part of what you're doing there?
RK: Well, I'm not the right expert to talk to here at Google about exactly what the policies are. I can just report that it's taken extremely seriously. Anytime any software goes out of research and into an actual product, there are very stringent privacy reviews by groups that are charged with doing that. There's an understanding that if users felt that their privacy was being infringed, or had been infringed, that would be very damaging for all parties concerned, particularly Google.
But at the same time, Google wants to provide useful services, and it's useful to know something about the user in order to be helpful. If you have a friend who's trying to guide you through certain information and find things for you, you want that friend to know something about you; otherwise they're gonna keep giving you the wrong information. So that has to be balanced against the need to keep the information private, while providing useful services. My mission here is to help understand natural language documents ...
Indy: Which is a different area?
RK: Yeah, although Google could become aware that you are worried about the bioavailability of Vitamin B-12, and therefore without you even asking, it would tell you that, well, 12 seconds ago this new research came out on this issue you're interested in. But it would have to know that you're interested in that issue.
So there'll have to be some kind of flexible interface, where you can let Google know what kinds of information you're interested in, you know, while having confidence that that's gonna be kept confidential.
Indy: So then is it basically machines reading your e-mail, rather than people? Or, I should say, scanning your e-mail for keywords.
RK: Yeah, my mission is to go beyond keywords to actually pick up semantic meaning of documents in general. Because you know, search is still obviously based on keywords. There is AI entering into it already, but it still doesn't fully understand the meaning of all the sentences in the documents, and there's a lot of information that's contained in that semantic content.
But we'd like to enhance search [capability] by having the system actually understand what all those billions of documents mean, at least to some extent. I don't think we'll get human levels of understanding, by my analysis, until 2029.
I've worked in this area for decades, and I've pioneered some of the techniques that are used in artificial intelligence today. So what I have the opportunity to do here is to take certain methods and apply them to what they call Google scale. There's just a vast amount of data and information that exists here, vast computing resources with many millions of computers, not to mention, you know, hundreds of millions of users. So it's a great opportunity. I couldn't do this kind of work anywhere else.
Indy: Do you think that your sharing of resources and knowledge with Google could in fact hasten the Singularity?
RK: Well, that sort of brings up this interesting issue. People say, "Well, these trends are so inexorable and predictable, why don't we all sit back and just take it easy for the next 20 years, and let it unfold and not have to work very hard?" And then of course it wouldn't happen.
But I think what we can count on is people's passion, curiosity and creativity. In fact, the opportunities are even greater, because more and more people have the resources to do this type of work. Google itself was started by a couple of kids in college with notebook computers in their dorm rooms. So we all have very powerful tools, but the tools that exist here are more powerful than you can find anywhere else.
Indy: More powerful than what DARPA [the federal government's Defense Advanced Research Projects Agency] has?
RK: Well, DARPA is funding lots of independent research at universities. There are a lot of other resources available, but Google has some unique capabilities. It's a very good place for me to be. I mean, I guess I intuitively take my own and everyone else's efforts into account when I make predictions.
Indy: So you've got 16 years — with $20,000 riding on it — for the Turing Test to be passed. Did you take the Google job to make sure you could cover the bet?
RK: [Laughs.] I've got more than 16 years, because it doesn't say when in 2029 that it would happen. We're only in January. Yeah, that's one of my motivations here, to make sure I win the bet.
Indy: This may seem like an odd question, but in regard to a computer passing the Turing Test or having more intellectual capabilities than we do, to what degree would that be the result of computers becoming more intelligent, or us becoming less so?
RK: Oh, I think we're becoming more intelligent. By "we," I'm including our brain extenders. I mean, I've been managing work groups for 45 years. I can have a handful of people who, in a matter of weeks, can accomplish what it used to take a hundred to two hundred people years to do.
Folks say, "Oh, these brain extenders make us stupid because we forget how to do things on our own." But it reminds me of the controversy when I went to college over whether these devices that look a lot like your phone today, called calculators, were gonna make kids stupider and forget how to do arithmetic. And indeed arithmetic skills have gone down, but the calculators haven't gone away. You know, we create tools to make up for our own limitations.
So they're definitely making us smarter, but it is a human-machine civilization. We have tools now that can access, you know, almost all of human knowledge that we carry around at all times. These are fantastic tools that make us smarter, but indeed we probably don't remember as much because we learn to rely on them. But that reliance is well-justified.
Indy: There's a prevailing science-fiction trope, especially in recent years, involving a person's individual consciousness being extended indefinitely by some form of digital replication or emulation. To what extent do you see that as a real possibility, and if so how soon?
RK: In my recent book, I talk about the three issues of consciousness, free will and identity. They're not really resolvable through scientific analysis alone. They're truly philosophical issues, and some people therefore conclude they're meaningless issues. But I argue against that, because our whole moral system is based on consciousness.
So you really need to understand who and what is conscious, and whether an entity is a continuation of the consciousness of another entity.
So it's a complicated issue. I mean, if I had to boil it down in terms of preservation of identity or consciousness, I would point to continuity of patterns. So if a person puts a computer in their brain, for example, as Parkinson's patients have done, they're not radically changing their brain, they're changing one part of it. There's a fundamental continuity with most of it. And so we reasonably conclude it's the same person.
Indy: So biology is not disposable.
RK: No. I mean, you could continue that argument — that if you go down that slippery slope, and every day you replace another little bit of their biology with a non-biological system that performs exactly the same, or close enough, you would still have a continuity of patterning. You would eventually have changed the substrate. But I think it's the pattern of information processing — not the substrate it runs on — that's important.
And I think we will be introducing non-biological processes into our brain. And according to the Law of Accelerating Returns, that non-biological component of our thinking is gonna grow exponentially. So if you look ahead to, let's say, the 2040s, it's gonna predominate. And so we will be mostly non-biological at that point. But that doesn't mean we're a different person, because we've maintained the continuity of identity all through the process.