The AI revolution has been canceled due to lack of reality
(WARNING: Somewhere in the following chaotic post is a really good thesis struggling to be free. It may change drastically as I sort it all out.)
There's an interesting interview with Vernor Vinge on Computerworld about AI and how it will surpass human intelligence by 2020 ("AI will surpass human intelligence after 2020"). I certainly don't have Dr. Vinge's credentials. I have a lowly undergraduate degree in electrical engineering, having been damn glad to get it and to get out into the working world. But I do have nearly 40 years of hands-on experience with IT, and I can assure Dr. Vinge that as long as IT works the way it does, we've got nothing to worry about from AI exceeding our own intelligence.
This isn't to cast aspersions on what's currently out there. Quite the contrary. There are a lot of very good systems and software out there, and some of it looks nearly miraculous, close to validating Clarke's Third Law. But the bottom line is that it's still limited, and at some point it requires a human in the middle to operate successfully. And it will continue to do so well past 2020.
The Problem with the Future
I am a child of the 1950s, and grew up reading science fiction from the three grand old men of that period: Asimov, Clarke, and Heinlein. Asimov gave me robots and a human galactic empire ("The Foundation Trilogy"), Clarke gave me vistas of the near and far future ("The Sands of Mars", "The City and the Stars"), and Heinlein gave me gritty reality in the near future ("The Moon is a Harsh Mistress", "Starship Troopers"). In all those futures, travel around the solar system via rocket was assumed, and it was further assumed to be mere decades away.
This all culminated with Clarke's book and the movie "2001". I remember seeing "2001" in 1968. A year later, Apollo 11 landed on the moon. Vision and fact seemed to be in lockstep. Then reality set in. Apollo continued through Apollo 17 in 1972, when the program was canceled. As the Apollo program wound down, many of the scientists and engineers who helped put us on the moon were laid off. We had a laughable attempt at an orbiting space station called Skylab. Money was funneled toward the then-new Space Shuttle, which over the years was stripped down in capability into the system we have today. All we wound up doing after landing on the moon in 1969 was to spend an inordinate amount of money orbiting the earth. We've forgotten so much that we're struggling to re-discover that lost knowledge (some of it in junk yards!) just to go back to the moon by 2020, let alone beyond it.
The problem with the future is that when it finally arrives it never looks anything like you originally envisioned. It turns out to be a lot harder and a lot more expensive than you ever anticipated, and it can be a real disillusionment and a powerful cynic generator.
Computers Are Dumb
Computers are fragile. Take away their power and they sit there like big dumb door stops. Remove their network connectivity and they consume inordinate amounts of power doing very little of real interest. Connect them up and they become targets for computer viruses that want to turn them into botnets for sending spam and launching large-scale DDoS attacks.
The idea that our computers will grow ever more sophisticated, ever faster, ever more connected, and then hit some critical threshold and become our equals or superiors follows the same flawed thinking (and wishing) we once applied to ubiquitous space flight. In fact we'll have affordable space flight long before we have artificial intelligence, given that the reusable and affordable lifters such flight requires are only now being built. We're going to need a fundamentally different approach to artificial intelligence before we can even begin to design one, and no, quantum computing ain't the answer either.
People Are Dumb
We have a very bad habit of falling in love with our technology, of elevating it to heights it does not deserve. Our current need to elevate our tools appears little different from our desires, thousands of years ago, to create idols from gold and to erect temples in which to place them and worship them. Computers should be amplifiers of human intellect and capability, not replacements for it.
The drive to create an AI seems symptomatic of a much deeper problem with America: we've given up and turned within. It turned out to be very difficult to push out into space, especially the way we wanted to in the 1960s. The solutions for affordable and sustainable rocket flight are a lot harder than we ever realized, and our own politics are even more intractable, especially when you're trying to sell manned space flight to people who want to do little more than sit mindlessly in front of a television or twitch mindlessly in front of a video game.
And so, in disappointment, we've turned inward to create a world that will satisfy our need to succeed. We have our computers and the software we write to run upon them. These idiot savants do many clever things and do them so much faster than we can, and we make the mistake of believing they're better than us because of it. But a faster idiot is still an idiot. Speed kills, especially if you're an idiot.
We may yet create artificial intelligence. Given enough time anything is possible. But to say that we'll have an AI that exceeds our intelligence by 2020, a mere 13 years away, is a fool's prediction at best. And to say it will occur after 2020 is disingenuous; even I can make such a prediction, along with space colonies on other planets and faster-than-light flight to the stars. I just won't say how much beyond 2020.
If we want better intelligence (because in the end that seems to be what the search for AI is really after), then I'd suggest we nurture it within ourselves; that we tackle our own limitations, both intellectual and ethical. There's the real challenge and the real payoff.