Who’s Afraid of Smart Machines?

Computer Science Teacher - Thoughts and Information from Alfred Thompson

On top of the discussion about robo-ethicists wanting to revisit Asimov’s Three Laws of Robotics, I saw several pointers to a New York Times article called Scientists Worry Machines May Outsmart Man. Will we have thinking machines that are self-aware and smarter than humans? One of the famous papers on this is The Coming Technological Singularity: How to Survive in the Post-Human Era by Vernor Vinge. Vinge postulates that by the year 2030 we will see “the imminent creation by technology of entities with greater than human intelligence.” Now to me the term “post-human era” is a little scary. But personally I don’t see it happening.

Now the topic of creating machines that really think, that are creative and inventive, and that are self-aware has been around forever. In fact I’m sure it long pre-dates computers. But I do know that it was a topic of discussion when I was an undergraduate student over 35 years ago. People were predicting this “singularity” within 30 years even then. That prediction seems to have been wrong. At first people thought we’d come up with machines that think the way people do. That turned out to be difficult, at least in part because we still don’t understand how people think. I’m not sure we’ll figure that out in the next 20+ years either. As fast as technology moves, we don’t seem to be figuring out people nearly that quickly.

And then there is the question: what is thinking? What is creativity? What is self-awareness? Yes, friends, we are delving into the arena of philosophy! Ha, you thought you were done with that, or that you’d never need it, didn’t you? I think you are wrong there, at least if you want to talk about smart machines. We have to have some way of knowing when we get there, even if we don’t know how we are going to get there. Or whether we are going to get there at all.

Interestingly enough, I have known several computer scientists who started in philosophy. Philosophy majors often make great programmers, you know. Some claim it is because they are good logical thinkers. My theory is that it is because they have an easier time grasping abstractions, and dealing with abstraction is key to modern computer science. I wish I’d paid more attention in philosophy classes, but the older I get and the deeper I get into computer science, the more grateful I am for the courses I did have.

Anyway. I am skeptical of the notion that people who can’t understand how they think can create computers that think better than humans. I do not believe in the “and here a miracle happens” school of science and engineering either. We have computers that do special-purpose things, play chess for example, and beat humans. Are those programs thinking? Are they creative? I don’t think so. I think they are good at dealing with rules and calculating lots and lots of steps. Humans get good results with far less work. One can teach a child to play chess in half an hour, and they will (or can) learn on their own how to get better and better. While we have been programming heuristics into software for decades, we have not made the sort of progress that was talked about 30-40 years ago, and I think a singularity by 2030 unlikely. I’m not sure if that makes me an optimist or a pessimist. Your call.
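To make “dealing with rules and calculating lots and lots of steps” concrete, here is a minimal sketch of the brute-force search that game-playing programs are built on. It plays tic-tac-toe rather than chess so that it stays short and actually runnable; the board encoding and function names are my own invention for illustration, not any real engine’s code.

    # A runnable sketch of the "rules plus exhaustive search" recipe.
    # Real chess engines add hand-tuned heuristic evaluations and much
    # deeper searches, but the skeleton is the same: enumerate every
    # legal move, recurse, and pick the best score.

    def winner(board):
        """Return 'X' or 'O' if someone has three in a row, else None."""
        lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
                 (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
        for a, b, c in lines:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        """Score a position by trying every continuation: +1 X wins, -1 O wins, 0 draw."""
        won = winner(board)
        if won:
            return 1 if won == 'X' else -1
        moves = [i for i, cell in enumerate(board) if cell == ' ']
        if not moves:
            return 0  # board full: a draw
        scores = []
        for i in moves:
            board[i] = player
            scores.append(minimax(board, 'O' if player == 'X' else 'X'))
            board[i] = ' '
        return max(scores) if player == 'X' else min(scores)

    # From an empty board the search visits hundreds of thousands of
    # positions just to conclude that perfect play is a draw. Lots of
    # steps, no insight.
    print(minimax([' '] * 9, 'X'))  # prints 0

A child who has played a few games sees most of this without calculating anything, which is rather the point.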

Perhaps there is something not easily reachable about what thinking and creativity are all about. Or perhaps it is something simple, just out of view, that will be discovered any day now. I suspect it is the former. I also think that philosophy is going to be more helpful than most people expect in getting us to true thinking machines, or at least in proving whether they are possible at all.

So what do you think? What do your students think? Is this a topic that comes up in computer science courses? Does it come up in faculty lounges or places where students congregate? I think perhaps it should.

  • This is a big topic that I've followed for some time because it intersects my interests in human expertise, computer science, problem solving, and philosophy of mind. One of my favorite approaches is the one taken by Minsky in "The Emotion Machine," where he takes the stance that we are pretty far from knowing the full technical details of human cognition, but pretty close to a good general architectural understanding. His version is a six-level model of cognition with 'critic' and 'selector' modules at each level that detect patterns and recruit resources. I especially like the way he explores emotion and self-awareness in a reasonable resource-recruitment based model. I don't know if he has the right model, but it makes as much sense as anything else I've seen, and more than most. As for fearing them, I fear their potential, but somewhat less than I fear the lunatics around me on the road today.
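    For a concrete feel of the critic/selector idea, here is a toy sketch. The six level names follow Minsky's model as commonly summarized; the class shape and the example rule are invented purely for illustration and are not from the book.

        # A toy rendering of Minsky's critics and selectors: a critic
        # watches for a pattern, and on a match its selector recruits
        # resources. The specific rule below is invented, not Minsky's.

        LEVELS = ["instinctive reactions", "learned reactions",
                  "deliberative thinking", "reflective thinking",
                  "self-reflective thinking", "self-conscious emotions"]

        class CriticSelector:
            def __init__(self, level, pattern, resources):
                assert level in LEVELS
                self.level = level          # which of the six levels this pair lives at
                self.pattern = pattern      # critic: a predicate over the current situation
                self.resources = resources  # selector: resources to activate on a match

            def react(self, situation):
                return self.resources if self.pattern(situation) else []

        # An invented critic at the deliberative level: if the current
        # plan is stalling, recruit resources that look for alternatives.
        stalled_plan = CriticSelector(
            level="deliberative thinking",
            pattern=lambda s: s.get("plan_progress", 1.0) < 0.2,
            resources=["recall similar problems", "generate alternative plans"],
        )

        print(stalled_plan.react({"plan_progress": 0.1}))
        # ['recall similar problems', 'generate alternative plans']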

  • Sorry to take so long to get to writing here. I've meant to comment since you wrote this. Yours was the third or fourth item regarding this story that I encountered, and I wanted to answer a bit more fully than I had the others. Aiming higher has slowed me down.

    My first response to the Times article was that it was a classic example of horrid headline writing. They cry, "Scientists Worry Machines May Outsmart Man", and yet graf 4 offers as its example "computer worms and viruses that defy extermination and could thus be said to have reached a “cockroach” stage of machine intelligence".

    Well, first, it's a rigged contest. Those fighting worms and viruses aren't allowed to loose anti-viruses of their own, to take remote action to root out the worms and viruses; that would be illegal. Second, if the malware could actually survive serious attempts at extermination, that would put it at the level of real viruses and bacteria, not cockroaches. Finally, even granting that this really is evidence of "cockroach" intelligence, we are not intellectually threatened by the cockroach. The headline is FUD.

    But more importantly, the article itself, and not just the headline, is problematic. It is based on what I regard as a profound misunderstanding of the nature of, well... intelligence, life, the universe and everything. For well more than half a century we have called digital computers "electronic brains" or the like. Fiddle faddle. Brains are not digital, they are analog. They aren't "about" calculating or storing numbers; they are about communications, about relationships, and they are dynamic. Thinking the two are the same sort of thing is an example of a profound mistake we have made about the wonders of science and technology for hundreds of years.

    The error is to take our latest really clever invention, our shiniest toy, our latest revelation, and attempt to explain the whole world in terms of it. One of the most profound examples of this is the view of the world as a clockwork. When we were able, once again (Heron clearly could have done as much), to build machines as complex as the great clockworks built for the medieval churches and kings, it was a triumph of engineering and science. Clocks could track the Heavens, predict the future (in terms of sunrises and sunsets, the turn of the seasons) and model the universe.

    If science could do this, obviously it had tapped the Truths of the Universe, and so we came to regard the world as a giant clock, designed by a Divine Clock Maker, put in motion and operating deterministically ever since according to simple rules discoverable by science. So it is with our belief that digital computers are just like human brains: all they have to do is be big enough and complex enough, and they will surpass brains and have minds greater than even our own. Voila! Vernor Vinge's Singularity!

    (Or perhaps they already have and we are nothing but a simulation in a great computer! In fact, since it is inevitable that such computers and simulations will be invented, and there will be zillions of simulations and only one real world, statistically it is far more likely that we ARE already in a simulation than that we are real!)

    Such thinking misses a lot. It over-simplifies; it obsesses over a single notion until that notion becomes the whole of the universe and explains everything. Like the number "23", we come to see it everywhere, as do those obsessed by conspiracy theories and other paranoid delights. We need to back off and contemplate the limits of each idea, each model, and not merely its power.

    On the role of clockwork thinking, and for at least one view of it as one step on the path from naturalistic, to mechanistic, to systems thinking, I recommend a movie called "MindWalk" (you can read about it on a page I wrote recently on the web: http://blog.eldacur.com/musings/mindwalk ).

    In point of fact, the world differs from the clockwork world in several ways. It is alive and organic; it constantly changes rather than repeating the motions of an unchanging mechanism. The world is open, and any system of rules is incomplete (see Gödel). It is indeterministic and uncertain, driven by chance.

    These are all things we know for certain, and for people of faith, be they Christians like you or Deists like me (or pagans and Taoists like some of my other most intelligent friends), there is also the dimension of the spirit. If there is spirit, then mind is more than brain, and even if brains can be modeled, we will not have mastered mind. Even for those who are not people of faith there is the question of will, the riddle of consciousness, of mind beyond behavior.

    So, to get around to answering your question, am I afraid of smart machines? Well, no, for many reasons. First, I don't know that I believe in smart machines, even in theory. If I do, I am certain that none of the machines we have now are anywhere near smart, not in the way that we are. Even in that distant future where they might become smart, I don't know that they can become willful and intentional, and if they can do THAT, I still don't see them as a THREAT.

    Life, the universe and all that are not about reaching a goal. There is no achievable goal in an open universe, and every discipline we know tells us that this one is open. They are about the journey. There is no contest to be the best, to be the winner. It's only in the movies that "there can be only one". In life, another set of feet upon the journey, a companion, a different point of view is a boon, a grace.

    Why must we see man as "the only animal that ..."? Why must we be the only one? Why must there be a best? Why can't we be a part of a tapestry, a cosmic whole, a player in a drama that requires others to be complete or even comprehensible? Why must we fear a rival rather than embrace a challenge and a chance to learn from comparing perspectives?

    As to the teaching aspects: I don't know if the question comes up in CS classes, but it should, and in philosophy, history, and mathematics as well. The nature of the world, of mind, of science and physical law is something I think we don't address enough in education or in life. We'll fight about details that derive from different lines of reasoning based on one or another world view, argue Intelligent Design or the like, but not really look at the nature of the world.

    Just my two bits.
