Saturday, December 17, 2011

Artificial Intelligence

In my last post, I alluded to my belief that artificial intelligence (AI) is philosophically impossible. There are logical and religious reasons why I find this to be so. Most of them come from my experience researching AI, so it still baffles me to this day when prominent smart people say they are concerned about AI. I can see a potential problem with "AI," which I will address below, but AI will be nothing like what is depicted in Terminator or The Matrix.

Everything currently being dubbed an AI is really just an API. An API is basically an interface: a program takes an input and runs it through an (albeit very sophisticated) algorithm to produce a result. In other words, it produces a predictable result, barring the occasional glitch or unexpected test case. If I create a program that solves a puzzle, it will never do anything besides solve that puzzle. A computer can do repetitive tasks much better than a human, but a computer must be given a specific set of instructions in order to accomplish a task. A computer program cannot be creative, nor can it change its programming, let alone transcend its programming.
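To make that concrete, here is a toy example of my own (a little maze solver in Python; the maze itself is made up). The entire "intelligence" is a fixed procedure from input to output:

```python
from collections import deque

def solve_maze(grid, start, goal):
    """Breadth-first search over a grid maze (0 = open, 1 = wall).

    The program is nothing more than these fixed instructions:
    given an input (the maze), it deterministically produces an
    output (a path) or fails. It cannot decide to do anything else.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # unsolvable input; the program has no other recourse

maze = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(solve_maze(maze, (0, 0), (0, 2)))
```

Feed it a maze and it finds a path or it doesn't. It will never refuse the maze, get curious about something else, or improve its own instructions.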

In the Terminator TV show, The Sarah Connor Chronicles, Skynet started out as a chess AI. Some have even described the real-life Deep Blue as an AI. The reality is that these chess "AIs" are not as impressive as you may think. It is impressive that a program can run through many chess layouts very quickly, but many layouts (particularly endgames) have been solved, meaning that once the pieces are in a certain configuration, there is a specific sequence of moves that forces checkmate no matter what the opponent tries. One key aspect often left out when discussing Deep Blue is that while it won against a human chess master, Garry Kasparov actually defeated the program several times. A moot point, but if the real-life chess supercomputer could only defeat a human after an upgrade, it is ludicrous to believe that a chess program could then take over the world. I know I'm arguing about a fictional television program, but strangely, a lot of people think this is plausible. In fairness, if a chess program could do something more than play chess, then I would be tempted to call it an AI, but that exists solely in the realm of science fiction (or fantasy).
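To show how mechanical the trick really is, here is the same brute-force idea, minimax search, applied to a game small enough to enumerate completely. This is my own toy sketch in Python (the take-1-to-3-stones version of Nim, where whoever takes the last stone wins), not Deep Blue's actual code, but the principle is the same: enumerate positions, score them, pick the best.

```python
def minimax(stones, maximizing):
    """Exhaustive minimax for simple Nim: players alternate taking
    1-3 stones, and whoever takes the last stone wins. The search
    mechanically tries every move sequence and scores the outcomes
    (+1 = maximizer wins, -1 = maximizer loses). There is no
    understanding anywhere in this loop."""
    if stones == 0:
        # The previous player just took the last stone and won.
        return (-1, None) if maximizing else (1, None)
    best_score, best_move = (-2, None) if maximizing else (2, None)
    for take in (1, 2, 3):
        if take > stones:
            break
        score, _ = minimax(stones - take, not maximizing)
        if maximizing and score > best_score:
            best_score, best_move = score, take
        elif not maximizing and score < best_score:
            best_score, best_move = score, take
    return best_score, best_move

score, move = minimax(10, True)
print("best move: take", move)  # with 10 stones, taking 2 forces a win
```

A chess engine does this over an astronomically larger game tree, with a heuristic to cut the search short, but the loop is the same: generate positions, score them, pick the best.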

Philosophically speaking, my belief in the human soul means that artificial intelligence is impossible. I'll elaborate on this more in a future blog post, but I believe the soul is the answer to the free-will dilemma. The bottom line is that I fundamentally believe I have free will. A true intelligence must have free will; otherwise, how is it different from any other kind of machine? All programs are literally a set of predetermined instructions called code. Because of the free-will dilemma, I don't believe that any true AI is philosophically achievable.

But can AIs be dangerous ...

The movie I, Robot actually touches on what I consider to be a realistic problem with AI, especially since driverless cars may be in the not-so-distant future. No matter how sophisticated the program may be, it cannot possibly be programmed to handle all cases, especially in a place as chaotic as the real world. In the movie, a robot saves Will Smith's character instead of a child in a car accident because the adult had a higher chance of survival. I don't think morality can truly be quantified into a program. Plus, your morality may be different from the programmer's. The best a driverless car company can do is issue a software patch after the fact to handle a case it didn't account for. Maybe eventually a driverless car will be very safe, but a lot of people may die before that happens.
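Here is how that kind of "moral" decision actually looks once it is written down. This is a hypothetical Python sketch of my own, with illustrative numbers loosely echoing the survival odds quoted in the film, not anything from a real car's software:

```python
def choose_rescue(victims):
    """A 'moral' decision reduced to arithmetic: pick whoever has the
    highest estimated survival probability. The names and numbers are
    hypothetical; the point is that once the rule is written down,
    the program applies it with no judgment at all."""
    return max(victims, key=lambda v: v["survival_probability"])

victims = [
    {"name": "adult", "survival_probability": 0.45},
    {"name": "child", "survival_probability": 0.11},
]
print(choose_rescue(victims)["name"])  # always the adult, every time
```

Whatever rule the programmer picks, the program applies it every single time, with no judgment and no pause, even in the cases where a human would never accept that answer.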

Taking a page from Terminator 3, if our national defense system were automated, that would be a potential world-ender. An unexpected case could potentially launch nuclear missiles. The AI would not have any true malevolent intent, but it could still kill a lot of people. On a smaller scale, any AI that has decision-making power over a weapon is guaranteed to be dangerous. Even though I don't believe an AI could ever develop intent, let alone malevolent intent, I would be fundamentally against arming AIs for the simple reason that an AI cannot be programmed to handle all cases and will always be prone to glitches. I can understand prominent smart people being concerned about AIs, but only in the sense that some idiots will have the hubris to believe it is a good idea to arm a cold, unthinking machine.
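One last deliberately simplified, entirely hypothetical sketch of the glitch problem: a rule that covers exactly the cases its programmer thought of and nothing else. Every name and number here is made up:

```python
def threat_level(radar_contacts, is_exercise):
    """Hypothetical sketch of a hard-coded rule in an automated
    defense system. The rule looks reasonable until an input the
    programmer never anticipated (a sensor fault reporting an
    absurd contact count, say) walks straight through the same
    branch as a real attack."""
    if is_exercise:
        return "STAND_DOWN"
    if radar_contacts == 0:
        return "ALL_CLEAR"
    # Anything non-zero is treated as hostile. The program has no
    # concept of "this reading makes no sense, ask a human."
    return "LAUNCH_AUTHORIZED"

# A glitching sensor reports 9999 contacts; the code doesn't hesitate.
print(threat_level(9999, is_exercise=False))  # LAUNCH_AUTHORIZED
```

A human officer would hesitate at a reading that absurd. The program cannot, because hesitation was never in its instructions.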