Transcending Bad Sci-Fi

So I was watching “Transcendence”, the latest cinematic attempt to generate public hysteria about Artificial Intelligence (AI), or more specifically about Hard AI. I stopped watching around the 42-minute mark. Johnny Depp’s remarkably good portrayal of a scientist dying a slow and painful death from polonium poisoning had kept me watching despite the utter nonsense being thrown around in the name of sci-fi. Then I got to the point where one of a gang of anti-AI extremists says the following whopper:

If that thing [the AI computer armed with Johnny Depp’s consciousness] connects to the internet, the first thing it will do is copy itself to every networked computer in the world and there will be no way to stop it.

Take a moment to ponder this sentence. First of all, if your script, centered on an AI gone rogue, hinges on whether or not said AI successfully copies itself to “every networked computer”, then you should think of a different profession than scriptwriting. I mean, seriously? THAT’S the best AI-gone-rogue scenario you could come up with? Second, and more seriously, this sentence suggests that the makers of the film really didn’t give a crap about actually understanding the promise and perils of strong AI. Even an AI-powered scriptwriting program could have come up with dozens of fantastic script ideas centered around the theme of a strong AI gone rogue that were not in violent conflict with our basic understanding of the subject.
The problem is with the notion that said AI, which is powered by “the most powerful quantum processors on the planet”, could exist independently of the cutting-edge computing hardware on which it was originally conceived. The point is simply that all the examples of “strong” AI (or perhaps just “I”) which exist in Nature involve some sort of fleshy matter built out of neurons and neurotransmitters. I am talking about brains, of course. Taking mammalian brains as working examples of biological machines that exhibit strong intelligence, the following observation is crucial:

The software does not have an existence independent of the hardware.

In other words, the “consciousness” aspect of the behavior of these strong-I machines cannot be simulated on a computer which is less complex than the brain matter itself. Sure, one could always (in principle) take an imprint of the consciousness in brain ‘X’, store it in some digital memory, and at some point in the future restore the consciousness by uploading the stored data into some other brain ‘Y’. However, and this is crucial, while the imprint represents the individual aspects of brain ‘X’s behavior, as long as it is separated from the actual brain itself it cannot exhibit any “conscious” behavior.
Coming back to the film: if the rogue AI copies “itself” (or, more properly speaking, creates “imprints” of itself) onto other computers on the network, those imprints would simply be fossils of the original consciousness, lacking the advanced hardware required to sustain the computation. In other words, the threat of an AI spreading like a global digital pandemic simply cannot be realized unless and until every networked computer is as advanced as the original hardware. Given that, in the film, the original hardware is described as consisting of “the most powerful quantum processors on the planet”, connecting the AI to the internet would not pose any harm as long as most of the hardware on the planet was not as sophisticated as Depp’s original quantum computers.
The filmmakers’ inability to grasp this fact is what leaves “Transcendence” at best a campy sci-fi movie with more in common with “Flash Gordon” than with “Blade Runner”. But don’t let these deep observations stop you from enjoying the rest of the film. After all, how would the AI feel if you gave up on it halfway through the movie?