Are AIs intelligent, and does it matter?

Years ago, I asked a very intelligent friend what he thought of Artificial Intelligence. After the briefest pause he replied, “I think we are a pretty good simulation of it.”

It took me a moment to understand what he meant.

I guess Turing had it right in 1950 (or Denis Diderot in 1746, or René Descartes in 1637, come to that). The point of the Turing test is that if a human evaluator cannot tell the difference between a human and a machine whilst interrogating them in natural language, then the machine is effectively intelligent.

In other words, if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.

I believe this is one of the reasons folks have got so excited about ChatGPT: from a layman’s perspective it clearly passes the test, and any of us can go online right now and give it a go.

Ultimately, I think this question is probably as important as the question of whether a computer can beat a Grandmaster at Chess or Go (both long since settled), or at Old Maid come to that. Does it matter, and if we are confused by that idea, why?

One of the things that beguiles us is the speed and complexity with which computers do things. Alvy Ray Smith, in his terrific A Biography of the Pixel, keeps returning to the impact of Moore’s Law: “Anything good about computers gets better by an order of magnitude every five years.” Great though the advances in computer graphics were, they often had to wait for Moore’s Law to deliver computers fast enough and capable enough to calculate the images. He charts the path from the earliest electronic pixel to Toy Story, explaining that much of the magic of the technical production of the Pixar films comes down to being able to perform an enormous number of calculations to construct a single frame of the film in a reasonable time.
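As a back-of-the-envelope check on that quote (my arithmetic, not Smith’s): an order of magnitude every five years is the same curve as the more familiar “doubling every eighteen months” formulation of Moore’s Law.

```python
# Doubling every 18 months compounds to roughly 10x over 5 years.
doublings_in_5_years = (5 * 12) / 18   # ~3.33 doublings
print(2 ** doublings_in_5_years)       # ~10.08 -- an order of magnitude
```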

The impact of Moore’s Law on the development of computer graphics is so profound that the results are magical. Apparently. The complexity of the computation that goes into every frame is so far beyond our intuitive understanding that Arthur C. Clarke’s Third Law applies: “Any sufficiently advanced technology is indistinguishable from magic.”

And so it is with AI. Deep down the sum of the computation is, in principle, straightforward (regression), but it is wrapped up in so many layers of complexity, with neural networks et al., as to become opaque, to the point that even the designers of the AI systems don’t fully understand why they produce the output that they do.
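To make that concrete, here is a minimal sketch in Python with NumPy (the sizes and names are my own illustration, not anyone’s real model): a single unit is just linear regression, and a “network” is the same operation stacked, with a non-linearity squeezed between the layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, w, b):
    # One unit is plain linear regression: y = x.w + b
    return x @ w + b

def tiny_network(x, params):
    # A network is that same regression stacked, with a
    # non-linearity (here ReLU) between the layers.
    h = x
    for w, b in params[:-1]:
        h = np.maximum(0.0, linear(h, w, b))
    w, b = params[-1]
    return linear(h, w, b)

# Three layers of made-up weights: a few hundred numbers,
# already unreadable by eye; GPT-class models have billions.
sizes = [4, 16, 16, 1]
params = [(0.1 * rng.normal(size=(m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.normal(size=(1, 4))  # one made-up input
print(tiny_network(x, params))
```

Every step in the sketch is elementary arithmetic; the opacity comes from scale, not from any exotic ingredient.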

Summary: it appears to be intelligent, and we don’t understand it. Magic, huh?

Not that the likes of ChatGPT always appear intelligent.

This is no longer about ‘intelligence’, which the OED defines as “the faculty of understanding”; it is more about what we consider to be human.

Are we confused that displaying intelligent behaviour in some way makes an AI human? Do we really believe it can feel emotions? Can it be creative (next post)? Could it love? Could it be evil? Or, if it looks like it is displaying emotions, is that the same as feeling them? After all, we cannot experience each other’s emotions, so maybe, as with the Turing test and intelligence, appearing to feel is effectively equivalent to feeling.

AIs are probably already a part of your everyday life, even if you don’t realise it. They are going to play an even bigger role going forward. I think soon we will get over the fact that computer software has mastered natural language just as we did with Chess. Instead, we’ll come up with some new language to differentiate between human intelligence and AI intelligence, or lack of it.

I’m known for my bad puns, but ChatGPT just produced this poodle joke for me:

Why did the poodle start a band?

Because it had perfect “pooch”!

I said that was a rotten pun and asked for a Border Terrier joke instead:

Why did the Border Terrier bring a ladder to the library?

Because it wanted to reach the “paw”-some books on the top shelf!