AI (Artificial Intelligence) and RI (Real Intelligence)



 

By Dennis Schuchman


 

Everyone is saying that AI is about to change the world, and they’re right. After decades of missteps and false promises, AI is coming of age and it will change the world in ways that I can’t even imagine.

 

I’ve been in the computer industry for over 40 years now, but I’m not a computer scientist. In fact, I’ve never even taken a computer science course. I’ve learned a number of programming languages, how various operating systems work, and how to configure hardware, software, and networks to create computing environments that let users get their work done. So, I’m not a scientist, but maybe more of an engineer. Or maybe just a mechanic. When it comes to AI, then, I’m just as much an outsider as anyone. Well, almost.

 

I think that the way the media present AI gives people the wrong idea of what it’s about. People seem to think that AI combines the best of human brilliance with cold, hard computer logic.

 

Wrong.

 

Let me tell you a story.

 

There’s a weekly podcast I like called Skeptics’ Guide to the Universe. The host is Steve Novella, a neurology professor at Yale. He and his panel discuss science news and debunk pseudoscience. One of the regular features of the ’cast is called Science or Fiction. Steve presents his panel with three science news items. The catch is that one of them is fake and the panel has to figure out which one it is. Sometimes there’s a theme: the news items are all on the same general topic. A recent Science or Fiction had a theme, but Steve was going to keep it secret until the end.

 

I’ll spoil the ending -- the theme was ChatGPT. Steve asked ChatGPT if it was familiar with the podcast and with Science or Fiction. The program apparently gave answers that, if you’d heard them from a person, would lead you to think that the person understood Skeptics’ Guide in general and Science or Fiction in particular. So, Steve took the next logical step of having ChatGPT put together that week’s Science or Fiction.

 

When Steve revealed the fake story, I thought, “Huh. I thought that was true.” Turns out I was right -- all three stories were real and none were fake.


So how could a seemingly intelligent program get something so simple so glaringly, obviously wrong? And that brings up some other questions that are just as important. How does AI ever get it right? And how do we actually know it’s right? The answer, basically, is: we don’t.

 

I’m an old school programmer. Any program I write is based on an algorithm, a step-by-step procedure for solving a problem. Writing a program means translating the algorithm into whatever programming language I decide to use.

 

Let me give you an example. When I was a kid, my father showed me a little geometric figure. He asked if I could draw it without taking the pencil off the paper and without drawing over any lines. Not having a lot of patience, I didn’t find the answer, so he showed it to me. Many years later, after I’d been working with computers for quite some time, something (I don’t remember what) made me think about Dad’s little puzzle. It was obvious that there was more than one way to do it and it made me curious about just how many solutions there were.

 

So I wrote a program to find them all (a sketch of this kind of search follows the list below). My programming technique was what we call “brute force”: it simply tried every possible way to do it. Luckily, the number of combinations was small enough that the program could run pretty quickly. And, since I wanted to know if the program was working right, I had it print out every step of every attempt. I could check whether the solutions it found were right by picking up a pencil and trying them. If any of the solutions were wrong, or if the program seemed to be missing correct solutions, I could go back and look at all of the attempts. If I did that, I would find one of three things:

 

1. Nothing was wrong and I was just mistaken.

2. Something in my algorithm was wrong and led to a wrong answer.

3. The algorithm was right, but my coding was wrong.
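
To make this concrete, here’s a minimal sketch in Python of that kind of brute-force search. One caveat: the figure in the sketch is an assumption -- I’ve used the classic “house” figure (a square with both diagonals and a triangular roof), the best-known puzzle of this kind, and the actual figure may have been different.

```python
# Brute-force search for every way to draw a figure without lifting
# the pencil and without drawing over any lines. The figure is assumed:
# the classic "house" -- a square, both diagonals, and a triangular roof.

# Vertices: 0-1 are the bottom corners, 2-3 the top corners, 4 the roof peak.
EDGES = [
    (0, 1), (1, 2), (2, 3), (3, 0),  # the square
    (0, 2), (1, 3),                  # the diagonals
    (2, 4), (3, 4),                  # the roof
]

def find_solutions():
    """Try every possible path that draws each line exactly once."""
    found = []

    def extend(vertex, used, path):
        if len(used) == len(EDGES):       # every line drawn: a solution
            found.append(path[:])
            return
        for i, (a, b) in enumerate(EDGES):
            if i in used:
                continue                  # no drawing over lines
            if vertex == a:
                nxt = b
            elif vertex == b:
                nxt = a
            else:
                continue                  # line doesn't touch the pencil's position
            used.add(i)
            path.append(nxt)
            extend(nxt, used, path)       # print path here to see every attempt
            path.pop()
            used.remove(i)

    for start in range(5):                # try starting from every corner
        extend(start, set(), [start])
    return found

paths = find_solutions()
print(len(paths), "ways to draw it")
for p in paths[:5]:                       # show a few solutions
    print(" -> ".join(map(str, p)))
```

Every solution this sketch finds starts at one bottom corner and ends at the other. That’s no accident: those are the only two corners where an odd number of lines meet, which is exactly what Euler’s famous bridges-of-Königsberg result predicts.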

 

Early AI tried to come up with algorithms that captured the way humans think. But that led to only limited success, so modern AI doesn’t work that way. It works via neural nets, which are an entirely different way of simulating how human brains learn things and solve problems.

 

Suppose we wanted to create a neural net that could look at a picture and decide whether or not it was a picture of a horse. The neural net would have an input layer consisting of the pixels of the digital image being examined. The pixels are called input nodes. Then there would be an output layer consisting of two output nodes, which we can call Horse and Not Horse. Every input node is connected to every output node. It’s called a neural net because the nodes are analogous to neurons and the connections to synapses. After looking at a picture, we would want the neural net to light up either the Horse or the Not Horse node.

To start with, the network has to be trained -- it’s shown a set of images that are known to be either Horse or Not Horse. What training does is adjust the strengths, or weights, of the connections between each input node and each output node. At the end, an output node lights up if the sum of all of its weighted inputs exceeds a threshold value. Needless to say, it’s a lot more complicated in practice, but that’s the essence of it.
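
To make the mechanism concrete, here’s a toy sketch in Python. Everything in it is made up for illustration -- four pixels instead of thousands, and random weights standing in for trained ones -- but it shows the weighted-sum-against-a-threshold idea:

```python
import random

# Toy version of the single-layer net described above: every input node
# (pixel) connects to both output nodes, and an output node "lights up"
# when the weighted sum of its inputs exceeds a threshold. The weights
# here are random stand-ins, not the result of any training.

N_PIXELS = 4          # a real image would have thousands of input nodes
THRESHOLD = 0.5

random.seed(0)
weights = {                                  # one weight per connection
    "Horse":     [random.uniform(-1, 1) for _ in range(N_PIXELS)],
    "Not Horse": [random.uniform(-1, 1) for _ in range(N_PIXELS)],
}

def classify(pixels):
    """Light up any output node whose weighted input sum crosses the threshold."""
    for label, ws in weights.items():
        total = sum(w * p for w, p in zip(ws, pixels))
        verdict = "lights up!" if total > THRESHOLD else "stays dark"
        print(f"{label}: weighted sum = {total:+.2f} -- {verdict}")

classify([0.2, 0.9, 0.4, 0.7])               # a made-up four-pixel "image"
```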

 

I don’t know if a neural net as simple as the one in my example could really solve the Horse/Not Horse problem. Most neural nets today are made more complicated, and more capable, by adding hidden layers of nodes between the input layer and the output layer; each node is connected to all of the nodes of the next layer. You may have heard the term deep learning. All that refers to is neural nets with multiple hidden layers.
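
Here’s the earlier toy sketch extended with hidden layers -- again with random, untrained weights -- just to show that “deep” only means repeating the weighted-sum step, with each layer feeding the next:

```python
import math
import random

random.seed(1)

def layer(inputs, n_out):
    """Connect every input to n_out new nodes, each with its own weights."""
    weights = [[random.uniform(-1, 1) for _ in inputs] for _ in range(n_out)]
    # Each node squashes its weighted sum into the 0-1 range (a sigmoid),
    # a common stand-in for the threshold in the earlier sketch.
    return [
        1 / (1 + math.exp(-sum(w * x for w, x in zip(ws, inputs))))
        for ws in weights
    ]

pixels = [0.2, 0.9, 0.4, 0.7]    # the same made-up "image"
hidden1 = layer(pixels, 3)       # first hidden layer
hidden2 = layer(hidden1, 3)      # second hidden layer: "deep" = several of these
outputs = layer(hidden2, 2)      # the Horse / Not Horse output layer
print(outputs)
```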

 

Unlike my little geometry program, which can print out the result of each step in the algorithm so we can understand exactly what it did, the only thing we can learn from a neural net after training is what the weights of the connections ended up being. And that doesn’t really tell us why those particular weights gave us the answer. In fact, you can start with two identical neural nets and train them with different sets of images and you’d most likely end up with two nets with similar accuracy, but with different weights for the connections. And they’d probably get different ones wrong.
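
You can see this in miniature with a toy experiment -- nothing like a real image classifier, but the same principle. The sketch below trains two copies of a one-node net on the same simple rule (“is x + y greater than 1?”) using two different random training sets. Both end up accurate, but with different weights, and neither set of numbers explains itself:

```python
import math
import random

def train(seed, steps=5000, rate=0.5):
    """Train one logistic node on random examples of the rule x + y > 1."""
    rng = random.Random(seed)
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(steps):
        x, y = rng.random(), rng.random()
        target = 1.0 if x + y > 1 else 0.0
        out = 1 / (1 + math.exp(-(w0 * x + w1 * y + bias)))
        err = out - target            # simple gradient-descent update
        w0 -= rate * err * x
        w1 -= rate * err * y
        bias -= rate * err
    return w0, w1, bias

# Two identical nets, two different training sets, the same task.
for seed in (1, 2):
    w0, w1, bias = train(seed)
    print(f"training set {seed}: w0 = {w0:.2f}, w1 = {w1:.2f}, bias = {bias:.2f}")
```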

 

Large language models (LLMs) like ChatGPT are built on neural nets and add another layer of complexity -- generative AI. According to Wikipedia:

 

Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics.

 

What’s missing here? LLMs don’t necessarily evaluate the input data to decide whether it’s true or false, relevant or irrelevant, biased or honest. And they don’t check their answers for validity. That’s why ChatGPT didn’t know that its Science or Fiction was wrong.

 

This means that AI has some of the same characteristics as RI (real intelligence), namely fallibility and inscrutability. There’s ongoing research trying to eliminate, or at least reduce, the fallibility and inscrutability, but it’s not there yet.


Don’t get me wrong -- I’m not trying to diminish what AI has accomplished and what it will accomplish in the future. If you want a great example, go look up AlphaFold, which cracked protein structure prediction -- a problem that many, many scientists had been beating their heads against for decades with limited success. That breakthrough will likely lead to a whole new generation of medicines.

 

But we just need to be aware of AI’s limitations. Would I trust AI to drive a car better and more safely than a human being within the next several years? Absolutely. Would I trust one to decide whether or not to launch a preemptive nuclear strike? Not any time soon.
