Judea Pearl has spent a lifetime working to advance machine learning. Professor of Computer Science at UCLA and a recipient of the Turing Award, the highest honor in the field of computer science, he is well acquainted with the capabilities of Large Language Models.
“I enjoy working with ChatGPT, yes,” the 88-year-old computer scientist told me in the living room of his Monterey-style house in Encino. “It gives me a lot of things quicker than I would be able to find.”
However, Dr. Pearl cautions that the fluency of these AI systems should not be mistaken for actual understanding. “You have to understand how they operate,” he told me. “It doesn’t understand what the context is, or what the content of those articles is, but it’s very good in predicting the next word. That’s how they operate. They are very good in predicting, and they can look at billions and billions of articles.”
Real understanding, he said, is far more nuanced than being able to spit words back at a prompt. “It’s not understanding the way we humans understand understanding.”
Dr. Pearl offered an example of human understanding. “When I say that you understand algebra, or you understand politics, or you understand how a refrigerator works, I mean something different. You have a model of how the refrigerator works. And if I ask you a question about the refrigerator, you use your model to answer the question. The model is the same model that I have. So I understand you, and if I test you, you understand,” he said.
But Large Language Models do not operate that way. “If we’re talking about a computer, it doesn’t have the model of the refrigerator. It’s only a way to predict the next word,” he said. After being trained on billions of articles, the model is very good at text prediction. It knows that if you say “Hi,” the common response should be “Hi. How are you?” But that does not mean it has real understanding that it is greeting someone, nor does it understand the motivation behind the common greeting: making yourself pleasant to others.
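A rough way to see what next-word prediction means, in code: the toy model below simply counts which word follows which in its training text and parrots back the most common continuation. This is my own illustration, not how ChatGPT is actually built (real systems use neural networks trained on billions of documents), but the principle of choosing a likely continuation from statistics, with no model of the world behind it, is the same.

```python
from collections import Counter, defaultdict

# A minimal sketch of next-word prediction: a bigram model that has no
# "model of the refrigerator," only counts of which word follows which.
corpus = "hi . how are you ? hi . how are you ? hi there .".split()

follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training text."""
    counts = follow_counts[word]
    return counts.most_common(1)[0][0] if counts else "?"

print(predict_next("hi"))   # '.'   (it echoes its statistics)
print(predict_next("how"))  # 'are'
```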
AI might have gone in a very different direction, building on Dr. Pearl’s work to develop causal and counterfactual reasoning. Humans use this kind of reasoning all the time: if I had practiced basketball over the summer, I would have had a better chance of making the team. But Large Language Models cannot engage in causal or counterfactual reasoning, though they may be trained to respond with “Aw shucks!” after you don’t make the team.
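For contrast, here is a minimal sketch of the kind of structural causal model that Dr. Pearl’s framework formalizes, walking the basketball example through the three-step counterfactual recipe from his theory: abduction, action, prediction. The variables and numbers are invented for illustration; this is not code from his work.

```python
# A minimal sketch of a structural causal model (SCM), in the spirit of
# Dr. Pearl's framework. Names and numbers here are purely illustrative.
# Model: skill = base_talent + 2 if practiced; make the team if skill >= 3.

def skill(base_talent: int, practiced: bool) -> int:
    return base_talent + (2 if practiced else 0)

def makes_team(skill_level: int) -> bool:
    return skill_level >= 3

# Observed world: I did not practice, and I did not make the team.
observed_practiced = False
observed_made_team = False

# Step 1 (abduction): infer a background factor consistent with what we
# observed. A base talent of 2 explains not making the team.
base_talent = 2
assert makes_team(skill(base_talent, observed_practiced)) == observed_made_team

# Step 2 (action): intervene on the model by setting practiced = True.
# Step 3 (prediction): recompute the outcome in that modified world.
counterfactual = makes_team(skill(base_talent, practiced=True))
print(counterfactual)  # True: had I practiced, I would have made the team
```

The point of the contrast is that the counterfactual answer comes from rerunning an explicit model of how the world works, not from predicting which words tend to follow which.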
Dr. Pearl believes that AI is going in the wrong direction. “It’s a black box that no one understands,” he said, “and I don’t like opaque systems.”
His larger concern is that AI poses a serious danger to humanity. “Because it has super computational ability, it can take advantage of us. It can essentially treat us as pets to entertain the master. Who’s the master? The machine.”
Consider one scenario: in the future, an artificial intelligence might remotely employ people to work for it, letting the machine carry out its plans in the real world. Election and polling fraud could become frequent. In the eyes of the government, society would appear to support giving more control to artificial intelligence, when in fact it does not.
“The machine can make us believe that what we are doing is for our own good, when, in fact, it is for the master’s good. Most rulers do that. It gives the subjects the illusion that what they are doing is for their own good, but, in essence, they are serving the master. So there is no reason why a machine like that will be unable to enslave us mentally, perhaps physically too,” he said. “If it has control, it can physically bring in the sheriff. It can convince the sheriff to go and arrest us so that it can control us both physically and intellectually.”
Dr. Pearl is also an avid defender of Israel, on UCLA’s campus and on X. He was born and raised in Bnei Brak before Israel became a state. In 2002, his son Danny was murdered by the Islamic terrorist group Al-Qaeda while working as the South Asia bureau chief for The Wall Street Journal. “I do my work, and I defend Israel,” Dr. Pearl said.
Lately, he has been writing a book about the relationship between Israelis and their Palestinian neighbors. From my conversation with him, it does not sound like it will be very optimistic. “Every Palestinian wants only one thing: to eliminate Israel,” he said. “This is the issue here and people forget it because it doesn’t make sense. Why wouldn’t they wish to have a state instead of preventing the Jews? It doesn’t make sense to any Westerner.” Once again, Dr. Pearl does not like opaque systems.
Artificial intelligence, like the Middle East, is full of opportunities and great dangers. Once again, conventional wisdom is wrong. “You see that people are arguing and arguing about what we do, what is the danger, and how we can control it, and they don’t know how,” he said of AI. “Whoever tells you that he or she knows is lying. We don’t have any idea of how to control it.”