If it’s a Smart Machine, It’s AI

by Kris Hammond (an article from 2014; history is good to remember)

Popular Mechanics recently published the article “Why Watson and Siri Are Not Real AI,” featuring a Q&A with Douglas Hofstadter. In it, Hofstadter explains why he believes “we’re off the path of programming real AI.” It shouldn’t be surprising to learn that I disagree.

In the early days of Artificial Intelligence, the community wanted to think hard about very abstract versions of what they believed intelligent beings would do. They looked at calculus, puzzle solving, chess and all sorts of “life of the mind” problems. The idea was that by working on problems abstracted away from the real world, they could isolate the core algorithms that would define intelligence. It all seemed very scientific at the time.

Of course, this approach never really resulted in anything particularly deep in terms of working systems. There were papers and ideas, but not a lot of working software.

We are now seeing a huge resurgence of interest in AI, driven by a few commercial entities, including Google and IBM.  The former is focused on learning in support of its core mission, while the latter is looking at how to make use of massive textual data sets to support evidence-based reasoning. Unlike research groups in the past, both organizations are squarely focused on solving problems that will give them commercial advantages, rather than provide “interesting” research results.  In addition, unlike the researchers of the past, they are getting incredible results that are having real impact in the world.

There are a lot of reasons for this, not the least of which is the raw computational power that individuals within these companies have access to, as well as the sheer volume of online textual data that is available to them. Sadly, the fact that they are doing this work for commercial purposes is a focus point for detractors.

Unfortunately, some people have a hard time letting go of the past. One of the grand old men of AI, Doug Hofstadter, keeps taking stabs at this sort of work, commenting:

“The people in large companies like IBM or Google, they’re not asking themselves, what is thinking? They’re thinking about how we can get these computers to sidestep or bypass the whole question of meaning and yet still get impressive behavior.”

His stance is that while these systems do impressive things, they are not AI because when you look under the hood, they are getting the job done using techniques that do not conform to his view of what constitutes AI. He sees this performance as uninteresting, stating:

“Watson is finding text without having a clue as to what the text means. In that sense, there’s no intelligence there. It’s clever, it’s impressive, but it’s absolutely vacuous.”

The sad thing here is that Hofstadter seems to think that his view is scientific, while it is actually the essence of an anti-scientific stance. He is arguing against approaches to machine intelligence by saying that this just isn’t the way he thinks intelligence works. He just doesn’t like it, so it must be wrong.

The reality is that these systems do work.  And, if it turns out that reasoning can be supported by search and pattern recognition, then so be it.

The beauty of technology is that it doesn’t care what we think.  It either works or it doesn’t.  And if it does, you don’t get to say it doesn’t just because you don’t like the way it gets the job done.

What follows is the Popular Mechanics Q&A with Hofstadter referenced above.

You’ve said in the past that IBM’s Jeopardy-playing computer, Watson, isn’t deserving of the term artificial intelligence. Why?

Well, artificial intelligence is a slippery term. It could refer to just getting machines to do things that seem intelligent on the surface, such as playing chess well or translating from one language to another on a superficial level—things that are impressive if you don’t look at the details. In that sense, we’ve already created what some people call artificial intelligence. But if you mean a machine that has real intelligence, that is thinking—that’s inaccurate. Watson is basically a text search algorithm connected to a database just like Google search. It doesn’t understand what it’s reading. In fact, read is the wrong word. It’s not reading anything because it’s not comprehending anything. Watson is finding text without having a clue as to what the text means. In that sense, there’s no intelligence there. It’s clever, it’s impressive, but it’s absolutely vacuous. 

Do you think we’ll start seeing diminishing returns from a Watson-like approach to AI?

I can’t really predict that. But what I can say is that I’ve monitored Google Translate—which uses a similar approach—for many years. Google Translate is developing and it’s making progress because the developers are inventing new, clever ways of milking the quickness of computers and the vastness of its database. But it’s not making progress at all in the sense of understanding your text, and you can still see it falling flat on its face a lot of the time. And I know it’ll never produce polished [translated] text, because real translating involves understanding what is being said and then reproducing the ideas that you just heard in a different language. Translation has to do with ideas, it doesn’t have to do with words, and Google Translate is about words triggering other words.

So why are AI researchers so focused on building programs and computers that don’t do anything like thinking?

They’re not studying the mind and they’re not trying to find out the principles of intelligence, so research may not be the right word for what drives people in the field that today is called artificial intelligence. They’re doing product development.

I might say, though, that 30 to 40 years ago, when the field was really young, artificial intelligence wasn’t about making money, and the people in the field weren’t driven by developing products. It was about understanding how the mind works and trying to get computers to do things that the mind can do. The mind is very fluid and flexible, so how do you get a rigid machine to do very fluid things? That’s a beautiful paradox and very exciting, philosophically.

In the ’70s and then again in the ’80s, a lot of AI researchers were pushed away from the type of work you advocate. What happened?

[It was] the result of a lot of hype, which didn’t materialize. In the first case, a lot of AI researchers were basically saying that intelligent machines were just around the corner. They were predicting in the ’60s that there would be a world champion chess playing computer within a decade. And that wasn’t even close, and so I think in the ’70s, a lot of that hype was looked upon skeptically by government funding agencies and the money dried up for a while.

Something else happened in the early ’80s. There was something called the fifth generation of computers, which used a [programming] language called Prolog, based on very rigid, deductive logic, and the claim was that all of human knowledge was going to be encoded in databases with Prolog. And lots of books were written about this; lots of people wrote grants; the grants were funded, and then . . . nothing happened.

Prolog was one of the silliest approaches I’ve ever heard of, and it fell to the ground in shambles. And again people in the government said, “You haven’t produced anything. We’re not going to give you any money.” So certain companies like Apple started to invest instead of the government, and when computers started getting much, much bigger, with better memory and faster processing, people found that you could brute-force problems in a way you couldn’t before. That sort of revived an excitement about developing products that could do incredibly impressive things—even though behind the scenes the computers weren’t doing anything resembling thinking.

So what will get us closer to creating real AI?

I think you have to move toward much more fundamental science, and dive into the nature of what thinking is. What is understanding? How do we make links between things that are, on the surface, fantastically different from one another? In that mystery is the miracle of human thought.

The people in large companies like IBM or Google, they’re not asking themselves, what is thinking? They’re thinking about how we can get these computers to sidestep or bypass the whole question of meaning and yet still get impressive behavior.

How do we even begin to answer those giant questions?

[Here’s one example:] My research group has focused on building programs that look at letter strings and are able to perceive them at abstract levels. For example, I could ask you the question, if the letter string ABC changed to ABD, how could the string PPQQRR have the same thing happen to it? Well, you could say that PPQQRR would also change to ABD. Now that’s the dumbest answer, but it’s defensible. You could be less rigid—but still somewhat rigid—and say it changes to PPQQRD, where the last letter changes to a D. But even more sophisticated is to notice that PPQQRR is just like the three letters of the alphabet ABC, but doubled, and so you say PPQQSS, where the last two letters change to their successors.

Now this isn’t an example of Einsteinian thinking, but it’s an example of thought—of stripping away everything and looking at the essence of the situation. This is what we try to get our programs to do, not only to make abstract perceptions but to favor them.
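To make those three levels of answer concrete, here is a minimal Python sketch of the ABC → ABD example above. It is a toy illustration only, not Hofstadter’s actual analogy-making program; the function names and the simple “group identical letters” heuristic are assumptions chosen purely for clarity.

```python
# Toy illustration of the ABC -> ABD analogy example above.
# NOTE: a minimal sketch, not Hofstadter's actual analogy-making program;
# the function names and the "group identical letters" heuristic are
# assumptions made for illustration.

def successor(ch: str) -> str:
    """Return the next letter of the alphabet (no wrap-around needed here)."""
    return chr(ord(ch) + 1)

def literal_answer(changed_source: str) -> str:
    # Level 1 (most rigid): just repeat the changed source string -> "ABD".
    return changed_source

def surface_answer(target: str) -> str:
    # Level 2 (less rigid): do literally what happened on the surface,
    # replacing the last letter with a D -> "PPQQRD".
    return target[:-1] + "D"

def abstract_answer(target: str) -> str:
    # Level 3 (abstract): see PPQQRR as ABC with every letter doubled, so the
    # rule becomes "replace the last letter *group* with its successor"
    # -> "PPQQSS".
    groups = []
    for ch in target:
        if groups and groups[-1][0] == ch:
            groups[-1] += ch        # extend the current run of identical letters
        else:
            groups.append(ch)       # start a new letter group
    last = groups[-1]
    groups[-1] = successor(last[0]) * len(last)
    return "".join(groups)

if __name__ == "__main__":
    target = "PPQQRR"
    print(literal_answer("ABD"))    # ABD
    print(surface_answer(target))   # PPQQRD
    print(abstract_answer(target))  # PPQQSS
```

The point of the sketch is only that the three answers come from progressively more abstract descriptions of the same change, which is the distinction Hofstadter is drawing.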

Do you think interest in fundamental AI science is being rekindled?

One of my recent graduate students named Abhijit Mahabal came to my research group because he’s very interested in trying to understand how people perceive patterns. And when he finished his Ph.D., Google snatched him up. Now he’s working for Google, and while I know what still drives him is trying to understand the mind, he’s in a corporation, and in a corporation what counts are profits, products, and a bottom line. But at the same time Google wants to encourage bright people and Abhijit is extremely bright, and they give him some leeway to explore his own ideas. 

In an environment where there’s lots of money floating around, there’s always some extra that can be used to indulge in luxuries, you know? And there will always be people interested in the mysteries of thinking.

Being hacked by a robot requires much less hardware than I expected. There’s no need for virtual-reality goggles or 3D holograms. There are no skullcaps studded with electrodes, no bulky cables or hair-thin nanowires snaking into my brain. Here’s what it takes: one pair of alert, blinking eyeballs. 

I’m in the Media Lab, part of MIT’s sprawling campus in Cambridge, Mass. Like most designated research areas, the one belonging to the Personal Robots Group looks more like a teenage boy’s bedroom than some pristine laboratory—it bursts with knotted cables, old pizza boxes and what are either dissected toys or autopsied robots. Amid the clutter, a 5-foot-tall, three-wheeled humanoid robot boots up and starts looking around the room. It’s really looking, the oversize blue eyes tracking first, and the white, swollen, doll-like head following, moving and stopping as though focusing on each researcher’s face. Nexi turns, looks at me. The eyes blink. I stop talking, midsentence, and look back. It’s as instinctive as meeting a newborn’s roving eyes. What do you want? I feel like asking. What do you need? If I was hoping for dispassionate, journalistic distance—and I was—I never had a chance. 

“Right now it’s doing a really basic look-around,” researcher Matt Berlin says. “I think it’s happy, because it has a face to look at.” In another kind of robotics lab, a humanoid bot might be motivated by a specific physical goal—cross the room without falling, find the appropriate colored ball and give it a swift little kick. Nexi’s functionality is more ineffable. This is a social robot. Its sole purpose is to interact with people. Its mission is to be accepted. 

That’s a mission any truly self-aware robot would probably turn down. To gain widespread acceptance could mean fighting decades of robot-related fear and loathing. Such stigmas range from doomsday predictions of machines that inevitably wage war on mankind to the belief that humanoid robots will always be hopelessly unnerving and unsuitable companions. 

For Nexi, arguably the biggest star of the human–robot interaction (HRI) research field, fame is already synonymous with fear. Before visiting the Media Lab, I watched a video of Nexi that’s been seen by thousands of people on YouTube. Nexi rolls into view, pivots stiffly to face the camera and introduces itself in a perfectly pleasant female voice. If the goal was to make Nexi endearing, the clip is a disaster. The eyes are big and expressive, the face is childish and cute, but everything is just slightly off, like a possessed doll masquerading as a giant toddler. Or, for the existentially minded, something more deeply disturbing—a robot with real emotions, equally capable of loving and despising you. Viewers dubbed its performance “creepy.” 

Now, staring back at Nexi, I’m an instant robot apologist. I want to shower those clips with embarrassingly positive comments, to tell the haters and the doubters that the future of HRI is bright. There’s no way seniors will reject the meds handed to them by chattering, winking live-in-nurse bots. Children, no doubt, will love day-care robots, even if the bots sometimes fail to console them, or grind to an unresponsive halt because of buggy software or faulty battery packs. To turn today’s faceless Roombas into tomorrow’s active, autonomous machine companions, social robots need only to follow Nexi’s example, tapping into powerful, even uncontrollable human instincts. 

That’s why Nexi’s metallic arms and hands are drifting around in small, lifelike movements. It’s why Nexi searches for faces and seems to look you in the eye. When it blinks again, with a little motorized buzz, I realize I’m smiling at this thing. I’m responding to it as one social, living creature to another. Nexi hasn’t said a word, and I already want to be its friend. 

As it turns out, knowing your brain is being hacked by a robot doesn’t make it any easier to resist. And perhaps that’s the real danger of social robots. While humans have been busy hypothesizing about malevolent computers and the limits of rubber flesh, roboticists may have stumbled onto a more genuine threat. When face to face with actual robots, people may become too attached. And like human relationships, those attachments can be fraught with pitfalls: How will grandma feel, for example, when her companion bot is packed off for an upgrade and comes back a complete stranger? 

When a machine can push our Darwinian buttons so easily, dismissing our deep-seated reservations with a well-timed flutter of its artificial eyelids, maybe fear isn’t such a stupid reaction after all. Maybe we’ve just been afraid of the wrong thing.
