ValueWalk’s interview with Pamela McCorduck, the author or co-author of several books, including “This Could Be Important.” In this interview, Pamela discusses her background, explains what AI and algorithms are, the problems AI itself produces, how AR and VR fit into AI, the AI winter, computing power in the 1960s versus now, whether AI is big hype, what her book is about, and protection against fraudulent uses of AI such as deepfakes.
Can you tell us about your background?
I was a graduating English major in 1960 when two young assistant professors asked me if I’d like to work on their book during the period between graduation and my plans to go to graduate school. Yes! I said enthusiastically. Uh, what’s it about? Artificial intelligence, they said. That was the beginning of a long relationship between AI and me. I earned a master’s in the Columbia University School of the Arts Writing Program, and published two novels before I returned to AI and wrote the first modern history of artificial intelligence, called Machines Who Think (1979). Later, with one of those guys who’d introduced me to the field, I co-authored an internationally best-selling book on the Japanese effort in AI, The Fifth Generation (1983). I wrote several other books: on the effects of computing in general, on art and artificial intelligence; then returned to writing novels when I got interested in the sciences of complexity. The new memoir and social history called This Could Be Important—a phrase I used as I pulled on the sleeves of public intellectuals in New York City, where I lived for forty years, saying Artificial intelligence? This could be important—is based on the decades of personal journals I’ve kept since my thirties.
Can you define for us what AI means? Is it just a fancy way of saying algos and computers?
There are many flavors of AI, each its own specialty. Algorithms play a very big role, whether statistical or mathematical, whether supervised or unsupervised. They detect patterns that might otherwise go undetected in big data, and yet struggle to detect patterns that humans find easy: perceiving and understanding scenes, for example. But algorithms are only one part of AI, and while often breathtaking, they’re pretty mechanical. Symbolic AI, which is what the first AI efforts were, is more human-like, and deals with abstractions, like storytelling or making art. Algorithms can play a part in symbolic AI, but they are subordinate to the goals of a program whose main purpose is understanding or telling stories, or making art, behaviors where we depart from our biological cousins and ancestors. The symbolic part of the field is wide open and awaits smart young people to move in.
What purpose would it have, what problems would it solve?
You mean AI? You might as well ask what purpose intelligence in general has, and what problems intelligence solves. In the early days we never thought about the problems AI might itself produce. Gaining more intelligence seemed like gaining more virtue. What could go wrong? Now we know.
Can you tell us about some other related fields like robotics, AR and VR and how they fit into the picture?
These are all fields in their own right that share some basics with AI, but have other specific goals. Robotics is considered embodied AI, but we design robots for specific tasks. The Roomba doesn’t do windows (alas). One interesting project at MIT studies how to imbue robots with human sensibilities about space. Robots can’t work next to you if they aren’t aware when they’re crowding you or appearing threatening. AR and VR also have their own goals, but rely on aspects of AI to reach those goals.
Are any of the above bubbles? Why or why not?
It’s unlikely any AI investments will pay off quickly, but I could be wrong. When I hear the term "AI winter" I have to remind people that entrepreneurs were making pretty big promises in the 1980s for a field that had hardly begun. The winter came in the drying up of investments, not the drying up of research.
You were documenting AI in the 1960s. I find that fascinating - can you explain what it was like then? I mean they barely had computers back then - how did it work?
No kidding! I like to remind people we have more computing power in our parking meters than those pioneers had in their 1960s machines. We each have more computing power in our smartphones than the world, the entire world, had in 1970. (Let me put in a plug right now for computer scientists and computer engineers of the last half-century, who are heroic in what they’ve accomplished. Likewise, of course, AI researchers, though they’re a subset of computer scientists.) In the pioneering days, advancement was very slow, and ambitions had to be very limited. Somebody asked me, how on earth did they communicate their research without the Internet? Answer: they got on planes, or they sent their graduate students on planes, to present papers in person. Or they mailed papers to each other by the postal service. Yes, it was slow and it was simple.
What has changed since then?
Obviously, the technology has transformed everything. All over the world, lots of really smart people have put their minds to the problems. But there has been a much more fundamental, I think equally important, change. A few months ago I heard a leading primatologist say something that nearly made me fall out of my seat (I’d gone to hear him talk about chimpanzees). He said: "AI taught us what questions to ask about intelligence." Wow! But he was right. You hear the same kind of acknowledgment in the digital humanities. In other words, AI has made a deep philosophical and psychological change in how we approach intelligence and intelligent behavior.
Is AI big hype or are we at an inflection point - what are your views?
No, certainly not big hype. I confess that for me, having been around the field for sixty years, it’s dizzying to see the phrase AI leap out of popular journalism as often as it does. This was a phrase I shared with about a hundred people in the world back in 1960, and half of them were rabidly anti-AI!
What is your book about?
My book is one part memoir, one part group biography of the original pioneers, one part social history—definitely a book about humans, not machines. It’s also the account of my personal quest, to connect the sciences and the humanities (English major, remember?) in some very basic, significant way. This has begun to happen, and its velocity will increase.
Based on your 60 years of experience what do you think we will see next in AI going forward?
Hah! If you’d been an early Homo sapiens, how would you predict what might happen as humans acquired symbolic intelligence? Would you say, well, we’ll have storytelling, and make art, and compose songs and poems that will bond us together, and learn to worship deities, and build cities, and… Really? No, you wouldn’t. Sorry, no predictions from me. I know my limitations.
If you could pass any AI related legislation what would it be?
We’re all deeply concerned about the fraudulent uses of AI, deepfakes being a current example. I believe the Europeans are ahead of the U.S. in regulating some aspects of AI, and many here in the U.S. are concerned about shaping regulatory bodies too. I say this based on the enormous numbers of institutes, councils, and long-range studies of the effects of AI that exist.
In AI the representations of right and wrong might seem new, but it’s still at bottom the same old right and wrong. We’ll be falling back on time-tested ethics, like tell the truth, don’t bear false witness. Honor the lives of other living creatures, which includes not only honoring their physical being but in the case of humans, their emotional, intellectual, and private circumstances. Practice personal humility.
I feel so lucky to have been present at the birth of a new field of such deep significance. While AI has always been a spectator sport for me (I’m not a practitioner), its growth, and the possibilities it has opened to us, have been a gift to my life I can hardly describe.