Marilyn’s Artificial Intelligence

On the pastoral campus of the University of California, Santa Cruz, Professor Marilyn Walker is a force behind the burgeoning Artificial Intelligence major. In fact, her face, thirty feet tall, graces the wall of the international boarding terminal at San Jose International Airport as she writes formulas on a screen. Her image also appears around the campus itself. Clearly, there’s a new Marilyn in town.

By Kimberly Wainscoat

Professor Marilyn Walker

We spoke at her picturesque home on the campus, where she is a tenured, Ivy League- and Stanford-educated professor. An early pioneer at technology companies such as Hewlett-Packard and AT&T Bell Labs, Marilyn is also a sought-after consultant in the booming area of Artificial Intelligence. Her body of work includes statistical methods for dialog optimization and expressive generation for dialog. To the layperson, think Alexa, Siri, or OK Google. Simply put, she is helping to lead the way toward informative human-computer conversations and interactions.
While UCSC was initially developed as an arts campus, without grades and with self-designed majors, much has changed in the past decade with Silicon Valley appearing at the campus’s doorstep. The campus is home to a large swath of forest known as Elfland, where students once explored alternate realities; now they explore Artificial Intelligence in state-of-the-art labs led by world-class professors such as Marilyn. In the past decade the campus has swelled to accommodate students with an eye toward computer science and engineering, particularly those interested in computer gaming and human-computer interaction. Many of these students graduate into engineering jobs in the Valley.
Artificial Intelligence, the future of which Elon Musk and Mark Zuckerberg are currently dueling over in the press, is exploding. The stakes are extraordinary—read what Musk has to say and you would think Artificial Intelligence is humanity’s next weapon of mass destruction. After my interview with Marilyn, I might take Musk’s side in the argument.
The cornerstone of Marilyn’s work is creating AI that can offer in-depth conversation and human-like phrasing, the “dialog optimization” she is known for. According to her, we should be able to ask our computers a complicated question such as, “Hey Alexa, make me a reservation at a vegan restaurant with patio seating and views of the Monterey Bay. I need a table for six at 8 pm on May 9. Let me know what you find.”
Ideally, she would answer back in an informative way: “There are two places I can recommend. Theo’s in Soquel is a lovely and delicious place with limited views, but it has patio seating at the time you requested, whereas Au Midi in Aptos offers extraordinarily complicated French cuisine worthy of a Michelin star, but it has no seats available on the patio at the time you requested. What shall I do? Would you like more information or different times for availability?”
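For readers curious what a request like that looks like from the machine’s side, here is a minimal, purely illustrative Python sketch of the “slot filling” idea behind such dialog systems. The frame fields, the toy restaurant list, and the reply-building function are all hypothetical stand-ins; real assistants use trained language-understanding models and live reservation services rather than hand-written rules like these.

# Hypothetical sketch: the spoken request is reduced to a structured "frame,"
# and a reply is composed by checking candidates against the requested constraints.
from dataclasses import dataclass

@dataclass
class ReservationFrame:
    cuisine: str        # e.g. "vegan"
    party_size: int
    time: str           # requested seating time
    date: str
    wants_patio: bool
    wants_view: bool

# The user's utterance, after language understanding (hand-filled here).
frame = ReservationFrame(cuisine="vegan", party_size=6, time="8 pm",
                         date="May 9", wants_patio=True, wants_view=True)

# Toy stand-in for a restaurant-search service.
candidates = [
    {"name": "Theo's", "town": "Soquel", "patio_free_at": ["8 pm"], "has_view": False},
    {"name": "Au Midi", "town": "Aptos", "patio_free_at": [], "has_view": False},
]

def compose_reply(frame, candidates):
    """Report how each candidate fares on the patio and view constraints."""
    parts = []
    for c in candidates:
        patio = "has" if frame.time in c["patio_free_at"] else "has no"
        view = "" if c["has_view"] else " and only limited views"
        parts.append(f'{c["name"]} in {c["town"]} {patio} patio seating at {frame.time}{view}.')
    return " ".join(parts) + " Would you like more information or different times?"

print(compose_reply(frame, candidates))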
We can ask Siri to do the simplest of things for us, such as “Siri, take me to the local Chevron,” “Siri, call Dad,” or “Put an appointment with the chiropractor on my calendar for tomorrow at 8 am.”
But wouldn’t it be nice if we could ask, “Hey Siri/Alexa, what was the average number of women majoring in Computer Science back in the eighties compared to today? And if there has been a decline in women studying computer science, why is that?” In other words, a fact-finding conversation, where she could act like my assistant.
I asked Siri exactly the question above about women in computer science and the decline in the major. Siri responded with, “Okay, here’s what I found. Here is some information for the number of women majoring in Computer Science in 1984,” and sent me some links to peruse. But she could do no more than this. The word “utterance,” which Marilyn used in our conversation to describe speech directed at computers, captured how I felt talking to Siri. I was uttering, she was uttering, like two animals bleating at each other. This was not a conversation.

Kimberly: Can you share with us the problems you face in getting our computers to converse with us?

Marilyn: Yes. For example, you know that your computer can’t really carry on a conversation, right? So currently your default behavior is to pack everything into a single utterance, because you know that you can’t say one thing, have it come back, and then say the next thing and the next thing, like in a normal conversation. It doesn’t remember the first thing you said, so you can’t build on it. So you, the user, accommodate to the technology. You figure out, ‘How can I use this thing to get something done?’ And one of the things that happens is that people get wedged into a pattern of usage, so that when the technology gets better they don’t actually think of using it for that thing anymore, because they’ve gotten used to it as it is. So you have to re-educate people, and then it’s circular: because the system doesn’t do it, people don’t try it, and because people don’t try it, you don’t have the training data to see what people would do if the capability were there. You need that data to learn how to get a system to do it.

Kimberly: One thing I am struggling with is that computers don’t have emotions. So is it possible to really converse with them?

Marilyn: I’m not saying there is going to be emotional intelligence, but it can help you get more complex things done. One of the things we are trying to do is learn from user reviews how people actually phrase things, that you might say something like ‘the food is out of this world.’ So instead of something really generic like ‘it’s got great food,’ we are trying to teach the computer not to be so boring when it talks.

Kimberly: How did you begin your career in technology?

Marilyn: The first problem that I started working on, the very simplest problem, was ‘Who is he, her, it, them?’ Pronouns. It’s kind of ridiculous. That was my first paper, in 1987, and computers still can’t do it. I mean, it’s not that hard. It’s because of the assumptions about search. People accommodate to what the technology can do.

Kimberly: Could this technology be used for some kind of nefarious purpose?

Marilyn: It’s hard to imagine my technology being used to do some kind of evil, like convincing somebody to push some button they should not push. But it’s possible. About ten years ago I worked on technology for personality recognition. It was developed to try to recognize people’s personalities from their speech. That is the kind of technology that they said was used in the last election. They used people’s social media to figure out where people’s hot buttons were, and they ran targeted political campaigns based on this kind of information, information inferred from your Facebook or other social media posts. For example, it might be able to infer that you are kind of self-righteous or something, and it would figure out some kind of targeted political message that would sway your beliefs.
I was interested in it because there was this idea that the computer should adapt to you, and one of the most visible aspects of you is your personality, such as whether you are introverted or extroverted. There are results showing that a robotic exercise coach that matches a stroke victim’s personality is better at getting them to do their medically recommended exercises. An extroverted person with an extroverted coach spends longer doing their exercises, and an introverted person with an introverted robotic coach also spends more time. So if you can infer what somebody’s personality is, you can have the computer adapt to them.
And you could be doing good just as well as you could be doing evil. When I saw those articles about the last election, I did think this hits too close to home. I had never imagined that the technology would be used in that scenario. But that’s what happens. It reminds me of when plastic was invented. People didn’t know it was going to cause such havoc in our seas. It was just exciting and revolutionary. It is unimaginable what will and can be done with this technology down the road.

Kimberly: What kind of consulting work are you doing in Silicon Valley?

Marilyn: I find the connection with Silicon Valley very compelling, the opportunity to have an impact on the technology that is right there at our doorstep. And with all this interest in my area, this last year my team of students was selected by Amazon to compete in the Alexa Prize. There was an international call for proposals; Amazon received over a hundred, picked twelve teams to compete, and gave them one hundred thousand dollars to develop a chatbot for Alexa. Our team was one of those chosen.

Kimberly: What is a chatbot?

Marilyn: It is just another kind of conversational agent, but instead of being task-oriented, like ‘get me an Uber,’ you’re supposed to be able to just chitchat with it. You are supposed to be able to talk to it about fashion or movies.

Kimberly: That’s huge.

Marilyn: It’s impossible! Well, it’s not necessarily impossible, but with the current technology, if you talk to any of the chatbots in this competition, where people have been working for almost a year, they are all quite limited. It’s a big idea. The idea that Amazon had was to see if you could talk about virtually anything for twenty minutes and have some kind of interesting conversation. It’s a very challenging problem.
Recently, I received a gift from Fujitsu to start a collaboration with them. And right now we are talking to another company that I shouldn’t mention because the contract is not signed. I also consult with a lot of startups because the area is so hot; that could take up all my time if I am not careful. There is an incredible number of startups working on conversational AI right now.

Kimberly: What kinds of companies are working on AI? In other words, where and how will we, the consumers, see this in our lives in the near future?

Marilyn: There are car companies using it, as we all know. But it could also be used in your refrigerator. For example, you could call your refrigerator and say, ‘Do we have any eggs?’ And you could call your house and tell it to put the heat on. One day soon you may be able to talk to your medical devices.

Kimberly: One last question—According to the National Center for Education Statistics, during the 1984-1985 academic year, when you graduated with a degree in Computer Science, women accounted for nearly 37% of all computer science undergraduate students. As of 2010-2011, women made up just 17.6% of computer science students. Where are all the women in computer science?

Marilyn: I don’t know. Where are they? When I taught at the university level in England, I had a computer science class with one hundred students and only two of them were women, and they weren’t even British; they were from India. The number of women in the major is declining, and I don’t know why that is.

Kimberly: Your daughter Isabel just finished her freshman year as an engineering major at Tufts. Was she inspired by you?

Marilyn: No one was more surprised than I was about that. My husband Steve (she is referring to UC Santa Cruz Professor Steve Whittaker, also a Fellow of the prestigious Association for Computational Linguistics) and I work together on a joint project called Well Being, so the study of computers and engineering is a family affair.




Two University Students Dish on the Subjects of College Life & “Fembots”
Grace Wainscoat & Isabel Walker, Graduation Day, Georgiana Bruce Kirby, 2016

Marilyn’s daughter, Isabel Whittaker-Walker, is majoring in Human Factors & Engineering at Tufts University. She was interviewed by Grace Wainscoat, who will be studying Humanities & Italian at John Cabot University in Rome, Italy.
Grace: In your major, there seem to be a lot of different elements and areas of study to pursue, and no particular path. Is that right?

Isabel: Certainly, yes, in what I’m interested in. I think that if I were doing a more traditional type of engineering degree, such as Civil or Mechanical, there would be courses I would have to take to get my certification. But it’s not really like that, because Human Factors and Engineering is more psychology-based and more qualitative, so it’s applicable to a lot more things. I can go a lot of different ways with it. I could take engineering management courses if I’m interested. There’s a chance I might be a mechanical engineer at some point, so I have taken a lot of engineering courses. I think I will probably be working with engineers, so they will be useful.

Grace: Do engineering majors have fun in college?

Isabel: Yes, we do have fun in college. Me and my engineering girl gang, we have fun. (Laughter)

Grace: Are all the stereotypes wrong about nerdy nerds and boring boys in these engineering and math majors?

Isabel: There might still be some (laughter)... but it’s a good mix. I like it; a lot of my friends are engineers and a lot aren’t.

Grace: Are you taking any Humanities classes?

Isabel: I placed out of English but I’m taking a yoga class next semester.

Grace: How rigorous is the major in Human Factors and Engineering?

Isabel: It’s very rigorous but I like it so it’s worth it. While I may be taking calculus or physics and I may hate that, I may also be taking a cool psychology or human factors course.

Grace: How does your engineering or math background affect the way you take selfies? Can you attribute your great selfies to your knowledge of geometry and angles?

Isabel: I wouldn’t define it that way.

Grace: What are you learning about robots?

Isabel: I have seen some really cool robots that can help the physically disabled do things. I think the company is called Willow Garage; they made this assistive robot that could shave a man’s face. He was paralyzed and needed assistance. Companion robots are cool. I have seen a lot of research on how robots look and whether or not people want to interact with them based on that. Sometimes you want them to look cute, and sometimes you want them to look like they know what they are doing.

Grace: Should we talk about ethics of robots and their rights?

Isabel: We should talk about gender and robots.

Grace: Okay let's talk about that.

Isabel: I am so interested in this. Well, I don’t think a robot needs a gender, but if it did, what would that mean? In science fiction, how do we see female robots and how do we see male robots? They do very different things. There is the term ‘fembot.’ Siri doesn’t think they have a gender. If you ask them, ‘Siri, what’s your gender?’ they will say, “I exist beyond your human concept of gender.” So for Siri, it’s more appealing to have a female voice in a service position, yet Siri doesn’t even identify as having a gender. And there is an interesting question, from a service perspective: is there a benefit to having Siri be a female voice? You would rather have a woman serve you. It would make you feel better. It’s what you are used to from a historical perspective; think of the traditional roles of phone operators, receptionists, and such. It is purely psychological. It’s what users in the market prefer. And also, people don’t want to see a super-masculine-looking robot with a female voice, because that seems weird and throws them off. So it’s all about user comfort.

Grace: It seems like robots are reinforcing gender stereotypes.

Isabel: Yes, and robots are becoming increasingly relevant. In this study that I am working on, we added a female condition because they didn’t have one, which I think is partially because of resources, but still, we put it in there.
