First Monday

FM Interviews: Rosalind Picard

Rosalind Picard is NEC Development Professor of Computers and Communications and Associate Professor of Media Technology at MIT. Educated at Georgia Tech and MIT, Rosalind (or Roz) worked at AT&T Bell Labs, IBM, and Hewlett-Packard, and even as a graphic designer and newspaper features editor, before her academic career. Her research interests include affective computing; texture and pattern modeling; and browsing and retrieval in video and image libraries. Her most recent book, Affective Computing, was just published by MIT Press, so First Monday took a brief opportunity to catch up with Roz and discuss her book and research.

FM: Affective computing - could you describe it for us?

Rosalind Picard: Affective computing is computing that relates to, arises from, or deliberately influences emotions. In practice, we are trying to build computers with the skills of emotional intelligence, such as the ability to recognize emotions, assist in communicating human emotion, and respond appropriately to emotion. Our prototypes include wearable computers that look more like shoes, jewelry, and clothing than like the traditional box on a desk, although we are working on the latter as well. One example that has already been demonstrated is a pair of eyeglasses that sense whether the wearer is furrowing his brow in confusion or raising it in interest, and relay this information to the computer or to other people, perhaps in a distance-learning situation.
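
As a rough sketch of what such "expression glasses" imply computationally (the signal model, thresholds, and relay function below are hypothetical illustrations, not the lab's actual design), the glasses would map a brow-muscle signal to a coarse expression label and pass it along:

    # Hypothetical sketch: map a brow-sensor reading to an expression label.
    # The signal model, thresholds, and relay target are illustrative only,
    # not the actual design of the MIT expression glasses.

    def classify_brow(signal: float) -> str:
        """Classify a normalized brow signal in [-1, 1]: negative values
        stand in for furrowing (confusion), positive for raising (interest)."""
        if signal < -0.3:
            return "confused"
        if signal > 0.3:
            return "interested"
        return "neutral"

    def relay(label: str) -> None:
        # In a real system this might go to a remote instructor's display.
        print(f"wearer appears: {label}")

    for reading in [-0.7, 0.1, 0.6]:  # simulated sensor samples
        relay(classify_brow(reading))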

FM: Why "affective" rather than "emotive"?

Rosalind Picard: I pondered adjectives for some time. The popular term "emotional" connotes irrational behavior, and "emotive" seemed too close to that and to "emote," which gets associated with the expressive aspects of emotion. Expression is only a part of affective computing. I went to the theorists, where I found a thicket of conflicting terminology, with few agreed-upon definitions for any of these words. Several theorists include "attention" and "interest" as affective phenomena, but not as emotions. I wanted a broad term that was accurate and minimally tied to the negative aspects of emotions. "Affective" doesn't pack the book-cover punch of "emot___" but in the end it won out because it seemed to be the most accurate. "Affective" has the additional advantage that it is nicely confused with "effective."

FM: You argue that computers could use a little more emotional intelligence. How could we measure this intelligence in a computer?

Rosalind Picard: This is the kind of question that drives the advocates of emotional intelligence nuts. Since Dan Goleman's best-seller Emotional Intelligence, there has been renewed interest in devising some kind of measuring stick for these skills, some kind of "EQ" test. I won't go into all the problems with the proposed measuring devices (Gardner tackles some of these issues, which also arise in testing the "multiple intelligences"). Rather than try to measure the "EQ" of a machine, I think what will be measured is how well it, in tandem with its user, performs when given certain tasks.

For example, suppose 100 people try to solve a problem with the aid of a computer. Half the computers have the skills of emotional intelligence, and half do not. Once the human-computer pairs have solved the problem, you inquire about things such as accuracy, speed of solution, the human's enjoyment of the experience, what the human learned, and the human's desire to continue working with that computer. In other words, there is no scalar criterion of success; the measurement will depend on the situation, its goals, and even the values placed on the goals.
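
A minimal sketch of the comparison described above, with invented metric names and numbers (none of this comes from an actual study), might report a vector of outcomes per group rather than a single score:

    # Hypothetical sketch of the paired evaluation described above: compare
    # two groups of human-computer pairs on several criteria rather than a
    # single scalar score. All names and numbers here are invented.
    from statistics import mean

    METRICS = ["accuracy", "speed", "enjoyment", "learning", "desire_to_continue"]

    def summarize(group):
        """Average each metric over a group of human-computer pairs."""
        return {m: mean(pair[m] for pair in group) for m in METRICS}

    affective = [  # pairs whose computer has emotional-intelligence skills
        {"accuracy": 0.90, "speed": 7.1, "enjoyment": 4.2, "learning": 3.8, "desire_to_continue": 0.8},
        {"accuracy": 0.80, "speed": 6.4, "enjoyment": 4.5, "learning": 4.0, "desire_to_continue": 0.9},
    ]
    control = [  # pairs whose computer lacks those skills
        {"accuracy": 0.85, "speed": 6.9, "enjoyment": 3.1, "learning": 3.2, "desire_to_continue": 0.5},
        {"accuracy": 0.80, "speed": 7.0, "enjoyment": 2.9, "learning": 3.5, "desire_to_continue": 0.4},
    ]

    # There is no scalar criterion of success: report each metric and let
    # the situation and its goals decide which differences matter.
    print("affective:", summarize(affective))
    print("control:  ", summarize(control))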

It is important to be clear that the affective computers above won't necessarily be any more "emotional" in a given situation. Emotional intelligence includes the ability to regulate emotions, so that, for example, if it was inappropriate to show emotion in some situation, then the computer would show no emotion. However, the computer would still watch for emotion from its user, so that it could adapt its behavior in better service to that user.

FM: Theodore Roszak wrote thirty years ago, in a short essay, about the importance of the "impish spirit of play" and warned about the dangers of technological or "idiot efficiency." In a sense you are asking for computers to have this "impish spirit", aren't you?

Rosalind Picard: What a delightful expression. However, I'm not sure "idiot efficiency" is the real danger after all. I'm more worried about what might be called "stunted emotions": the misinterpreted mood of your e-mail; conversations that take ten times longer than necessary because the writers weren't skilled enough to convert vocal intonation into text; and the danger of what happens to people who spend all their waking hours interacting with and through a device that constantly ignores their emotions. The latter is of particular concern if you believe Reeves and Nass's findings, which indicate that the way people interact with computers is inherently natural and social.

Here's what Reeves and Nass found: if you take a classic test of social interaction between two people and replace one of the people with a computer, the classic results still hold. For example, if a person teaches you something, you might tell them afterwards "that was really great." If another person asks you how that teacher was, you might say just "great": people tend to give slightly higher praise face-to-face. Now replace some of the people with a computer. If the computer teaches you (a person) something and asks "please rate how this computer did as a teacher," you might click on "really great." If another, identical computer asks you to rate how that first computer did, you would still tend to click on something positive, but not quite as positive: "great." In other words, you are nicer "face-to-face" (face-to-monitor) than you are otherwise. The results of the human-human interaction still hold for the human-computer interaction. Dozens of studies have pointed to this result.

Now, if Reeves and Nass's so-called "media equation" is true, then what is the result of human-computer interactions when the computer repeatedly ignores your emotions? When a person constantly ignores your emotions, especially ones you expressed clearly, it can have a detrimental impact on your relationship, your self-esteem, and so forth. You receive the message that your emotions do not matter, or perhaps that they are invalid. This concerns me: non-affective computers, which repeatedly ignore deliberately expressed human emotions, may be having a detrimental impact on people. I wonder whether part of the reason so many people are angry at Microsoft is not just that their products frustrate them so much, but that this frustration is ignored. The computer makes people feel like they are dummies, when in fact it is the computer that is stupid.

FM: As you noted in your essay "Does HAL Cry Digital Tears?", HAL in some ways was more emotional than his human companions in space. Why do you think the media and others imagine scientific perfection in humans as a kind of mechanical, calculating state? Mr. Spock of the original Star Trek series was the archetype, in some ways, wasn't he? But you would argue that a completely non-emotional human would be a huge intellectual disaster. Why?

Rosalind Picard: HAL was the most emotional character in the film; quite a leap for the 60's! I suspect the perception of the ideal scientist as mechanical and calculating (this was the 60's, in contrast with the scientists recently portrayed in Jurassic Park, for example) comes from the misperception that rational thinking is purely mechanical and calculating. After all, most irrational thinking appears to be associated with strong emotions, and no such emotions are obvious when people are rational. One problem is that most people tend to NOTICE emotions only when they are out of balance, such as when someone is too emotional. When emotions are working their moment-by-moment internal influence on rational thought, we do not see them. But it is very important that they are still there doing their work. Neuroscientists have learned that if emotions are effectively disconnected in the human brain, a person's ability to act and think rationally is impaired. If you want to be scientifically accurate, then, it matters that Spock actually did have emotions; he just had great emotional intelligence when it came to suppressing their expression. In contrast, Star Trek's android "Data" is misleading in that he still functions rationally when his emotion chip is turned off.

FM: So emotions are "objective" in their influence on rational thought? Or is the subjectivity of emotions OK? There is this impression that emotional influence is subjective, and therefore "wrong," while non-emotional influence on critical thinking is objective, and therefore "right." So are "subjective" and "objective" meaningless?

Rosalind Picard: I wouldn't say that they are meaningless notions, only easily confused notions. There can be purely logical, "objective" thought: "if A implies B, then not B implies not A," and so forth. However, upon finer examination, a person's decision whether to choose such a logical approach is probably based on her feelings about the importance of logic in that situation. Emotions are steadily at work within us, biasing goals and motivations, and ultimately what we think and do. And it's good we can't turn these emotions off. Most of the decisions we face throughout the day do not involve a clear definition of "A" or "B," or a clear set of rules such as "A implies B." In these day-to-day decisions, emotions evidently act as a guide through "rational" (but not purely objective) reasoning. If we did not have emotional signals working in these cases, we would probably fail to arrive at final decisions; we would spend lots of time wandering around, acting rather irrationally. This has been confirmed in patients who have a kind of brain damage in which the emotional system is not properly "hooked up," so to speak.
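
For reference, the "purely logical" rule cited here is the law of contraposition; in standard notation, the two statements are truth-functionally equivalent:

    \[
      (A \Rightarrow B) \;\Longleftrightarrow\; (\lnot B \Rightarrow \lnot A)
    \]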

FM: In Fritz Lang's classic 1927 science fiction movie Metropolis, the scientist Rotwang creates a very human robot that uses its intelligence and emotions to manipulate the hoi polloi of the future. Rotwang's creation was smart and sexy. Could future affective computers also have gender? Would artificial gender mean different emotional temperaments?

Rosalind Picard: People attribute gender to their bikes, cars, software agents, and just about anything. It's been shown that if you give a computer a female voice, people will make typical assumptions about it that are usually less favorable than the assumptions they make when it has a male voice. In other words, people give machines gender even when it really isn't there, so yes, future affective computers may also have gender. The temperaments can, of course, be chosen arbitrarily, at least in their initial "innate" settings. The popular Myers-Briggs Type Indicator, which clusters people's personalities along four axes, has one axis that indicates a gender bias: the "thinking-feeling" axis. Women as a group tend to cluster toward the "feeling" side of this axis, indicating a kind of preference for decisions that explicitly take feelings into account, while men cluster more toward the "thinking" side, indicating a kind of preference for decisions based on other information. Of course, any individual man or woman could lie anywhere along this axis. (I've read that most graduate students lie closer to the "thinking" side, male or female.) If the distribution of computers were chosen to follow that of people, then we could expect to find a similar bias among groups of gendered computers. But this is up to their designers.

FM: Do you sense gender differences in the kinds of research on affective computing?

Rosalind Picard: I have spent most of my professional career as one of the only women in a room of mostly men, so I am so used to ignoring gender differences that I really don't know much about them. Colleagues have told me that there are important differences that have been found among large groups of men and women, although these may not hold for a particular man or woman. Fanya Montalvo, a computer scientist who has been thinking about "emotional computing" much longer than I have, suggested to me that affective computing will be more feminine, and its technology more appealing to women. I would be delighted to help make technology more appealing to women, especially to girls. Making technology more feminine, however, has never been the primary goal of affective computing. Emotions are essential for both males and females to function properly.

FM: With emotional intelligence, would it be possible for computers to have a kind of mechanical religion? What would it be like?

Rosalind Picard: A "mechanical" religion might be an uninteresting collection of rules: speak with deference in the presence of your Maker, capitalize the Name of Thy Maker, etc. But with emotions, I think it would actually be possible for computers to have a much more intriguing situation: to FEEL that what they believe is true. I just wrote a dialogue for the MIT "God and Computers" class that might interest you. It involves two machines denying the existence of their maker (a sort of machine atheism, if you will, although, as a disclaimer, the analogy I make between the role of God and the role of computer scientists does not extend to the issue of the Maker's perfection). Barry Kort posted the dialogue at http://www.musenet.org/bkort/utnebury/RUR.html

FM: James Martin wrote in 1973, in his book Design of Man-Computer Dialogues, the following:

The difference in thinking talent - the computer being good for ultrafast sequential logic and the human being capable of slow but highly parallel and associative thinking - is the basis for cooperation between man and machine. It is because the capabilities of man and machine are so different that the computer has such potential. However it is equally important that systems designers and those managers and other persons who think about computer usage do not try to make the computer compete with man in areas in which man is superior. [emphasis by author]

Would affective computing reduce "the areas in which man is superior" over computers?

Rosalind Picard: Marvin Minsky has declared that machines will someday be so far superior to us that we will be lucky if they keep us around as household pets. I don't agree with Marvin, given that we humans will continue to develop and grow along with that which we make. But let's suppose for a moment that we do someday succeed in building machines that can perform all the "useful" functions humans perform, maybe even better than we can perform them. Does that mean we are worthless? Does it mean that the machines are superior to people? I think these are questions that transcend science, and we must beware of any scientist who tries to answer them without appeal to higher causes. I will be giving a plenary talk at a conference at MIT at the end of April that will address these issues. The conference will bring together theologians and computer scientists to address personhood and human dignity. I believe that human worth is far more than a measure of utilitarian or functional value; human life has inalienable, God-given dignity. We can give dignity to our creation, but what we can give does not begin to compare to what God gives.

FM: So if we could give "dignity" to our creations - like computers - how would you define it? Does that act of "dignity transfer" make us "deities" to computers?

Rosalind Picard: Defining "machine dignity" is the subject of my talk on April 30, and it is an immensely challenging one. Frankly, I am still debating the answers to these questions; they aren't ones we usually cover in our curriculum at MIT. I mentioned above the dialogue, "Toward Machines That Can Deny Their Maker," which I wrote for my presentation in the MIT God and Computers class (new last fall). It has a Part II, which I am still completing, that will address your second question. I can reveal my bias: I am certainly not interested in computer scientists being made into deities; our heads are already much too big. Nonetheless, we are in the role of maker, and we face decisions regarding giving machines not only emotions, but also morals and dignity.

FM: Lady Lovelace noted that Charles Babbage's Analytical Engine could never be creative. Would affective computers also be creative computers? How would you know?

Rosalind Picard: Some have thought that the entrance to the secret garden of creativity was randomness; others, quantum mechanics; others, emotions. In my book I talk about some links between emotion and creativity, because I think emotions contribute significantly to human creativity and could contribute to a new kind of machine creativity. The evidence seems to indicate that creativity involves emotions. Psychologists have even shown that being in a good mood can facilitate creativity, making it more likely that you can solve a problem requiring a creative solution. However, emotions are not enough. It is not clear what provides the extra spark.

FM: David Ambrose in his novel Mother of God, William Gibson in a recent episode of The X-Files of all things, and others have explored the possibilities of artificial, emotion-laden mega-programs operating on the Internet. Why are all of these emotional forms of artificial intelligence portrayed as incredibly dangerous? Would it be possible for a form of artificial intelligence to have sufficient emotional sense to work in non-violent ways?

Rosalind Picard: Violence and danger are guaranteed high-arousal response generators for most people. Sex is the other one. These are the easiest ways to arouse an audience, which of course is a very important aim. Scientists have shown that arousal is the most significant predictor of memory and attention. Permit me to poke a little fun by turning your question around, à la Reeves and Nass: is it possible for a form of human intelligence to have sufficient emotional sense to work in non-violent ways? Of course. But what about all the evil violent people on TV and in novels? Well ...



Copyright © 1998, First Monday