affective computing
n. Computer technology that uses biometric sensors to detect physical characteristics that relate to moods and emotions; the computer simulation of moods and emotions.
Examples
2003
Imagine if a computer could sense if a user was having trouble with an application and intuitively offer advice. The irritating paperclip that embodies Microsoft's Office Assistant could be a thing of the past. The software industry has tried to make applications more intelligent but many fall far short of being genuinely useful. However, this could be about to change.

Kate Hone, a lecturer in the department of information systems and computing at Brunel University, is the principal investigator in a project that aims to evaluate the potential for emotion-recognition technology to improve the quality of human-computer interaction. Her study is part of a larger area of computer science called affective computing, which examines how computers affect and can influence human emotion.
—Cliff Saran, “Letting your computer know how you feel,” Computer Weekly, June 24, 2003
2003
For the last decade, the UC San Diego psychologist has traveled a quixotic path in search of the next evolutionary leap in computer development: training machines to comprehend the deeply human mystery of what we feel.

Movellan's devices now can identify hundreds of ways faces show joy, anger, sadness and other emotions. The computers, which operate by recognizing patterns learned from a multitude of images, eventually will be able to detect millions of expressions.

Scanning dozens of points on a face, the devices see everything, including what people may try to hide: an instant of confusion or a fleeting grimace that betrays a cheerful front.

Such computers are the beginnings of a radical movement known as "affective computing." The goal is to reshape the very notion of machine intelligence.
—Charles Piller, “A Human Touch for Machines,” Los Angeles Times, May 7, 2003
1996 (earliest)
But whether the emotional computers of the future are based on theoretical models of the body, brain or mind (or a combination of all three), one question remains: are we prepared to build a machine and give up control over it? Are we willing to give machines the freedom to make value-based, emotional decisions?

For Picard, the path to an answer is strewn with other questions, equally challenging but more specific. For example, should machines be given emotional self-awareness? Should they be given emotional skills beyond the power of humans? Should they be allowed to feign and hide emotions in the way that humans do? The answers are by no means obvious: a machine tutoring a child with brain damage may feel exasperation, but showing the exasperation might interfere with the teaching process by upsetting the child.

Picard hesitates to be dogmatic, but if her vision of "affective computing" does come to pass, she believes one rule may eventually become necessary: no emotions without ethics. After all, if HAL hadn't valued the success of the mission above the lives of his crewmates, the tragedy could never have happened. The horror of 2001 isn't that HAL had emotions, says Picard. "It's that he didn't have the intelligence and ethics to handle them."
—“You're wrong, Mr Spock,” New Scientist, April 27, 1996