
Empathetic machines



Why would we want machines to have empathy? I’m sure there are many reasons, but this is mine: we can map out what a person is likely to experience in the future, given their present state. The map of their future branches at many decision points, and we are often left wondering: what will be the best course of action overall?


These kinds of decisions are really hard for humans, because each decision point multiplies the number of possible outcomes, so it becomes exponentially more difficult to imagine a future state after all those decisions. Just like Google Maps, a machine with empathy wouldn’t force you to take a given course, but by understanding how you would feel about the many situations you would encounter, it could bring your attention to the courses of action that align best with your values.
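To make the branching concrete, here is a minimal sketch of that idea. Everything in it is hypothetical: the decision points, the person’s values, and the predicted_feeling function, which stands in for a real learned model of someone’s emotional responses.

```python
from itertools import product

# Hypothetical decision points; each offers a few options.
decision_points = [
    ["take the new job", "stay put"],
    ["move cities", "commute"],
    ["rent", "buy"],
]

def predicted_feeling(person, choice):
    """Stand-in for a learned model of this person's emotional response."""
    return person["values"].get(choice, 0.0)

def rank_courses_of_action(person, decision_points, top_n=3):
    # Every combination of choices is one course of action, so the number
    # of courses grows exponentially with the number of decision points.
    courses = product(*decision_points)
    scored = [(sum(predicted_feeling(person, c) for c in course), course)
              for course in courses]
    scored.sort(reverse=True)
    return scored[:top_n]

person = {"values": {"take the new job": 0.8, "stay put": 0.1,
                     "move cities": -0.3, "commute": 0.2,
                     "rent": 0.0, "buy": 0.5}}

# Surface the courses of action that align best with this person's values.
for score, course in rank_courses_of_action(person, decision_points):
    print(f"{score:+.1f}  " + " -> ".join(course))
```

The machine doesn’t choose for you; like a navigation app, it just ranks routes and leaves the driving to you.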


Before we dive into thinking about how to engender goodwill, sympathy, and empathy in AI, let's start with basic definitions.


Goodwill is wishing someone well.


Sympathy is understanding how someone actually does feel or predicting how they would feel in a given situation.


Empathy is goodwill plus sympathy; it is understanding how someone feels, and wishing them well.


Imparting goodwill to a machine would entail asking it to optimize its actions with respect to measurements of wellness for some party or group. The hard part here is measuring wellness, not optimizing.
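As a toy illustration of that framing: given some wellness measure (entirely hypothetical here, and the genuinely hard part), goodwill reduces to picking whichever action scores best for the party being served.

```python
def wellness(person, action):
    """Stand-in for a real measurement of wellness -- the hard part."""
    return person["wellness_model"].get(action, 0.0)

def act_with_goodwill(person, candidate_actions):
    # Optimization itself is trivial once wellness can be measured.
    return max(candidate_actions, key=lambda a: wellness(person, a))

person = {"wellness_model": {"suggest a walk": 0.6, "send more ads": -0.4}}
print(act_with_goodwill(person, ["suggest a walk", "send more ads"]))
```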


The whole field of machine emotional intelligence (EI rather than AI, often called affective computing) is devoted to training machines to label human emotions based on biometric or verbal input, which is essentially the machine version of sympathy. The big difference is that the machine lacks the physical body that would be required to “feel” the emotion along with the subject; it can’t experience the quickened pulse, facial flushing, or sweaty palms that a human might experience while sympathizing.
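A toy sketch of what that labeling might look like is below. A real system would learn a classifier from labeled data; the features, thresholds, and labels here are invented purely for illustration.

```python
def label_emotion(pulse_bpm, skin_conductance, words):
    """Toy emotion labeler from biometric and verbal input (made-up rules)."""
    if pulse_bpm > 100 and skin_conductance > 0.8:
        return "fear" if "afraid" in words.lower() else "excitement"
    if pulse_bpm < 60:
        return "calm"
    return "neutral"

print(label_emotion(110, 0.9, "I'm so afraid of the results"))  # fear
```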


Seasoned healthcare professionals also lose many of these physiologic responses to sympathy, but that actually tends to make them better at helping people, not worse. So I think it’s okay that the robot’s palms aren’t going to sweat.


Predicting how someone would feel in a given situation takes more than biometric and verbal input. It requires parsing situations into discrete aspects of experience and understanding how people respond to or value those experiences. So the basic components here are:

  1. Ontologies/taxonomies that describe experiences

  2. Systems that detect the emotional meaning of biometric and verbal data

  3. Systems that learn the usual emotional responses to experiences

In a way, that brings us full circle to the measurements of wellness required to impart goodwill.
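Here is one way those three components might fit together. Every ontology entry, response value, and function name below is hypothetical; in practice each piece would be a substantial system of its own.

```python
ONTOLOGY = {  # component 1: taxonomy describing experiences
    "hospital stay": ["loss of autonomy", "physical discomfort", "social support"],
    "new job": ["uncertainty", "growth", "social support"],
}

def detect_emotion(biometric_signal, utterance):
    """Component 2 (stub): label current emotion from biometric/verbal data."""
    return "anxious" if "worried" in utterance.lower() else "neutral"

LEARNED_RESPONSES = {  # component 3: usual emotional responses, per person
    "loss of autonomy": -0.7, "physical discomfort": -0.5,
    "social support": +0.6, "uncertainty": -0.3, "growth": +0.8,
}

def predict_feeling(situation):
    """Predict how someone would feel by summing responses to each aspect."""
    return sum(LEARNED_RESPONSES.get(a, 0.0) for a in ONTOLOGY.get(situation, []))

print(predict_feeling("new job"))        # +1.1: likely positive overall
print(predict_feeling("hospital stay"))  # -0.6: likely negative overall
```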


By putting together the components of sympathy and goodwill, it should be possible to engender a very useful form of empathy in machines.
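Continuing the hypothetical sketches above, the composition is simple: sympathy predicts how each option would feel, and goodwill picks the option that serves the person best.

```python
def empathetic_recommendation(situations):
    """Sympathy scores each situation; goodwill picks the best one."""
    return max(situations, key=predict_feeling)

print(empathetic_recommendation(["hospital stay", "new job"]))  # "new job"
```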


As is so often the case these days, measurement and data/knowledge modeling are the hardest parts that remain. I think the algorithmic heavy lifting has already been done by a lot of very smart people.
