
Would you rather...

Updated: Jun 21, 2022

... run a marathon through Arizona in July with nothing but honey to drink OR spend a week stuck in a broken elevator with Lord Voldemort?

To design AI systems that support our well-being, we must first understand how people value one medical event or circumstance compared to another.



When I was in college, we would play a silly game on road trips called "Would you rather...?" One person would think of two horrible alternatives, and everyone would say which they would choose and why. For example:

Would you rather... run a marathon through Arizona in July with nothing but honey to drink OR spend a week stuck in a broken elevator with Lord Voldemort?

It was always fun to hear why my friends made various choices, and the ensuing debates ranged from completely absurd to surprisingly logical.


Let's try another scenario:

Would you rather... give yourself injections in the stomach every day for 30 years OR lose your left foot at the age of 66?

There's nothing funny about that scenario. What is funny, though, is that Facebook's (now Meta's) algorithms can predict with scary precision which stories will keep us scrolling, while our medical information systems have no idea what people value.


This is a fundamental problem if we want to design AI systems that promote our well-being. How would such systems even define well-being?


Fortunately, this is a tractable problem, and one we intend to address at Medical Intelligence One. Our approach will likely take advantage of "would you rather..." scenarios, whether explicitly asked or passively observed in the choices people make. As that network of values becomes better understood for the general population, we expect sub-populations of patients to emerge with characteristic patterns of values.
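
For the technically curious, here is a minimal, purely illustrative sketch of that idea in Python. It is not our actual method: the health-state names are made up, the respondents and their answers are simulated, and off-the-shelf k-means clustering stands in for whatever modeling we ultimately use. It simply shows how pairwise "would you rather...?" answers could be scored per respondent and then clustered to surface sub-populations with characteristic value patterns.

```python
# Toy illustration only: score simulated "would you rather...?" answers,
# then cluster respondents to surface sub-populations of values.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical health states a respondent might be asked to compare.
STATES = ["daily injections", "foot amputation", "week in hospital", "chronic pain"]

rng = np.random.default_rng(0)

def simulate_answers(n_respondents=200, n_questions=30):
    """Simulate pairwise 'which would you rather avoid?' answers.

    Returns a (n_respondents x n_states) matrix of Borda-style scores:
    each state's score is the number of times the respondent chose to
    avoid it, i.e., how negatively that respondent values it.
    """
    n = len(STATES)
    # Two latent value profiles stand in for real sub-populations.
    profiles = np.array([[0.9, 0.2, 0.5, 0.7],
                         [0.3, 0.8, 0.4, 0.6]])
    scores = np.zeros((n_respondents, n))
    for r in range(n_respondents):
        weights = profiles[r % 2] + rng.normal(0, 0.1, n)
        for _ in range(n_questions):
            a, b = rng.choice(n, size=2, replace=False)
            avoided = a if weights[a] > weights[b] else b
            scores[r, avoided] += 1
    return scores

answers = simulate_answers()
# Cluster respondents; each centroid is a characteristic pattern of values.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(answers)
for c, center in enumerate(kmeans.cluster_centers_):
    ranked = [STATES[i] for i in np.argsort(-center)]
    print(f"Sub-population {c}: most-avoided to least-avoided: {ranked}")
```

Run on this simulated data, the clustering recovers the two planted value profiles as distinct sub-populations, which is exactly the kind of structure we hope to find in real preference data.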


For example, childbirth may be viewed as a very positive thing or a very negative thing, depending on the person. How a person values childbirth may also predict how they value other clinical events or circumstances. Further, these values can change significantly over time: someone who desires to become pregnant at the age of 24 would likely feel very different about the prospect of pregnancy at 74.


Understanding such values is an essential component of common sense and even compassion. Building AI with an awareness of how humans value things will be critical to designing systems that support our well-being.


While we undoubtedly have a lot to learn, we have begun work on the problem, and we are excited to see where it leads.


So, now I have a question for you:

Would you rather... have AI involved in your medical care that understands human values, OR AI that is blind to them?

Let us know what you think in the comments below!



