Key Definitions

For the purpose of this Symposium, we are adopting the following definitions.

What do we mean by Virtual Humans?
Our working definition of a virtual human is any human-like, non-physical entity that has conversational capabilities. This includes computer-mediated conversational technologies (e.g., Replika), objects with a voice embedded in them (e.g., Not the Only One), and interactive projections of specific people (e.g., virtual experts). Non-conversational companion robots and emotive objects (e.g., robotic seals for the elderly) and other robots (e.g., surgical arms and military machines) are excluded from this definition.

What do we mean by Health?
We are adopting a broad definition of health which is closely captured by the World Health Organization definition, namely, “health is a state of complete physical, mental, and social well-being and not merely the absence of disease or infirmity.”

What do we mean by Social Support?
Social support encompasses a wide spectrum of actions, words, and goods offered by one entity to improve the well-being of another entity. Traditionally, social support has been seen as an exchange between humans. For this meeting, we assert that social support may be offered from a virtual entity to a human and vice versa. Three overarching support types are likely relevant for this meeting: (1) informational support to provide new perspectives, advice, and reminders; (2) emotional support to impact feelings, including perceptions of self and others; and (3) companionship support to foster a sense of belonging, typically by spending time together.

Framing Questions

The following questions will be referred to at different points during the Symposium to guide discussion about the ethical dimensions of virtual humans for health and social support. They are not meant to be an exhaustive list of questions or topics for discussion.

  1. Defining and Characterizing
  • What characteristics make a virtual human more or less human?
  • What characteristics make a virtual human more or less capable of serving health or medical/clinical needs?
  • What other dimensions are relevant to the ethical characterization of virtual humans for health and social support?
  2. Contextualizing
  • Why virtual humans? What forces are contributing to the perceived need for technologies that engage as virtual humans for health and social support?
  • How are virtual humans contributing to or detracting from other (more traditional) ways of addressing health and social support needs?
  • How do the current states of technology and potential technological limitations constrain virtual humans and how human-like they can become in the near term? What are the most reasonable long-term prospects for the development of the technology?
  3. Implications for users
  • In what ways could/do virtual humans impact users (e.g., socially, clinically, psychologically)?
  • How does the inherent diversity of users in terms of access, experiences and representation influence our characterization of these technologies and their risks/benefits?
  • What are the implications (e.g., ethical, legal, professional and scientific) of the extent to which a virtual human is intended for clinical vs. non-clinical purposes?
  • What are the implications (e.g., ethical, legal, professional and scientific) of the extent to which a virtual human is more or less human-like?
  4. Implications for non-users
  • In what ways do virtual humans impact non-user humans and other sectors/priorities (i.e., collateral harm)?
  • In what ways do virtual humans inform debates about the need to incorporate social dimensions into medicine (e.g., the extent to which medical practitioners can/should address social aspects of a patient’s life)?

  5. Planning for the future of virtual humans
  • What can/should be subject to the norms of ethics and etiquette, and what can/should be regulated by law? What are the global implications (e.g., is international regulation feasible)?
  • What ethics principles should guide the development and application of virtual humans?
  • What are best practices to protect users of virtual humans and their data in everyday, research, clinical contexts (e.g., privacy, confidentiality, transparency, bias, autonomy, informed consent, data governance)?
  • How should we measure the benefits and harms of virtual humans?