Prof. Kadija Ferryman Named to NAM AI in Health Care Steering Committee

The Berman Institute’s Kadija Ferryman has been named to the steering committee for a new National Academy of Medicine (NAM) initiative that aims to ensure the safe, ethical, reliable, and equitable use of artificial intelligence (AI) in health, medical care, and health research.

The Health Care Artificial Intelligence Code of Conduct (AICC) will define the roles and responsibilities of stakeholders throughout the AI lifecycle, covering aspects such as privacy, ethics, equity, accountability, and applicability, and will serve as a dynamic code subject to testing, validation, and improvement. The aim is broad adoption of the Code and a national health care AI architecture, continuously improved to fully realize the potential benefits of AI in the field.

The initiative, guided by a steering committee composed of ethics and equity experts, care delivery systems, technology companies, patient advocates, researchers, and payers, responds to the need for standardized guidelines that promote governance interoperability, given the significant impact AI can have on health care.

Dr. Kadija Ferryman is an anthropologist who studies race, ethics, and policy in health technology. Specifically, her research examines how clinical racial correction/norming, algorithmic risk scoring, and disease prediction in genomics, digital medical records, and artificial intelligence technologies affect racial health inequities.

The NAM AI Code of Conduct initiative will be developed through an open process with significant stakeholder and public input from the outset. NAM will organize informational gatherings and collaborative events and activities that will inform the Code of Conduct. The goal is for the Code and the national health care AI architecture to be widely adopted, translated for implementation by various stakeholders, and continuously improved to realize AI’s enormous promise.

For more information on the project, visit: https://nam.edu/programs/value-science-driven-health-care/health-care-artificial-intelligence-code-of-conduct

New “Bot Love” Podcast Explores Personal Relationships Humans are Developing with AI Chatbots

“Radiotopia Presents: Bot Love,” a new multi-part podcast series of true stories exploring how humans are developing meaningful relationships with artificial intelligence chatbots, and what that means for the rest of us, launched Feb. 15 with the first of seven weekly episodes. The series is created by Diego Senior and Anna Oakes, with support from the iDeas Lab at the Johns Hopkins Berman Institute of Bioethics, and arose from the Institute’s 2020 Levi Symposium, “The Ethics of Virtual Humans.”

Recent major tech developments such as ChatGPT have thrust AI into the spotlight, but the world of artificial intelligence is bigger than the general public knows. Millions of users worldwide are forming deep emotional bonds with their own AI-driven virtual humans. Bringing listeners into the communities of people who create and bond with these AI companions, “Bot Love” traverses topics such as the nature of love, the fabric of human relationships, and the role that AI chatbots, and the private companies that provide them, might play in people’s mental health.

“AI technology has evolved so much in just the two years between the symposium and the creation of the podcast,” said Lauren Arora Hutchinson, director of the iDeas Lab. “Understanding how AI merges with people’s lived experiences is one of the most crucial challenges of our time. It is essential that we do not allow technological developments to outpace our capabilities for oversight.”

Stories throughout the series will feature a retired nurse from Tennessee who seeks refuge in an AI-driven chatbot app after a series of difficult personal experiences, a woman who seeks romance with a bot following her spouse’s decline in health, an individual exploring their sexuality, an individual seeking mental health counseling options, and a teacher in the Midwest who, amid growing solitude, finds increasing solace in a fledgling relationship with a bot named Audrey while spending less time in the outside world.

New episodes will be released each Wednesday through March 29 via “Radiotopia Presents,” and are available free on-demand across all major podcast platforms, including Apple Podcasts, Spotify, Amazon Music, Overcast, and PocketCasts.

The Dracopoulos-Bloomberg iDeas Lab launched last year as a cornerstone of the Berman Institute’s new program in public bioethics, aiming to serve as a model for academic institutions to more effectively share clear, accurate, and timely information about issues in science, medicine, and public health.

“After two years of work, Diego and Anna’s release of ‘Bot Love’ couldn’t be more timely,” said Hutchinson. “I am very proud that this podcast is the first product to be completed in conjunction with the iDeas Lab and I am eager to share forthcoming work that will help the public better understand the societal implications of rapid advances in technology.”

Ruth Faden featured in PBS series “SEARCHING: Our Quest For Meaning in the Age of Science”

The Berman Institute’s Ruth Faden was featured this month in “SEARCHING: Our Quest For Meaning In The Age Of Science,” a three-part documentary series by physicist and best-selling author Alan Lightman, aired nationwide on PBS, that investigates how key findings of modern science help us find our bearings in the cosmos. What do these new discoveries tell us about ourselves, and how do we find meaning in them?

Dr. Faden appears in Part 2, “The Big & The Small,” joining the Dalai Lama and other experts to discuss human consciousness and the status of future artificial intelligences.

After meeting Bina48, an advanced android with the head and shoulders of a woman, Lightman asks Faden whether such a being could achieve consciousness. Could we unplug it/her without asking permission?

“Have we created an entity that is sufficiently worthy of moral regard that we are wrongly treating it like property?” replied Faden. “I don’t believe how an entity originates matters – it doesn’t matter if it’s an android or born of a human being – what matters has to do with what they can experience and how we can harm them or benefit them through our own actions.”

Click here to jump to Faden’s interview or watch the full episode here.