JHU Exploration of Practical Ethics

Practical ethics is an interdisciplinary field of study that takes on ethical issues arising in professions and scholarly disciplines, and within institutions and society.

For almost a quarter-century, the Berman Institute of Bioethics has led national and international efforts to make sense of and find answers to new ethical issues arising from rapid gains in health care, public health, and the biomedical sciences.

But advances in science and technology increasingly impact aspects of our lives that go far beyond bioethics’ traditional purview. In response, Berman Institute scholars and their colleagues across the university have begun exploring contemporary ethical issues that cross academic disciplinary lines and arise in a wide range of real-world circumstances. To support these efforts, Johns Hopkins created the Exploration of Practical Ethics program, which provides grants for faculty to undertake research in interdisciplinary fields of ethics.

The program awarded nine grants in 2016 to projects examining issues relating to criminal justice, higher education, economics, and environmentalism, among others. Last year, the program held another competitive call for applications and disbursed $350,000 to seven new cross-disciplinary projects. On Nov. 14, the researchers held a symposium to share their work.

Current Awardees

The Law of Unintended Consequences: Will the Implementation of California Senate Bill 27 Impact Animal Health and Well-Being?

We propose to analyze the ethical trade-offs between California Senate Bill (SB) 27, state legislation that limits antibiotic use in food-producing animals to benefit public health, and the potential costs of this legislation in terms of animal health and welfare. The policy will go into effect on January 1, 2018, allowing us to leverage this natural experiment to determine shifts that occur around the time of implementation. We will interview poultry and dairy farmers and other stakeholders, evaluate animal health and welfare outcomes, and conduct an ethical analysis to examine the trade-offs. To aid in the development of future policies, we will recommend mitigation strategies and produce ethical checklists as tools for decision-makers.

Co-Principal Investigators

Meghan F. Davis, Assistant Professor, Department of Environmental Health & Engineering, Bloomberg School of Public Health and Department of Molecular and Comparative Pathobiology, School of Medicine

Jessica Fanzo, Bloomberg Distinguished Associate Professor of Ethics and Global Food & Agriculture, Nitze School of Advanced International Studies; Berman Institute of Bioethics; Bloomberg School of Public Health

Co-Investigators 

Christopher Heaney, Associate Professor, Department of Environmental Health & Engineering, Bloomberg School of Public Health

Keeve Nachman, Assistant Professor, Department of Environmental Health & Engineering; Director, Food Production & Public Health Program, Center for a Livable Future, Bloomberg School of Public Health; Co-Director, Risk Sciences and Public Policy Institute

Sara Y. Tartof, Research Scientist, Kaiser Permanente Southern California

Joan Casey, Postdoctoral Scholar, Division of Environmental Health Sciences, School of Public Health, University of California, Berkeley

Ethical Robotics: Implementing Value-Driven Behavior in Autonomous Systems

Robots will soon pervade our daily lives as surrogates, assistants, and companions. As we grant them greater autonomy, it is imperative that they are endowed with ethical reasoning commensurate with their ability to both benefit and harm humanity. In 1942, Isaac Asimov stipulated his Three Laws of Robotics to govern robot behavior. Implementing such laws requires an actionable value system that can be analyzed, judged and modified by humans. The proposed project brings together ethics and robotics experts from the JHU Berman Institute of Bioethics and JHU/APL to (1) develop an ethical framework for robots, (2) implement the framework by extending existing robot capabilities, and (3) assess the framework’s impact on robot behavior using JHU/APL’s Robo Sally, a hyper-dexterous robot with Modular Prosthetic Limbs and human-like manipulation capabilities. The goal is to derive design guidelines and best practices to implement practical ethics in next-generation robotic systems.

Co-Principal Investigators 

David Handelman, Senior Roboticist, Applied Physics Laboratory

Ariel Greenberg, Senior Research Scientist, Applied Physics Laboratory

Bruce Swett, Senior Neuroscience Researcher, Applied Physics Laboratory

Co-Investigators 

Debra Mathews, Assistant Director of Science Programs, Berman Institute of Bioethics; Associate Professor, Department of Pediatrics, School of Medicine

Travis Rieder, Research Scholar and Assistant Director for Education Initiatives, Berman Institute of Bioethics

Are We Asking the Right Questions about the Ethics of Autonomous Vehicle Testing?

The current development of autonomous vehicles (AVs) appears to be leading us to a wonderful future of effortless mobility. But what if there are unanticipated, negative consequences? We believe that a wait-and-see approach is irresponsible, especially since some consequences can be irreversible. We are particularly concerned about pathways of testing and deployment of AVs that could lead to widening disparities and a declining quality of life for certain segments of society. In our proposed work, we will begin with a systematic exploration of possible negative outcomes and will engage multiple stakeholders, including those who may be most impacted by these outcomes. We will then develop recommendations for the sponsors and implementers of AV trials (testing programs) that would enable stakeholders to voice their concerns and influence the design of these trials. With these research experiences, we will be well positioned for future work in dissemination and implementation of our recommendations.

Co-Principal Investigators 

Johnathon Ehsani, Assistant Professor, Department of Health Policy & Management, Center for Injury Research & Policy, Bloomberg School of Public Health

Tak Igusa, Professor, Department of Civil Engineering, Whiting School of Engineering

Co-Investigator                                                                                         

Govind Persad, Assistant Professor, Department of Health Policy & Management, Bloomberg School of Public Health; Berman Institute of Bioethics

Housing Our Story: Towards Archival Justice for Black Baltimore

“Housing Our Story” engages in the practical ethics of building an archive about African-American staff and contract workers at the Johns Hopkins University. Even with many librarians making new commitments to diversity and social responsibility, few have considered the ethical imperatives raised by structural racism, archival silences, and marginalized populations’ failed efforts to resist erasure. While archivists nobly aim to preserve the memory of the world, they often, in practice, institutionalize the choices of the powerful. Archivists and their benefactors get to determine what belongs in special collections, where to locate archives, how to organize them, and even what counts as an archival source. Because archivists have their own biases and must contend with the realities of a given institution’s capacities, their choices ultimately result in silences, silences that, not infrequently, infringe on Black people’s ability to form social memory and history. We aim to redress this problem.

Co-Principal Investigators 

Jennifer P. Kingsley, Senior Lecturer and Assistant Director, Program in Museums and Society, Krieger School of Arts and Sciences

Shani Mott, Lecturer, Center for Africana Studies, Krieger School of Arts and Sciences

N.D.B. Connolly, Herbert Baxter Adams Associate Professor, Department of History, Krieger School of Arts and Sciences

The Ethics of Preparedness in Humanitarian Disasters

What are the “everyday” ethical issues that affect war-adjacent professionals such as humanitarians, journalists, and scholars on the ground? How do individuals in these fields resolve them? We examine: (1) the training that professionals such as researchers, journalists, and humanitarians working adjacent to war receive; (2) how these individuals’ understandings of professional conduct interact with local populations’ concepts of ethical and moral behavior; and (3) how professionals’ protocols and practices subsequently evolve, or do not, in the field. Focusing on the humanitarian crises that conflicts in Syria and Iraq have produced, this project uses multi-sited, immersive fieldwork with foreign and local professionals in Iraqi Kurdistan and Lesvos, Greece, to identify communities of practice, indigenous innovations, and emergent ethical tensions. Subsequent workshops in each field site bring together scholars, practitioners, and community representatives to identify key ethical issues and discuss potential cross-field policy interventions.

Principal Investigator:

Sarah E. Parkinson, Aronson Assistant Professor of Political Science and International Studies, Krieger School of Arts and Sciences and Nitze School of Advanced International Studies

Co-Investigator: 

Valerie De Koeijer, Graduate Student, Department of Political Science, Krieger School of Arts and Sciences

Determining the Number of Refugees to be Resettled in the United States: An Ethical and Human Rights Analysis

The controversial Executive Orders by President Donald Trump on travel bans and refugees, which halved the number of refugees proposed for admission in 2017 compared to the Obama Administration’s determination, raise policy and ethical questions about the criteria used to determine the number of refugees admitted to the United States. This project will undertake a literature review related to ethics, human rights, policy, and refugee resettlement; conduct qualitative interviews with key informants; identify ethics and human rights frameworks relevant to the question; and seek to create a framework to help guide decisions on the number of refugees to be resettled. We plan to seek feedback on our proposals and then publish them and make recommendations to policy-makers and the public based on our analysis.

Principal Investigator:

Leonard Rubenstein, Senior Scientist, Department of Epidemiology, Bloomberg School of Public Health; Berman Institute of Bioethics

Co-Investigators 

Govind Persad, Assistant Professor, Department of Health Policy & Management, Bloomberg School of Public Health; Berman Institute of Bioethics

Daniel Serwer, Professor, Nitze School of Advanced International Studies

Paul Spiegel, Professor of the Practice, Department of International Health, Bloomberg School of Public Health; Johns Hopkins Center for Humanitarian Health

Rachel Fabi, Assistant Professor of Public Health and Preventive Medicine, SUNY Upstate Medical University

Conducting Research on Commercially-Owned Online Spaces

People are increasingly spending time in online spaces that are created by commercial interests (e.g. retail sites, brand-specific web pages, commercially-owned social spaces). It is critical to understand the potential impact of such spaces on the people who enter them and engage in activity within. Online spaces created by commercial entities are often restricted to certain types of people for specific purposes that are in line with the interests of the entity who created the space rather than the public good. Such restrictions potentially preclude important research that is routine within ‘real world’ commercial spaces for the promotion and protection of public health. In this proposal, we seek to understand and elucidate right and wrong action in research on consequential, commercially-owned online spaces, where terms and conditions currently and frequently prohibit entry for research purposes.

Principal Investigator:

Katherine Clegg Smith, Professor, Department of Health, Behavior and Society, Bloomberg School of Public Health

Co-Investigators 

Joanna Cohen, Bloomberg Professor of Disease Prevention and Director of the Institute for Global Tobacco Control, Bloomberg School of Public Health

Meghan Moran, Assistant Professor, Department of Health, Behavior and Society, Bloomberg School of Public Health

Mark Dredze, Associate Professor, Computer Science, Whiting School of Engineering

Errol L. Fields, Assistant Professor, Division of General Pediatrics and Adolescent Medicine, Department of Pediatrics, School of Medicine

Practical Ethics Blog

Ethical Robots
Nov. 2, 2018

By Amelia Hood

Isaac Asimov’s Three Laws of Robotics were first published in the short story collection I, Robot in 1950. Asimov was a biochemist and a prolific author of science fiction, and the type of anthropomorphized, “positronic” robots he featured in his stories existed well in the future (within the stories, the Three Laws are said to have been published in 2058). His Three Laws, however, went on to inspire many people to become roboticists and have influenced the ways in which roboticists think about ethics in their field.

Roboticists at the Applied Physics Laboratory teamed up with ethicists from the Berman Institute to begin the work of creating robots that can behave according to an ethical code inspired by the Three Laws, and to think about what that might look like when programming them to act. In Ethical Robotics: Implementing Value-Driven Behavior in Autonomous Systems, they are focusing on the first clause of the First Law: a robot must not harm a human.


The investigators, David Handelman, Ariel Greenberg, Bruce Swett, and Julie Marble, along with ethicists Travis Rieder and Debra Mathews, are working with semi-autonomous systems: machines that, unlike a chainsaw, are not totally controlled by humans, but are programmed with some degree of freedom to act on their own. Self-driving cars are an example of semi-autonomous machines that have garnered much recent debate. The roboticists on this project are working with machines that have “arms” and “hands” that are used to assist with surgery and physical therapy, or to defuse bombs and perform battlefield triage.

The team’s first step to embed a moral code into a semi-autonomous system is to ensure that the robot can perceive the moral salience of features in its surroundings. Many robots can “see” and categorize things in their surroundings, usually by analyzing pixels from a camera, or using motion or heat sensors. Systems can be trained to distinguish between living and non-living things, can identify and open doors, and can perform intricate tasks like surgery or bomb disposal with the aid of humans. In order to follow the First Law, however, the robot must not only accurately perceive what is in its surroundings, but also be able to tell whether something it is seeing is capable of being harmed.

The investigators thus are attempting to teach the robot to ‘see’ which objects in its view have minds. Having a mind is a condition, they argue, of being capable of suffering, and therefore a prerequisite of being subject to harm. This includes, of course, physical harm, but also includes psychological, financial, cultural, or other types of harm.

From here, the investigators put forth a framework that distinguishes three types of injury a robot might cause: (1) harm, injury to an entity with moral status (a being with a mind); (2) damage, injury to an entity without moral status (an object); and (3) harm as a consequence of damage, in which damage inflicted on an object causes harm to a being with moral status (e.g., destroying a child’s teddy bear might cause emotional harm).

Embedding these classifications will take place in addition to, or on top of, the robot’s existing perception capabilities. The roboticists have conceived of ‘moral vision’ in a semi-autonomous robot as classifying objects into those that have minds and those that don’t. It will then also assess the relationships between non-minded objects and minded ones. For example, an object could be used to cause harm, like a blade, or damage to an object could cause non-physical harm, like a ruined teddy bear.
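The three-way classification described above amounts to a simple decision rule over the robot’s perceived world. The sketch below is purely illustrative, not the project’s actual implementation; the `Entity` structure and its fields are assumptions made for the example:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Entity:
    """A perceived object, as a hypothetical 'moral vision' layer might represent it."""
    name: str
    has_mind: bool                                   # moral status: capable of suffering
    valued_by: List["Entity"] = field(default_factory=list)  # minded beings attached to this object

def classify_injury(target: Entity) -> str:
    """Classify an injury per the three-way framework: harm, damage, or harm via damage."""
    if target.has_mind:
        return "harm"            # injury to an entity with moral status
    if target.valued_by:
        return "harm via damage" # damage to an object that harms a minded being
    return "damage"              # injury to an object with no moral fallout

child = Entity("child", has_mind=True)
teddy = Entity("teddy bear", has_mind=False, valued_by=[child])
crate = Entity("shipping crate", has_mind=False)

for entity in (child, teddy, crate):
    print(entity.name, "->", classify_injury(entity))
```

A real system would also need the relational assessments the investigators describe, such as whether a non-minded object (a blade, say) could be used to cause harm; here that richer context is reduced to a single `valued_by` link.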

It is important to consider the ways in which our own, or roboticists’ own, moral vision might be imperfect. Humans have been shown to have more empathy for non-human objects that look like humans. We also tend to anthropomorphize, attributing human characteristics to non-human objects, as when we call our computers ‘stupid.’ And we dehumanize: a customer service representative becomes just a voice on the line who speaks for a corporation. These tendencies can clearly affect how we act in certain situations. If we program a robot to share our biases, these tendencies will become encoded and affect the robot’s actions.

This is just a first step. Perceiving “mindedness” and classifying moral salience inform how a robot will act, or learn to act, in certain contexts. The second step in creating a more ethical robot is to program into the robot what philosophers sometimes call ‘deontic constraints,’ which would limit the actions it is permitted to take by virtue of the possible harms it could cause.

Online Spaces
Oct. 15, 2018

As the internet becomes increasingly essential to everyday life, and especially to commerce, research on the internet is essential as well. IRBs, lawyers, and researchers are working out the best ways to interact with, recruit, and do research with internet users, work that falls under human subjects research. But the investigators of Conducting Research on Commercially-Owned Online Spaces, a 2017-2018 Practical Ethics awardee, aim to clarify the best way to do research on content important to the health of the public that is created and owned by private, and sometimes powerful, entities.


You are a public health researcher who is interested in studying advertising and marketing messages. You’ve noticed that QR codes and URLs are becoming more and more popular on packaging and in advertising. As part of your research, you’d like to follow these QR codes and visit the websites of these companies to study the messages and content on the sites. When you arrive at the site, however, you are asked to enter some personal information: your name, address, birthday, and whether or not you regularly use the product. On some sites, your information will be verified against government records before you are allowed to enter the site.


In addition, most websites have a link at the bottom of the page that leads to a legal document outlining the rules for accessing and interacting with the site. The terms of service on many sites are quite restrictive; most prohibit you from even taking and sharing a screenshot, and they rarely indicate whether the site and its contents can be subject to research. Without explicit reference to research, we don’t know what a researcher is able to do in these heavily and legally restricted spaces.


There is a long tradition of conducting surveillance research in commercial spaces to protect the public’s health and interests. Research in privately owned brick-and-mortar spaces is commonplace and is bound by both implicit and explicit rules about who is allowed in the space and for what purpose. The internet is a world being built with great speed, and its ‘space’ is being reserved to serve companies’ objectives. These companies are creating barriers to understanding what is happening in these strategic online spaces. How can such barriers be overcome to protect the public’s health?

Dr. Katherine Clegg Smith and her collaborators are looking to clearly understand and state the issues at hand—what is the power and scope of a website’s Terms of Service? What must researchers know before conducting descriptive research online? With a better understanding of these issues, researchers and research institutions can work towards finding the best way to conduct research on commercially-owned online spaces.

The Number of Refugees to be Resettled in the U.S.
Oct. 8, 2018


This blog post is based on a conversation with Rachel Fabi, PhD, a co-investigator on the Practical Ethics project Determining the Number of Refugees to be Resettled in the United States: An Ethical and Human Rights Analysis.

By Amelia Hood

One year ago, President Trump dramatically reduced the number of refugees that would be allowed to enter the United States in 2018. Following the unprecedented 110,000-person ceiling set by President Obama for 2017, Trump lowered the number to 45,000. This marks the first time in history that the United States is not the leader in refugee resettlement: the first year it has accepted fewer than half of the refugees resettled worldwide. It comes at a time when there are more refugees and internally displaced people than ever before (about 25 million). President Trump recently announced the ceiling for his second year in office: 30,000.

The process of determining the number of refugees to be resettled in the United States neither begins nor ends with the President. The UNHCR, the United Nations’ refugee agency, determines how many refugees there are in the world. The United States Department of State takes that number into consideration and each year suggests to the President a number of refugees to accept. Once the President signs off, the number is then approved by Congress.

Investigators on the project Determining the Number of Refugees to be Resettled in the United States: An Ethical and Human Rights Analysis were awarded funding by the JHU Exploration of Practical Ethics Program to analyze both policy and ethical questions about the criteria used to determine the number of refugees admitted to the United States each year. The investigators, led by Leonard Rubenstein, JD, interviewed various stakeholders and experts who are deeply familiar with the process: past and present members of the State Department, congressional staff, think tanks on both sides of the political aisle, as well as those involved with the actual resettlement process including NGOs and domestic and international refugee bureaus.

In these interviews, respondents cited many prudential reasons for resettling refugees. Taking in refugees can help to advance US foreign policy goals. It also helps to maintain the US’ influence and stature as a world leader. Some cited the costs and benefits of resettling refugees, who tend to make a net-positive contribution to the US economy.

Respondents also gave moral reasons for resettling refugees. Operating within the framework of human rights, powerful countries, like the United States, are obligated to help those whose rights are endangered or violated. Others cited the moral importance of diversity, and America’s identity as a nation of immigrants.

One major finding of this project is that both categories of reasons support resettling a higher number of refugees than the ceiling President Trump has set. Moral obligations grounded in humanitarianism, a duty to repair, and the political legitimacy of the international order can inform refugee policy and provide a sound moral as well as policy basis for determining the number of refugees to be admitted, and they would result in a far higher number of refugees resettled in the United States.

Through scholarly publications, the investigators hope to show both the diversity of values at work in the process of determining this number and that agreement is possible. They also hope to guide the thinking motivating the process itself, as well as the ethical and practical considerations supporting it, through the creation of a white paper and ethical framework.

Drs. Rubenstein, Fabi, and the rest of the team will present their findings and talk more about the ethics of determining refugee resettlement at the Practical Ethics Symposium on November 14 in Feinstone Hall at the Bloomberg School of Public Health. The Symposium will feature presentations from all of the 2018 awardees of the JHU Exploration of Practical Ethics Program.

Project Leadership and Staff

Jeffrey Kahn, PhD, MPH
Andreas C. Dracopoulos Director; Robert Henry Levi and Ryda Hecht Levi Professor of Bioethics and Public Policy
Ruth R. Faden, PhD, MPH
Berman Institute Founder; Philip Franklin Wagley Professor of Biomedical Ethics
Alan Regenberg, MBE
Director of Outreach & Research Support
Amelia Hood, MA
Research Associate