
Ethics

As with any other technology, the implementation of Mantis in society raises concerns that must be addressed. One way of addressing these concerns is an ethical analysis of some of the questions posed by our technology. To perform such an analysis, we talked with several experts whose input served as inspiration for our own analysis, presented in the figure and texts below. These experts were Prof. Dr. Marcel Verweij (Head of the Philosophy Department at Wageningen University & Research), Dr. André Krom (Ethics Expert at the RIVM, the Dutch National Institute for Public Health and the Environment), Dr. Ghislaine van Thiel (Medical Ethics teacher at University Medical Center Utrecht), Prof. Dr. Tsjalling Swierstra (Professor of Philosophy at Maastricht University), Dr. Ree Meertens (Associate Professor of Health Promotion at Maastricht University) and Jaco Westra (Coordinator of Synthetic Biology at the RIVM). The conclusions of this research were implemented in the Roll-out Plan.

Risk Perception

Here you will find an analysis of safety and risk, and of the important subjective component of risk. We also considered how Mantis might be perceived by society, taking into account different factors that shape risk perception (Figure 1).

What is safety? In our case, safety is the condition of a technology under which the user, the environment and any other individual are free from the risk of being harmed. But how can one determine how safe a technology is? In other words, how can one determine the risk associated with a technology? There are two different approaches.

On the one hand, we can use a risk analysis, which studies the known risks and their probability under specific circumstances. On the other hand, risk perception is a more subjective approach, based on the perception of individuals and the factors influencing this perception. Risk analysis and risk perception are related but show two different aspects of risk: objectivity and subjectivity, respectively. Another question we can ask is: how safe must a technology be in order to be used? To answer it, we need to consider the benefit the technology offers as well as the risk it poses. A population will judge whether to use a technology by weighing the benefits it offers against the risks it poses. This balance between risk and benefit is reflected in legislation through limits of acceptability. Therefore, a technology does not have to be completely free of risk to be used by the population, but it does have to be acceptably safe.
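To make this weighing concrete, here is a minimal sketch in Python of such an acceptability judgment. All names, numbers and thresholds are hypothetical illustrations of the reasoning above, not values taken from our analysis or from any legislation.

    # Toy model of the risk-benefit judgment described above.
    # All values are hypothetical; real acceptability limits are set by law.

    def is_acceptably_safe(expected_benefit, probability_of_harm,
                           severity_of_harm, legal_risk_limit):
        """A technology is acceptably safe if its expected harm stays under
        the legal acceptability limit and is outweighed by its benefit."""
        expected_harm = probability_of_harm * severity_of_harm
        return expected_harm <= legal_risk_limit and expected_benefit > expected_harm

    # Hypothetical example: a large benefit and a small, bounded risk.
    print(is_acceptably_safe(expected_benefit=100.0, probability_of_harm=0.001,
                             severity_of_harm=50.0, legal_risk_limit=1.0))  # True

Note that, as the text states, the technology is not required to be risk-free: the sketch accepts any non-zero expected harm as long as it stays within the acceptability limit and is outweighed by the benefit.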

There are many factors affecting risk perception (a summary of the factors affecting our device is depicted in Figure 1). However, there is still no consensus on whether risk perception is driven mainly by social, personal or technological factors. In our opinion, two related factors are important: moral acceptability and the attitude towards the technology. These are linked directly to the subjective perception of the technology itself and not only to the perception of its risk. The differences in moral acceptability and attitude among populations highlight an important detail: people have different perceptions of reality. This is an important consideration, as the risk measured in a risk analysis is considered by the scientific community to be the "real" risk.

This clash of perceived realities could be observed in the battle against the Ebola outbreak: while the health authorities insisted that the corpses had to be burned, the local families saw this as an offense to their gods that would prevent the deceased from reaching the afterlife. Consequently, distrust of the health authorities arose, which made control of the disease more difficult. It is important to keep in mind that, for the local families, the risk of not seeing their loved ones in the afterlife outweighed the risk of getting infected (read more here). We can therefore conclude that talking about "real risk" can be misleading when debating with the public; "analysis of all the known risks that can be anticipated" could be a better alternative. Of course, there are also unknown risks for any technology, which cannot be analyzed or measured. These have two main sources: risks that arise from intentional use of the technology for harmful purposes, and risks that arise from unexpected accidents or secondary effects.

Thanks to the interview with André Krom (RIVM), we became aware that risk perception is affected by a number of factors. We thought that a good way of predicting the public's reception of our device would be to look into these factors and estimate how Mantis scores on each of them. After a literature review [1,2], a list of factors was created. Most of the factors are based on two opposite concepts, one of which suggests safety while the other suggests risk. We placed Mantis nearer one term or the other based on our judgment. The result can be observed in Figure 1.

Figure 1: Expected risk perception of the public towards Mantis, taking into account a number of factors affecting risk perception. On the right are the characteristics that would lead Mantis to be perceived as safe, and on the left those that would lead it to be perceived as risky. Below each factor is the statement on which we based our decision. Note that this is only a guide: it represents neither all factors nor the relative importance of each of them.
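Purely as an illustration of how such judgment-based placements could be tallied, the following Python sketch scores each factor as +1 (leans safe) or -1 (leans risky) and sums the scores. The factor names and placements are hypothetical stand-ins rather than the exact set used in Figure 1, and, like the figure, the sketch ignores the relative importance of the factors.

    # Toy tally of risk-perception factors: +1 leans "safe", -1 leans "risky".
    # Factor names and placements are hypothetical stand-ins for Figure 1.
    factor_placements = {
        "voluntariness of use": +1,   # people choose whether to be tested
        "familiarity": -1,            # genetic engineering is unfamiliar to many
        "controllability": +1,        # closed device with biocontainment
        "moral acceptability": -1,    # GMOs are controversial in some regions
        "visibility of benefit": +1,  # early diagnosis is a tangible benefit
    }

    score = sum(factor_placements.values())
    leaning = "safe" if score > 0 else "risky" if score < 0 else "mixed"
    print(f"Net perception score: {score:+d} (leans {leaning})")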

Conclusions: Risk has an important subjective component, and producers need to take this into account when presenting a technology to the public and implementing it in society. When pursuing public acceptance, it is important that the population perceives the benefits of the technology.

References

  1. Sjöberg, Lennart. "Factors in Risk Perception." Risk Analysis 20.1 (2000): 1-11.
  2. Blake, Elinor R. "Understanding Outrage: How Scientists Can Help Bridge the Risk Perception Gap." Environmental Health Perspectives 103.6 (1995): 123-125.

Responsibility

Responsibility can be defined as the duty to deal with something. In the case of developing a technology, it is the responsibility of the producers to deal with the risks associated with the technology by implementing safety measures. Here we discuss how many safety measures must be implemented to make a technology safe. Furthermore, we discuss what happens when the effects of the technology are beyond the producer's reach, specifically when the producer delegates responsibility to the end-user.

During development, it is necessary to carry out a risk assessment for the known risks the technology poses. The result of this analysis will not be black and white; in other words, it will not classify the technology as either dangerous or safe, but will place it on a spectrum of safety. The type and number of safety measures will therefore depend on the technology's position on this spectrum: the larger the risk a technology poses, the more safety measures must be taken. It is advisable to take more safety measures than strictly necessary. However, safety measures also have a cost, leading to a more expensive technology. This can seriously affect the success of technologies like Mantis, whose purpose is to be affordable to all and to reach as many people as possible.

As a result, it is necessary to weigh the risk the technology poses in the specific circumstances where it will be used (closed device, trained personnel, etc.) against the cost of the possible safety measures. The rule to follow in these situations is the "rule of reasonability", which states that the producer should act as it would be reasonable to act, considering the circumstances. It would be entirely reasonable not to spend thousands of dollars on a safety measure that might never be needed.
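As a minimal sketch of this weighing, assuming a purely monetary view, the following Python snippet compares the cost of a safety measure against the expected cost of the harm it would prevent. All figures are invented for illustration; a real evaluation would also weigh non-monetary factors such as human harm and reputation.

    # Toy "rule of reasonability": a safety measure is reasonable if its cost
    # is proportionate to the expected harm it prevents. Figures are invented.

    def is_reasonable(measure_cost, probability_of_incident, incident_cost):
        """Adopt a measure when it costs no more than the expected cost
        of the incidents it would prevent."""
        expected_harm = probability_of_incident * incident_cost
        return measure_cost <= expected_harm

    # A cheap measure against a plausible incident: reasonable.
    print(is_reasonable(measure_cost=500, probability_of_incident=0.01,
                        incident_cost=100_000))   # expected harm 1000 -> True

    # Thousands of dollars against a harm that may never occur: not reasonable.
    print(is_reasonable(measure_cost=5_000, probability_of_incident=0.0001,
                        incident_cost=100_000))   # expected harm 10 -> False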

Another problem related to responsibility is the inevitable transfer of responsibility to the end-user. There is no way for the producer to guarantee that the end-user will not misuse the technology, whether intentionally or not. For example, misuse might take the form of inappropriate disposal of the device or unethical use of the device for discriminatory purposes. However, the producer has several options to avoid undesirable consequences of misuse. For example, a biocontainment mechanism would prevent ecological disasters even in the case of inappropriate disposal. In the case of unethical use, the only option for the producer is to explicitly describe the proper use of the technology in the instruction sheet or on the label. Furthermore, the producer can take a proactive role in the implementation of the technology, such as making sure that personnel receive proper training or developing visual pamphlets to inform the (possibly illiterate) population about the device and its use.

Even when such measures are taken, some responsibility will always be transferred. In the case of a catastrophe resulting from misuse of the device, a trial will determine who bears more responsibility in that specific situation. This will depend on factors such as the intention of the end-user and the opportunities the producer had to prevent the catastrophe. However, even if the end-user turns out to be most responsible, the producer will also be affected negatively, because the product will be perceived as unsafe and the catastrophe will bring bad press to the company. Therefore, even considering the transfer of responsibility, it is in the producer's best interest to prevent misuse of their technology.

Conclusions: Developers have the duty to do everything in their power, within reasonable limits, to make their technology safe. This is not only what they should do; it is also in their best interest to develop a safe technology and to transfer as little responsibility as possible to the end-user.

Patient Autonomy

Patient autonomy related to diagnostics is just one part of medical ethics, a broad field that rests on four pillars: beneficence, non-maleficence, justice, and autonomy (read about them here). Concerning our device, the first three are relatively easy to analyze. Mantis complies with the beneficence principle because it provides a means to detect diseases at an early stage and the possibility of subsequent treatment, as well as helping to control the spread of infectious diseases. It complies with the non-maleficence principle because the technology itself does not pose any danger to the patient (this does not consider unethical use by the user, e.g. sacrificing infected patients). Additionally, Mantis complies with the justice principle because it will be open source, affordable and easy to produce, so that all of society, and not only a privileged part of the population, will be able to take advantage of the technology. The autonomy principle, however, is more difficult to analyze: how should governmental bodies react in a deadly epidemic emergency, when control of the disease is necessary? How autonomous is the decision of the patient when their knowledge about the situation and the device is limited?

When talking about autonomy and technologies we can think of three categories for decision-making processes:

  • The technology makes the decision after the measurement. For example, pacemakers are closed-loop systems in which the device's decision is not mediated by a human.
  • The technology is used to take measurements or to diagnose, but the decision is made by a person: the patient. The patient is informed of the possible options as well as their consequences and then makes an autonomous decision. This is the model of current healthcare systems in developed countries.
  • The decision is made by a human, but that human is not the patient. This happens in accidents where the patient is unconscious, or in developing countries where the means to properly inform patients are lacking, so the healthcare authorities make the decision instead.

Ideally, all societies and their healthcare systems will evolve towards the second model, informed decision-making, for most situations, because it is the only one that completely respects the autonomy of the patient.

For the patient to be able to make an autonomous decision, it is necessary to provide sufficient knowledge about the technology and its usage: why the technology is needed, what the possible outcomes are, and what happens after each outcome. Although providing knowledge is important for autonomy, that does not mean that autonomy depends on knowledge. The patient always has the right to refuse the technology, even when the decision is based on feelings rather than knowledge, such as distrust of the health authorities. Furthermore, patients have the right to refuse to receive information.

In our specific case, there is a complication because our device is based on genetically modified bacteria. Under normal conditions it is not necessary to inform patients about the technology behind a diagnostic, but the controversial perception of genetic engineering changes the situation. Although Mantis must be approved by the governments of the specific countries before use, there might be regions where opposition to GMOs is strong. In these cases, it is necessary to directly inform the population about the nature of the technology behind the device, so that the patient can make a decision based on their moral standards. If the patient refuses the device, an alternative should be offered, even if it is slower or more expensive.

The autonomy of the patient should never be ignored; it is internationally agreed that diagnostics should never be imposed. In a deadly infectious disease outbreak, the advised way to proceed is to offer the diagnostic as a way of avoiding quarantine. For example, consider an infectious disease with an incubation period. During this period the patient does not show symptoms, but can already infect other people and can already be diagnosed. Ideally, a fast screening of the population with a diagnostic test would distinguish infected from non-infected people.

If a patient refuses to be diagnosed while not showing any relevant symptoms, they retain their right to refuse the diagnostic. However, if the patient has been in contact with pathogens, or with people infected with pathogens, that appear on a recognized list of severely dangerous infectious agents (e.g. the federal isolation and quarantine laws administered by the Centers for Disease Control and Prevention [1]), the local government is within its rights to put the person in quarantine (for example, people were quarantined during the SARS coronavirus epidemic, but not during the Zika virus outbreak). To start a quarantine, the relevant government bodies need to issue the proper warrant after analyzing the situation. If the disease is not present on such a list, then the freedom of the individual cannot be impeded and the patient keeps their right to autonomy.

References
  1. CDC. Specific Laws and Regulations Governing the Control of Communicable Diseases. Centers for Disease Control and Prevention. Visited 30 October 2017. https://www.cdc.gov/quarantine/specificlawsregulations.html

Conclusions:

  • Decisions should always be made by patients, even though there are currently systems in which the health authorities decide, due to a lack of resources to ensure an autonomous decision by the patients.
  • In normal conditions, there is no need to inform the patient about the technology behind the diagnostic, but we should inform about the GMO nature of Mantis when this is a sensitive topic in the region.
  • Even in a deadly epidemic emergency, diagnostics cannot be imposed. Patients can instead be put in quarantine, and only when they accept the diagnostic (and its result is negative) can they be released from quarantine.