
Ethics

As with any other technology, the implementation of Mantis in society raises concerns that must be addressed. One way of addressing these concerns is an ethical analysis of some of the questions posed by our technology. To perform such an analysis, we talked with several experts whose input and answers served as inspiration for our own analysis, presented in the following videos and texts. These experts were Marcel Verweij (head of the Philosophy Department at Wageningen University), André Krom (ethics expert at RIVM), Ghilaine van Thiel (medical ethics lecturer at Utrecht Medical University), Tsalling Swierstra (philosophy professor at Maastricht University), Ree Meerteens (associate professor of Health Promotion at Maastricht University) and Jaco Westra (coordinator of Synthetic Biology at RIVM).

What is safety?

In our case, safety is the condition of a technology that leaves the user, the environment and any other individual free from the risk of being harmed. But how can one determine how safe a technology is? In other words, how can one determine the risk associated with a technology? There are two different approaches:

On one hand, we can use risk analysis, which studies the known risks and their probability under specific circumstances. An example of this approach is the determination of median lethal doses (LD50) for chemical compounds used in the pharmaceutical and food industries. On the other hand, we have risk perception, which takes a subjective approach based on the perception of individuals and the factors affecting that perception. Risk analysis and risk perception are related, but they show two different aspects of risk: objectivity and subjectivity, respectively. Another way to view these two concepts is to consider that the risk created by a technology can be dissected into two components: hazard and outrage. The hazard is the known danger and is measured through risk analysis, whereas the outrage is the reaction of society, driven mainly by risk perception.
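To make the contrast concrete, below is a toy sketch, in Python, of the kind of calculation a quantitative risk analysis performs. All scenario names and numbers are invented for illustration and are not taken from any real assessment of Mantis; note that the subjective outrage component is deliberately absent from the calculation.

```python
# Toy illustration of a quantitative risk analysis: each known hazard
# scenario is scored as probability x severity, and the expected harms
# are summed. All figures are invented for illustration only.

scenarios = [
    # (description, probability per use, severity on an arbitrary 0-100 scale)
    ("device leaks GMOs into the environment", 1e-6, 90),
    ("false negative delays treatment",        1e-3, 60),
    ("user mishandles the sample",             1e-2, 10),
]

def expected_harm(scenarios):
    """Sum of probability-weighted severities over all known scenarios."""
    return sum(p * severity for _, p, severity in scenarios)

for name, p, severity in scenarios:
    print(f"{name}: expected harm = {p * severity:.4f}")
print(f"total expected harm = {expected_harm(scenarios):.4f}")
```

Outrage, by contrast, cannot be folded into such a sum, which is exactly why the two aspects of risk must be treated separately.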

Another question we can ask is: how safe must a technology be in order to be used? To answer this, we need to consider the benefit the technology offers in addition to the risk it poses. A population will consider using a technology after making a judgement based on a comparison between the benefits offered and the risks posed. This trade-off between risk and benefit is reflected in legislation through acceptability limits. A technology therefore does not have to be completely free of risk to be used by the population; it has to be acceptably safe. Just as risk perception has an important subjective component, benefit perception also differs between individuals and is consequently affected by subjective factors. As an example of this trade-off between perceived benefit and risk, consider cars: although they are a dangerous technology that takes many lives in traffic accidents and can be used as a weapon in terrorist attacks, the benefit of an autonomous and fast means of transport makes the population accept the technology despite its known risks.
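The acceptability-limit idea can be phrased, purely as an illustrative sketch with invented placeholder values rather than regulatory figures, as a simple decision rule:

```python
# Hypothetical sketch of the risk-benefit trade-off described above.
# Scores and the legal limit are invented placeholders, not real values.

def is_acceptable(perceived_benefit: float,
                  perceived_risk: float,
                  legal_risk_limit: float) -> bool:
    """A technology is accepted when its benefits outweigh its risks AND
    the risk stays below the legally defined acceptability limit."""
    return perceived_benefit > perceived_risk and perceived_risk <= legal_risk_limit

# Cars: a high perceived risk, but an even higher perceived benefit,
# and the risk falls within the (invented) legal limit.
print(is_acceptable(perceived_benefit=80, perceived_risk=40, legal_risk_limit=50))  # True
```

In practice both inputs are partly subjective, which is exactly why different populations reach different verdicts on the same technology.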

Many factors have been argued to strongly affect risk perception (a summary of the factors affecting our device is depicted in Figure 1). However, it is still unclear whether risk perception is driven mainly by social, personal or technological factors. In our opinion, two factors are especially important to consider, and they are related to each other: moral acceptability and attitude towards the technology relate directly to the subjective perception of the technology itself, and not only to the perception of its risk. The differences in moral acceptability and attitude among populations also reveal an important detail: people have different perceptions of reality. This is an important consideration, as the risk measured in a risk analysis is considered to be the "real" risk by the scientific community.

This clash of perceived realities could be observed in the fight against the Ebola outbreak: while the health authorities insisted that the corpses had to be burned, the local families saw this as an offence to their gods that would prevent the deceased from reaching the afterlife. Consequently, distrust of the health authorities grew, which made controlling the disease more difficult. It is important to keep in mind that, for the local families, the risk of not seeing their loved ones in the afterlife outweighed the risk of getting infected. We can therefore conclude that talking about "real" risk can be misleading when debating with the public; analysing all the known risks that can be anticipated is a better alternative. Of course, every technology also carries unknown risks, which cannot be analysed or measured. These have two main sources: intentional use of the technology for harmful purposes, and unexpected accidents or secondary effects.

Risk perception affects our lives through laws based on acceptability limits for risks. In a democratic society, three different sectors shape the legal boundaries for a technology, each with a different role. The first sector, the general population, has its own subjective risk perception. This perception will not always match the experts' opinion, for several reasons. On one hand, there are legitimate reasons, such as unexpected consequences not considered by the experts. This reason is legitimate because the risk analysis performed by the experts is reductionist: it cannot take every improbable risk into account. On the other hand, there are illegitimate reasons, based on misconceptions. For example, some individuals believe that the protein gluten is harmful to their health even though they do not suffer from celiac disease.

The second sector comprises the scientists and experts. They inform the population and the politicians about the risk analysis, and they try to correct the misconceptions of the general population. Third and last are the politicians, who make the decisions and define the legal limits for the technology. These decisions may be influenced by their own perception, the perception and emotions of the population, the advice of experts, economic pressure, religious pressure, corruption... Of course, not all of these influences are ethical, but their effect is undeniable in most countries. We can only hope to evolve into a society where decisions are made through an open public discussion among all parties involved, in which only the legitimate reasons affect the outcome.

Figure 1: This figure represents the expected risk perception of the public towards our technology, taking into account a number of factors affecting risk perception. Note that this is only a rough guide, as it represents neither all factors nor the relative importance of each of them.

Responsibility can be defined as the duty to deal with something. In the case of developing a technology, it is the producers' responsibility to deal with the risks associated with that technology. When a technology is being developed, a risk assessment must be carried out for the known risks it poses. The result of this assessment is not black or white: it does not classify the technology as either dangerous or safe, but places it on a spectrum of safety. The type and number of safety measures therefore depend on the technology's position on this spectrum: the larger the risk, the more safety measures must be taken. It is advisable to take more safety measures than strictly necessary. However, safety measures also have a cost, leading to a more expensive technology. This can seriously affect the success of technologies like Mantis, whose purpose is to be cheap so that it remains affordable and reaches as many people as possible.

As a result, a trade-off must be evaluated between the risk the technology poses in the specific circumstances in which it will be used (closed device, trained personnel...) and the cost of the possible safety measures. The rule to follow in these situations is the "rule of reasonability", which states that the producer should act as it would be reasonable to act given the circumstances. It would be entirely reasonable, for instance, not to spend thousands of dollars on a safety measure that might never be needed.
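One hedged way to make the rule of reasonability concrete is to compare a safety measure's cost against the expected harm it would prevent. The sketch below uses invented figures purely to illustrate the reasoning, not actual costs or probabilities for Mantis:

```python
# Illustrative sketch of the "rule of reasonability": a safety measure is
# reasonable when its cost is justified by the expected harm it prevents.
# All figures are invented for illustration.

def is_reasonable(measure_cost: float,
                  harm_probability: float,
                  harm_cost: float) -> bool:
    """Adopt a measure when its cost is below the expected harm avoided."""
    return measure_cost < harm_probability * harm_cost

# A $5,000 containment upgrade against a 1-in-1,000 accident costing $10M:
print(is_reasonable(5_000, 1e-3, 10_000_000))   # True: 5,000 < 10,000
# A $50,000 upgrade against a 1-in-1,000,000 accident costing $10M:
print(is_reasonable(50_000, 1e-6, 10_000_000))  # False: 50,000 > 10
```

Real decisions weigh more than expected cost, but the comparison captures why spending heavily on a measure that might never be needed can be unreasonable.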

Another problem related to responsibility is the inevitable transfer of responsibility to the end-user. There is no way for the producer to guarantee that the end-user will not misuse the technology, whether intentionally or not. For example, misuse might take the form of inappropriate disposal of the device or unethical use of the device for discriminatory purposes. However, the producer has several options to avoid undesirable consequences of misuse. For example, a biocontainment mechanism would prevent ecological disasters even in the case of inappropriate disposal. In the case of unethical use, the producer's only option is to explicitly describe the proper use of the technology on the instruction sheet or the label. Furthermore, the producer can play a proactive role in the implementation of the technology, such as making sure that personnel receive proper training or developing visual pamphlets to inform the (possibly illiterate) population about the device and its use.

Even when such measures are taken, some responsibility will always be transferred. In the case of a catastrophe resulting from misuse of the device, a trial will determine who bears the greater responsibility in that specific situation. This will depend on factors such as the intention of the end-user and the opportunities the producer had to prevent the catastrophe. However, even if the end-user turns out to bear most of the responsibility, the producer will still be affected negatively, because the product will be perceived as unsafe and the catastrophe will bring bad press to the company. Therefore, even considering the transfer of responsibility, it is in the producer's best interest to prevent misuse of its technology.

Patient autonomy related to diagnostics is just one part of medical ethics, a broad field that rests on four pillars: beneficence, non-maleficence, justice and autonomy. Concerning our device, the first three are relatively easy to analyse. Mantis complies with the beneficence principle because it provides a means to detect diseases at an early stage and the possibility of subsequent treatment, as well as helping to control the spread of infectious diseases. It complies with the non-maleficence principle because the technology itself does not pose any danger to the patient (leaving aside unethical use by the user). Additionally, Mantis complies with the justice principle because it will be open source, affordable and easy to produce, so that the whole of society, and not only a privileged part of the population, can take advantage of the technology. The autonomy principle, however, is more difficult to analyse: how should governmental bodies react during a lethal epidemic, when control of the disease is necessary? How autonomous is the decision of a patient whose knowledge about the situation and the device is limited?

Who should make decisions?

When talking about autonomy and technology, there are three categories of decision-making processes:

  • In-the-loop systems: The technology makes the decision after the measurement. For example, pacemakers are in-the-loop systems, where the decision of the device is not affected by a human.
  • Out-of-the-loop systems and informed decision-making: The technology is used to take measurements or to diagnose, but the decision is made by a person, the patient. The patient is informed of the possible options as well as their consequences, and then makes an autonomous decision. An example is the current healthcare system in developed countries.
  • Out-of-the-loop systems and shared decision-making: The decision rests with a human, but that human is not the patient. This happens in accidents where the patient is unconscious, or in developing countries where the means to properly inform patients are lacking, so the healthcare authorities make the decision instead.

Ideally, all societies and their healthcare systems will evolve towards informed decision-making in most situations, because it is the only category that fully respects the autonomy of the patient.
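Purely as an illustrative data model, and not as anything taken from the actual project, the three categories above can be summarised in a short sketch:

```python
# Illustrative model of the three decision-making categories described above.
from enum import Enum, auto

class DecisionMode(Enum):
    IN_THE_LOOP = auto()  # the device itself decides (e.g. a pacemaker)
    INFORMED = auto()     # out-of-the-loop: the informed patient decides
    SHARED = auto()       # out-of-the-loop: someone else decides for the patient

def decision_maker(mode: DecisionMode) -> str:
    """Return who holds the decision under each mode."""
    return {
        DecisionMode.IN_THE_LOOP: "the technology",
        DecisionMode.INFORMED: "the patient",
        DecisionMode.SHARED: "a third party (e.g. health authorities)",
    }[mode]

print(decision_maker(DecisionMode.INFORMED))  # the patient
```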

What information should the patient receive?

For the patient to be able to make an autonomous decision, it is necessary to provide sufficient knowledge about the usage of the technology: why the technology is needed, what the possible outcomes are, and what happens after each outcome. Although providing knowledge is important for autonomy, that does not mean that autonomy depends on knowledge. The patient always has the right to refuse the technology, even when that decision is based on feelings rather than knowledge, such as distrust of the health authorities. Furthermore, patients also have the right to refuse to receive information.

In our specific case there is a complication, because our device is based on genetically modified bacteria. Although under normal conditions it is not necessary to explain the technology behind a diagnostic, the controversial perception of genetic engineering can change the situation. Although the technology must be approved by the government of each specific country before use, there may be regions where opposition to GMOs is strong. In these cases, it is necessary to inform the population directly about the nature of the technology behind the device, so that the patient can make a decision based on their moral standards. If the patient refuses to accept it, an alternative should be offered, even if it is slower or more expensive.

Should the diagnostic be imposed in emergency cases?

The autonomy of the patient should never be ignored, and it is internationally agreed that diagnostics should never be imposed. In an outbreak of a lethal infectious disease, the advised way to proceed is to offer the diagnostic as an alternative to quarantine. For example, consider an infectious disease with an incubation period during which the patient shows no symptoms but can already infect other people and can already be diagnosed. Ideally, a fast screening of the population with a diagnostic test would make it possible to distinguish infected from non-infected people. If a patient refuses to be diagnosed, he or she will be put into quarantine and released only once it is proven, by whatever technology, that he or she is not a carrier of the disease.
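The screening procedure described above amounts to a simple decision flow. The function below is a hypothetical sketch of that logic, not part of any real health protocol:

```python
# Hypothetical sketch of the screening flow described above: the diagnostic
# is offered as an alternative to quarantine, never imposed on the patient.
from typing import Optional

def triage(accepts_diagnostic: bool, test_positive: Optional[bool] = None) -> str:
    """Return the next step for one person during an outbreak screening."""
    if not accepts_diagnostic:
        # Autonomy is respected: refusal leads to quarantine, not forced testing.
        return "quarantine until proven not to be a carrier"
    if test_positive:
        return "isolate and treat"
    return "no infection detected: release"

print(triage(accepts_diagnostic=False))
print(triage(accepts_diagnostic=True, test_positive=False))
```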

The autonomy principle also applies to entire governments. As a rule, only genocide is a matter for international intervention. If a country refuses to use a technology to control an infectious disease, only a limited number of options are available. On one hand, other technologies can be offered, even if they are less effective. On the other hand, political or economic pressure might be used to make the government accept the technology, although such measures might harm the population, adding economic problems to the already existing health problems. In conclusion, diagnostics must always be presented as an optional and beneficial tool for the control of diseases, and not as a compulsory tool for screening the population.