As Ireland advances through the stages of reopening in the midst of a pandemic, we see the beginnings of a Data Ethics case study unfolding as technology vendors begin advertising thermal scanning solutions as part of a Covid-19 response strategy. The question is one of the ethics of technology solutionism versus soap and water, and whether suppliers of data processing technologies have an ethical duty to be upfront about the limits of such technologies' effectiveness, particularly in the midst of a global pandemic.

Here’s one from Vodafone Ireland this morning

And here’s @brian_daly on Twitter calling out an advertorial in a Sunday paper

In the advertorial, the solution provider makes the bold claim that their product was developed in line with WHO Guidelines. Neither Vodafone nor the other company seems to have noticed that no Public Health guidance at this time requires the use of thermal scanners, and that the WHO has this to say on its website:

Thermal scanners are effective in detecting people who have a fever (i.e. have a higher than normal body temperature). They cannot detect people who are infected with COVID-19.

There are many causes of fever. Call your healthcare provider if you need assistance or seek immediate medical care if you have fever and live in an area with malaria or dengue.

The Data Protection Commission’s guidance on the Return to Work protocols in Ireland is very clear:

The DPC is not aware of any current Public Health advice recommending the implementation of temperature testing in the workplace. Accordingly, temperature testing should not be considered a requirement of the Protocol at this time.

The Ethical Problem

Part of the issue with the use of thermal scanning as part of Covid-19 response is the issue of false positives and false negatives. Lots of things can cause people to have high temperatures (other illnesses or hormonal conditions, physical activity, menopause) and other things can mask high temperatures (hormonal conditions that reduce body temperature, people taking paracetamol to suppress a fever). And that is before we even get to the point of considering asymptomatic carriers of the virus or people who are pre-symptomatic.
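To see why those error rates matter, here is a rough sketch of the underlying base-rate arithmetic. All the numbers (throughput, fever prevalence, sensitivity, specificity) are illustrative assumptions for the sake of the example, not measured performance figures for any real thermal scanner:

```python
# Illustrative base-rate arithmetic for a fever-screening checkpoint.
# Every number below is an assumption chosen for illustration only.

def screening_outcomes(n_people, prevalence, sensitivity, specificity):
    """Return (true_pos, false_neg, false_pos, true_neg) expected counts."""
    feverish = n_people * prevalence
    healthy = n_people - feverish
    true_pos = feverish * sensitivity        # feverish people correctly flagged
    false_neg = feverish - true_pos          # feverish people waved through
    false_pos = healthy * (1 - specificity)  # healthy people wrongly flagged
    true_neg = healthy - false_pos
    return true_pos, false_neg, false_pos, true_neg

# Assume 10,000 people screened, 0.5% with a fever, a scanner with
# 90% sensitivity and 95% specificity.
tp, fn, fp, tn = screening_outcomes(10_000, 0.005, 0.90, 0.95)
ppv = tp / (tp + fp)  # chance that a flagged person actually has a fever

print(f"healthy people wrongly flagged: {fp:.0f}")
print(f"feverish people waved through:  {fn:.0f}")
print(f"probability a flagged person actually has a fever: {ppv:.1%}")
```

Under these assumed numbers, the overwhelming majority of the people flagged do not have a fever at all, while some feverish people still slip through, and that is before we even count asymptomatic carriers, who present no fever for the camera to find.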

Once we implement a technology such as this, we then need to consider what actions will be taken once a technology or a process has determined that someone has an excessive temperature. Do we ban people from entering the area? Do we require additional, more invasive tests? Do we deprive people of income? And if we are going to impact on fundamental rights and freedoms, is the information on which we are basing those decisions actually adequate or accurate enough to have the required infection control/prevention effect?

The Ethical Challenge

The ethical issue arises when a technology provider promotes a solution in an environment where it could impact on rights and freedoms, and on how people respond to a public health emergency, without being transparent about the limitations of the technology and the risks that arise from its error rates. A false positive is potentially inconvenient and embarrassing for a person barred entry to a building, and could give rise to issues if a worker is denied income because they are prevented from working. But a far greater problem arises from false negatives, whether through calibration issues, environmental or medical factors that mask illness, asymptomatic carriers, or people intentionally masking fevers with medication.

The fact that vendors of technology solutions seem to be promoting their widgets without a frank disclosure of the limitations of the technology as a public health control (and, indeed, implying that there might be guidance that would suggest it is an effective control) is akin to the sale of mystery elixirs from the back of a carnival van in days of yore.

Of course, from a legal perspective, the burden falls on the organisation that is installing these systems to conduct the Data Protection Impact Assessment needed to make their use lawful. That Assessment would need to consider the necessity, proportionality, and effectiveness of any proposed solution in the context of public health guidance. It would need to consider the necessity of any processing to that stated purpose (for example, linking temperature to a biometric identifier and facial recognition would need a robust justification). That evaluation would need to consider the issues of false positives and false negatives and asymptomatic transmission. And it would need to consider if the objective can be met by other less invasive means of processing (or if processing of data is required at all). Ideally this would need to be done BEFORE a technology is acquired and implemented.

The liability for data protection breaches in this case falls primarily on the Data Controller: the entity that purchased the thermal cameras. The obligation to carry out the DPIA falls to them. The obligation to ensure that their processing of data meets GDPR requirements, and the fundamental test of necessity and proportionality, falls to them. The seller of the technology may not even be a data processor if they have simply sold hardware and software and are not actually doing any processing of the data.

What does the DPC say?

The DPC’s guidance is clear on this:

  • There may be scenarios where this technology might have a use
  • The necessity and proportionality will need to consider public health guidance
  • The Data Controller installing the cameras will need to justify all subsequent processing from a data protection perspective
  • A DPIA may need to be carried out (hint: it’s Special Category Data, so that “may” is actually a “must”).

So, vendors won’t disclose the paltry evidence of effectiveness, the evidence of false positive and false negative conditions, or the absence of any public health guidance advising the use of thermal scanners, because that might put people off buying the widget-du-jour, the mystery elixir that is good for curing and preventing all ills. And by the time a problem arises, they have moved on, or the facts don’t create any liability for them in the use of the technology. “Caveat Emptor!!” is the Hogwarts-like spell that is uttered to shield the unscrupulous vendor from the harm triggered by their technology solutionism.

This is the data ethics issue du jour; however, it is not a new problem.

First, do no harm?

But consider the risks to people and society of a misplaced investment decision: an employer, a shop, or another venue spends money on technology that doesn’t work for the purpose for which it is being acquired, removing or reducing the budget available for other mechanisms with verifiable, evidence-based success (e.g. masks, social distancing, and hygiene). Given those risks, shouldn’t vendors take an ethical approach and be up-front about the limitations of their technology? This is a key data ethics question. If the technology is neutral, how we choose to use it and promote its use becomes inherently an ethical issue.

If the technology has limitations for a prescribed purpose, and if its potential impacts on people or society will reduce its beneficence without improving the overall situation, while potentially constraining people’s autonomy or their ability to exercise other rights, then failing to highlight those limitations is itself an ethical failure: it reduces society’s ability to deploy approaches that do work, and that do so without the same issues of data ethics, cost, data protection, and displacement of investment from more effective (but less technologically fancy) approaches.

All it takes is one person to trigger a chain of transmission. One asymptomatic person waved through because the temperature camera says they are OK.

Warning: This Elixir may contain Snake Oil and Boot Polish.

(Our research paper giving guidance on considerations for Thermal scanning in work places is available here)