Greece used AI to curb COVID: what other nations can learn



Greece’s decision to deploy machine learning in pandemic surveillance will be studied closely around the world. Credit: Konstantinos Tsakalidis / Bloomberg / Getty

A few months after the start of the COVID-19 pandemic, operations researcher Kimon Drakopoulos e-mailed the Greek prime minister and the head of the country’s COVID-19 scientific task force to ask whether they needed any additional help or advice.

Drakopoulos works in data science at the University of Southern California in Los Angeles, and is originally from Greece. To his surprise, he received a reply from Prime Minister Kyriakos Mitsotakis within hours. The European Union was urging member states, many of which had imposed widespread lockdowns in March, to allow non-essential travel to resume from July 2020, and the Greek government needed help in deciding when and how to reopen its borders.

Greece, like many other countries, lacked the capacity to test all travelers, particularly those without symptoms. One option was to test a sample of visitors, but Greece opted instead to experiment with an artificial-intelligence (AI)-based approach.

Between August and November 2020 – with input from Drakopoulos and his colleagues – the authorities deployed a system that uses a machine-learning algorithm to determine which travelers entering the country should be tested for COVID-19. The researchers found that machine learning was more effective at identifying asymptomatic infected people than either random testing or testing based on a traveler’s country of origin. According to their analysis, during the peak tourist season the system detected two to four times as many infected travelers as random testing would have.

The machine-learning system, which is among the first of its kind, is called Eva and is described in Nature this week (H. Bastani et al. Nature https://doi.org/10.1038/s41586-021-04014-z; 2021). It is an example of how data analysis can contribute to effective COVID-19 policies. But it also presents challenges, from protecting individual privacy to the need to independently verify its accuracy. Moreover, Eva is a reminder of why proposals for a pandemic treaty (see Nature 594, 8; 2021) must take into account rules and protocols on the proper use of AI and big data. These need to be drawn up in advance, so that such analyses can be used quickly and safely in an emergency.

In many countries, travelers are selected for COVID-19 testing at random or on the basis of risk categories. For example, someone arriving from a region with a high rate of infections might be given priority for testing over someone coming from an area with a lower rate.

By contrast, Eva collected not only travel history but also demographic data, such as age and gender, from the passenger-information forms required for entry into Greece. It then matched these characteristics with data from previously tested passengers, and used the results to estimate an individual’s risk of infection. COVID-19 tests were targeted at the travelers estimated to be at highest risk. The algorithm also issued some tests to fill gaps in its data, ensuring that it stayed up to date as the situation unfolded.
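The targeting-plus-exploration loop described above can be sketched roughly as follows. This is a hypothetical simplification, not the actual Eva implementation: the grouping features, the smoothed prevalence estimate, and the fixed exploration fraction are all illustrative assumptions.

```python
import random

def group_key(traveler: dict) -> tuple:
    """Reduce a traveler's entry form to a coarse feature group."""
    return (traveler["origin"], traveler["age_band"], traveler["gender"])

def allocate_tests(travelers, results, budget, explore_frac=0.2):
    """Choose which travelers to test, given past results per group.

    results: dict mapping group -> (positives, tests) observed so far.
    Most of the budget targets the highest-risk groups; a fraction is
    spent at random so estimates for under-sampled groups stay current.
    """
    def risk(group):
        # Smoothed prevalence estimate (posterior mean under a flat prior),
        # so groups with no data get a neutral, nonzero risk.
        pos, n = results.get(group, (0, 0))
        return (pos + 1) / (n + 2)

    ranked = sorted(travelers, key=lambda t: risk(group_key(t)), reverse=True)
    n_explore = int(budget * explore_frac)
    chosen = ranked[:budget - n_explore]            # exploit: highest risk
    rest = ranked[budget - n_explore:]
    chosen += random.sample(rest, min(n_explore, len(rest)))  # explore
    return chosen
```

In this sketch, the exploration draw plays the role of the tests Eva issued to fill data gaps: without it, a group that happened never to be tested would never have its risk estimate updated.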

During the pandemic, there has been no shortage of ideas on how to deploy big data and AI to improve public health or assess the pandemic’s economic impact. However, relatively few of these ideas have been put into practice. This is partly because companies and governments that hold relevant data – such as mobile-phone records or details of financial transactions – need agreed systems to be in place before they can share the data with researchers. It is also not clear how consent can be obtained to use such personal data, or how to ensure that the data are stored safely and securely.

Eva was developed in consultation with lawyers, who ensured that the program complied with the privacy protections afforded by the EU’s General Data Protection Regulation (GDPR). Under the GDPR, organizations such as airlines that collect personal data must adhere to security standards and obtain consent to store and use the data – and to share them with a public authority. The data collected also tend to be limited to the minimum needed for the stated purpose.

But this is not necessarily the case outside the EU. Moreover, AI techniques such as machine learning are limited by the quality of the available data. Researchers have uncovered numerous instances in which algorithms intended to improve decision-making in areas such as medicine and criminal justice instead reflected and perpetuated biases that are common in society. The field needs to develop standards that indicate when data – and the algorithms that learn from them – are of sufficient quality to be used in making important decisions in an emergency. There also needs to be greater transparency about how algorithms are designed and about the data on which they are trained.

The eagerness with which Drakopoulos’s offer of help was taken up shows how keen policymakers are to improve their ability to respond to emergencies. As such algorithms become more influential and more widely accepted, it could become easy for them to slip, unnoticed, into everyday life, or to be used in nefarious ways. One example is facial-recognition technologies, which can be used to curb criminal behavior but can also be misused to invade people’s privacy (see Nature 587, 354–358; 2020). Although Eva’s creators achieved what they set out to do, it is important to remember the limits of big data and machine learning, and to develop ways to govern such techniques so that they can be deployed quickly and safely.

Despite a multitude of data-collection methods, many policymakers were unable to access and use data during the pandemic. Researchers and funders should start laying the groundwork for future emergencies now, by establishing data-sharing agreements and privacy protocols in advance to improve response times. And discussions should also begin on setting reasonable limits on how much decision-making power an algorithm should have in a crisis.


