Artificial intelligence risks to privacy demand urgent action – Bachelet


GENEVA (September 15, 2021) – United Nations High Commissioner for Human Rights Michelle Bachelet on Wednesday stressed the urgent need for a moratorium on the sale and use of artificial intelligence (AI) systems that pose a serious risk to human rights until adequate safeguards are in place. She also called for a ban on AI applications that cannot be used in accordance with international human rights law.

“Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our time. But AI technologies can have negative and even catastrophic effects if used without sufficient consideration of how they affect people’s human rights,” Bachelet said.

“The higher the risk to human rights, the more stringent the legal requirements for the use of AI technology should be,” said the UN human rights chief. “But since it may take time before risks can be assessed and addressed, states should impose moratoria on the use of potentially high-risk technologies.”

As part of its work* on technology and human rights, the United Nations Human Rights Office today released a report that analyzes how AI – including profiling, automated decision-making and other machine learning technologies – affects people’s right to privacy and other rights, including the rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression.

“Artificial intelligence now reaches almost every corner of our physical and mental life and even our emotional states. AI systems are used to determine who gets public services, decide who has a chance of being hired for a job and, of course, they affect the information that people see and can share online,” the High Commissioner said.

The report examines how states and businesses have often rushed to integrate AI applications without doing due diligence. There have already been many cases of people being treated unfairly because of AI, such as being denied social security benefits due to faulty AI tools or being arrested because of flawed facial recognition.

The report details how AI systems rely on large data sets, with information about individuals collected, shared, merged and analyzed in multiple and often opaque ways. The data used to inform and guide AI systems can be flawed, discriminatory, outdated, or irrelevant. Long-term data storage also poses special risks, as data could in the future be exploited in ways that are still unknown.

“With the rapid and continued growth of AI, closing the huge accountability gap in how data is collected, stored, shared and used is one of the most pressing human rights questions we face,” Bachelet said.

The inferences, predictions, and monitoring performed by AI tools, including the search for insights into patterns of human behavior, also raise serious questions. The biased data sets upon which AI systems rely can lead to discriminatory decisions, and these risks are most acute for already marginalized groups.

“The risk of discrimination associated with AI-based decisions – decisions that can change, define, or harm human lives – is all too real. That is why there must be systematic assessment and monitoring of the effects of AI systems to identify and mitigate human rights risks,” Bachelet said.

Biometric technologies, which are increasingly becoming a go-to solution for states, international organizations and technology companies, are an area where more human rights guidance is urgently needed, the report stresses.

These technologies, including facial recognition, are increasingly used to identify people in real time and from a distance, potentially allowing unlimited tracking of individuals. The report reiterates calls for a moratorium on their use in public spaces, at least until authorities can demonstrate that there are no significant issues of accuracy or discriminatory impacts, and that these AI systems comply with strong privacy and data protection standards.

Businesses and states also need to be much more transparent in how they develop and use AI.

“The complexity of the data environment, algorithms and models underlying the development and operation of AI systems, as well as the intentional secrecy of government and private actors, are factors undermining meaningful ways for the public to understand the effects of AI systems on human rights and society,” the report states.

“We cannot afford to continue playing catch-up with AI – allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact. The power of AI in the service of people is undeniable, but so too is AI’s ability to fuel human rights violations on a massive scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us,” Bachelet stressed.

Read the full report here

See also: High Commissioner’s statement on the implications of Pegasus spyware for the Council of Europe on September 14, 2021

* Visit OHCHR’s page on the right to privacy in the digital age: http://www.ohchr.org/EN/Issues/DigitalAge/Pages/DigitalAgeIndex.aspx

© Scoop Media

