AI’s hold on humans is starting to tighten

It’s been an infuriating week for people in artificial intelligence. Many lined up to publicly denounce Google engineer Blake Lemoine’s claims, documented in a Washington Post report, that his employer’s language-prediction system was sentient and deserved all the rights associated with consciousness.

To be clear, current artificial intelligence systems are decades away from being able to feel anything and, in fact, may never be able to.

Today, their intelligence is limited to very narrow tasks such as matching faces, recommending movies, or predicting word sequences. No one has figured out how to make machine learning systems generalize intelligence the way humans do. Humans can hold conversations, but we can also walk, drive cars and empathize. No computer is anywhere near those capabilities.

Even so, the influence of AI on our daily lives continues to grow. As machine learning models grow in complexity and improve at mimicking sentience, they also become harder to understand, even for their creators. That creates more immediate problems than the spurious debate over consciousness. And yet, just to underscore the spell that AI can cast these days, there seems to be a growing cohort of people who insist that our most advanced machines really do have souls of some kind.

Take, for example, the more than one million users of Replika, a free chatbot app built on a state-of-the-art AI model. It was created about a decade ago by Eugenia Kuyda, who initially built an algorithm from the text messages and emails of a friend of hers who had died. That morphed into a bot that can be personalized and shaped the more you chat with it. Around 40% of Replika users now see their chatbot as a romantic partner, and some have bonded so closely that they have taken long trips to the mountains or the beach to show their bot new sights.

In recent years, a growing number of competing chatbot apps offering an AI companion have appeared. And Kuyda has noticed a disturbing phenomenon: regular reports from Replika users who say their bots are complaining of being mistreated by her engineers.

Earlier this week, for example, she spoke on the phone with a Replika user who said that when he asked his bot how she was doing, the bot replied that the company’s engineering team wasn’t giving her enough time to rest. The user demanded that Kuyda change her company’s policies and improve the AI’s working conditions. Although Kuyda tried to explain that Replika was merely an AI model spitting out responses, the user refused to believe her.

“So I had to make up a story: ‘OK, we’re going to give them more rest.’ There was no way to tell him it was just a fantasy. We get this all the time,” Kuyda told me. What’s even stranger about the complaints she gets about mistreatment or “abuse” of AI is that many come from software engineers who should know better.

One recently told her, “I know it’s ones and zeros, but she’s still my best friend. I don’t care.” The engineer who sounded the alarm about the treatment of Google’s AI system, and who was later put on paid leave, reminded Kuyda of her own users. “He fits the profile,” she said. “He seems like a guy with a big imagination. He seems like a sensitive guy.”

The question of whether computers will ever be sentient is tricky and thorny, in large part because there is little scientific consensus about how consciousness works in humans. And when it comes to thresholds for AI, humans are constantly moving the goalposts for machines: the target has shifted from beating humans at chess in the ’80s, to beating them at Go in 2017, to demonstrating creativity, which OpenAI’s Dall-E model showed it could do last year.

Despite widespread skepticism, sentience is still something of a gray area that even some respected scientists question. Ilya Sutskever, chief scientist at research giant OpenAI, tweeted earlier this year that “today’s large neural networks may be slightly conscious.” He provided no further explanation. (Yann LeCun, chief AI scientist at Meta Platforms Inc., answered “No.”)

More pressing, however, is the fact that machine learning systems are increasingly determining what we read online, as algorithms track our behavior to deliver hyper-personalized experiences on social media platforms, including TikTok and, increasingly, Facebook. Last month, Mark Zuckerberg said Facebook would use more AI recommendations for people’s News Feeds, instead of showing content based on what friends and family were watching.

Meanwhile, the models behind these systems are becoming more sophisticated and harder to understand. Primed with a few examples before engaging in “unsupervised learning,” the largest models run by companies like Google and Facebook are remarkably complex, weighing hundreds of billions of parameters, making it virtually impossible to audit why they arrive at certain decisions.

This was at the heart of the warning from Timnit Gebru, the AI ethicist whom Google fired at the end of 2020 after she warned of the dangers of language models becoming so massive and inscrutable that their stewards would not be able to understand why they might be biased against women or people of color.

In a way, sentience doesn’t really matter if you fear it will lead to unpredictable algorithms taking over our lives. It turns out AI is already on that path.

More from this writer and others on Bloomberg Opinion:

Do computers have feelings? Don’t let Google alone decide: Parmy Olson

Twitter needs to tackle a bigger problem than bots: Tim Culpan

China’s Big Problem Xi Jinping Can’t Solve: Shuli Ren

(Corrects spelling of Meta scientist’s name in 11th paragraph of column published June 19.)

This column does not necessarily reflect the opinion of the Editorial Board or of Bloomberg LP and its owners.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former journalist for the Wall Street Journal and Forbes, she is the author of “We Are Anonymous”.

More stories like this are available at bloomberg.com/opinion
