Should Alexa Read Our Moods?

This article is part of the On Tech newsletter. You can sign up here to receive it weekdays.

If Amazon’s Alexa thinks you sound sad, should it suggest that you buy a gallon of ice cream?

Joseph Turow says absolutely no way. Dr. Turow, a professor at the Annenberg School for Communication at the University of Pennsylvania, researched technologies like Alexa for his new book, “The Voice Catchers.” He came away convinced that companies should be barred from analyzing what we say and how we sound to recommend products or personalize advertising messages.

Dr. Turow’s suggestion is notable partly because the profiling of people based on their voices isn’t widespread. Or, it isn’t yet. But he is encouraging policymakers and the public to do something I wish we did more often: Be careful and considerate about how we use a powerful technology before it might be used for consequential decisions.

After years of researching Americans’ evolving attitudes about our digital jet streams of personal data, Dr. Turow said that some uses of technology had so much risk for so little upside that they should be stopped before they got big.

In this case, Dr. Turow is worried that voice technologies like Amazon’s Alexa and Apple’s Siri will morph from digital butlers into diviners that use the sound of our voices to work out intimate details like our moods, desires and medical conditions. In theory, they could one day be used by the police to determine who should be arrested or by banks to decide who is worthy of a mortgage.

“Using the human body for discriminating among people is something that we should not do,” he said.

Some business settings, like call centers, are already doing this. If computers assess that you sound angry on the phone, you might be routed to operators who specialize in calming people down. Spotify has also disclosed a patent on technology to recommend songs based on voice cues about the speaker’s emotions, age or gender. Amazon has said that its Halo health-tracking bracelet and service will analyze “energy and positivity in a customer’s voice” to nudge people into better communications and relationships.

Dr. Turow said that he didn’t want to stop potentially helpful uses of voice profiling — for example, to screen people for serious health conditions, including Covid-19. But there is very little benefit to us, he said, if computers use inferences from our speech to sell us dish detergent.

“We have to outlaw voice profiling for the purpose of marketing,” Dr. Turow told me. “There is no utility for the public. We’re creating another set of data that people have no clue how it’s being used.”

Dr. Turow is tapping into a debate about how to treat technology that could have enormous benefits, but also downsides that we might not see coming. Should the government try to put rules and regulations around powerful technology before it’s in widespread use, like what’s happening in Europe, or leave it mostly alone unless something bad happens?

The tricky thing is that once technologies like facial recognition software or car rides at the press of a smartphone button become prevalent, it’s more difficult to pull back features that turn out to be harmful.

I don’t know if Dr. Turow is right to raise the alarm about our voice data being used for marketing. A few years ago, there was a lot of hype that voice would become a major way that we would shop and learn about new products. But no one has proved that the words we say to our gizmos are effective predictors of which new truck we’ll buy.

I asked Dr. Turow whether people and government regulators should get worked up about hypothetical risks that may never come. Reading our minds from our voices might not work in most cases, and we don’t really need more things to feel freaked out about.

Dr. Turow acknowledged that possibility. But I got on board with his point that it’s worthwhile to start a public conversation about what could go wrong with voice technology, and decide together where our collective red lines are — before they are crossed.



  • Mob violence accelerated by app: In Israel, at least 100 new WhatsApp groups have been formed for the express purpose of organizing violence against Palestinians, my colleague Sheera Frenkel reported. Rarely have people used WhatsApp for such specific targeted violence, Sheera said.

  • And when an app encourages vigilantes: Citizen, an app that alerts people about neighborhood crimes and hazards, posted a photograph of a homeless man and offered a $30,000 reward for information about him, claiming he was suspected of starting a wildfire in Los Angeles. Citizen’s actions helped set off a hunt for the man, who the police later said was the wrong person, wrote my colleague Jenny Gross.

  • Why many popular TikTok videos have the same bland vibe: This is an interesting Vox article about how the computer-driven app rewards the videos “in the muddled median of everyone on earth’s most average tastes.”

Here’s a not-blah TikTok video with a happy horse and a few happy pups.


We want to hear from you. Tell us what you think of this newsletter and what else you’d like us to explore. You can reach us at ontech@nytimes.com.

If you don’t already get this newsletter in your inbox, please sign up here. You can also read past On Tech columns.
