The Race for Attention on YouTube

Source: The New York Times

This article is part of the On Tech newsletter. You can sign up here to receive it weekdays.

When we get caught up in heated arguments with our neighbors on Facebook or watch politically charged YouTube videos, why do we do that? That's the question my colleague Cade Metz wants us to ask of ourselves and of the companies behind our favorite apps.

Cade’s most recent article is about Caolan Robertson, a filmmaker who for more than two years helped make videos with far-right YouTube personalities that he says were intentionally provocative and confrontational — and often deceptively edited.

Cade’s reporting is an opportunity to ask ourselves hard questions: Do the rewards of internet attention encourage people to post the most incendiary material? How much should we trust what we see online? And are we inclined to seek out ideas that stoke our anger?

Shira: How much blame does YouTube deserve for people like Robertson making videos that emphasized conflict and social divisions — and in some cases were manipulated?

Cade: It’s tricky. In many cases these videos became popular because they confirmed some people’s prejudices against immigrants or Muslims.

But Caolan and the YouTube personalities he worked with also learned how to play up or invent conflict. They could see that those kinds of videos got them attention on YouTube and other websites. And YouTube’s automated recommendations sent a lot of people to those videos, too, encouraging Caolan to do more of the same.

One of Facebook’s executives recently wrote that his company mostly isn’t to blame for pushing people toward provocative and polarizing material, and that it’s just what people want. What do you think?

There are all sorts of things that amplify our inclination toward what is sensational or outrageous, including talk radio, cable television and social media. But it’s irresponsible for anyone to say that’s just how some people are. We all have a role to play in not stoking the worst of human nature, and that includes the companies behind the apps and websites where we spend our time.

I’ve been thinking about this a lot in my reporting about artificial intelligence technologies. People try to distinguish between what people do and what computers do, as though the two are completely separate. They’re not. Humans decide what computers do, and humans use computers in ways that alter our own behavior. That’s one reason I wanted to write about Caolan. He is taking us behind the curtain to see the forces, both of human nature and of tech design, that influence what we do and how we think.

What should we do about this?

I think the most important thing is to think about what we’re really watching and doing online. What scares me is thinking about emerging technologies, including deepfakes, that will be able to generate forged, misleading or outrageous material on a much larger scale than people like Caolan ever could. It’s going to get even harder to know what’s real and what’s not.

Isn’t it also dangerous if we learn to mistrust anything that we see?

Yes. Some people in technology believe that the real risk of deepfakes is people learning to disbelieve everything — even what is real.

How does Robertson feel about making YouTube videos that he now believes polarized and misled people?

On some level he regrets what he did, or at the very least wants to distance himself from it. But he’s now essentially using the tactics he once deployed for extreme right-wing videos to make extreme left-wing ones. He’s doing the same thing on one political side that he used to do on the other.

