How a Stabbing in Israel Echoes Through the Fight Over Online Speech

Source: The New York Times

WASHINGTON — Stuart Force says he found solace on Facebook after his son was stabbed to death in Israel by a member of the militant group Hamas in 2016. He turned to the site to read hundreds of messages offering condolences on his son’s page.

But only a few months later, Mr. Force had decided that Facebook was partly to blame for the death, because the algorithms that power the social network helped spread Hamas’s content. He joined relatives of other terror victims in suing the company, arguing that its algorithms aided the crimes by regularly amplifying posts that encouraged terrorist attacks.

The legal case ended unsuccessfully last year when the Supreme Court declined to take it up. But arguments about the algorithms’ power have reverberated in Washington, where some members of Congress are citing the case in an intense debate about the law that shields tech companies from liability for content posted by users.

At a House hearing on Thursday with the chief executives of Facebook, Twitter and Google about the spread of misinformation, some lawmakers are expected to focus on how the companies’ algorithms are written to generate revenue by surfacing posts that users are inclined to click on and respond to. And some will argue that the law that protects the social networks from liability, Section 230 of the Communications Decency Act, should be changed to hold the companies responsible when their software turns the services from platforms into accomplices for crimes committed offline.

“The last few years have proven that the more outrageous and extremist content social media platforms promote, the more engagement and advertising dollars they rake in,” said Representative Frank Pallone Jr., the chairman of the Energy and Commerce Committee, which will question the chief executives.

“By now it’s painfully clear that neither the market nor public pressure will stop social media companies from elevating disinformation and extremism, so we have no choice but to legislate, and now it’s a question of how best to do it,” Mr. Pallone, a New Jersey Democrat, added.

Former President Donald J. Trump called for a repeal of Section 230, and President Biden made a similar comment while campaigning for the White House. But a repeal looks increasingly doubtful, with lawmakers focusing on smaller possible changes to the law.

Altering the legal shield to account for the power of the algorithms could reshape the web, because algorithmic sorting, recommendation and distribution are common across social media. The systems decide what links are displayed first in Facebook’s News Feed, which accounts are recommended to users on Instagram and what video is played next on YouTube.
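
As a rough illustration of what engagement-driven ranking can look like, here is a minimal Python sketch. The Post fields, the interaction weights and the time decay are invented for the example; they do not reflect any platform’s actual formula.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int
    age_hours: float

def engagement_score(post: Post) -> float:
    # Comments and shares are weighted above passive likes;
    # the weights here are invented for the example.
    raw = post.likes + 4 * post.comments + 8 * post.shares
    # Decay older posts so recent, active content floats to the top.
    return raw / (1.0 + post.age_hours)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Surface first the posts predicted to draw the most clicks
    # and replies; that ordering is what the debate is about.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("quiet update", likes=12, comments=1, shares=0, age_hours=2.0),
    Post("outrage bait", likes=50, comments=40, shares=30, age_hours=2.0),
])
# feed[0] is "outrage bait": more comments and shares at the same age.
```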

The industry, free-speech activists and other supporters of the legal shield argue that social media’s algorithms are applied equally to posts regardless of the message. They say the algorithms work only because of the content provided by users and are therefore covered by Section 230, which protects sites that host people’s posts, photos and videos.

Courts have agreed. A federal district judge said even a “most generous reading” of the allegations made by Mr. Force “places them squarely within” the immunity granted to platforms under the law.

A spokesman for Facebook declined to comment on the case but pointed to comments from its chief executive, Mark Zuckerberg, supporting some changes to Section 230. Elena Hernandez, a spokeswoman for YouTube, which is owned by Google, said the service had made changes to its “search and discovery algorithms to ensure more authoritative content is surfaced and labeled prominently in search results and recommendations.”

Twitter noted that it had proposed giving users more choice over the algorithms that ranked their timelines.

“Algorithms are fundamental building blocks of internet services, including Twitter,” said Lauren Culbertson, Twitter’s head of U.S. public policy. “Regulation must reflect the reality of how different services operate and content is ranked and amplified, while maximizing competition and balancing safety and free expression.”
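
Twitter has only outlined that proposal publicly. As a hypothetical sketch of what user choice over ranking could mean, a timeline could accept a pluggable ranking function, continuing the Python example above; the strategy names and functions here are illustrative, not Twitter’s design.

```python
from typing import Callable

# Post and engagement_score are reused from the sketch above.
RankingStrategy = Callable[[list[Post]], list[Post]]

def chronological(posts: list[Post]) -> list[Post]:
    # Newest first; no engagement weighting at all.
    return sorted(posts, key=lambda p: p.age_hours)

STRATEGIES: dict[str, RankingStrategy] = {
    "latest": chronological,
    "top": lambda posts: sorted(posts, key=engagement_score, reverse=True),
}

def build_timeline(posts: list[Post], choice: str) -> list[Post]:
    # The user's setting, not the platform alone, selects the ranker.
    return STRATEGIES[choice](posts)
```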


Mr. Force’s case began in March 2016 when his son, Taylor Force, 28, was killed by Bashar Masalha while walking to dinner with graduate school classmates in Jaffa, an Israeli port city. Hamas, a Palestinian group, said Mr. Masalha, 22, was a member.

In the ensuing months, Stuart Force and his wife, Robbi, worked to settle their son’s estate and clean out his apartment. That summer, they got a call from an Israeli litigation group, which had a question: Would the Force family be willing to sue Facebook?

After Mr. Force spent some time on a Facebook page belonging to Hamas, the family agreed to sue. The lawsuit fit into a broader effort by the Forces to limit the resources and tools available to Palestinian groups. Mr. Force and his wife allied with lawmakers in Washington to pass legislation restricting aid to the Palestinian Authority, which governs part of the West Bank.

Their lawyers argued in an American court that Facebook gave Hamas “a highly developed and sophisticated algorithm that facilitates Hamas’s ability to reach and engage an audience it could not otherwise reach as effectively.” The lawsuit said Facebook’s algorithms had not only amplified posts but had aided Hamas by recommending groups, friends and events to users.

The federal district judge, in New York, ruled against the claims, citing Section 230. The lawyers for the Force family appealed to a three-judge panel of the U.S. Court of Appeals for the Second Circuit, and two of the judges ruled entirely for Facebook. The other, Judge Robert Katzmann, wrote a 35-page dissent to part of the ruling, arguing that Facebook’s algorithmic recommendations shouldn’t be covered by the legal protections.

“Mounting evidence suggests that providers designed their algorithms to drive users toward content and people the users agreed with — and that they have done it too well, nudging susceptible souls ever further down dark paths,” he said.

Late last year, the Supreme Court rejected a call to hear a different case that would have tested the Section 230 shield. In a statement attached to the court’s decision, Justice Clarence Thomas called for the court to consider whether Section 230’s protections had been expanded too far, citing Mr. Force’s lawsuit and Judge Katzmann’s opinion.

Justice Thomas said the court didn’t need to decide at that moment whether to rein in the legal protections. “But in an appropriate case, it behooves us to do so,” he said.

Some lawmakers, lawyers and academics say recognition of the power of social media’s algorithms in determining what people see is long overdue. The platforms usually do not reveal exactly what factors the algorithms use to make decisions and how they are weighed against one another.

“Amplification and automated decision-making systems are creating opportunities for connection that are otherwise not possible,” said Olivier Sylvain, a professor of law at Fordham University, who has made the argument in the context of civil rights. “They’re materially contributing to the content.”

That argument has appeared in a series of lawsuits that contend Facebook should be responsible for discrimination in housing when its platform could target advertisements according to a user’s race. A draft bill produced by Representative Yvette D. Clarke, Democrat of New York, would strip Section 230 immunity from targeted ads that violated civil rights law.

A bill introduced last year by Representatives Tom Malinowski of New Jersey and Anna G. Eshoo of California, both Democrats, would strip Section 230 protections from social media platforms when their algorithms amplified content that violated some antiterrorism and civil rights laws. The news release announcing the bill, which will be reintroduced on Wednesday, cited the Force family’s lawsuit against Facebook. Mr. Malinowski said he had been inspired in part by Judge Katzmann’s dissent.

Critics of the legislation say it may violate the First Amendment and, because there are so many algorithms on the web, could sweep up a wider range of services than lawmakers intend. They also say there’s a more fundamental problem: Regulating algorithmic amplification out of existence wouldn’t eliminate the impulses that drive it.

“There’s a thing you kind of can’t get away from,” said Daphne Keller, the director of the Program on Platform Regulation at Stanford University’s Cyber Policy Center, “which is human demand for garbage content.”
