Clearview AI Finally Takes Part in a Federal Accuracy Test

Clearview AI scraped more than 10 billion photos from the public internet to build a facial-recognition tool that it marketed to law enforcement agencies for identifying unknown people. Critics have said the company’s product is illegal, unethical and untested. Now, more than two years after law enforcement officers first started using the company’s app, Clearview’s algorithm, the software that matches faces to photos, has been put to a third-party test for the first time. It performed surprisingly well.

In a field of over 300 algorithms from over 200 facial recognition vendors, Clearview ranked among the top 10 for accuracy, alongside NtechLab of Russia, SenseTime of China and other more established outfits. But the test that Clearview took reveals how accurate its algorithm is at correctly matching two different photos of the same person, not how accurate it is at finding a match for an unknown face in a database of 10 billion of them.

The National Institute of Standards and Technology, or NIST, a unique federal agency that is also a scientific lab, administers its Face Recognition Vendor Tests every few months. There are two versions of the test: one for verification, the kind of facial recognition someone might use to unlock a smartphone, and another for what are called one-to-many, or 1:N, searches, the kind used by law enforcement authorities to identify someone by looking through a big database. Oddly, Clearview submitted its algorithm for the former test rather than the latter, even though one-to-many search is what its product is built to do.
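
To make the distinction concrete, here is a minimal, purely illustrative sketch of how 1:1 verification differs from a 1:N search. It is not Clearview’s or NIST’s code; the embeddings, threshold and helper names are hypothetical, and it assumes faces have already been converted into numeric feature vectors by some face-recognition model.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (hypothetical feature vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, claimed: np.ndarray, threshold: float = 0.6) -> bool:
    """1:1 verification: does the probe face match one specific claimed identity?
    This is the kind of check used to unlock a smartphone."""
    return cosine_similarity(probe, claimed) >= threshold

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray], threshold: float = 0.6):
    """1:N identification: search an entire gallery for the best-scoring candidate.
    This is the kind of search law enforcement tools run against large databases."""
    best_name, best_score = None, -1.0
    for name, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# Toy usage with made-up 4-dimensional embeddings.
gallery = {
    "person_a": np.array([0.1, 0.9, 0.3, 0.2]),
    "person_b": np.array([0.8, 0.1, 0.5, 0.4]),
}
probe = np.array([0.12, 0.88, 0.31, 0.19])
print(verify(probe, gallery["person_a"]))   # 1:1 check against one identity
print(identify(probe, gallery))             # 1:N search over the whole gallery
```

As the article notes, a strong result on the 1:1 verification test does not by itself establish how well a 1:N search performs once the gallery grows to billions of faces.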

Clearview AI’s C.E.O., Hoan Ton-That, called the results “an unmistakable validation” of his company’s product. He also said the company would “be submitting shortly” to the one-to-many test.

NIST has been testing the accuracy of face recognition vendors since 2000, but participation is voluntary and testing isn’t required for government agencies to buy the technology. Though its accuracy had never been audited by NIST, Clearview AI claims thousands of local and state police departments as customers; a recent report from the Government Accountability Office also cited use by a number of federal agencies, including the F.B.I., the Secret Service and the Interior Department.

Clearview AI has been sued in state and federal court in Illinois, and in Vermont, for collecting photos of people without their permission, and subjecting them to facial recognition searches. The company has also come under attack from fellow vendors, as reported by Insider, who worry that the controversy surrounding Clearview AI will cause problems for the facial recognition industry as a whole.

Source: The New York Times
