Speed Read

Facial Recognition Technology Has A Bias Problem (WGBH, Jan 11, 2021)
“I was locked up for no reason,” Nijeer Parks told The New York Times last month. Parks was recounting his arrest for allegedly shoplifting from a New Jersey Hampton Inn gift shop, giving officers a false ID and then making a run for it in a rental car. Except that Parks had proof that he was 30 miles away when all of that happened. But police were convinced they had the right man — they used facial recognition to match Nijeer Parks’ face to that of the Black man pictured on the phony ID.
 

The Technology 202: Trump Is Considering Building His Own Social Network. But It Might Not Get Him What He Wants. (Washington Post, Jan 11, 2021)
Trump is considering starting his own social network after Twitter permanently banned him on Friday. The scramble reflects the president's struggle to find a new channel of communication after heavily relying on 280-character missives to govern for the past four years, as my colleagues Tony Romm and Josh Dawsey reported. Trump teased the plans for a new social network in a tweet posted to the official @POTUS account Friday night, which was later removed by Twitter.
 

Experian Warns Of Facial Recognition, Synthetic ID Fraud (PaymentsSource, Jan 11, 2021)
The widespread shift to e-commerce and touchless payments during the pandemic has escalated fraud risk in those channels, including the possibility of fraudsters combining altered photos with synthetic ID, Experian warns. A trick Experian is calling “Frankenstein IDs” could see fraudsters this year using machine learning to invent fake facial images, which combined with fictional identities could add a new and more virulent edge to fast-growing synthetic ID fraud, the global information company said in a new forecast.
 

Surveillance Tech Is Not Accomplishing The Things It’s Supposed To (Marketplace, Jan 11, 2021)
The federal government, along with state and local governments, spends billions of dollars every year on security and surveillance technology — in theory, to prevent things like the attack on the U.S. Capitol that happened last week. It’s sophisticated, comprehensive and creates a whole lot of privacy concerns, but it also might not be accomplishing the right things. I spoke with Alvaro Bedoya, director of the Center on Privacy & Technology at Georgetown. The following is an edited transcript of our conversation.
 

Intel Launches Facial Recognition Solution Amid Debate Around The Tech’s Biases (Medianama, Jan 11, 2021)
The chipmaker Intel has now launched a facial recognition solution, which the company says will work with smart locks, access control, point-of-sale devices, ATMs and kiosks, among others. Called RealSense ID, the solution is built on Intel’s depth-sensing technology and a dedicated system-on-a-chip, with an embedded secure element to encrypt and process user data “quickly and safely”. While Intel listed all possible use cases of the facial recognition solution, it did not specify whether it would be offering this technology to law enforcement agencies, though it did say that it was “working to ensure the ethical application of RealSense and the protection of human rights”. The tech, Intel said, processes all facial images locally and encrypts all user data. The solution is also only activated through user awareness and will not authenticate unless prompted by a pre-registered user.
 

Police Use of Clearview AI’s Facial Recognition Tech Spiked After Capitol Raid (Gizmodo, Jan 11, 2021)
Clearview AI’s controversial facial-recognition app has seen a spike in use as police track down the pro-Trump insurgents who descended on the Capitol on Wednesday. With so many of those idiots snapping selfies and livestreaming the raid as if it were a school field trip, I’m surprised police even needed the help. As the New York Times first reported, Clearview AI CEO Hoan Ton-That confirmed to Gizmodo that the app saw a 26% jump in search volume on Jan. 7 compared to its usual weekday averages. Given the aforementioned treasure trove of potential evidence documenting the attack — from live cable news broadcasts to hundreds of images and videos — the Federal Bureau of Investigation and Washington, D.C. police have called for the public’s help in identifying participants. Roughly 2,400 police agencies nationwide have contracts to use Clearview’s facial recognition software, according to the company, and several of them have reportedly been turning to it to assist federal investigators.
 

New Administration, New Hopes For Aviation Funds (Airport Experience News, Jan 11, 2021)
After a $10 billion COVID-19 relief package passed last summer and a smaller $2 billion in supplemental money was approved for airports in December, the industry is now turning its efforts toward funding for 2021. “We’re expecting another round of coronavirus relief to be one of the first things up [in the new administration], followed by infrastructure, followed by a focus on environmental issues,” said Joel Bacon, executive vice president, government and public affairs, American Association of Airport Executives (AAAE). “I think those are the big three buckets we’re expecting to be front and center. We’re obviously going to have a stake in all of those, particularly the first two.”
 

Is Your iPhone Passcode Off Limits to the Law? Supreme Court Ruling Sought (Wall Street Journal, Jan 11, 2021)
Two civil-liberties groups are asking the U.S. Supreme Court to rule on an increasingly relevant digital-privacy question: Do Americans have a constitutional right to keep their passwords and passcodes secret? It’s a thorny legal issue, and one that is unsettled in the U.S., according to lawyers at the American Civil Liberties Union and the Electronic Frontier Foundation, who on Thursday filed a petition with the Supreme Court asking it to decide the matter once and for all. The initiative is the latest twist in a tug of war between technology companies, which have radically increased the security of their products over the past decade, and law-enforcement authorities, who have increasingly relied on digital evidence to make their cases.
 

Google Chrome Privacy Plan Faces U.K. Competition Probe (Wall Street Journal, Jan 11, 2021)
U.K. antitrust officials are investigating whether Google’s plan to remove some user-tracking tools from its Chrome browser could hurt competition in the online-advertising industry. The U.K.’s Competition and Markets Authority said it has opened a formal probe into Google’s plan for Chrome to end support next year for third-party cookies, a technology many companies use to track individuals’ browsing habits across multiple websites.
 

He Created the Web. Now He’s Out to Remake the Digital World (NY Times, Jan 11, 2021)
Tim Berners-Lee wants to put people in control of their personal data. He has technology and a start-up pursuing that goal. Can he succeed?
 

The Facial-recognition App Clearview Sees A Spike In Use After Capitol Attack (NY Times, Jan 11, 2021)
Law enforcement has used the app to identify perpetrators, Clearview AI’s C.E.O. said.
 

Bill To Limit Police Biometrics Use Tabled In New York State As Legislators Struggle With Facial Recognition (Biometric Update, Jan 11, 2021)
A state legislator in New York has introduced a curious bill to prohibit arrests on the sole basis of “facial recognition and biometric information,” as policy-makers struggle with the deployment of technology that law enforcement officials consistently say is beneficial, but is too frequently misused. Bill A00768 was introduced by Democrat Assembly Member Linda Rosenthal, and would apply to fingerprint and DNA biometrics, which are recognized as deterministic and therefore generally admissible as evidence in court, the same criteria currently applied to facial recognition. The inadmissibility of facial recognition as legal grounds for probable cause has apparently not prevented the wrongful arrest of at least three individuals in the U.S.
 

Facial Recognition Technology Can Expose Political Orientation From Naturalistic Facial Images (Homeland Security Review, Jan 11, 2021)
Using the profile pictures of over one million participants, researchers at Stanford show that a widespread facial recognition algorithm can expose people’s political orientation with stunning accuracy. Political orientation was correctly classified in 72% of liberal–conservative face pairs, remarkably better than chance (50%). Accuracy was similar across countries (the U.S., Canada, and the UK) and online platforms (Facebook and a dating website).

 

Copyright © 2024 by the International Biometrics & Identity Association (IBIA)