Massachusetts is one of the first states to put legislative guardrails around the use of facial recognition technology in criminal investigations.
Though police have been using facial recognition technology for the last two decades to try to identify unknown people in their investigations, the practice of putting the majority of Americans into a perpetual photo lineup has gotten surprisingly little attention from lawmakers and regulators. Until now.
Lawmakers, civil liberties advocates and police chiefs have debated whether and how to use the technology because of concerns about both privacy and accuracy. But figuring out how to regulate it is tricky. So far, that has meant an all-or-nothing approach. City Councils in Oakland, Portland, San Francisco, Minneapolis and elsewhere have banned police use of the technology, largely because of bias in how it works. Studies in recent years by MIT researchers and the federal government found that many facial recognition algorithms are most accurate for white men, but less so for everyone else.
At the same time, automated facial recognition has become a powerful investigative tool, helping to identify child molesters and, in a recent high-profile example, people who participated in the Jan. 6 riot at the Capitol. Law enforcement officials in Vermont want the state’s ban lifted because there “could be hundreds of kids waiting to be saved.”
That’s why a new law in Massachusetts is so interesting: It’s not all or nothing. The state managed to strike a balance on regulating the technology, allowing law enforcement to harness the benefits of the tool while building in protections that might prevent the kind of false arrests that have occurred elsewhere.
A police reform bill that goes into effect in July creates new guardrails: Police first must get a judge’s permission before running a face recognition search, and then have someone from the state police, the F.B.I. or the Registry of Motor Vehicles perform the search. A local officer can’t just download a facial recognition app and do a search.
The law also creates a commission to study facial recognition policies and make recommendations, such as whether a criminal defendant should be told that they were identified using the technology.
If you ask lawmakers in the state how they pulled it off, they will frequently refer you to one person: Kade Crockford, an activist at the ACLU of Massachusetts.
“One of my concerns was that we would wake up one day in a world resembling that depicted in the Philip K. Dick novel ‘Minority Report,’ where everywhere you go, your body is tracked; your physical movements, habits, activities and locations are secretly compiled and tracked in a searchable database available to god knows who,” Mx. Crockford said.
In June 2019, Mx. Crockford and the ACLU of Massachusetts launched a campaign against face surveillance, educating policymakers about problems with the technology and investigating, via public records requests, how widely it was used in the state. That month, the city of Somerville, outside Boston, passed the state’s first ban on government use of the technology. It was the second city in the country to do so, after San Francisco the month before.
“We don’t want to play Whac-a-Mole and ban every new dystopic piece of surveillance technology,” said Ben Ewen-Campen, a member of the Somerville City Council. “We want an opt-in dynamic where if society decides they want it, they can have it.”
The ACLU submitted over 400 public records requests to state and federal agencies and found that police regularly used the technology to identify people, usually by running their faces against the state database of driver’s license photos.
One record it received was a September 2015 memo sent by a Massachusetts state police officer to all “local, state and federal law enforcement agencies,” alerting them to a new Registry of Motor Vehicles email address for facial recognition searches. If you didn’t know someone’s identity and wanted to see if their face matched that of a Massachusetts driver, all you had to do was email a photo to that address.
“There was no mention of any policy or legal analysis or legal threshold that law enforcement would have to meet for one of these searches to be performed,” said Mx. Crockford.
Emails from local police agencies turned over to the ACLU also revealed that over the last year or so, a number of officers signed up for trial accounts with Clearview AI, an app that searches for someone’s face from billions of photos on the public web.
By 2020, Boston and five more cities in Massachusetts had banned government use of facial recognition. State Representative Dave Rogers, a Democrat who helped craft the state’s facial recognition bill, said the initiatives by cities and towns helped demonstrate the need for a statewide measure. “We saw that law enforcement was using it in a completely unfettered way,” Rep. Rogers said. “Technology in our society is advancing much more rapidly than the law that regulates it.”
A bill passed by the Democratic-controlled legislature banned almost all government use of facial recognition technology, except for the Registry of Motor Vehicles, which uses it to prevent identity theft. The registry could run searches for police only with a search warrant. (A warrant is also required under a Washington state law that takes effect in July.)
But Massachusetts’ Republican governor, Charlie Baker, threatened to veto the measure.
“I’m not going to sign a bill into law that bans facial recognition,” Mr. Baker said, according to a local report, citing its use in solving two cases of homicide and child sexual abuse.
Though it was a small part of a larger police reform bill, the facial recognition guidelines attracted attention. NBA player Jaylen Brown and his Celtics teammates submitted an opinion article to the Boston Globe decrying the technology’s racial bias problems and supporting the regulation.
“Despite our positions and profiles as professional athletes, we are not immune to racial profiling and discriminatory policing,” they wrote. “Studies confirm that face recognition surveillance technology is flawed and biased, with significantly higher error rates when used against people of color and women.”
“We can’t allow biased technology to supercharge racist policing in the Commonwealth,” they added.
Eventually the legislators and the governor reached a compromise, in the form of the pending regulations.
Some critics, including other ACLU offices, say that facial recognition is uniquely harmful and must be banned. Police unions and the Boston Police Department did not respond to requests for comment. Ryan Walsh, a public information officer with the Springfield, Mass., police department, indicated that the department does not see this measure as the last word on how law enforcement can use this technology.
“While we do not currently use or have plans to use any facial recognition software, we hope the law evolves as the technology evolves and improves,” he said.
Mx. Crockford, who has been working on technology and surveillance issues since joining the ACLU of Massachusetts in 2009, said that it was “politically impossible” to ban the use of facial recognition in the state. But she believes that additional guidelines will help prevent abuse and false arrests.
Mr. Rogers and State Senator Cynthia Creem have introduced a new bill with restrictions that include curbs on the use of the technology in public places.
“In our view, this is very much not done,” Mx. Crockford said.
"how" - Google News
February 27, 2021 at 05:00PM
https://ift.tt/3r1SC8z
How One State Managed to Actually Write Rules on Facial Recognition - The New York Times
"how" - Google News
https://ift.tt/2MfXd3I
Bagikan Berita Ini
0 Response to "How One State Managed to Actually Write Rules on Facial Recognition - The New York Times"
Post a Comment