The U.S. may see a decline in the use of certain technologies used by law enforcement, including facial recognition, as calls for reimagining policing continue to grow.
Already, some departments have stopped using “predictive policing” software, which some opponents consider a cover for racial profiling. And some tech companies have vowed to stop supplying law enforcement with facial recognition software, which is not always accurate.
Others say such technologies can be used effectively without exacerbating racism.
A recent article published by the American Bar Association Journal lays out decisions being made in the wake of numerous incidents of police killing Black men and women. More people are concerned these technologies violate their Fourth Amendment protection against unreasonable searches and seizures.
“Nine years ago, Santa Cruz, California’s police department was one of the first to adopt predictive policing software, which, it later claimed, had helped reduce crime,” the ABA Journal noted.
“Innovation is the key to modern policing, and we’re proud to be leveraging technology in a way that keeps our community safer,” then-Police Chief Kevin Vogel said at the time. But after the May 25 killing of George Floyd in Minneapolis sparked massive worldwide protests, as well as a debate over whether money spent on police departments should be redirected to other city, state or local facilities and services, Santa Cruz made history again, the ABA Journal reported: in late June, it became the first city in the nation to ban predictive policing.
Following that, Amazon and Microsoft both announced a moratorium on selling their facial recognition services to law enforcement agencies. IBM announced it was abandoning the facial recognition business.
The controversy over the use of certain technologies in fighting crime is based partly on the premise that they further racial profiling. There is also evidence the technologies are not always accurate and can lead to police going after innocent citizens.
“Picture a crowded street. Police are searching for a man believed to have committed a violent crime,” NBC News wrote. “To find him, (Reston, Virginia, police) feed a photograph into a video surveillance network powered by artificial intelligence. A camera, one of thousands, scans the street, instantly analyzing the faces of everyone it sees. Then, an alert: The algorithms found a match with someone in the crowd. Officers rush to the scene and take him into custody.”
As it turns out, though, the software was wrong. The man taken into custody was not the man police were looking for; he just looked similar.
And there is much more technology at play, according to the University of San Diego.
“Police have been using fingerprints to identify people for over a century. Now, in addition to facial recognition and DNA, there is an ever-expanding array of biometric (and behavioral) characteristics being utilized by law enforcement and the intelligence community,” the university states on its website. “These include voice recognition, palmprints, wrist veins, iris recognition, gait analysis and even heartbeats.”
“On the one hand, calls for police reform are causing companies and institutions to reconsider a high-tech infrastructure that civil liberties groups and activists say perpetuate racial injustice and police brutality,” the ABA Journal states. “Black Americans are disproportionately part of use-of-force incidents, studies have found. Police shoot and kill Black people at twice the rate of whites. On the other hand, lawmakers are looking at how data and tech can improve accountability and identify police officers with a pattern of misconduct.”
Predictive policing is the use of technology to determine where crime is most likely to occur.
“Police departments in some of the largest U.S. cities have been experimenting with predictive policing as a way to forecast criminal activity,” the Brennan Center for Justice explained. “Predictive policing uses computer systems to analyze large sets of data, including historical crime data, to help decide where to deploy police or to identify individuals who are purportedly more likely to commit or be a victim of a crime.”
St. Petersburg, FL, Police Chief Anthony Holloway says his department uses technology, but in a way that does not target specific individuals. Holloway is chair of the ABA Criminal Justice Section’s Law Enforcement Committee.
“The way we use predictive policing is that you look at where the incidents of crime are happening and not the individual people,” Holloway said. “We look at hot spots. If we see a bunch of burglaries have occurred, that is where we put our resources. We don’t use historic data. We look at what is happening in the past few days.”
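The hot-spot approach Holloway describes — counting where recent incidents cluster rather than profiling individuals — can be sketched in a few lines. This is an illustrative simplification, not the department’s actual system; the incident data and grid-cell size below are hypothetical.

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical recent incident reports: (report date, latitude, longitude).
incidents = [
    (date(2020, 7, 1), 27.773, -82.640),
    (date(2020, 7, 2), 27.774, -82.641),
    (date(2020, 7, 2), 27.773, -82.639),
    (date(2020, 6, 1), 27.800, -82.700),  # older than the window: ignored
]

def hot_spots(incidents, days=7, today=date(2020, 7, 3), cell=0.01):
    """Count recent incidents per grid cell; the busiest cells are 'hot spots'."""
    cutoff = today - timedelta(days=days)
    cells = Counter(
        (round(lat / cell), round(lon / cell))  # snap location to a grid cell
        for day, lat, lon in incidents
        if day >= cutoff                        # only look at the past few days
    )
    return cells.most_common()

print(hot_spots(incidents))  # → [((2777, -8264), 3)]
```

The key point of Holloway’s description is in the filter: only incidents from “the past few days” are counted, and the output ranks places, not people.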
Holloway said his department will also be getting body cameras in December, but they will not include facial recognition software.
“I don’t want that,” he said. “We want to be able to see and show the public the officers are doing what they say they are doing. We can show them a picture.”
Adam Schwartz, however, a senior staff attorney with the Electronic Frontier Foundation, a San Francisco nonprofit dedicated to protecting civil liberties and privacy in new technologies, believes most police departments use algorithms drawing on historical databases to determine where crime is most likely to happen. He thinks law enforcement should end its reliance on surveillance technologies such as predictive policing.
Some in law enforcement argue that predictive policing forecasts crimes and their locations more accurately. Critics counter that it raises issues of transparency and accountability because its algorithms rely on historical data, which can reproduce biases.
Schwartz said there is little evidence to show predictive policing reduces crime. He calls it “aggressive policing in minority communities that could violate due process and privacy protections.”
The EFF has created a one-pager on predictive policing for defense attorneys. It states that this type of surveillance “amplifies racial disparities in policing, telling us more about patterns in police records, not patterns in actual crime.” Areas most heavily patrolled are those with the most police data.
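The feedback loop the EFF describes — patrol data reflecting where police patrol rather than where crime occurs — can be illustrated with a toy simulation. The numbers below are invented for illustration: two areas have the same underlying crime rate, but one starts with more recorded incidents simply because it was patrolled more.

```python
import random

random.seed(0)

# Two areas with identical true crime rates; area 0 starts with more
# recorded incidents only because it has historically been patrolled more.
records = [10, 1]
TRUE_RATE = 0.3  # same underlying rate in both areas

for _ in range(1000):
    # Deployment follows the data: patrol the area with more records.
    patrolled = 0 if records[0] >= records[1] else 1
    # Crime occurs at the same rate everywhere, but only incidents in
    # the patrolled area are observed and added to the records.
    if random.random() < TRUE_RATE:
        records[patrolled] += 1

print(records)  # area 0's record count grows; area 1's never changes
```

Even though both areas are identical, the algorithm keeps sending patrols to area 0, whose record count alone keeps growing — the data describes patrol patterns, not crime patterns, which is the EFF’s point.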
Also, EFF Surveillance Litigation Director Jennifer Lynch spoke to the President’s Commission on Law Enforcement and the Administration of Justice in April about facial recognition technology, saying it has not had enough oversight.
“The adoption of face recognition technologies has occurred without meaningful oversight, without proper accuracy testing, and without legal protections to prevent misuse,” Lynch said. “This has led to the development of unproven systems that will impinge on constitutional rights and disproportionately impact people of color.”
EFF sent out a plea in June asking people to contact their congressional representatives and urge them to ban police use of facial recognition software.
“Cities and states across the country have banned government use of face surveillance technology, and many more are weighing proposals to do so,” EFF’s plea stated. “From Boston to San Francisco, elected officials and activists rightfully know that face surveillance gives police the power to track us wherever we go, turns us all into perpetual suspects, increases the likelihood of being falsely arrested, and chills people’s willingness to participate in First Amendment protected activities. That’s why we’re asking you to contact your elected officials and tell them to co-sponsor and vote yes on the Facial Recognition and Biometric Technology Moratorium Act of 2020.”
Congressional Democrats’ answer to the issue is the Justice in Policing Act of 2020, which repackages previously introduced legislation, including a clampdown on the use of face recognition technology. The bill is considered the biggest overhaul package of U.S. law enforcement in history. The House passed it June 25 by a largely party-line vote of 236-181. It is not expected to advance in the Senate.