Rite Aid hit with a five-year freeze on facial recognition

US FTC alleges the technology produced false matches and racial bias.


Rite Aid has agreed to a five-year ban on using facial recognition technology for surveillance as part of a settlement with the US Federal Trade Commission (FTC).

The decision follows FTC charges that the technology was misused, harming consumers. From 2012 to 2020, Rite Aid used AI-driven facial recognition to detect shoplifters. The FTC said the system wrongly identified customers as previous shoplifters and "disproportionately impacted people of color."

Rite Aid used AI-powered, face-scanning technology in hundreds of its stores to identify individuals on surveillance footage, aiming to reduce shoplifting and address other customer-related issues. The FTC said that due to the lack of proper safeguards and the technology's history of inaccuracies and racial bias, Rite Aid staff often wrongfully accused customers of theft.

“Rite Aid's reckless use of facial surveillance systems left its customers facing humiliation and other harms, and its order violations put consumers’ sensitive information at risk," Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said in a news release. “Today’s groundbreaking order makes clear that the Commission will be vigilant in protecting the public from unfair biometric surveillance and unfair data security practices.”

According to the FTC, Rite Aid's system used facial recognition to scan customers as they entered the store, comparing their faces with an extensive database of individuals suspected or confirmed as shoplifters. If the system found a match, it alerted staff to monitor that shopper closely. The photo database was filled with low-quality images from grainy surveillance footage and cell phones, leading to unreliable matches. Those false matches led staff to follow customers around stores or call the police without having witnessed any crime.
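The FTC complaint does not detail Rite Aid's implementation, but the enroll-and-match flow described above typically boils down to comparing a face embedding extracted from a camera frame against a watchlist of enrolled embeddings and alerting staff when a similarity score clears a threshold. The Python sketch below is a minimal, hypothetical illustration (the embedding size, threshold, and function names are assumptions, not details from the case); it shows where low-quality enrollment photos hurt: noisier embeddings push unrelated faces above a fixed threshold, producing the kind of false matches the FTC described.

```python
import numpy as np

# Hypothetical 128-dimensional face embeddings; a real system would compute
# these with a face-recognition model applied to camera frames and to the
# enrollment photos in the "persons of interest" database.
EMBEDDING_DIM = 128
MATCH_THRESHOLD = 0.80  # similarity above this value triggers a staff alert


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def check_visitor(visitor_embedding: np.ndarray,
                  watchlist: dict[str, np.ndarray]) -> list[str]:
    """Return watchlist entries whose similarity exceeds the threshold.

    Grainy CCTV stills and phone snapshots yield noisy enrollment
    embeddings, so the same fixed threshold produces more false matches
    than it would with high-quality enrollment images.
    """
    return [
        entry_id
        for entry_id, enrolled in watchlist.items()
        if cosine_similarity(visitor_embedding, enrolled) >= MATCH_THRESHOLD
    ]


if __name__ == "__main__":
    # Toy data standing in for real embeddings.
    rng = np.random.default_rng(0)
    watchlist = {f"entry-{i}": rng.normal(size=EMBEDDING_DIM) for i in range(3)}
    visitor = rng.normal(size=EMBEDDING_DIM)
    matches = check_visitor(visitor, watchlist)
    print("alert staff" if matches else "no match", matches)
```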

Federal officials also pointed out that Rite Aid did not inform its customers about the use of facial recognition technology and instructed its employees to keep its usage confidential from both customers and the media. The FTC claimed that Rite Aid collaborated with two unnamed companies to build its "persons of interest" database containing tens of thousands of images.

Rite Aid said that its deal with the FTC must be approved by the court overseeing its ongoing bankruptcy case. The company filed for bankruptcy protection in October.

"The allegations relate to a facial recognition technology pilot program the company deployed in a limited number of stores," the company said in a statement. "Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC’s investigation regarding the Company’s use of the technology began."

The FTC ban exemplifies the decentralized nature of AI regulation in the US, said Kjell Carlsson, head of data science strategy and evangelism at Domino Data Lab.

“Unlike in the EU, where the EU AI Act promises consistent regulations backed by draconian fines focused on AI, in the US, companies face an evolving patchwork of regulations by federal agencies, state and municipal lawmakers, with comparatively minor penalties, mostly focused on the misuse of data rather than AI itself,” he added.

