Rite Aid is the third-largest drugstore chain in the country. (Image: Ildar Sagdejev, CC BY-SA 4.0)

Rite Aid agreed yesterday to stop using facial recognition technology after the Federal Trade Commission (FTC) found the retailer "failed to implement reasonable procedures and prevent harm to consumers" in hundreds of stores between 2012 and 2020.

The AI system the company used falsely tagged consumers, particularly women and people of color, as suspected shoplifters.

The flawed technology generated hundreds of false-positive alerts, leading Rite Aid employees to wrongly accuse customers of criminal behavior.

According to the FTC, employees acting on false-positive alerts followed customers around Rite Aid stores, searched them, ordered them to leave, called the police, and publicly accused them of shoplifting or other crimes. In one instance, an incorrect identification led to the search of an 11-year-old girl.

Federal regulators say the facial recognition tool was deployed mostly in areas with large Black, Latino, and Asian communities.

In a unanimous 3-0 vote, the FTC also charged that the chain built its watchlist database from low-quality photos gathered from various sources.

As part of the settlement, Rite Aid must implement comprehensive safeguards before deploying AI technology, and the company is prohibited from using it at all if it cannot control the risks to consumers.

Facial recognition is a useful tool that is increasingly common in retail, but the FTC has made clear it is also focusing on the misuse of biometric information.