FTC Mandates Algorithm Destruction for Improper Use of AI

Under the provisions of the Federal Trade Commission Act (the Act), the Federal Trade Commission (FTC) has the authority to enforce various antitrust and consumer protection laws affecting most industries. In particular, section 5(a) of the Act empowers the FTC to investigate and prevent “unfair methods of competition, and unfair or deceptive acts or practices affecting commerce.”

Although the agency’s authority can be invoked in numerous situations, recent FTC activity indicates that the use of automated decision-making processes, including artificial intelligence systems, is at the forefront of its regulatory agenda, and that the agency may also require violators to destroy the technologies and algorithms that enable unfair or deceptive practices.

FTC’s AI Enforcement Action History

On April 8, 2020, the FTC released a blog post titled Using Artificial Intelligence and Algorithms, which outlined recommended best practices for using artificial intelligence (AI) and emphasized the agency’s commitment to enforcing laws relevant to AI use, such as the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA), both of which govern automated decision-making processes.

The FTC reinforced its stance on May 7, 2021, when it announced an enforcement action resulting in a settlement with Everalbum, Inc. for, among other things, violating section 5(a) of the Act by collecting consumers’ personal photographs and using them to “train its face recognition technology” without obtaining those consumers’ prior consent or allowing them to opt out. As part of the settlement agreement, the FTC required Everalbum, Inc. to delete all data, models, and algorithms that it had developed using the improperly collected data.

The FTC then issued a blog post and a notice stating its intent to continue using its authority to investigate the use of algorithms and AI technology in order to “ensure that algorithmic decision-making does not result in unlawful discrimination.”

Case Study: WW International, Inc.

On March 3, 2022, the FTC entered into a settlement agreement with WW International, Inc. after the agency determined that WW and its subsidiary, Kurbo, Inc., had committed, among other things, violations of section 5(a) of the Act.

According to the complaint in that case, WW and Kurbo created and operated a weight-loss app and website to provide their services to consumers. The app permitted a user to create an account and begin using its features if the user indicated that they were a parent signing up on behalf of their child, or that they were a child at least 13 years of age.

However, according to the complaint, WW did not obtain “verifiable parental consent” as required by the Children’s Online Privacy Protection Act (COPPA), 15 U.S.C. § 6502(b)(1)(A)(ii). Under the statute, verifiable parental consent “means any reasonable effort (taking into consideration available technology), including a request for authorization for future collection, use, and disclosure described in the notice, to ensure that a parent of a child receives notice of the operator’s personal information collection, use, and disclosure practices, and authorizes the collection, use, and disclosure, as applicable, of personal information and the subsequent use of that information before that information is collected from that child.”

As a result of this improper registration process, at least 18,600 users of the application were children under the age of 13.

Additionally, until November 2019, the application did not consistently notify parents that WW and Kurbo were collecting personal information from users. When notice was provided, it was buried in a series of hyperlinks that parents were not required to follow, and it did not clarify that the personal information would be collected from the child using the application rather than from the parent.

Under the settlement order, WW and Kurbo incurred monetary penalties of $1.5 million and were required to destroy all improperly collected consumer personal data as well as the algorithms and models developed from that data. The companies were also required to refrain from collecting personal information from children in the future without complying with COPPA, including by ensuring that parents receive direct notice and provide verifiable consent.

Although this enforcement action did not stem directly from WW’s and Kurbo’s improper use of algorithms or AI, it indicates that the FTC may use a violation of a federal privacy law to penalize an organization for its secondary use of improperly collected data by mandating the destruction of its algorithms.

As such, organizations and financial institutions regulated by the FTC should consider reviewing their AI or machine learning systems to ensure that the data used to create or maintain those systems were not collected in an improper manner and comply with the FTC’s regulations and published guidance on best practices.
