First, software was eating the world; now, it’s supposedly AI – or the data used to create that AI – that’s eating the world. Either way, the market for products and services powered by machine learning and artificial intelligence is hot, with even “traditional” companies and industries realizing the benefits of these technologies.
Ever the trend-follower, the FTC has also taken notice of these areas (*cue the dramatic music*). In the past two years, the agency has increased its focus both on AI companies specifically and on data companies more generally. While it’s clear that the FTC isn’t diving deep into the architecture of transformer models or MLOps platforms, it is developing a high-level understanding of how companies acquire data and use that data to train models variously described as machine learning, deep learning, NLP, or AI.
As the FTC’s attention and insight have grown, they’ve become more aware of how issues with data collection and acquisition interact with their jurisdiction – especially as it relates to privacy and consumers’ rights. Today’s post will cover some recent developments in FTC oversight and enforcement, as well as how organizations can avoid, identify, and mitigate the risks related to “tainted” models.
Federal Trade Commission
The FTC is responsible for enforcing over 70 different laws, but it’s best known for its purview over unfair or deceptive practices. In the machine learning context, the agency generally focuses on whether companies are using consumer data in unfair or deceptive ways. Oftentimes, this means the FTC investigates whether companies violated federal regulations governing the collection and use of consumer data.
There are multiple federal regulations that govern the collection and use of consumer data: the Children’s Online Privacy Protection Act (COPPA), for example, requires notice and verifiable parental consent prior to the collection, use, or distribution of children’s data. The FTC is able to directly fine organizations for failure to comply with COPPA; in addition, they can work with the courts to impose additional penalties.
B2C Only? Or B2B Too?
Many companies in the B2B space have historically ignored the FTC and its Bureau of Consumer Protection; if the FTC does cross their radar, it’s typically because the Bureau of Competition is involved, like in the case of a Hart-Scott-Rodino (HSR) review prior to closing a business combination.
Increasingly, however, the FTC and some courts have begun to apply “consumer protection” concepts to B2B relationships. In the context of machine learning, these causes of action often supplement traditional breach-of-contract claims involving confidentiality and purpose-of-use provisions; that said, they do evidence an increasing risk for companies that “push the boundaries” of their data acquisition strategies. For B2B companies doing business in the UK and EU, the headwinds blow even stronger.
Increased Scrutiny and Settlements
When it comes to consumer data, the FTC has clearly been stepping up its focus on illegal collection. But in many cases, companies didn’t stop at collection; they went on to use that illegally collected data for other purposes. Sometimes, those purposes were themselves illegal (i.e., “illegal use”); in other cases, while the purpose of use was not prohibited, the initial collection or other practices arguably tainted “downstream” IP.
One recent example of this latter category occurred in Weight Watchers/Kurbo’s (now just “WW”) settlement with the FTC. While there was nothing inherently illegal about a weight-loss application using machine learning to customize its programs, WW violated COPPA by failing to obtain verifiable parental consent before collecting children’s data.
Some companies take a risk-based view of fines: how do the penalties compare to the potential revenue generated by the wrongdoing? That calculus may no longer hold, as the FTC seems to be catching on to this incentive dilemma, and it has a new tool in its enforcement arsenal.