Chris Mufarrige, the director of the FTC’s Bureau of Consumer Protection, spoke last week at the National Advertising Division’s Annual Conference in Washington, providing further insight into how the FTC is thinking about key issues.

Mufarrige focused his remarks on privacy and AI. He said he views the basic principles underlying all consumer protection as ensuring that consumers can make well-informed choices and that companies keep their promises.

FTC’s Evolving Approach to Privacy Enforcement

Mufarrige noted that individual preferences make abstract rules governing privacy difficult to draft and administer. He criticized the Lina Khan-led FTC for its efforts to use Section 5 of the FTC Act as an omnibus privacy statute. He said the agency should instead focus enforcement on specific privacy statutes, such as the Children’s Online Privacy Protection Act (COPPA), and use Section 5’s unfairness authority only where economic analysis shows consumer harm.

FTC Commissioner Mark Meador spoke at the National Advertising Division’s Annual Conference this week in Washington and provided some insight into his views on advertising and consumer protection. 

Meador began by noting that he was an antitrust lawyer prior to becoming a commissioner, with limited exposure to consumer protection issues. He noted that many antitrust matters contest subtle issues of market definition and the anticompetitive effects likely to occur in the future. 

On the other hand, Meador described many of the cases brought by the FTC’s Bureau of Consumer Protection as fighting evil and involving conduct that morally shocked him. He threw in a quote from Leviticus 19:35-36 to make his point: “Do not use dishonest standards when measuring length, weight, or volume. Your scales and weights must be accurate. Your containers for measuring dry materials or liquids must be accurate.”

In the second landmark decision this week on whether the use of copyrighted content for training generative AI qualifies as a fair use, Judge Chhabria of the federal court for the Northern District of California granted summary judgment in favor of Meta Platforms Inc. (Meta), finding that Meta’s copying of 13 bestselling authors’ books as training data for its large language model (LLM) “Llama” was a fair use. Kadrey, et al. v. Meta Platforms, Inc., Case No. 23-cv-0317-VC. This groundbreaking decision out of the NDCA follows Judge Alsup’s ruling earlier this week that Anthropic’s use of legally obtained books for training its LLMs was a fair use, Bartz et al. v. Anthropic PBC, which we covered here.

The orders in both cases determined that the LLM’s use of copyrighted data for training generative AI was “highly transformative” and that the first copyright fair use factor therefore weighed heavily in favor of the AI developers. In both cases, the plaintiffs were unable to demonstrate sufficient market harm to overcome the heavy weight placed on the transformative nature of the AI models. The decisions, however, differed notably as to each judge’s consideration of the source of the copyrighted works and whether the works were obtained through authorized channels or from “pirate websites.”

On June 23, 2025, Judge Alsup in the Northern District of California issued an order in Bartz et al. v. Anthropic PBC, granting in part and denying in part Defendant Anthropic’s motion for summary judgment on the sole issue of whether its use of Plaintiffs’ books as training data for Anthropic’s large language models (LLMs) was “quintessential” fair use.

Central to its mixed holding, the court acknowledged that Anthropic used the works in various ways and for varying purposes, such that each “use” must be identified and assessed separately. Ultimately, the court held that while the use of textual works to train LLMs was “exceedingly transformative” and thereby was protected as fair use when considered against the remaining factors, the separate use of the works to create a central library was only fair use with respect to works purchased or lawfully accessed—i.e., the use of pirated copies to create the central library was not protectible fair use. This decision makes clear that the source of content is a key element in evaluating fair use.

This week, the Federal Trade Commission (FTC) issued a proposed order requiring Workado, a company specializing in artificial intelligence (AI) detection tools, to stop advertising the accuracy of its AI detection tools unless it has suitable evidence that the detection tools are as accurate as claimed. The proposed settlement is yet another indication of the FTC’s continued emphasis on tackling deceptive AI technology under a new administration.

The complaint alleged that Workado marketed its AI Content Detector as “98 percent” accurate in detecting whether text was written by AI or by humans when, in reality, the accuracy rate was much lower. The complaint also alleged that the AI detection tool was trained and built to effectively analyze only academic content, rather than the various forms of marketing content Workado customers were submitting, making the 98% claim impossible. When independent testing measured the tool against various forms of marketing media, the accuracy rate dropped to just 53%.

Venable’s Advertising and Marketing Group hosted its 11th Advertising Law Symposium at our offices in Washington, DC on March 20. The symposium brought together both business and legal professionals, including in-house counsel and marketing executives, to connect on trends, opportunities, and challenges in the industry. The sessions covered a breadth of interesting topics on the latest and greatest in advertising law.

If you couldn’t make it, here are some of the themes that ran through the day’s engaging conversations.

In one of the first settlements since the new administration took office, the Federal Trade Commission (FTC) announced a settlement with Cleo AI, including a $17 million monetary judgment, to resolve allegations that Cleo violated Section 5 of the FTC Act and the Restore Online Shoppers’ Confidence Act (ROSCA). Cleo operated a personal finance mobile app that purportedly allowed consumers to take out “instant” or same-day cash advances. The vote to authorize the settlement was 2-0.

According to the complaint, Cleo advertised that consumers could access same-day or instant cash advances of hundreds of dollars. The FTC alleged that when consumers attempted to use Cleo’s services, they were required to enroll in an automatically renewing subscription costing $5.99 or $14.99 per month. Only after consumers entered their payment information and enrolled in the subscription service did Cleo disclose the cash advance they were actually eligible for.

The Federal Trade Commission (FTC) announced a “sweep” targeting AI-related conduct this week. The cases provide insight into how the agency may approach AI-related issues going forward and illustrate differences among the agency’s commissioners in how to approach issues raised by AI.

Three of the cases involved marketers making false earnings and business opportunity claims promising buyers income from AI-generated ecommerce locations. The FTC’s approach here was straightforward and consistent with how it has approached other money-making claims. Not surprisingly, all three cases were voted out 5-0, and the FTC has obtained asset freezes against the companies and some principals.

The other cases were more novel and highlighted some of the challenges raised by AI.

Robocalls may have always had some artificial flavor to them; however, the proliferation of the use of artificial intelligence (AI) continues to blur the line between human and machine interaction. On July 17, the Federal Communications Commission (FCC) issued a draft Notice of Proposed Rulemaking (NPRM) to address the ability of the Telephone Consumer Protection Act (TCPA) to restrict and regulate robocalls made using AI. The NPRM will be finalized and adopted at the agency’s August 7 meeting and may be modified prior to that based on feedback from interested parties.

The draft NPRM comes after the FCC invited and received comments on the subject in November 2023. Specifically, the agency sought comment on “how AI technologies can be defined in the context of robocalls and robotexts” and on what steps should be taken to ensure that the FCC can carry out its statutory obligations under the TCPA. Subsequently, as we’ve reported, the FCC took enforcement action against unlawful AI robocalls in response to increased election-year calling activity.

With the election cycle heating up as we approach the dog days of summer, so too is the Federal Communications Commission’s scrutiny of the use of AI technology in fraudulent robocalls. As we previously discussed, the FCC has already doled out fines for the use of deepfakes in political robocalls.

Now, FCC Chairwoman Jessica Rosenworcel has ratcheted up scrutiny of nine telecom companies, asking them to explain the measures they are taking to prevent deepfake robocalls. Specifically, the FCC asks the carriers to describe the steps they have taken to authenticate calls in line with the STIR/SHAKEN requirements, what resources they have dedicated to identifying generative AI voices, and what steps they have taken to verify customers’ identities.