This week, the Federal Trade Commission (FTC) issued a proposed order requiring Workado, a company specializing in artificial intelligence (AI) detection tools, to stop advertising how accurate those tools are unless it has suitable evidence that they are as accurate as claimed. The proposed settlement is yet another indication of the FTC's continued emphasis on tackling deceptive AI claims under a new administration.

The complaint alleged that Workado marketed its AI Content Detector as "98 percent" accurate in detecting whether text was written by AI or by humans, when in reality the accuracy rate was allegedly much lower. The complaint also alleges that the tool was trained and built to effectively analyze only academic content, rather than the various forms of marketing content Workado customers were submitting, making the 98 percent claim impossible. When independent testing measured the tool against various forms of marketing media, the accuracy rate dropped to just 53%. Continue Reading: AI Detection or AI Deception? FTC Says Be Ready to Back It Up
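The gap between the advertised and measured figures illustrates a basic evaluation point: a classifier's accuracy is only meaningful relative to the distribution of content it is tested on. A minimal sketch of why a tool tuned on academic text can look excellent in one domain and perform near chance in another (all data and function names here are hypothetical, not Workado's actual tool or test set):

```python
# Illustrative sketch: a detector evaluated on training-like academic text
# can report high accuracy there while doing little better than a coin
# flip on marketing content. All labels and predictions are hypothetical.

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical results: 1 = "AI-written", 0 = "human-written"
academic_labels = [1, 1, 0, 0, 1, 0, 1, 0, 1, 0]
academic_preds  = [1, 1, 0, 0, 1, 0, 1, 0, 1, 1]   # 9/10 correct

marketing_labels = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
marketing_preds  = [1, 1, 0, 0, 1, 1, 0, 0, 1, 1]  # 5/10 correct (near chance)

print(f"Academic accuracy:  {accuracy(academic_preds, academic_labels):.0%}")
print(f"Marketing accuracy: {accuracy(marketing_preds, marketing_labels):.0%}")
```

The point of the sketch is that a single headline accuracy number is only substantiated for the kind of content it was computed on, which is the substance of the FTC's complaint.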

Venable’s Advertising and Marketing Group hosted its 11th Advertising Law Symposium at our offices in Washington, DC on March 20. The symposium brought together both business and legal professionals, including in-house counsel and marketing executives, to connect on trends, opportunities, and challenges in the industry. The sessions covered a breadth of interesting topics on the latest and greatest in advertising law.

If you couldn't make it, here are a few themes that ran through the day's engaging conversations. Continue Reading: Key Takeaways from Venable's 11th Annual Advertising Law Symposium

In one of the first settlements since the new administration took office, the Federal Trade Commission (FTC) announced a settlement with Cleo AI, including a $17 million monetary judgment, to resolve allegations that Cleo violated Section 5 of the FTC Act and the Restore Online Shoppers' Confidence Act (ROSCA). Cleo operated a personal finance mobile app that purportedly allowed consumers to take out "instant" or same-day cash advances. The vote to authorize the settlement was 2-0.

According to the complaint, Cleo advertised that consumers could access same-day or instant cash advances of hundreds of dollars. The FTC alleged that when consumers attempted to use Cleo's services, they were required to enroll in an automatically renewing subscription service costing $5.99 or $14.99 per month. Only after consumers entered their payment information and enrolled in the subscription did Cleo disclose the cash advance amount for which they were eligible. Continue Reading: Cleo AI Settles with FTC for $17 Million for Alleged Misleading Practices and Autorenewal Violations

The Federal Trade Commission (FTC) announced a "sweep" targeting AI-related conduct this week. The cases provide insight into how the agency may approach AI-related issues going forward and illustrate how the agency's commissioners differ in their approaches to issues raised by AI.

Three of the cases involved marketers making false earnings and business opportunity claims, promising buyers income from AI-powered e-commerce storefronts. The FTC's approach here was straightforward and consistent with how it has treated other money-making claims. Not surprisingly, the cases were voted out 5-0, and the FTC has obtained asset freezes against the companies and some principals.

The other cases were more novel and highlighted some of the challenges raised by AI. Continue Reading: As FTC Begins Grappling with AI Issues, "Sweep" Signals Differing Approaches Among Commissioners

Robocalls may always have had some artificial flavor to them, but the proliferation of artificial intelligence (AI) continues to blur the line between human and machine interaction. On July 17, the Federal Communications Commission (FCC) issued a draft Notice of Proposed Rulemaking (NPRM) addressing how the Telephone Consumer Protection Act (TCPA) can restrict and regulate robocalls made using AI. The NPRM is slated for adoption at the agency's August 7 meeting and may be modified before then based on feedback from interested parties.

The draft NPRM comes after the FCC invited and received comments on the subject in November 2023. Specifically, the agency sought comment on "how AI technologies can be defined in the context of robocalls and robotexts" and on what steps it should take to advance its statutory obligations under the TCPA. Subsequently, as we've reported, the FCC took enforcement action against unlawful AI robocalls in response to increased election-year calling activity. Continue Reading: Hello, This Is AI Calling. FCC Proposes New Rules for AI Robocalls

As the election cycle heats up heading into the dog days of summer, so too does the Federal Communications Commission's scrutiny of the use of AI technology in fraudulent robocalls. As we previously discussed, the FCC has already doled out fines for the use of deepfakes in political robocalls.

Now, FCC Chairwoman Jessica Rosenworcel has ratcheted up that scrutiny, asking nine telecom companies to explain the measures they are taking to prevent deepfake robocalls. Specifically, the FCC asks the carriers to describe the steps they have taken to authenticate calls in line with the STIR/SHAKEN requirements, what resources they have dedicated to identifying generative AI voices, and what steps they have taken to verify customers' identities. Continue Reading: AI Robocalls: Election Season Triggers Additional FCC Scrutiny

In a pair of Notices of Apparent Liability for Forfeiture this week, the Federal Communications Commission (FCC) proposed a collective $8 million in fines against telecommunications company Lingo Telecom and political consultant Steven Kramer.

Robocalls, Generative AI, and Deepfakes

The FCC alleges Kramer violated the Truth in Caller ID Act. According to the FCC, two days before the New Hampshire 2024 presidential primary election, Kramer orchestrated a campaign of illegally spoofed and malicious robocalls that carried a deepfake audio recording of President Biden’s cloned voice telling prospective voters not to vote in the upcoming primary.

To transmit the calls, he worked with voice service provider Lingo Telecom, which incorrectly labeled the calls with the highest level of caller ID attestation, making it less likely that other telecommunications providers would detect the calls as potentially spoofed. For this reason, the FCC is also pursuing forfeiture against Lingo, alleging a violation of the STIR/SHAKEN rules for failing to use reasonable "Know Your Customer" protocols to verify caller ID information in connection with Kramer's alleged illegal robocalls. Continue Reading: FCC Proposes $8 Million in Fines Against Telecom Company and Political Consultant for Using Deepfake Generative Artificial Intelligence
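Under the STIR/SHAKEN framework, the originating provider signs each call with one of three attestation levels: A (full attestation, where the provider knows the customer and the customer's right to use the number), B (partial, where the provider knows the customer but not the number), or C (gateway, where it can vouch for neither). A minimal sketch of that assignment logic, using hypothetical type and field names (real providers sign a cryptographic PASSporT token rather than returning a letter):

```python
# Illustrative sketch of STIR/SHAKEN attestation assignment. The Call
# type and its fields are hypothetical; real implementations attach the
# attestation level inside a signed PASSporT token.
from dataclasses import dataclass

@dataclass
class Call:
    customer_is_verified: bool   # provider has a direct, vetted relationship
    number_is_authorized: bool   # customer has the right to use the caller ID

def attestation_level(call: Call) -> str:
    """Return the SHAKEN attestation level for an originating call."""
    if call.customer_is_verified and call.number_is_authorized:
        return "A"  # Full attestation: customer and number both verified
    if call.customer_is_verified:
        return "B"  # Partial attestation: customer known, number not verified
    return "C"      # Gateway attestation: origin cannot be vouched for

# A provider that marks unverified traffic "A" (as alleged here) signals to
# downstream carriers that the caller ID is trustworthy when it may be spoofed.
print(attestation_level(Call(customer_is_verified=False, number_is_authorized=False)))
```

This is why mislabeling matters: downstream carriers and analytics engines weigh the attestation level when deciding whether to flag or block a call, so an unwarranted "A" attestation helps spoofed calls through.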

In late March, Tennessee Governor Bill Lee signed into law the Ensuring Likeness Voice and Image Security Act of 2024—known as the “ELVIS Act”—making Tennessee the first state to address head-on potential misuses of artificial intelligence (AI) related to an individual’s voice. The law prohibits individuals from using AI to generate and distribute replicas of another’s voice or image without their prior consent.

Many prominent members of the music and entertainment community have identified Tennessee's law as an important step forward for the protection of artists' (and others') voices and likenesses. Right of publicity laws across the nation typically provide that individuals have a property right in the use of their name, photograph, and likeness. However, these laws generally do not address the use of one's voice, or the use of generative AI to exploit another's image, likeness, or voice to create unauthorized impersonations or replicas. In the age of AI cloning and deepfakes, these unauthorized works have caused great concern among those in the entertainment and media industries. The ELVIS Act is "first-of-its-kind" legislation that directly addresses these concerns by expanding Tennessee's existing protections against the unauthorized commercial use of one's rights of publicity. Continue Reading: Tennessee Out Front: Enacting Protections Against AI Misuse in the Music Industry

Venable's Advertising and Marketing Group hosted its 10th Advertising Law Symposium on March 21 in Washington, DC. The group welcomed in-house counsel, advertising executives, and marketing professionals for a full day of sessions on the latest developments in advertising law and what to watch for in the near future.

Here are some highlights:

Patchwork of Privacy Laws Makes Compliance a Challenge

Frequent data breaches and incidents like the 2018 Cambridge Analytica scandal have increased criticism of the United States' approach to regulating privacy through a patchwork of federal and state laws and industry self-regulatory codes. But even harsh critiques have not been enough to spur Congress to pass a preemptive federal privacy law that would supersede the jumble of state laws and regulations and streamline compliance. Partner Rob Hartwell and associate Allie Monticollo said marketers and advertisers should watch what's happening in the states and mitigate risk accordingly. Continue Reading: Event in Review: 10th Advertising Law Symposium

On February 15, 2024, the Federal Trade Commission (FTC) announced a two-step approach to tackling impersonation fraud. First, the FTC finalized a rule regulating the impersonation of businesses and government entities (the Impersonation Rule). Later that day, the FTC proposed a revision to the Impersonation Rule to extend liability to those impersonating individuals.

The Impersonation Rule deems it unfair or deceptive to falsely pose as or misrepresent affiliation with a government or business entity. This could include using government seals, business logos, or spoofed email addresses. Even more broadly, the rule prohibits using government or business lookalike insignias or marks without prior authorization. The rule will become effective 30 days after it has been finalized. Continue Reading: Impersonation Rulemaking: FTC Takes Steps to Tackle AI