As the election cycle heats up heading into the dog days of summer, so too does the Federal Communications Commission’s scrutiny of the use of AI technology in fraudulent robocalls. As we previously discussed, the FCC has already doled out fines for the use of deepfakes in political robocalls.

Now, FCC Chairwoman Jessica Rosenworcel has ratcheted up that scrutiny, asking nine telecom companies to explain the measures they are taking to prevent deepfake robocalls. Specifically, the FCC asked the carriers to describe the steps they have taken to authenticate calls in line with the STIR/SHAKEN requirements. The agency’s inquiry also asks what resources the carriers have dedicated to identifying generative AI voices and what steps they have taken to verify customers’ identities.

This latest inquiry is just another step in the FCC’s continued scrutiny of AI robocalls and robotexts. Several weeks ago, members of Congress asked Rosenworcel whether existing law gives the agency the tools it needs to combat impersonation scams and fraud. In response to this political pressure, Rosenworcel explained the steps the FCC has taken to protect consumers from fraudulent scam text messages. Perhaps most notably, she reiterated that “the Commission is always willing to explore options for stopping these messages before they reach consumers,” including welcoming “discussion of any new legislative proposals [Congress] may have on this topic.”

With a focus on the role of telecommunications carriers in preventing unlawful robocalls and texts, and with the heightened stakes of an election year, the FCC continues to develop its regulatory strategies to stay ahead of the rapidly evolving technologies employed by bad actors.

For more insights into advertising law, bookmark our All About Advertising Law blog and subscribe to our monthly newsletter. To learn more about Venable’s Telecommunications services, click here or contact the author.