In a presentation titled “Tiny Screens, Big Distractions: How Reliable is Your Online Consumer Perception Survey?”, David Bernstein of Debevoise & Plimpton, Kevin Goldberg of Nestlé Nutrition, Hal Poret of ORC International, and Annie Ugurlayan of NAD traced the history of survey evidence before the courts. In the 1960s, surveys were treated with extreme skepticism by judges, with the number used in Lanham Act litigation before 1975 stuck in the single digits. With the additional consideration given to expert testimony by the revised Federal Rules of Evidence in the 1970s, Bernstein explained, judges became more comfortable with surveys, eventually elevating them to the position of influence they hold today.
This steady ascent of surveys has not necessarily been tied to improvements in reliability; rather, survey best practices have evolved to cope with the weaknesses of each medium. Random-digit dialing, for example, offers a seemingly unbiased respondent base, but the sample turns out to be self-limiting to the tiny minority of people who do not hang up immediately and go back to eating dinner. Mall-intercept surveys, while good for allowing respondents to physically interact with packaging, may present items out of the typical consumer context.
Online surveys, although they have quickly become the most widespread format, are similarly burdened with limitations, according to Poret, a survey expert who frequently testifies at NAD. Most critically, it can no longer be assumed that an online survey is being taken at a desktop computer. Instead, many respondents access surveys on their tablets or mobile phones. Young males in particular, a coveted demographic, are hard to nail down for online surveys, “unless you let them do it on phones,” said Poret.
Sizing is by far the most difficult issue for mobile screens. Disclosure text is often too small to read on handheld screens unless some zooming function is enabled. Magnification, however, has the potential to undermine a survey by allowing the consumer to interact with an ad in wholly unnatural ways. When asked by Martin Zwerling, Assistant Director at NAD, about the potential problems with enabling a zoom feature, Poret specified that a zoom feature was typically only used in trademark cases, which have fundamentally different objectives.
Monitoring is also an issue with online surveys. Whereas in-person surveys have moderators controlling the interaction with the respondent, online survey takers do not disclose the conditions in which they take surveys. Without the buffer of a survey administrator, the panel suggested that some participants might simply click through the questions as quickly as possible to obtain the reward. Poret advised including a software delay feature that requires at least 15 seconds spent on each page.
Online surveys do offer a few advantages from a user experience perspective, though. In an online survey, all the text is equally weighted and an “I don’t know” option is typically prominent. In-person surveys often have difficulty conveying that “I don’t know” is an option in a non-suggestive way.
Despite the risks inherent in online surveys, NAD indicated that it will continue to consider them as long as they are reliable and credible, the same standard applied to any other survey. When asked whether mall intercepts remained the gold standard, Ugurlayan responded that both formats are equally appropriate. “The main issue for us is for consumers to see the ad in a way they would normally view it,” she said.