Survey Says—Or Does It?
When you see a headline that confirms your sense of the world, you’re naturally predisposed to embrace, remember (and these days “share”) it as a validation of what you already perceive reality to be.
Indeed, as human beings, we’re drawn to perspectives, surveys, and studies that validate our sense of the world. This “confirmation bias,” as it’s called, is the tendency to search for, interpret, favor, and recall information in a way that confirms our preexisting beliefs or hypotheses. It also tends to make us discount or dismiss findings that run afoul of our existing beliefs, and to embrace those that support them even if the grounds for that support are shaky, sketchy, or (shudder) downright scurrilous.
Here are some things to look for—likely in the fine print or footnotes—as you evaluate those findings.
There can be a difference between what people say they will (or might) do and what they actually do.
No matter how well targeted they are, surveys (and studies that incorporate the outcome of surveys) must rely on what individuals say they will do in specific circumstances, particularly when the decision is hypothetical. When you’re dealing with something that hasn’t actually occurred, or doesn’t actually exist, there’s not much help for that, but there’s plenty of evidence to suggest that, once given an opportunity to act on the actual choice(s), people do, in fact, act differently than their survey responses might suggest.
Let’s face it, people tend to be less prone to action in reality than they indicate they will be—inertia being one of the most powerful forces in human nature. Also, sometimes survey respondents indicate a preference for what they think is the “right” answer, or what they think the individual conducting the survey expects, rather than what they might actually think (particularly if it’s something they haven’t previously thought about). That, of course, is why the positioning and framing of the question can be so important (as a side note, whenever possible, it helps to see the actual questions asked, and the responses available).
Now, those who conduct surveys will inevitably champion the higher accuracy rate of in-person surveys (or at least phone calls) versus online surveys, though the latter are ever more common (and less expensive to conduct).
The bottom line is that when people tell you what they will do, and you later find that they don’t, just remember that there may be more “powerful” forces at work.
There can be a difference between what people think they have, what they say they have, and reality.
Since, particularly with retirement plans, there are so few good sources of data at the participant level, much of what gets picked up in academic research is based on information that is “self-reported,” which is to say, it’s what people tell the people conducting the survey. The most prevalent source is, perhaps, the Survey of Consumer Finances (SCF), conducted by the Federal Reserve every three years.
The source is certainly credible, but it’s based on phone interviews with individuals about a variety of aspects of their financial status, including a few questions on their retirement savings, expectations about pensions, etc. In that sense, it tells you what the individuals surveyed say they have (or perhaps wish they had), but not necessarily what they actually have.
Perhaps more significantly, the SCF surveys different people every three years, so it pays to be wary of trendlines drawn from its findings, such as increases or decreases in retirement savings. Those who draw them are comparing apples and oranges; more precisely, the savings of one group of individuals to those of a completely different group of people… three years later.
The survey sample size and composition matter.
Especially when people position their findings as representative of a particular group, you want to make sure that group is, in fact, adequately represented. Perhaps needless to say, the smaller the sample size, or the larger the statistical error, the less reliable the results.
Case in point: Several months ago, I stumbled across a survey that purported to capture a big shift in advisors’ response to the Labor Department’s fiduciary regulation. Except that between the two points in time when they assessed the shift in sentiment, they wound up talking to two completely different types of advisors. So, while the surveying firm—and the instrument—were ostensibly the same, the conclusions drawn as a shift in sentiment could have been nothing more than a difference in perspective between two completely different groups of people—at two completely different points in time.
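To put the sample-size point in rough numbers, here’s a minimal sketch (in Python, using purely illustrative sample sizes that aren’t drawn from any survey mentioned here) of the textbook margin-of-error calculation for a simple random sample:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample.

    n -- number of respondents
    p -- assumed response proportion (0.5 is the worst case)
    z -- z-score for the desired confidence level (1.96 for ~95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative sample sizes only -- not from any actual survey
for n in (100, 400, 1000, 2500):
    print(f"n = {n:>5}: +/- {margin_of_error(n):.1%}")
```

At 100 respondents, every headline number carries a cushion of nearly 10 points in either direction; it takes roughly four times as many respondents to cut that cushion in half.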
When you ask may matter as much as what you ask.
Objective surveys can be complicated instruments to create, and identifying and garnering responses from the “right” audiences can be an even more challenging undertaking. That said, people’s perspectives on certain issues are often influenced by events around them, and a question asked in January can generate an entirely different response even a month later, let alone a year after the fact.
For example, a 2020 survey of plan sponsor sentiment on a topic like ESG litigation is unlikely to produce the same results as one conducted in the past 30 days, any more than an advisor survey about the potential impact of the fiduciary regulation, taken prior to its publication, would match the responses of advisors dealing with those realities six months afterward. Down in those footnotes about sample size/composition, you’ll likely find an indication of when the survey was conducted. There’s nothing wrong with recycling survey results, properly disclosed. But things do change, and you need to be careful about any conclusions drawn from old data.
Consider the source(s).
Human beings have certain biases, and so do the organizations that conduct, and pay to conduct, surveys and studies. And sometimes the organizations paid to conduct such surveys are aware of those biases, and, consciously or unconsciously, that awareness filters into the way questions are posed, or the way results are evaluated.
Not that sponsored research can’t provide valuable insights. But approach with caution the conclusions drawn by those who tell you that everybody wants to buy the type of product(s) offered by the firm(s) that have underwritten the survey.
Be wary of sentiment ‘aggregation.’
It’s rare that the authors of a particular survey don’t have a preferred/expected outcome in mind, but legitimate surveys, objectively worded, sometimes receive a more tepid response than those authors might prefer. Typical are findings that claim a “majority” favors a certain outcome, a majority that requires combining what is generally a small minority who are strongly in favor with a (much?) larger number who are (only) somewhat in favor (for example, 16% strongly in favor plus 35% somewhat in favor turns into “A Majority Favor…”).
It’s not exactly exaggerating to say that the combined result is at least somewhat supportive—but it can produce a result that is positioned far more enthusiastically in favor of a particular outcome than a discerning look at actual adoption/take-up later reveals.
Compound ‘Interests’
One of the more obvious ways to get people’s attention is to publish a survey/study that purports to find a dramatic impact of some kind. Basically, the authors will state an assortment of assumptions (and they’ll make no bones about THAT), and then take those assumptions, multiply them together and… voilà, a gigantic impact that warrants attention (or at least clicks, likes and shares).
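By way of a purely hypothetical sketch (in Python, with every number invented for the illustration rather than taken from any actual study), here’s how a handful of individually plausible-sounding assumptions can multiply into a headline-sized figure:

```python
# Hypothetical assumptions -- each one stated up front by the (imagined) authors
workers_affected = 40_000_000     # assume 40 million workers "could" be affected
share_who_act = 0.25              # assume a quarter of them actually change behavior
extra_savings_per_year = 1_200    # assume $1,200 in additional savings per year, each
years = 30                        # assume the effect persists for 30 years

# Multiply the assumptions together and...
headline_impact = workers_affected * share_who_act * extra_savings_per_year * years
print(f"Projected impact: ${headline_impact:,.0f}")  # roughly $360 billion
```

The arithmetic itself is unimpeachable; the question is whether each input survives scrutiny, because any weakness in one assumption gets multiplied right along with the rest.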
The math checks out, and the next thing you know it’s a headline in which, as Mark Twain reputedly observed, a “lie” travels around the world while the truth is still getting its boots on. It does so by being picked up, uncritically, by news media outlets that (apparently) draw comfort from the academic credentials of the authors, and from their ability to lay the veracity of the claims at THEIR feet.
When, in fact, all they’re doing is compounding the problem(s).
- Nevin E. Adams, JD