Comparison “Points”

Every year about this time, we get reports from firms that purport to tell us how much time is spent in preparations for the NCAA basketball tournament—and, no, not by the teams and coaches. The “studies” (ironically, they’re always put out by firms that are in the business of helping people find jobs) generally make some assumptions about the amount of time people spend on workplace pools, how many people will participate, and their compensation levels—and, voilà, the productive time ostensibly “lost” to these activities. Now, they make a lot of assumptions to get to that result, including the assumption that, but for these pools, people would be doing nothing but working. But the results give journalists something easy—and “fun”—to write about, and give the rest of us something to read and talk about (someday someone should do a study on how much time and money is wasted writing and reading about those “studies”).
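
To see just how squishy that math is, here’s a back-of-the-envelope sketch of the arithmetic these studies typically rely on. Every input below is an invented assumption (mine, not theirs), which is rather the point:

```python
# A back-of-the-envelope sketch of the "lost productivity" arithmetic.
# Every input below is an invented assumption; the headline number is
# only as good as these guesses.

participants = 50_000_000   # assumed number of workers in office pools
hours_per_person = 2.0      # assumed hours spent on brackets at work
avg_hourly_wage = 30.00     # assumed average hourly compensation

lost_productivity = participants * hours_per_person * avg_hourly_wage
print(f"'Lost' productivity: ${lost_productivity:,.0f}")
# 'Lost' productivity: $3,000,000,000
# Halve any one assumption and the headline halves with it.
```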

Our lives are filled with such reports: perhaps valid points that are, like it or not, supported by data that is—well, let’s just say it’s “squishy.” These reports are designed to provide some interesting, if “low-hanging,” fruit for media coverage—and it works. The sponsoring firm gets some free press, the journalist gets some easy copy, and the reader—well, you get some interesting, if not completely meaningful, information.

And our industry is no exception. Here are some of the industry “data points” that, IMHO, are things we probably shouldn’t care about.


How the tiny minority of participants who realign their balances in any given month choose to do so.

Let’s face it, in any given month—heck, in any given YEAR—only the tiniest numerical sliver of retirement plan participants make a change in how their accounts are invested. Those who do could be acting for any number of reasons, but inevitably they seem to be selling—and buying—the wrong things at the wrong time, fleeing stocks when the market takes a tumble and buying back in at the peak. And yes, it’s hard to avoid a certain “can you believe these idiots?” undercurrent in reporting on these movements.

Sure, highlighting the missteps of the few can provide fodder for reinforcing positive long-term investing messages. But the vast majority of participants never—ever—touch their balances.

Not that we should be wholly comfortable with that.

The investment performance of defined benefit versus defined contribution plans.

Every so often, a report comes out that reminds us that defined benefit plans turn in a better performance than defined contribution plans. What we’re apparently supposed to draw from that is that DB plans are better-managed in terms of asset allocation by professionals, better able to negotiate lower fees than their DC counterparts, and generally provide a better return on investment. In other words, DB plans are “better.”

But those who get to that result generally do so by sampling a limited number of plans, and sampling matters (sampling ALWAYS matters). Plan size is a factor, of course, but while a defined benefit plan is ostensibly a single pool of money being managed to obtain a certain aggregate objective, a defined contribution plan is an aggregation of individually managed objectives. Now, I’m not saying that all, or even most, of those individually directed DC plan allocations are as well-designed or maintained as those put in place by a DB investment committee, but unless your defined benefit plan has a single participant, those programs have completely different objectives and timeframes. You might as well be comparing a sports car to a Hummer; which is “better” depends on the distance, the terrain, the length of time you have to complete the journey—oh, and how much fuel you have.
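
To make the “sampling matters” point concrete, here’s a hypothetical sketch (all balances and returns invented) of how the same set of accounts can produce two very different “plan returns,” depending on whether you average across participants or across pooled dollars, as a DB-style report effectively does:

```python
# A hypothetical sketch of why weighting matters when comparing
# "plan returns." The balances and returns below are invented.

# (balance, annual return) for five hypothetical DC participants
accounts = [
    (500_000, 0.04),   # older participant, conservative allocation
    (250_000, 0.05),
    (50_000, 0.08),
    (10_000, 0.10),    # younger participants, aggressive allocations
    (5_000, 0.12),
]

# Equal-weighted: what the "average participant" earned
equal_weighted = sum(r for _, r in accounts) / len(accounts)

# Asset-weighted: what the pooled dollars earned, the way a single
# DB-style pool would report it
total = sum(b for b, _ in accounts)
asset_weighted = sum(b * r for b, r in accounts) / total

print(f"Equal-weighted: {equal_weighted:.2%}")   # 7.80%
print(f"Asset-weighted: {asset_weighted:.2%}")   # 4.67%
# Same accounts, same year; the "plan" return depends on how you count.
```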

That said, in 2009, guess which did “better”?

That “average” 401(k) balance.

If there is one number I wish our industry would quit publishing, it’s the average 401(k) balance. As noted above, “averages” have their limitations, but the variations in this particular average are enough to make one’s head spin. Here you have participants who may (or may not) have a DB program, who are of all ages, who receive widely different levels of pay, who work for employers that provide varying levels of match, and who live (and may retire) in completely different parts of the country. But in preparing this number, we slop them all together and create—mush.

Worse than mush, actually. That number is never “enough” to provide anything remotely resembling an adequate source of retirement income, a point that is reiterated somewhat incessantly (and without the caveats about what it is an average of) in the press. I’ll allow that some permutations of this calculation—such as the average broken out by age demographic—can be instructive for longer-term trends, though my strong preference is for a median reading. But an average 401(k) balance is akin to an average reviewer rating on Amazon.com: it’s mathematically accurate—and completely useless.
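
To illustrate, here’s a toy example (balances invented) of how far a skewed distribution can drag the average from the median, and from what the typical participant actually holds:

```python
# A toy illustration (all balances invented) of why an "average"
# 401(k) balance is mush: balances are heavily skewed, so the mean
# is dragged far from what the typical participant holds.

from statistics import mean, median

balances = [2_000, 5_000, 8_000, 12_000, 18_000,
            25_000, 40_000, 75_000, 150_000, 600_000]

print(f"Average balance: ${mean(balances):,.0f}")    # $93,500
print(f"Median balance:  ${median(balances):,.0f}")  # $21,500
# Most of these participants hold far less than the "average";
# one large balance does most of the work.
```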

IMHO, averages often obscure as much as they reveal—and this average does so more than most.


—Nevin E. Adams, JD

See “DB Returns Beat DC Returns Through 2008.”

http://www.plansponsor.com/IMHO_Goal_Lines.aspx
