All for One?
The underlying theme of last week’s PLANSPONSOR National Conference was “measuring up,” a reference not only to the need to measure the performance and outcomes of a retirement plan’s design, but also to the opportunity to improve those results in the process.
Of late, fees are very much on everyone’s mind, as we all prepare for a new series of plan-level and, ultimately, participant-level disclosures. Just ahead of those disclosures, the industry has launched a new generation of plan fee benchmarking services. Each looks at different things, each has its own set of weightings and assumptions, and each draws from a different source.
But for my money, here are 10 things you should know about any service that purports to help you benchmark your plan:
1. What is the source of the database that serves as a point of comparison?
2. Is the database itself large enough to be relevant? Does it include relevant points of comparison with your program in terms of plan size, industry, and/or geographic location?
3. How old is the data on which comparisons are made?
4. Is the data on which comparisons are made accurate?
5. Are the comparisons valid? Do they offer an apples-to-apples comparison?
6. What assumptions are incorporated in the results?
7. How are the results of the comparison scored?
8. What are the credentials of the firm/principals behind the service and/or methodology?
9. Is the benchmark itself relevant to your needs? Does the comparison help you improve your program?
10. Is the fee paid for the benchmarking service reasonable, and in the best interests of participants?
There are, admittedly, any number of measures of success for a retirement plan—and while some may be “better” than others, and some surely easier to establish, in my experience, the mere process of measuring brings benefits.
That said, there was a moment at last week’s conference when one of our speakers asked a telling question: not whether attendees benchmarked their programs (a surprising number were doing so), nor whether they were benchmarking against a variety of criteria (most of those in attendance had moved well beyond “participation rate” as the standard metric).
No, the question that gave pause to me, and to a good number of the attendees, was this: If the individual members of your retirement plan committee were asked, “What do you benchmark your plan against?” what would they say?
Because, if you don’t agree on that answer, the rest of the questions may not matter.
—Nevin E. Adams, JD
You may also find useful the cover story of the June issue of PLANSPONSOR, which deals with this new generation of fee benchmarking services, available HERE.