Impactiviti recently interviewed Greg Sapnar, Associate Director, Metrics and Adaptation, Learning and Organization Development, Bristol-Myers Squibb. The topic of this Impact Interview is Measuring Training – an endeavor that occupies the majority of Greg’s professional attention. In his current role, Greg is tasked with developing and implementing standards and tools for measuring learning outcomes. Previously, Greg was the production manager at Business Training Systems, Inc., responsible for the design and production of OSHA compliance and health and safety training programs.
Q1: What made you decide to focus in on metrics and measurement as a career role?
Early in my career, I focused primarily on instructional design and media development with a vendor company. That type of work was very rewarding from a creativity standpoint, but, as is often the case, I wondered whether anyone was actually learning anything from the programs, or whether the work was having the desired impact on performance. In my current corporate role, I continued my work in instructional design but was able to take a more serious look at outcomes. Most recently, I was offered the opportunity to focus solely on measurement for the learning group and gladly took on the role. I believe the metrics role in learning is critical, since it reveals the true value of the investment in learning.
Q2: In general, do you think pharma companies are doing a good job measuring training effectiveness?
There’s always room for improvement. From what I’ve seen, most companies have learning groups that are activity-based. They focus on keeping the machinery of training in motion without dedicating sufficient time or resources to looking at outcomes.
One unique feature of pharmaceutical representative training is the heavy knowledge base we are trying to build. This practice falls more in the realm of education than traditional training, and it poses a challenge for measurement, since most corporate learning folks aren’t experts in measuring cognitive skills. There is also much room for improvement in developing valid and reliable tools and in improving the consistency of our human raters, who evaluate representatives in role-play scenarios. All metrics processes need to be formalized and structured to provide valid data that informs decisions.
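To make rater consistency concrete: one common check is a chance-corrected agreement statistic such as Cohen’s kappa, computed over two raters’ scores for the same role plays. The sketch below is illustrative only; the three-point rating scale and the data are invented for the example.

```python
# Illustrative sketch: Cohen's kappa for two role-play raters scoring the same
# representatives on a made-up three-point scale.
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)

    # Observed agreement: proportion of items both raters scored identically.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Expected chance agreement, from each rater's marginal score frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    p_expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)

    return (p_observed - p_expected) / (1 - p_expected)

# Two raters scoring eight representatives ("below", "meets", "exceeds").
rater_1 = ["meets", "exceeds", "meets", "below", "meets", "exceeds", "meets", "below"]
rater_2 = ["meets", "meets",   "meets", "below", "meets", "exceeds", "below", "below"]
print(f"kappa = {cohen_kappa(rater_1, rater_2):.2f}")  # 0.60 for this sample data
```

A kappa near 1.0 suggests the raters are applying the rubric consistently; values well below that are a signal to revisit the rubric or recalibrate the raters.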
Q3: What do you feel are the most important tools and processes for the job?
The most important tools are the ones that can capture what you want to measure. That is key. We need to practice “situational” measurement. It is important not to fall into the trap of thinking that because you have a good computerized testing system, everything can be measured with online testing, or that because you have a suite of role-play rooms, you only need to evaluate using role play. A good learning organization should have a variety of assessment instruments and methodologies at its disposal and use them to generate cross-referenced data, revealing a three-dimensional perspective on learning.
From a process standpoint, it is important to follow standards of professional practice. Because most of the people in pharmaceutical sales training organizations are former salespeople rather than learning experts, they will have to hire people to bring expertise in-house, rely heavily on consultants, or teach themselves how to apply metrics in a business environment. It is also important to develop a good relationship with the HR department and consult with them while developing a metrics strategy.
A variety of good books outline how to implement valid and defensible assessment practices, such as those by Judith Hale and by Shrock and Coscarelli. Measuring learning should be taken as seriously as a clinical study: make sure that your data is valid and reliable before making any claims. This is especially important when the consequences affect job status in any way.
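As one illustration of a reliability check (not a prescribed method), Cronbach’s alpha is a widely used internal-consistency statistic for a knowledge test. The sketch below uses made-up item scores purely to show the calculation.

```python
# Illustrative sketch: Cronbach's alpha as an internal-consistency check on a
# knowledge test before its scores are used for any decision. Data are invented.
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one inner list per test item, scores in learner order."""
    k = len(item_scores)                               # number of test items
    item_vars = [pvariance(item) for item in item_scores]
    totals = [sum(per_learner) for per_learner in zip(*item_scores)]
    return (k / (k - 1)) * (1 - sum(item_vars) / pvariance(totals))

# Five learners, four items scored 0/1 (incorrect/correct).
items = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 1],
    [1, 1, 0, 1, 0],
    [0, 1, 0, 1, 1],
]
print(f"alpha = {cronbach_alpha(items):.2f}")  # 0.70 for this sample data
```

Commonly cited rules of thumb look for alpha in the neighborhood of 0.8 or higher before test scores are used for decisions that carry consequences for individuals.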
Q4: How does a more robust program of metrics and measurement impact a company’s culture?
The impact a measurement strategy will have on culture is highly dependent on the existing culture of the organization. Innovative organizations will use measurement results to develop their employees and reward them for their accomplishments. A robust metrics program should inform existing coaching and feedback programs and energize their culture of improvement. When employees are held accountable for clearly defined objectives, metrics demonstrating how well they are meeting those expectations can be a valuable tool in helping them reach the next level.
Companies lacking a culture naturally open to sharing metrics data will need to begin with a clear communications and change management program in order to gain the trust and buy-in needed to be successful.
Q5: If a company has just begun to consider measuring training effectiveness, how do you suggest they get started?
The first step in measuring training effectiveness is to define the outcomes the organization is trying to achieve. This may seem obvious, but many organizations still haven’t made the leap from being a training provider to being a performance driver.
One of the most useful tools I have used to identify the relationship of desired outcomes to business results is the impact map. Impact mapping provides a clear roadmap for learning success. It is an eye-opening experience when an organization focused primarily on the creation of training programs builds an impact map and sees how its efforts can affect real business results. Once those targets are mapped, the organization can select and implement the most appropriate measurement tools to gather learning metrics. At this point, training vendors committed to helping the organization achieve its defined learning outcomes can be brought in as partners.
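One way to picture an impact map is as a simple structure that ties each learning objective to the on-the-job behaviors it should drive, the business result those behaviors support, and the metrics that will track it. The sketch below is purely illustrative; the field names and content are hypothetical, not a prescribed template.

```python
# Hypothetical sketch of an impact map entry: learning objective -> critical
# behaviors -> business result, plus the metrics used to track it.
impact_map = [
    {
        "learning_objective": "Explain the product's clinical data accurately",
        "critical_behaviors": [
            "Answers physician objections using the cited study results",
            "Tailors the clinical message to the physician's patient mix",
        ],
        "business_result": "Appropriate prescribing within the approved indication",
        "metrics": ["knowledge assessment", "role-play evaluation", "field coaching report"],
    },
]

# Reading the map from objective to business result keeps every measurement
# tied to an outcome the business actually cares about.
for entry in impact_map:
    print(f"{entry['learning_objective']} -> {entry['business_result']}")
    for metric in entry["metrics"]:
        print(f"  measured by: {metric}")
```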
It would not be out of the question to expect a two- or three-year plan for building a multi-dimensional view of the organizational impact of your L&D group. As this big picture is forming, the individual metrics you gather along the way will provide great value for informing day-to-day decisions at the program level.