How LinkedIn Got Better Feedback From Its Instructor-Led Trainings
May 12, 2017
LinkedIn’s learning and development (L&D) team – like most L&D teams – provides instructor-led trainings (ILTs) as one way to develop our employees. And while there are many ways to measure ILTs and learning in general, one of the staples is surveying the people who go through them.
I’m sure other L&D pros are nodding their heads while reading this, as our recent Workplace Learning Report found that attendee surveys are the most common way L&D teams measure the effectiveness of their ILTs. And I’m willing to bet many of you face the same challenge we do – not enough complete and actionable data within those surveys.
For us, our process was to send a satisfaction survey via email through our LMS right after an employee completed an ILT. About 35 percent of attendees would take the time to complete that survey, which we understand is around the industry average.
However, the data was often lacking in the detail we were seeking. That prevented our program managers from having a broader range of insights, which could help us improve our offerings and understand the effectiveness of our facilitators.
One of our first responses was to re-tool the emailed survey. Despite multiple tests on copy and different survey tools, we were barely able to move the needle on either response rates or quality of data.
That taught us it wasn’t the content of the email that mattered. It was that we were leaving the survey for employees to complete on their own time AFTER they left the training. Often, they would get busy with other tasks, and ignore the email or de-prioritize the survey. By the time they got to it, they seemed to be less willing or able to provide the qualitative (open-ended) feedback we were seeking.
We soon tested an interim and quite retro solution: paper. We realized that if we gave out paper surveys at the end of the ILT and had all attendees fill them out together while still in the training, they’d all do it – and we wouldn’t be asking them to take time outside of the session.
The responses were much more complete and detailed. The only drawback? It was on paper. That meant someone had to manually type in all the responses, which was time consuming. We needed a digital solution in order to meet our goals: more qualitative feedback and increased participation.
So we piloted a mobile evaluation tool that prompts attendees to give feedback via their smartphones while they are still in the training.
The result? Since trying this new technique, comments on open-ended questions have been much more expansive and specific, largely because the training was still top-of-mind – attendees were all still in the room. And there was definitely a “cool” factor in being able to leverage mobile devices for a key part of our workshop experience.
Across several sessions of one of our programs, Dynamic Collaboration, the number of learners who left qualitative feedback increased from 21 percent with the emailed surveys to 57 percent.
Some of our concerns were that not all learners would bring their phones with them, or that the smaller keyboard would deter them from answering the open-ended questions with more than a few words. Neither concern materialized.
Another positive impact was that more than 90 percent of ILT attendees were now providing us with feedback, allowing us to hear from a much wider set of learners and empowering us to build more effective learning solutions.
“The use of the mobile eval tool has resulted in a huge increase in qualitative feedback, which allows for rich analysis of participant experience and the opportunity to further improve learning modules,” LinkedIn L&D Program Manager Jocelyn Lancaster said. “An added bonus is the ability for facilitators to view the feedback for their session immediately, allowing them to adjust their facilitation style as they host more and more sessions.”
Next steps for our L&D team are to build awareness with facilitators on leveraging this mobile evaluation tool and to build out the reporting back-end in order to take this to more of our enterprise ILT programs, including virtual ILTs and functional-specific workshops. We’re looking forward to standardizing this as our go-to evaluation tool for all of our programs and continuing to gather more – and better – feedback.