Linking In with Matt

Matthew Richter posts daily comments on LinkedIn—well, almost daily. You can follow him and join the conversation by going to http://linkedin.com/in/matthew-richter-0738b84.

For the benefit of our readers, we decided to compile and reprint some of his provocative pieces from the past. Let us know what you think.

Invalid Content

When I point out that a piece of content (like Learning Styles, DiSC, or MBTI) lacks academic rigor and doesn’t work the way users think it does, I often get these reactions:

"Of course, but it gets people to notice differences."

"Sure, but it starts a conversation."

"Who cares? It works."

Using a flawed tool is inefficient and potentially damaging. Using a tool that fails to do what it claims to do is delusional. If the tool (DiSC, for instance) doesn't accurately categorize people into one of its four boxes, why use it? If those four boxes aren't conceptually validated, why use it? More to the point, why do some people still insist these models are usable in the face of strong evidence that they are not? Because people feel like they work. The tools organize the complexity of being human. For most, face validity is more powerful than anything else. The gut feeling that something reflects you is all the evidence one needs. This is why it is so very important for us to listen to scientists and researchers. It is imperative we avoid fake facts. It is imperative we question our gut reactions and remember that intuition without evidence is usually a trap. Just because something makes sense doesn't mean it works.

More About Context

I think because we are so immersed in our own context, we forget to consider other contexts when designing and delivering training. Learning new basic tasks at a rudimentary level is easy without the context. But if we expect participants to ultimately apply new skills on the job, we have to teach them how to recognize situations where the new skills can and should be applied; decide how, when, and to what degree to apply these new skills; and evaluate how well those decisions and skills worked within the situation. Too often we teach skills using scenarios, but we tell participants when and how to practice. This skips a major part of the development process. Participants need to recognize the need to apply and then decide to actually apply. Many trainings fail at this. They teach participants to do the skill when told, but they don't teach how to recognize appropriate times for application. For example, we teach managers to give feedback. We give them a list of times when to do so. We get them to practice, but do we also give them opportunities to recognize when to avoid feedback? Training is job related, so the context should always be present.

No Positive Effect

Client says, "We want a 2-hour, instructor-led program on sexual harassment." Assuming you do indeed have a credible and legal option for this topic, how do you respond? I will also assume you ask the proper questions about why, what for, and who. The client wants this two-hour program to complete a requirement. To check a box, as we say in the US. It is not an L&D issue. Do you take the job? Do you believe that a two-hour program will have a positive impact? Do you think anyone's behavior will change as a result of your course? Assuming the client has also neglected any follow-up or policy changes, do you take the job? Remember, if you say no, they will just get the next willing training consultant. What is our responsibility accepting work when we know it will yield no positive effect? How many of us walk away? How many of us are deluded enough to think we can change the world given these parameters? I can honestly say, I have taken the work in the past and will probably do so again. What does that say about those of us who do so?