Linking In with Matt

Matthew Richter posts comments on LinkedIn almost daily. You can follow him and join the conversation by going to http://linkedin.com/in/matthew-richter-0738b84.

For the benefit of our readers, we have decided to compile and reprint some of his provocative pieces from the past. Let us know what you think.

Statistics Gets a Bum Rap

In our daily lives, we either don't understand how to use statistics or we don't know how to question what we are seeing. So much so that, as a society, we too often dismiss any use of statistics as "fake news." This is like blaming the hammer for failing to darn a sock. Or worse, blaming the hammer for putting the hole in the wall.

Statistics is a tool. Tools can be used properly, as designed. Or they can be misapplied.

When used properly, statistical analysis has helped save generations, from identifying that smoking causes lung cancer to establishing the safety of vaccinations. Over and over, statistics enables us humans to see beyond the density of our immediate surroundings and explore the bigger story, the bigger picture of what is likely.

For L&D pros, good data analytics require good statistics. When faced with data, especially data that have already been analyzed and interpreted, consider the following:

The sample matters. A sample is data collected to represent a larger group. In the smoking example, that means smokers and non-smokers, tracked over a period of time, with and without lung cancer. If your sample is biased or unrepresentative, the conclusions cannot generalize to the greater population. And size matters: all else being equal, a bigger sample is more reliable.
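To make the size point concrete, here is a minimal sketch in Python (the 30% rate and the scenario are invented for illustration, not from Matthew's post) showing how small samples swing wildly around the true value while larger ones settle down:

import random

random.seed(42)

TRUE_RATE = 0.30  # assume 30% of the population has the trait we are measuring

def estimate(sample_size):
    """Draw one random sample and return the observed rate."""
    hits = sum(random.random() < TRUE_RATE for _ in range(sample_size))
    return hits / sample_size

for n in (10, 100, 10_000):
    estimates = [estimate(n) for _ in range(5)]
    print(f"n={n:>6}: " + ", ".join(f"{e:.2f}" for e in estimates))

# Typical output: the n=10 estimates bounce between roughly 0.10 and 0.50,
# while the n=10,000 estimates all land very close to the true 0.30.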

Bias. We all tend to look for what we expect or hope to see. Researchers do, and so do we as readers of their research. So, we need to look for how researchers controlled for those biases. Good research methodology often handles this. But we also need to keep our own minds open to the possibility that the forest looks different than we expected.

Methodology. How were the data collected? How was the research itself designed? This can get pretty complex. This is where I often turn to research translators, those trained in both research methodology and my field, to provide insight. A poor design or a poor data-gathering process will yield garbage. Garbage in gets you garbage out.

Correlation vs. Causation. Lots of things are linked. For example, it's cold outside, and winter is flu season. The two are often correlated. It is easy, then, to conclude that the cold causes the flu, rather than a virus. Wearing a scarf will not prevent the flu. A vaccine, hand sanitizer, or washing your hands might. Causation is when one thing actually makes another happen. We need to differentiate the two and take care not to mix them up. Much of the reason educators still claim learning styles should be considered when designing learning comes down to this confusion.
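Here is a minimal sketch of that trap (the scenario and the probabilities are invented for illustration): cold weather drives both scarf-wearing and flu season, so scarves and flu end up correlated even though one does not cause the other.

import random

random.seed(7)

days = []
for _ in range(10_000):
    cold = random.random() < 0.4                        # is it a cold day?
    scarf = random.random() < (0.8 if cold else 0.1)    # people wear scarves when cold
    flu = random.random() < (0.10 if cold else 0.02)    # flu spreads more in winter
    days.append((scarf, flu))

def flu_rate(wearing_scarf):
    subset = [flu for scarf, flu in days if scarf == wearing_scarf]
    return sum(subset) / len(subset)

print(f"flu rate among scarf wearers:     {flu_rate(True):.3f}")
print(f"flu rate among non-scarf wearers: {flu_rate(False):.3f}")

# Scarf wearers show a noticeably higher flu rate, yet the scarf causes nothing.
# The hidden driver (cold weather) produces the correlation.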

My inspiration for this post is Tim Harford. Tim's book, The Data Detective, is fantastic, and it contains many more strategies you can use. The introduction tells the story of how smoking was first correlated with, and then established as a cause of, lung cancer. His podcast, More or Less, is superb as well.


Does Personal Experience Matter as an L&D Pro?

Your arm hurts. You go to the doctor. The doctor prescribes physical therapy. You go. After two weeks you feel better.

The doctor was correct, right?

You say yes.

The statistician says the doctor may have been right, but you may have gotten better on your own without intervention. Or maybe something else accelerated your improvement. Or maybe the therapy made things worse, and you would have gotten better a week earlier without it.

As Tim says, you probably don't care; you're better! But doctors do care. Why? Because using the wrong solution (albeit a correlated one) won't work for the majority over time. Doctors encounter lots of patients with arm problems, and if PT has a causal relationship to improvement, they can reliably prescribe it. If it doesn't, or if it makes the problem worse, then other treatments must be used.

This is the difference between personal experience (you got better and that's all that matters) and statistically sound approaches to solving the problem for the many.

The reverse can also happen. Just because a statistically sound treatment is likely to work doesn't mean it necessarily will for you. Aberrations happen! That is why statistics works with probabilities.
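A tiny sketch of that idea, with made-up recovery rates: the treatment is clearly the better bet for the group, yet any individual can land on the wrong side of the probabilities.

import random

random.seed(1)

def recovers(treated):
    # Assumed rates for illustration: 85% recover with treatment, 55% without.
    return random.random() < (0.85 if treated else 0.55)

treated = [recovers(True) for _ in range(1_000)]
untreated = [recovers(False) for _ in range(1_000)]

print(f"recovered with treatment:    {sum(treated) / len(treated):.0%}")
print(f"recovered without treatment: {sum(untreated) / len(untreated):.0%}")

# The treatment helps the group overall, but any single treated patient might
# still be among the roughly 15% who don't improve, and any single untreated
# patient might recover on their own.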

For L&D, are we the patient in this analogy? Or are we the doctors, prescribing a solution? In our professional context, we need to think like a doctor. That means we need to separate personal experience ("it worked," or "I think it worked") from the statistical view ("it worked because of x").

In other words, we need to differentiate between the correlation (PT and getting better happened close together) and the cause (the physical therapy MADE the patient get better).

Does the use of MBTI or DiSC increase the level of self-awareness among leaders? Does that self-awareness make them better leaders?

What is important, however, is whether participants actually become more accurately self-aware (step one), and then whether that accurate self-awareness increases leadership acumen according to a measurable, defined standard.

Participants often say these tools work. Why? Their personal experience. These tools seemingly provide insightful information. Being self-aware is popularly seen as significant to leadership. Therefore, the reasoning goes, using MBTI or DiSC absolutely increases leadership skills.

But these links are more likely due to personally experienced correlations. What do the statistics say? Most studies have debunked both these tools outright. Check out Annie Murphy Paul's The Cult of Personality.

In L&D, we are like the doctor. We want to use solutions that help get us to our goals. But we need to see through individual experience (the trees) to the forest (the big picture). We want solutions that cause improvement. Relying on research and statistics enables us to make better decisions on what works.


Statistics and Research Cautions

I've been posting about the value of statistics and research as a basis for making decisions. I've been relying on Tim Harford's book, The Data Detective, as my inspiration. And I would be remiss if I didn't follow Tim's example and share some cautionary tales (an allusion to his excellent podcast) about statistics and numbers. The following is just a start. More to come.

Replication. The inherent value of large numbers is that you can see patterns, consistencies, correlations, and causes. The risk, however, is that with large numbers you can also get anomalies. For example, Tim references the Iyengar and Lepper study that set up a jam-tasting stall. When more jams were available to try, fewer people bought jam. With fewer options, more jam was bought. The conclusion: too much choice can be a bad thing. The problem is that this study has failed to replicate often enough to be reliable. It's not that the researchers were trying to fool us. Not at all. But one study alone does not establish reliability. In L&D, we take a study, often translated by journalists, and extrapolate its conclusions as fact. We need to seek out replications to see whether the original study and its conclusions were an anomaly.
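A small simulation (purely illustrative; it does not use Harford's or the jam study's data) shows why a single striking result can be an anomaly: run enough studies of an effect that is not real, and a few will look convincing by chance alone.

import random
import statistics

random.seed(3)

def fake_study(n=30):
    """Compare two groups drawn from the SAME population (no real effect)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(b) - statistics.mean(a)

diffs = [fake_study() for _ in range(1_000)]
striking = [d for d in diffs if abs(d) > 0.5]  # big enough to look "interesting"

print(f"studies with a striking difference: {len(striking)} out of 1000")

# Even with zero true effect, dozens of studies show a sizeable difference.
# Only attempts to replicate reveal which findings are just noise.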

Publication. Tim notes that journals are often incentivized to publish conclusions that are interesting. And many journals avoid publishing studies that replicate (or fail to replicate) earlier studies because they are, frankly, not so interesting. In L&D, most of us aren't trained researchers. We don't have the habit of asking about replication. So, we read a study (or a summary) posted in a respected journal, and we accept it as truth. But we still need to be critical thinkers and challenge what we read.

Perspective. Time, reference points, experience, and other factors influence those doing the research, those reading the research, and those interpreting the research. How far is Mars from Earth? Well, it depends on perspective. Where are the two planets at a given point in their respective orbits around the sun? At our farthest, we are roughly 401 million km apart. At our closest, roughly 56 million km. On average, roughly 225 million km. Just asking for the distance isn't enough; we need the context of the question to pin it down. All three answers can be true. Or even false. In L&D, what is the actual performance problem? What is the context? Who is the audience?

The Question. Good research starts with a question, or a hypothesis. But we have to be careful of results produced by HARKing: Hypothesizing After the Results are Known. In other words, confirmation bias. We search out the data and the hypothesis that match what we hope to be true. Because different combinations of analytic choices can lead to a myriad of conclusions, it is imperative to evaluate the design methodology. Harford shares the example of Simmons et al.'s "proof" that listening to "When I'm Sixty-Four" by the Beatles would make you 18 months younger. How so? Buy Tim's book and read it.
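A minimal sketch of how HARKing plays out (the data here are invented): measure one outcome against many unrelated variables, then "hypothesize" about whichever comparison happens to look strong.

import random
import statistics

random.seed(11)

people = 50
outcome = [random.gauss(0, 1) for _ in range(people)]

def correlation(xs, ys):
    """Population Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

# Twenty variables that have nothing to do with the outcome.
variables = {f"var_{i}": [random.gauss(0, 1) for _ in range(people)] for i in range(20)}

best_name, best_r = max(
    ((name, correlation(values, outcome)) for name, values in variables.items()),
    key=lambda pair: abs(pair[1]),
)
print(f"'best' predictor: {best_name}, r = {best_r:.2f}")

# With enough comparisons, something always correlates noticeably with the
# outcome by chance. Writing the hypothesis after seeing which one wins is HARKing.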