My grandfather took me to a model train museum when I was a kid. I remember walking (or running) around and following the trains as they chugged along. I was never a train enthusiast, but I did think the models were neat! Over 20 years later, it turns out models are kinda how humans make sense of the world.

**A model is a representation of something**. The small train above is a model of a real train just like **a globe is a model of the Earth**. A globe isn’t the real Earth, but it conveys *some* truth about the location of land and water on the real Earth. **A 2D map is also a model** of the Earth! But why do we have two different models? Why not just have the one model?

**Most models are simplifications, and so you lose accuracy, but the simplification should serve a purpose.** A globe sacrifices true size (it’s vastly smaller than the Earth), but it preserves the relative locations and sizes of land and water. A common 2D map, the Mercator projection, sacrifices relative size as well (it distorts areas closer to the poles), but it was historically very useful for navigation at sea.

**A model isn’t only defined by absolute accuracy. Interpretability, the ease with which we can use the model, is also a core feature.** Obviously, if a model is simply wrong, it doesn’t matter how easy it is to use or interpret. But if a model does have some use and isn’t *too wrong*, that’s good.

The idea of a model may still sound abstract, but **you actually use models all the time in your daily life**. Think about a close friend: how would they react if you gave them an unexpected gift? **The person in your mind and how you think they’d react is a mental model that represents your friend**. This model is just like the model train or a globe: you lose some information (it’s not as accurate as simply seeing what your friend does), but ideally your mental model covers some core details. Hopefully.

Comparing mental models of how the mind works is fun, but it’s harder to figure out who is “right.” Instead, most psychologists test **statistical models**. It’s still just a model; it just happens to be a mathematical model instead of a purely abstract or conceptual one. What makes it mathematical is specifying the numbers involved. So perhaps I believe that men are, on average, taller than women. Mathematically, I’d say:

μ_{MaleHeight} > μ_{FemaleHeight}

or equivalently,

μ_{Difference} = μ_{MaleHeight} – μ_{FemaleHeight} and μ_{Difference} > 0.
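To make that model concrete, here’s a minimal sketch of how it is commonly tested with an independent-samples t-test. The heights below are simulated (made up for illustration, loosely based on typical population figures), not real data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated heights in cm -- illustrative values, not real measurements
male = rng.normal(loc=175, scale=7, size=100)
female = rng.normal(loc=162, scale=7, size=100)

# One-sided two-sample t-test of the model:
# mu_Difference = mu_MaleHeight - mu_FemaleHeight > 0
t_stat, p_value = stats.ttest_ind(male, female, alternative="greater")
print(f"mean difference = {male.mean() - female.mean():.2f} cm")
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```

With data this clean and a difference this large, the test comes out overwhelmingly in favor of μ_{Difference} > 0; real data are rarely so cooperative.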

Statisticians and others have established a bunch of rules and further math for these models that allow us to test them. In this post, I’m not focusing on those specifics, but on something broader that is absolutely essential to properly using statistics: **we have to judge how good the model is before trusting what it says**.

Think of a map (a model of a city’s layout). When we’re judging a map, the core criterion is that things should appear on the map where they would be from a bird’s-eye view. A map of a city *could* include information about the height of buildings, if you wanted. It could even be presented beautifully. People can marvel over how advanced and fancy this new map looks. And yet, if the map tells you to walk down a street that doesn’t exist, it doesn’t matter how impressive the additional information is: it’s a bad model.

When we’re judging statistical models, there are also important features we need to pay attention to that, frankly, a lot of people don’t even check. **I really want to emphasize the importance of actually checking that your model is a good one.** It really comes down to appropriateness.

What do I mean by appropriateness? Let’s say I made a model of diabetes to try and find a new treatment. Excitedly, I tell you I have found a cheap treatment using my model. With great foresight, you ask not only to see my key statistics (effect size, t-statistic, p-value), but my model. Look, here it is!

Who cares if the insights from this model are “significant” or have “large effect sizes” or “are cheap.” It’s a really stupid model, and there’s no way this model train actually portrays anything about diabetes.

If you don’t check the assumptions of the statistical model you’re running on your data, you could be making just as absurd a claim without knowing it. Unfortunately, statistical training in psychological science is kind of all over the place, I think. So what violating these assumptions really does, and how to even check them, probably isn’t properly emphasized; and, especially for complicated (fancy) analyses, checking is honestly pretty hard. But that’s the nature of science: it’s hard.
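As one small, hedged example of what “checking assumptions” can look like: the t-test behind the height model above commonly assumes roughly normal data within each group and roughly equal variances. Here’s a sketch of checking both on simulated (made-up) heights:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated heights in cm -- illustrative values only
male = rng.normal(loc=175, scale=7, size=100)
female = rng.normal(loc=162, scale=7, size=100)

# Normality within each group: Shapiro-Wilk test
# (a large p-value means no strong evidence against normality)
_, p_male = stats.shapiro(male)
_, p_female = stats.shapiro(female)

# Roughly equal variances across groups: Levene's test
_, p_levene = stats.levene(male, female)

print(f"Shapiro p: male = {p_male:.3f}, female = {p_female:.3f}")
print(f"Levene p: {p_levene:.3f}")
```

These tests aren’t the whole story (plots like Q-Q plots and residual checks matter at least as much), but they’re a far sight better than never looking at all.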

**In summary, please check you’re not modeling diabetes with a model made for trains.** Of course, I’ll go over how to do this for things I eventually post, to the extent I can. I’m still learning a lot myself!