Chapter 1 Linear models as statistical models

This chapter introduces statistical modeling using the linear model. All students are familiar with the idea of a linear model from learning the equation of a line, which is

\[\begin{equation} Y = mX + b \tag{1.1} \end{equation}\]

where \(m\) is the slope of the line and \(b\) is the \(Y\)-intercept. It is useful to think of equation (1.1) as a function that maps values of \(X\) to values of \(Y\). Using this function, if we input some value of \(X\), we always get the same value of \(Y\) as the output.

A linear model is a function, like that in equation (1.1), that is fit to a set of data, often to model the process that generated the data (or data like it). The line in Figure 1.1A is just that, a line, but the line in Figure 1.1B is a linear model fit to the data plotted in that panel.


Figure 1.1: A line vs. a linear model. (A) the line \(y=-3.48X + 105.7\) is drawn. (B) A linear model fit to the data. The model coefficients are numerically equal to the slope and intercept of the line in A.

1.1 Two specifications of a linear model I: the “linear model” way

The basic structure of a linear model is

\[\begin{align} Y &= \beta_0 + \beta_1 X + \varepsilon \tag{1.2} \end{align}\]

A linear model has two parts: the “linear predictor” (\(\beta_0 + \beta_1 X\)) and the “error” (\(\varepsilon\)). The linear predictor part looks like the equation for a line except that I’ve used \(\beta_0\) for the intercept and \(\beta_1\) for the slope and I’ve put the intercept term first. This re-labeling and re-arrangement make the notation flexible enough to accommodate more complicated linear models. For example, \(Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \varepsilon\) is a model where \(Y\) is a function of two \(X\) variables.

As with the equation for a line, the linear predictor part of a linear model is a function that maps a specific value of \(X\) to a value of \(Y\). This mapped value is the expected value given a specific input value of \(X\). This is often written as \(\mathrm{E}[Y|X]\), which is read as “the expected value of \(Y\) given \(X\)”. Importantly, \(\mathrm{E}[Y|X]\) is the conditional mean, which is the modeled value of \(Y\) for all observations in which \(X\) takes some specific value \(x\).

Introductory textbooks almost always introduce linear models using equation (1.2) above. The key part of the model that is missing from the specification above is a second line

\[\begin{equation} \varepsilon \sim N(0, \sigma) \end{equation}\]

(read as “epsilon is distributed as normal with mean zero and standard deviation sigma”). This line explicitly specifies the distribution of the error part. The error part of a linear model is a random “draw” from a normal distribution with mean zero and variance \(\sigma^2\). Think of this as adding some random value to the expected value.

The inputs to a linear model (the \(X\) variables) have many names including “independent variables,” “predictor variables,” “explanatory variables,” “treatment variables,” and “covariates.” The output of a linear model (the \(Y\) variable, or variables if the model is multivariate) is the “dependent variable,” “response,” or “outcome.” The \(\beta\) in the linear model are model parameters, and a parameter that multiplies an \(X\) variable is a coefficient (for example, \(\beta_1\) in model (1.2) is a coefficient). There are additional parameters that are hidden from this specification of the linear model but show up in the specification below. The coefficients of the \(X\) in a linear model (\(\beta_1\) in model (1.2)) are often called “the effects” (so \(\beta_1\) is the effect of \(X\)).

Although a linear model (or statistical model more generally) is a model of a data-generating process, linear models are not typically used to actually generate any data. Instead, when we use a linear model to understand something about a real dataset, we think of our data as one realization of a process that generates data like ours. A linear model is a model of that process. That said, it is incredibly useful to use linear models to create fake datasets for at least two reasons: to probe our understanding of statistical modeling generally and, more specifically, to check that a model actually creates data like that in the real dataset that we are analyzing.
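As a concrete illustration, here is a minimal sketch in R of generating a fake dataset the way model (1.2) describes: compute the linear predictor, then add normal error. The parameter values and sample size are made up (the intercept and slope are borrowed from Figure 1.1).

```r
# a minimal sketch of generating fake data the "linear model" way:
# linear predictor plus a random normal error
# (parameter values and sample size are made up)
set.seed(1)
n <- 100
beta_0 <- 105.7   # intercept
beta_1 <- -3.48   # slope (the "effect" of X)
sigma <- 5        # standard deviation of the error

x <- runif(n, min = 0, max = 10)                            # the X variable
y <- beta_0 + beta_1 * x + rnorm(n, mean = 0, sd = sigma)   # E[Y|X] + error

head(data.frame(x, y))
```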

1.2 Two specifications of a linear model II: the “statistical model” way

A second way of specifying a linear model is

\[\begin{align} Y &\sim N(\mu, \sigma)\\ \mathrm{E}(Y|X) &= \mu\\ \mu &= \beta_0 + \beta_1 X \tag{1.3} \end{align}\]

The first line states that the response variable \(Y\) is a random variable independently drawn from a normal distribution with mean \(\mu\) and variance \(\sigma^2\). This first line is the stochastic part of the statistical model. The second line simply states that \(\mu\) (the Greek letter “mu”) from the first line is the conditional mean or conditional expectation. The third line states how \(\mu\) is determined by a given value of \(X\). This is the linear predictor, which is the systematic (or deterministic) part of the statistical model.

These two ways of specifying the model encourage slightly different ways of thinking about how the data (the response variable \(Y\)) were generated. The linear model way “generates” data by constructing \(y_i\) from the \(x_i\) and then adding random, normal error. The statistical model way “generates” data by simply drawing a random variable from a normal distribution with a specified mean and variance. The statistical model way explicitly defines all parameters, including the parameters of the linear predictor (\(\beta_0\) and \(\beta_1\)), the conditional mean \(\mu\), and the standard deviation \(\sigma\). The linear model way only defines the parameters of the linear predictor, and often these are referred to as “the parameters” as if there were no other parameters. Because of the explicit statement of all parameters, the statistical model way of defining the linear model is more useful in that it is easily generalized to more complex models, including hierarchical models, generalized linear models, and Bayesian models. This is why I call it the “statistical model” way of specification.
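To make the difference in the generating recipe concrete, here is the same kind of simulation sketched the “statistical model” way: the linear predictor gives the conditional mean \(\mu\), and \(Y\) is drawn directly from a normal distribution with that mean. Again, all parameter values are made up.

```r
# a minimal sketch of generating fake data the "statistical model" way:
# compute the conditional mean, then draw Y from N(mu, sigma)
# (parameter values and sample size are made up)
set.seed(1)
n <- 100
beta_0 <- 105.7
beta_1 <- -3.48
sigma <- 5

x <- runif(n, min = 0, max = 10)
mu <- beta_0 + beta_1 * x             # systematic part: the conditional mean
y <- rnorm(n, mean = mu, sd = sigma)  # stochastic part: a draw from N(mu, sigma)

head(data.frame(x, y))
```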

1.3 Statistical models are used for prediction, explanation, and description

Researchers typically use statistical models to understand relationships between one or more \(Y\) variables and one or more \(X\) variables. These relationships include

  1. Descriptive modeling. Sometimes a researcher merely wants to describe the relationship between \(Y\) and a set of \(X\) variables, perhaps to discover patterns. For example, the arrival date of a spring migrant bird (\(Y\)) as a function of sex (\(X_1\)) and age (\(X_2\)) might show that males and younger individuals arrive earlier. Importantly, if another \(X\) variable is added to the model (or one is dropped), the coefficients, and therefore the precise description, will change. That is, the interpretation of a coefficient as a descriptor is conditional on the other covariates (\(X\) variables) in the model. In a descriptive model, there is no implication of causal effects and the goal is not prediction. Nevertheless, it is very hard for humans to discuss a descriptive model without using causal language, which probably means that it is hard for us to think of these models as mere description. Like natural history, descriptive models are useful for identifying patterns in want of an explanation, which can then be pursued with more explicit causal models, including experiments.

  2. Predictive modeling. Predictive modeling is very common in applied research. For example, fisheries researchers might model the relationship between population density and habitat variables to predict which subset of ponds in a region are most suitable for brook trout (Salvelinus fontinalis) reintroduction. The goal is to build a model with minimal prediction error, which is the error between predicted and actual values for a future sample. In predictive modeling, the \(X\) (“predictor”) variables are largely instrumental – how these are related to \(Y\) is not a goal of the modeling, although sometimes an investigator may be interested in the relative importance among the \(X\) for predicting \(Y\) (for example, collecting the data may be time consuming, expensive, or environmentally destructive, so knowing which subset of the \(X\) are most important for predicting \(Y\) is a useful strategy).

  3. Explanatory (causal) modeling. Very often, researchers are explicitly interested in how the \(X\) variables are causally related to \(Y\). The fisheries researchers that want to reintroduce trout may want to develop and manage a set of ponds to maintain healthy trout populations. This active management requires intervention to change habitat traits in a direction, and with a magnitude, that causes the desired response. This model is predictive – a specific change in \(X\) predicts a specific response in \(Y\) – because the coefficients of the model provide knowledge of how the system functions – how changes in the inputs cause changes in the output. Causal interpretation of model coefficients requires a set of strong assumptions about the \(X\) variables in the model. These assumptions are typically met in experimental designs but not in observational designs.

With observational designs, biologists are often not very explicit about which of these is the goal of the modeling and use a combination of descriptive, predictive, and causal language to describe and discuss results. Many papers read as if the researchers intend explanatory inference but, because of norms within the biology community, mask this intention with “predictive” language. Here, I advocate embracing explicit, explanatory modeling by being very transparent about the model’s goal and assumptions.

1.4 Model fitting

To use a statistical model to describe, predict, or explain, we need to fit the model to data in order to estimate its parameters. A linear model fit to some data is

\[\begin{align} \hat{y}_i &= b_0 + b_1 x_i\\ \mathrm{E}[Y|X=x_i] &= \hat{y}_i \tag{1.4} \end{align}\]

\(\hat{y}_i\) (“y hat”) is the predicted value of individual \(i\), and \(b_0\) and \(b_1\) are the coefficients of the model fit (though technically \(b_0\) is not a coefficient). Sometimes \(\hat{y}_i\) is simply called “the prediction”. The second line above simply states that the conditional expectation is the prediction (read as “the conditional expectation of \(Y\), given that \(X\) is assigned the value \(x_i\), is y hat”).

If our goal is inference – to infer something about the “population” from the sample using the fit model – then \(\hat{y}_i\) is the point estimate of the parameter \(\mu_i\), and the coefficients \(b_0\) and \(b_1\) are point estimates of the parameters \(\beta_0\) and \(\beta_1\). I put “population” in quotes because it is a very abstract concept. Throughout this book, I use Greek letters to refer to a theoretical parameter and Roman letters to refer to point estimates.
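Here is a minimal sketch in R of fitting a linear model and extracting the point estimates; the data are fake, generated with made-up parameter values as in the simulations above.

```r
# a minimal sketch of fitting a linear model to fake data
# (data-generating values are made up)
set.seed(1)
n <- 100
x <- runif(n, min = 0, max = 10)
y <- 105.7 - 3.48 * x + rnorm(n, sd = 5)

fit <- lm(y ~ x)     # fit the linear model
coef(fit)            # b_0 and b_1: point estimates of beta_0 and beta_1
head(predict(fit))   # y-hat: the predicted (conditional mean) values
```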

Throughout this text, I recommend reporting and interpreting interval estimates of the point estimate. A confidence interval is a type of interval estimate. A confidence interval of a parameter is a measure of the uncertainty in the estimate. A 95% confidence interval has a 95% probability (in the sense of long-run frequency) of containing the parameter. This probability is a property of the population of intervals that could be computed using the same sampling and measuring procedure. It is not correct, without further assumptions, to state that there is a 95% probability that the parameter lies within the interval. Perhaps a more useful interpretation is that the interval is a compatibility interval in that it contains the range of estimates that are compatible with the data, in the sense that a \(t\)-test would not reject the null hypothesis of a difference between the estimate and any value within the interval (this interpretation does not imply anything about the true value).
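The long-run frequency interpretation can be checked by simulation. The sketch below (with made-up parameter values) repeatedly generates fake data from a known model and counts how often the 95% interval for the slope captures the true value.

```r
# a minimal sketch of the long-run frequency interpretation of a 95% CI
# (all data-generating values are made up)
set.seed(1)
n_sim <- 2000
beta_1 <- -3.48                       # the "true" slope
covered <- replicate(n_sim, {
  x <- runif(50, 0, 10)
  y <- 105.7 + beta_1 * x + rnorm(50, sd = 5)
  ci <- confint(lm(y ~ x))["x", ]     # 95% CI for the slope
  ci[1] <= beta_1 && beta_1 <= ci[2]  # does the interval contain the true value?
})
mean(covered)   # close to 0.95
```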


Figure 1.2: HarrellPlot of vole data.

For the model fit to the data in Figure 1.1B, the coefficient of \(X\) is the slope of the line. Perhaps surprisingly, we can fit a model like equation (1.2) to data in which the \(X\) variable is categorical. A simple example is an experiment measuring the effect of antioxidants (vitamins C and E) on lifespan in voles (Fig. 1.2). In this experiment, the \(X\) variable is categorical, with three levels: “Control”, “Vitamin_E” and “Vitamin_C”. Categorical \(X\) variables are often called factors. The trick to using a statistical model with categorical \(X\) is to recode the factor levels into numbers – how this is done is explained in Chapter xxx. When the \(X\) variable is categorical, the coefficients of the \(X\) are differences in group means. The linear model fit to the vole data has two coefficients, one for vitamin E and one for vitamin C. The estimate and uncertainty of these two coefficients are shown in the top part of Figure 1.2. The bottom part shows the raw data, as well as the group (factor level) means and the uncertainty in the estimate of these means.
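The sketch below shows the idea with simulated data (made-up group means and sample sizes, not the actual vole data): with a three-level factor, the fitted intercept is the mean of the reference group and the two coefficients are differences between each treatment mean and the reference mean.

```r
# a minimal sketch of a linear model with a categorical X
# (made-up group means, sd, and n -- not the real vole data)
set.seed(1)
n_per_group <- 30
treatment <- factor(rep(c("Control", "Vitamin_E", "Vitamin_C"), each = n_per_group),
                    levels = c("Control", "Vitamin_E", "Vitamin_C"))
group_means <- c(Control = 650, Vitamin_E = 600, Vitamin_C = 580)   # made up
lifespan <- rnorm(length(treatment),
                  mean = group_means[as.character(treatment)], sd = 100)

fit <- lm(lifespan ~ treatment)
coef(fit)
# (Intercept)        -- mean of the reference group (Control)
# treatmentVitamin_E -- difference between the Vitamin_E and Control means
# treatmentVitamin_C -- difference between the Vitamin_C and Control means
tapply(lifespan, treatment, mean)   # compare with the coefficients above
```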

1.5 A mean is the simplest model

The simplest possible model that can be fit to the data is

\[\begin{equation} \mathrm{E}[Y] = b_0 \tag{1.5} \end{equation}\]

which is simply the mean of \(Y\), or, more specifically, the unconditional mean of \(Y\), since its value is not conditional on any value of \(X\).
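In R, this is an intercept-only model; a minimal sketch with made-up data shows that the single fitted coefficient is the sample mean.

```r
# a minimal sketch: the intercept-only model estimates the unconditional mean
set.seed(1)
y <- rnorm(20, mean = 10, sd = 2)   # made-up data
fit <- lm(y ~ 1)                    # a model with no X variables
coef(fit)                           # b_0
mean(y)                             # equal to b_0
```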

1.6 Assumptions for inference with a statistical model

Inference refers to using the fit model to generalize from the sample to the population, which assumes that the response is drawn from some specified probability distribution (Normal, or Poisson, or Bernoulli, etc.). Throughout this text, I emphasize reporting and interpreting point estimates and confidence intervals. Another kind of inference is a significance test, which is the computation of the probability of “seeing the data” or something more extreme than the data, given a specified null hypothesis. A significance test results in a \(p\)-value, which can be reported with the point estimate and confidence interval. Somewhat related to a significance test is a hypothesis test, or what is now often pejoratively called a Null-Hypothesis Significance Test (NHST), in which the \(p\)-value from a significance test is compared to a pre-specified error rate called \(\alpha\). NHST may be useful for some very limited kinds of science but, in general, is not useful for most biological research and, instead, leads to large misconceptions. A general rule of thumb is, do not compare a reported \(p\)-value to \(\alpha\). Inference with a statistical model rests on the following general assumptions (assumptions specific to the linear model are given in Section 1.7).

  1. The data were generated by a process that is “linear in the parameters”, which means that the different components of the model are added together. This additive part of the model containing the parameters is the linear predictor in specifications (1.2) and (1.3) above. For example, a cubic polynomial model
\[\begin{equation} \mathrm{E}(Y|X) = \beta_0 + \beta_1 X + \beta_2 X^2 + \beta_3 X^3 \end{equation}\]

is a linear model, even though the function is non-linear, because the different components are added. Because a linear predictor is additive, it can be compactly defined using matrix algebra

\[\begin{equation} \mathrm{E}(Y|X) = \mathbf{X}\boldsymbol{\beta} \end{equation}\]

where \(\mathbf{X}\) is the model matrix and \(\boldsymbol{\beta}\) is the vector of parameters. We discuss these more in chapter xxx.
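A minimal sketch in R of this matrix form, using the cubic polynomial above with made-up \(x\) values and a made-up parameter vector:

```r
# a minimal sketch of the model matrix for a cubic polynomial
# (x values and beta are made up)
x <- 1:5
X <- model.matrix(~ x + I(x^2) + I(x^3))   # columns: intercept, x, x^2, x^3
beta <- c(1, 0.5, -0.2, 0.05)              # made-up parameter vector
X %*% beta                                 # the linear predictor, E(Y|X) = X beta
```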

A Generalized Linear Model (GLM) has the form \(g(\mu_i) = \eta_i\) where \(\eta\) (the Greek letter “eta”) is the linear predictor

\[\begin{equation} \eta = \mathbf{X}\boldsymbol{\beta} \end{equation}\]

GLMs are extensions of linear models. There are non-linear models that are not linear in the parameters, that is, the predictor is not a simple dot product of the model matrix and a vector of parameters. For example, the Michaelis-Menten model is a non-linear model

\[\begin{equation} \mathrm{E}(Y|X) = \frac{\beta_1 X}{\beta_2 + X} \end{equation}\]

that is non-linear in the parameters because the parts are not added together. This text covers linear models and generalized linear models, but not non-linear models that are also non-linear in the parameters.

  2. The draws from the probability distribution are independent. Independence implies uncorrelated observations (the \(Y\)), that is, for any two observations with the same value of \(X\), we cannot predict the value of one given the value of the other. For example, in the vole data above, uncorrelated implies that we cannot predict the lifespan of one vole within the Vitamin E treatment given the lifespan of another vole in the Vitamin E treatment. For linear models, this assumption is often stated as “independent errors” (the \(\varepsilon\) in model (1.2)) instead of independent observations.

There are lots of reasons that observations might be correlated. In the vole example, perhaps the voles were housed in batches of 5 individuals, and slight differences in the environment among the housing containers caused all the voles in some containers to have shorter lifespans than expected given their treatment assignment and all voles in other containers to have longer lifespans than expected given their treatment assignment. More generally, if there are measures both within and among experimental units (field sites or humans or rats), then we’d expect the measures within the same unit to err from the model in the same direction. Multiple measures within experimental units (a site or individual) create “clustered” observations. Lack of independence or clustered observations can be modeled using models with random effects. These models go by many names including linear mixed models (common in ecology), hierarchical models, multilevel models, and random effects models. A linear mixed model is a variation of model (1.2). This text introduces linear mixed models in chapter xxx.
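As a sketch of what such a model looks like, the code below simulates hypothetical clustered data (a made-up batch effect, not the real vole data) and fits a random-intercept model, assuming the lme4 package is installed.

```r
# a minimal sketch of a linear mixed model for clustered observations,
# assuming the lme4 package; the data are simulated and hypothetical
library(lme4)
set.seed(1)

n_batch <- 10       # hypothetical housing containers
n_per_batch <- 5    # voles per container
d <- data.frame(
  batch = factor(rep(1:n_batch, each = n_per_batch)),
  treatment = factor(rep(rep(c("Control", "Vitamin_E"), length.out = n_batch),
                         each = n_per_batch))
)
batch_effect <- rnorm(n_batch, mean = 0, sd = 50)   # shared container effect
d$lifespan <- 650 - 30 * (d$treatment == "Vitamin_E") +
  batch_effect[as.integer(d$batch)] + rnorm(nrow(d), sd = 80)

# the random intercept (1 | batch) models the correlation among voles that
# shared a container
fit <- lmer(lifespan ~ treatment + (1 | batch), data = d)
summary(fit)
```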

Measures taken from sites that are closer together, from times that are closer together, or from species that are more closely related will tend to have more similar values than measures taken from sites, times, or species that are farther apart. Space, time, and phylogeny create spatial, temporal, and phylogenetic autocorrelation. Correlated error due to space, time, or phylogeny can be modeled with Generalized Least Squares (GLS) models. A GLS model is a variation of model (1.2).

1.7 Specific assumptions for inference with a linear model

  1. Constant variance or homoskedasticity. The most common way of thinking about this is that the error term \(\varepsilon\) has constant variance, which is a short way of saying that random draws of \(\varepsilon\) in model (1.2) are all from the same (or identical) distribution. This is explicitly stated in the second line of model specification (1.2), \(\varepsilon \sim N(0, \sigma)\). If we were to think about this using model specification (1.3), then homoskedasticity means that \(\sigma\) in \(N(\mu, \sigma)\) is constant for all observations (or that the conditional probability distributions are identical, where conditional would mean adjusted for \(\mu\)).

Many biological processes generate data in which the error is a function of the mean. For example, measures of biological variables that grow, such as lengths of body parts or population size, have variances that “grow” with the mean. Or, measures of counts, such as the number of cells damaged by toxin, the number of eggs in a nest, or the number of mRNA transcripts per cell have variances that are a function of the mean. Heteroskedastic error can be modeled with Generalized Least Squares, a generalization of the linear model, and with Generalized Linear Models (GLM), which are “extensions” of the classical linear model.
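A minimal sketch of how mean-dependent variance arises and how a generalized linear model handles it, using made-up count data (the dose variable and parameter values are hypothetical):

```r
# a minimal sketch: counts whose variance grows with the mean, fit with a
# Poisson GLM (all values are made up)
set.seed(1)
n <- 200
dose <- runif(n, min = 0, max = 2)        # hypothetical toxin dose
mu <- exp(1 + 0.8 * dose)                 # mean count grows with dose
damaged_cells <- rpois(n, lambda = mu)    # Poisson: variance equals the mean

fit <- glm(damaged_cells ~ dose, family = poisson(link = "log"))
coef(fit)   # effects are on the log (link) scale
```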

  2. Normal or Gaussian probability distribution. As above, the most common way of thinking about this is that the error term \(\varepsilon\) is normal. Using model specification (1.3), we’d say the conditional probability distribution of the response is normal. A normal probability distribution implies that 1) the response is continuous and 2) the conditional probability is symmetric around \(\mu_i\). If the conditional probability distribution has a long left or right tail, it is skewed left or right. Counts (number of cells, number of eggs, number of mRNA transcripts) and binary responses (successful escape or successful infestation of a host) are not continuous and often have asymmetric probability distributions that are skewed to the right. While both can sometimes be reasonably modeled using a linear model, they are more often modeled using generalized linear models, which, again, are extensions of the linear model in equation (1.2).

A common misconception is that inference from a linear model assumes that the response (\(Y\)) is normally distributed. Both the “linear model” and “statistical model” ways of specifying the model show precisely why this is wrong. Model (1.2) states explicitly that it is the error that has the normal distribution – the distribution of \(Y\) reflects both the distribution of \(X\) and the error. Model (1.3) states that the conditional outcome has a normal distribution, that is, the distribution after adjusting for variation in \(X\).
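A quick simulation illustrates the point: with a two-group \(X\) and a large effect, the error (and the residuals) are normal by construction, but the marginal distribution of \(Y\) is bimodal. All values below are made up.

```r
# a minimal sketch: normal error, non-normal (bimodal) marginal Y
# (all values are made up)
set.seed(1)
n <- 10000
x <- rbinom(n, size = 1, prob = 0.5)   # a two-group (0/1) X variable
y <- 10 + 20 * x + rnorm(n, sd = 2)    # normal error, large effect of X

par(mfrow = c(1, 2))
hist(residuals(lm(y ~ x)), breaks = 50, main = "residuals: normal")
hist(y, breaks = 50, main = "Y: bimodal, not normal")
```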

1.8 “Statistical model” or “regression model”?

Statistical modeling terminology can be confusing. The \(X\) variables in a statistical model may be quantitative (continuous or integers) or categorical (names or qualitative amounts) or some mix of the two. Linear models with all quantitative independent variables are often called “regression models.” Linear models with all categorical independent variables are often called “ANOVA models.” Linear models with a mix of quantitative and categorical variables are often called “ANCOVA models” if the focus is on one of the categorical \(X\), or “regression models” if there tend to be many independent variables. Other patterns occur. For example, “ANCOVA models” often include interaction effects but “regression models” rarely do. To avoid thinking of statistical analysis as “regression vs. ANOVA” (the type of thinking encouraged by many textbooks in biostatistics), I will most often use the term “statistical model” for general usage, and use a more specific term only to emphasize something about the model in that particular context.

1.9 GLM vs. GLM vs. GLS

Linear models are sometimes called “general linear models,” with the abbreviation GLM. This is unfortunate because the abbreviation GLM usually refers to generalized linear models. Regardless, don’t confuse either version of GLM with GLS, which is the abbreviation for generalized least squares. GLS generalizes the linear model to allow for heteroskedastic and/or correlated error (using the “linear model” way of thinking about model specification).