Appendix A — Appendix

A.1 Least Squares Estimates

Below are the mathematical details for deriving the least-squares estimates for the slope (\(\beta_1\)) and intercept (\(\beta_0\)). We obtain the estimates \(\hat{\beta}_1\) and \(\hat{\beta}_0\) by finding the values that minimize the sum of squared residuals (Equation A.1).

\[ SSR = \sum\limits_{i=1}^{n}[y_i - \hat{y}_i]^2 = \sum\limits_{i=1}^{n}[y_i - (\hat{\beta}_0 + \hat{\beta}_1 x_i)]^2 = \sum\limits_{i=1}^{n}[y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i]^2 \tag{A.1}\]

We find the values of \(\hat{\beta}_1\) and \(\hat{\beta}_0\) that minimize Equation A.1 by taking the partial derivatives with respect to each coefficient and setting them equal to 0. The values of \(\hat{\beta}_1\) and \(\hat{\beta}_0\) that make both partial derivatives equal to 0 are the values that minimize the sum of squared residuals. The partial derivatives are

\[ \begin{aligned} &\frac{\partial \text{SSR}}{\partial \hat{\beta}_0} = -2 \sum\limits_{i=1}^{n}(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i) \\[10pt] &\frac{\partial \text{SSR}}{\partial \hat{\beta}_1} = -2 \sum\limits_{i=1}^{n}x_i(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i) \end{aligned} \tag{A.2}\]

Let’s begin by finding \(\hat{\beta}_0\).

\[ \begin{aligned} \frac{\partial \text{SSR}}{\partial \hat{\beta}_0} &= -2 \sum\limits_{i=1}^{n}(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i) = 0 \\[10pt] &\Rightarrow \sum\limits_{i=1}^{n}y_i - \sum\limits_{i=1}^{n} \hat{\beta}_0 - \sum\limits_{i=1}^{n} \hat{\beta}_1 x_i = 0 \\[10pt] &\Rightarrow \sum\limits_{i=1}^{n}y_i - n\hat{\beta}_0 - \hat{\beta}_1\sum\limits_{i=1}^{n}x_i = 0 \\[10pt] &\Rightarrow \sum\limits_{i=1}^{n}y_i - \hat{\beta}_1\sum\limits_{i=1}^{n}x_i = n\hat{\beta}_0 \\[10pt] &\Rightarrow \frac{1}{n}\Big(\sum\limits_{i=1}^{n}y_i - \hat{\beta}_1\sum\limits_{i=1}^{n}x_i\Big) = \hat{\beta}_0 \\[10pt] &\Rightarrow \bar{y} - \hat{\beta}_1 \bar{x} = \hat{\beta}_0 \\ \end{aligned} \tag{A.3}\]

Thus \(\hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x}\).

Note

\(\bar{y} = \frac{1}{n}\sum\limits_{i=1}^{n}y_i \hspace{2mm} \text{ and }\hspace{2mm}\bar{x} = \frac{1}{n}\sum\limits_{i=1}^{n}x_i\)

Now, let’s find \(\hat{\beta}_1\).

\[ \begin{aligned} &\frac{\partial \text{SSR}}{\partial \hat{\beta}_1} = -2 \sum\limits_{i=1}^{n}x_i(y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i) = 0 \\[10pt] &\Rightarrow \sum\limits_{i=1}^{n}x_iy_i - \hat{\beta}_0\sum\limits_{i=1}^{n}x_i - \hat{\beta}_1\sum\limits_{i=1}^{n}x_i^2 = 0 \\[10pt] \text{(Fill in }\hat{\beta}_0\text{)}&\Rightarrow \sum\limits_{i=1}^{n}x_iy_i - (\bar{y} - \hat{\beta}_1\bar{x})\sum\limits_{i=1}^{n}x_i - \hat{\beta}_1\sum\limits_{i=1}^{n}x_i^2 = 0 \\[10pt] &\Rightarrow \sum\limits_{i=1}^{n}x_iy_i = (\bar{y} - \hat{\beta}_1\bar{x})\sum\limits_{i=1}^{n}x_i + \hat{\beta}_1\sum\limits_{i=1}^{n}x_i^2 \\[10pt] &\Rightarrow \sum\limits_{i=1}^{n}x_iy_i = \bar{y}\sum\limits_{i=1}^{n}x_i - \hat{\beta}_1\bar{x}\sum\limits_{i=1}^{n}x_i + \hat{\beta}_1\sum\limits_{i=1}^{n}x_i^2\\[10pt] &\Rightarrow \sum\limits_{i=1}^{n}x_iy_i = n\bar{x}\bar{y} - \hat{\beta}_1n\bar{x}^2 + \hat{\beta}_1\sum\limits_{i=1}^{n}x_i^2\\[10pt] &\Rightarrow \sum\limits_{i=1}^{n}x_iy_i - n\bar{y}\bar{x} = \hat{\beta}_1\sum\limits_{i=1}^{n}x_i^2 - \hat{\beta}_1n\bar{x}^2\\[10pt] &\Rightarrow \sum\limits_{i=1}^{n}x_iy_i - n\bar{y}\bar{x} = \hat{\beta}_1\Big(\sum\limits_{i=1}^{n}x_i^2 -n\bar{x}^2\Big)\\[10pt] &\Rightarrow \frac{\sum\limits_{i=1}^{n}x_iy_i - n\bar{y}\bar{x}}{\sum\limits_{i=1}^{n}x_i^2 -n\bar{x}^2} = \hat{\beta}_1 \end{aligned} \tag{A.4}\]

To write \(\hat{\beta}_1\) in a form that’s more recognizable, we will use the following:

\[ \sum_{i=1}^n x_iy_i - n\bar{y}\bar{x} = \sum_{i=1}^n(x_i - \bar{x})(y_i - \bar{y}) = (n-1)\text{Cov}(x,y) \tag{A.5}\]

\[ \sum_{i=1}^n x_i^2 - n\bar{x}^2 = \sum_{i=1}^n(x_i - \bar{x})^2 = (n-1)s_x^2 \tag{A.6}\]

where \(\text{Cov}(x,y)\) is the covariance of \(x\) and \(y\), and \(s_x^2\) is the sample variance of \(x\) (\(s_x\) is the sample standard deviation).

Thus, applying Equation A.5 and Equation A.6, we have

\[ \begin{aligned} \hat{\beta}_1 &= \frac{\sum\limits_{i=1}^{n}x_iy_i - n\bar{y}\bar{x}}{\sum\limits_{i=1}^{n}x_i^2 -n\bar{x}^2} \\[10pt] &= \frac{\sum\limits_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sum\limits_{i=1}^{n}(x_i-\bar{x})^2}\\[10pt] &= \frac{(n-1)\text{Cov}(x,y)}{(n-1)s_x^2}\\[10pt] &= \frac{\text{Cov}(x,y)}{s_x^2} \end{aligned} \tag{A.7}\]

The correlation between \(x\) and \(y\) is \(r = \frac{\text{Cov}(x,y)}{s_x s_y}\). Thus, \(\text{Cov}(x,y) = r s_xs_y\), where \(s_y\) is the sample standard deviation of \(y\). Plugging this into Equation A.7, we have

\[ \hat{\beta}_1 = \frac{\text{Cov}(x,y)}{s_x^2} = r\frac{s_ys_x}{s_x^2} = r\frac{s_y}{s_x} \]

Estimates for \(\hat{\beta}_0\) and \(\hat{\beta}_1\)

The least-squares estimates for the intercept and slope are

\[ \hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x} \hspace{20mm} \hat{\beta}_1 = r\frac{s_y}{s_x} \]
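As a quick numerical illustration (not part of the original derivation), the following Python sketch computes \(\hat{\beta}_0\) and \(\hat{\beta}_1\) from these closed-form expressions on simulated data and checks them against numpy's least-squares fit. The data and variable names are purely illustrative.

```python
import numpy as np

# Simulated data; the values here are purely illustrative.
rng = np.random.default_rng(42)
x = rng.normal(10, 2, size=100)
y = 3 + 1.5 * x + rng.normal(0, 1, size=100)

# Closed-form estimates: beta1_hat = r * s_y / s_x, beta0_hat = ybar - beta1_hat * xbar
r = np.corrcoef(x, y)[0, 1]                       # sample correlation
s_x, s_y = np.std(x, ddof=1), np.std(y, ddof=1)   # sample standard deviations
beta1_hat = r * s_y / s_x
beta0_hat = y.mean() - beta1_hat * x.mean()

# Compare with numpy's built-in least-squares fit (returns slope, then intercept)
slope, intercept = np.polyfit(x, y, deg=1)
print(np.allclose([beta0_hat, beta1_hat], [intercept, slope]))  # True
```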

A.2 Sum of Squares

From Section 4.7.2, we have the following

\[ \begin{aligned} SST &= SSM + SSR \\ \sum_{i=1}^{n}(y_i - \bar{y})^2 &= \sum_{i=1}^{n}(\hat{y}_i - \bar{y})^2 + \sum_{i=1}^{n}(y_i -\hat{y}_i)^2 \end{aligned} \tag{A.8}\]

where \(n\) is the number of observations. We will show why Equation A.8 holds by starting with the fact that \((y_i - \bar{y}) = (\hat{y}_i - \bar{y}) + (y_i - \hat{y}_i)\). Squaring both sides gives

\[ \begin{aligned} (y_i - \bar{y})^2 &= [(\hat{y}_i - \bar{y}) + (y_i - \hat{y}_i)]^2\\[10pt] & = (\hat{y}_i - \bar{y})^2 + 2(\hat{y}_i - \bar{y})(y_i - \hat{y}_i) + (y_i - \hat{y}_i)^2 \end{aligned} \]

We can sum over both sides to get

\[ \sum_{i=1}^n(y_i - \bar{y})^2 = \sum_{i=1}^n(\hat{y}_i - \bar{y})^2 + 2\sum_{i=1}^n(\hat{y}_i - \bar{y})(y_i - \hat{y}_i) + \sum_{i=1}^n(y_i - \hat{y}_i)^2 \tag{A.9}\]

For now, let’s focus on the middle term, \(2\sum_{i=1}^n(\hat{y}_i - \bar{y})(y_i - \hat{y}_i)\):

\[ \begin{aligned} 2\sum_{i=1}^n(\hat{y}_i - \bar{y})(y_i - \hat{y}_i) &= 2\sum_{i=1}^n(\hat{y}_iy_i - \hat{y}_i^2 - \bar{y}y_i + \bar{y}\hat{y}_i)\\[10pt] & = 2\sum_{i=1}^n\hat{y}_i(y_i - \hat{y}_i) - 2\bar{y}\sum_{i=1}^n(y_i - \hat{y}_i) \\[10pt] &=2\sum_{i=1}^n\hat{y}_ie_i - 2\bar{y}\sum_{i=1}^ne_i\\[10pt] &= 0 \quad\Big(\text{since } \sum_{i=1}^ne_i = 0 \text{ and } \sum_{i=1}^nx_ie_i = 0\text{, which imply } \sum_{i=1}^n\hat{y}_ie_i = 0\Big) \end{aligned} \tag{A.10}\]

Plugging Equation A.10 back into Equation A.9, we have

\[ \begin{aligned} \sum_{i=1}^n(y_i - \bar{y})^2 &= \sum_{i=1}^n(\hat{y}_i - \bar{y})^2 + 0 + \sum_{i=1}^n(y_i - \hat{y}_i)^2 \\[10pt] &= \sum_{i=1}^n(\hat{y}_i - \bar{y})^2 + \sum_{i=1}^n(y_i - \hat{y}_i)^2\\[10pt] \end{aligned} \]

Thus \(SST = SSM + SSR\).
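To see the decomposition hold numerically, here is a small Python sketch (ours, on simulated data with illustrative names) that computes SST, SSM, and SSR from a fitted simple regression line and verifies that SST equals SSM plus SSR.

```python
import numpy as np

# Simulated data and a simple least-squares fit; values are illustrative.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 2 + 0.5 * x + rng.normal(size=200)
slope, intercept = np.polyfit(x, y, deg=1)
y_hat = intercept + slope * x

SST = np.sum((y - y.mean()) ** 2)       # total sum of squares
SSM = np.sum((y_hat - y.mean()) ** 2)   # model sum of squares
SSR = np.sum((y - y_hat) ** 2)          # residual sum of squares
print(np.isclose(SST, SSM + SSR))       # True: SST = SSM + SSR
```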

A.3 Matrix representation of multiple linear regression

This section provides the details for the matrix notation for multiple linear regression. We assume the reader has familiarity with some linear algebra. Please see Chapter 1 of An Introduction to Statistical Learning for a brief review of linear algebra.

A.4 Introduction

Suppose we have \(n\) observations. Let the \(i^{th}\) observation be \((x_{i1}, \ldots, x_{ip}, y_i)\), such that \(x_{i1}, \ldots, x_{ip}\) are the explanatory variables (predictors) and \(y_i\) is the response variable. We assume the data can be modeled using the least-squares regression model, such that the mean response for a given combination of explanatory variables follows the form in Equation A.11.

\[\mu\{y|x_1, \ldots, x_p\} = \beta_0 + \beta_1 x_1 + \dots + \beta_p x_p \tag{A.11}\]

We can write the response for the \(i^{th}\) observation as shown in Equation A.12.

\[y_i = \beta_0 + \beta_1 x_{i1} + \dots + \beta_p x_{ip} + \epsilon_i \tag{A.12}\]

such that \(\epsilon_i\) is the amount \(y_i\) deviates from \(\mu\{y|x_{i1}, \ldots, x_{ip}\}\), the mean response for a given combination of explanatory variables. We assume each \(\epsilon_i \sim N(0,\sigma^2)\), where \(\sigma^2\) is a constant variance for the distribution of the response \(y\) for any combination of explanatory variables \(x_1, \ldots, x_p\).

A.5 Matrix Representation for the Regression Model

We can represent the model using matrix notation.

\[\mathbf{Y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}\]
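Written out element by element (an expansion of the definitions above, with a leading column of ones in \(\mathbf{X}\) for the intercept), the quantities in this equation are

\[
\mathbf{Y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix}, \quad
\mathbf{X} = \begin{bmatrix} 1 & x_{11} & \dots & x_{1p} \\ 1 & x_{21} & \dots & x_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n1} & \dots & x_{np} \end{bmatrix}, \quad
\boldsymbol{\beta} = \begin{bmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_p \end{bmatrix}, \quad
\boldsymbol{\epsilon} = \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{bmatrix}
\]

so \(\mathbf{Y}\) and \(\boldsymbol{\epsilon}\) are \(n \times 1\), \(\mathbf{X}\) is \(n \times (p+1)\), and \(\boldsymbol{\beta}\) is \((p+1) \times 1\).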

Therefore the estimated response for a given combination of explanatory variables and the associated residuals can be written as

\[\begin{equation} \label{matrix_mean} \hat{\mathbf{Y}} = \mathbf{X}\hat{\boldsymbol{\beta}} \hspace{10mm} \mathbf{e} = \mathbf{Y} - \mathbf{X}\hat{\boldsymbol{\beta}} \end{equation}\]

A.6 Estimating the Coefficients

The least-squares model is the one that minimizes the sum of the squared residuals. Therefore, we want to find the coefficients \(\hat{\boldsymbol{\beta}}\) that minimize

\[\begin{equation} \label{sum_sq_resid} \sum\limits_{i=1}^{n} e_{i}^2 = \mathbf{e}^T\mathbf{e} = (\mathbf{Y} - \mathbf{X}\hat{\boldsymbol{\beta}})^T(\mathbf{Y} - \mathbf{X}\hat{\boldsymbol{\beta}}) \end{equation}\]

where \(\mathbf{e}^T\) is the transpose of the vector \(\mathbf{e}\). Expanding this expression, we have

\[\begin{equation} \label{model_equation} (\mathbf{Y} - \mathbf{X}\hat{\boldsymbol{\beta}})^T(\mathbf{Y} - \mathbf{X}\hat{\boldsymbol{\beta}}) = \mathbf{Y}^T\mathbf{Y} - \mathbf{Y}^T \mathbf{X}\hat{\boldsymbol{\beta}} - \hat{\boldsymbol{\beta}}{}^{T}\mathbf{X}^T\mathbf{Y} + \hat{\boldsymbol{\beta}}{}^{T}\mathbf{X}^T\mathbf{X} \hat{\boldsymbol{\beta}} \end{equation}\]

Note that \((\mathbf{Y}^T\mathbf{X}\hat{\boldsymbol{\beta}})^T = \hat{\boldsymbol{\beta}}{}^{T}\mathbf{X}^T\mathbf{Y}\). Since these are both scalars (i.e. \(1\times 1\) matrices), \(\mathbf{Y}^T\mathbf{X}\hat{\boldsymbol{\beta}} = \hat{\boldsymbol{\beta}}{}^{T}\mathbf{X}^T\mathbf{Y}\). Thus, (\(\ref{model_equation}\)) becomes

\[\begin{equation} \mathbf{Y}^T\mathbf{Y} - 2 \hat{\boldsymbol{\beta}}{}^{T}\mathbf{X}^T\mathbf{Y} + \hat{\boldsymbol{\beta}}{}^{T}\mathbf{X}^T\mathbf{X} \hat{\boldsymbol{\beta}} \end{equation}\]

Since we want to find the \(\hat{\boldsymbol{\beta}}\) that minimizes (\(\ref{sum_sq_resid}\)), we will find the value of \(\hat{\boldsymbol{\beta}}\) such that the derivative with respect to \(\hat{\boldsymbol{\beta}}\) is equal to 0.

\[\begin{equation} \label{beta-hat} \begin{aligned} \frac{\partial \mathbf{e}^T\mathbf{e}}{\partial \hat{\boldsymbol{\beta}}} & = \frac{\partial}{\partial \hat{\boldsymbol{\beta}}}(\mathbf{Y}^T\mathbf{Y} - 2 \hat{\boldsymbol{\beta}}{}^T\mathbf{X}^T\mathbf{Y} + \hat{\boldsymbol{\beta}}{}^{T}\mathbf{X}^T\mathbf{X}\hat{\boldsymbol{\beta}}) = 0 \\[10pt] &\Rightarrow - 2 \mathbf{X}^T\mathbf{Y} + 2 \mathbf{X}^T\mathbf{X}\hat{\boldsymbol{\beta}} = 0 \\[10pt] & \Rightarrow 2 \mathbf{X}^T\mathbf{Y} = 2 \mathbf{X}^T\mathbf{X}\hat{\boldsymbol{\beta}} \\[10pt] & \Rightarrow \mathbf{X}^T\mathbf{Y} = \mathbf{X}^T\mathbf{X}\hat{\boldsymbol{\beta}} \\[10pt] & \Rightarrow (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{X}\hat{\boldsymbol{\beta}} \\[10pt] & \Rightarrow (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y} = \mathbf{I}\hat{\boldsymbol{\beta}} = \hat{\boldsymbol{\beta}} \end{aligned} \end{equation}\]

Thus, the estimate of the model coefficients is \(\hat{\boldsymbol{\beta}} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y}\).
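The following Python sketch (ours; simulated data and illustrative names) solves the normal equations \(\mathbf{X}^T\mathbf{X}\hat{\boldsymbol{\beta}} = \mathbf{X}^T\mathbf{Y}\) directly and confirms the result matches numpy's least-squares solver.

```python
import numpy as np

# Simulated multiple regression data; values are illustrative.
rng = np.random.default_rng(0)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])  # n x (p+1) design matrix
beta_true = np.array([1.0, 2.0, -0.5, 0.3])
Y = X @ beta_true + rng.normal(size=n)

# Solve the normal equations X^T X beta_hat = X^T Y
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Compare with numpy's least-squares solver
beta_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.allclose(beta_hat, beta_lstsq))  # True
```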

A.7 Variance-covariance matrix of the coefficients

We will use two properties to derive the form of the variance-covariance matrix of the coefficients:

  1. \(E[\boldsymbol{\epsilon}\boldsymbol{\epsilon}^T] = \sigma^2\mathbf{I}\)
  2. \(\hat{\boldsymbol{\beta}} = \boldsymbol{\beta} + (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\boldsymbol{\epsilon}\)

First, we will show that \(E[\boldsymbol{\epsilon}\boldsymbol{\epsilon}^T] = \sigma^2\mathbf{I}\).

\[ \begin{aligned} E[\boldsymbol{\epsilon}\boldsymbol{\epsilon}^T] &= E \begin{bmatrix}\epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{bmatrix}\begin{bmatrix}\epsilon_1 & \epsilon_2 & \dots & \epsilon_n \end{bmatrix} \\[10pt] & = E \begin{bmatrix} \epsilon_1^2 & \epsilon_1 \epsilon_2 & \dots & \epsilon_1 \epsilon_n \\ \epsilon_2 \epsilon_1 & \epsilon_2^2 & \dots & \epsilon_2 \epsilon_n \\ \vdots & \vdots & \ddots & \vdots \\ \epsilon_n \epsilon_1 & \epsilon_n \epsilon_2 & \dots & \epsilon_n^2 \end{bmatrix} \\[10pt] & = \begin{bmatrix} E[\epsilon_1^2] & E[\epsilon_1 \epsilon_2] & \dots & E[\epsilon_1 \epsilon_n] \\ E[\epsilon_2 \epsilon_1] & E[\epsilon_2^2] & \dots & E[\epsilon_2 \epsilon_n] \\ \vdots & \vdots & \ddots & \vdots \\ E[\epsilon_n \epsilon_1] & E[\epsilon_n \epsilon_2] & \dots & E[\epsilon_n^2] \end{bmatrix} \end{aligned} \tag{A.13}\]

Recall the regression assumption that the errors \(\epsilon_i\) are Normally distributed with mean 0 and variance \(\sigma^2\). Thus, \(E(\epsilon_i^2) = Var(\epsilon_i) = \sigma^2\) for all \(i\). Additionally, recall the regression assumption that the errors are uncorrelated, i.e. \(E(\epsilon_i \epsilon_j) = Cov(\epsilon_i, \epsilon_j) = 0\) for all \(i \neq j\). Using these assumptions, we can write Equation A.13 as

\[\begin{equation} E[\mathbf{\epsilon}\mathbf{\epsilon}^T] = \begin{bmatrix} \sigma^2 & 0 & \dots & 0 \\ 0 & \sigma^2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \sigma^2 \end{bmatrix} = \sigma^2 \mathbf{I} \end{equation}\]

where \(\mathbf{I}\) is the \(n \times n\) identity matrix.

Next, we show that \(\hat{\boldsymbol{\beta}} = \boldsymbol{\beta} + (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\boldsymbol{\epsilon}\).

Recall that \(\hat{\boldsymbol{\beta}} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y}\) and \(\mathbf{Y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}\). Then,

\[\begin{equation} \begin{aligned} \hat{\boldsymbol{\beta}} &= (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y} \\[10pt] &= (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T(\mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}) \\[10pt] &= \boldsymbol{\beta} + (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T \mathbf{\epsilon} \\ \end{aligned} \end{equation}\]

Using these two properties, we derive the form of the variance-covariance matrix for the coefficients. Note that, because \(E[\hat{\boldsymbol{\beta}}] = \boldsymbol{\beta}\), the variance-covariance matrix is \(E[(\hat{\boldsymbol{\beta}} - \boldsymbol{\beta})(\hat{\boldsymbol{\beta}} - \boldsymbol{\beta})^T]\).

\[\begin{equation} \begin{aligned} E[(\hat{\boldsymbol{\beta}} - \boldsymbol{\beta})(\hat{\boldsymbol{\beta}} - \boldsymbol{\beta})^T] &= E[(\boldsymbol{\beta} + (\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T \boldsymbol{\epsilon} - \boldsymbol{\beta})(\boldsymbol{\beta} + (\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T \boldsymbol{\epsilon} - \boldsymbol{\beta})^T]\\[10pt] & = E[(\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T \boldsymbol{\epsilon}\boldsymbol{\epsilon}^T\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}] \\[10pt] & = (\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T E[\boldsymbol{\epsilon}\boldsymbol{\epsilon}^T]\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\\[10pt] & = (\mathbf{X}^T\mathbf{X})^{-1} \mathbf{X}^T (\sigma^2\mathbf{I})\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\\ &= \sigma^2\mathbf{I}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\\[10pt] & = \sigma^2\mathbf{I}(\mathbf{X}^T\mathbf{X})^{-1}\\[10pt] & = \sigma^2(\mathbf{X}^T\mathbf{X})^{-1} \\ \end{aligned} \end{equation}\]
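To connect this result to the standard errors reported for fitted coefficients, here is a short Python sketch (ours, on simulated data with illustrative names) that estimates \(\sigma^2\) with \(RSS/(n-p-1)\) and forms \(\hat{\sigma}^2(\mathbf{X}^T\mathbf{X})^{-1}\).

```python
import numpy as np

# Simulated data; values are illustrative.
rng = np.random.default_rng(0)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
Y = X @ np.array([1.0, 2.0, -0.5, 0.3]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
resid = Y - X @ beta_hat
sigma2_hat = resid @ resid / (n - p - 1)      # estimated regression variance

# Estimated variance-covariance matrix of the coefficients: sigma2_hat * (X^T X)^{-1}
cov_beta = sigma2_hat * np.linalg.inv(X.T @ X)
std_errors = np.sqrt(np.diag(cov_beta))       # standard errors of the coefficients
print(std_errors)
```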

A.8 Model diagnostics

As in Section A.4, suppose we have \(n\) observations, where the \(i^{th}\) observation is \((x_{i1}, \ldots, x_{ip}, y_i)\), and the data follow the regression model in Equation A.12 with errors \(\epsilon_i \sim N(0,\sigma^2)\).

A.8.1 Hat Matrix & Leverage

Combining (\(\ref{matrix_mean}\)) and (\(\ref{beta-hat}\)), we can write \(\hat{\mathbf{Y}}\) as the following:

\[\begin{equation} \label{y-hat} \begin{aligned} \hat{\mathbf{Y}} &= \mathbf{X}\hat{\boldsymbol{\beta}} \\[10pt] &= \mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y}\\ \end{aligned} \end{equation}\]

We define the hat matrix as an \(n \times n\) matrix of the form \(\mathbf{H} = \mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\). Thus (\(\ref{y-hat}\)) becomes

\[\begin{equation} \label{y-hat-matrix} \hat{\mathbf{Y}} = \mathbf{H}\mathbf{Y} \end{equation}\]

The diagonal elements of the hat matrix are a measure of how far the predictor variables of each observation are from the means of the predictor variables. For example, \(h_{ii}\) is a measure of how far the values of the predictor variables for the \(i^{th}\) observation, \(x_{i1}, x_{i2}, \ldots, x_{ip}\), are from the mean values of the predictor variables, \(\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_p\). In the case of simple linear regression, the \(i^{th}\) diagonal, \(h_{ii}\), can be written as

\[h_{ii} = \frac{1}{n} + \frac{(x_i - \bar{x})^2}{\sum_{j=1}^{n}(x_j-\bar{x})^2}\]

We call these diagonal elements the leverage of each observation.

The diagonal elements of the hat matrix have the following properties:

  • \(0 \leq h_{ii} \leq 1\)
  • \(\sum\limits_{i=1}^{n} h_{ii} = p+1\), where \(p\) is the number of predictor variables in the model.
  • The mean hat value is \(\bar{h} = \frac{\sum\limits_{i=1}^{n} h_{ii}}{n} = \frac{p+1}{n}\).

Using these properties, we consider a point to have high leverage if it has a leverage value that is more than 2 times the average. In other words, observations with leverage greater than \(\frac{2(p+1)}{n}\) are considered to be high leverage points, i.e. outliers in the predictor variables. We are interested in flagging high leverage points, because they may have an influence on the regression coefficients.
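A small Python sketch (ours, on simulated data with illustrative names) that computes the leverages from the hat matrix, checks that they sum to \(p+1\), and flags observations above the \(\frac{2(p+1)}{n}\) threshold:

```python
import numpy as np

# Simulated predictors; values are illustrative.
rng = np.random.default_rng(3)
n, p = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])

H = X @ np.linalg.inv(X.T @ X) @ X.T    # hat matrix H = X (X^T X)^{-1} X^T
h = np.diag(H)                          # leverages h_ii

print(np.isclose(h.sum(), p + 1))       # the leverages sum to p + 1
threshold = 2 * (p + 1) / n             # flag leverage more than twice the average
print(np.where(h > threshold)[0])       # indices of high leverage observations
```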

When there are high leverage points in the data, the regression line will tend towards those points; therefore, one property of high leverage points is that they tend to have small residuals. We will show this by rewriting the residuals from (\(\ref{matrix_mean}\)) using (\(\ref{y-hat-matrix}\)).

\[\begin{equation} \label{resid-hat} \begin{aligned} \mathbf{e} &= \mathbf{Y} - \hat{\mathbf{Y}} \\[10pt] & = \mathbf{Y} - \mathbf{H}\mathbf{Y} \\[10pt] &= (\mathbf{I}-\mathbf{H})\mathbf{Y} \end{aligned} \end{equation}\]

Note that the identity matrix and hat matrix are idempotent, i.e. \(\mathbf{I}\mathbf{I} = \mathbf{I}\), \(\mathbf{H}\mathbf{H} = \mathbf{H}\). Thus, \((\mathbf{I} - \mathbf{H})\) is also idempotent. These matrices are also symmetric. Using these properties and (\(\ref{resid-hat}\)), we have that the variance-covariance matrix of the residuals \(\mathbf{e}\) is

\[\begin{equation} \label{resid-var} \begin{aligned} Var(\mathbf{e}) &= Var\big((\mathbf{I}-\mathbf{H})\mathbf{Y}\big) \\[10pt] &= (\mathbf{I}-\mathbf{H})Var(\mathbf{Y})(\mathbf{I}-\mathbf{H})^T \\[10pt] &= (\mathbf{I}-\mathbf{H})\hat{\sigma}^2\mathbf{I}(\mathbf{I}-\mathbf{H})^T \\[10pt] &= \hat{\sigma}^2(\mathbf{I}-\mathbf{H})(\mathbf{I}-\mathbf{H}) \\[10pt] &= \hat{\sigma}^2(\mathbf{I}-\mathbf{H}) \end{aligned} \end{equation}\]

where \(\hat{\sigma}^2 = \frac{\sum_{i=1}^{n}e_i^2}{n-p-1}\) is the estimated regression variance. Thus, the variance of the \(i^{th}\) residual is \(Var(e_i) = \hat{\sigma}^2(1-h_{ii})\). Therefore, the higher the leverage, the smaller the variance of the residual. Because the expected value of the residuals is 0, we conclude that points with high leverage tend to have smaller residuals than points with lower leverage.

A.8.2 Standardized Residuals

In general, we standardize a value by shifting by the expected value and rescaling by the standard deviation (or standard error). Thus, the \(i^{th}\) standardized residual takes the form

\[std.res_i = \frac{e_i - E(e_i)}{SE(e_i)}\]

The expected value of the residuals is 0, i.e. \(E(e_i) = 0\). From (\(\ref{resid-var}\)), the standard error of the residual is \(SE(e_i) = \hat{\sigma}\sqrt{1-h_{ii}}\). Therefore,

\[\begin{equation} \label{std.resid.} std.res_i = \frac{e_i}{\hat{\sigma}\sqrt{1-h_{ii}}} \end{equation}\]
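The following Python sketch (ours, on simulated data with illustrative names) computes the standardized residuals directly from this formula:

```python
import numpy as np

# Simulated data and fit; values are illustrative.
rng = np.random.default_rng(7)
n, p = 100, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
Y = X @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
e = Y - X @ beta_hat                                # residuals
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)       # leverages
sigma_hat = np.sqrt(e @ e / (n - p - 1))            # estimated regression SD

std_res = e / (sigma_hat * np.sqrt(1 - h))          # standardized residuals
print(std_res[:5])
```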

A.8.3 Cook’s Distance

Cook’s distance is a measure of how much each observation influences the model coefficients, and thus the predicted values. The Cook’s distance for the \(i^{th}\) observation can be written as

\[\begin{equation} \label{cooksd} D_i = \frac{(\hat{\mathbf{Y}} -\hat{\mathbf{Y}}_{(i)})^T(\hat{\mathbf{Y}} -\hat{\mathbf{Y}}_{(i)})}{(p+1)\hat{\sigma}^2} \end{equation}\]

where \(\hat{\mathbf{Y}}_{(i)}\) is the vector of predicted values from the model fitted when the \(i^{th}\) observation is deleted. Cook’s Distance can be calculated without deleting observations one at a time, since (\(\ref{cooksd-v2}\)) below is mathematically equivalent to (\(\ref{cooksd}\)).

\[\begin{equation} \label{cooksd-v2} D_i = \frac{1}{p+1}std.res_i^2\Bigg[\frac{h_{ii}}{(1-h_{ii})}\Bigg] = \frac{e_i^2}{(p+1)\hat{\sigma}^2(1-h_{ii})}\Bigg[\frac{h_{ii}}{(1-h_{ii})}\Bigg] \end{equation}\]
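Here is a Python sketch (ours, on simulated data with illustrative names) that evaluates this leverage-based form of Cook's distance without refitting the model \(n\) times:

```python
import numpy as np

# Simulated data and fit; values are illustrative.
rng = np.random.default_rng(7)
n, p = 100, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
Y = X @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
e = Y - X @ beta_hat
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)
sigma2_hat = e @ e / (n - p - 1)

# D_i = e_i^2 / ((p+1) sigma2 (1 - h_ii)) * h_ii / (1 - h_ii)
D = e**2 / ((p + 1) * sigma2_hat * (1 - h)) * h / (1 - h)
print(D.argmax(), D.max())    # most influential observation and its Cook's distance
```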

A.9 Model Selection Criteria

A.9.1 Maximum Likelihood Estimation of \(\boldsymbol{\beta}\) and \(\sigma\)

To understand the formulas for AIC and BIC, we will first briefly explain the likelihood function and maximum likelihood estimates for regression.

Let \(\mathbf{Y}\) be the \(n \times 1\) vector of responses, \(\mathbf{X}\) the \(n \times (p+1)\) matrix of predictors, and \(\boldsymbol{\beta}\) the \((p+1) \times 1\) vector of coefficients. If the multiple linear regression model is correct, then

\[\begin{equation} \label{norm-assumption} \mathbf{Y} \sim N(\mathbf{X}\boldsymbol{\beta}, \sigma^2\mathbf{I}) \end{equation}\]

When we do linear regression, our goal is to estimate the unknown parameters \(\boldsymbol{\beta}\) and \(\sigma^2\) from (\(\ref{norm-assumption}\)). In Section A.6, we showed a way to estimate these parameters using matrix algebra. Another approach for estimating \(\boldsymbol{\beta}\) and \(\sigma^2\) is maximum likelihood estimation.

A likelihood function is used to summarize the evidence from the data in support of each possible value of a model parameter. Using (\(\ref{norm-assumption}\)), we will write the likelihood function for linear regression as

\[\begin{equation} \label{lr} L(\mathbf{X}, \mathbf{Y}|\boldsymbol{\beta}, \sigma^2) = \prod\limits_{i=1}^n (2\pi \sigma^2)^{-\frac{1}{2}} \exp\bigg\{-\frac{1}{2\sigma^2}(Y_i - \mathbf{X}_i \boldsymbol{\beta})^2\bigg\} = (2\pi \sigma^2)^{-\frac{n}{2}} \exp\bigg\{-\frac{1}{2\sigma^2}\sum\limits_{i=1}^n(Y_i - \mathbf{X}_i \boldsymbol{\beta})^2\bigg\} \end{equation}\]

where \(Y_i\) is the \(i^{th}\) response and \(\mathbf{X}_i\) is the row vector of predictors for the \(i^{th}\) observation. One approach to estimating \(\boldsymbol{\beta}\) and \(\sigma^2\) is to find the values of those parameters that maximize the likelihood in (\(\ref{lr}\)), i.e. maximum likelihood estimation. To make the calculations more manageable, we maximize the logarithm of the likelihood, i.e. the log-likelihood function, rather than the likelihood itself. Because the logarithm is an increasing function, the values of the parameters that maximize the log-likelihood function are the same values that maximize the likelihood function. The log-likelihood function we will maximize is

\[\begin{equation} \label{logL} \begin{aligned} \log L(\mathbf{X}, \mathbf{Y}|\boldsymbol{\beta}, \sigma^2) &= \sum\limits_{i=1}^n \bigg[-\frac{1}{2}\log(2\pi\sigma^2) -\frac{1}{2\sigma^2}(Y_i - \mathbf{X}_i \boldsymbol{\beta})^2\bigg] \\[10pt] &= -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}(\mathbf{Y} - \mathbf{X} \boldsymbol{\beta})^T(\mathbf{Y} - \mathbf{X} \boldsymbol{\beta})\\ \end{aligned} \end{equation}\]

Taking the partial derivative of (\(\ref{logL}\)) with respect to \(\boldsymbol{\beta}\) and setting it equal to 0 gives the same normal equations as before, \(\mathbf{X}^T\mathbf{Y} = \mathbf{X}^T\mathbf{X}\hat{\boldsymbol{\beta}}\). Taking the partial derivative with respect to \(\sigma^2\), setting it equal to 0, and plugging in \(\hat{\boldsymbol{\beta}}\) gives \(\hat{\sigma}^2 = \frac{1}{n}(\mathbf{Y} - \mathbf{X}\hat{\boldsymbol{\beta}})^T(\mathbf{Y} - \mathbf{X}\hat{\boldsymbol{\beta}})\).

The maximum likelihood estimates of \(\boldsymbol{\beta}\) and \(\sigma^2\) are

\[\begin{equation} \label{mle} \hat{\boldsymbol{\beta}} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y} \hspace{10mm} \hat{\sigma}^2 = \frac{1}{n}(\mathbf{Y} - \mathbf{X} \hat{\boldsymbol{\beta}})^T(\mathbf{Y} - \mathbf{X} \hat{\boldsymbol{\beta}}) = \frac{1}{n}RSS \end{equation}\]

where \(RSS\) is the residual sum of squares. Note that the maximum likelihood estimate is not exactly equal to the estimate of \(\sigma^2\) we typically use, \(\frac{RSS}{n-p-1}\). This is because the maximum likelihood estimate of \(\sigma^2\) in (\(\ref{mle}\)) is a biased estimator of \(\sigma^2\). When \(n\) is much larger than the number of predictors \(p\), the difference between the two estimates is negligible.
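To make the size of this difference concrete, here is a small Python sketch (ours, on simulated data with illustrative names) comparing the two estimates of \(\sigma^2\):

```python
import numpy as np

# Simulated data; values are illustrative.
rng = np.random.default_rng(11)
n, p = 500, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
Y = X @ np.array([1.0, 2.0, -0.5, 0.3]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
RSS = np.sum((Y - X @ beta_hat) ** 2)

sigma2_mle = RSS / n                   # maximum likelihood estimate (biased)
sigma2_unbiased = RSS / (n - p - 1)    # the estimate typically used
print(sigma2_mle, sigma2_unbiased)     # nearly identical when n >> p
```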

A.9.2 AIC

Akaike’s Information Criterion (AIC) is

\[\begin{equation} \label{aic} AIC = -2 \log L + 2(p+1) \end{equation}\]

where \(\log L\) is the log-likelihood. This is the general form of AIC that can be applied to a variety of models, but for now, let’s focus on AIC for multiple linear regression.

\[\begin{equation} \label{aic-reg} \begin{aligned} AIC &= -2 \log L + 2(p+1) \\[10pt] &= -2\bigg[-\frac{n}{2}\log(2\pi\hat{\sigma}^2) - \frac{1}{2\hat{\sigma}^2}(\mathbf{Y} - \mathbf{X} \hat{\boldsymbol{\beta}})^T(\mathbf{Y} - \mathbf{X} \hat{\boldsymbol{\beta}})\bigg] + 2(p+1) \\[10pt] &= n\log\Big(2\pi\frac{RSS}{n}\Big) + \frac{RSS}{RSS/n} + 2(p+1) \\[10pt] &= n\log(2\pi) + n\log(RSS) - n\log(n) + n + 2(p+1) \end{aligned} \end{equation}\]
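As a numerical check (ours, on simulated data with illustrative names), the sketch below evaluates AIC both from the log-likelihood at the maximum likelihood estimates and from the RSS form in the last line above, and confirms the two agree:

```python
import numpy as np

# Simulated data; values are illustrative.
rng = np.random.default_rng(5)
n, p = 300, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
Y = X @ np.array([1.0, 2.0, -0.5, 0.3]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
RSS = np.sum((Y - X @ beta_hat) ** 2)
sigma2_mle = RSS / n

# AIC from the log-likelihood evaluated at the maximum likelihood estimates
logL = -n / 2 * np.log(2 * np.pi * sigma2_mle) - RSS / (2 * sigma2_mle)
aic_loglik = -2 * logL + 2 * (p + 1)

# AIC from the RSS form in the last line of the derivation above
aic_rss = n * np.log(2 * np.pi) + n * np.log(RSS) - n * np.log(n) + n + 2 * (p + 1)
print(np.isclose(aic_loglik, aic_rss))  # True
```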

A.9.3 BIC

[—]