William H. Greene
New York University
Prentice Hall, Upper Saddle River, New Jersey 07458
Contents and Notation
Chapter 1  Introduction
Chapter 2  The Classical Multiple Linear Regression Model
Chapter 3  Least Squares
Chapter 4  Finite-Sample Properties of the Least Squares Estimator
Chapter 5  Large-Sample Properties of the Least Squares and Instrumental Variables Estimators
Chapter 6  Inference and Prediction
Chapter 7  Functional Form and Structural Change
Chapter 8  Specification Analysis and Model Selection
Chapter 9  Nonlinear Regression Models
Chapter 10  Nonspherical Disturbances - The Generalized Regression Model
Chapter 11  Heteroscedasticity
Chapter 12  Serial Correlation
Chapter 13  Models for Panel Data
Chapter 14  Systems of Regression Equations
Chapter 15  Simultaneous Equations Models
Chapter 16  Estimation Frameworks in Econometrics
Chapter 17  Maximum Likelihood Estimation
Chapter 18  The Generalized Method of Moments
Chapter 19  Models with Lagged Variables
Chapter 20  Time Series Models
Chapter 21  Models for Discrete Choice
Chapter 22  Limited Dependent Variable and Duration Models
Appendix A  Matrix Algebra
Appendix B  Probability and Distribution Theory
Appendix C  Estimation and Inference
Appendix D  Large Sample Distribution Theory
Appendix E  Computation and Optimization

In the solutions, we denote:
• scalar values with italic, lower-case letters, as in a or α,
• column vectors with boldface lower-case letters, as in b,
• row vectors as transposed column vectors, as in b′,
• single population parameters with Greek letters, as in β,
• sample estimates of parameters with English letters, as in b as an estimate of β,
• sample estimates of population parameters with a caret, as in α̂,
• matrices with boldface upper-case letters, as in M or Σ,
• cross-section observations with subscript i and time-series observations with subscript t.
These conventions are consistent with the notation used in the text.
Chapter 1 Introduction
There are no exercises in Chapter 1.
Chapter 2 The Classical Multiple Linear Regression Model
There are no exercises in Chapter 2.
Chapter 3 Least Squares
1. (a) Let X = [i, x], where i is a column of ones and x = (x1, ..., xn)′. The normal equations are given by (3-12), X′e = 0, hence for each of the columns of X, xk, we know that xk′e = 0. This implies that Σi ei = 0 and Σi xi ei = 0.
(b) Use Σi ei = 0 to conclude from the first normal equation that a = ȳ − b x̄.
(c) We know that Σi ei = 0 and Σi xi ei = 0. It follows then that Σi (xi − x̄)ei = 0, because Σi x̄ ei = x̄ Σi ei = 0. Substituting ei = yi − a − b xi gives Σi (xi − x̄)(yi − a − b xi) = 0. Inserting a = ȳ − b x̄ then yields Σi (xi − x̄)(yi − ȳ − b(xi − x̄)) = 0, from which the result follows.
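The normal-equation results above can be verified numerically. The sketch below uses synthetic data (not from the text): it fits the simple regression by least squares and checks that the residuals sum to zero, are orthogonal to x, and that a = ȳ − b x̄.

```python
# Numerical check of Exercise 1: residual identities implied by X'e = 0.
# The data here are synthetic; any x, y with n >= 2 would do.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 1.5 + 2.0 * x + rng.normal(size=50)

X = np.column_stack([np.ones_like(x), x])    # X = [i, x]
a, b = np.linalg.lstsq(X, y, rcond=None)[0]  # least squares intercept and slope
e = y - a - b * x                            # residuals

print(np.isclose(e.sum(), 0.0))              # first normal equation: sum of e is 0
print(np.isclose((x * e).sum(), 0.0))        # second normal equation: x'e = 0
print(np.isclose(a, y.mean() - b * x.mean()))# a = ybar - b*xbar
```

All three checks hold to machine precision because least squares enforces X′e = 0 by construction.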
2. Suppose b is the least squares coefficient vector in the regression of y on X and c is any other K×1 vector. Prove that the difference in the two sums of squared residuals is (y - Xc)′(y - Xc) - (y - Xb)′(y - Xb) = (c - b)′X′X(c - b). Prove that this difference is positive.
Write c as b + (c - b). Then, the sum of squared residuals based on c is
(y - Xc)′(y - Xc) = [y - X(b + (c - b))]′[y - X(b + (c - b))] = [(y - Xb) + X(c - b)]′[(y - Xb) + X(c - b)] = (y - Xb)′(y - Xb) + (c - b)′X′X(c - b) + 2(c - b)′X′(y - Xb).
But, the third term is zero, as 2(c - b)′X′(y - Xb) = 2(c - b)′X′e = 0. Therefore,
(y - Xc)′(y - Xc) = e′e + (c - b)′X′X(c - b)
or (y - Xc)′(y - Xc) - e′e = (c - b)′X′X(c - b). The right-hand side can be written as d′d where d = X(c - b), so it is necessarily nonnegative, and positive so long as X has full column rank and c ≠ b. This confirms what we knew at the outset: least squares is least squares.
3. Consider the least squares regression of y on K variables (with a constant), X. Consider an alternative set of regressors, Z = XP, where P is a nonsingular matrix. Thus, each column of Z is a mixture of some of the columns of X. Prove that...
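The decomposition in Exercise 2 can also be checked numerically. The sketch below (synthetic data and an arbitrary alternative vector c, both hypothetical) confirms that the excess sum of squares from using c instead of b equals (c - b)′X′X(c - b) and is positive.

```python
# Numerical check of Exercise 2:
# (y - Xc)'(y - Xc) - (y - Xb)'(y - Xb) = (c - b)'X'X(c - b) > 0 for c != b.
import numpy as np

rng = np.random.default_rng(1)
n, K = 40, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])  # full-rank regressors
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

b = np.linalg.lstsq(X, y, rcond=None)[0]  # least squares coefficient vector
c = b + rng.normal(size=K)                # any other K x 1 vector

lhs = (y - X @ c) @ (y - X @ c) - (y - X @ b) @ (y - X @ b)
rhs = (c - b) @ X.T @ X @ (c - b)
print(np.isclose(lhs, rhs))  # the two sides agree
print(lhs > 0)               # and the difference is strictly positive
```

Because rhs is d′d with d = X(c - b), it can only be zero when Xc = Xb, which with full-rank X requires c = b.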