Econometrics and Big Data - Victor Chernozhukov and Christian Hansen

Final Problems

1. Use the data in nettfa.csv to answer this question. The goal of this exercise is simply to use machine learning/nonparametric modeling to build a model for prediction. The data consist of 9915 observations on 9 variables defined in the file nettfa readme.txt. Before answering parts a.-h. below, remove 3915 observations, which will be used for an out-of-sample comparison in part i.
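
For concreteness, a minimal R sketch of the setup. The seed and the random split are arbitrary choices; any held-out set of 3915 observations will do, and the variable names are assumed to follow nettfa readme.txt.

```r
# A minimal setup sketch. Assumes nettfa.csv sits in the working directory
# and contains the 9 variables described in nettfa readme.txt.
set.seed(1234)                     # arbitrary seed, for reproducibility
dat <- read.csv("nettfa.csv")
n   <- nrow(dat)                   # should be 9915
hold_idx <- sample(n, 3915)        # reserved for the comparison in part i.
holdout  <- dat[hold_idx, ]
train    <- dat[-hold_idx, ]       # 6000 observations for parts a.-h.
```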

a. Estimate E[net_tfa].
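
With no conditioning variables, the sample average over the training observations is the natural estimator:

```r
# Part a.: the sample average estimates the unconditional mean E[net_tfa].
g_a <- mean(train$net_tfa)
g_a
```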

b. Estimate E[net_tfa|X = {age, inc, fsize, educ, db, marr, male, twoearn}] using linear regression.
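
A minimal sketch using base R's lm; the formula object is reused in later parts.

```r
# Part b.: linear regression on the eight covariates, fit on the
# training sample only.
fmla    <- net_tfa ~ age + inc + fsize + educ + db + marr + male + twoearn
fit_ols <- lm(fmla, data = train)
summary(fit_ols)
```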

c. Estimate E[net_tfa|X = {age, inc, fsize, educ, db, marr, male, twoearn}] using k-nn with the number of neighbors chosen by cross-validation. Carefully explain how you define the distance between observations. Comment on how many neighbors you chose and how this relates to the sample size.
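
One possible implementation, via the caret package (an assumption; FNN or kknn would also work), reusing the formula from part b. Centering and scaling the covariates before taking Euclidean distances is one defensible answer to the distance question, since it makes the metric scale-free.

```r
# Part c.: k-nn via caret, with k chosen by 10-fold cross-validation.
library(caret)
set.seed(1234)
fit_knn <- train(fmla, data = train, method = "knn",
                 preProcess = c("center", "scale"),   # scale-free distance
                 tuneGrid   = data.frame(k = seq(5, 100, by = 5)),
                 trControl  = trainControl(method = "cv", number = 10))
fit_knn$bestTune    # cross-validated number of neighbors
```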

d. Estimate E[net_tfa|X = {age, inc, fsize, educ, db, marr, male, twoearn}] using series with basis elements chosen by cross-validation. Carefully document the basis you are using and how you chose to add elements to the expansion along the path considered for cross-validation. Comment on which terms you end up selecting.
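
One illustrative path, sketched below: raw polynomials in age and inc of increasing degree, with the remaining covariates entering linearly. The basis and the path are choices you should document and can certainly improve on.

```r
# Part d.: 10-fold CV over a nested path of polynomial expansions.
set.seed(1234)
degrees <- 1:6
folds   <- sample(rep(1:10, length.out = nrow(train)))
cv_err  <- numeric(length(degrees))
for (d in degrees) {
  f <- as.formula(paste0(
    "net_tfa ~ poly(age, ", d, ", raw = TRUE) + poly(inc, ", d,
    ", raw = TRUE) + educ + fsize + db + marr + male + twoearn"))
  sse <- 0
  for (k in 1:10) {
    fit  <- lm(f, data = train[folds != k, ])
    pred <- predict(fit, newdata = train[folds == k, ])
    sse  <- sse + sum((train$net_tfa[folds == k] - pred)^2)
  }
  cv_err[d] <- sse / nrow(train)
}
degrees[which.min(cv_err)]   # degree selected by cross-validation
```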

e. Estimate E[net_tfa|X = {age, inc, fsize, educ, db, marr, male, twoearn}] by (1) lasso, (2) ridge, and (3) elastic net with penalty parameters chosen by cross-validation. Use the same dictionary of approximating functions for each of the three methods. Carefully explain how you construct the dictionary of approximating functions and your motivation for the functional forms considered. Comment on the estimated models.
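
A sketch via glmnet, with one illustrative dictionary: the raw covariates, squares of the non-binary ones, and all pairwise interactions. Note that cv.glmnet cross-validates only the penalty lambda for a fixed alpha, so alpha = 0.5 below is just one common elastic-net setting, not a tuned value.

```r
# Part e.: lasso, ridge, and elastic net on the same dictionary.
library(glmnet)
X <- model.matrix(~ (age + inc + fsize + educ + db + marr + male + twoearn)^2
                  + I(age^2) + I(inc^2) + I(fsize^2) + I(educ^2),
                  data = train)[, -1]          # drop the intercept column
y <- train$net_tfa
set.seed(1234)
cv_lasso <- cv.glmnet(X, y, alpha = 1)         # (1) lasso
cv_ridge <- cv.glmnet(X, y, alpha = 0)         # (2) ridge
cv_enet  <- cv.glmnet(X, y, alpha = 0.5)       # (3) one elastic-net choice
coef(cv_lasso, s = "lambda.min")               # inspect the selected model
```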

f. Estimate E[net_tfa|X = {age, inc, fsize, educ, db, marr, male, twoearn}] using a CART with the cost-complexity parameter chosen by cross-validation. Comment on the final tree structure.
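
A sketch via rpart, whose cp table reports built-in cross-validation errors (the xerror column) that can be used to choose the cost-complexity parameter.

```r
# Part f.: grow a deliberately deep tree, then prune back to the
# cost-complexity value minimizing rpart's built-in CV error.
library(rpart)
set.seed(1234)
fit_deep <- rpart(fmla, data = train, cp = 1e-4)
cp_best  <- fit_deep$cptable[which.min(fit_deep$cptable[, "xerror"]), "CP"]
fit_tree <- prune(fit_deep, cp = cp_best)
fit_tree             # inspect the final tree structure
```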

g. Estimate E[net_tfa|X = {age, inc, fsize, educ, db, marr, male, twoearn}] using a random forest. Note how many bootstrap replications you use and any other tuning you do. Which variables seem most important in the forest fit?
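
A sketch via the randomForest package; ntree = 500 is the package default, stated explicitly since the question asks for it.

```r
# Part g.: random forest with permutation-based variable importance.
library(randomForest)
set.seed(1234)
fit_rf <- randomForest(fmla, data = train, ntree = 500, importance = TRUE)
importance(fit_rf)   # %IncMSE gives one ranking of variable importance
```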

h. Estimate E[net_tfa|X = {age, inc, fsize, educ, db, marr, male, twoearn}] using boosted regression trees with the number of boosting iterations chosen by cross-validation. Comment on the tree depth you use and how you made this choice. Which variables seem most important in the boosted tree fit?
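
A sketch via gbm. The depth and shrinkage below are illustrative starting values; the question asks you to justify your own depth choice (e.g., by comparing CV error across depths).

```r
# Part h.: boosted trees with the iteration count chosen by built-in CV.
library(gbm)
set.seed(1234)
fit_gbm <- gbm(fmla, data = train, distribution = "gaussian",
               n.trees = 5000, interaction.depth = 2, shrinkage = 0.01,
               cv.folds = 10)
n_best <- gbm.perf(fit_gbm, method = "cv")   # CV-optimal number of trees
summary(fit_gbm, n.trees = n_best)           # relative influence of variables
```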

i. Use the 3915 observations you held out to compare the estimates obtained in parts a.-h. Specifically, let $\hat{g}_j(x)$ for $j \in \{a, b, c, d, e(1), e(2), e(3), f, g, h\}$ be the estimator of the conditional expectation obtained in the part of the question corresponding to $j$. Calculate the mean square forecast error as $\frac{1}{3915}\sum_{i \in \text{hold-out}}(\hat{g}_j(x_i) - y_i)^2$. Which procedure is best?

Do the performance discrepancies seem large? [Note: Assuming independent sampling, you can compute a standard error for the mean square forecast error.]
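
A sketch of the hold-out comparison for one fit (the lasso from part e., reusing its dictionary); the same pattern applies to each estimator. The suggested standard error uses the fact that, under independent sampling, the squared forecast errors are i.i.d.

```r
# Part i.: MSFE on the held-out sample, shown for the part e. lasso.
X_hold <- model.matrix(~ (age + inc + fsize + educ + db + marr + male
                          + twoearn)^2
                       + I(age^2) + I(inc^2) + I(fsize^2) + I(educ^2),
                       data = holdout)[, -1]
err  <- holdout$net_tfa -
        as.numeric(predict(cv_lasso, newx = X_hold, s = "lambda.min"))
msfe <- mean(err^2)
# sd(err^2)/sqrt(3915) is a valid standard error for the MSFE
# under independent sampling.
se_msfe <- sd(err^2) / sqrt(length(err))
c(MSFE = msfe, SE = se_msfe)
```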

2. To answer this question, use the R file "Simmons Reanalysis.R", which analyzes an example from Simmons, Nelson, and Simonsohn (2011, Psychological Science) meant to illustrate the potential impacts of unprincipled variable selection. The key variables are "age", which is simply the respondent's age and serves as the dependent variable, and "when64", which is a randomly assigned treatment variable. Specifically, the experiment randomly assigned subjects to the treatment group, where subjects listened to "When I'm 64" by the Beatles, or the control group, where subjects listened to a song unrelated to age.

a. What is estimated in block 1 of the code? Why is this a sensible object to look at in the context of the randomized control trial? Interpret the results obtained by running block 1 of the code.

b. What is estimated in block 2 of the code? (Note that "dad" is age of respondent's father.) In principle, why is this a sensible object to look at in the context of a randomized control trial? (You can essentially restate the final part of this question to answer this.) How does your answer to the second question change when you learn that the control was selected by looking for the variable that led to the largest decrease in the p-value for testing the null hypothesis of no treatment effect? How do you respond to the argument that "1) random assignment of the treatment means that all controls are independent of the treatment variable and thus can be included as a control without introducing bias, 2) the reason to include controls is to reduce residual variation and therefore increase precision in learning the treatment effect, and 3) we should thus search for the variable(s) that lead to the largest increase in precision and use these"?

c. What is estimated in block 3 of the code? In principle, why is this a sensible object to look at in the context of a randomized control trial? Explain how the procedure in block 3 addresses the problem raised in block 2 (discussed in part (b)). That is, highlight the key distinction(s) between the mechanism for selecting control variables in block 3 and in block 2.

3. This exercise is intended to have you compare the various estimators/predictors that we discussed in class. It is deliberately kept broad and somewhat vague; feel free to do more work than what's asked for.

a. Assume that we're interested in predicting the growth rate of a country based on country characteristics. Download the Barro-Lee data (accessible via the hdm package). Why does this correspond to the "big p" case? Why should we worry about overfitting in this case?
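
A sketch of loading the data; the column layout below follows the hdm package's GrowthData documentation (column 1 is the outcome, column 3 the log of initial GDP, the rest are controls plus an intercept column).

```r
# The Barro-Lee growth data ships with hdm as GrowthData.
library(hdm)
data(GrowthData)
dim(GrowthData)                      # 90 countries, ~60 covariates: big p
y <- GrowthData$Outcome              # decade GDP growth rate
d <- GrowthData$gdpsh465             # log initial per-capita GDP (part d.)
X <- as.matrix(GrowthData[, -c(1, 2, 3)])  # drop outcome, intercept, gdpsh465
```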

b. Consider several predictors that would allow us to potentially get rid of the overfitting problem. These include, but are not limited to,
- OLS with fewer, carefully chosen regressors ("small OLS"),
- Lasso (with the penalty level λ chosen via the plug-in method),
- post-Lasso (with the penalty level λ chosen via the plug-in method),
- Lasso (with the penalty level λ chosen via cross-validation),
- Ridge Estimator (with the penalty level λ chosen via cross-validation),
- Elastic Net (with the penalty level λ chosen via cross-validation),
- Random Forests,
- Pruned Trees.

Which one do you think would perform best? (I.e., would you expect the model to be dense or sparse? How would you pick the regressors in small OLS? Do we really know how Random Forests work?) Speculate.

c. Split the data into training and test samples, estimate coefficients using the training sample, and run predictions for the test sample. Compute the out-of-sample performance of your predictors by computing the MSE for prediction on the test sample. Calculate 95% confidence intervals for the MSE. How do the predictors compare? Discuss.
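
A sketch for one representative predictor (cross-validated lasso), reusing y and X from part a. The 80/20 split fraction and the normal-approximation confidence interval are illustrative choices.

```r
# Part c.: out-of-sample MSE with a 95% CI for one predictor.
set.seed(1234)
test_idx <- sample(nrow(X), round(nrow(X) / 5))   # hold out ~20%
cvfit <- glmnet::cv.glmnet(X[-test_idx, ], y[-test_idx])
pred  <- as.numeric(predict(cvfit, newx = X[test_idx, ], s = "lambda.min"))
sq_err <- (y[test_idx] - pred)^2
mse <- mean(sq_err)
# Normal-approximation 95% CI, treating squared errors as i.i.d.
mse + c(-1, 1) * qnorm(0.975) * sd(sq_err) / sqrt(length(sq_err))
```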

d. Now, let's get causal. Assume we're interested in estimating the effect of the initial level of per-capita GDP on the growth rate (known as the infamous "convergence hypothesis"). The specification is

$y_i = \alpha_0 d_i + \sum_{j=1}^{p} \beta_j x_{ij} + \varepsilon_i,$

where $y_i$ is the growth rate of GDP over a specified decade in country $i$, $d_i$ is the log of GDP at the beginning of the decade, and the $x_{ij}$ are country characteristics at the beginning of the decade. The convergence hypothesis holds that $\alpha_0 < 0$. Test the convergence hypothesis via Frisch-Waugh-Lovell partialling out using Lasso, post-Lasso, and a Random Forest. Give intuition.
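
A sketch of the partialling-out step. rlassoEffect from hdm implements the lasso-based version with the plug-in penalty (its post argument toggles lasso vs. post-lasso); the random-forest analogue is done by hand below, and randomForest is an illustrative choice of learner.

```r
# Part d.: Frisch-Waugh-Lovell partialling out, reusing y, d, X from part a.
library(hdm)
eff <- rlassoEffect(x = X, y = y, d = d, method = "partialling out")
summary(eff)   # estimate of alpha_0 and its standard error; check alpha_0 < 0

# By-hand FWL with random forests. predict() without newdata returns
# out-of-bag fits, a simple guard against overfitting in residualization.
library(randomForest)
set.seed(1234)
ry <- y - predict(randomForest(X, y))   # outcome residuals
rd <- d - predict(randomForest(X, d))   # treatment residuals
summary(lm(ry ~ rd))                    # slope on rd estimates alpha_0
```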
