Show that linear regression and k-nearest-neighbor regression are linear estimators


Suppose we have a sample of $N$ pairs $(x_i, y_i)$ drawn i.i.d. from the distribution characterized as follows:
$x_i \sim h(x)$, the design density;
$y_i = f(x_i) + \varepsilon_i$, where $f$ is the regression function;
$\varepsilon_i \sim (0, \sigma^2)$ (mean zero, variance $\sigma^2$).
We construct an estimator for $f$ that is linear in the $y_i$,
$$\hat{f}(x_0) = \sum_{i=1}^{N} \ell_i(x_0; \mathcal{X})\, y_i,$$
where the weights $\ell_i(x_0; \mathcal{X})$ do not depend on the $y_i$, but do depend on the entire training sequence of $x_i$, denoted here by $\mathcal{X}$.
(a) Show that linear regression and $k$-nearest-neighbor regression are members of this class of estimators. Describe explicitly the weights $\ell_i(x_0; \mathcal{X})$ in each of these cases.
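
To make the weights concrete, here is a minimal numerical sketch (my own illustration, not part of the exercise): for least squares with design matrix $\mathbf{X}$, the weight vector is $\ell(x_0)^\top = x_0^\top(\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top$, while for $k$-NN the weight is $1/k$ on the $k$ nearest $x_i$ and $0$ elsewhere. The function names and the toy data below are assumptions made for illustration.

```python
import numpy as np

def ols_weights(X, x0):
    # l(x0)^T = x0^T (X^T X)^{-1} X^T  -- the least-squares smoother weights
    return x0 @ np.linalg.solve(X.T @ X, X.T)

def knn_weights(X, x0, k):
    # 1/k on the k nearest neighbors of x0, 0 on the rest
    dist = np.linalg.norm(X - x0, axis=1)
    w = np.zeros(len(X))
    w[np.argsort(dist)[:k]] = 1.0 / k
    return w

rng = np.random.default_rng(0)
N = 50
X = np.column_stack([np.ones(N), rng.uniform(-1, 1, N)])  # intercept + one feature
y = np.sin(np.pi * X[:, 1]) + rng.normal(0.0, 0.1, N)
x0 = np.array([1.0, 0.3])

# Both predictions have the linear-smoother form  f_hat(x0) = sum_i l_i(x0; X) y_i
print(ols_weights(X, x0) @ y)        # least-squares prediction at x0
print(knn_weights(X, x0, k=5) @ y)   # 5-NN prediction at x0
```
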
(b) Decompose the conditional mean-squared error
$$E_{\mathcal{Y}|\mathcal{X}}\big[(f(x_0) - \hat{f}(x_0))^2\big]$$
into a conditional squared bias and a conditional variance component. Like $\mathcal{X}$, $\mathcal{Y}$ represents the entire training sequence of $y_i$.
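
For reference, the decomposition in (b) is the standard conditional bias-variance identity; a sketch of the target form, using that, given $\mathcal{X}$, the weights are constants, $E_{\mathcal{Y}|\mathcal{X}}[\hat{f}(x_0)] = \sum_i \ell_i(x_0;\mathcal{X}) f(x_i)$, and the $\varepsilon_i$ are i.i.d. with variance $\sigma^2$:
$$E_{\mathcal{Y}|\mathcal{X}}\big[(f(x_0) - \hat{f}(x_0))^2\big]
  = \underbrace{\Big(f(x_0) - \sum_{i=1}^{N} \ell_i(x_0;\mathcal{X})\, f(x_i)\Big)^2}_{\text{conditional squared bias}}
  + \underbrace{\sigma^2 \sum_{i=1}^{N} \ell_i(x_0;\mathcal{X})^2}_{\text{conditional variance}}.$$
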
(c) Decompose the (unconditional) mean-squared error
$$E_{\mathcal{Y},\mathcal{X}}\big[(f(x_0) - \hat{f}(x_0))^2\big]$$
into a squared bias and a variance component.
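
The unconditional version sought in (c) has the same shape, now averaging over the $x_i$ as well (again only the target form, not a derivation):
$$E_{\mathcal{Y},\mathcal{X}}\big[(f(x_0) - \hat{f}(x_0))^2\big]
  = \big(f(x_0) - E_{\mathcal{Y},\mathcal{X}}[\hat{f}(x_0)]\big)^2
  + \operatorname{Var}_{\mathcal{Y},\mathcal{X}}\big(\hat{f}(x_0)\big).$$
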
(d) Establish a relationship between the squared biases and variances in the above two cases. 
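
A useful starting point for (d) is the law of total variance, together with the identity $E_{\mathcal{X}}[(c - m(\mathcal{X}))^2] = (c - E[m(\mathcal{X})])^2 + \operatorname{Var}_{\mathcal{X}}(m(\mathcal{X}))$ applied to $m(\mathcal{X}) = E_{\mathcal{Y}|\mathcal{X}}[\hat{f}(x_0)]$:
$$\begin{aligned}
\operatorname{Var}\big(\hat{f}(x_0)\big)
  &= E_{\mathcal{X}}\big[\operatorname{Var}_{\mathcal{Y}|\mathcal{X}}(\hat{f}(x_0))\big]
   + \operatorname{Var}_{\mathcal{X}}\big(E_{\mathcal{Y}|\mathcal{X}}[\hat{f}(x_0)]\big),\\
E_{\mathcal{X}}\big[(\text{conditional bias})^2\big]
  &= \big(f(x_0) - E[\hat{f}(x_0)]\big)^2
   + \operatorname{Var}_{\mathcal{X}}\big(E_{\mathcal{Y}|\mathcal{X}}[\hat{f}(x_0)]\big).
\end{aligned}$$
So the averaged conditional squared bias exceeds the unconditional squared bias by exactly $\operatorname{Var}_{\mathcal{X}}(E_{\mathcal{Y}|\mathcal{X}}[\hat{f}(x_0)])$, a term the unconditional decomposition books under variance instead; the total mean-squared error agrees either way.
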
