Detection and Estimation Theory Homework

Could you write out all steps of the solutions so that I can understand them?

1) a) Suppose Θ is a random parameter with prior density

[Figure: prior density w(θ); given the hint below, presumably the exponential density w(θ) = αe^(−αθ), θ ≥ 0]

where α > 0 is known. Suppose the observation Y is a Poisson random variable with rate θ; that is,

pθ(y) = θ^y e^(−θ)/y!,                                y = 0, 1, 2, . . .

Find the minimum mean-squared error (MMSE) estimate of θ based on Y.

Hint: ∫₀^∞ x^a e^(−tx) dx = a!·t^(−(a+1)).
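A sketch of the main steps for part a), assuming the exponential prior indicated above: the MMSE estimate is the posterior mean, and the posterior density is proportional to the likelihood times the prior,

w(θ | y) ∝ pθ(y) w(θ) ∝ θ^y e^(−θ) · αe^(−αθ) ∝ θ^y e^(−(1+α)θ),                θ ≥ 0.

Applying the hint with t = 1 + α (first with a = y + 1, then with a = y) to the numerator and the normalizing constant,

E[Θ | Y = y] = [∫₀^∞ θ^(y+1) e^(−(1+α)θ) dθ] / [∫₀^∞ θ^y e^(−(1+α)θ) dθ] = [(y + 1)!/(1 + α)^(y+2)] / [y!/(1 + α)^(y+1)] = (y + 1)/(1 + α),

so θ̂MMSE(Y) = (Y + 1)/(1 + α).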

b) Suppose we toss a coin n independent times and define an observation sequence y1, . . . , yn with

[Figure: presumably yk = 1 if the kth toss results in tails, and yk = 0 otherwise]

for k = 1, . . . , n. Let θ be the probability that the outcome is tails [assume θ ∈ (0, 1)].

Find a minimum variance unbiased estimator (MVUE) of θ.
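A sketch of the reasoning for part b): each yk is Bernoulli with mean θ, so the sample mean

θ̂(y1, . . . , yn) = (y1 + · · · + yn)/n

is unbiased with variance θ(1 − θ)/n. The Fisher information of a single Bernoulli observation is 1/[θ(1 − θ)], so the Cramér-Rao lower bound for n independent tosses is θ(1 − θ)/n. Since the sample mean attains this bound for every θ ∈ (0, 1), it is a minimum variance unbiased estimator.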

2) A robot is trying to estimate its position (assume two-dimensional positioning) based on distance measurements from a number of reference devices at known positions. Specifically, the robot obtains K distance measurements from K reference devices, which are expressed as

ri = di + ni,

for i = 1, . . . , K, where di is the true distance between the robot and the ith reference device, which is given by di = √((x − xi)² + (y − yi)²), with [x y] denoting the unknown position of the robot and [xi yi] being the known position of the ith reference device. In addition, ni represents the noise in the ith measurement, which is modeled as a zero-mean Gaussian random variable with variance σi². It is assumed that n1, . . . , nK are independent, and σ1², . . . , σK² are known.

a) Assume that the prior knowledge about the position of the robot is represented by the following prior distribution:

w(x, y) = (1/(2πσxσy)) exp{−x²/(2σx²) − y²/(2σy²)},

where σx and σy are known values. Obtain the maximum a posteriori probability (MAP) estimator for the position of the robot based on the measurements r1, . . . , rK. You are not required to obtain a closed-form expression; however, you should express the MAP estimator as the solution of a minimization problem.
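A sketch of the resulting optimization for part a): the MAP estimate maximizes the posterior, which is proportional to the likelihood times the prior. Taking negative logarithms and dropping terms that do not depend on (x, y) gives

(x̂, ŷ)MAP = arg min over (x, y) of   Σi (ri − di)²/(2σi²) + x²/(2σx²) + y²/(2σy²),

where the sum runs over i = 1, . . . , K and di = √((x − xi)² + (y − yi)²) as above.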

b) Now assume that there is no prior information about the position of the robot.

i) Obtain the maximum likelihood estimator (MLE) for the position of the robot based on the measurements. Again, it is enough to express the estimator as the solution of a minimization problem.

ii) Under what conditions do the MAP estimator and the MLE converge to each other?
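A sketch for part b): with independent Gaussian noise, the likelihood of (x, y) is the product of the K Gaussian densities of the ri with means di, so maximizing the log-likelihood reduces to a weighted nonlinear least-squares problem:

i) (x̂, ŷ)ML = arg min over (x, y) of   Σi (ri − di)²/σi²,

with the sum again over i = 1, . . . , K.

ii) Comparing with the MAP cost above, the prior terms x²/(2σx²) + y²/(2σy²) vanish as σx → ∞ and σy → ∞, i.e., as the prior becomes non-informative (flat), so in that limit the MAP estimator converges to the MLE.

Neither estimator has a closed form, so the minimization is carried out numerically in practice. A minimal numerical sketch in Python (the anchor positions, noise levels, and measurements below are hypothetical, and scipy's general-purpose minimizer stands in for whatever solver one prefers):

import numpy as np
from scipy.optimize import minimize

# Hypothetical setup: K = 4 reference devices at known positions,
# known noise standard deviations, and one simulated measurement vector.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
sigmas = np.array([0.5, 0.5, 1.0, 1.0])
true_pos = np.array([3.0, 4.0])
rng = np.random.default_rng(0)
r = np.linalg.norm(anchors - true_pos, axis=1) + sigmas * rng.standard_normal(4)

def ml_cost(p):
    # Weighted least-squares cost: sum_i (r_i - d_i(p))^2 / sigma_i^2
    d = np.linalg.norm(anchors - p, axis=1)
    return np.sum((r - d) ** 2 / sigmas ** 2)

# The cost is non-convex in general, so the initial guess matters.
estimate = minimize(ml_cost, x0=np.array([5.0, 5.0]))
print("ML position estimate:", estimate.x)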

c) Assume that there is no prior information about the position of the robot. Obtain the Cramér-Rao lower bound (CRLB) for unbiased estimators of the robot's position.
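A sketch for part c): treating [x y]ᵀ as a deterministic unknown, the log-likelihood is ln p(r1, . . . , rK; x, y) = −Σi (ri − di)²/(2σi²) + const. Differentiating twice and taking expectations (sums over i = 1, . . . , K), the Fisher information matrix works out to

J(x, y) = Σi (1/σi²) ui uiᵀ,            ui = [(x − xi)/di    (y − yi)/di]ᵀ,

where ui is the unit vector pointing from the ith reference device to the robot. The CRLB then states that Cov{[x̂ ŷ]ᵀ} − J(x, y)⁻¹ is positive semi-definite for any unbiased estimator [x̂ ŷ]ᵀ.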

3) Consider the following measurement model:

y = Aθ + w,

where A is a known full-rank n × k matrix, and θ ∈ ℝᵏ. The noise w is a realization of a zero-mean jointly Gaussian random vector W ∼ N(0, R), with R denoting the known covariance matrix, which is positive-definite.

a) Find the maximum likelihood estimator (MLE) for θ based on y.

Hint: For a vector x, the first-order partial derivatives (gradients) are given by ∂(bᵀx)/∂x = b and ∂(xᵀCx)/∂x = 2Cx for a symmetric matrix C. Assume that the first-order necessary conditions for the MLE optimization problem are also sufficient for the maximum (i.e., you do not need to check the second-order conditions).
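A sketch of the derivation for part a), assuming n ≥ k so that A has full column rank: the log-likelihood is

ln p(y; θ) = −(1/2)(y − Aθ)ᵀR⁻¹(y − Aθ) + const,

and setting its gradient with respect to θ to zero using the hinted identities (R⁻¹ is symmetric) gives

AᵀR⁻¹(y − Aθ) = 0        ⟹        θ̂ML(y) = (AᵀR⁻¹A)⁻¹AᵀR⁻¹y,

where AᵀR⁻¹A is invertible because A has full column rank and R⁻¹ is positive-definite. This is the weighted (generalized) least-squares estimator.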

b) Prove that the MLE in Part a) is equivalent to a minimum variance unbiased estimator (MVUE) for θ, without deriving the MVUE. In other words, you are not allowed to use any sufficiency/completeness theorems in the proof.

Hint: One way would be to prove that the MLE is unbiased, and that no other unbiased estimator can have Cov{θ̂other(Y)} < Cov{θ̂ML(Y)}, where Cov{Z1} < Cov{Z2} means that the difference between the covariance matrix of Z1 and that of Z2 is a negative-definite matrix.
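A sketch of the argument suggested by the hint: unbiasedness follows from

E{θ̂ML(Y)} = (AᵀR⁻¹A)⁻¹AᵀR⁻¹E{Y} = (AᵀR⁻¹A)⁻¹AᵀR⁻¹Aθ = θ,

and the covariance is Cov{θ̂ML(Y)} = (AᵀR⁻¹A)⁻¹AᵀR⁻¹·R·R⁻¹A(AᵀR⁻¹A)⁻¹ = (AᵀR⁻¹A)⁻¹. For this Gaussian model the Fisher information matrix is AᵀR⁻¹A, so the Cramér-Rao lower bound says that Cov{θ̂other(Y)} − (AᵀR⁻¹A)⁻¹ is positive semi-definite for every unbiased estimator θ̂other. Hence no unbiased estimator can beat the MLE in the covariance ordering, and the MLE is an MVUE.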

4) Consider a binary hypothesis-testing problem with a scalar observation Y = y. Let π0 and π1 denote the prior probabilities of hypotheses H0 and H1, respectively. In addition, the probability density functions (PDFs) of Y under H0 and H1 are represented, respectively, by p0Y(·) and p1Y(·). The decision rule (detector) for this problem is represented by the function Φ(·).

Suppose that, instead of using the observation Y = y, we add an independent random variable W to Y and input (Y + W) into the detector Φ(·), as shown in the figure. Prove that the optimal probability density function (PDF) of W that minimizes the Bayes risk under the uniform cost assignment (UCA) can be expressed in the form pW(x) = δ(x − c), where δ(·) represents the Dirac delta function [that is, W takes a constant value c with probability 1]. Find an expression for c (you do not need to obtain a closed-form expression for c; you can just express it as the solution of an optimization problem).

Hint: Write down the expression for the Bayes risk under UCA for the decision rule Φ(·) with input (Y + W), and try to express that Bayes risk as the expectation of a function of the random variable W.

[Figure: the observation Y and the additive random variable W are summed, and Y + W is input to the detector Φ(·)]
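A sketch of the argument: under UCA the Bayes risk of Φ(·) operating on Y + W is

r(Φ) = π0 P0(decide H1) + π1 P1(decide H0) = E{g(W)},

where, by conditioning on W = w and using the independence of Y and W,

g(w) = π0 ∫ Φ(y + w) p0Y(y) dy + π1 ∫ [1 − Φ(y + w)] p1Y(y) dy

is the Bayes risk obtained when W equals the constant w. Since E{g(W)} ≥ inf over w of g(w) for every PDF of W, the risk is minimized by placing all probability mass at a minimizer; that is, pW(x) = δ(x − c) with

c = arg min over w of g(w).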
