ECE 434 Exam 1, University of Illinois at Urbana-Champaign: Random Processes

The Spring 2003 Exam 1 for the ECE 434: Random Processes course at the University of Illinois at Urbana-Champaign. The exam covers topics such as Gaussian random variables, the Cauchy distribution, and convergence in distribution. Students are required to solve problems involving probabilities, characteristic functions, and limiting distributions.


University of Illinois at Urbana-Champaign
ECE 434: Random Processes
Spring 2003

Exam 1
Monday, March 10, 2003

Name:

• You have 75 minutes for this exam. The exam is closed book and closed note, except that you may consult both sides of one sheet of notes, typed in font size 10 or equivalent handwriting size.
• Calculators, laptop computers, Palm Pilots, two-way e-mail pagers, etc. may not be used.
• Write your answers in the spaces provided.
• Please show all of your work. Answers without appropriate justification will receive very little credit. If you need extra space, use the back of the previous page.

Score: 1. (12 pts.)  2. (12 pts.)  3. (8 pts.)  4. (8 pts.)  Total: (40 pts.)

Problem 1 (12 points)

Let X, Y be jointly Gaussian random variables with mean zero and covariance matrix

    Cov\begin{pmatrix} X \\ Y \end{pmatrix} = \begin{pmatrix} 4 & 6 \\ 6 & 18 \end{pmatrix}.

You may express your answers in terms of the Φ function defined by

    \Phi(u) = \int_{-\infty}^{u} \frac{1}{\sqrt{2\pi}}\, e^{-s^2/2}\, ds.

(a) Find P[ |X − 1| ≥ 2 ].

(b) What is the conditional density of X given that Y = 3? You can either write out the density in full, or describe it as a well-known density with specified parameter values.

(c) Find P[ |X − E[X|Y]| ≥ 1 ].

Problem 4 (8 points)

Suppose in a given application a Kalman filter has been implemented to recursively produce x̂_{k+1|k} for k ≥ 0, as in class. Thus by time k, x̂_{k+1|k}, Σ_{k+1|k}, x̂_{k|k−1}, and Σ_{k|k−1} are already computed. Suppose that it is desired to also compute x̂_{k|k} at time k. Give additional equations that can be used to compute x̂_{k|k}. You may consult the attached sheet giving the definitions and equations for Kalman filtering. Be as explicit as you can, expressing any matrices you use in terms of the matrices of the model or the Kalman filter already considered.
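For orientation (not the intended exam solution), the standard measurement-update step that converts a predicted estimate x̂_{k|k−1} into a filtered estimate x̂_{k|k} can be sketched in NumPy. The function name `measurement_update` and the array shapes are illustrative assumptions; the code follows the course convention y_k = H_k^T x_k + v_k:

```python
import numpy as np

def measurement_update(x_pred, Sigma_pred, y, H, R):
    """Sketch of a measurement update: map (x̂_{k|k-1}, Σ_{k|k-1})
    and the observation y_k to (x̂_{k|k}, Σ_{k|k}).

    Uses the convention y_k = H_k^T x_k + v_k, so the innovation
    covariance is S = H^T Σ H + R.
    """
    S = H.T @ Sigma_pred @ H + R            # innovation covariance
    L = Sigma_pred @ H @ np.linalg.inv(S)   # filter gain (distinct from the predictor gain K_k)
    innovation = y - H.T @ x_pred           # ỹ_k, the new part of the observation
    x_filt = x_pred + L @ innovation
    Sigma_filt = Sigma_pred - L @ H.T @ Sigma_pred
    return x_filt, Sigma_filt
```

Note that this gain differs from the predictor gain in (2) only by the leading factor F_k, reflecting that x̂_{k+1|k} additionally propagates the state one step forward.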
Excerpt from ECE 434 course notes (for use with Exam 1)

The state and observation equations are given by

    State:        x_{k+1} = F_k x_k + w_k,    k ≥ 0
    Observation:  y_k = H_k^T x_k + v_k,      k ≥ 0.

It is assumed that
• x_0, v_0, v_1, …, w_0, w_1, … are pairwise uncorrelated.
• E[x_0] = x̄_0, Cov(x_0) = P_0, E[w_k] = 0, Cov(w_k) = Q_k, E[v_k] = 0, Cov(v_k) = R_k.
• F_k, H_k, Q_k, R_k for k ≥ 0 and P_0 are known matrices.
• x̄_0 is a known vector.

Let y^k = (y_0, y_1, …, y_k) represent the observations up to time k. Various estimators are defined by x̂_{i|j} = Ê[x_i | y^j], with the associated covariance of error matrices Σ_{i|j} = Cov(x_i − x̂_{i|j}). The Kalman filter equations are given by

    x̂_{k+1|k} = [F_k − K_k H_k^T] x̂_{k|k−1} + K_k y_k                              (1)

with the initial condition x̂_{0|−1} = x̄_0, where the gain matrix K_k is given by

    K_k = F_k Σ_{k|k−1} H_k [H_k^T Σ_{k|k−1} H_k + R_k]^{−1}                        (2)

and the covariance of error matrices are recursively computed by

    Σ_{k+1|k} = F_k [Σ_{k|k−1} − Σ_{k|k−1} H_k (H_k^T Σ_{k|k−1} H_k + R_k)^{−1} H_k^T Σ_{k|k−1}] F_k^T + Q_k    (3)

with the initial condition Σ_{0|−1} = P_0.

The Kalman filter equations are now derived. Roughly speaking, there are two considerations for computing x̂_{k+1|k} once x̂_{k|k−1} is computed: (1) the change in state from x_k to x_{k+1}, and (2) the availability of the new observation y_k. It is useful to treat the two considerations separately.

To predict x_{k+1} without the benefit of the new observation, we need only use the state update equation and the fact w_k ⊥ y^{k−1} to find

    Ê[x_{k+1} | y^{k−1}] = F_k x̂_{k|k−1}.                                          (4)

Thus, if it weren't for the new observation, the filter update equation would simply consist of multiplication by F_k. Furthermore, the covariance of error matrix would be

    Σ_{k+1|k−1} = Cov(x_{k+1} − F_k x̂_{k|k−1}) = F_k Σ_{k|k−1} F_k^T + Q_k.         (5)

Consider next the new observation y_k. The observation y_k is not totally new, for it can be predicted in part from the previous observations. Specifically, we can consider ỹ_k = y_k − Ê[y_k | y^{k−1}] to be the new part of the observation y_k. The variable ỹ_k is the linear innovation at time k.
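The recursion (1)–(3) can be sketched directly in code. The following is a minimal NumPy implementation of one step of the one-step-ahead predictor; the function name `kalman_predictor_step` and the choice to pass all matrices as 2-D arrays are illustrative assumptions, not part of the course notes:

```python
import numpy as np

def kalman_predictor_step(x_hat, Sigma, y, F, H, Q, R):
    """One step of the Kalman one-step-ahead predictor, eqs. (1)-(3):
    map (x̂_{k|k-1}, Σ_{k|k-1}) and the observation y_k
    to (x̂_{k+1|k}, Σ_{k+1|k}), under y_k = H_k^T x_k + v_k.
    """
    S = H.T @ Sigma @ H + R                  # innovation covariance H^T Σ H + R
    K = F @ Sigma @ H @ np.linalg.inv(S)     # gain matrix, eq. (2)
    x_next = (F - K @ H.T) @ x_hat + K @ y   # state estimate update, eq. (1)
    Sigma_next = F @ (Sigma - Sigma @ H @ np.linalg.inv(S) @ H.T @ Sigma) @ F.T + Q  # eq. (3)
    return x_next, Sigma_next
```

Starting from x̂_{0|−1} = x̄_0 and Σ_{0|−1} = P_0, repeated calls with each new observation y_k reproduce the recursion above.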
Since the linear span of the random variables in (y^{k−1}, y_k) is the same as the linear span of the random variables in (y^{k−1}, ỹ_k), for the purposes of incorporating the new observation we can pretend that ỹ_k is the new observation rather than y_k. By the observation equation and the facts E[v_k] = 0 and E[y^{k−1} v_k^T] = 0, it follows that Ê[y_k | y^{k−1}] = H_k^T x̂_{k|k−1}, so ỹ_k = y_k − H_k^T x̂_{k|k−1}.

Since x̂_{k+1|k} can be expressed as a linear transformation of (1, y^{k−1}, y_k), or equivalently as a linear transformation of (1, y^{k−1}, ỹ_k),

    x̂_{k+1|k} = F_k x̂_{k|k−1} + Ê[x_{k+1} − F_k x̂_{k|k−1} | y^{k−1}, ỹ_k].          (6)

Since E[ỹ_k] = 0 and E[y^{k−1} ỹ_k^T] = 0,

    Ê[x_{k+1} − F_k x̂_{k|k−1} | y^{k−1}, ỹ_k]
        = Ê[x_{k+1} − F_k x̂_{k|k−1} | y^{k−1}] + Ê[x_{k+1} − F_k x̂_{k|k−1} | ỹ_k],   (7)

where the first term on the right side of (7) is zero by (4). Since x_{k+1} − F_k x̂_{k|k−1} and ỹ_k are both mean zero,

    Ê[x_{k+1} − F_k x̂_{k|k−1} | ỹ_k] = K_k ỹ_k,                                      (8)

where

    K_k = Cov(x_{k+1} − F_k x̂_{k|k−1}, ỹ_k) Cov(ỹ_k)^{−1}.                           (9)

Combining (6), (7), and (8) yields the main Kalman filter equation

    x̂_{k+1|k} = F_k x̂_{k|k−1} + K_k ỹ_k.                                             (10)

Taking into account the new observation ỹ_k, which is orthogonal to the previous observations, yields a reduction in the covariance of error:

    Σ_{k+1|k} = Σ_{k+1|k−1} − Cov(K_k ỹ_k).                                           (11)

The Kalman filter equations (1), (2), and (3) follow easily from (10), (9), and (11), respectively. Some of the details follow. To convert (9) into (2), use

    Cov(x_{k+1} − F_k x̂_{k|k−1}, ỹ_k)
        = Cov(F_k (x_k − x̂_{k|k−1}) + w_k, H_k^T (x_k − x̂_{k|k−1}) + v_k)            (12)
        = Cov(F_k (x_k − x̂_{k|k−1}), H_k^T (x_k − x̂_{k|k−1}))
        = F_k Σ_{k|k−1} H_k

and

    Cov(ỹ_k) = Cov(H_k^T (x_k − x̂_{k|k−1}) + v_k)
             = Cov(H_k^T (x_k − x̂_{k|k−1})) + Cov(v_k)
             = H_k^T Σ_{k|k−1} H_k + R_k.

To convert (11) into (3), use (5) and

    Cov(K_k ỹ_k) = K_k Cov(ỹ_k) K_k^T
                 = Cov(x_{k+1} − F_k x̂_{k|k−1}, ỹ_k) Cov(ỹ_k)^{−1} Cov(ỹ_k, x_{k+1} − F_k x̂_{k|k−1}).

This completes the derivation of the Kalman filtering equations.
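The final step of the derivation, converting (11) into (3), can be checked numerically: with K_k from (2) and S = Cov(ỹ_k), equation (5) minus K_k S K_k^T should equal (3). A short NumPy sanity check under randomly generated model matrices (the random test data is an assumption for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    """Random symmetric positive-definite matrix (illustrative test data)."""
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

n, m = 3, 2                           # state and observation dimensions
F = rng.standard_normal((n, n))
H = rng.standard_normal((n, m))
Q, R, Sigma = random_spd(n), random_spd(m), random_spd(n)   # Sigma plays Σ_{k|k-1}

S = H.T @ Sigma @ H + R               # Cov(ỹ_k), the innovation covariance
K = F @ Sigma @ H @ np.linalg.inv(S)  # gain matrix, eq. (2)

# eq. (5): covariance of error ignoring the new observation
Sigma_no_obs = F @ Sigma @ F.T + Q
# eq. (3): covariance of error using the new observation
Sigma_next = F @ (Sigma - Sigma @ H @ np.linalg.inv(S) @ H.T @ Sigma) @ F.T + Q
# eq. (11): the reduction equals Cov(K_k ỹ_k) = K_k S K_k^T
assert np.allclose(Sigma_next, Sigma_no_obs - K @ S @ K.T)
```

The assertion passes because K S K^T = F Σ H S^{−1} H^T Σ F^T, which is exactly the term subtracted inside the brackets of (3).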