Solutions to exercises
Source: Gray (2004), Sec. 5.10, p. 349.
Exercise 1:
For the two-sided process, the mean is
\mathbb{E}\{W_n\} = (1+r)m
and the autocorrelation is
R_W(k, j) = m^2 (1+r)^2 + (1+r^2)\sigma^2 \delta[k-j] + r \sigma^2\delta[k-j-1] + r \sigma^2 \delta[k-j+1]
W_n is strictly stationary.
For the one-sided process, the mean is
\mathbb{E}\{W_n\} = \left[ \begin{array}{ll} (1+r)m, & n>0 \\ m, & n=0 \end{array} \right.
and the autocorrelation is
R_W(k, j) = \left[ \begin{array}{ll} m^2 (1+r)^2 + (1+r^2)\sigma^2 \delta[k-j] + r \sigma^2\delta[k-j-1] + r \sigma^2 \delta[k-j+1], & k>0, \quad j>0 \\ m^2 + \sigma^2 + r \sigma^2 \delta[k+j-1], & k=0 \text{ or } j = 0 \end{array} \right.
For n > 0, \mathbb{E}\{W_n\} and R_W(n, n+k) do not depend on n.
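[Optional numerical check, not part of Gray's solution. It assumes W_n = X_n + r X_{n-1} with X_n iid, which is inferred from the form of R_W; the Gaussian distribution and the values of m, r, \sigma are arbitrary choices.]
```python
# Monte-Carlo check of E{W_n} = (1+r)m and R_W(k) for the two-sided process.
# Assumes W_n = X_n + r*X_{n-1}, X_n iid (inferred from the solution's R_W).
import numpy as np

rng = np.random.default_rng(0)
m, r, sigma, N = 1.0, 0.5, 2.0, 10**6

X = rng.normal(m, sigma, N)
W = X[1:] + r * X[:-1]

print(W.mean(), (1 + r) * m)                 # empirical mean vs (1+r)m
for k in range(3):                           # R_W(k) = E{W_{n+k} W_n}
    emp = np.mean(W[k:] * W[:len(W) - k])
    theo = (m * (1 + r))**2 + (1 + r**2) * sigma**2 * (k == 0) + r * sigma**2 * (k == 1)
    print(k, emp, theo)
```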
Exercise 2:
SOLUTION:
\mathbb{E}\{U_0\} = \frac{5}{2} \\ R_U[k] = \frac{25}{4} + \frac{9}{4} \delta[k] + \delta[k-1] + \delta[k+1]
RESOLUTION:
The mean is
\mathbb{E}\{U_0\} = \mathbb{E}\{X_0\} + \mathbb{E}\{X_{-1}\} + \mathbb{E}\{Y_{0}\} = 1 + 1 + \frac{1}{2} = \frac{5}{2}
and the autocorrelation is
R_U[k] = \mathbb{E}\{U_{n+k} U_n\} \\ \qquad = \mathbb{E}\{(X_{n+k}+ X_{n+k-1} + Y_{n+k})(X_n + X_{n-1} + Y_n)\} \\ \qquad = \mathbb{E}\{(X_{n+k} + X_{n+k-1})(X_n + X_{n-1})\} + \mathbb{E}\{(X_{n+k} + X_{n+k-1}) Y_n\} + \mathbb{E}\{Y_{n+k}(X_n + X_{n-1})\} + \mathbb{E}\{Y_{n+k} Y_n\}
Since X_n and Y_n are independent, we have
R_U[k] = \mathbb{E}\{(X_{n+k} + X_{n+k-1})(X_n + X_{n-1})\} + \mathbb{E}\{X_{n+k} + X_{n+k-1}\} \mathbb{E}\{Y_n\} + \mathbb{E}\{Y_{n+k}\} \mathbb{E}\{X_n + X_{n-1}\} + \mathbb{E}\{Y_{n+k} Y_n\} \\ \qquad = 2R_X[k] + R_X[k-1] + R_X[k+1] + 4 \mathbb{E}\{X_0\}\mathbb{E}\{Y_0\} + R_Y[k]
And, since both X_n and Y_n are iid processes, we can write
R_X[k] = \mathbb{E}\{X_0\}^2 + \text{var}\{X_0\} \delta[k] = 1 + \delta[k] \\ R_Y[k] = \mathbb{E}\{Y_0\}^2 + \text{var}\{Y_0\} \delta[k] = \frac{1}{4} + \frac{1}{4} \delta[k]
therefore
R_U[k] = 2(1 + \delta[k]) + (1 + \delta[k-1]) + (1 + \delta[k+1]) + 2 + \left(\frac{1}{4} + \frac{1}{4} \delta[k]\right) \\ \qquad = \left(2 + 1 + 1 + 2 + \frac{1}{4}\right) + \left(2 + \frac14\right) \delta[k] + \delta[k-1] + \delta[k+1] \\ \qquad = \frac{25}{4} + \frac{9}{4} \delta[k] + \delta[k-1] + \delta[k+1]
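[Optional numerical check, not part of the original solution. The process U_n = X_n + X_{n-1} + Y_n and the moments are as stated above; the concrete distributions (Gaussian X_n, Bernoulli(1/2) Y_n, which has mean 1/2 and variance 1/4) are an arbitrary choice.]
```python
# Monte-Carlo check of R_U[k] = 25/4 + (9/4)δ[k] + δ[k-1] + δ[k+1].
import numpy as np

rng = np.random.default_rng(0)
N = 10**6
X = rng.normal(1.0, 1.0, N)                  # E{X}=1, var{X}=1
Y = (rng.random(N - 1) < 0.5).astype(float)  # E{Y}=1/2, var{Y}=1/4
U = X[1:] + X[:-1] + Y                       # U_n = X_n + X_{n-1} + Y_n

for k in range(3):
    emp = np.mean(U[k:] * U[:len(U) - k])
    theo = 25/4 + (9/4) * (k == 0) + 1.0 * (k == 1)
    print(k, emp, theo)
```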
Exercise 4:
R_W(k) = m^2 (1+r)^2 + (1+r^2)\sigma^2 \delta[k] + r \sigma^2\delta[k-1] + r \sigma^2 \delta[k+1]
therefore
S_W(\omega) = (1+r^2)\sigma^2 + 2 \pi m^2 (1+r)^2 \delta(\omega) + 2 r \sigma^2 \cos(\omega), \qquad -\pi \le \omega \le \pi
Also
S_U(\omega) = \frac{9}{4} + \frac{25\pi}{2} \delta(\omega) + 2 \cos(\omega), \qquad -\pi \le \omega \le \pi
(both S_W(\omega) and S_U(\omega) are periodic with period 2\pi)
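[Optional numerical check, not part of the original solution. Averaging periodograms of mean-centered realizations of U_n should converge to the continuous part 9/4 + 2\cos(\omega); the impulse 2\pi(25/4)\delta(\omega) carries the squared mean, which the centering removes.]
```python
# Averaged periodogram of mean-centered U_n vs. the continuous part of S_U.
import numpy as np

rng = np.random.default_rng(0)
L, trials = 256, 2000
acc = np.zeros(L)
for _ in range(trials):
    X = rng.normal(1.0, 1.0, L + 1)
    Y = (rng.random(L) < 0.5).astype(float)
    U = X[1:] + X[:-1] + Y - 5/2             # subtract the true mean E{U} = 5/2
    acc += np.abs(np.fft.fft(U))**2 / L      # periodogram of one realization
omega = 2 * np.pi * np.fft.fftfreq(L)
for i in (0, 32, 64, 128):                   # a few frequencies in [-π, π)
    print(omega[i], acc[i] / trials, 9/4 + 2 * np.cos(omega[i]))
```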
Exercise 5:
(a)
\mathbb{E}\{Y_n\} = 0 \\ S_Y(\omega) = \dfrac{1}{K^2} \, \dfrac{\sin^2\left(\frac{K\omega}{2}\right)} {\sin^2\left(\frac{\omega}{2}\right)}
(b)
Y_n is a zero-mean Gaussian random variable with variance \frac{1}{K}
f_{Y_n}(y) = \sqrt{\frac{K}{2\pi}} \exp\left(-\frac{K y^2}{2}\right)
The characteristic function is
M_{Y_n}(ju) = \exp\left(-\frac{u^2}{2K}\right)
[You can skip questions about the characteristic function]
(c)
h_n = \frac{1}{K}\left(\delta_n - \delta_{n-K}\right)
(d)
C_W(k) = \mathbb{E}\{W_{n+k} W_n\} = \frac{1}{K^2}\left(2\delta_k - \delta_{k-K} - \delta_{k+K}\right)
(e) [You can skip questions about convergence]
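[Optional numerical check, not part of the original solution. It assumes Y_n is the length-K moving average of iid N(0,1) variables (inferred from the stated variance 1/K), and W_n = Y_n - Y_{n-1} = (1/K)(X_n - X_{n-K}).]
```python
# Checks var{Y_n} = 1/K and C_W(k) = (2δ_k - δ_{k-K} - δ_{k+K})/K².
# Assumes Y_n = (1/K) * sum_{i=0}^{K-1} X_{n-i}, X_n iid N(0,1).
import numpy as np

rng = np.random.default_rng(0)
K, N = 8, 10**6
X = rng.normal(0.0, 1.0, N)
Y = np.convolve(X, np.ones(K) / K, mode="valid")  # moving average
W = Y[1:] - Y[:-1]                                # W_n = (X_n - X_{n-K})/K

print(Y.var(), 1 / K)
for k in (0, 1, K):
    emp = np.mean(W[k:] * W[:len(W) - k])
    theo = (2 * (k == 0) - (k == K)) / K**2
    print(k, emp, theo)
```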
Exercise 6:
(a) \mathbb{E}\{X_n\} = 0
(b) \mathbb{E}\{X_n^2\} = \dfrac{1}{12n^2}
(c) K_X(i, j) = \dfrac{1}{12 i^2} \delta_{i-j}
(d) \mathbb{E}\{S_n\} = 0
(e) [You can skip questions about convergence]
(f) \mathbb{E}\{Y_n\} = 0
(g)
K(i, j)= \left[ \begin{array}{ll} \dfrac{1}{24} \min(i, j) (-1)^{\frac{i+j}{2}}, & i\ge 2, \quad j\ge 2, \quad i, j \text{ even} \\ - \dfrac{1}{24} \min(i, j) (-1)^{\frac{i+j}{2}}, & i\ge 2, \quad j\ge 2, \quad i, j \text{ odd} \\ 0, & \text{otherwise} \end{array} \right.
(h)
Z = Y_6 + Y_4 = 6 X_6 \quad \Rightarrow \quad F_Z(z) = \left[\begin{array}{ll} 0, & z < -\frac{1}{2} \\ z + \frac{1}{2}, & -\frac{1}{2} \le z \le \frac{1}{2} \\ 1, & z > \frac{1}{2} \end{array}\right.
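[Optional numerical check, not part of the original solution. It assumes X_n ~ Uniform(-1/(2n), 1/(2n)), which is what \mathbb{E}\{X_n\} = 0 and \mathbb{E}\{X_n^2\} = 1/(12n^2) suggest.]
```python
# Checks E{X_n^2} = 1/(12 n^2) and F_Z(z) = z + 1/2 for Z = 6*X_6.
# Assumes X_n ~ Uniform(-1/(2n), 1/(2n)) (inferred from the stated moments).
import numpy as np

rng = np.random.default_rng(0)
N = 10**6
for n in (1, 3, 6):
    Xn = rng.uniform(-1 / (2 * n), 1 / (2 * n), N)
    print(n, (Xn**2).mean(), 1 / (12 * n**2))

Z = 6 * rng.uniform(-1 / 12, 1 / 12, N)      # Z = 6*X_6 ~ Uniform(-1/2, 1/2)
for z in (-0.25, 0.0, 0.25):
    print(z, (Z <= z).mean(), z + 0.5)       # empirical CDF vs z + 1/2
```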
Exercise 8:
Z_n = X_n U_n
Y_n = X_n + U_n
W_n = X_n + U_0
\mathbb{E}\{Z_n\} = \mathbb{E}\{Y_n\} = \mathbb{E}\{W_n\} = 0
\text{var}(Z_n) = \mathbb{E}\{X_n^2\} \mathbb{E}\{U_n^2\} = \sigma^2
\text{var}(Y_n) = \mathbb{E}\{X_n^2\} + \mathbb{E}\{U_n^2\} = 1 + \sigma^2
\text{var}(W_n) = \mathbb{E}\{X_n^2\} + \mathbb{E}\{U_0^2\} = 1 + \sigma^2
R_{Z}(n) = \sigma^2 \delta_n \quad \Rightarrow \quad S_Z(\omega) = \sigma^2
R_{Y}(n) = (1 + \sigma^2) \delta_n \quad \Rightarrow \quad S_Y(\omega) = 1 + \sigma^2
R_{W}(n) = 1 + \sigma^2 \delta_n \quad \Rightarrow \quad S_W(\omega) = 2\pi \delta(\omega) + \sigma^2, \qquad -\pi \le \omega \le \pi
K_{Z, Y}(n, m) = \mathbb{E}\{Z_n Y_m\} = \mathbb{E}\{X_n U_n (X_m + U_m)\} = 0
K_{Z, W}(n, m) = \mathbb{E}\{Z_n W_m\} = \mathbb{E}\{X_n U_n (X_m + U_0)\} = 0
K_{Y, W}(n, m) = \mathbb{E}\{Y_n W_m\} = \mathbb{E}\{(X_n + U_n)(X_m + U_0)\} = \sigma^2 \delta_{n-m} + \delta_n
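[Optional numerical check, not part of the original solution. It assumes, as the variance computations above imply, that X_n is iid zero mean with variance \sigma^2 and U_n is iid zero mean with unit variance, independent of X_n; the Gaussian choice is illustrative.]
```python
# Check of K_{Y,W}(n, m) = σ²δ_{n-m} + δ_n (and the zero cross-covariances).
# Assumes X_n iid N(0, σ²), U_n iid N(0, 1), mutually independent.
import numpy as np

rng = np.random.default_rng(0)
sigma, trials = 2.0, 10**6
X = rng.normal(0.0, sigma, (trials, 4))
U = rng.normal(0.0, 1.0, (trials, 4))
Z, Y, W = X * U, X + U, X + U[:, [0]]        # Z_n, Y_n, W_n as defined above

print(np.mean(Z[:, 1] * Y[:, 1]), np.mean(Z[:, 1] * W[:, 2]))  # both ~ 0
for n, m in ((0, 0), (2, 2), (2, 1), (0, 3)):
    emp = np.mean(Y[:, n] * W[:, m])
    theo = sigma**2 * (n == m) + 1.0 * (n == 0)
    print((n, m), emp, theo)
```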
Exercise 13:
p_{W_n}(w) = \left[\begin{array}{ll} p^2 + (1-p)^2, & w = 0 \\ 2p(1-p), & w = 1 \end{array}\right.
R_W(k, j) = 4q^2 \left(1 - \delta_{k-j} - \delta_{k-j-1} - \delta_{k-j+1}\right) + 2 q \delta_{k-j} + q \left(\delta_{k-j-1} + \delta_{k-j+1}\right)
where q = p(1-p).
S_W(\omega) = 2q \left(4q\pi \delta(\omega) + (1 - 2q) + (1 - 4q) \cos(\omega)\right)
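[Optional numerical check, not part of the original solution. The pmf above is exactly that of W_n = X_n XOR X_{n-1} for X_n iid Bernoulli(p); that definition is inferred, since the exercise statement is not reproduced here.]
```python
# Check of R_W for W_n = X_n XOR X_{n-1}, X_n iid Bernoulli(p)
# (the process definition is inferred from the pmf of W_n above).
import numpy as np

rng = np.random.default_rng(0)
p, N = 0.3, 10**7
q = p * (1 - p)
X = (rng.random(N) < p).astype(np.int8)
W = (X[1:] ^ X[:-1]).astype(float)           # W_n = X_n XOR X_{n-1}

for k in range(4):                           # expect 2q, q, 4q², 4q², ...
    emp = np.mean(W[k:] * W[:len(W) - k])
    theo = 4 * q**2 * (k >= 2) + 2 * q * (k == 0) + q * (k == 1)
    print(k, emp, theo)
```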
Source: Beichelt (2016), Chapter 6: Basics of Stochastic Processes, exercises (p. 252).
Exercise 6.12
SOLUTION:
C_Y[k] = R_Y[k] = \frac{0.8^{|k|}}{0.36}
RESOLUTION:
The covariance function is
C_Y[k] = \mathbb{E}\{(Y_{t+k}-m_Y)(Y_t-m_Y)\}
where m_Y is the mean of the process. This mean can be computed through the recursive equation:
m_Y = \mathbb{E}\{Y_t\} = 0.8 \mathbb{E}\{Y_{t-1}\} + \mathbb{E}\{X_t\} = 0.8 \mathbb{E}\{Y_{t-1}\} = 0.8 m_Y
(where we have used that \mathbb{E}\{X_t\}=0). Therefore, m_Y=0 and the covariance function becomes equal to the autocorrelation function:
C_Y[k] = R_Y[k] = \mathbb{E}\{Y_{t+k} Y_t\}
In order to compute the autocorrelation, note that the system defined by the input-output relation Y_t = 0.8 Y_{t-1} + X_t is linear and time invariant. Therefore, if h_t is the impulse response of the system, the autocorrelation function can be computed as
R_Y[k] = R_X[k]*h_k*h_{-k}
Therefore, we need to compute the impulse response. This can be done in the frequency domain. Taking the Fourier transform of the recursion:
Y(\omega) = 0.8 e^{-j\omega} Y(\omega) + X(\omega)
Therefore
H(\omega) = \frac{Y(\omega)}{X(\omega)} = \frac{1}{1-0.8 e^{-j\omega}}
which is the Fourier transform of
h_t = 0.8^t u[t]
where u[t] is the unit step function.
Since X_t is a zero-mean iid process,
R_X[k] = \mathbb{E}\{X_t^2\}\delta_k = \delta_k
Therefore,
R_Y[k] = R_X[k]*h_k*h_{-k} = \delta_k * \left(0.8^k u[k]\right) * \left(0.8^{-k} u[-k]\right) = \frac{0.8^{|k|}}{0.36}
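[Optional numerical check, not part of the original solution. It simulates the recursion directly; the unit variance of X_t is taken from the step R_X[k] = \delta_k above.]
```python
# Check of R_Y[k] = 0.8^{|k|}/0.36 for Y_t = 0.8*Y_{t-1} + X_t,
# X_t iid zero mean with unit variance (as used above).
import numpy as np

rng = np.random.default_rng(0)
N = 10**6
X = rng.normal(0.0, 1.0, N)
Y = np.empty(N)
Y[0] = X[0]
for t in range(1, N):
    Y[t] = 0.8 * Y[t - 1] + X[t]             # the AR(1) recursion

for k in range(4):
    emp = np.mean(Y[k:] * Y[:N - k])
    print(k, emp, 0.8**k / 0.36)
```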
Exercise 6.13
[This exercise can be solved by computing the impulse response through the frequency domain. The process is a bit tedious. Alternatively, you can apply eq. (6.50) in this book:]
C(n) = C(0) \dfrac{(1-y_1^2)y_2^{|n|+1} + (1-y_2^2)y_1^{|n|+1}} {(y_2-y_1)(1 + y_1 y_2)}
where
y_1 = 0.4 (4 + j)
y_2 = 0.4 (4 - j)
C(0) = 20.0
The output process is weakly stationary because |y_1|<1 and |y_2|<1 (otherwise, the system would be unstable).
Exercise 6.14
[It can be solved in the same way as Ex. 6.13]
C(n) = - 4.88 \left(0.19 \cdot (-0.1)^{|n|+1} + 0.99 \cdot 0.9^{|n|+1}\right)
Source: Theodoridis (2015), Chapter 2 (Secs. 2.4.1-2.4.3).
Exercise 2.9:
This problem is solved using the Cauchy-Schwarz inequality for random variables:
|r(k)| = |\mathbb{E}\{U_{n+k} U_n \}| \le \sqrt{\mathbb{E}\{U_{n+k}^2\} \mathbb{E}\{U_n^2\}} = r(0)
|r_{uv}(k)| = |\mathbb{E}\{U_{n+k} V_n \}| \le \sqrt{\mathbb{E}\{U_{n+k}^2\} \mathbb{E}\{V_n^2\}} = \sqrt{r_u(0) r_v(0)}
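[Optional numerical illustration, not part of the exercise: for a sample wide-sense stationary process, the estimated |r(k)| never exceeds r(0). The MA(1) process below is an arbitrary example.]
```python
# Numerical illustration of |r(k)| <= r(0) (Cauchy-Schwarz bound).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, 10**6)
U = X[1:] + 0.9 * X[:-1]                     # an arbitrary stationary process

r = [np.mean(U[k:] * U[:len(U) - k]) for k in range(10)]
print(r[0], max(abs(rk) for rk in r))        # r(0) attains the maximum
```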
Exercise 2.10:
Assuming that
D_n = U_n * w_n = \sum_{j=-\infty}^{\infty} U_j w_{n-j}
we can write
r_d(k) = \mathbb{E}\{D_{n+k} D_n \} \\ \qquad = \mathbb{E}\left\{\sum_{i=-\infty}^{\infty} U_i w_{n+k-i} \sum_{j=-\infty}^{\infty} U_j w_{n-j} \right\} \\ \qquad = \sum_{i=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} \mathbb{E}\{ U_i U_j \} w_{n+k-i} w_{n-j} \\ \qquad = \sum_{i=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} r_u(i-j) w_{n+k-i} w_{n-j}
Applying the index change l=i-j,
r_d(k) = \sum_{l=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} r_u(l) w_{n+k-l-j} w_{n-j} \\ \qquad = \sum_{l=-\infty}^{\infty} r_u(l) \sum_{j=-\infty}^{\infty} w_{n+k-l-j} w_{n-j}
and, applying the index change m=j-n,
r_d(k) = \sum_{l=-\infty}^{\infty} r_u(l) \sum_{m=-\infty}^{\infty} w_{-m} w_{(k-l)-m} \\ \qquad = \sum_{l=-\infty}^{\infty} r_u(l) \left[w_n * w_{-n}\right]_{n=k-l} \\ \qquad = r_u(k) * w_k * w_{-k}.
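[Optional numerical check, not part of the exercise. With a white input, r_u(k) = \delta_k, so r_d should equal the deterministic autocorrelation of the filter taps; the taps below are an arbitrary example.]
```python
# Check of r_d(k) = r_u(k) * w_k * w_{-k} for a short FIR filter w.
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.0, -0.5, 0.25])
U = rng.normal(0.0, 1.0, 10**6)
D = np.convolve(U, w, mode="valid")          # D_n = (U * w)_n

theo = np.convolve(w, w[::-1])               # w_k * w_{-k}, centered at len(w)-1
for k in range(3):
    emp = np.mean(D[k:] * D[:len(D) - k])
    print(k, emp, theo[len(w) - 1 + k])
```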