EM for Gauss-Markov Channels

Using (2) in (6) and dropping constants:

\begin{displaymath}Q({\bf A}\vert{\bf B}) \doteq
\int_{\bf H} - \sum_{k=1}^{N}
\vert r_k - {\bf a}_k^{T}{\bf h}_k\vert^{2} \,
f({\bf H}\vert{\bf r},{\bf B}) \, d{\bf H} ,
\end{displaymath} (7)

with:

\begin{displaymath}f({\bf H}\vert{\bf r},{\bf B}) \doteq
f({\bf H}) \,
\prod_{k=1}^{N}
e^{ - \frac {\vert r_k-{\bf b}_k^{T}{\bf h}_k\vert^{2}}
{\sigma^{2}} } .
\end{displaymath} (8)

As an intermediate development, first assume that the elements of $\bf H$ are independent over time. Then $f({\bf H})$ can be written as $f({\bf H}) = \prod_{k=1}^{N} f({\bf h}_k)$, and $f({\bf H}\vert{\bf r},{\bf B})$ becomes:

\begin{displaymath}f({\bf H}\vert{\bf r},{\bf B}) =
\prod_{k=1}^{N} f({\bf h}_k\vert{\bf r},{\bf B}) ,
\end{displaymath} (9)

with:

\begin{displaymath}f({\bf h}_k\vert{\bf r},{\bf B}) \doteq
f({\bf h}_k) \,
e^{ - \frac {\vert r_k -{\bf b}_k^{T}{\bf h}_k\vert^{2}}
{\sigma^{2}} } .
\end{displaymath} (10)
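For illustration, if $f({\bf h}_k)$ is Gaussian with mean ${\bf m}$ and covariance ${\bf C}$ (${\bf m}$ is introduced here only for this example), then (10) is a conjugate-Gaussian update and the posterior is Gaussian with covariance and mean

\begin{displaymath}
\left[ {\bf C}^{-1} + \frac{{\bf b}_k^* {\bf b}_k^T}{\sigma^2} \right]^{-1} ,
\qquad
\left[ {\bf C}^{-1} + \frac{{\bf b}_k^* {\bf b}_k^T}{\sigma^2} \right]^{-1}
\left( {\bf C}^{-1} {\bf m} + \frac{r_k {\bf b}_k^*}{\sigma^2} \right) ;
\end{displaymath}

the same $\frac{{\bf b}_k^*{\bf b}_k^T}{\sigma^2} + {\bf C}^{-1}$ structure reappears in the recursions (17)-(19) below.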

Using (9) and (10) in (7) we obtain:

 \begin{displaymath}Q({\bf A}\vert{\bf B}) \doteq
- \sum_{k=1}^N V_k ,
\end{displaymath} (11)

with:
   
$\displaystyle {\bf g}_k$ $\textstyle \stackrel{\triangle}{=}$ $\displaystyle E[{\bf h}_k \vert {\bf r},{\bf B}]$ (12)
$\displaystyle {\bf G}_k$ $\textstyle \stackrel{\triangle}{=}$ $\displaystyle E[({\bf h}_k - {\bf g}_k) ({\bf h}_k - {\bf g}_k)^{*T} \vert {\bf r},{\bf B}]$ (13)
$\displaystyle V_k$ $\textstyle \stackrel{\triangle}{=}$ $\displaystyle \vert r_k - {\bf a}_k^T {\bf g}_k\vert^2 + {\bf a}_k^{T} {\bf G}_k {\bf a}_k^* .$ (14)
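To see how (11) and (14) follow from (7) with (9) and (10), write ${\bf h}_k = {\bf g}_k + ({\bf h}_k - {\bf g}_k)$ inside the expectation; the cross terms vanish because $E[{\bf h}_k - {\bf g}_k \vert {\bf r},{\bf B}] = {\bf 0}$, leaving

\begin{displaymath}
E[\, \vert r_k - {\bf a}_k^T {\bf h}_k \vert^{2} \,\vert\, {\bf r},{\bf B} \,]
= \vert r_k - {\bf a}_k^T {\bf g}_k \vert^{2}
+ {\bf a}_k^{T} {\bf G}_k {\bf a}_k^{*}
= V_k .
\end{displaymath}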

$V_k$ represents the Viterbi algorithm incremental cost function at time $k$.
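As a concrete illustration (a Python/numpy sketch, not from the paper; the function name and array conventions are assumptions), $V_k$ maps directly to a per-branch trellis metric:

\begin{verbatim}
import numpy as np

def branch_metric(r_k, a_k, g_k, G_k):
    # V_k of (14): |r_k - a_k^T g_k|^2 + a_k^T G_k a_k^*,
    # with a_k the hypothesized symbol vector on a trellis branch and
    # g_k, G_k the posterior channel mean and covariance at time k.
    err = r_k - a_k @ g_k                    # scalar innovation
    penalty = (a_k @ G_k @ a_k.conj()).real  # channel-uncertainty term
    return abs(err)**2 + penalty
\end{verbatim}

The covariance term penalizes symbol hypotheses that align with directions in which the channel estimate is uncertain.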

Now, for a fast time-varying channel with Gauss-Markov fading parameters, let $\alpha$ represent a complex first-order Markov factor such that $\vert\alpha\vert = e^{-\omega T}$, where $T$ is the sampling period and $\frac{\omega}{\pi}$ is the Doppler spread [17]. Let the time-varying channel coefficients follow a Gauss-Markov distribution such that, at time $k$:

 \begin{displaymath}{\bf h}_k =
\alpha {\bf h}_{k-1} + {\bf v}_k ,
\end{displaymath} (15)

where ${\bf v}_k$ is complex, white, and Gaussian with mean $\bf d$ and covariance $\bf C$. In this case, $f({\bf h}_k\vert{\bf h}_{k-1})$ is Gaussian with mean ${\bf d} + \alpha {\bf h}_{k-1}$ and covariance $\bf C$. Additional properties of this distribution are derived in the appendix.
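A minimal simulation of (15) makes the model concrete (a Python/numpy sketch; the parameter values, and the choice of a real $\alpha$, are illustrative assumptions, not values from the paper):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
L = 3                          # taps per h_k (illustrative)
T = 1e-4                       # sampling period
omega = 100 * np.pi            # omega/pi = 100 Hz Doppler spread
alpha = np.exp(-omega * T)     # real here; in general complex with
                               # |alpha| = e^{-omega T}
d = np.zeros(L)                # mean of the innovation v_k
C = 0.01 * np.eye(L)           # covariance of v_k

def next_h(h_prev):
    # One step of (15): h_k = alpha h_{k-1} + v_k, with v_k complex
    # white Gaussian, mean d, covariance C.
    w = rng.standard_normal(L) + 1j * rng.standard_normal(L)
    v = d + np.linalg.cholesky(C / 2) @ w
    return alpha * h_prev + v

# e.g. h = next_h(np.zeros(L, dtype=complex))
\end{verbatim}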

Equation (11) and the definitions (12)-(14) still apply, with:

\begin{displaymath}f({\bf H}\vert{\bf r},{\bf B}) \doteq
f({\bf h}_1)
\prod_{k=2}^{N} f({\bf h}_k\vert{\bf h}_{k-1})
\prod_{k=1}^{N}
e^{ - \frac {\vert r_k-{\bf b}_k^{T}{\bf h}_k\vert^{2}}
{\sigma^{2}} } .
\end{displaymath} (16)

To determine $f({\bf h}_k\vert{\bf r},{\bf B})$, and thereby ${\bf g}_k$ and ${\bf G}_k$, we must integrate (16) over ${\bf h}_i, i = 1,\ldots,N, i \ne k$. The derivation of this integration result is shown in the appendix. The result is that $f({\bf h}_k\vert{\bf r},{\bf B})$ is Gaussian, with mean ${\bf g}_k$ and covariance ${\bf G}_k$ determined by the following recursive algorithm:

Initialize (backward recursion): for $k=N, N-1, \ldots , 2$

 
$\displaystyle {\bf\hat{G}}_k$ = $\displaystyle \left[
\frac {{\bf b}_k^*{\bf b}_k^T} {\sigma^2}
+ {\bf C}^{-1}
+ \vert\alpha\vert^2 \left( {\bf C}^{-1} - {\bf C}^{-1} {\bf\hat{G}}_{k+1} {\bf C}^{-1} \right)
\right]^{-1}$
$\displaystyle {\bf\hat{q}}_k$ = $\displaystyle \frac {r_k{\bf b}_k^*} {\sigma^2}
+ (1 - \alpha^*) {\bf C}^{-1} {\bf d}
+ \alpha^* {\bf C}^{-1} {\bf\hat{G}}_{k+1} {\bf\hat{q}}_{k+1}$ (17)

Iterate (forward recursion): for $k=1, 2, \ldots, N$

 
$\displaystyle {\bf G}_k$ = $\displaystyle \left[
\frac {{\bf b}_k^*{\bf b}_k^T} {\sigma^2}
+ {\bf C}^{-1}
+ \vert\alpha\vert^2 \left( {\bf C}^{-1}
- {\bf C}^{-1} {\bf\hat{G}}_{k-1} {\bf C}^{-1}
- {\bf C}^{-1} {\bf\hat{G}}_{k+1} {\bf C}^{-1} \right)
\right]^{-1}$
$\displaystyle {\bf q}_k$ = $\displaystyle \frac {r_k{\bf b}_k^*} {\sigma^2}
+ (1 - \alpha^*) {\bf C}^{-1} {\bf d}
+ \alpha {\bf C}^{-1} {\bf\hat{G}}_{k-1} {\bf\hat{q}}_{k-1}
+ \alpha^* {\bf C}^{-1} {\bf\hat{G}}_{k+1} {\bf\hat{q}}_{k+1}$ (18)
$\displaystyle {\bf g}_k$ = $\displaystyle {\bf G}_k {\bf q}_k$


 
then the forward-pass quantities are advanced for use at the next time step:

$\displaystyle {\bf\hat{G}}_k$ = $\displaystyle \left[
\frac {{\bf b}_k^*{\bf b}_k^T} {\sigma^2}
+ {\bf C}^{-1}
+ \vert\alpha\vert^2 \left( {\bf C}^{-1} - {\bf C}^{-1} {\bf\hat{G}}_{k-1} {\bf C}^{-1} \right)
\right]^{-1}$
$\displaystyle {\bf\hat{q}}_k$ = $\displaystyle \frac {r_k{\bf b}_k^*} {\sigma^2}
+ (1 - \alpha^*) {\bf C}^{-1} {\bf d}
+ \alpha {\bf C}^{-1} {\bf\hat{G}}_{k-1} {\bf\hat{q}}_{k-1} ,$ (19)

with ${\bf\hat{G}}_0 \equiv {\bf\hat{G}}_{N+1} \equiv {\bf C}$, ${\bf\hat{q}}_0 \equiv \left( \frac{1 - \alpha^*}{1 - \alpha} \right) {\bf C}^{-1} {\bf d}$, and ${\bf\hat{q}}_{N+1} \equiv {\bf C}^{-1} {\bf d}$.
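A direct transcription of (17)-(19) may clarify the data flow. The Python/numpy sketch below rests on stated assumptions: the function and variable names are mine, the 1-indexed math is mapped to 0-indexed arrays, and the single hat symbol of the paper is split into backward-pass arrays Gb, qb for (17) and forward-pass variables Gf, qf for (19):

\begin{verbatim}
import numpy as np

def em_e_step(r, B, sigma2, alpha, C, d):
    # E-step: posterior channel means g_k and covariances G_k of
    # (12)-(13) via the recursions (17)-(19).
    # r: (N,) received samples; B: (N, L) rows b_k from the current
    # symbol estimate; C, d: innovation covariance and mean of (15).
    N, L = B.shape
    Ci = np.linalg.inv(C)
    ac = np.conj(alpha)
    a2 = abs(alpha)**2

    # Backward pass (17), k = N, ..., 2, started from
    # hat-G_{N+1} = C and hat-q_{N+1} = C^{-1} d.
    Gb = [None] * (N + 2)
    qb = [None] * (N + 2)
    Gb[N + 1], qb[N + 1] = C.copy(), Ci @ d
    for k in range(N, 1, -1):
        bk = B[k - 1]
        prec = np.outer(bk.conj(), bk) / sigma2 + Ci
        Gb[k] = np.linalg.inv(prec + a2 * (Ci - Ci @ Gb[k + 1] @ Ci))
        qb[k] = (r[k - 1] * bk.conj() / sigma2 + (1 - ac) * (Ci @ d)
                 + ac * Ci @ Gb[k + 1] @ qb[k + 1])

    # Forward sweep, k = 1, ..., N: (18) combines forward and backward
    # hats into G_k, q_k, g_k; (19) then advances the forward hats,
    # started from hat-G_0 = C, hat-q_0 = ((1-a*)/(1-a)) C^{-1} d.
    Gf, qf = C.copy(), ((1 - ac) / (1 - alpha)) * (Ci @ d)
    g = np.zeros((N, L), dtype=complex)
    G = np.zeros((N, L, L), dtype=complex)
    for k in range(1, N + 1):
        bk = B[k - 1]
        prec = np.outer(bk.conj(), bk) / sigma2 + Ci
        base = r[k - 1] * bk.conj() / sigma2 + (1 - ac) * (Ci @ d)
        G[k - 1] = np.linalg.inv(prec + a2 * (Ci - Ci @ Gf @ Ci
                                              - Ci @ Gb[k + 1] @ Ci))
        qk = (base + alpha * Ci @ Gf @ qf
              + ac * Ci @ Gb[k + 1] @ qb[k + 1])
        g[k - 1] = G[k - 1] @ qk
        # (19): advance the forward hats using the old Gf, qf.
        Gf_new = np.linalg.inv(prec + a2 * (Ci - Ci @ Gf @ Ci))
        qf = base + alpha * Ci @ Gf @ qf
        Gf = Gf_new
    return g, G
\end{verbatim}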

It is apparent from the above algorithm that $f({\bf h}_k\vert{\bf r},{\bf B})$ at any time $k$ depends on all of the received data and estimated symbols from time 1 to $N$. In the E-step of the EM algorithm, (17)-(19) are used to estimate ${\bf G}_k$ and ${\bf g}_k$, $k = 1, \ldots, N$. In the M-step, the Viterbi algorithm with incremental cost function (14) is used to estimate $\bf A$.
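Putting the pieces together, the overall iteration alternates these two steps. The sketch below assumes the em_e_step and branch_metric functions above and a hypothetical symbol alphabet; purely for brevity it replaces the Viterbi search with an independent per-$k$ minimization of $V_k$, whereas the paper's M-step runs the Viterbi algorithm over the trellis, coupling the ${\bf a}_k$ across time:

\begin{verbatim}
from itertools import product
import numpy as np

def em_detect(r, sigma2, alpha, C, d, symbols, L, n_iter=5):
    # EM iteration: the E-step computes g_k, G_k given the current
    # symbol estimate B; the M-step re-estimates the symbols by
    # minimizing sum_k V_k (per-k here, standing in for Viterbi).
    N = len(r)
    B = np.ones((N, L), dtype=complex)        # initial symbol guess
    cands = [np.array(c) for c in product(symbols, repeat=L)]
    for _ in range(n_iter):
        g, G = em_e_step(r, B, sigma2, alpha, C, d)    # E-step
        for k in range(N):                             # M-step
            costs = [branch_metric(r[k], a, g[k], G[k])
                     for a in cands]
            B[k] = cands[int(np.argmin(costs))]
    return B

# e.g. B_hat = em_detect(r, 0.1, 0.97, 0.01*np.eye(3),
#                        np.zeros(3), [1+0j, -1+0j], L=3)
\end{verbatim}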

