EM for Time-Varying Channels

For a time-varying channel, using (6) in (15) and dropping constants, (13) becomes:
 
\begin{eqnarray*}
Q({\bf A}\vert{\bf B}) & \doteq &
E\Big[ - \sum_{k=1}^{N} \vert r_k-{\bf a}_k^{T}{\bf h}_k\vert^{2}
\;\Big\vert\; {\bf r},{\bf B} \Big] \\
& = & \int_{\bf P} \, - \sum_{k=1}^{N}
\vert r_k-{\bf a}_k^{T}{\bf h}_k\vert^{2} \;
f({\bf P}\vert{\bf r},{\bf B}) \, d{\bf P} ,
\end{eqnarray*} (35)

with:

\begin{displaymath}
f({\bf P}\vert{\bf r},{\bf B}) \doteq
f({\bf P}) \,
\prod_{k=1}^{N}
e^{ - \frac {\vert r_k-{\bf b}_k^{T}{\bf h}_k\vert^{2}}{\sigma^{2}} } .
\end{displaymath} (36)

In this time-varying case, the E-step of the EM algorithm estimates the conditional means and covariances of the time-varying channel impulse response vectors. The details of the EM algorithm depend on the form of $f({\bf P})$. The following subsections examine the specific cases of independent, independent Gaussian, and Gauss-Markov underlying channel coefficient distributions.



INDEPENDENT UNDERLYING CHANNEL COEFFICIENTS

If the elements of ${\bf P}$ are independent over time, then $f({\bf P})$ can be written as:

 \begin{displaymath}f({\bf P}) =
\ \prod_{k=1}^{N} f({\bf p}_k) ,
\end{displaymath} (37)

and $f({\bf P }\vert{\bf r},{\bf B})$ becomes:

 \begin{displaymath}f({\bf P}\vert{\bf r},{\bf B}) =
\ \prod_{k=1}^{N} f({\bf p}_k\vert{\bf r},{\bf B}) ,
\end{displaymath} (38)

with:

\begin{displaymath}
f({\bf p}_k\vert{\bf r},{\bf B}) \doteq
f({\bf p}_k) \;
e^{ - \frac {\vert r_k -{\bf b}_k^{T}{\bf h}_k\vert^{2}}{\sigma^{2}} } .
\end{displaymath} (39)

Using (38) and (39) in (35) we obtain:
 
\begin{eqnarray*}
Q({\bf A}\vert{\bf B}) & \doteq &
- \sum_{k=1}^{N} \int_{{\bf p}_k}
\vert r_k-{\bf a}_k^{T}{\bf h}_k\vert^{2} \;
f({\bf p}_k) \;
e^{ - \frac {\vert r_k-{\bf b}_k^{T}{\bf h}_k\vert^{2}}{\sigma^{2}} }
\, d{\bf p}_k \\
& = & - \sum_{k=1}^{N} V_k ,
\end{eqnarray*} (40)

with:
   
\begin{eqnarray*}
{\bf g}_k & \stackrel{\triangle}{=} &
E[{\bf h}_k \vert {\bf r},{\bf B}]
~ = ~ {\bf M} {\bf d}_k + {\bf\bar{M}} {\bf\bar{p}} \qquad (41) \\
{\bf G}_k & \stackrel{\triangle}{=} &
E[({\bf h}_k - {\bf g}_k)({\bf h}_k - {\bf g}_k)^{*T} \vert {\bf r},{\bf B}]
~ = ~ {\bf M} {\bf D}_k {\bf M}^{*T} \qquad (42) \\
V_k & \stackrel{\triangle}{=} &
\vert r_k - {\bf a}_k^{T} {\bf g}_k\vert^{2}
+ {\bf a}_k^{T} {\bf G}_k {\bf a}_k^{*} . \qquad (43)
\end{eqnarray*}

$V_k$ represents the Viterbi algorithm incremental cost function at time $k$. ${\bf d}_k$ and ${\bf D}_k$ are the mean and covariance of the underlying channel coefficients ${\bf p}_k$ given ${\bf r}$ and ${\bf B}$:
  
\begin{eqnarray*}
{\bf d}_k & \stackrel{\triangle}{=} &
E[{\bf p}_k\vert{\bf r},{\bf B}] \qquad (44) \\
{\bf D}_k & \stackrel{\triangle}{=} &
E[({\bf p}_k - {\bf d}_k)({\bf p}_k - {\bf d}_k)^{*T} \vert{\bf r},{\bf B}] . \qquad (45)
\end{eqnarray*}

In the E-step, (41) and (42) will be used to estimate ${\bf g}_k$ and ${\bf G}_k$, $k = 1, \ldots, N$. In the M-step, the Viterbi algorithm with incremental cost function (43) will be used to estimate $\bf A$. To initialize ${\bf B}$ in step 1, we can initialize ${\bf g}_k$ and ${\bf G}_k$, $k = 1, \ldots, N$, then use the Viterbi algorithm to estimate $\bf A$, then set ${\bf B}={\bf A}$. To initialize and estimate ${\bf g}_k$ and ${\bf G}_k$ in steps 1 and 2 of the algorithm, we need to know the specific form of $f({\bf p}_k)$. The next subsection derives the specific EM algorithm for Gaussian ${\bf p}_k$.
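To make the iteration concrete, here is a minimal Python/NumPy sketch of the incremental cost (43) and the outer EM loop just described. The callables \texttt{init\_gG}, \texttt{viterbi}, and \texttt{e\_step} are hypothetical placeholders for the initialization, a standard Viterbi decoder minimizing $\sum_k V_k$, and the case-specific E-step (derived in the next subsection for Gaussian ${\bf p}_k$).

\begin{verbatim}
import numpy as np

def incremental_cost(r_k, a_k, g_k, G_k):
    # Viterbi incremental cost V_k from (43):
    #   V_k = |r_k - a_k^T g_k|^2 + a_k^T G_k a_k^*
    e = r_k - a_k @ g_k
    return np.abs(e)**2 + np.real(a_k @ G_k @ np.conj(a_k))

def em_estimate(r, init_gG, viterbi, e_step, n_iter=5):
    # Outer EM loop: initialize {g_k}, {G_k}; the M-step runs the
    # Viterbi algorithm minimizing sum_k V_k; the estimate A becomes
    # B for the next E-step, which re-estimates {g_k}, {G_k} via
    # (41)-(42).
    g, G = init_gG()
    A = None
    for _ in range(n_iter):
        A = viterbi(r, g, G)    # M-step
        B = A                   # set B = A
        g, G = e_step(r, B)     # E-step
    return A
\end{verbatim}

In practice the loop would typically terminate when the symbol estimate stops changing, rather than after a fixed number of iterations.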



GAUSSIAN UNDERLYING CHANNEL COEFFICIENTS

If $f({\bf p}_k)$ is Gaussian with mean ${\bf c}_k$ and covariance ${\bf C}_k$, then, by a derivation similar to that of (32) for the non-time-varying Gaussian case, we obtain:

 \begin{displaymath}f({\bf p}_k\vert{\bf r},{\bf B}) \doteq
e^{ -
({\bf p}_k-{\bf d}_k)^{*T} {\bf D}_k^{-1} ({\bf p}_k-{\bf d}_k)} \; ,
\end{displaymath} (46)

with:
  
\begin{eqnarray*}
{\bf D}_k & = & \left[
\frac {{\bf M}^{*T} {\bf b}_k^{*} {\bf b}_k^{T} {\bf M}}{\sigma^2}
+ {\bf C}_k^{-1}
\right]^{-1} \qquad (47) \\
{\bf d}_k & = & {\bf D}_k \left(
\frac { {\bf M}^{*T} {\bf b}_k^{*}
(r_k - {\bf b}_k^{T} {\bf\bar{M}} {\bf\bar{p}}) }{\sigma^2}
+ {\bf C}_k^{-1} {\bf c}_k \right) . \qquad (48)
\end{eqnarray*}

For the E-step, (47) and (48) are used in (41) and (42) to estimate ${\bf G}_k$ and ${\bf g}_k$, $k = 1 , \ldots , N$. For the M-step, the Viterbi algorithm is used with incremental cost function (43) to estimate $\bf A$.
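As an illustration, a minimal NumPy sketch of this E-step follows. The argument names mirror the quantities above (${\bf M}$, ${\bf\bar{M}}$, ${\bf\bar{p}}$, prior means ${\bf c}_k$, covariances ${\bf C}_k$, $\sigma^2$); the function itself is an assumption of this sketch, not part of the original algorithm statement.

\begin{verbatim}
import numpy as np

def gaussian_e_step(r, B, M, M_bar, p_bar, c, C, sigma2):
    # E-step for independent Gaussian underlying coefficients.
    # r: length-N received samples; B: list of symbol vectors b_k;
    # c, C: lists of prior means c_k and covariances C_k.
    N = len(r)
    g, G = [], []
    h_bar = M_bar @ p_bar                       # fixed part of h_k
    for k in range(N):
        b = B[k]
        Mb = M.conj().T @ b.conj()              # M^{*T} b_k^*
        Ck_inv = np.linalg.inv(C[k])
        # (47): D_k = [M^{*T} b_k^* b_k^T M / sigma^2 + C_k^{-1}]^{-1}
        D = np.linalg.inv(np.outer(Mb, b @ M) / sigma2 + Ck_inv)
        # (48): d_k = D_k (M^{*T} b_k^* (r_k - b_k^T Mbar pbar)/sigma^2
        #              + C_k^{-1} c_k)
        d = D @ (Mb * (r[k] - b @ h_bar) / sigma2 + Ck_inv @ c[k])
        # (41)-(42): conditional mean and covariance of h_k.
        g.append(M @ d + h_bar)
        G.append(M @ D @ M.conj().T)
    return g, G
\end{verbatim}

Note that, per (42) and (47), ${\bf G}_k$ depends only on the symbol hypothesis and the prior, not on $r_k$.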

GAUSS-MARKOV UNDERLYING CHANNEL COEFFICIENTS

For a fast time-varying channel with Gauss-Markov fading parameters, let $\alpha$ represent a complex first-order Markov factor such that $\vert\alpha\vert = e^{-\omega T}$, where $T$ is the sampling period and $\frac {\omega}{\pi}$ is the Doppler spread [14]. Let the time-varying underlying channel coefficients follow a Gauss-Markov distribution such that, at time $k$:

 \begin{displaymath}{\bf p}_k =
\alpha {\bf p}_{k-1} + {\bf v}_k ,
\end{displaymath} (49)

where ${\bf v}_k$ is complex, white, and Gaussian with mean $\bf c$ and covariance $\bf C$. In this case, $f({\bf p}_k\vert{\bf p}_{k-1})$ is Gaussian with mean ${\bf c} + \alpha {\bf p}_{k-1}$ and covariance $\bf C$. $Q({\bf A}\vert{\bf B})$ from (35) can be written in terms of $f({\bf p}_k\vert{\bf r},{\bf B})$:
 
\begin{eqnarray*}
Q({\bf A}\vert{\bf B}) & \doteq &
- \sum_{k=1}^{N} \int_{{\bf p}_k}
\vert r_k-{\bf a}_k^{T}{\bf h}_k\vert^{2} \;
f({\bf p}_k\vert{\bf r},{\bf B}) \, d{\bf p}_k \\
& = & - \sum_{k=1}^{N} V_k ,
\end{eqnarray*} (50)

and the definitions of ${\bf g}_k$, ${\bf G}_k$, and $V_k$ from (41)-(43) still apply. $f({\bf P})$ can be written as:

 \begin{displaymath}f({\bf P}) =
f({\bf p}_1)
\ \prod_{k=2}^N f({\bf p}_k\vert{\bf p}_{k-1}) ,
\end{displaymath} (51)

so $f({\bf P }\vert{\bf r},{\bf B})$ from (36) becomes:

\begin{displaymath}
f({\bf P}\vert{\bf r},{\bf B}) \doteq
f({\bf p}_1)
\ \prod_{k=2}^{N} f({\bf p}_k\vert{\bf p}_{k-1})
\ \prod_{k=1}^{N}
e^{ - \frac {\vert r_k-{\bf b}_k^{T}{\bf h}_k\vert^{2}}{\sigma^{2}} } .
\end{displaymath} (52)
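As a brief aside before deriving the E-step, the fading model (49) is straightforward to simulate, which is useful for testing. Below is a minimal sketch; taking ${\bf v}_k$ circularly symmetric is an extra assumption made here for simplicity, since the text only requires complex white Gaussian noise.

\begin{verbatim}
import numpy as np

def simulate_fading(N, alpha, c, C, seed=0):
    # Simulate p_k = alpha p_{k-1} + v_k, per (49), with v_k
    # complex white Gaussian, mean c, covariance C.
    rng = np.random.default_rng(seed)
    L = len(c)
    S = np.linalg.cholesky(C)   # C = S S^{*T}
    def v():
        w = (rng.standard_normal(L) + 1j*rng.standard_normal(L)) / np.sqrt(2)
        return c + S @ w
    p = np.empty((N, L), dtype=complex)
    p[0] = v()                  # simple start-up choice at k = 1
    for k in range(1, N):
        p[k] = alpha*p[k-1] + v()
    return p
\end{verbatim}

A real $\alpha = e^{-\omega T}$ is the simplest choice satisfying $\vert\alpha\vert = e^{-\omega T}$.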

To determine $f({\bf p}_k\vert{\bf r},{\bf B})$, and thereby ${\bf g}_k$ and ${\bf G}_k$, we must integrate (52) over ${\bf p}_i, i = 1,\ldots,N, i \ne k$. Due to space limitations, the derivation of this integration result is not shown here. A derivation for the same problem, but without the linear constraints, appears in [15]. The result is that $f({\bf p}_k\vert{\bf r},{\bf B})$ is Gaussian, with mean ${\bf d}_k$ and covariance ${\bf D}_k$ determined by the following recursive algorithm:
Initialize: for $k=N, N-1, \ldots , 2$
 
\begin{eqnarray*}
{\bf\hat{D}}_k & = & \left[
\frac { {\bf M}^{*T} {\bf b}_k^{*}{\bf b}_k^{T} {\bf M} }{\sigma^2}
+ {\bf C}^{-1}
+ \vert\alpha\vert^2 \left(
{\bf C}^{-1} - {\bf C}^{-1} {\bf\hat{D}}_{k+1} {\bf C}^{-1} \right)
\right]^{-1} \\
{\bf\hat{q}}_k & = &
\frac { {\bf M}^{*T} {\bf b}_k^{*}
(r_k - {\bf b}_k^{T} {\bf\bar{M}} {\bf\bar{p}}) }{\sigma^2}
+ (1 - \alpha^{*}) {\bf C}^{-1} {\bf c}
+ \alpha^{*} {\bf C}^{-1} {\bf\hat{D}}_{k+1} {\bf\hat{q}}_{k+1}
\end{eqnarray*} (53)


Iterate: for $k=1, 2, \ldots, N$
 
\begin{eqnarray*}
{\bf D}_k & = & \left[
\frac { {\bf M}^{*T} {\bf b}_k^{*}{\bf b}_k^{T} {\bf M} }{\sigma^2}
+ {\bf C}^{-1}
+ \vert\alpha\vert^2 \left( {\bf C}^{-1}
- {\bf C}^{-1} {\bf\hat{D}}_{k-1} {\bf C}^{-1} \right)
+ \vert\alpha\vert^2 \left( {\bf C}^{-1}
- {\bf C}^{-1} {\bf\hat{D}}_{k+1} {\bf C}^{-1} \right)
\right]^{-1} \\
{\bf q}_k & = &
\frac { {\bf M}^{*T} {\bf b}_k^{*}
(r_k - {\bf b}_k^{T} {\bf\bar{M}} {\bf\bar{p}}) }{\sigma^2}
+ (1 - \alpha^{*}) {\bf C}^{-1} {\bf c}
+ \alpha {\bf C}^{-1} {\bf\hat{D}}_{k-1} {\bf\hat{q}}_{k-1}
+ \alpha^{*} {\bf C}^{-1} {\bf\hat{D}}_{k+1} {\bf\hat{q}}_{k+1} \\
{\bf d}_k & = & {\bf D}_k {\bf q}_k , \quad
{\bf G}_k = {\bf M} {\bf D}_k {\bf M}^{*T} , \quad
{\bf g}_k = {\bf M} {\bf d}_k + {\bf\bar{M}} {\bf\bar{p}}
\end{eqnarray*} (54)


 
Then, for use at step $k+1$, update:

\begin{eqnarray*}
{\bf\hat{D}}_k & = & \left[
\frac { {\bf M}^{*T} {\bf b}_k^{*}{\bf b}_k^{T} {\bf M} }{\sigma^2}
+ {\bf C}^{-1}
+ \vert\alpha\vert^2 \left(
{\bf C}^{-1} - {\bf C}^{-1} {\bf\hat{D}}_{k-1} {\bf C}^{-1} \right)
\right]^{-1} \\
{\bf\hat{q}}_k & = &
\frac { {\bf M}^{*T} {\bf b}_k^{*}
(r_k - {\bf b}_k^{T} {\bf\bar{M}} {\bf\bar{p}}) }{\sigma^2}
+ (1 - \alpha^{*}) {\bf C}^{-1} {\bf c}
+ \alpha {\bf C}^{-1} {\bf\hat{D}}_{k-1} {\bf\hat{q}}_{k-1} ,
\end{eqnarray*} (55)

with ${\bf\hat{D}}_0 \equiv {\bf\hat{D}}_{N+1} \equiv {\bf C}$, ${\bf\hat{q}}_0 \equiv \left( \frac {1 - \alpha^{*}}{1 - \alpha} \right) {\bf C}^{-1} {\bf c}$, and ${\bf\hat{q}}_{N+1} \equiv {\bf C}^{-1} {\bf c}$. It is apparent from the above algorithm that $f({\bf p}_k\vert{\bf r},{\bf B})$ at any time $k$ depends on all of the received data and estimated symbols from time 1 to $N$. For the E-step, the algorithm of (53)-(55) is used to estimate ${\bf G}_k$ and ${\bf g}_k$, $k = 1, \ldots, N$. For the M-step, the Viterbi algorithm is used with incremental cost function (43) to estimate $\bf A$.
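A compact NumPy sketch of the recursion (53)-(55) follows; the names are illustrative, and the arrays are indexed $0,\ldots,N+1$ so that the boundary values above occupy slots 0 and $N+1$.

\begin{verbatim}
import numpy as np

def gauss_markov_e_step(r, B, M, M_bar, p_bar, c, C, sigma2, alpha):
    # Forward-backward recursion (53)-(55).  Indices 1..N hold the
    # data; slots 0 and N+1 hold the boundary values.
    N = len(r)
    L = C.shape[0]
    C_inv = np.linalg.inv(C)
    a2 = np.abs(alpha)**2
    ac = np.conj(alpha)

    def base(k):
        # Shared leading terms of (53)-(55) at (1-based) time k.
        b = B[k-1]
        Mb = M.conj().T @ b.conj()              # M^{*T} b_k^*
        A0 = np.outer(Mb, b @ M) / sigma2 + C_inv
        q0 = (Mb * (r[k-1] - b @ (M_bar @ p_bar)) / sigma2
              + (1 - ac) * (C_inv @ c))
        return A0, q0

    D_hat = np.empty((N+2, L, L), dtype=complex)
    q_hat = np.empty((N+2, L), dtype=complex)
    D_hat[0] = D_hat[N+1] = C                   # boundary values
    q_hat[0] = ((1 - ac) / (1 - alpha)) * (C_inv @ c)
    q_hat[N+1] = C_inv @ c

    # Initialize (53): k = N, N-1, ..., 2.
    for k in range(N, 1, -1):
        A0, q0 = base(k)
        D_hat[k] = np.linalg.inv(
            A0 + a2*(C_inv - C_inv @ D_hat[k+1] @ C_inv))
        q_hat[k] = q0 + ac * C_inv @ D_hat[k+1] @ q_hat[k+1]

    # Iterate (54)-(55): k = 1, 2, ..., N.
    g, G = [None]*N, [None]*N
    for k in range(1, N+1):
        A0, q0 = base(k)
        D = np.linalg.inv(
            A0 + a2*(C_inv - C_inv @ D_hat[k-1] @ C_inv)
               + a2*(C_inv - C_inv @ D_hat[k+1] @ C_inv))
        q = (q0 + alpha * C_inv @ D_hat[k-1] @ q_hat[k-1]
                + ac * C_inv @ D_hat[k+1] @ q_hat[k+1])
        d = D @ q                               # (54): d_k = D_k q_k
        G[k-1] = M @ D @ M.conj().T             # G_k
        g[k-1] = M @ d + M_bar @ p_bar          # g_k
        # (55): refresh the hat quantities for step k+1.
        D_hat[k] = np.linalg.inv(
            A0 + a2*(C_inv - C_inv @ D_hat[k-1] @ C_inv))
        q_hat[k] = q0 + alpha * C_inv @ D_hat[k-1] @ q_hat[k-1]
    return g, G
\end{verbatim}

Note that the $\vert\alpha\vert^2$ correction terms vanish at the boundaries, since ${\bf C}^{-1} - {\bf C}^{-1}{\bf C}{\bf C}^{-1} = {\bf 0}$ when ${\bf\hat{D}} = {\bf C}$.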

