Gauss-Markov channel

For a fast time-varying channel with Gauss-Markov fading parameters, let $\alpha$ be a complex first-order Markov factor with $\vert\alpha\vert = e^{-\omega T}$, where T is the sampling period and $\frac {\omega}{\pi}$ is the Doppler spread [9]. Let the time-varying channel coefficients follow a Gauss-Markov model: at time k, ${\bf h}_k = \alpha {\bf h}_{k-1} + {\bf v}_k$, where ${\bf v}_k$ is complex, white, and Gaussian with mean $\bf d$ and covariance $\bf C$. In this case, $f({\bf h}_k\vert{\bf h}_{k-1})$ is Gaussian with mean ${\bf d} + \alpha {\bf h}_{k-1}$ and covariance $\bf C$, and $f({\bf H})$ can be written as
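As a concrete illustration, a minimal Python sketch of this fading model follows (a sketch only, not part of the paper: the tap count m, sampling period T, Doppler spread, drive covariance ${\bf C}$, and initial coefficients are assumed values, and $\alpha$ is taken real since only $\vert\alpha\vert$ is constrained):

import numpy as np

rng = np.random.default_rng(0)
m = 3                       # number of channel taps (assumed)
T = 1e-4                    # sampling period T (assumed)
omega = 100.0 * np.pi       # Doppler spread omega/pi = 100 Hz (assumed)
alpha = np.exp(-omega * T)  # |alpha| = e^{-omega T}; taken real here for simplicity
C = 0.01 * np.eye(m)        # covariance C of the drive v_k (assumed)
Lc = np.linalg.cholesky(C)

h = np.ones(m, dtype=complex)   # known initial coefficients h_0 (assumed)
for k in range(5):
    # v_k: zero-mean circular complex white Gaussian noise with covariance C
    v = Lc @ (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)
    h = alpha * h + v           # h_k = alpha h_{k-1} + v_k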


 \begin{displaymath}f({\bf H}) =
f({\bf h}_1)
\ \prod_{k=2}^n f({\bf h}_k\vert{\bf h}_{k-1}) .
\end{displaymath} (10)

We assume that the mean $\bf d$ of ${\bf v}_k$ is zero and that the initial channel coefficient vector, denoted ${\bf h}_0$, is known. So


 \begin{displaymath}{\bf h}_1=
\alpha{\bf h}_0 + {\bf v}_1 .
\end{displaymath} (11)

Substituting (2), (10) and (11) into (3), we obtain


 
\begin{eqnarray*}
f({\bf r}\vert{\bf A}) & = & E[f({\bf r}\vert{\bf H},{\bf A})] \\
 & = & \int_{{\bf H}} f({\bf r}\vert{\bf H},{\bf A}) f({\bf H}) \, d{\bf H} \\
 & \doteq & \int_{{\bf H}} \left( \prod_{k=1}^{n} e^{ - \frac {{\vert r_k-{\bf a}_k^{T}{\bf h}_k\vert}^{2}} {{\sigma}^{2}} } \right) f({\bf h}_1) \prod_{k=2}^n f({\bf h}_{k}\vert{\bf h}_{k-1}) \, d{\bf H}
\end{eqnarray*} (12)

We marginalize over the channel coefficients step by step. Integrating over ${\bf h}_1$, we obtain


 \begin{displaymath}I_1 =
\int_{{\bf h}_1} e^{- E_1 } d{\bf h}_1 ,
\end{displaymath} (13)

with

\begin{displaymath}E_1 =
({\bf h}_1 - \alpha {\bf h}_0)^{*T} {\bf C}^{-1} ({\bf h}_1 - \alpha {\bf h}_0)
+ ({\bf h}_2 - \alpha {\bf h}_1)^{*T} {\bf C}^{-1} ({\bf h}_2 - \alpha {\bf h}_1)
+ \frac {\vert r_1 - {\bf a}_1^T {\bf h}_1\vert^2 }{\sigma^2} .
\end{displaymath} (14)
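Evaluating (13) is a standard complex Gaussian integral; as a reminder (our notation, not part of the original text), completing the square gives, for Hermitian positive definite ${\bf G}^{-1}$,

\begin{displaymath}
\int e^{ - {\bf h}^{*T} {\bf G}^{-1} {\bf h} + {\bf b}^{*T} {\bf h} + {\bf h}^{*T} {\bf b} } \, d{\bf h}
\doteq \vert{\bf G}\vert \, e^{ {\bf b}^{*T} {\bf G} {\bf b} } .
\end{displaymath}

Applying this with the quadratic and linear terms in ${\bf h}_1$ collected from (14) produces the form below, with the ${\bf h}_2$-dependent remainder absorbed into ${\phi}_1({\bf h}_2)$.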

After integration over ${\bf h}_1$,


 \begin{displaymath}I_1 \doteq
e^{{{\bf {q}}_1}^{*T} {\bf {G}}_1 {\bf {q}}_1} \vert{\bf {G}}_1\vert e^{-{\phi}_1({\bf h}_2)}
\end{displaymath} (15)

with
 
\begin{eqnarray*}
{\bf G}_1 & = & \left[ \frac {{\bf a}_1^* {\bf a}_1^T} {\sigma^2} + (1 + {\vert\alpha\vert}^2) {\bf C}^{-1} \right]^{-1} \\
{\bf q}_1 & = & \frac {r_1 {\bf a}_1^*} {\sigma^2} + \alpha {\bf C}^{-1} {\bf h}_0
\end{eqnarray*} (16)

The term $e^{-{\phi}_1({\bf h}_2)}$ in (15) is a function of ${\bf h}_2$ and will be involved in the integral over ${\bf h}_2$. The remaining terms in (15) result from the integration over ${\bf h}_1$; they depend only on the input sequence and will be used to compute the maximum-likelihood probability.
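For concreteness, here is a small numeric sketch of (16) in Python; all values are assumed for illustration, and ${\bf a}_1^* {\bf a}_1^T$ is an outer product (a column times a row):

import numpy as np

m = 2
sigma2 = 0.1                          # noise variance sigma^2 (assumed)
alpha = 0.95                          # Markov factor, taken real (assumed)
Cinv = np.linalg.inv(0.01 * np.eye(m))
h0 = np.array([1.0 + 0j, 0.5 + 0j])   # known initial coefficients h_0
a1 = np.array([1.0 + 0j, -1.0 + 0j])  # hypothesized data vector a_1
r1 = a1 @ h0                          # received sample r_1 (noiseless, for illustration)

# (16): a_1^* a_1^T computed with np.outer
G1 = np.linalg.inv(np.outer(a1.conj(), a1) / sigma2 + (1 + abs(alpha)**2) * Cinv)
q1 = r1 * a1.conj() / sigma2 + alpha * Cinv @ h0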

The general form of the result after integrating over ${\bf h}_k$ ($k = 2, \ldots, n-1$) is


 \begin{displaymath}I_k \doteq
e^{{{\bf {q}}_k}^{*T} {\bf {G}}_k {\bf {q}}_k} \vert{\bf {G}}_k\vert e^{-{\phi}_k({\bf h}_{k+1})} ,
\end{displaymath} (17)

with
 
\begin{eqnarray*}
{\bf G}_k & = & \left[ \frac {{\bf a}_k^* {\bf a}_k^T} {\sigma^2} + {\bf C}^{-1} + {\vert\alpha\vert}^2 \left( {\bf C}^{-1} - {\bf C}^{-1} {\bf G}_{k-1} {\bf C}^{-1} \right) \right]^{-1} \\
{\bf q}_k & = & \frac {r_k {\bf a}_k^*} {\sigma^2} + \alpha {\bf C}^{-1} {\bf G}_{k-1} {\bf q}_{k-1} .
\end{eqnarray*} (18)

For the integral over ${\bf h}_n$,


 \begin{displaymath}I_n \doteq
e^{{{\bf {q}}_n}^{*T} {\bf {G}}_n {\bf {q}}_n} \vert{\bf {G}}_n\vert
\end{displaymath} (19)

with
 
\begin{eqnarray*}
{\bf G}_n & = & \left[ \frac {{\bf a}_n^* {\bf a}_n^T} {\sigma^2} + {\bf C}^{-1} - {\vert\alpha\vert}^2 {\bf C}^{-1} {\bf G}_{n-1} {\bf C}^{-1} \right]^{-1} \\
{\bf q}_n & = & \frac {r_n {\bf a}_n^*} {\sigma^2} + \alpha {\bf C}^{-1} {\bf G}_{n-1} {\bf q}_{n-1}
\end{eqnarray*} (20)

After integrating over all the channel coefficients, we obtain the likelihood function


\begin{displaymath}f({\bf r}\vert{\bf A}) \doteq
\prod_{k=1}^n e^{{{\bf q}_k}^{*T} {\bf G}_k {\bf q}_k} \vert{\bf G}_k\vert .
\end{displaymath} (21)
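A minimal Python sketch of the resulting cost, under our reading of (16), (18), (20) and (21) (the function name sequence_cost and its interface are our own, and constant factors hidden by the dotted equalities are dropped):

import numpy as np

def sequence_cost(A, r, C, alpha, sigma2, h0):
    # Negative log-likelihood -ln f(r|A) for one hypothesized symbol
    # matrix A = [a_1 ... a_n] (columns), up to constant factors.
    m, n = A.shape
    Cinv = np.linalg.inv(C)
    cost, Gprev, qprev = 0.0, None, None
    for k in range(n):
        ak, rk = A[:, k], r[k]
        outer = np.outer(ak.conj(), ak) / sigma2
        if k == 0:                                    # (16)
            Ginv = outer + (1 + abs(alpha)**2) * Cinv
            q = rk * ak.conj() / sigma2 + alpha * Cinv @ h0
        else:
            carried = Cinv @ Gprev @ Cinv
            if k < n - 1:                             # (18)
                Ginv = outer + Cinv + abs(alpha)**2 * (Cinv - carried)
            else:                                     # (20)
                Ginv = outer + Cinv - abs(alpha)**2 * carried
            q = rk * ak.conj() / sigma2 + alpha * Cinv @ Gprev @ qprev
        G = np.linalg.inv(Ginv)
        # accumulate -[ q_k^{*T} G_k q_k + ln|G_k| ] from (21)
        cost -= (q.conj() @ G @ q).real + np.linalg.slogdet(G)[1]
        Gprev, qprev = G, q
    return cost

An exhaustive search would evaluate sequence_cost for every candidate symbol sequence, which motivates the suboptimum algorithm described next.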

Taking the negative natural logarithm of the above equation yields the time-recursive form of the cost function. However, the expressions for ${\bf G}_k$ and ${\bf q}_k$ show that the transition costs depend on all the previous states, so the standard Viterbi algorithm cannot be applied here; the optimum results can only be obtained by exhaustive search. A suboptimum algorithm, which we call GM, may nevertheless be derived, and we introduce it as follows.

Observe equations (16), (18) and (21). The parameters used in the cost computation, such as ${\bf q}_k$ and ${\bf G}_k$, are actually functions of all the previous transmitted data symbols. This observation suggests using per-survivor processing (PSP) [6] to decrease the computational complexity. It is well known that if the quantities required to compute the MLSE are unknown, we may estimate them by data-aided estimation. In PSP these quantities are estimated in exactly this way, but the sequence used in the data-aided estimation of the transition cost for a particular state transition is the sequence associated with the survivor path before the current transition. The path obtained this way should approach the optimum results due to reduced error propagation.

Furthermore, our algorithm applies the generalized Viterbi algorithm (GVA) [8]: we compute the ${\bf q}_k$ and ${\bf G}_k$ associated with the survivors of the trellis and use them to compute the current transition cost and to find the survivors at the next stage. At each state of each stage we keep more than one survivor, and we denote this fixed number of survivors by L. As L increases, the algorithm approaches the optimum results. This is a suboptimum but effective method for adaptive MLSE.
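The sketch below illustrates one possible realization of this PSP/GVA scheme, not the paper's exact algorithm: binary symbols, trellis states formed by the last m-1 symbols, symbols before time 1 assumed zero, and L survivors kept per state, each carrying its own (cost, ${\bf G}_k$, ${\bf q}_k$) triple.

import numpy as np

def gm_step(k, n, ak, rk, Gprev, qprev, Cinv, alpha, sigma2, h0):
    # one step of the recursions (16)/(18)/(20) plus the cost increment from (21)
    outer = np.outer(ak.conj(), ak) / sigma2
    if k == 0:
        Ginv = outer + (1 + abs(alpha)**2) * Cinv
        q = rk * ak.conj() / sigma2 + alpha * Cinv @ h0
    else:
        carried = Cinv @ Gprev @ Cinv
        extra = abs(alpha)**2 * (Cinv - carried) if k < n - 1 \
                else -abs(alpha)**2 * carried
        Ginv = outer + Cinv + extra
        q = rk * ak.conj() / sigma2 + alpha * Cinv @ Gprev @ qprev
    G = np.linalg.inv(Ginv)
    return G, q, -(q.conj() @ G @ q).real - np.linalg.slogdet(G)[1]

def gm_gva(r, Cinv, alpha, sigma2, h0, m, L=2, symbols=(1 + 0j, -1 + 0j)):
    n = len(r)
    # state -> list of survivors (cost, G, q, symbol path); newest symbol last
    survivors = {(): [(0.0, None, None, ())]}
    for k in range(n):
        new = {}
        for survs in survivors.values():
            for cost, G, q, path in survs:
                for x in symbols:
                    full = path + (x,)
                    ak = np.array(((0j,) * m + full)[-m:])  # last m symbols, zero-padded
                    G2, q2, d = gm_step(k, n, ak, r[k], G, q, Cinv, alpha, sigma2, h0)
                    state = full[-(m - 1):] if m > 1 else ()
                    new.setdefault(state, []).append((cost + d, G2, q2, full))
        # keep the best L survivors per state (the GVA pruning step)
        survivors = {s: sorted(lst, key=lambda t: t[0])[:L] for s, lst in new.items()}
    best = min((s for lst in survivors.values() for s in lst), key=lambda t: t[0])
    return best[3], best[0]   # detected sequence and its cost

With L = 1 this reduces to plain PSP; increasing L trades complexity for performance as described above.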

