next up previous
Next: Reduced Complexity Processing Up: Multiple User Maximum Likelihood Previous: Multiuser System Model

   
MLSE with Gauss-Markov Channels

For a fast time-varying channel with Gauss-Markov fading parameters, let $\alpha$ represent a complex first-order Markov factor such that $\vert\alpha\vert = e^{-\omega T}$ where T is the symbol period and $\frac {\omega}{\pi}$ is the Doppler spread. Let the time-varying channel coefficients follow a Gauss-Markov distribution such that, at time i, ${\bf h}_i = \alpha {\bf h}_{i-1} + {\bf v}_i$, where ${\bf v}_i$ is complex, white, and Gaussian with zero mean and covariance ${\bf C}$. In this case, $f({\bf h}_i\vert{\bf h}_{i-1})$ is Gaussian with mean $\alpha {\bf h}_{i-1}$ and covariance ${\bf C}$. $f({\bf H}^n)$ can be written as

 \begin{displaymath}f ({\bf H}^n) =
~ \prod_{i=1}^n
f({\bf h}_i\vert{\bf h}_{i-1}) ~ .
\end{displaymath} (11)
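The Gauss-Markov fading model above is easy to simulate directly. A minimal numpy sketch follows; all parameter values (the number of coefficients m, the symbol period T, the Doppler spread, and the increment covariance C) are illustrative assumptions, not values from the text.

```python
import numpy as np

# Minimal sketch of the Gauss-Markov fading model h_i = alpha*h_{i-1} + v_i.
# All parameter values below are illustrative assumptions.
rng = np.random.default_rng(0)

m = 2                        # number of channel coefficients
T = 1e-4                     # symbol period
doppler = 100.0              # Doppler spread omega/pi, so omega = pi * doppler
alpha = np.exp(-np.pi * doppler * T)   # |alpha| = e^{-omega T}

C = 0.01 * np.eye(m)         # covariance of the white Gaussian increment v_i
Lc = np.linalg.cholesky(C)

n = 1000
h = np.zeros((n + 1, m), dtype=complex)
h[0] = np.ones(m)            # known initial coefficients h_0
for i in range(1, n + 1):
    # circularly-symmetric complex Gaussian v_i with covariance C
    v = Lc @ (rng.standard_normal(m) + 1j * rng.standard_normal(m)) / np.sqrt(2)
    h[i] = alpha * h[i - 1] + v
```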

We assume the initial channel coefficients are known and denote them ${\bf h}_0$. Then

 \begin{displaymath}f({\bf r}^n\vert{\bf A}^n)
\doteq
~ \prod_{i=1}^n
\int_{{\bf h}_i}
e^{ - Q_i }
d {\bf h}_i ~ ,
\end{displaymath} (12)

where $Q_i = \vert\vert {\bf r}_i - {\bf A}_i {\bf h}_i \vert\vert^2 / \sigma^2
+ ({\bf h}_i - \alpha {\bf h}_{i-1})^H {\bf C}^{-1}
({\bf h}_i - \alpha {\bf h}_{i-1})$. We can marginalize over the channel coefficient distribution step by step. The integral over ${\bf h}_1$ is

 \begin{displaymath}I_1 =
\int_{{\bf h}_1} e^{- E_1 } d{\bf h}_1 ~ ,
\end{displaymath} (13)

with
 
\begin{displaymath}E_1 =
({\bf h}_1 - \alpha {\bf h}_0)^H
{\bf C}^{-1}
({\bf h}_1 - \alpha {\bf h}_0)
+ ({\bf h}_2 - \alpha {\bf h}_1)^H
{\bf C}^{-1}
({\bf h}_2 - \alpha {\bf h}_1)
+ \frac {\vert\vert {\bf r}_1 - {\bf A}_1 {\bf h}_1 \vert\vert^2}
{\sigma^2} ~ .
\end{displaymath} (14)

$E_1$ can be reduced, dropping constant terms, to

\begin{displaymath}E_1 \doteq
({\bf h}_1 - {\bf g}_1)^H
{\bf G}_1^{-1}
({\bf h}_1 - {\bf g}_1)
- {\bf q}_1^H {\bf G}_1 {\bf q}_1 + {\phi}_1({\bf h}_2)
\end{displaymath} (15)

with
 
\begin{displaymath}
\begin{array}{rcl}
{\bf G}_1 & = & \left[
\frac {{\bf A}_1^H {\bf A}_1}
{\sigma^2}
+ (1 + {\vert\alpha\vert}^2) {\bf C}^{-1}
\right]^{-1} \\
{\bf q}_1 & = & \frac {{\bf A}_1^H {\bf r}_1}
{\sigma^2} + \alpha {\bf C}^{-1} {\bf h}_0 \\
{\bf g}_1 & = & {\bf G}_1 \left( {\bf q}_1
+ \alpha^* {\bf C}^{-1} {\bf h}_2 \right) ~ .
\end{array}
\end{displaymath} (16)

After integration over ${\bf h}_1$,

 \begin{displaymath}I_1 \doteq
\vert{\bf G}_1\vert e^{ {\bf q}_1^H {\bf G}_1 {\bf q}_1}
e^{ - {\phi}_1({\bf h}_2)} ~ ~ .
\end{displaymath} (17)
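The constant factor in (17) comes from the standard complex Gaussian integral, a step the derivation leaves implicit: for an $m$-dimensional complex vector ${\bf h}$ and Hermitian positive definite ${\bf G}$,

\begin{displaymath}
\int e^{ - ({\bf h} - {\bf g})^H {\bf G}^{-1} ({\bf h} - {\bf g}) } \, d{\bf h}
~ = ~ \pi^m \, \vert{\bf G}\vert ~ ,
\end{displaymath}

so integrating $e^{-E_1}$ over ${\bf h}_1$ leaves $\vert{\bf G}_1\vert$ times the ${\bf h}_1$-independent factors of (15); the constant $\pi^m$ is absorbed by the $\doteq$ notation.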

The term $e^{-{\phi}_1({\bf h}_2)}$ in (17) is a function of ${\bf h}_2$ and carries over into the integral over ${\bf h}_2$. The remaining factors in (17) depend only on the input sequence and contribute directly to the likelihood. The integral over ${\bf h}_2$ is

 \begin{displaymath}I_2 =
\int_{{\bf h}_2} e^{- E_2 } d{\bf h}_2
\end{displaymath} (18)

with
 
\begin{displaymath}
\begin{array}{rcl}
E_2 & = & ({\bf h}_3 - \alpha {\bf h}_2)^{H}
{\bf C}^{-1}
({\bf h}_3 - \alpha {\bf h}_2)
+ \frac {\vert\vert {\bf r}_2 - {\bf A}_2 {\bf h}_2 \vert\vert^2 }
{\sigma^2} \\
& & - ~ \vert\alpha\vert^2 {\bf h}_2^{H} {\bf C}^{-1} {\bf G}_1 {\bf C}^{-1} {\bf h}_2
- \alpha^* {\bf q}_1^{H} {\bf G}_1 {\bf C}^{-1} {\bf h}_2 \\
& & - ~ \alpha {\bf h}_2^{H} {\bf C}^{-1} {\bf G}_1 {\bf q}_1
+ {\bf h}_2^{H} {\bf C}^{-1} {\bf h}_2
\end{array}
\end{displaymath} (19)

$E_2$ can be reduced, dropping constant terms, to

\begin{displaymath}E_2 \doteq
({\bf h}_2 - {\bf g}_2)^{H}
{\bf G}_2^{-1}
({\bf h}_2 - {\bf g}_2)
- {\bf q}_2^{H} {\bf G}_2 {\bf q}_2
+ {\phi}_2({\bf h}_3)
\end{displaymath} (20)

with
 
\begin{displaymath}
\begin{array}{rcl}
{\bf G}_2 & = & \left[
\frac {{\bf A}_2^H {\bf A}_2}
{\sigma^2}
+ {\bf C}^{-1}
+ \vert\alpha\vert^2 \left(
{\bf C}^{-1} - {\bf C}^{-1} {\bf G}_1 {\bf C}^{-1}
\right)
\right]^{-1} \\
{\bf q}_2 & = & \frac {{\bf A}_2^H {\bf r}_2}
{\sigma^2} + \alpha {\bf C}^{-1} {\bf G}_1 {\bf q}_1 \\
{\bf g}_2 & = & {\bf G}_2 \left( {\bf q}_2
+ \alpha^* {\bf C}^{-1} {\bf h}_3 \right) ~ .
\end{array}
\end{displaymath} (21)

After integration over ${\bf h}_2$ we have:

 \begin{displaymath}I_2 \doteq
\vert{\bf {G}}_2\vert
e^{{{\bf {q}}_2}^{H} {\bf {G}}_2 {\bf {q}}_2}
e^{-{\phi}_2({\bf h}_3)}
\end{displaymath} (22)

The last term in (22) will be involved in the integral over ${\bf h}_3$.

The general form of the result after integration over ${\bf h}_i$ ($i = 2, \ldots, n-1$) is

 \begin{displaymath}I_i \doteq
\vert{\bf {G}}_i\vert e^{{{\bf {q}}_i}^{H} {\bf {G}}_i {\bf {q}}_i}
e^{-{\phi}_i({\bf h}_{i+1})} ,
\end{displaymath} (23)

with
 
\begin{displaymath}
\begin{array}{rcl}
{\bf G}_i & = & \left[
\frac {{\bf A}_i^H {\bf A}_i}
{\sigma^2}
+ {\bf C}^{-1}
+ \vert\alpha\vert^2 \left(
{\bf C}^{-1} - {\bf C}^{-1} {\bf G}_{i-1} {\bf C}^{-1}
\right)
\right]^{-1} \\
{\bf q}_i & = & \frac {{\bf A}_i^H {\bf r}_i}
{\sigma^2}
+ \alpha {\bf C}^{-1} {\bf G}_{i-1} {\bf q}_{i-1} ~ .
\end{array}
\end{displaymath} (24)
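The forward recursion for ${\bf G}_i$ and ${\bf q}_i$ can be sketched in numpy as below; it specializes (16) for the first step, (24) for the middle steps, and (26) for the last. The inputs `A` (a list of per-symbol matrices ${\bf A}_i$) and `r` (the corresponding observations) are hypothetical placeholders, and the sketch assumes $n \ge 2$.

```python
import numpy as np

def forward_recursion(A, r, sigma2, alpha, C_inv, h0):
    """Sketch of the recursion (16), (24), (26); returns G_i, q_i for i=1..n.

    A, r are lists of the per-symbol matrices A_i and observations r_i
    (hypothetical inputs for illustration); assumes n = len(A) >= 2.
    """
    n = len(A)
    G, q = [], []
    # i = 1: G_1 = [A_1^H A_1/sigma^2 + (1+|alpha|^2) C^{-1}]^{-1},
    #        q_1 = A_1^H r_1/sigma^2 + alpha C^{-1} h_0            (eq. 16)
    G1 = np.linalg.inv(A[0].conj().T @ A[0] / sigma2 + (1 + abs(alpha) ** 2) * C_inv)
    q1 = A[0].conj().T @ r[0] / sigma2 + alpha * C_inv @ h0
    G.append(G1); q.append(q1)
    for i in range(1, n):
        # Common part of (24) and (26); the middle steps i = 2..n-1 add the
        # +|alpha|^2 C^{-1} term for the transition to h_{i+1}, the last omits it.
        Gi_inv = A[i].conj().T @ A[i] / sigma2 + C_inv \
                 - abs(alpha) ** 2 * C_inv @ G[-1] @ C_inv
        if i < n - 1:
            Gi_inv += abs(alpha) ** 2 * C_inv
        Gi = np.linalg.inv(Gi_inv)
        qi = A[i].conj().T @ r[i] / sigma2 + alpha * C_inv @ G[-1] @ q[-1]
        G.append(Gi); q.append(qi)
    return G, q
```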

For the integral over ${\bf h}_n$,

 \begin{displaymath}I_n \doteq
\vert{\bf {G}}_n\vert e^{{{\bf {q}}_n}^{H} {\bf {G}}_n {\bf {q}}_n}
\end{displaymath} (25)

with
 
\begin{displaymath}
\begin{array}{rcl}
{\bf G}_n & = & \left[
\frac {{\bf A}_n^H {\bf A}_n}
{\sigma^2}
+ {\bf C}^{-1}
- \vert\alpha\vert^2 {\bf C}^{-1} {\bf G}_{n-1} {\bf C}^{-1}
\right]^{-1} \\
{\bf q}_n & = & \frac {{\bf A}_n^H {\bf r}_n}
{\sigma^2}
+ \alpha {\bf C}^{-1} {\bf G}_{n-1} {\bf q}_{n-1} ~ .
\end{array}
\end{displaymath} (26)

After the integration over all the channel coefficients, we have the likelihood function

 \begin{displaymath}f({\bf r^n}\vert{\bf A^n}) \doteq
\prod_{i=1}^n \vert{\bf G}_i\vert e^{{{\bf q}_i}^{H} {\bf G}_i {\bf q}_i} .
\end{displaymath} (27)

Taking the negative natural logarithm of (27) yields the time-recursive form of the cost function

 \begin{displaymath}- \log f({\bf r^n}\vert{\bf A^n}) \doteq ~ - ~
\sum_{i=1}^n \left[ {{\bf q}_i}^{H} {\bf G}_i {\bf q}_i +
\log \vert{\bf G}_i\vert \right].
\end{displaymath} (28)

From the expressions for ${\bf G}_i$ and ${\bf q}_i$ we see that the transition costs depend on all previous states, so the standard Viterbi algorithm cannot be applied here; the optimum result can only be obtained by exhaustive search over the input sequences.
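To illustrate the exhaustive search, the sketch below evaluates the cost (28) for every candidate sequence in a toy scenario: a scalar channel (one coefficient, so ${\bf C}$ reduces to a scalar $c$), BPSK symbols, and a noiseless observation. The scenario values (`sigma2`, `c`, `alpha`, `h0`, the sequence) are hypothetical, chosen only to make the example easy to check.

```python
import numpy as np
from itertools import product

def sequence_cost(a, r, sigma2, alpha, c, h0):
    """-log f(r^n|a^n) up to constants: -sum_i [ |q_i|^2 G_i + log G_i ].

    Scalar specialization of the recursion (16), (24), (26); assumes n >= 2.
    """
    n = len(a)
    cost, G_prev, q_prev = 0.0, 0.0, 0.0
    for i in range(n):
        Gi_inv = abs(a[i]) ** 2 / sigma2 + 1.0 / c
        if i == 0:
            Gi_inv += abs(alpha) ** 2 / c                # (1+|alpha|^2)/c, eq. (16)
            qi = np.conj(a[i]) * r[i] / sigma2 + alpha * h0 / c
        else:
            Gi_inv -= abs(alpha) ** 2 * G_prev / c ** 2  # -|alpha|^2 G_{i-1}/c^2
            if i < n - 1:
                Gi_inv += abs(alpha) ** 2 / c            # middle steps only, eq. (24)
            qi = np.conj(a[i]) * r[i] / sigma2 + alpha * G_prev * q_prev / c
        Gi = 1.0 / Gi_inv
        cost -= abs(qi) ** 2 * Gi + np.log(Gi)
        G_prev, q_prev = Gi, qi
    return cost

# Noiseless toy scenario (hypothetical values) so the result is easy to verify.
sigma2, c, alpha, h0 = 0.01, 0.001, 0.99, 1.0
true_seq = (1, -1, -1, 1)
h, r = h0, []
for s in true_seq:
    h = alpha * h            # deterministic fading trajectory (no increment)
    r.append(s * h)          # noiseless observation

# Exhaustive search over all 2^n BPSK sequences; no Viterbi pruning applies.
best = min(product([1, -1], repeat=len(true_seq)),
           key=lambda a: sequence_cost(a, r, sigma2, alpha, c, h0))
```

Since the channel anchor ${\bf h}_0$ is known, the cost resolves the BPSK sign ambiguity through the $\alpha h_0 / c$ term in $q_1$, and the search recovers the transmitted sequence in this noiseless setting.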


Rick Perry
2001-11-03