
Communication System Model

In the discrete-time FIR model of a fast-fading, frequency-selective noisy communications channel, for the received data sequence up to time $n$, the complex received datum $r_k$ at time $k$ is given by

 \begin{displaymath}
r_k = {\bf a}_k^T {\bf h}_k + n_k , ~ ~ ~ ~ ~ k = 1,\ldots,n ~ ,
\end{displaymath} (1)

where ${\bf a}_k^T$ is a complex row vector containing the transmitted data $\{ a_{k-i+1}, i=1,\ldots,M \}$, $M$ is the FIR channel length, ${\bf h}_k$ is a complex column vector containing the channel impulse response coefficients at time $k$, and $n_k$ is the white complex Gaussian noise at time $k$ with variance $\sigma^2$. Let ${\bf H} = [{\bf h}_1, \ldots , {\bf h}_n]$ represent the matrix of channel coefficient vectors over time, arranged by columns, and let ${\bf A} = [{\bf a}_1, \ldots , {\bf a}_n]^T$ represent the matrix of transmitted data arranged by rows. Also let ${\bf r} = [ r_1 , \ldots , r_n ]^T$ represent the column vector of received data, and ${\bf n} = [ n_1 , \ldots , n_n ]^T$ the column vector of noise over time. With this notation, the probability density function of the received data, given ${\bf H}$ and ${\bf A}$, is

 \begin{displaymath}
f({\bf r}\vert{\bf H},{\bf A}) =
\frac {1} {(\pi \sigma^2)^n}
\exp \left( - \sum_{k=1}^{n}
\frac {\vert r_k-{\bf a}_k^T{\bf h}_k\vert^2}
{\sigma^2} \right) .
\end{displaymath} (2)

The basic equations above describe the communication system under consideration, based only on the fact that the additive noise is white and Gaussian, without making any additional assumptions about the properties of the FIR channel itself.
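As a concrete illustration, the model (1) and the density (2) can be simulated directly. The sketch below uses BPSK symbols and an i.i.d. Rayleigh-fading channel purely for illustration; the modulation, the channel statistics, and all sizes are assumptions, not part of the model above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, M, sigma2 = 50, 3, 0.1          # illustrative sizes and noise variance

# BPSK transmitted symbols (illustrative choice of alphabet)
a = rng.choice([-1.0, 1.0], size=n).astype(complex)

# a_k = [a_k, a_{k-1}, ..., a_{k-M+1}]^T, zero-padded before time 1
A = np.zeros((n, M), dtype=complex)
for k in range(n):
    for i in range(M):
        if k - i >= 0:
            A[k, i] = a[k - i]

# time-varying channel vectors h_k; i.i.d. Rayleigh fading assumed here
H = (rng.standard_normal((M, n)) + 1j * rng.standard_normal((M, n))) / np.sqrt(2)

# circularly symmetric complex Gaussian noise with variance sigma2
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# received data per (1): r_k = a_k^T h_k + n_k
r = np.einsum('km,mk->k', A, H) + noise

def log_density(r, A, H, sigma2):
    """Log of (2): -n log(pi sigma^2) - sum_k |r_k - a_k^T h_k|^2 / sigma^2."""
    resid = np.abs(r - np.einsum('km,mk->k', A, H)) ** 2
    return -len(r) * np.log(np.pi * sigma2) - resid.sum() / sigma2
```

With the true $\bf H$ and $\bf A$, `log_density(r, A, H, sigma2)` evaluates the exponent and normalizing constant of (2) in one call; the `einsum` computes ${\bf a}_k^T {\bf h}_k$ for every $k$ at once.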

To minimize BER, the MAP criterion should be used, i.e. we must find the ${\bf A}$ that maximizes $f({\bf A}\vert{\bf r})$. Using Bayes' rule,

 \begin{displaymath}f({\bf A}\vert{\bf r}) =
\frac {f({\bf r}\vert{\bf A})f({\bf A})}
{f({\bf r})}
\doteq f({\bf r}\vert{\bf A})f({\bf A })
\end{displaymath} (3)

(The notation $\doteq$ means ``equal up to an additive and/or multiplicative constant'' for equations in which such constants have no effect for optimization purposes.) In (3), the term $f({\bf r})$ is ignored since it has no effect on the relative cost. MAP estimators minimize BER by maximizing $f({\bf r}\vert{\bf A})f({\bf A})$. If $f({\bf A})$ is unknown or assumed to be uniform, then the ML estimator, which maximizes the likelihood function $f({\bf r}\vert{\bf A})$, can be used instead. Referring to (2), if the channel is known, the Viterbi algorithm can be used directly to estimate the data sequence. If the channel is unknown, the dependence of $f({\bf r}\vert{\bf A})$ on the random channel coefficients is

 \begin{displaymath}
f({\bf r}\vert{\bf A}) =
E[f({\bf r}\vert{\bf H},{\bf A})] =
\int_{{\bf H}} f({\bf r}\vert{\bf H},{\bf A})
f({\bf H}) \, d{\bf H} ,
\end{displaymath} (4)

where $f({\bf r}\vert{\bf H},{\bf A})$ is given by (2).
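When the channel prior $f({\bf H})$ can be sampled, the expectation in (4) admits a simple Monte Carlo approximation: average (2) over independent channel draws. The following is only a numerical sketch under that assumption; the sampler interface `sample_H` is hypothetical and not part of the text.

```python
import numpy as np

def likelihood_mc(r, A, sigma2, sample_H, N=2000, rng=None):
    """Monte Carlo approximation of (4):
    f(r|A) = E_H[ f(r|H,A) ] ~= (1/N) sum_j f(r | H^(j), A),
    where H^(j) ~ f(H) is produced by the caller-supplied sampler
    `sample_H(rng)` (a hypothetical interface, assumed here)."""
    rng = rng or np.random.default_rng()
    n = len(r)
    total = 0.0
    for _ in range(N):
        H = sample_H(rng)                        # (M, n) draw from f(H)
        pred = np.einsum('km,mk->k', A, H)       # a_k^T h_k for each k
        total += np.exp(-np.sum(np.abs(r - pred) ** 2) / sigma2)
    return total / (N * (np.pi * sigma2) ** n)
```

For a degenerate prior that always returns the same ${\bf H}_0$, the average collapses to $f({\bf r}\vert{\bf H}_0,{\bf A})$ exactly, which provides a quick sanity check of the implementation.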

Given prior information on the distribution of the channel coefficient vector sequence ${\bf h}_k$, in the form of a joint probability density function $f( {\bf H} )$, (3) and (4) may be evaluated to derive the MAP sequence estimator.
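In principle, the resulting MAP estimator can be realized by exhaustively maximizing $f({\bf r}\vert{\bf A})f({\bf A})$ over all candidate sequences. The sketch below assumes a BPSK alphabet and caller-supplied evaluators for $f({\bf r}\vert{\bf A})$ (e.g. by Monte Carlo over $f({\bf H})$) and $f({\bf A})$; it is an exponential-cost stand-in for a practical search, not the estimator derived in this paper.

```python
import numpy as np
from itertools import product

def map_sequence(r, sigma2, M, f_r_given_A, prior_A):
    """Exhaustive MAP search per (3): maximize f(r|A) * f(A) over all
    BPSK sequences (assumed alphabet).  `f_r_given_A(r, A, sigma2)`
    evaluates (4) and `prior_A(a)` evaluates f(A); both are
    caller-supplied stand-ins.  Cost is 2^n -- keep n small."""
    n = len(r)
    best, best_val = None, -np.inf
    for cand in product([-1.0, 1.0], repeat=n):
        a = np.array(cand)
        # build the data matrix A with rows a_k^T = [a_k, ..., a_{k-M+1}]
        A = np.zeros((n, M))
        for k in range(n):
            for i in range(M):
                if k - i >= 0:
                    A[k, i] = a[k - i]
        val = f_r_given_A(r, A, sigma2) * prior_A(a)
        if val > best_val:
            best, best_val = a, val
    return best
```

With a known channel (a point-mass prior on ${\bf H}$) and a uniform $f({\bf A})$, this reduces to the ML search that the Viterbi algorithm performs efficiently.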


Rick Perry
2001-03-19