Communication System Model

In the discrete-time FIR model of a time-varying noisy communications channel with inter-symbol interference, for a block of $N$ received data values, the complex received data $r_k$ at time $k$ is given by:

\begin{displaymath}
r_k = {\bf a}_k^T {\bf h}_k + n_k , \; \; k = 1,\ldots,N ,
\end{displaymath} (1)

where ${\bf a}_k^T$ is a complex row vector containing the transmitted data $\{ a_{k-i+1} ,\ i=1,\ldots,L \}$, $L$ is the FIR channel length, ${\bf h}_k$ is a complex column vector containing the (unknown) channel impulse response coefficients at time $k$, and $n_k$ is the complex white Gaussian noise at time $k$, with variance $\sigma^2$. For $j \le 0$, the transmitted data $a_j$ may be known (e.g. all 0's), unknown, or estimated with associated probabilities from the end of a previous data block. If the channel coefficients are known to be constrained linearly, such as having a zero at DC, the constraints can be expressed in general by an underdetermined matrix equation:

\begin{displaymath}
{\bf F} {\bf h}_k = {\bf f} \; .
\end{displaymath} (2)

The constraint parameters ${\bf F}$ and ${\bf f}$ may be time-varying; the time subscript $k$ is omitted on them only to simplify the notation. For example, a zero at DC requires the channel taps to sum to zero, which corresponds to ${\bf F} = [1 \ 1 \ \cdots \ 1]$ and ${\bf f} = 0$. Equation (2) implies the following orthogonal decomposition of ${\bf h}_k$:

\begin{displaymath}
{\bf h}_k = {\bf M} {\bf p}_k + {\bf\bar{M}} {\bf\bar{p}} \; .
\end{displaymath} (3)
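
As a concrete illustration of (1), the following is a minimal numerical sketch that simulates one block of received data; it assumes BPSK transmitted data, a time-invariant channel (for simplicity only), and known all-zero data for $j \le 0$. The variable names and parameter values are illustrative assumptions, not part of the system model.

\begin{verbatim}
# Minimal sketch of the received-data model (1); assumptions:
# BPSK data, time-invariant channel, a_j = 0 for j <= 0.
import numpy as np

rng = np.random.default_rng(0)
N, L = 100, 3            # block length and FIR channel length
sigma2 = 0.1             # noise variance sigma^2

a = rng.choice([-1.0, 1.0], size=N) + 0j       # transmitted data a_1..a_N
a_full = np.concatenate([np.zeros(L - 1, dtype=complex), a])
h = rng.standard_normal(L) + 1j * rng.standard_normal(L)  # constant channel

r = np.empty(N, dtype=complex)
for k in range(N):
    a_k = a_full[k : k + L][::-1]      # a_k^T = [a_k, a_{k-1}, ..., a_{k-L+1}]
    n_k = np.sqrt(sigma2 / 2) * (rng.standard_normal()
                                 + 1j * rng.standard_normal())
    r[k] = a_k @ h + n_k               # r_k = a_k^T h_k + n_k
\end{verbatim}

A time-varying channel would simply use a different coefficient vector at each $k$.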

Using the singular-value decomposition of ${\bf F}$:

\begin{displaymath}
{\bf F} = {\bf U} {\bf S} {\bf V}^{*T}
= {\bf U} \, [{\bf S}_1 \; {\bf 0}] \, [{\bf V}_1 \; {\bf V}_0]^{*T} \; ,
\end{displaymath} (4)

with the matrices partitioned according to the rank of ${\bf F}$, we can then express ${\bf M}$, ${\bf\bar{M}}$, and ${\bf\bar{p}}$ as:

\begin{displaymath}
{\bf M} = {\bf V}_0 , \; \;
{\bf\bar{M}} = {\bf V}_1 , \; \;
{\bf\bar{p}} = {\bf S}_1^{-1} {\bf U}^{*T} {\bf f} \; ,
\end{displaymath} (5)

where the columns of ${\bf M}$ form an orthonormal basis for the null space of ${\bf F}$. Note that the constraint parameters and ${\bf M}$, ${\bf\bar{M}}$, and ${\bf\bar{p}}$ are assumed to be known, but ${\bf p}_k$ is unknown. Given values or a statistical description of ${\bf p}_k$, we can produce values or a statistical description of the channel coefficients ${\bf h}_k$ using (3). We will refer to ${\bf p}_k$ as the underlying channel coefficients.

Let ${\bf P} = [{\bf p}_1, \ldots , {\bf p}_N]$ denote the matrix of underlying channel coefficient vectors over time, arranged by columns, and let ${\bf A} = [{\bf a}_1, \ldots , {\bf a}_N]^T$ denote the matrix of transmitted data, arranged by rows. Also let ${\bf r} = [ r_1 , \ldots , r_N ]^T$ and ${\bf n} = [ n_1 , \ldots , n_N ]^T$ denote the column vectors of received data and noise over time. ${\bf A}$, ${\bf r}$, and ${\bf n}$ will appear in matrix-vector formulas, but the matrix notation for ${\bf P}$ is simply a notational convenience for the entire collection of underlying channel coefficients over the block. With this notation, the probability density function of the received data, given ${\bf P}$ and ${\bf A}$, is:

\begin{displaymath}
f({\bf r}\vert{\bf P},{\bf A}) =
\frac{1}{(\pi \sigma^2)^N} \,
e^{ - \sum_{k=1}^{N} \frac{\vert r_k - {\bf a}_k^T {\bf h}_k \vert^2}{\sigma^2} } \; .
\end{displaymath} (6)
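
The construction (3)-(5) can be reproduced numerically. The sketch below uses numpy's SVD for the zero-at-DC example given earlier and verifies that every choice of the underlying coefficients yields a channel vector satisfying (2); the variable names are illustrative.

\begin{verbatim}
# Sketch of (3)-(5): M = V_0, Mbar = V_1, pbar = S_1^{-1} U^{*T} f
# from the SVD of F, for the zero-at-DC example F = [1 ... 1], f = 0.
import numpy as np

L = 3
F = np.ones((1, L), dtype=complex)
f = np.zeros(1, dtype=complex)

U, s, Vh = np.linalg.svd(F)              # F = U S V^{*T}, per (4)
rank = int(np.sum(s > 1e-12))
V = Vh.conj().T
M_bar, M = V[:, :rank], V[:, rank:]      # V_1 and V_0, per (5)
p_bar = (U[:, :rank].conj().T @ f) / s[:rank]

rng = np.random.default_rng(1)
p = rng.standard_normal(L - rank) + 1j * rng.standard_normal(L - rank)
h = M @ p + M_bar @ p_bar                # decomposition (3)
assert np.allclose(F @ h, f)             # constraint (2) holds for any p
\end{verbatim}

For this constraint ${\bf M}$ has $L-1$ columns, and the feasible channels are exactly those whose taps sum to zero.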

For a non-time-varying channel, let ${\bf h} = {\bf h}_1 = \cdots = {\bf h}_N$ represent the single constant (and unknown) column vector of channel coefficients, and let ${\bf p} = {\bf p}_1 = \cdots = {\bf p}_N$ represent the underlying channel coefficient vector. In this case, using matrix-vector notation, (1) and (6) can be expressed more simply as:

\begin{displaymath}
{\bf r} = {\bf A h} + {\bf n} \; ,
\end{displaymath} (7)

and

\begin{displaymath}
f({\bf r}\vert{\bf P},{\bf A}) =
\frac{1}{(\pi \sigma^2)^N} \,
e^{ - \frac{\vert {\bf r} - {\bf A h} \vert^2}{\sigma^2} } \; .
\end{displaymath} (8)
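
The matrix-vector form lends itself to a compact sketch. Under the same illustrative assumptions as before (BPSK data, $a_j = 0$ for $j \le 0$), ${\bf A}$ is Toeplitz by the definition of ${\bf a}_k^T$, and the log of (8) is a single expression:

\begin{verbatim}
# Sketch of (7) and the log of the density (8), time-invariant channel.
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(2)
N, L, sigma2 = 100, 3, 0.1

a = rng.choice([-1.0, 1.0], size=N) + 0j        # transmitted data
A = toeplitz(a, np.concatenate([a[:1], np.zeros(L - 1)]))  # row k is a_k^T
h = rng.standard_normal(L) + 1j * rng.standard_normal(L)
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(N)
                           + 1j * rng.standard_normal(N))
r = A @ h + n                                   # equation (7)

# log f(r | P, A) from (8):
log_f = -N * np.log(np.pi * sigma2) - np.linalg.norm(r - A @ h) ** 2 / sigma2
\end{verbatim}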

The basic equations above describe the communication system under consideration using only the facts that the additive noise is white and Gaussian and that the channel coefficients are linearly constrained, without making any additional assumptions about the properties of the channel itself. In the succeeding sections, additional known properties of the random channel are considered, starting from the most general joint pdf case and proceeding to specific examples of underlying channel coefficient probability density functions.

