EM Algorithms for Source Localization

Here we show that the E-step reduces to computing sufficient statistics of $Q( \theta \vert \phi )$, which are the mean and covariance of ${\bf S}$ conditioned on ${\bf X}$ and $\phi$. This enables the use of an ML-like estimator of $\theta$ in the M-step, which employs the sufficient statistics provided in the E-step. From (8) and (3), the auxiliary function $Q( \theta \vert \phi )$ is
 
\begin{eqnarray*}
Q( \theta \vert \phi ) & \doteq &
E[~ - \sum_{k=1}^{N} \vert\vert {\bf x}_k - {\bf A} ( \theta ) {\bf s}_k \vert\vert^{2}
~ \vert ~ {\bf X}, \phi ] \\
& = & - \sum_{k=1}^{N}
E[ \vert\vert {\bf x}_k - {\bf A} ( \theta ) {\bf s}_k \vert\vert^{2}
~ \vert ~ {\bf X}, \phi ] \\
& = & - \sum_{k=1}^{N}
\int_{\bf S} \vert\vert {\bf x}_k - {\bf A} ( \theta ) {\bf s}_k \vert\vert^{2} \,
f({\bf S} \vert {\bf X}, \phi ) \,
d {\bf S} \; \; .
\end{eqnarray*} (11)

For each $k$, the conditional expectation over ${\bf S}$ is of the second-order function $\vert\vert {\bf x}_k - {\bf A} ( \theta ) {\bf s}_k \vert\vert^{2}$. Writing ${\bf s}_k$ as its conditional mean plus a zero-mean residual and taking the expectation term by term shows that this reduces to:

\begin{displaymath}Q( \theta \vert \phi ) \doteq
~ - \sum_{k=1}^{N} \left\{
\vert\vert {\bf x}_k - {\bf A} ( \theta ) {\bf g}_k \vert\vert^{2}
+ {\rm tr} \{ {\bf A} ( \theta ) {\bf G}_k {\bf A}^H ( \theta ) \} \right\}
\end{displaymath} (12)

where

 \begin{displaymath}{\bf g}_k \stackrel{\triangle}{=}
E[{\bf s}_k \vert {\bf X}, \phi ]
\end{displaymath} (13)

and

 \begin{displaymath}{\bf G}_k \stackrel{\triangle}{=}
E[({\bf s}_k - {\bf g}_k) ({\bf s}_k - {\bf g}_k)^{H}
\vert {\bf X}, \phi ]
\end{displaymath} (14)

are, respectively, the mean and covariance of ${\bf s}_k$ conditioned on ${\bf X}$ and $\phi$. The key point is that, to evaluate (12), we need only these conditional means and covariances, computed using the distribution of (9); we will derive them for specific cases of $f({\bf S})$ subsequently. In general, the noise variance $\sigma^2$ is needed to compute the ${\bf g}_k$ and ${\bf G}_k$. The EM algorithm is as described generally in Section 3 above, where the E-step reduces simply to the computation of the ${\bf g}_k$ and the ${\bf G}_k$. To initialize $\phi$ in step 1, we can initialize ${\bf g}_k$ and ${\bf G}_k$, $k = 1 , \ldots , N$, maximize (12) to estimate $\theta$, and then set $\phi = \theta$. To initialize and estimate ${\bf g}_k$ and ${\bf G}_k$ in steps 1 and 2 of the algorithm, we need to know the specific form of $f({\bf S})$.
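As a concrete illustration, the iteration above can be sketched numerically. The sketch below ASSUMES a zero-mean Gaussian prior $f({\bf S})$ with per-snapshot covariance ${\bf P}$ (so that ${\bf g}_k$ and ${\bf G}_k$ follow from standard Gaussian conditioning) and a hypothetical single-source uniform-linear-array model for ${\bf A}(\theta)$; none of these modeling choices come from the text, which derives the sufficient statistics for specific forms of $f({\bf S})$ later.

```python
import numpy as np

rng = np.random.default_rng(0)

m, N = 8, 200                 # sensors, snapshots
sigma2 = 0.1                  # noise variance (assumed known, as in the text)
P = np.eye(1)                 # ASSUMED Gaussian prior covariance of s_k

def steering(theta):
    """Hypothetical ULA steering vector A(theta), half-wavelength spacing."""
    return np.exp(1j * np.pi * np.sin(theta) * np.arange(m))[:, None]

# Simulate snapshots x_k = A(theta_true) s_k + noise.
theta_true = 0.3
S = (rng.standard_normal((1, N)) + 1j * rng.standard_normal((1, N))) / np.sqrt(2)
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((m, N))
                               + 1j * rng.standard_normal((m, N)))
X = steering(theta_true) @ S + noise

def e_step(phi):
    """E-step: conditional mean g_k and covariance G_k of s_k given X, phi."""
    A = steering(phi)
    C = A @ P @ A.conj().T + sigma2 * np.eye(m)   # covariance of x_k
    W = P @ A.conj().T @ np.linalg.inv(C)         # Wiener gain (Gaussian prior)
    g = W @ X                                     # g_k, one column per snapshot
    G = P - W @ A @ P                             # G_k (same for every k here)
    return g, G

def q_value(theta, g, G):
    """Q(theta|phi) of (12): -sum_k { ||x_k - A g_k||^2 + tr(A G_k A^H) }."""
    A = steering(theta)
    resid = X - A @ g
    return -(np.sum(np.abs(resid) ** 2)
             + N * np.real(np.trace(A @ G @ A.conj().T)))

def m_step(g, G, grid):
    """M-step: maximize Q over a grid of candidate theta values."""
    return grid[np.argmax([q_value(t, g, G) for t in grid])]

grid = np.linspace(-np.pi / 2, np.pi / 2, 721)
phi = 0.0                     # crude initialization of phi (step 1)
for _ in range(10):           # alternate E-step and M-step, then set phi = theta
    g, G = e_step(phi)
    phi = m_step(g, G, grid)
# phi now approximates theta_true
```

The grid maximization in the M-step is only a stand-in for whatever ML-like estimator of $\theta$ is used in practice; the point of the sketch is that the E-step supplies just the ${\bf g}_k$ and ${\bf G}_k$, exactly as in the text.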

 
Rick Perry
2000-03-16