Volume 2, Number 2, pp. 200-239, 2000.    


 



 

 

Methods for Determination of Electrode Positions in
Tomographic Images

I. H. de Boer, F. B. Sachse, S. Mang, and O. Dössel

Institute of Biomedical Engineering, University of Karlsruhe, 76128 Karlsruhe, Germany

Correspondence: idb@ibt.etec.uni-karlsruhe.de


Abstract. Mapping of electrical endocardial activity is an important task for cardiac diagnosis and surgical treatment planning. Different kinds of catheters measure this activity with a limited number of electrodes. In recent years an increasing number of mapping systems has come into clinical routine. Various systems have been introduced and discussed in the literature.

This work deals with the localization of catheter electrodes in the heart with advanced techniques of digital image processing. The catheters - developed by various manufacturers - differ in shape, handling and number of electrodes. They are specified and presented in detail. Digital image analysis techniques like filters, the Fourier and the Hough transformation form the background and basics of this work. The main part describes the methods for the detection of the electrodes and the catheter strings. With these methods it is possible to set up computer models for each catheter. The computer models can be used, e.g., in numerical field calculations together with medical tomographic datasets.

Keywords: Catheter, Localization, Calibration, X-Ray, CT, Basket Catheter, Balloon Catheter


 

Introduction

Medical examinations of electrical cardiac activity are necessary in the field of cardiac diagnosis and surgical treatment planning. An important examination is mapping the endocardial electrical potentials. This kind of measurement involves catheters with a limited number of electrodes. In recent years an increasing number of mapping systems has come into clinical routine. Various systems have been introduced and discussed in the literature [1, 2, 3, 4, 5]. These systems have local coordinate or reference systems, such as the catheter itself or landmarks in the heart.

This work introduces different digital image processing methods to localize three types of catheters differing in shape, handling and number of electrodes. The first category covers one string catheters. They have up to n electrodes in a row. Secondly, methods to localize basket catheters are presented. These catheters include eight strings with a total of 64 electrodes. Each string contains eight electrodes. One string has an additional marker and another string has two additional markers. The third category describes techniques to determine the position of balloon catheters with 64 electrodes placed on a grid. Two further electrodes are fixed on the catheter shaft to which the grid is attached. A balloon inside the grid can be filled with a radiopaque fluid or - here - NaCl to expand the grid. The different catheters are specified and presented in detail.

The mathematical background for the digital image analysis embraces the Hough and Fourier transformations, as well as the Hessian matrix, filters and active contours. These methods make it possible to extract the electrode positions of the catheters. The images of the catheters come from two modalities: x-ray and computed tomography (CT). Combining the different methods, it is feasible to generate three dimensional computer models of the catheters and the electrodes. The results are presented and discussed.

For imaging bioelectrical sources, the mapping methods combined with additional measurements lead, e.g., to endocardial maps or Body Surface Potential Maps (BSPM) and therefore make it possible to gain more information about the activity. Also, the generated computer models of the catheters can be combined with other models like a volume dataset generated from medical tomographic images. This leads to a hybrid dataset for further field calculations and data analysis. Examples for these kinds of calculations are the inverse and the forward problem of electrocardiography.

Catheters

This section covers the design and the technical specifications of the catheters.


Figure 1. One string catheters. Left: Photographic image of a string catheter with four electrodes. Middle: X-ray image of the catheter shown on the left. Right: Photographic image of a string catheter with 20 electrodes.

Figure 1 shows two examples of one string catheters with four and 20 electrodes. The left catheter has a shaft diameter of 6F [1.4224 mm/0.056 inch] (F = French). The LiveWire catheter on the right side has been developed by the Daig Corporation, Minnetonka, USA. It has a shaft diameter of 7F [1.6764 mm/0.066 inch] and a steering mechanism to provide catheter tip movement. It is available in a variety of lengths and tip deflection styles.
The multielectrode basket catheter (Constellation) as introduced by Boston Scientific, Massachusetts, USA, is shown in figure 2.

Figure 2. Multielectrode basket catheter. The catheter includes eight strings with a total of 64 electrodes: Each string contains eight electrodes. One string has an additional marker and a second string has two additional markers. Left: Photographic image. The arrows mark the eight electrodes on one string. Right: X-ray image of the catheter.

The Constellation mapping catheter is in direct contact with the endocardial tissue. It has a basket-shaped design and multiple electrodes. An electrical map of the endocardial atrium can be obtained within a few beats of the heart. The Constellation system permits the diagnosis of arrhythmias, such as rapid heartbeats that develop after surgical procedures, and other atrial tachycardias [6]. The specifications are given in Table 1.

The EnSite balloon catheter has been developed by Endocardial Solutions, Inc, St. Paul, USA (Fig. 3).

\includegraphics[width=0.5\textwidth]{balloon_cath}

Figure 3. Multielectrode balloon catheter. 64 electrodes are placed on the grid and three electrodes are fixed on the string: two to find defective electrodes and one reference electrode. A balloon inside the grid can be filled with a fluid to expand the grid. In this image a small metal plate is attached to the grid for localization purposes.

The EnSite catheter is a non-contact, single-use, multi-electrode array, percutaneous catheter. The multi-electrode array senses electrical activity generated from the endocardial wall while floating in the cavity. The array area comprises an inflatable polyurethane balloon within an expandable multi-electrode grid. The multi-electrode array contains a wire braid consisting of 64 braided insulated wires. A handle and cable connector are located at the proximal end of the catheter to allow the user to position the distal end of the catheter. The EnSite catheter is inserted intravascularly over a standard guidewire into a selected chamber of the heart. When positioned, the wire braid is expanded and the balloon residing under the wire braid in the array area of the catheter is inflated with a fluid to form an ellipsoidal, multi-electrode array. When deployed, the array is small enough to permit the normal functioning of the heart [7]. The technical details are shown in table 1.

TABLE 1. Specifications of the basket and the balloon catheter.

                    Constellation              EnSite
Shaft               8 French                   9 French with J-tip
Length              120 cm                     110 cm
Electrodes          64 (8 per spline)          64
Basket diameter     38/48/60/75/94 mm          -
Splines             8                          -
Lumen size          -                          0.038 inch diameter
Balloon             -                          1.8 x 4.5 cm, 7.5 ml

Digital Image Analysis

This section gives the mathematical background of the digital image processing methods used in this work. Different transformations, filters and methods are presented that extract the required information from the images: histograms, edges, lines and shapes.

Homogeneous Transformation and Coordinates

Adding a fourth component to a three dimensional vector leads to the so-called homogeneous coordinates (Fig. 4). In general, the fourth component is defined as a constant k with $k \not= 0$. The Cartesian vector p = (x,y,z)T becomes the homogeneous vector ph = (kx,ky,kz,k)T. A transformation from the homogeneous coordinate system into the Cartesian coordinate system is achieved by dividing the first three homogeneous coordinates by the fourth homogeneous coordinate. Homogeneous matrices have the advantage that a variety of transformations are represented by matrix multiplications. One of the main advantages is the availability of perspective scaling.

\includegraphics[width=0.5\textwidth]{matrix_en}

Figure 4. The homogeneous matrix combines the transformations translation, rotation, perspective and scaling into one single matrix. The appropriate transformations are marked by rectangles. The scaling is controllable by the diagonal elements.
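As a minimal illustration of these conventions (our own sketch, not part of the original work; numpy is assumed to be available), the following Python fragment builds a homogeneous 4 x 4 matrix from a rotation and a translation, applies it to a point and converts the result back to Cartesian coordinates:

```python
import numpy as np

def to_homogeneous(p, k=1.0):
    """Cartesian (x, y, z) -> homogeneous (kx, ky, kz, k) with k != 0."""
    return np.array([k * p[0], k * p[1], k * p[2], k])

def to_cartesian(ph):
    """Homogeneous vector -> Cartesian by dividing by the fourth component."""
    return ph[:3] / ph[3]

# Homogeneous matrix: rotation about the z-axis by 30 degrees plus a translation.
phi = np.deg2rad(30.0)
A = np.array([[np.cos(phi), -np.sin(phi), 0.0, 10.0],
              [np.sin(phi),  np.cos(phi), 0.0, -5.0],
              [0.0,          0.0,         1.0,  2.0],
              [0.0,          0.0,         0.0,  1.0]])

p = np.array([1.0, 2.0, 3.0])
print(to_cartesian(A @ to_homogeneous(p)))   # transformed point in Cartesian coordinates
```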

Fourier Transformation

An important technique in digital image analysis is the Fourier transformation (FT).

In the field of electrical engineering the parameter t (time) is often used for the one dimensional Fourier transformation. In this case the transformation leads from the time domain to the frequency domain. This view goes back to system theory, which deals with time continuous or time discrete signals. Digital image analysis deals with spatial coordinates and a transformation from the spatial domain to the frequency domain. In both domains the intensity or gray value images are represented by functions of two parameters.

Continuous Fourier Transformation

The one dimensional Fourier transformation projects the signal f(x) from the spatial domain into the frequency domain [8]. The definition is

\begin{displaymath}
F(u) = \int \limits_{-\infty}^{+\infty} f(x)\, e^{-j 2 \pi u x}\, dx
\qquad \textrm{or} \qquad
f(x)\ \circ \hspace{-0.15cm} - \hspace{-0.15cm} \bullet\ F(u),
\end{displaymath}

with f(x) as a function in the spatial domain expanded to infinity. The function F(u) completely describes f(x) in the frequency domain.

In general, F(u) and f(x) are complex (a real and an imaginary part):

\begin{displaymath}F(u)=R(u) + jI(u) = \vert F(u)\vert e^{ j \phi (u) }\textrm{, with}\end{displaymath}


\begin{displaymath}
\vert F(u)\vert= \sqrt{R(u)^{2} + I(u)^{2}} \qquad \textrm{and} \qquad \phi (u) = \arctan \left ( \frac{I(u)}{R(u)} \right ).
\end{displaymath}

The absolute value |F(u)| represents the Fourier spectrum and $\phi (u)$ the phase angle.

The parameter u often stands for frequency, because the exponential term decomposes into a sine and a cosine part [9]:

\begin{displaymath}
e^{ -j 2 \pi u x } = \cos ( 2 \pi u x ) - j \sin ( 2 \pi u x ).
\end{displaymath}

This equation shows that the Fourier transform is a sum of sine and cosine harmonics whose frequencies are given by u. The physical unit in image analysis is 'line pairs' / mm or 'image points' / mm.

The back transformation from the frequency domain to the spatial domain defines

\begin{displaymath}
f(x) = \int \limits_{-\infty}^{+\infty} F(u)\, e^{ j 2 \pi u x}\, du
\qquad \textrm{or} \qquad
F(u)\ \bullet \hspace{-0.15cm} - \hspace{-0.15cm} \circ\ f(x).
\end{displaymath}

The extension to a two dimensional function leads to the Fourier transformation

\begin{displaymath}F(u,v) = \int \limits_{-\infty}^{+\infty} \int \limits_{-\infty}^{+\infty}f(x,y)
e^{ -j 2 \pi ( u x + vy ) } dxdy\end{displaymath}

and the inverse Fourier transformation

\begin{displaymath}
f(x,y) = \int \limits_{-\infty}^{+\infty} \int \limits_{-\infty}^{+\infty} F(u,v)\, e^{ j 2 \pi (u x + v y ) }\, du\, dv\textrm{.}
\end{displaymath}

Discrete Fourier Transformation

A given continuous function f(x) sampled in N equidistant intervals $\Delta x$ leads to the discrete function or series

\begin{eqnarray*}
f(x) &=& f(x_{0}+x \Delta x) \\
&=& \left \{ f(x_{0}),f(x_{0}+ \Delta x),\cdots,f(x_{0}+ [ N-1 ] \Delta x) \right \}.
\end{eqnarray*}


The discrete value x ranges from x = 0,1,...,N-1. The discrete Fourier transformation (DFT) and its inverse (IDFT) are shown in table 2.

TABLE 2. The one and two dimensional discrete Fourier transformation (DFT)
and their inverse transformations (IDFT).

1D DFT:  $F(u) = \frac{1}{N} \sum \limits_{x=0}^{N-1}f(x) e^{ -j 2 \pi \frac{u x}{N} }$
1D IDFT: $f(x) = \sum \limits_{u=0}^{N-1}F(u) e^{ j 2 \pi \frac{u x}{N} }$
with u,x = 0,1,...,N-1

2D DFT:  $F(u,v) = \frac{1}{M N} \sum \limits_{x=0}^{M-1} \sum \limits_{y=0}^{N-1}f(x,y) e^{ -j 2 \pi \left ( \frac{u x}{M} + \frac{v y}{N} \right ) }$
2D IDFT: $f(x,y) = \sum \limits_{u=0}^{M-1} \sum \limits_{v=0}^{N-1}F(u,v) e^{ j 2 \pi \left ( \frac{u x}{M} + \frac{v y}{N} \right ) }$
with u,x = 0,1,...,M-1 and v,y = 0,1,...,N-1


For a detailed description, the reader is referred to the references [10, 11, 12].

Fast Fourier Transformation

The two dimensional image analysis uses the discrete 2D Fourier transformation. Practice shows that the calculation of a Fourier transformation by the above equations is an expensive operation.

A closer look at the 1D-DFT shows that the number of complex additions and multiplications is proportional to N2: For a certain frequency u (u = 0,1,...,N-1), N complex multiplications of the function f(x) with $e^{ -j 2 \pi \frac{u x}{N} }$ and (N-1) additions of these results are needed. By introducing the fast Fourier transformation (FFT) the number of calculations is reduced and the multiplications of the FFT become proportional to N log2N [11]. Several algorithms are found in the literature. An example is the 'decimation in time radix-2 algorithm' presented in [11]. Table 3 shows that for large amounts of data the number of calculation steps is very small compared to the direct DFT.

N-dimensional DFTs can be split into 1D-DFTs (Separability).

TABLE 3. Expense of calculation of the FFT in contrast to the direct DFT [11].

N      direct DFT (N^2)   FFT (N log2 N)   Expense (log2 N / N)
64     4096               384              9.4%
128    16384              896              5.5%
256    65536              2048             3.1%
512    262144             4608             1.8%

Correlation

In image analysis a commonly used technique for detecting an object - or rather its position - is the correlation of an image with a template. The resulting correlation image shows maxima at the spots of the largest correspondence between the two images. The basic idea also goes back to system theory.

Continuous Correlation

The continuous one and two dimensional correlation are defined as

\begin{displaymath}z(x) = f(x) \otimes g(x) =
\int \limits_{-\infty}^{+\infty}f(u) g(x+u) du\end{displaymath} (1)

and

\begin{displaymath}
z(x,y) = f(x,y) \otimes g(x,y) = \int \limits_{-\infty}^{+\infty} \int \limits_{-\infty}^{+\infty} f(u,v)\, g(x+u,y+v)\, du\, dv\textrm {.}
\end{displaymath}

The correlation theorem [10] makes it possible to calculate the correlation in the frequency domain, where F* denotes the complex conjugate of F [9]:

\begin{displaymath}
z(x,y) = f(x,y) \otimes g(x,y)
\ \circ \hspace{-0.15cm} - \hspace{-0.15cm} \bullet\ 
Z(u,v) = F^{\ast}(u,v)\, G(u,v)\textrm {.}
\end{displaymath}

Discrete Correlation

The definitions of the discrete correlation are

\begin{displaymath}
z(x) = f(x) \otimes g(x) =
\frac{1}{M} \sum \limits_{x'=0}^{M-1} f(x') g(x+x')\end{displaymath}


\begin{displaymath}
z(x,y) = f(x,y) \otimes g(x,y) =
\frac{1}{MN} \sum \limits_{x'=0}^{M-1} \sum \limits_{y'=0}^{N-1} f(x',y') g(x+x',y+y')\end{displaymath}

with x,x' = 0,1,...,M-1 and y,y' = 0,1,...,N-1. The correlation theorem retains its validity in the discrete case.
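The discrete correlation theorem can be exercised with the following sketch (our own code, assuming numpy): the correlation of an image with a template is computed as the inverse FFT of F*G, and the maximum marks the position of best correspondence.

```python
import numpy as np

def correlate_fft(f, g):
    """Discrete 2D correlation via the correlation theorem: z = IDFT(F* G), scaled by 1/(MN)."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    return np.real(np.fft.ifft2(np.conj(F) * G)) / f.size

# Toy example: locate a small bright template blob inside an image.
image = np.zeros((64, 64))
image[40:44, 20:24] = 1.0
template = np.zeros_like(image)
template[0:4, 0:4] = 1.0

z = correlate_fft(template, image)
print(np.unravel_index(np.argmax(z), z.shape))   # (40, 20): shift of best correspondence
```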

Hough Transformation

The general Hough transformation defines a transformation from a data domain into a model domain by using a model equation [12]: The P model parameters m span a P-dimensional vector space. This vector space reflects all solutions of an inverse problem with P parameters. A linear equation system like

Gm = d,

with G as model matrix, m as model or parameter vector and d as data vector builds the basic approach. Considering a single data point dq leads to

\begin{displaymath}
\sum_{p=1}^{P} G_{qp} m_p = d_q \qquad \textrm{or} \qquad {\bf g}_q {\bf m} = d_q.
\end{displaymath}

The equation is a scalar product of a row q of the model matrix, gq, with the model vector m. In the model domain, the equation stands for a (P-1)-hyperplane of all vectors m with normal vector gq and distance dq from the origin of the model domain. In this way a one-to-one relationship between a point in the data domain and a (P-1)-hyperplane in the model domain is established.

Three examples follow to illustrate the Hough transformation [13, 14].

Detection of a Line

In the general line equation given by

y = ax + b,

a represents the gradient and b the point where the line intercepts the y axis. The coordinates (xi, yi) are points on the line (Fig. 5a). In the Hough domain (xi, yi) are fixed and (a, b) are the parameters (Fig. 5b):

b = -ax + y.

For each point (xi, yi) in the spatial domain, a corresponding line in the Hough domain exists. These lines intersect in a single point (a', b'), which gives the parameters of the sought line in the spatial domain.

It is not possible to detect vertical lines this way, because the gradient a becomes infinite. This can be avoided by choosing the normal form of the line equation

\begin{displaymath}x \cos \alpha + y \sin \alpha - \rho=0,\end{displaymath}

with $\rho$ for the distance from the origin of the coordinate system to the line and $\alpha$ for the angle between the normal of the line and the x-axis. An image point is then not represented by a line in the Hough domain, but by a sinusoidal curve. The common intersection point of the curves determines the parameters $\alpha$ and $\rho$.

\includegraphics[width=0.4\textwidth]{hough_line_a} \includegraphics[width=0.4\textwidth]{hough_line_b}
[a][b]

Figure 5. Example for the Hough transformation. These images show the detection of a line in the spatial and the Hough domain. (a) Line in the spatial domain. (b) Hough transformation of the line.
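A compact sketch of the normal-form Hough transform described above (our own code, not the authors' implementation): every edge pixel votes for a sinusoidal curve in an (alpha, rho) accumulator, and the accumulator maximum yields the line parameters.

```python
import numpy as np

def hough_line(edge_img, n_alpha=180, n_rho=200):
    """Accumulate votes for x cos(alpha) + y sin(alpha) - rho = 0 and return the best (alpha, rho)."""
    ys, xs = np.nonzero(edge_img)
    diag = np.hypot(*edge_img.shape)
    alphas = np.linspace(0.0, np.pi, n_alpha, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_alpha, n_rho), dtype=int)
    for x, y in zip(xs, ys):
        rho = x * np.cos(alphas) + y * np.sin(alphas)   # one sinusoid per edge pixel
        idx = np.digitize(rho, rhos) - 1
        acc[np.arange(n_alpha), idx] += 1
    a_i, r_i = np.unravel_index(np.argmax(acc), acc.shape)
    return alphas[a_i], rhos[r_i]

# Synthetic test: the diagonal line y = x gives alpha near 3*pi/4 and rho near 0.
img = np.zeros((100, 100), dtype=bool)
img[np.arange(100), np.arange(100)] = True
print(hough_line(img))
```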

Detection of a Circle

Circles are described by the general circle equation

\begin{displaymath}(x - a)^2 + ( y - b)^2 = r^2 \Leftrightarrow b = y \pm \sqrt{r^2 - (x - a)^2},\end{displaymath}

where a, b refer to the center position and r to the circle radius. To avoid the computational requirements of a 3-D Hough transformation, the problem is heuristically organized into two stages and thus downgraded to a two dimensional Hough transformation:

  • find all circle centers, then
  • find the radius of each circle.
The corresponding circle center lies on the normal to the tangent at a given point on a circle. Thus, the normals from several pixels of the same circle intersect at the circle center (Fig. 6). A histogram over an allocated (x, y) parameter space accumulates this information. For each pixel the tangent is estimated as the line of best fit to all pixels within a small neighborhood. This allows the computation of the normal, which is recorded in the histogram. The maxima of the histogram give the locations of candidate circle centers.

The next step is to find the corresponding radius for each candidate center by computing a radial histogram: for every pixel in the image its distance from the center is computed and recorded in a 1-D histogram. Maxima of the histogram correspond to the radii of circles.

\includegraphics[width=0.4\textwidth]{hough_circle}

Figure 6. Detection of a circle by using the Hough transformation.

Detection of an Ellipse

An ellipse

a (x - p)^2 + 2 b (x - p) (y - q) + c (y - q)^2 = 1       and       ac - b^2 > 0

has five parameters (a, b, c, p, q). A five parameter Hough transformation would make high computational demands. So, an alternative heuristic approach simplifies the problem by separating the task into two phases: First, all possible ellipse centers are identified; then the remaining three parameters are determined by using a simple focusing implementation of the Hough transformation.

The tangents at three points (x1,y1), (x2,y2) and (x3,y3) on the ellipse determine the center point O. The intersection point t1 of the tangents at the first two points and the midpoint m1 of (x1,y1) and (x2,y2) are calculated. The center O lies on the ray given by t1 and m1. The second and third point form t2 and m2 in the same way. The intersection of the two rays determines the center point O (fig. 7a).

Rays formed from different pairs of image points on an ellipse intersect at the ellipse center. A two parameter Hough transformation accumulates these rays, with the intersection of the rays corresponding to a maximum of the histogram.

To calculate the remaining three parameters a, b, c the ellipse is transformed to the center of the coordinate system. The ellipse equation simplifies to

a x^2 + 2 b x y + c y^2 = 1.

By substitution of three points on the ellipse the linear equation system

\begin{displaymath}
\left[\begin{array}{lll}
x_1^2 & 2x_1y_1 & y_1^2 \\
x_2^2 & 2x_2y_2 & y_2^2 \\
x_3^2 & 2x_3y_3 & y_3^2
\end{array}\right]
\left[\begin{array}{l}a\\b\\c\end{array}\right]
=
\left[\begin{array}{l}1\\1\\1\end{array}\right]
\end{displaymath}

follows, which can be solved for a, b, c. The condition ac - b^2 > 0 has to be examined as well. If this criterion is not fulfilled, the chosen three points do not lie on the same ellipse or the tangent calculations are imprecise. In this case, a recalculation has to be done.

Finally, the parameters a, b, c are converted to $r_1, r_2, \phi$, where r1 represents the major radius, r2 the minor radius and $\phi$ the angle between r1 and the x-axis (fig. 7b).
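Assuming the center has already been found and subtracted, the following sketch (ours, with numpy) solves the 3 x 3 system for a, b, c, checks ac - b^2 > 0 and converts to r1, r2 and phi via the eigen-decomposition of the quadratic form:

```python
import numpy as np

def ellipse_from_three_points(pts):
    """pts: three (x, y) points on a centered ellipse a x^2 + 2 b x y + c y^2 = 1."""
    M = np.array([[x * x, 2 * x * y, y * y] for x, y in pts])
    a, b, c = np.linalg.solve(M, np.ones(3))
    if a * c - b * b <= 0:
        raise ValueError("points do not lie on one ellipse or the tangents were imprecise")
    w, V = np.linalg.eigh(np.array([[a, b], [b, c]]))   # ascending eigenvalues
    r1, r2 = 1.0 / np.sqrt(w[0]), 1.0 / np.sqrt(w[1])   # radii: r = 1/sqrt(eigenvalue), r1 >= r2
    phi = np.arctan2(V[1, 0], V[0, 0]) % np.pi          # angle of the major axis
    return r1, r2, phi

# Check with a known ellipse: r1 = 4, r2 = 2, rotated by 30 degrees.
phi0 = np.deg2rad(30.0)
R = np.array([[np.cos(phi0), -np.sin(phi0)], [np.sin(phi0), np.cos(phi0)]])
pts = [R @ np.array([4.0 * np.cos(t), 2.0 * np.sin(t)]) for t in (0.3, 1.5, 2.6)]
print(ellipse_from_three_points(pts))   # approximately (4.0, 2.0, 0.524)
```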

\includegraphics[width=0.4\textwidth]{hough_ellipse_a} \includegraphics[width=0.4\textwidth]{hough_ellipse_b}
[a] [b]

Figure 7. Hough transformation - Detection of an ellipse. (a) Detection of the ellipse center. (b) Determination of the remaining parameters.

Thresholding

A simple method for image segmentation is thresholding, which leads from a gray value image to a binary image. The general approach defines
\begin{displaymath}
\hat{f}(x,y) =
\left\{
\begin{array}{ll}
1 & \textrm{if } f(x,y) \geq t \\
0 & \textrm{otherwise}
\end{array}
\right. \textrm{ , }
\end{displaymath} (2)

with the original function f(x,y), the binary function $\hat{f}(x,y)$ and the threshold value t. The threshold value has to be determined with regard to each image. There are different possibilities to set this value. The simplest way is a static value: every gray value less than t becomes zero and every other value 1 (eq. 2). In semi-thresholding, the original gray value is kept instead of 1. Another method applies the just mentioned approaches within an interval - including the limits of this interval - instead of the whole gray value range. Also, histogram information can be used to find the threshold value, e.g. by taking the global minimum of the histogram.

If gray values of the background appear in the object itself, simple threshold methods run into problems. To solve these problems, dynamic thresholding takes an m x m window at the position (x,y). If the background area and the object area have almost the same frequency of occurrence, then the average value is a good threshold value:

\begin{displaymath}
t = \frac{1}{m^2} \sum_{i=0}^{m-1} \sum_{j=0}^{m-1} f(x+k-i,y+k-j) \qquad \textrm{with} \quad k = \frac{m-1}{2}.
\end{displaymath}

An extension would be to use the histogram information from this window. More elegant methods do not calculate a new value for each pixel and thus are much faster. Detailed information about thresholding can be found in [15].
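A short sketch (ours, assuming numpy and scipy) of the static threshold of eq. 2 and the dynamic window threshold described above; the m x m averaging window is realized with a uniform filter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def threshold_global(img, t):
    """Eq. 2: gray values >= t become 1, all others 0."""
    return (img >= t).astype(np.uint8)

def threshold_dynamic(img, m=15):
    """Dynamic threshold: compare each pixel against the mean of its m x m window."""
    local_mean = uniform_filter(img.astype(float), size=m)
    return (img >= local_mean).astype(np.uint8)

# Example: a bright square on a background with a brightness gradient.
img = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))
img[40:70, 40:70] += 0.3
print(threshold_global(img, 0.5).sum(), threshold_dynamic(img).sum())
```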

Region Growing

The name region growing already describes the idea: image pixels or small regions are combined into larger regions. From a set of starting points - seed points - a region spreads by successively appending neighbor pixels to the region. The appended pixels must have the same defined features - like gray value, color or texture - as the seed point. The method can also be applied to three dimensional voxels. The nature of region growing can imply some difficulties. Two of these difficulties are obvious right away: On the one hand, one or more seed points have to be determined. On the other hand, features have to be defined that decide whether a pixel is appended or not. These features must be stable, even if the growing conditions change.

Depending on the kind of task, the seed points come from a priori knowledge: the region of interest has a certain shape, brightness or color. If such information is not available, it is possible to choose seed points from the histogram, e.g. pixels near maxima [11]. The affiliation feature is often a certain gray value range. This range can be fixed, variable for different images depending on the seed point, or dynamic by recalculation while the growing process is running. The dynamic recalculation could be, e.g., the average value of all pixels that belong to the region. In addition to the discussed features, the spatial affiliation must be fulfilled. Sometimes these conditions are not sufficient to stop a region from growing. In this case additional image analysis techniques like filters have to be applied to the image.
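A minimal region growing sketch (our own code, not the authors' implementation; numpy assumed): starting from one seed point, 4-neighbors are appended as long as their gray value stays within a fixed tolerance of the seed value.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=0.1):
    """Grow a region from `seed` over 4-neighbors whose gray value is within `tol` of the seed value."""
    region = np.zeros(img.shape, dtype=bool)
    seed_value = img[seed]
    queue = deque([seed])
    region[seed] = True
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):   # N4 neighborhood
            ny, nx = y + dy, x + dx
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not region[ny, nx]
                    and abs(img[ny, nx] - seed_value) <= tol):
                region[ny, nx] = True
                queue.append((ny, nx))
    return region

# Example: grow into a homogeneous bright square from a seed inside it.
img = 0.05 * np.random.rand(100, 100)
img[30:60, 30:60] += 0.8
print(region_grow(img, (45, 45), tol=0.2).sum())   # roughly 30 * 30 pixels
```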

Neighbors of Pixels

One of the basic relationships between pixels is the neighborhood [11], which has to be considered in thresholding. The four horizontal and vertical neighbors form the 4-neighborhood of a pixel p, N4(p):

\begin{displaymath}
\begin{tabular}{c\vert c\vert c}
 & (x,y+1) & \\
\hline
(x-1,y) & (x,y) & (x+1,y) \\
\hline
 & (x,y-1) & \\
\end{tabular}
\quad .
\end{displaymath}

Taking the diagonal directions into account as well, p obtains an 8-neighborhood, N8(p):

\begin{displaymath}
\begin{tabular}{c\vert c\vert c}
(x-1,y+1) & (x,y+1) & (x+1,y+1) \\
\hline
(x-1,y) & (x,y) & (x+1,y) \\
\hline
(x-1,y-1) & (x,y-1) & (x+1,y-1) \\
\end{tabular}
\quad .
\end{displaymath}

Image Filtering

Within the scope of preprocessing, filter operations are applied to the images to suppress noise and artifacts and to extract the needed information.

Smoothing Filters

Low pass filters have a smoothing effect. They suppress noise or subtle structures [12,16] at the cost of a loss of sharpness. These filters are also used as building blocks for more complex filters.

To estimate the quality of a filter, its transfer function - the Fourier transform of the impulse response - is examined: A smoothing filter fulfills its purpose best if the transfer function has a global maximum from which the function falls monotonically and becomes zero beyond a certain cut-off frequency. The features of the filter are ideal if the transfer function is also isotropic, i.e. no direction is over- or underrated.

Gaussian Filter

One class of smoothing filters is the Gaussian filter. The one dimensional Gaussian function [9]

\begin{displaymath}
g(x) = \frac{1}{\sqrt{2\pi}\sigma} e^{- \frac{x^2}{2 \sigma ^2 }},
\end{displaymath}

defines the filter, and its shape is controlled by the parameter σ. The digital image analysis uses the discrete two dimensional form

\begin{displaymath}
g[x,y]= \frac{1}{\sqrt{2\pi}\sigma} e^{- \frac{(x^2+y^2)}{2 \sigma ^2 }}.\end{displaymath}

The Gaussian function has features which make it very useful as a smoothing filter in practice and also as a basis for more complex filters [16]:

  1. Same shape in spatial and frequency domain: The Gaussian function keeps the same shape in both domains. The Fourier transformation is computed by

    \begin{eqnarray*}
    {\mathcal F}\left\{ g(x) \right\} &=& \int^{\infty}_{-\infty} \frac{1}{\sqrt{2\pi}\sigma} e^{- \frac{x^2}{2 \sigma ^2 }} e^{-j \omega x} \, dx \\
    &=& \frac{1}{\sqrt{2\pi}\sigma} \int^{\infty}_{-\infty} e^{- \frac{x^2}{2 \sigma ^2 }} \cos \omega x \, dx
    - \frac{1}{\sqrt{2\pi}\sigma} \int^{\infty}_{-\infty} e^{- \frac{x^2}{2 \sigma ^2 }}\, j \sin \omega x \, dx . \\
    \end{eqnarray*}


    The second integral becomes zero because the sine function and thus the integrand is antisymmetric. The Fourier transformation simplifies to:

    \begin{eqnarray*}
    {\mathcal F}\left\{ g(x) \right\} &=& \frac{1}{\sqrt{2\pi}\sigma} \int^{\infty}_{-\infty} e^{- \frac{x^2}{2 \sigma ^2 }} \cos \omega x \, dx
    = e^{- \frac{\omega^2}{2 \nu ^2 }}, \qquad \nu^2=\frac{1}{\sigma ^2} .
    \end{eqnarray*}


  2. Rotational symmetry: The isotropy of the function guarantees uniform smoothing in all directions.
  3. Scale factor: By scaling the parameter $\sigma$, it is possible to set the width of the function and with it the level of smoothness.
  4. Separability: An efficient implementation - even for larger filter widths - can be achieved by splitting the 2D function into two 1D functions (see 3.2.3); a short sketch follows below.
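The separability property can be demonstrated with a small sketch (ours, assuming numpy and scipy): smoothing with two 1D Gaussian masks gives the same result as smoothing with the full 2D mask.

```python
import numpy as np
from scipy.ndimage import convolve, convolve1d

def gaussian_kernel_1d(sigma):
    """Sampled 1D Gaussian mask, normalized to sum 1."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

g1 = gaussian_kernel_1d(2.0)
g2 = np.outer(g1, g1)                          # separable 2D mask

img = np.random.rand(128, 128)
smoothed_2d = convolve(img, g2, mode='nearest')
smoothed_sep = convolve1d(convolve1d(img, g1, axis=0, mode='nearest'), g1, axis=1, mode='nearest')
print(np.allclose(smoothed_2d, smoothed_sep))  # True: two 1D passes replace one 2D pass
```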

Derivative Filters

To detect image structures like edges or lines, filters are used which extract derivatives from the image. The Roberts, Prewitt and Sobel operators [11] are examples of filter types that supply the absolute value of the gradient in different ways.

The sensitivity towards noise can be decreased considerably by taking the derivative of a smoothed (low pass filtered) image:

Instead of differentiating the function directly, it is convolved with the Gaussian filter and differentiated afterwards. With the convolution theorem [17, 18] it follows:

\begin{displaymath}
\partial f(x,y) \Longrightarrow \partial (g(x,y)\ast f(x,y)) = f(x,y)\ast \partial g(x,y)
\end{displaymath} (3)

with

\begin{displaymath}
g(x,y)= \frac{1}{\sqrt{2\pi}\sigma} e^{- \frac{(x^2+y^2)}{2 \sigma ^2 }}.
\end{displaymath}

Eq. 3 shows that it leads to the same result to either
  • convolve the image with the Gaussian function and differentiate the result, or
  • differentiate the Gaussian function and convolve the result with the image.
The derivative of the Gaussian function (second option) can be calculated analytically and is therefore practical in computer systems, as sketched below: From the continuous Gaussian function the first and second derivatives are shown in Figure 8. To use them for digital image analysis, discrete filter masks - one for each σ - have to be generated. An efficient filter design with the Gaussian filter was developed by Deriche [19, 18].
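The second option can be sketched as follows (our code, assuming scipy; the normalization of the Gaussian does not affect the edge position): the analytic first derivatives gx and gy are sampled and used as filter masks.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_derivative_masks(sigma):
    """Sampled first derivatives g_x and g_y of a 2D Gaussian, derived analytically."""
    radius = int(3 * sigma)
    x, y = np.meshgrid(np.arange(-radius, radius + 1),
                       np.arange(-radius, radius + 1), indexing='ij')
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return -x / sigma**2 * g, -y / sigma**2 * g

img = np.zeros((64, 64))
img[:, 32:] = 1.0                               # step edge
gx, gy = gaussian_derivative_masks(sigma=1.5)
magnitude = np.hypot(convolve(img, gx, mode='nearest'), convolve(img, gy, mode='nearest'))
print(magnitude.argmax(axis=1)[32])             # maximum response near column 32, the edge position
```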

\includegraphics[width=.99\textwidth]{gaussplots}

Figure 8: Derivatives of the Gaussian functions. The upper image presents the original Gaussian function $g[x,y]= \frac{1}{\sqrt{2\pi}\sigma} e^{- \frac{(x^2+y^2)}{2 \sigma ^2 }}$. In the second row, from left to right, there are the first derivatives gx and gy. The third row shows the second derivatives gxx, gxy = gyx and gyy.

Erosion and Dilation

Erosion and dilation are morphological filters. The erosion changes the shape of objects in an image by eroding (reducing) the boundaries of bright objects, while the boundaries of dark objects get enlarged. The erosion filter is often used to reduce or eliminate small bright objects. The dilation filter works the other way round.

Hessian Matrix

Setting the intensity values of a 2D image as a third coordinate in an orthogonal system leads to a three dimensional surface. Information about the topology of a 2D image can be gained - apart from the gradients - by taking the second derivatives of the image. These second derivatives form the so-called Hessian matrix:

\begin{displaymath}
{\bf H}_f({\bf x}) =
\left( \begin{array}{ccc}
f_{x_1,x_1}({\bf x}) & \cdots & f_{x_1,x_n}({\bf x}) \\
\vdots & \ddots & \vdots \\
f_{x_n,x_1}({\bf x}) & \cdots & f_{x_n,x_n}({\bf x})
\end{array} \right)
\end{displaymath}

and for a two dimensional image:

\begin{displaymath}
{\bf H}(x,y) =
\left( \begin{array}{cc}
f_{xx}(x,y) & f_{xy}(x,y)\\
f_{xy}(x,y) & f_{yy}(x,y)
\end{array} \right).
\end{displaymath}

The derivatives are calculated for each point of the image. The matrix is real and symmetric. Therefore, its diagonal form [20] and thus a local orthonormal coordinate system can be built for each image point. After transforming the matrix to its diagonal form, the Eigenvector, which belongs to the largest Eigenvalue, points into the direction of the locally strongest curvature. The Eigenvector belonging to the smallest Eigenvalue shows into the direction of the locally smallest curvature. This local coordinate system is shown in an example of a sinusoidal wave (fig. 9).
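A small sketch of this eigen-analysis (ours, with numpy): the second derivatives at one pixel are estimated with central differences and the symmetric eigen-solver returns the two curvature directions.

```python
import numpy as np

def hessian_eigen(img, y, x):
    """Eigenvalues and eigenvectors of the 2x2 Hessian at pixel (y, x) via central differences."""
    fxx = img[y, x + 1] - 2 * img[y, x] + img[y, x - 1]
    fyy = img[y + 1, x] - 2 * img[y, x] + img[y - 1, x]
    fxy = (img[y + 1, x + 1] - img[y + 1, x - 1]
           - img[y - 1, x + 1] + img[y - 1, x - 1]) / 4.0
    return np.linalg.eigh(np.array([[fxx, fxy], [fxy, fyy]]))

# Sinusoidal wave as in Fig. 9: the strongest curvature at a crest points across the wave.
yy, xx = np.mgrid[0:64, 0:64]
wave = np.sin(2 * np.pi * xx / 16.0)            # varies along x only
w, V = hessian_eigen(wave, 32, 4)               # pixel (32, 4) lies on a crest
i = np.argmax(np.abs(w))                        # eigenvalue of largest magnitude
print(V[:, i])                                  # approximately (+-1, 0): strongest curvature along x
```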

\includegraphics[width=.6\textwidth]{welle}

Figure 9. Eigenvectors of the Hessian matrix. The Hessian matrix was calculated for a single point on the maximum of a sinusoidal wave. The Eigenvector which belongs to the largest Eigenvalue (red) points into the direction of the locally strongest curvature. The one belonging to the smallest Eigenvalue shows into the direction of the locally smallest curvature (green).

Canny Edge Detection

Edge detection together with methods like region growing or active contours (see 3.6 and 3.10) makes it possible to find connected regions. Edge detection improves the robustness of these methods; e.g. the knowledge about the edges allows a better formulation of the termination criteria.

The basic idea when searching for edges is finding image regions with a high gradient. As shown in section 3.7, some preliminary considerations have to be undertaken to extract usable gradients. A derivative operator for edge detection must fulfill two requirements [16]:

  1. Noise reduction: The possibility that noise is detected as an edge must be as low as possible.
  2. Exact localization: The edge has to be found as exactly as possible from the calculated gradients.
Both requirements are hard to fulfill at the same time, because the stronger the smoothing of an image, the less precisely the position of the edge can be determined. Canny's proposal to solve this problem [21] is subdivided into three parts (fig. 10).

\includegraphics[width=.4\textwidth]{canny_en}

Figure 10. Flowchart for Canny edge detection.

Calculate Local Gradients

The recursive Gaussian filter developed by Deriche [18] - presented in section 3.7.2 - gives the derivatives of the image. This filter fulfills the above quality criteria. Deriche also extended the method of Canny to the three dimensional case [19].

Non-Maxima-Suppression

Sometimes the image M[x,y] contains wide lines or edges with high gradient values. Searching for local maxima thins these regions, so the width of the detected edges can be restricted to a few pixels:

The gradient directions are quantized to those for which a horizontal, vertical or diagonal movement in a 3 x 3 neighborhood is possible. A point from M[x,y] is only taken into the resulting image N[x,y] if its value is larger than those of its neighbors along the gradient direction. All other values of N[x,y] become zero.

Jain and Devernay describe methods with subpixel accuracy in edge detection in [16] and [22].

Hysteresis-Thresholding

Using a normal thresholding algorithm with a static value t to suppress the wrongly detected edges in N[x,y] comes with the problem of choosing t: If t is too small, too many wrongly detected edges are left in the image. If t is too large, important edges are missing.

Hysteresis thresholding offers an efficient approach: Two different threshold values thigh and tlow are applied to the result of the non-maxima suppression. This leads to two images Thigh[x,y] and Tlow[x,y]. Thigh[x,y] will probably contain few wrongly detected edges, but gaps within these edges. Presume that a connected contour in Thigh[x,y] has a minimum length of lmin and a gap. The algorithm checks whether points from Tlow[x,y] can be added to Thigh[x,y] within an 8-neighborhood until the gap is closed or the end of the contour is reached. The method becomes faster by searching only orthogonal to the gradient direction.
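The hysteresis step can be sketched as follows (our own code, with numpy): strong edge pixels from Thigh seed a breadth-first search that adds weak pixels from Tlow within an 8-neighborhood.

```python
import numpy as np
from collections import deque

def hysteresis(grad_mag, t_low, t_high):
    """Keep pixels above t_high plus all pixels above t_low that are 8-connected to them."""
    strong = grad_mag >= t_high
    weak = grad_mag >= t_low
    edges = strong.copy()
    queue = deque(zip(*np.nonzero(strong)))
    while queue:
        y, x = queue.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (0 <= ny < grad_mag.shape[0] and 0 <= nx < grad_mag.shape[1]
                        and weak[ny, nx] and not edges[ny, nx]):
                    edges[ny, nx] = True
                    queue.append((ny, nx))
    return edges

# Toy gradient image: an edge whose strength dips below t_high in the middle.
grad = np.zeros((20, 20))
grad[10, :] = 0.9
grad[10, 8:12] = 0.4
print(hysteresis(grad, t_low=0.3, t_high=0.6)[10].astype(int))   # the gap is bridged
```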

Active Contours

Active contours are multidimensional, geometric models capable of elastic deformation - depending on their degrees of freedom - to match a set of measurement data. The elastic forces come from the internal deformation energy. External forces shape the model; they are derived from a potential energy that depends on the measurement data.

Active contours have the ability to segment anatomical structures and follow them in image sequences by combining information derived from the image data (bottom-up) with a priori knowledge about the position, size and shape of the structure (top-down) [23]. They differ in the manner and efficiency of the underlying models, their dimension and their motion equations. Many extensions and adaptations have been reported since the first publication of Terzopoulos et al. [24]. Two examples are discussed in the following.

Snakes

The classical approach to active contours is the so-called snake [24], a parameterized contour moving in the plane $(x,y) \in {\mathbb{R}}^2$ to minimize a given energy functional. The coordinate functions x(s) and y(s) form the parameter description of the contour.

These functions have the image domain $(x,y) \in {\mathbb{D}}^{2}$ as their range and $s \in [0,1]$ as their domain of definition. The energy functional

\begin{displaymath}
{\bf E}({\bf v})= {\bf S}({\bf v}) + {\bf P}({\bf v})
\end{displaymath} (4)

describes the shape of the contour. The final shape of the contour results from minimizing the energy. The first energy term stands for the internal elastic deformation energy and is the sum of the strain energy $S_{D}({\bf v})$ and the bending energy $S_{K}({\bf v})$:
\begin{displaymath}{\bf S}({\bf v}) = S_{D}({\bf v}) + S_{K}({\bf v})\end{displaymath}

\begin{displaymath}
S_{D}({\bf v}) = \int\limits_{0}^{1}
w_{D}\left\vert\frac{\partial {\bf v}}{\partial s}\right\vert^{2} ds
\qquad \textrm{and} \qquad
S_{K}({\bf v}) = \int\limits_{0}^{1}
w_{K}\left\vert\frac{\partial^{2} {\bf v}}{\partial s^{2}}\right\vert^{2} ds\, .
\end{displaymath}

The parameter wD defines the tension and thus the strain behavior, whereas wK controls the rigidity and thus the smoothness of the contour.

The image information influences the second term for the external energy in eq. 4. In

\begin{displaymath}{\bf P}({\bf v}) = \int\limits_{0}^{1} P({\bf v}(s))ds\end{displaymath}

the scalar potential function $P({\bf v}(s))$ is chosen in such a way that its minima coincide with the positions of the image characteristics to be found. Edges are well suited as image characteristics for the segmentation. The gradient of the smoothed image I(x,y),

\begin{displaymath}
P(x,y) = -w_{B}\vert\nabla G_{\sigma} \ast I(x,y)\vert\, \mbox{,}\end{displaymath}

builds the potential function, with $G_{\sigma}$ being a smoothing filter whose bandwidth influences the expansion of the local minima.

The function ${\bf v}(s)$ must fulfill the Euler-Lagrange equation

\begin{displaymath}
-\frac{\partial}{\partial s}
\left(w_{D} \frac{\partial {\bf v}}{\partial s} \right)
+ \frac{\partial^{2}}{\partial s^{2}}
\left(w_{K} \frac{\partial^{2} {\bf v}}{\partial s^{2}} \right)
+ \nabla P \left({\bf v}(s) \right) = {\bf0}
\end{displaymath}

as the necessary condition for the existence of a solution of eq. 4 [25]. It is possible to solve this equation numerically, e.g. by finite difference and finite element methods.
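A heavily simplified numerical sketch (ours, not the scheme used in this work): an explicit Euler iteration for a closed contour with constant wD and wK, periodic derivatives via np.roll, and a synthetic potential whose minimum lies on a circle.

```python
import numpy as np

def snake_step(v, grad_P, w_d=0.05, w_k=0.01, dt=0.5):
    """One explicit Euler step of v_t = w_D v'' - w_K v'''' - grad P(v) for a closed contour."""
    d2 = np.roll(v, -1, axis=0) - 2 * v + np.roll(v, 1, axis=0)                    # v''
    d4 = (np.roll(v, -2, axis=0) - 4 * np.roll(v, -1, axis=0) + 6 * v
          - 4 * np.roll(v, 1, axis=0) + np.roll(v, 2, axis=0))                     # v''''
    xi = np.clip(np.round(v[:, 0]).astype(int), 0, grad_P[0].shape[1] - 1)
    yi = np.clip(np.round(v[:, 1]).astype(int), 0, grad_P[0].shape[0] - 1)
    ext = np.column_stack([grad_P[0][yi, xi], grad_P[1][yi, xi]])                  # nabla P at v
    return v + dt * (w_d * d2 - w_k * d4 - ext)

# Synthetic potential with its minimum on a circle of radius 30 around (64, 64).
yy, xx = np.mgrid[0:128, 0:128]
P = (np.hypot(xx - 64.0, yy - 64.0) - 30.0) ** 2 / 100.0
dPdy, dPdx = np.gradient(P)
grad_P = (dPdx, dPdy)

t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
v = np.column_stack([64 + 50 * np.cos(t), 64 + 50 * np.sin(t)])                    # initial contour
for _ in range(400):
    v = snake_step(v, grad_P)
print(np.hypot(v[:, 0] - 64, v[:, 1] - 64).mean())                                 # close to 30
```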

Balloons

In contrast to the more expensive layer-by-layer volume segmentation with planar active contours, three dimensional surface models can speed up the segmentation. Simultaneously, the noise insensitivity and the smoothness between the layers increase. In particular, the three dimensional models can be used efficiently and precisely in the segmentation of time dependent volume images.

The physical behavior of the three dimensional active contour or balloon is equivalent to - depending on the choice of parameters - a thin plate or a strained membrane and deforms uniformly. The three dimensional active contour is given by

\begin{eqnarray*}
[0,1]^{2} & \mapsto & {\mathbb{R}}^3 \\
(u,v) & \mapsto & {\bf x}\left( u,v \right) = \Bigl( x(u,v), y(u,v), z(u,v) \Bigr)^T .
\end{eqnarray*}


The energy functional
\begin{displaymath}
{\bf S}({\bf x}) = \int\limits_{0}^{1}\int\limits_{0}^{1}
\alpha_{10}\left\vert\frac{\partial {\bf x}}{\partial u}\right\vert^{2}
+ \alpha_{01}\left\vert\frac{\partial {\bf x}}{\partial v}\right\vert^{2}
+ 2\beta_{11}\left\vert\frac{\partial^{2} {\bf x}}{\partial u\, \partial v}\right\vert^{2}
+ \beta_{20}\left\vert\frac{\partial^{2} {\bf x}}{\partial u^{2}}\right\vert^{2}
+ \beta_{02}\left\vert\frac{\partial^{2} {\bf x}}{\partial v^{2}}\right\vert^{2}
\, du\, dv
\end{displaymath} (5)

describes the deformation energy of the contour [26] and corresponds to the term ${\bf S}({\bf v})$ in eq. 4. The non-negative factors $\alpha_{ij}(u,v)$ and $\beta_{ij}(u,v)$ determine the tension and rigidity of the surface. An increase of $\alpha_{ij}(u,v)$ normally shrinks the surface, whereas larger values of $\beta_{ij}(u,v)$ restrict the flexibility of the contour. The factor $\beta_{11}$ is specific to the three dimensional contour and rates the torsion of the surface.

Methods

This section describes the methods to determine the positions of the catheters and the electrodes with techniques of digital image analysis (see section 3). The two and three dimensional images of the catheters come from two modalities: x-ray and computed tomography (CT).

One String Catheter

Calibration

The usage of a biplanar x-ray image intensifier system allows the acquisition of images from two different views. This is necessary for a spatial localization of the electrodes.

At first, the system must be calibrated. X-ray image amplifier systems have a larger distortion than lens systems. Thus, a distortion correction is executed before the actual camera calibration.

In x-ray systems two main distortions appear: one due to the mapping of a flat image onto a curved input phosphor (pincushion distortion) and a second due to the deflection of electrons in the earth's magnetic field (S-distortion) [27]. A distortion correction technique approximates the mapping behavior, which is independent of the system parameters. The internal parameters make it possible to describe an undistorted mapping of the pixel coordinates. They are calculated heuristically by the bicubic polynomial

\begin{displaymath}
\left( \begin{array}{c}
x_u \\
y_u
\end{array}
\right)
=
\left( \begin{array}{cccc}
c_{1,1} & c_{1,2} & \cdots & c_{1,10} \\
c_{2,1} & c_{2,2} & \cdots & c_{2,10}
\end{array}
\right)
\left( \begin{array}{c}
1 \\ x_d \\ y_d \\ x_d y_d \\ x^2_d \\ y^2_d \\ x^2_d y_d \\ x_d y^2_d \\ x^3_d \\ y^3_d
\end{array}
\right)
\end{displaymath} (6)

which can handle the possible appearance of five aberrations: pincushion distortion, astigmatism, S-distortion, spherical aberration and coma [28]. These aberrations are errors of the third order.
A perspex plate with 265 metal pellets, mounted directly in front of the input phosphor screen, serves as reference object for the calculation of the coefficients (Fig. 11). The image of the plate is correlated with a template image that represents a circle [29]. This allows the positions of the pellets to be determined and thus an overdetermined linear equation system to be set up for the coefficients of eq. 6. The singular value decomposition (SVD), for example, delivers an approximation to the solution of the equation system [20].

\includegraphics[width=.4\columnwidth]{platte} \includegraphics[width=.4\columnwidth]{plattea}

Figure 11. Photographic and X-ray image of the distortion correction reference object. The pellets have a diameter of 2 mm (middle pellet 4 mm) and a square arrangement with a spacing of 10 mm.
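The least-squares estimation of the coefficients in eq. 6 can be sketched as follows (our own code with numpy; the pellet grid and the radial distortion are synthetic stand-ins): each detected pellet contributes one row of third-order monomials, and the overdetermined system is solved with an SVD-based least-squares routine.

```python
import numpy as np

def monomials_3rd_order(xd, yd):
    """The ten bicubic terms of eq. 6 for distorted coordinates (xd, yd)."""
    return np.column_stack([np.ones_like(xd), xd, yd, xd * yd, xd**2, yd**2,
                            xd**2 * yd, xd * yd**2, xd**3, yd**3])

def fit_distortion(distorted, undistorted):
    """distorted, undistorted: (N, 2) pellet positions; returns two rows of 10 coefficients."""
    B = monomials_3rd_order(distorted[:, 0], distorted[:, 1])
    cx, *_ = np.linalg.lstsq(B, undistorted[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(B, undistorted[:, 1], rcond=None)
    return cx, cy

def undistort(points, cx, cy):
    B = monomials_3rd_order(points[:, 0], points[:, 1])
    return np.column_stack([B @ cx, B @ cy])

# Synthetic check: pellets on a 10 mm grid, warped by an artificial radial distortion.
gx, gy = np.meshgrid(np.arange(-80.0, 81.0, 10.0), np.arange(-80.0, 81.0, 10.0))
true_pts = np.column_stack([gx.ravel(), gy.ravel()])
r2 = (true_pts**2).sum(axis=1, keepdims=True)
distorted = true_pts * (1 + 1e-6 * r2)
cx, cy = fit_distortion(distorted, true_pts)
print(np.abs(undistort(distorted, cx, cy) - true_pts).max())   # residual far below the 10 mm spacing
```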

The camera calibration can be performed with the undistorted image using a common method like the direct linear transformation [11, 29]. That method - built upon the principles of a pinhole camera - works with homogeneous coordinates. To map three dimensional world coordinates to two dimensional image coordinates, the homogeneous world coordinates are multiplied with a calibration matrix:

$\displaystyle {\bf b}_h = {\bf A}\, {\bf w}_h$ (7)

\begin{displaymath}
\left( \begin{array}{c}
b_{h1} \\
b_{h2} \\
b_{h3} \\
b_{h4}
\end{array} \right)
\stackrel{k = 1}{=}
\left( \begin{array}{cccc}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{array} \right)
\left( \begin{array}{c}
x_{w} \\
y_{w} \\
z_{w} \\
1
\end{array} \right). \qquad (8)
\end{displaymath}

The variable ${\bf A}$ stands for the calibration matrix, ${\bf b}_h$ for the homogeneous image vector and ${\bf w}_h$ for the homogeneous world vector. To determine the matrix elements of ${\bf A}$, the coordinates from eq. 8 are transformed to the Cartesian form
\begin{displaymath}
x_{b}=\frac{b_{h1}}{b_{h4}}
\textrm{ ,}
\qquad
y_{b}=\frac{b_{h2}}{b_{h4}}
\textrm{ ,}
\qquad
z_{b}=\frac{b_{h3}}{b_{h4}}\stackrel{!}{=}0
\enspace \textrm{.}
\end{displaymath} (9)

From eq. 9 follows b_{h1} = x_b b_{h4} and b_{h2} = y_b b_{h4}, and eq. 7 leads to
xb bh4 = a11 xw + a12 yw + a13 zw + a14  
yb bh4 = a21 xw + a22 yw + a23 zw + a24 (10)
bh4 = a41 xw + a42 yw + a43 zw + a44 .  

Substituting bh4 in the above eq. 10 leads to two equations with 12 unknowns
a11 xw + a12 yw + a13 zw - a41 xbxw - a42xbyw - a43xbzw - a44 xb + a14 = 0  
a21 xw + a22 yw + a23 zw - a41 ybxw - a42ybyw - a43ybzw - a44 yb + a24 = 0. (11)

To solve eq. 11 six world points with known coordinates (xwi, ywi, zwi) and the corresponding image points (xbi, ybi) are needed (i = 1,...,6). To get a numerical solution one solution component is set, in general a44 = 1 [30]. From N calibration points the equation system
\begin{displaymath}
{\bf M} {\bf x} = {\bf b},
\end{displaymath} (12)

with

\begin{displaymath}
{\bf M} =
\left (\begin{array}{ccccccccccc}
x_{w}^{1} & y_{w}^{1} & z_{w}^{1} & 1 & 0 & 0 & 0 & 0 & -x_{b}^{1}x_{w}^{1} & -x_{b}^{1}y_{w}^{1} & -x_{b}^{1}z_{w}^{1}\\
\vdots & & & & & & & & & & \vdots\\
x_{w}^{N} & y_{w}^{N} & z_{w}^{N} & 1 & 0 & 0 & 0 & 0 & -x_{b}^{N}x_{w}^{N} & -x_{b}^{N}y_{w}^{N} & -x_{b}^{N}z_{w}^{N}\\
0 & 0 & 0 & 0 & x_{w}^{1} & y_{w}^{1} & z_{w}^{1} & 1 & -y_{b}^{1}x_{w}^{1} & -y_{b}^{1}y_{w}^{1} & -y_{b}^{1}z_{w}^{1}\\
\vdots & & & & & & & & & & \vdots\\
0 & 0 & 0 & 0 & x_{w}^{N} & y_{w}^{N} & z_{w}^{N} & 1 & -y_{b}^{N}x_{w}^{N} & -y_{b}^{N}y_{w}^{N} & -y_{b}^{N}z_{w}^{N}\\
\end{array}\right ),
\end{displaymath}


\begin{displaymath}
{\bf x}^T =
\left (
a_{11} \ a_{12} \ a_{13} \ a_{14} \ a_{21} \ a_{22} \ a_{23} \ a_{24} \ a_{41} \ a_{42} \ a_{43}
\right )
\end{displaymath}

and

\begin{displaymath}
{\bf b}^T =
\left (
x_{b}^{1} \ \dots \ x_{b}^{N} \ y_{b}^{1} \ \dots \ y_{b}^{N}
\right )
\end{displaymath}

can be set up and solved with e. g. SVD.
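Setting up and solving eq. 12 can be sketched as follows (ours, with numpy; a44 is fixed to 1 as in the text, and the calibration matrix and reference points are synthetic):

```python
import numpy as np

def dlt_calibrate(world, image):
    """Estimate the 11 free elements of A (a44 = 1) from N >= 6 world points (N, 3)
    and their image points (N, 2), following eqs. 11 and 12."""
    xw, yw, zw = world.T
    xb, yb = image.T
    zeros, ones = np.zeros(len(world)), np.ones(len(world))
    M = np.vstack([
        np.column_stack([xw, yw, zw, ones, zeros, zeros, zeros, zeros,
                         -xb * xw, -xb * yw, -xb * zw]),
        np.column_stack([zeros, zeros, zeros, zeros, xw, yw, zw, ones,
                         -yb * xw, -yb * yw, -yb * zw]),
    ])
    b = np.concatenate([xb, yb])
    x, *_ = np.linalg.lstsq(M, b, rcond=None)        # SVD-based least-squares solution
    A = np.zeros((4, 4))
    A[0, :], A[1, :], A[3, :] = x[0:4], x[4:8], np.append(x[8:11], 1.0)
    return A

def project(A, world):
    """Apply the calibration matrix to homogeneous world points and normalize."""
    bh = np.column_stack([world, np.ones(len(world))]) @ A.T
    return bh[:, :2] / bh[:, 3:4]

# Synthetic check with a known projective matrix and twelve reference points.
A_true = np.array([[1.2, 0.1, 0.3, 5.0],
                   [0.0, 1.1, 0.2, -3.0],
                   [0.0, 0.0, 0.0, 0.0],
                   [0.001, 0.002, 0.003, 1.0]])
world = np.random.rand(12, 3) * 100
image = project(A_true, world)
A_est = dlt_calibrate(world, image)
print(np.abs(project(A_est, world) - image).max())   # close to zero
```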

A wooden cube model serves as reference object for the calibration points (fig. 12).

[a] \includegraphics[width=.33\textwidth]{stufen} [b] \includegraphics[width=.33\textwidth]{xstufen_A}
[c] \includegraphics[width=.33\textwidth]{xstufen_B}

Figure 12. Reference object for the calibration. It contains 12 metal pellets with a diameter of 5 mm. The positions of the pellets are measured with a precision of 1 μm. (a) Intermediate phase of the production - for the exact measurement - before combining into the cube model, (b) top view and (c) side view.

Catheter Electrodes

In a first step, the electrode image (Fig. 13a) is correlated with a template image. The accurate center of the electrode cannot be found because of the electrode shape in the image: the template image contains a small circle, whereas the electrode has a cylindric shape. This shape can be deformed even more depending on the position of the catheter in the image. Thus, more than one maximum per electrode may appear in the correlation image. Therefore, the found maxima serve as seed points for region growing, which results in a segmented image with knowledge of the area and the two dimensional centroid. The centroid approximates the projection of the three dimensional electrode position. With the help of the Hessian matrix, the electrodes can be sorted. Fig. 13b illustrates the general sorting idea: The information from the Hessian matrix about the topology helps to detect the tip of the catheter by analyzing the surrounding area with regard to the occurrence of Eigenvectors in the direction of the catheter string. Afterwards, the string is followed automatically, similar to the marching lines algorithm [31], and the electrodes get numbered.

\includegraphics[width=.4\textwidth]{kath} \includegraphics[width=.4\textwidth]{kathsurface}

Figure 13. Topology of the catheter. The right image shows the intensity or gray value as third coordinate.

Determination of the 3D-Position

Both x-ray tubes have a calibration matrix of the form

\begin{eqnarray*}
\left( \begin{array}{c}
b_{h1} \\
b_{h2} \\
0 \\
b_{h4}
\end{array} \right)
&=&
\left( \begin{array}{cccc}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
0 & 0 & 0 & 0 \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{array} \right)
\left( \begin{array}{c}
x_{w} \\
y_{w} \\
z_{w} \\
1
\end{array} \right).
\end{eqnarray*}
In the following, the indices l and r denote the left and the right tube. In the manner of section 4.1.1 it follows:

xb,l = a11,l xw + a12,l yw + a13,l zw - a41,l xb,lxw - a42,lxb,lyw - a43,lxb,lzw + a14,l
yb,l = a21,l xw + a22,l yw + a23,l zw - a41,l yb,lxw - a42,lyb,lyw - a43,lyb,lzw + a24,l
   (13)

and

xb,r = a11,r xw + a12,r yw + a13,r zw - a41,r xb,rxw - a42,rxb,ryw - a43,r xb,rzw + a14,r
yb,r = a21,r xw + a22,r yw + a23,r zw - a41,r yb,rxw - a42,ryb,ryw - a43,r yb,rzw + a24,r .
   (14)

The two calibration matrices and the corresponding electrode pairs in the left and right image are known. Thus, the unknown world coordinates (xw, yw, zw) can be calculated. For each electrode i, eqs. 13 and 14 lead to one of N equation systems

\begin{displaymath}
{\bf D}^i {\bf x}^i = {\bf b}^i,
\end{displaymath}

with

\begin{displaymath}
{\bf D}^i =
\left( \begin{array}{ccc}
a_{11,l} -a_{41,l} x_{b,l}^i & a_{12,l} -a_{42,l} x_{b,l}^i & a_{13,l} -a_{43,l} x_{b,l}^i \\
a_{21,l} -a_{41,l} y_{b,l}^i & a_{22,l} -a_{42,l} y_{b,l}^i & a_{23,l} -a_{43,l} y_{b,l}^i \\
a_{11,r} -a_{41,r} x_{b,r}^i & a_{12,r} -a_{42,r} x_{b,r}^i & a_{13,r} -a_{43,r} x_{b,r}^i \\
a_{21,r} -a_{41,r} y_{b,r}^i & a_{22,r} -a_{42,r} y_{b,r}^i & a_{23,r} -a_{43,r} y_{b,r}^i
\end{array} \right),
\end{displaymath}


\begin{displaymath}
{\bf x}^i =
\left( \begin{array}{c}
x_{w}^i \\
y_{w}^i \\
z_{w}^i
\end{array} \right)
\qquad \textrm{and} \qquad
{\bf b}^i =
\left( \begin{array}{c}
x_{b,l}^i - a_{14,l} \\
y_{b,l}^i - a_{24,l} \\
x_{b,r}^i - a_{14,r} \\
y_{b,r}^i - a_{24,r}
\end{array} \right),
\end{displaymath}

and i = 1,...,N. The N equation systems are overdetermined. The solutions deliver the three dimensional position of each electrode i.
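Each of these 4 x 3 systems can be solved per electrode with a least-squares call, as in the following sketch (ours; the two calibration matrices and the world point are synthetic, with a44 = 1):

```python
import numpy as np

def triangulate(A_l, A_r, p_l, p_r):
    """3D position of one electrode from its image points p_l = (xb, yb) in the left view
    and p_r in the right view, solving the overdetermined system D x = b."""
    rows, rhs = [], []
    for A, (xb, yb) in ((A_l, p_l), (A_r, p_r)):
        rows.append([A[0, 0] - A[3, 0] * xb, A[0, 1] - A[3, 1] * xb, A[0, 2] - A[3, 2] * xb])
        rows.append([A[1, 0] - A[3, 0] * yb, A[1, 1] - A[3, 1] * yb, A[1, 2] - A[3, 2] * yb])
        rhs.extend([xb - A[0, 3], yb - A[1, 3]])
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x                                             # (xw, yw, zw)

def image_point(A, p):
    bh = A @ np.append(p, 1.0)
    return bh[0] / bh[3], bh[1] / bh[3]

# Synthetic check: two calibration matrices and a known world point.
A_l = np.array([[1.0, 0.0, 0.2, 10.0], [0.0, 1.0, 0.1, -5.0],
                [0.0, 0.0, 0.0, 0.0], [0.001, 0.0, 0.002, 1.0]])
A_r = np.array([[0.9, 0.2, 0.0, -8.0], [0.1, 1.0, 0.3, 4.0],
                [0.0, 0.0, 0.0, 0.0], [0.0, 0.001, 0.002, 1.0]])
xw = np.array([20.0, 30.0, 40.0])
print(triangulate(A_l, A_r, image_point(A_l, xw), image_point(A_r, xw)))   # recovers (20, 30, 40)
```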

Multielectrode Basket Catheter

Since the image recording modality is the same as for the string catheter, the calibration and the three dimensional localization procedures do not differ from the ones presented in the last section.

The electrode positions of the basket catheter in the image are initially localized using a high pass filter and a correlation of the images with a circular template. Afterwards, the user interactively marks the strings. Each string is then traced automatically - using the Hessian matrix - and each electrode on the string gets marked. The three dimensional electrode positions are determined using the calibration matrices of the x-ray system.

In some cases not all electrodes can be found due to, e.g., extreme image noise or overlapping catheter strings. A three dimensional computer model of the basket catheter is created and visualized (fig. 14).

\includegraphics[width=.45\textwidth]{basket5}

Figure 14. Computer model of the basket catheter.

The unknown electrode positions are specified from the three dimensional model data via a matrix multiplication:

\begin{displaymath}
{\bf A}\; \vec{x}_{Model,unknown} =
\left (\begin{array}{c}
x_{h,Mesh,unknown}\\
y_{h,Mesh,unknown}\\
z_{h,Mesh,unknown}\\
k_{h}
\end{array}
\right ).
\end{displaymath}


All homogenous coordinates can be transformed to cartesian coordinates by dividing each coordinate by the fourth coordinate.

The equation array

\begin{displaymath}
{\bf M}_{Model,known} \,\, \vec{a} = \vec{x}_{Mesh,known}
\end{displaymath}

determines the matrix $\bf {A}$. The vector $\vec{a}$ consists of the components of the homogeneous 4 x 4 matrix $\bf {A}$, and the vector $\vec{x}_{Mesh}$ contains the measured electrode positions from the image analysis. The 15 x n matrix ${\bf M}_{Model}$ is composed of the electrode coordinates from the computer model and the measured electrode positions. Only those n positions are taken which are found in the images. The SVD calculates the elements of $\vec{a}$, resulting in the matrix $\bf {A}$ with a44 set to 1 for numerical reasons.
Two other methods to match the positions are linear and spline interpolation [20].

Multielectrode Balloon Catheter

Segmentation of the Balloon Catheter

The segmentation divides the image data taken from CT into regions: on the one hand the balloon catheter, on the other hand the heart tissue and the surrounding liquid. The images are acquired from an isolated beating pig heart positioned in a jar filled with a special perfusion fluid (Langendorff preparation [32, 33]). Fig. 15 describes the general strategy for the segmentation.

\includegraphics[width=0.35\textwidth]{seg_balloon}

Figure 15. Flowchart for the segmentation of the balloon catheter.

Two template image volumes are generated from the original image volume in the following way so that their difference image highlights the balloon. To create the first template, the noise in the original image volume is masked out with thresholding and dilation, and the result is subtracted from the original volume (fig. 16b). In the second template the fine structures are eliminated by using erosion and dilation filtering. The two templates lead to the difference image volume (fig. 16c), which contains almost no tissue information but only the fine structures.

[a] \includegraphics[width=0.3\textwidth]{diffmedian}      [b] \includegraphics[width=0.3\textwidth]{diffvorver}

    
[c] \includegraphics[width=0.3\textwidth]{diff}

Figure 16. Steps to create the difference image. (a) Median filtered image. Large noise areas are denoted white. (b) After masking the noise areas with artefacts. (c) Difference image.

The next step - from a theoretical point of view - would be using a six parameter Hough transformation on the three dimensional difference image to find the balloon catheter, which has the shape of an ellipsoid. Because of the high computational demands another heuristic way is chosen: In the Hough domain, an ellipsoid denotes a sphere with an inner hole. So, a synthetic sphere is correlated with the difference image volume. The maximum of the correlation result defines the centroid of the ellipsoid.

A model of the catheter positioned in the original volume at the centroid position determines the orientation of the two small axes of the ellipsoid: The model is rotated by 180° in steps of 1° in both axis directions. After each rotation a correlation of the result and the difference image volume is performed. The maximum of these correlation values delivers the two angles.

The mentioned methods only approximate the position of the balloon catheter. To get a precise position, the centroid and the orientation define a small initial contour (an ellipsoid). An active contour grows in the Canny filtered original image volume (Fig. 17). After the active contour reaches a stationary solution, the principal components and the centroid are calculated. Applied to the model, this information leads to a precise location of the model in the original image volume.

\includegraphics[width=0.3\textwidth]{aktivekontur}

Figure 17. The active contour fills the balloon catheter in the Canny filtered CT-image.

In the next step the rotation about the longitudinal axis of the catheter is specified. For this purpose, a model catheter including the marker is rendered into the original image with the information about centroid and orientation. Afterwards, the model is rotated about its longitudinal axis to match the real and the model marker (Fig. 18).

\includegraphics[width=0.5\textwidth]{markerlokalisation1}

Figure 18. Calculation of the orientation of the balloon catheter.

Segmentation of the Electrodes

Fig. 19 shows the general approach for the segmentation of the electrodes.

\includegraphics[width=0.75\textwidth]{seg_electrodes}

Figure 19. Flowchart for the segmentation of the electrodes.

A calibration measurement with the EnSite system must be performed to determine the electrode positions of the balloon catheter. For this purpose, a cylindrical perspex phantom with 81 electrodes is filled with NaCl (Fig. 20). A function generator applies a voltage of 1 V with a frequency of f = 10 Hz to two different electrodes on the phantom for a variety of electrode positions. The EnSite system records the potentials at the electrodes of the catheter.

\includegraphics[width=0.5\textwidth]{eichmess}

Figure 20. The phantom to localize the electrode positions on the balloon catheter. The phantom stands inside the CT and the catheter is fixed inside the phantom. In the background, there is the function generator to apply the voltage.

When acquiring CT images of the phantom with the balloon catheter inside (Fig. 21), it is important to record and keep the exact position of the phantom and not to move the catheter. This guarantees a correct measurement and calibration.

\includegraphics[width=0.5\textwidth]{ctphantom}

Figure 21. CT-image of the phantom. The catheter and the markers are clearly delimited from the fluid. Artefacts caused by the metal electrodes appear at the wall of the phantom.

The first step of the image analysis removes the artefacts (Fig. 22a) by eroding the original image volume (Fig. 22b). From the resulting image the positions with gray values higher than zero are taken. The gray values at these positions in the original image form the artefact-free image (Fig. 22c).

[a] \includegraphics[width=0.3\textwidth]{phantomstoer}     [b] \includegraphics[width=0.3\textwidth]{phantommaske}

    
[c] \includegraphics[width=0.3\textwidth]{phantomentstoer}

Figure 22. Removal of the metal artefacts from the CT-images. (a) CT-image with artefacts caused by the electrodes. (b) Mask image. The erosion filter removes the artefacts. (c) CT-image almost without any artefacts.

A phantom model is matched into the volume by minimizing a quality function, the sum of the overlapping voxels. After the localization of the balloon catheter in the CT-image volume - described in section 4.3.1 - the position of the catheter and its marker in the volume are known.

An electrical field calculation with the finite difference method determines the position of the electrodes, or rather their rotation. For this purpose, a combined model of the phantom and the balloon catheter is used. The phantom model gets the electrode positions for the voltage input assigned by hand. The catheter model includes the electrode positions derived from the EnSite system, but only relative to the centroid of the balloon catheter.

The result of the field calculation is compared to the measured EnSite potentials: The catheter model is rotated about its longitudinal axis through 360° in steps of 1°. In each rotation step, the potential information from the model electrodes is correlated with the measured data from the EnSite system. The correlation is performed for each level of the catheter (Fig. 23). The maximum of the results gives the rotation angle and thus the electrode positions relative to the marker position.

\includegraphics[width=0.4\textwidth]{ballon_level}

Figure 23. The catheter model with the eight levels. Each level or ring contains eight electrodes.
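The exhaustive rotation search described above can be sketched as follows (ours; the simulated potentials are a random stand-in for the finite difference field calculation): for every trial angle the per-level correlation coefficients between simulated and measured potentials are summed, and the best angle is the argmax.

```python
import numpy as np

def best_rotation(simulated, measured):
    """simulated: (360, 8, 8) potentials per trial angle, level and electrode;
    measured: (8, 8) EnSite potentials. Returns the angle with the largest summed
    per-level correlation."""
    scores = np.zeros(simulated.shape[0])
    for a in range(simulated.shape[0]):
        for level in range(measured.shape[0]):
            scores[a] += np.corrcoef(simulated[a, level], measured[level])[0, 1]
    return int(np.argmax(scores))

# Stand-in data: the 'measured' pattern equals the simulation at 137 degrees plus noise.
rng = np.random.default_rng(0)
simulated = rng.normal(size=(360, 8, 8))
measured = simulated[137] + 0.1 * rng.normal(size=(8, 8))
print(best_rotation(simulated, measured))   # 137
```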

Results

One String Catheter

Distortion Correction

A qualitative analysis of the positioning of the x-ray system shows a translation and rotation deformation of the distortion correction grid when the C-arm is moved. Obviously, the heavy weight of the image intensifier and the x-ray source slightly deforms the C-arm. The mechanical distortion is larger than the distortion due to the earth's magnetic field. Fig. 24 shows the correction of the distortion caused by the magnetic field. To reduce the mechanical distortion, a separate distortion correction should be performed for each position of the C-arm and before every calibration. The behavior and precision of the mechanical movement of the x-ray system have to be considered as well.

[a] \includegraphics[width=.45\textwidth]{xplate_raw}

[b] \includegraphics[width=.45\textwidth]{xplate_enh}

Figure 24. Distortion correction. The original image (a) shows a pincushion distortion and a radial loss of brightness towards the edge. The image in (b) is corrected and brightened.

Calibration

The accuracy of the calibration can be validated using the calibration reference object. The object is rotated and tilted to various positions differing from the calibration position. Per measurement, only a subset of the metal spheres is taken into account. The distances between two spheres are calculated from the measured positions ${\bf x}_m$ and from the precise positions ${\bf x}_p$. The root mean square (RMS) error

\begin{displaymath}
RMS = \sqrt{ \frac{\sum \limits_{i = 0}^{n-1} \vert {\bf x}_i \vert^2}{n}}
    = \sqrt{ \frac{\sum \limits_{i = 0}^{n-1} \vert {\bf x}_{m,i} - {\bf x}_{p,i} \vert^2}{n}}
\end{displaymath}

yields information about the accuracy; in addition, the maximum error is determined. Here, n is the number of vectors and the vector ${\bf x}_i = {\bf x}_{m,i} - {\bf x}_{p,i}$ denotes the difference of the distance vectors. The measured distances between 60 and 120 mm have an RMS error of 0.1 mm and a maximum error of < 0.5 mm.
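A direct implementation of this error measure might look as follows (variable names are illustrative; the distance vectors are assumed to be stacked row-wise):

\begin{verbatim}
# RMS and maximum error between measured and precise distance vectors.
import numpy as np

def rms_and_max_error(x_m, x_p):
    """x_m, x_p: (n, 3) arrays of measured and precise vectors."""
    diff = np.linalg.norm(x_m - x_p, axis=1)   # |x_m - x_p| for each of the n pairs
    rms = np.sqrt(np.mean(diff ** 2))          # root mean square error
    return rms, diff.max()                     # RMS and maximum error
\end{verbatim}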

To reduce the calibration effort, it is conceivable to use a Look-Up-Table (LUT) for each x-ray tube and each position of the C-arm. The LUT stores the calibration matrices and the distortion correction parameters. This method works if the x-ray system offers the possibility of precise repositioning. In that case, further examinations must analyze the required intervals between recalibration procedures.
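Such a LUT could be organized, for example, as a mapping from x-ray tube and C-arm position to the stored calibration data; the key granularity and data layout below are assumptions.

\begin{verbatim}
# Illustrative calibration look-up table keyed by x-ray tube and C-arm position.
from dataclasses import dataclass
import numpy as np

@dataclass
class CalibrationEntry:
    calibration_matrix: np.ndarray   # projection/calibration matrix of the tube
    distortion_params: np.ndarray    # parameters of the distortion correction

calibration_lut = {}                 # {(tube_id, carm_position): CalibrationEntry}

def store(tube_id, carm_position, entry):
    calibration_lut[(tube_id, carm_position)] = entry

def lookup(tube_id, carm_position):
    return calibration_lut[(tube_id, carm_position)]
\end{verbatim}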

Electrode Localization

At first, the electrodes are localized and numbered in the images of the x-ray system. The position of the centroid serves as an approximate position for the three dimensional projection (Fig. 25).

[a] \includegraphics[width=.45\textwidth]{regiongrow} [b] \includegraphics[width=.45\textwidth]{marchlines}

Figure 25. Localization of the string catheter. Figure (a): The centroid of the white area - originating from region growing - is defined as the electrode position. Figure (b): The eigenvectors of the Hessian matrix at the centroids denote the marching line on the catheter.
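The centroid computation from the region-grown areas can be sketched as follows (using SciPy's connected-component labeling; illustrative only):

\begin{verbatim}
# Electrode positions as centroids of the region-grown (binary) areas.
import numpy as np
from scipy.ndimage import label, center_of_mass

def electrode_centroids(binary_image):
    """Label connected white regions and return one centroid per region."""
    labels, n = label(binary_image)
    return center_of_mass(binary_image, labels, index=list(range(1, n + 1)))
\end{verbatim}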

Using a phantom, the localization of a complete electrode set with extracorporal and intracardial electrodes (Fig. 26) was tested. With the aid of a CCD camera system it is possible to localize the three dimensional electrode positions outside the phantom. The inner electrodes and a subset of the outer electrodes are determined from the images of the x-ray system. Both systems can be transformed into one coordinate system by rotation and translation [34] (Fig. 27).

\includegraphics[width=.4\textwidth]{eimer}

Figure 26. Phantom with 12 electrodes. A four electrode string catheter is inside during the x-ray measurement.

Figure 27. Movie: Matching of the three dimensional electrode positions of the CCD cameras and the x-ray system.
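One common way to determine such a rotation and translation from corresponding electrode positions is a least-squares rigid fit via the SVD (Kabsch algorithm); the sketch below illustrates the idea, and the method actually used in [34] may differ in detail.

\begin{verbatim}
# Least-squares rigid fit (rotation R, translation t) between corresponding
# 3D electrode positions src and dst, so that dst ~ src @ R.T + t.
import numpy as np

def rigid_fit(src, dst):
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
\end{verbatim}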

Multielectrode Basket Catheter

This technique requires only little interaction from the user and no additional image sequences or additional irradiation of the patient. The modified computer model of the basket catheter contains all measured and rated electrode positions (Fig. 28).

\includegraphics[width=.4\textwidth]{basket8}

Figure 28. Result of the comparison from the computer model and the measured data.

Tests with linear, spline and SVD interpolation show that the SVD-based interpolation with matrix multiplication yields the best results, with an error of less than 0.5 mm: electrode positions are randomly deleted from a completely localized basket catheter, interpolated by the above methods, and compared to the real positions.
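The validation procedure itself can be sketched independently of the interpolation scheme; the interpolation routine is passed in as a function, since the SVD-based scheme is not reproduced here.

\begin{verbatim}
# Sketch of the validation: randomly delete electrode positions, re-estimate
# them with a given interpolation routine and compare to the true positions.
import numpy as np

def validate(positions, interpolate, n_remove=5, seed=0):
    """positions: (64, 3) coordinates of a completely localized basket catheter."""
    rng = np.random.default_rng(seed)
    missing = rng.choice(len(positions), size=n_remove, replace=False)
    estimated = interpolate(positions, missing)    # fills in the removed electrodes
    errors = np.linalg.norm(estimated[missing] - positions[missing], axis=1)
    return errors.mean(), errors.max()
\end{verbatim}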

Multielectrode Balloon Catheter

Catheter Orientation

The catheter is matched and located in the available image volume data with a robust and reproducible method. The red model catheter represents the real catheter (Fig. 29). The rotation angle needed to match the real and the model marker is determined for each time step of the CT data set.

\includegraphics[width=0.4\textwidth]{herzcath}

Figure 29. Matching of the balloon catheter (red) with a CT volume set.

Electrode Localization

For the six configurations with eight levels each, a total of 48 related angles results. Fig. 30 gives an example of the measured and the calculated potentials: the potentials differ by about the same angle in every configuration. The average of these related angles yields a rotation angle of 53°.

\includegraphics[width=0.48\textwidth]{3-43_en} \includegraphics[width=0.48\textwidth]{23-63_en}
\includegraphics[width=0.48\textwidth]{21-65_en} \includegraphics[width=0.48\textwidth]{25-61_en}
\includegraphics[width=0.48\textwidth]{1-45_en} \includegraphics[width=0.48\textwidth]{5-41_en}

Figure 30. Comparison of the measured (blue) and the calculated (red) interpolated potentials. The images (a) to (f) show one electrode level in the different configurations.

A final result is the movie in Fig. 31, in which the measured potentials are projected onto the segmented balloon catheter. The measured potentials are interpolated over the whole surface of the balloon catheter.

Figure 31. Movie: The measured potentials projected onto the balloon catheter.

Conclusions

This work presents robust techniques for three dimensional electrode and catheter localization. The techniques require no or only a small amount of interaction from the user. The localization in the x-ray system needs no additional image sequences and no additional irradiation of the patient.

The modified computer model of the basket catheter contains all measured and rated electrode positions. That model and the string catheter model can be used in combination with the extracorporal electrode positions and a volume dataset. By using techniques which have already been developed, it is possible to transform the presented model into the frame of reference of a volume dataset [35]. The CT data can be expanded to a segmented volume dataset by classifying e. g. the heart and other tissues. All the acquired information - together with the measured electric potentials and the conductivity information - can be used for field calculations and the reconstruction of bioelectrical sources on the heart. The information concerning the balloon catheter in the heart will be used for validation and parameterization of electrophysiological models of the heart. Future objectives are the simulation and planning of medical interventions (e. g. RF ablation); in addition, more catheter types and a wider range of modalities will be taken into account.

Acknowledgments

This work is supported in part by SFB 414 "Information Technology in Medicine: Computer- and Sensor-based Surgery", a co-operation between the University of Karlsruhe, the University of Heidelberg and the German Cancer Research Center (DKFZ).

References

[1] R. J. Schilling et al., "Simultaneous endocardial mapping in the human left ventricle using a noncontact catheter," Circulation, vol. 98, pp. 887-898, American Heart Association, 1998.

[2] V. Barbaro et al., "Mapping the organization of human atrial fibrillation using a basket catheter," in Computers in Cardiology, vol. 26, pp. 475-478, IEEE, 1999.

[3] B.-C. Chang et al., "Use of a 64 channel computerized cardiac mapping system in arrhythmia surgery," Yonsei Medical Journal, vol. 36, no. 4, pp. 378-385, 1995.

[4] A. Kadish et al., "Mapping of atrial activation with a noncontact, multielectrode catheter in dogs," Circulation, vol. 99, pp. 1906-1913, 1999.

[5] J. K. Triedman et al., "Multipolar endocardial mapping of the right heart using a basket catheter," PACE, vol. 20-I, pp. 51-59, January 1997.

[6] Boston Scientific Corporation, "Boston scientific receives fda clearance for constellation advanced mapping catheter."

[7] Endocardial Solutions Inc., "Ensite catheter."

[8] W. Abmayr, Einführung in die digitale Bildverarbeitung.
Stuttgart: B. G. Teubner, 1994.

[9] I. Bronštejn and K. Semendjajew, Taschenbuch der Mathematik.
Thun und Frankfurt/Main: Verlag Harri Deutsch, 25 ed., 1991.

[10] E. O. Brigham, FFT: Schnelle Fouriertransformation.
München: Oldenbourg Verlag, 5 ed., 1992.

[11] R. Gonzalez and R. Woods, Digital image processing.
Reading, Massachusetts: Addison - Wesley Publishing Company, 1992.

[12] B. Jähne, Digitale Bildverarbeitung.
Berlin: Springer Verlag, 4 ed., 1997.

[13] R. A. McLaughlin, "Technical report - randomized hough transform: Improved ellipse detection with comparison," Tech. Rep. 97/1, The University of Western Australia, 1997.

[14] R. A. McLaughlin and M. D. Alder, "Technical report - the hough transform versus the upwrite," Tech. Rep. 97/2, The University of Western Australia, 1997.

[15] P. Haberäcker, Digitale Bildverarbeitung.
München, Wien: Carl Hanser Verlag, 4 ed., 1991.

[16] R. Jain, R. Kasturi, and B. G. Schunck, Machine Vision.
New York: McGraw-Hill, Inc., 1995.

[17] O. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint.
Cambridge, Massachusetts: The MIT Press, 1993.

[18] R. Deriche, "Recursively implementing the gaussian and its derivatives," Tech. Rep. 1893, INRIA, 1993.

[19] O. Monga, R. Deriche, G. Malandain, and J. P. Cocquerez, "Recursive filtering and edge tracking: two primary tools for 3D edge detection," Image and Vision Computing, vol. 4, Aug. 1991.

[20] W. H. Press et al., Numerical Recipes in C.
Cambridge: University Press, 2 ed., 1992.

[21] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-698, 1986.

[22] F. Devernay, "A non-maxima suppression method for edge detection with sub-pixel accuracy," Tech. Rep. 2724, INRIA, 1995.

[23] T. McInerney and D. Terzopoulos, "Deformable models in medical image analysis: A survey," Medical Image Analysis, vol. 1, no. 2, 1996.

[24] D. Terzopoulos, A. Witkin, and M. Kass, "Constraints on deformable models recovering 3D shape and nonrigid motion," Artificial Intelligence, vol. 36, pp. 91-123, 1988.

[25] R. Courant and D. Hilbert, Methods of Mathematical Physics, vol. 1.
New York: Interscience, 1953.

[26] T. McInerney and D. Terzopoulos, "A dynamic finite element surface model for segmentation and tracking in multidimensional medical images with application to cardiac 4D image analysis," Computerized Medical Imaging and Graphics, vol. 19, no. 1, pp. 69-83, 1995.

[27] B. Schueler and X. Hu, "Correction of image intensifier distortion for three-dimensional x-ray angiography," Medical Imaging 1995: Physics of Medical Imaging, vol. Proc. SPIE 2432, pp. 272-279, 1995.

[28] E. Pietka and H. K. Huang, "Correction of aberration in image-intensifier systems," Computerized Medical Imaging and Graphics, vol. 16, no. 4, pp. 253-258, 1992.

[29] I. H. de Boer, F. B. Sachse, and O. Dössel, "3D-Lokalisation von Elektroden bei der Multikanal-EKG-Ableitung mittels Analyse von Stereofarbaufnahmen," in 31. Jahrestagung: Deutsche Gesellschaft für Biomedizinische Technik e.V., vol. 42-2, pp. 129-130, Biomedizinische Technik, Oktober 1997.

[30] D. H. Ballard and C. Brown, Computer Vision.
Englewood Cliffs, NJ: Prentice Hall, 1982.

[31] J.-P. Thirion and A. Gourdon, "The marching lines algorithm: new results and proofs," Tech. Rep. 1881-1, INRIA, 1993.

[32] O. Langendorff, "Untersuchungen am überlebenden Säugetierherzen," Pflügers Archiv für die gesamte Physiologie, vol. 61, pp. 291-332, 1895.

[33] H. J. Döring and H. Dehnert, Das isolierte perfundierte Warmblüter-Herz nach Langendorff.
Biomesstechnik-Verlag March GmbH, 1985.

[34] I. H. de Boer, W. Maurer, F. R. Schneider, and O. Dössel, "Matching von dreidimensionalen Elektrodenpositionen ausgehend von biplanaren Röntgenbildverstärkern und CCD-Farbkameras," in Bildverarbeitung für die Medizin 1999, pp. 70-74, Springer Verlag, Berlin, Heidelberg, 1999.

[35] I. H. de Boer, F. B. Sachse, and O. Dössel, "Matching of intracardial and extracorporal electrode positions with segmented volume datasets of the human thorax," in Proc. of the 13th International Congress and Exhibition, (Paris, France), p. 1006, CARS '99, Elsevier, 1999.
