EP1817547B1 — Method and device for navigating and positioning an object relative to a patient
 Publication number
 EP1817547B1
 Authority
 EP
 European Patent Office
 Prior art keywords
 orientation
 object
 characterized
 patient
 sensors
 Prior art date
 Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
 Active
Classifications

 G—PHYSICS
 G01—MEASURING; TESTING
 G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
 G01C21/00—Navigation; Navigational instruments not provided for in preceding groups G01C1/00 - G01C19/00
 G01C21/10—Navigation by using measurements of speed or acceleration
 G01C21/12—Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
 G01C21/16—Navigation by integrating acceleration or speed, i.e. inertial navigation

 A—HUMAN NECESSITIES
 A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
 A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
 A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
 A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
 A61B2034/2046—Tracking techniques
 A61B2034/2048—Tracking techniques using an accelerometer or inertia sensor
 A61B2034/2051—Electromagnetic tracking systems
 A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
 A61B90/36—Image-producing devices or illumination devices not otherwise provided for
 A61B90/39—Markers, e.g. radio-opaque or breast lesion markers
 A61B2090/3954—Markers magnetic, e.g. NMR or MRI
 A61B2090/3958—Markers magnetic, emitting a signal
Description
The invention relates to a method and an apparatus for navigating and positioning an object relative to a patient during a medical operation in the operating room. For example, the navigating and positioning process may be the placement of a hip prosthesis in an exactly predetermined position relative to the femur of the patient. Until now, correct positioning has been checked visually by using cameras in the operating room. For this purpose, visually recognizable marks were applied to the object or device to be navigated, and corresponding marks were provided on the patient.
DE 4225112 describes such an optical positioning system. Although such a system operates with the required accuracy, it has numerous disadvantages. A system with several cameras and a control unit is associated with very high costs, in the range of several tens of thousands of euros. Such a system is reference-based, i.e. in principle suitable only for stationary use. If the system is to be used at a different location, the cameras must be set up again at exactly predetermined positions. Furthermore, only a limited working area on the order of about 1 m is available. Particularly disadvantageous is the problem of shading: the system can work only if the relevant marks can be captured simultaneously by all required cameras. If operating-room staff or other operating equipment is positioned between such a mark and a camera, the system does not work.  The present invention has the object of improving a method and an apparatus of the type described above in such a way that the aforementioned disadvantages do not occur or are largely overcome.
 This object is achieved by a method and an apparatus having the features of independent claims 1 and 3.
By applying the method according to the invention and the device according to the invention, the object, for example a prosthesis part or a surgical device, can be positioned in a predetermined position on the patient; in other words, the operating-room staff can change the position and orientation of the relevant object until it reaches the desired predetermined position relative to the patient or to a region of the patient. The claimed device is associated with much lower costs than the optical detection technique described above. The system according to the invention is largely transportable: apart from the sensor devices attachable to the object and to the patient, it requires no larger equipment that would have to be mounted laboriously at exactly predetermined stationary positions. The sensor devices containing the three-dimensional inertial sensors are also largely miniaturized; they can be realized in volumes of a few millimeters in dimension. These sensor devices can be fastened, and released again, at a predetermined location and in a predetermined orientation on the said object on the one hand and at an equally predetermined position on the patient on the other. For this purpose, the sensor devices advantageously have a preferably visually recognizable orientation aid, which allows correct attachment to the object or to the patient.
It also proves to be advantageous if the sensor devices have fastening means for detachably fixing the sensor device to the object and to the patient. These may be, for example, clamping or clip-like fastening means.
In order to be able to compare position data from the measured values of the first and second sensor devices in the course of the method, it is only necessary that the sensor devices be referenced at the beginning of the positioning: both sensor devices are brought to rest at a common location, or at two locations with a predetermined, known orientation relative to each other. On this basis, any displacement of the sensor devices is then determined by means of the three-dimensional sensor system and fed to the further data processing.
The first, and in particular also the second, sensor device preferably has three acceleration sensors, whose signals can be used to calculate translational movement, and additionally three rotation rate sensors, whose measured values can be used to determine the orientation in space.
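By way of illustration only (this sketch is not part of the disclosure; the function name, the simple Euler summation and the sample data are invented), the two integration chains described above, double integration of the acceleration signals and single integration of the rotation rates, can be written as:

```python
import numpy as np

def integrate_imu(accel, gyro, dt):
    """Dead-reckon from raw inertial measurements: the acceleration
    signals are integrated twice (velocity, then position) and the
    rotation rates once (rotation angles about each axis)."""
    accel = np.asarray(accel, dtype=float)       # (N, 3) in m/s^2, gravity removed
    gyro = np.asarray(gyro, dtype=float)         # (N, 3) in rad/s
    velocity = np.cumsum(accel, axis=0) * dt     # first integration
    position = np.cumsum(velocity, axis=0) * dt  # second integration
    angles = np.cumsum(gyro, axis=0) * dt        # single integration
    return position, angles

# Constant 1 m/s^2 along x for 1 s, sampled at 100 Hz; the analytic
# result is s = 0.5*a*t^2 = 0.5 m, which Euler summation overshoots slightly.
pos, ang = integrate_imu(np.tile([1.0, 0.0, 0.0], (100, 1)),
                         np.zeros((100, 3)), dt=0.01)
```

Because any constant sensor offset is integrated along with the signal, the error grows quadratically in position, which motivates the offset handling described below.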
 In addition to the rotation rate sensors, magnetic field sensors can advantageously be provided for determining the orientation of the object in space. The magnetic field sensors detect components of the earth's magnetic field and are in turn able to provide information about the orientation of the sensor device in space.
In such a case, it proves advantageous to provide means for comparing the orientation in space determined from the measured values of the magnetic field sensors with the orientation in space determined from the measured values of the rotation rate sensors. In addition, an acceleration sensor measuring the gravitational acceleration can be provided, which can likewise be used to determine the orientation of the sensor device in question in space.
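A sketch of how such a drift-free orientation reference can be computed from the gravity and magnetic-field measurements (this uses one common tilt-compensation convention, not a formula from the patent; the sensor values are invented):

```python
import numpy as np

def orientation_from_field_sensors(accel, mag):
    """Drift-free orientation from field measurements: roll and pitch
    from the gravity vector, yaw (heading) from the Earth's magnetic
    field after tilt compensation."""
    ax, ay, az = accel / np.linalg.norm(accel)
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    mx, my, mz = mag / np.linalg.norm(mag)
    # project the magnetic field vector into the horizontal plane
    xh = mx * np.cos(pitch) + mz * np.sin(pitch)
    yh = (mx * np.sin(roll) * np.sin(pitch) + my * np.cos(roll)
          - mz * np.sin(roll) * np.cos(pitch))
    yaw = np.arctan2(-yh, xh)
    return roll, pitch, yaw

# Level sensor with the horizontal field component pointing along x:
r, p, y = orientation_from_field_sensors(np.array([0.0, 0.0, 9.81]),
                                         np.array([20.0, 0.0, -40.0]))
```

Unlike the integrated rotation rates, these angles do not drift over time, which is what makes them useful as a comparison reference.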
Furthermore, it proves to be particularly advantageous if the computing means of the device according to the invention comprise means for executing a quaternion algorithm known per se from DE 103 12 154 A1.  Furthermore, it proves to be advantageous if the computing means have means for applying a previously determined and stored compensation matrix, which takes into account a deviation of the orientation of the axes of the three rotation rate sensors from an assumed mutual orientation and, when applied, compensates for the error that this deviation would otherwise cause in the calculation of the rotation angles.
 Furthermore, it proves to be advantageous if the computing means have means for executing a Kalman filter algorithm.
Inertial sensors deliver acceleration-based measurements: to obtain position data, the signals of the acceleration sensors are integrated twice, and those of the rotation rate sensors once. In the course of these integration processes, errors gradually accumulate. It therefore proves advantageous to design the method and the related device so that, before the start of data acquisition and then again whenever the device is detected to be at rest for a certain period of time, an offset value of the output signal of each rotation rate sensor is determined and subsequently subtracted, until the next determination of this offset value, so that it does not enter into the integration. In this way a fresh, current offset value is determined again and again, and the greatest possible accuracy is achieved.
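The rest-phase offset handling can be sketched as follows (illustrative only; the variance-based rest test, its threshold and all names are assumptions, since the patent does not specify how rest is detected):

```python
import numpy as np

def at_rest(gyro_window, var_limit=1e-4):
    """Declare the device at rest when the rate variance on every
    axis stays below a (hypothetical) threshold."""
    return bool(np.all(np.var(gyro_window, axis=0) < var_limit))

class OffsetCorrector:
    """Re-determine the rate-sensor offset whenever a rest phase is
    detected, and subtract it from every subsequent sample so that
    the offset does not enter the integration."""
    def __init__(self):
        self.offset = np.zeros(3)

    def update(self, rest_samples):
        self.offset = np.mean(rest_samples, axis=0)

    def correct(self, sample):
        return np.asarray(sample, float) - self.offset

rng = np.random.default_rng(0)
rest_phase = 0.02 + 0.001 * rng.standard_normal((200, 3))  # biased, at rest
corr = OffsetCorrector()
if at_rest(rest_phase):
    corr.update(rest_phase)
residual = corr.correct([0.02, 0.02, 0.02])  # a sample carrying the bias
```

After the update, the residual on a purely biased sample is close to zero, so the subsequent integration no longer accumulates the offset.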
It was also found that the unavoidable deviation of the orientation of the axes of the three rotation rate sensors from an assumed orientation very quickly leads to inaccuracies in the calculation of the rotation angles. By compensating for this error with the compensation matrix determined for the sensor device in question, an increase in accuracy that meets the requirements can be achieved. To determine the compensation matrix, the sensor device concerned is set into a rotating movement about each axis in turn before the start of object tracking, while the other two axes are held still. From the rotation rate sensor signals obtained in this way, the compensation matrix is calculated and stored in a memory of the claimed device. An industrial robot can be used for the rotary drive of the sensor device. In this way, the 3x3 non-orthogonality matrix can be recorded successively by rotating about the individual spatial axes.
$$N = \begin{pmatrix} N_{11} & N_{12} & N_{13} \\ N_{21} & N_{22} & N_{23} \\ N_{31} & N_{32} & N_{33} \end{pmatrix}$$  The off-diagonal elements of the non-orthogonality matrix would be 0 in an ideal system. The deviations are due to manufacturing tolerances, which cause the axes of the rotation rate sensors to be neither exactly orthogonal to one another nor in a predetermined orientation relative to the housing of the sensor device.
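One plausible way to apply such a calibration, sketched here with an invented near-identity matrix N (the patent states that N is recorded on a turntable but not the exact compensation formula, so the inverse-multiplication form below is an assumption):

```python
import numpy as np

# Hypothetical non-orthogonality matrix N as it might be recorded on a
# calibration turntable: near-identity with small cross-coupling terms.
N = np.array([[1.000,  0.012, -0.008],
              [0.009,  1.000,  0.011],
              [-0.010, 0.007,  1.000]])
N_inv = np.linalg.inv(N)  # computed once and stored with the device

def compensate(omega_measured):
    """The imperfect sensor triad reports N @ omega_true; multiplying
    the measurement by N^-1 recovers the true rate vector."""
    return N_inv @ np.asarray(omega_measured, float)

omega_true = np.array([0.10, -0.20, 0.05])
omega_meas = N @ omega_true   # simulated cross-coupled measurement
omega_comp = compensate(omega_meas)
```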
Where it was said above that an offset value is determined when the device is detected to be at rest for a certain period of time, this amounts to determining a drift vector, one for the three preferably orthogonal rotation rate sensors and one for the three preferably mutually orthogonal acceleration sensors. In this way a matrix D can be determined whose rows are the offsets of the individual sensors. The offsets are preferably determined anew at the beginning of an object tracking and then repeatedly at each detected rest position, and used in the further data processing.
$$D = \begin{pmatrix} D_1 \\ D_2 \\ D_3 \end{pmatrix}$$  According to an embodiment of the invention, the accuracy of the orientation determination is further increased by applying, at each measurement data acquisition and determination of the three rotation angles, a quaternion algorithm of the type defined below in order to calculate the current orientation of the object in space. The improvement rests on the following circumstance: if, at each infinitesimal step of the scan, i.e. at each measurement data acquisition, the orientation change were determined by applying the three integrated rotation angles successively as rotations about the respective axes, an error would result. This is because the measurement data of the three sensors are acquired at the same time; a rotation generally takes place about all three axes simultaneously. If the three determined rotation angles were nevertheless applied one after another as rotations about the respective axes, the rotations about the second and third axes would be in error, since those axes would already have been brought into a different orientation by the first rotation. This is countered by applying the quaternion algorithm to the three rotation angles: the three successive rotations are replaced by a single transformation. The quaternion algorithm is defined as follows:





Multiplication:
$$q_1 \cdot q_2 = \begin{pmatrix} w_1 \\ v_1 \end{pmatrix} \cdot \begin{pmatrix} w_2 \\ v_2 \end{pmatrix} = \begin{pmatrix} w_1 w_2 - v_1 \cdot v_2 \\ w_1 v_2 + w_2 v_1 + v_1 \times v_2 \end{pmatrix}$$

The multiplication is of special importance for inertial object tracking: it represents a rotation by a quaternion. For this purpose, a rotation quaternion is introduced in Eq. 18.
$$q_{red} = \begin{pmatrix} \cos\!\left(\frac{|\varphi|}{2}\right) \\ \sin\!\left(\frac{|\varphi|}{2}\right) \cdot \frac{\varphi}{|\varphi|} \end{pmatrix}$$  The vector φ consists of the individual rotations about the coordinate axes.
A rotation of a point or vector can now be calculated as follows. First, the coordinates of the point or vector are converted into a quaternion according to Eq. 16, and then the multiplication with the rotation quaternion of Eq. 18 is performed. The result quaternion contains the rotated vector in the same notation. Since the magnitude of a rotation quaternion equals one, the inverted quaternion can be replaced by the conjugate (Eq. 19).
$$q_{v'} = q_{red} \cdot q_v \cdot \left(q_{red}\right)^{-1} = q_{red} \cdot q_v \cdot \left(q_{red}\right)^{*}$$
because of Eq. 20:
$$\left| q_{red} \right| = 1$$
How can this operation be visualized? The vector φ is the normal vector of the plane in which the rotation is performed, and the rotation angle corresponds to the magnitude of the vector φ; see Fig. 1.
From Fig. 1 it can be seen that a rotation in an arbitrary plane can be performed with only a single angle. This shows the particular advantages of this approach. Further advantages are the small number of necessary parameters and trigonometric functions, which for small angles can be replaced entirely by approximations. For a rotation with the rate vector ω, the differential equation 21 applies:
$$\frac{d}{dt} q_{red} = \frac{1}{2}\, q_{red} \cdot \begin{pmatrix} 0 \\ \omega \end{pmatrix}$$
The concrete implementation of the quaternion algorithm is represented in FIG. 2 and proceeds as follows. The entire calculation uses unit vectors: starting from the start orientation, the start unit vectors E_x, E_y and E_z are determined, and from these the rotation matrix is calculated according to Eq. 22:
$$R = \begin{bmatrix} q_0^2 + q_1^2 - q_2^2 - q_3^2 & 2\,(q_1 q_2 - q_0 q_3) & 2\,(q_1 q_3 + q_0 q_2) \\ 2\,(q_1 q_2 + q_0 q_3) & q_0^2 - q_1^2 + q_2^2 - q_3^2 & 2\,(q_2 q_3 - q_0 q_1) \\ 2\,(q_1 q_3 - q_0 q_2) & 2\,(q_2 q_3 + q_0 q_1) & q_0^2 - q_1^2 - q_2^2 + q_3^2 \end{bmatrix}$$
Starting from a start orientation of the coordinate system attached to the object, in particular from the so-called start unit vectors, the 3x3 rotation matrix R is calculated according to Eq. 22. By inverting this equation, a rotation quaternion q_red(k) can be obtained. With the help of the zero quaternion, which results from the zero unit vectors, the start quaternion is calculated by a multiplication with the rotation quaternion. At the next sampling, i.e. at the next measurement data acquisition and integration of the current infinitesimal rotation angles A, B and C, the rotation quaternion q_red(k) to be used for that step is calculated. The quaternion q_akt(k-1) formed in the previous step is then multiplied with this rotation quaternion q_red(k) according to Eq. 13 to obtain the current quaternion of the present k-th step, q_akt(k). From this current quaternion, the current orientation of the object can be specified in any desired representation for the current sampling step.
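The quaternion operations and the per-step update loop can be sketched as follows (an illustrative example, not the patent's implementation; the names follow the description where possible, while the explicit normalization step and all sample values are assumptions):

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product for quaternions stored as (w, x, y, z)."""
    w1, v1 = q1[0], q1[1:]
    w2, v2 = q2[0], q2[1:]
    return np.concatenate(([w1 * w2 - v1 @ v2],
                           w1 * v2 + w2 * v1 + np.cross(v1, v2)))

def rotation_quat(phi):
    """Rotation quaternion of Eq. 18 from a rotation vector phi."""
    angle = np.linalg.norm(phi)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate(([np.cos(angle / 2.0)],
                           np.sin(angle / 2.0) * phi / angle))

def rotate_vector(phi, v):
    """Eqs. 19/20: v' = q_red * (0, v) * conjugate(q_red)."""
    q = rotation_quat(np.asarray(phi, float))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, np.concatenate(([0.0], v))), q_conj)[1:]

def track_orientation(gyro_rates, dt):
    """Fig. 2 style loop: each sampling step integrates the rates to
    the small angles A, B, C, forms the step's rotation quaternion and
    multiplies it onto the previously accumulated quaternion."""
    q_akt = np.array([1.0, 0.0, 0.0, 0.0])  # start quaternion
    for omega in gyro_rates:
        q_red = rotation_quat(np.asarray(omega, float) * dt)
        q_akt = quat_mul(q_akt, q_red)      # q_akt(k) = q_akt(k-1) * q_red(k)
        q_akt /= np.linalg.norm(q_akt)      # keep |q| = 1 numerically
    return q_akt

# 90 degrees about z maps the x axis onto the y axis:
v_rot = rotate_vector([0.0, 0.0, np.pi / 2.0], np.array([1.0, 0.0, 0.0]))
# 1 rad/s about z for 1 s, sampled at 100 Hz:
q_end = track_orientation(np.tile([0.0, 0.0, 1.0], (100, 1)), dt=0.01)
```

Because each step applies one combined rotation quaternion rather than three successive axis rotations, the ordering error described above does not arise.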
It has previously been pointed out that a Kalman filter algorithm can be used to increase the accuracy of determining or calculating position data. The concept of Kalman filtering, in particular indirect Kalman filtering, presupposes the existence of supporting information; the difference between the information obtained from sensor measurements and this supporting information then serves as input to the Kalman filter. However, since the method and device according to the invention receive no continuous information from a reference system, supporting information for determining the position is fundamentally unavailable. In order nevertheless to enable the use of indirect Kalman filtering, it is proposed to use a second acceleration sensor arranged in parallel. The difference of the sensor signals of the parallel acceleration sensors then serves as the input signal for the Kalman filter.
Figures 3, 4 and 5 schematically show the inventive concept of a redundant parallel system for Kalman filtering, wherein two sensors are arranged so that their sensitive axes extend parallel to each other (FIG. 4).  Advantageously, both integration steps are included in the modeling; one thus obtains an estimate of the position error that inevitably results from the twofold integration.
FIG. 5 illustrates this schematically in a feedforward configuration, as a concrete realization of a general indirect Kalman filter.  In this concept, a first-order Gauss-Markov process driven by white noise models the acceleration error. The model is based on the fact that the position error results from the acceleration error by double integration. This yields equations 23 to 25.
$$\dot{e}_s(t) = e_v(t)$$ $$\dot{e}_v(t) = e_a(t)$$ $$\dot{e}_a(t) = -\beta \cdot e_a(t) + w_a(t)$$  Based on the general stochastic state-space description of a time-continuous system model with state vector x(t), state matrix Φ(T), stochastic input matrix G and process noise w(t), the system equations 26 and 27 result.
$$\dot{x}(t) = \Phi(T) \cdot x(t) + G \cdot w(t)$$ $$\begin{bmatrix} \dot{e}_s(t) \\ \dot{e}_v(t) \\ \dot{e}_a(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & -\beta \end{bmatrix} \begin{bmatrix} e_s(t) \\ e_v(t) \\ e_a(t) \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} w_s(t) \\ w_v(t) \\ w_a(t) \end{bmatrix}$$
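The discrete state-transition matrix for this continuous model can be obtained numerically via the matrix exponential; a minimal sketch (the values of β and T are invented for illustration):

```python
import numpy as np

def expm_series(A, terms=30):
    """Matrix exponential via its power series; adequate for the
    small matrices and short sample times used here."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

beta, T = 0.5, 0.02                # assumed correlation parameter / sample time
F = np.array([[0.0, 1.0, 0.0],     # position error fed by velocity error
              [0.0, 0.0, 1.0],     # velocity error fed by acceleration error
              [0.0, 0.0, -beta]])  # first-order Gauss-Markov acceleration error
Phi = expm_series(F * T)           # discrete state-transition matrix
```

The resulting Phi plays the role of Φ(T) in the time-discrete system equations that follow.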
The general stochastic state-space description for the equivalent time-discrete system model is given by Equations 30 and 31.
$$x(k+1|k) = \Phi(T) \cdot x(k|k) + w_d(k|k)$$ $$\begin{bmatrix} e_s(k+1|k) \\ e_v(k+1|k) \\ e_a(k+1|k) \end{bmatrix} = \Phi(T) \cdot \begin{bmatrix} e_s(k|k) \\ e_v(k|k) \\ e_a(k|k) \end{bmatrix} + w_d(k|k)$$  Equations 32 and 33 apply to the required time-discrete measurement equation.
$$y(k) = C \cdot x(k) + v(k)$$ $$y(k) = C \cdot \begin{bmatrix} e_s(k) \\ e_v(k) \\ e_a(k) \end{bmatrix} + v(k)$$  In Equation 33, v(k) is a vectorial white-noise process. The difference between the two sensor signals serves as the input for the Kalman filter. This results in equations 34 to 36 for the measurement equation.
$$y(k) = \Delta e_a(k) = \left[a_2(k) + e_{a2}(k)\right] - \left[a_1(k) + e_{a1}(k)\right]$$ $$y(k) = e_{a2}(k) - e_{a1}(k)$$ $$y(k) = \begin{bmatrix} 0 & 0 & -1 \end{bmatrix} \cdot \begin{bmatrix} e_{s1}(k) \\ e_{v1}(k) \\ e_{a1}(k) \end{bmatrix} + e_{a2}(k)$$

 Considering e _{a2} (k) as a further state, the extended system model according to Equations 39 and 40 results.
$$x_e(k+1|k) = \Phi_e(T) \cdot x_e(k|k) + w_{de}(k|k)$$ $$\begin{bmatrix} e_{s1}(k+1|k) \\ e_{v1}(k+1|k) \\ e_{a1}(k+1|k) \\ e_{a2}(k+1|k) \end{bmatrix} = \Phi_e(T) \cdot \begin{bmatrix} e_{s1}(k|k) \\ e_{v1}(k|k) \\ e_{a1}(k|k) \\ e_{a2}(k|k) \end{bmatrix} + w_{de}(k|k)$$  The equations 41 to 43 apply here.
$$\Phi_e(T) = \begin{bmatrix} \Phi(T) & 0 \\ 0 & e^{-\beta_2 T} \end{bmatrix}$$ $$w_{de} = \begin{bmatrix} w_d(k) \\ w_{a2}(k) \end{bmatrix}$$ $$Q_{de} = \begin{bmatrix} Q_d & 0 \\ 0 & q_{a2} \end{bmatrix}$$  For the extended measurement model, equations 44 to 47 apply.
$$y(k) = \left[a_2(k) + e_{a2}(k)\right] - \left[a_1(k) + e_{a1}(k)\right]$$ $$y(k) = e_{a2}(k) - e_{a1}(k)$$ $$y(k) = C \cdot x(k) + v(k)$$ $$y(k) = \begin{bmatrix} 0 & 0 & -1 & 1 \end{bmatrix} \cdot \begin{bmatrix} e_{s1}(k) \\ e_{v1}(k) \\ e_{a1}(k) \\ e_{a2}(k) \end{bmatrix} + v(k)$$
The covariance matrix R of the measurement noise is thus singular, i.e. R^-1 does not exist. The existence of R^-1 is a sufficient, but not a necessary, condition for the stability and the stochastic observability of the Kalman filter. There are two ways to respond to the singularity:
1. Use of R = 0. The filter can still be stable; since only short-term stability is required anyway, long-term stability can be dispensed with.
 2. Use of a reduced observer.
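One predict/update cycle of such an indirect filter on the extended four-state model can be sketched as follows (a numerical illustration only: the parameter values, the small-βT form of Φ, and the use of a tiny R instead of exactly 0, a variation of option 1 above, are all assumptions):

```python
import numpy as np

def kalman_step(x, P, Phi, Q, C, R, y):
    """One predict/update cycle of an indirect Kalman filter with a
    scalar measurement y."""
    x = Phi @ x                       # propagate the error state
    P = Phi @ P @ Phi.T + Q
    S = C @ P @ C + R                 # scalar innovation covariance
    K = P @ C / S                     # Kalman gain
    x = x + K * (y - C @ x)
    P = (np.eye(len(x)) - np.outer(K, C)) @ P
    return x, P

T, beta1, beta2 = 0.02, 0.5, 0.5         # assumed sample time / parameters
Phi3 = np.array([[1.0, T, T * T / 2.0],  # small beta*T approximation
                 [0.0, 1.0, T],
                 [0.0, 0.0, np.exp(-beta1 * T)]])
Phi = np.block([[Phi3, np.zeros((3, 1))],
                [np.zeros((1, 3)), np.array([[np.exp(-beta2 * T)]])]])
C = np.array([0.0, 0.0, -1.0, 1.0])      # y = e_a2 - e_a1 (Eq. 47)
Q = np.diag([0.0, 0.0, 1e-4, 1e-4])
R = 1e-6        # tiny instead of exactly 0, sidestepping the singularity
x, P = np.zeros(4), np.eye(4) * 1e-2
x, P = kalman_step(x, P, Phi, Q, C, R, y=0.03)
```

After the update, the estimated acceleration-error difference closely tracks the measured difference between the two parallel sensors.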
 FIG. 1
 the representation of the rotation of a vector by means of quaternions;
 FIG. 2
 a flow chart illustrating the application of the quaternion algorithm;
 FIG. 3
 a flow chart illustrating the implementation of the method according to the invention;
 FIG. 4
 the schematic representation of an acceleration sensor and a parallel thereto arranged redundant acceleration sensor;
 FIG. 5
 the schematic diagram of a Kalman filter with INS error modeling in feedforward configuration;
 FIG. 6
 a schematic representation of the result of the application of a Kalman filtering.
Claims (14)
Method for navigating and positioning an object relative to a patient during surgery in an operating theatre, characterized in that the position and orientation in space of both the object and the patient, or of a relevant area of the patient, relative to a referencing framework are determined quasi-continuously at a sensing rate by means of three-dimensional inertial sensors; that from this the current position and orientation of the object relative to the patient are determined; that this position and orientation are compared with a desired, predetermined position and orientation; and that an indication is given as to how the position of the object should be modified in order to be placed in the desired predetermined position and orientation.
Method according to Claim 1, characterized in that the sensing rate is 10 - 50 Hz, especially 10 - 40 Hz, especially 10 - 30 Hz and further especially 15 - 25 Hz.
 Apparatus for the implementation of the method according to Claim 1 or 2 comprising a first sensor device with acceleration sensors and rotational rate sensors that is attachable to and again removable from a first predetermined area of the object, and a second sensor device with acceleration sensors and rotational rate sensors that is attachable to and again removable from a patient, a memory, where the desired predetermined position and orientation of the object relative to the patient is stored, and computing means to determine the position and orientation from the measured sensor values, and computing means to compare the determined position and orientation with the predetermined position and orientation, and indication means to indicate how the position of the object should be modified in order to be placed into the desired predetermined position and orientation.
Apparatus according to Claim 3, characterized in that the measured values of the sensor device can be acquired and processed at a sensing rate of 10 - 50 Hz, especially 10 - 40 Hz, especially 10 - 30 Hz and further especially 15 - 25 Hz.
 Apparatus according to Claim 3 or 4, characterized in that the sensor devices have an orientation aid, which allows correct fastening to the object and to the patient, respectively.
 Apparatus according to Claim 3, 4 or 5, characterized in that the sensor devices have means of fastening for the removable attachment of the sensor device to the object and the patient.
Apparatus according to one or more of Claims 3 - 6, characterized in that the first and especially also the second sensor device comprise three acceleration sensors, whose signals may be used for the calculation of translational movements, and in addition three rotational rate sensors, whose measured values may be used for the determination of the orientation in space.
 Apparatus according to one or more of Claims 3 - 7, characterized in that the computing means comprise means for the execution of a quaternion algorithm.
 Apparatus according to one or more of Claims 3 - 8, characterized in that the computing means comprise means for the application of a compensation matrix that is determined and stored prior to the start of positioning, said compensation matrix allowing for a deviation of the axial orientation of the three rotational rate sensors from an assumed mutual orientation of the axes and compensating for the errors that would otherwise result in the calculation of the rotation angles.
 Apparatus according to one or more of Claims 3 - 9, characterized in that the computing means have means for executing a Kalman filter algorithm.
 Apparatus according to one or more of Claims 3 - 10, characterized in that, in addition to the rotational rate sensors, magnetic field sensors for the determination of the spatial orientation of the object are provided.
 Apparatus according to Claim 11, characterized in that means for comparing the spatial orientation determined from the values measured by the magnetic field sensors with the spatial orientation determined from the values measured by the rotational rate sensors are provided.
 Apparatus according to Claim 11 or 12, characterized in that means for comparing the spatial orientation determined from the values measured by the magnetic field sensors with the spatial orientation determined from the values measured by a gravitational acceleration sensor are provided.
 Apparatus according to one or more of Claims 3 - 13, characterized in that for each acceleration sensor a redundant acceleration sensor arranged parallel to it is provided for the implementation of Kalman filtering.
Priority Applications (2)
Application Number  Priority Date  Filing Date  Title 

DE102004057933A DE102004057933A1 (en)  2004-12-01  2004-12-01  Method and apparatus for navigating and positioning of an object relative to a patient 
PCT/EP2005/012473 WO2006058633A1 (en)  2004-12-01  2005-11-22  Method and device for navigating and positioning an object relative to a patient 
Publications (2)
Publication Number  Publication Date 

EP1817547A1 (en)  2007-08-15 
EP1817547B1 true EP1817547B1 (en)  2012-04-18 
Family
ID=35691583
Family Applications (1)
Application Number  Title  Priority Date  Filing Date 

EP05813615A Active EP1817547B1 (en)  2004-12-01  2005-11-22  Method and device for navigating and positioning an object relative to a patient 
Country Status (5)
Country  Link 

US (1)  US20070287911A1 (en) 
EP (1)  EP1817547B1 (en) 
AT (1)  AT554366T (en) 
DE (1)  DE102004057933A1 (en) 
WO (1)  WO2006058633A1 (en) 
Families Citing this family (37)
Publication number  Priority date  Publication date  Assignee  Title 

WO2004091419A2 (en)  20030408  20041028  Wasielewski Ray C  Use of microand miniature position sensing devices for use in tka and tha 
US8057482B2 (en)  20030609  20111115  OrthAlign, Inc.  Surgical orientation device and method 
EP2818187A3 (en) *  20050218  20150415  Zimmer, Inc.  Smart joint implant sensors 
DE102006032127B4 (en) *  20060705  20080430  Aesculap Ag & Co. Kg  Calibration and calibration device for a surgical referencing unit 
WO2009114829A2 (en) *  20080313  20090917  Thornberry Robert L  Computerguided system for orienting the acetabular cup in the pelvis during total hip replacement surgery 
CN101978243B (en) *  20080325  20130424  奥索瑟夫特公司  Tracking system and method 
AU2009227957B2 (en)  20080325  20140710  Orthosoft Inc.  Method and system for planning/guiding alterations to a bone 
US8029566B2 (en)  20080602  20111004  Zimmer, Inc.  Implant sensors 
DE102008030534A1 (en) *  20080627  20091231  Bort Medical Gmbh  Apparatus for determining the stability of a knee joint 
US8223121B2 (en) *  20081020  20120717  Sensor Platforms, Inc.  Host system and method for determining an attitude of a device undergoing dynamic acceleration 
US8515707B2 (en) *  20090107  20130820  Sensor Platforms, Inc.  System and method for determining an attitude of a device undergoing dynamic acceleration using a Kalman filter 
US8587519B2 (en)  20090107  20131119  Sensor Platforms, Inc.  Rolling gesture detection using a multidimensional pointing device 
WO2011088541A1 (en) *  20100119  20110728  Orthosoft Inc.  Tracking system and method 
US9901405B2 (en) *  20100302  20180227  Orthosoft Inc.  MEMSbased method and system for tracking a femoral frame of reference 
US9706948B2 (en) *  20100506  20170718  Sachin Bhandari  Inertial sensor based surgical navigation system for knee replacement surgery 
US8551108B2 (en)  20100831  20131008  Orthosoft Inc.  Tool and method for digital acquisition of a tibial mechanical axis 
US8957909B2 (en)  20101007  20150217  Sensor Platforms, Inc.  System and method for compensating for drift in a display of a user interface state 
GB201021675D0 (en)  20101220  20110202  Taylor Nicola J  Orthopaedic navigation system 
US9921712B2 (en)  20101229  20180320  Mako Surgical Corp.  System and method for providing substantially stable control of a surgical tool 
DE102011050240A1 (en)  20110510  20121115  Medizinische Hochschule Hannover  Apparatus and method for determining the relative position and orientation of objects 
US10342619B2 (en)  20110615  20190709  Brainlab Ag  Method and device for determining the mechanical axis of a bone 
US9459276B2 (en)  20120106  20161004  Sensor Platforms, Inc.  System and method for device selfcalibration 
WO2013104006A2 (en)  20120108  20130711  Sensor Platforms, Inc.  System and method for calibrating sensors for different operating environments 
US9228842B2 (en)  20120325  20160105  Sensor Platforms, Inc.  System and method for determining a uniform external magnetic field 
US9539112B2 (en) *  20120328  20170110  Robert L. Thornberry  Computerguided system for orienting a prosthetic acetabular cup in the acetabulum during total hip replacement surgery 
US9561387B2 (en)  20120412  20170207  University of Florida Research Foundation, Inc.  Ambiguityfree optical tracking system 
KR20150039801A (en)  20120803  20150413  스트리커 코포레이션  Systems and methods for robotic surgery 
US9820818B2 (en)  20120803  20171121  Stryker Corporation  System and method for controlling a surgical manipulator based on implant parameters 
US9119655B2 (en)  20120803  20150901  Stryker Corporation  Surgical manipulator capable of controlling a surgical instrument in multiple modes 
US9226796B2 (en)  20120803  20160105  Stryker Corporation  Method for detecting a disturbance as an energy applicator of a surgical instrument traverses a cutting path 
US9649160B2 (en)  20120814  20170516  OrthAlign, Inc.  Hip replacement navigation system and method 
US9008757B2 (en)  20120926  20150414  Stryker Corporation  Navigation system including optical and nonoptical sensors 
US9726498B2 (en)  20121129  20170808  Sensor Platforms, Inc.  Combining monitoring sensor measurements and system signals to determine device context 
US9652591B2 (en)  20130313  20170516  Stryker Corporation  System and method for arranging objects in an operating room in preparation for surgical procedures 
JP6461082B2 (en)  20130313  20190130  ストライカー・コーポレイション  Surgical system 
US10363149B2 (en)  20150220  20190730  OrthAlign, Inc.  Hip replacement navigation system and method 
DE102015217449B3 (en) *  20150911  20161229  Dialog Semiconductor B.V.  Sensor combination method for determining the orientation of an object 
Family Cites Families (10)
Publication number  Priority date  Publication date  Assignee  Title 

DE4205869A1 (en) *  19920226  19930902  Teldix Gmbh  Device for determining the relative orientation of a body 
DE4225112C1 (en) *  19920730  19931209  Bodenseewerk Geraetetech  Apparatus for measuring the position of an instrument relative to an object being processed, having a measuring device with an inertial sensor unit 
US5645077A (en) *  19940616  19970708  Massachusetts Institute Of Technology  Inertial orientation tracker apparatus having automatic drift compensation for tracking human head and other similarly sized body 
US6122538A (en) *  19970116  20000919  Acuson Corporation  MotionMonitoring method and system for medical devices 
DE19830359A1 (en) *  19980707  20000120  Helge Zwosta  Spatial position and movement determination of body and body parts for remote control of machine and instruments 
DE19946948A1 (en) *  19990930  20010405  Philips Corp Intellectual Pty  Method and arrangement for determining the position of a medical instrument 
AU3057802A (en) *  20001030  20020515  Naval Postgraduate School  Method and apparatus for motion tracking of an articulated rigid body 
DE10239673A1 (en) *  20020826  20040311  Peter Pott  Device for processing parts 
DE10312154B4 (en) *  20030317  20070531  FraunhoferGesellschaft zur Förderung der angewandten Forschung e.V.  Method and apparatus for performing object tracking 
US20060258938A1 (en) *  20050516  20061116  Intuitive Surgical Inc.  Methods and system for performing 3D tool tracking by fusion of sensor and/or camera derived data during minimally invasive robotic surgery 

2004
 2004-12-01 DE DE102004057933A patent/DE102004057933A1/en not_active Ceased

2005
 2005-11-22 AT AT05813615T patent/AT554366T/en unknown
 2005-11-22 EP EP05813615A patent/EP1817547B1/en active Active
 2005-11-22 WO PCT/EP2005/012473 patent/WO2006058633A1/en active Application Filing

2007
 2007-06-01 US US11/809,682 patent/US20070287911A1/en not_active Abandoned
Cited By (13)
Publication number  Priority date  Publication date  Assignee  Title 

US8974467B2 (en)  20030609  20150310  OrthAlign, Inc.  Surgical orientation system and method 
US8911447B2 (en)  20080724  20141216  OrthAlign, Inc.  Systems and methods for joint replacement 
US9572586B2 (en)  20080724  20170221  OrthAlign, Inc.  Systems and methods for joint replacement 
US8998910B2 (en)  20080724  20150407  OrthAlign, Inc.  Systems and methods for joint replacement 
US9192392B2 (en)  20080724  20151124  OrthAlign, Inc.  Systems and methods for joint replacement 
US10206714B2 (en)  20080724  20190219  OrthAlign, Inc.  Systems and methods for joint replacement 
US10321852B2 (en)  20080910  20190618  OrthAlign, Inc.  Hip surgery systems and methods 
US9931059B2 (en)  20080910  20180403  OrthAlign, Inc.  Hip surgery systems and methods 
US8974468B2 (en)  20080910  20150310  OrthAlign, Inc.  Hip surgery systems and methods 
US9271756B2 (en)  20090724  20160301  OrthAlign, Inc.  Systems and methods for joint replacement 
US10238510B2 (en)  20090724  20190326  OrthAlign, Inc.  Systems and methods for joint replacement 
US9339226B2 (en)  20100121  20160517  OrthAlign, Inc.  Systems and methods for joint replacement 
US9549742B2 (en)  20120518  20170124  OrthAlign, Inc.  Devices and methods for knee arthroplasty 
Also Published As
Publication number  Publication date 

AT554366T (en)  2012-05-15 
US20070287911A1 (en)  2007-12-13 
EP1817547A1 (en)  2007-08-15 
DE102004057933A1 (en)  2006-06-08 
WO2006058633A1 (en)  2006-06-08 
Similar Documents
Publication  Publication Date  Title 

Madgwick  An efficient orientation filter for inertial and inertial/magnetic sensor arrays  
US7089148B1 (en)  Method and apparatus for motion tracking of an articulated rigid body  
US6409687B1 (en)  Motion tracking system  
DK169045B1 (en)  A method for three-dimensional monitoring of the object space  
Yun et al.  Design, implementation, and experimental results of a quaternionbased Kalman filter for human body motion tracking  
US7895761B2 (en)  Measurement method and measuring device for use in measurement systems  
US6912475B2 (en)  System for three dimensional positioning and tracking  
US7587295B2 (en)  Image processing device and method therefor and program codes, storing medium  
JP5237723B2 (en)  System and method for gyrocompass alignment using dynamically calibrated sensor data and iterative extended Kalman filter in a navigation system  
US7233872B2 (en)  Difference correcting method for posture determining instrument and motion measuring instrument  
Jurman et al.  Calibration and data fusion solution for the miniature attitude and heading reference system  
Pittelkau  Kalman filtering for spacecraft system alignment calibration  
EP2542177B1 (en)  Mems based method and system for tracking a femoral frame of reference  
US8548766B2 (en)  Systems and methods for gyroscope calibration  
US5728935A (en)  Method and apparatus for measuring gravity with lever arm correction  
US20060227211A1 (en)  Method and apparatus for measuring position and orientation  
Weiss et al.  Realtime metric state estimation for modular visioninertial systems  
CN104736963B (en)  Mapping system and method  
Ren et al.  Investigation of attitude tracking using an integrated inertial and magnetic navigation system for handheld surgical instruments  
Martinelli et al.  Simultaneous localization and odometry self calibration for mobile robot  
US8676498B2 (en)  Camera and inertial measurement unit integration with navigation data feedback for feature tracking  
AU2004276459B2 (en)  Method and system for determining the spatial position of a handheld measuring appliance  
EP1154234B1 (en)  Electronic time piece with electronic azimuth meter and correcting mechanism for the electronic azimuth meter  
JP2012507011A (en)  Handheld positioning interface for spatial query  
Roetenberg et al.  Estimating body segment orientation by applying inertial and magnetic sensing near ferromagnetic materials 
Legal Events
Date  Code  Title  Description 

AK  Designated contracting states: 
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR 

17P  Request for examination filed 
Effective date: 20070524 

DAX  Request for extension of the european patent (to any country) deleted  
RIN1  Inventor (correction) 
Inventor name: VON LUEBTOW, KAI Inventor name: HAID, MARKUS Inventor name: SCHNEIDER, URS 

AK  Designated contracting states: 
Kind code of ref document: B1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR 

REG  Reference to a national code 
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH 

REG  Reference to a national code 
Ref country code: CH Ref legal event code: EP 

REG  Reference to a national code 
Ref country code: IE Ref legal event code: FG4D Free format text: LANGUAGE OF EP DOCUMENT: GERMAN 

REG  Reference to a national code 
Ref country code: AT Ref legal event code: REF Ref document number: 554366 Country of ref document: AT Kind code of ref document: T Effective date: 20120515 

REG  Reference to a national code 
Ref country code: DE Ref legal event code: R096 Ref document number: 502005012651 Country of ref document: DE Effective date: 20120621 

REG  Reference to a national code 
Ref country code: NL Ref legal event code: VDEP Effective date: 20120418 

LTIE  Lt: invalidation of european patent or patent extension 
Effective date: 20120418 

PG25  Lapsed in a contracting state announced via postgrant inform. from nat. office to epo 
Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120418 Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120418 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120418 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120418 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120818 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120418 

PG25  Lapsed in a contracting state announced via postgrant inform. from nat. office to epo 
Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120820 Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120418 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120719 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120418 

PG25  Lapsed in a contracting state announced via postgrant inform. from nat. office to epo 
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120418 Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120418 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120418 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120418 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120418 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120418 

PG25  Lapsed in a contracting state announced via postgrant inform. from nat. office to epo 
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120418 

26N  No opposition filed 
Effective date: 20130121 

PG25  Lapsed in a contracting state announced via postgrant inform. from nat. office to epo 
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120729 

REG  Reference to a national code 
Ref country code: DE Ref legal event code: R097 Ref document number: 502005012651 Country of ref document: DE Effective date: 20130121 

BERE  Be: lapsed 
Owner name: FRAUNHOFERGESELLSCHAFT ZUR FORDERUNG DER ANGEWAN Effective date: 20121130 

REG  Reference to a national code 
Ref country code: CH Ref legal event code: PL 

GBPC  Gb: european patent ceased through nonpayment of renewal fee 
Effective date: 20121122 

PG25  Lapsed in a contracting state announced via postgrant inform. from nat. office to epo 
Ref country code: LI Free format text: LAPSE BECAUSE OF NONPAYMENT OF DUE FEES Effective date: 20121130 Ref country code: CH Free format text: LAPSE BECAUSE OF NONPAYMENT OF DUE FEES Effective date: 20121130 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120718 

REG  Reference to a national code 
Ref country code: IE Ref legal event code: MM4A 

REG  Reference to a national code 
Ref country code: FR Ref legal event code: ST Effective date: 20130731 

PG25  Lapsed in a contracting state announced via postgrant inform. from nat. office to epo 
Ref country code: BE Free format text: LAPSE BECAUSE OF NONPAYMENT OF DUE FEES Effective date: 20121130 

PG25  Lapsed in a contracting state announced via postgrant inform. from nat. office to epo 
Ref country code: IE Free format text: LAPSE BECAUSE OF NONPAYMENT OF DUE FEES Effective date: 20121122 

PG25  Lapsed in a contracting state announced via postgrant inform. from nat. office to epo 
Ref country code: FR Free format text: LAPSE BECAUSE OF NONPAYMENT OF DUE FEES Effective date: 20121130 Ref country code: GB Free format text: LAPSE BECAUSE OF NONPAYMENT OF DUE FEES Effective date: 20121122 

REG  Reference to a national code 
Ref country code: AT Ref legal event code: MM01 Ref document number: 554366 Country of ref document: AT Kind code of ref document: T Effective date: 20121130 

PG25  Lapsed in a contracting state announced via postgrant inform. from nat. office to epo 
Ref country code: AT Free format text: LAPSE BECAUSE OF NONPAYMENT OF DUE FEES Effective date: 20121130 

PG25  Lapsed in a contracting state announced via postgrant inform. from nat. office to epo 
Ref country code: MC Free format text: LAPSE BECAUSE OF NONPAYMENT OF DUE FEES Effective date: 20121130 Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20120418 

PG25  Lapsed in a contracting state announced via postgrant inform. from nat. office to epo 
Ref country code: LU Free format text: LAPSE BECAUSE OF NONPAYMENT OF DUE FEES Effective date: 20121122 

PG25  Lapsed in a contracting state announced via postgrant inform. from nat. office to epo 
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIMELIMIT Effective date: 20051122 

REG  Reference to a national code 
Ref country code: DE Ref legal event code: R084 Ref document number: 502005012651 Country of ref document: DE 

PGFP  Postgrant: annual fees paid to national office 
Ref country code: DE Payment date: 20181122 Year of fee payment: 14 