EP1817547B1 - Method and device for navigating and positioning an object relative to a patient - Google Patents

Method and device for navigating and positioning an object relative to a patient

Info

Publication number
EP1817547B1
EP1817547B1 (application EP05813615A)
Authority
EP
European Patent Office
Prior art keywords
orientation
object
characterized
patient
sensors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP05813615A
Other languages
German (de)
French (fr)
Other versions
EP1817547A1 (en)
Inventor
Markus Haid
Urs Schneider
Kai VON LÜBTOW
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Original Assignee
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to DE102004057933A priority Critical patent/DE102004057933A1/en
Application filed by Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV filed Critical Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Priority to PCT/EP2005/012473 priority patent/WO2006058633A1/en
Publication of EP1817547A1 publication Critical patent/EP1817547A1/en
Application granted granted Critical
Publication of EP1817547B1 publication Critical patent/EP1817547B1/en
Application status is Active legal-status Critical
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in preceding groups G01C1/00-G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in preceding groups G01C1/00-G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in preceding groups G01C1/00-G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in preceding groups G01C1/00-G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/36 Image-producing devices or illumination devices not otherwise provided for
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046 Tracking techniques
    • A61B2034/2048 Tracking techniques using an accelerometer or inertia sensor
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046 Tracking techniques
    • A61B2034/2051 Electromagnetic tracking systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/39 Markers, e.g. radio-opaque or breast lesions markers
    • A61B2090/3954 Markers, e.g. radio-opaque or breast lesions markers magnetic, e.g. NMR or MRI
    • A61B2090/3958 Markers, e.g. radio-opaque or breast lesions markers magnetic, e.g. NMR or MRI emitting a signal

Abstract

The invention relates to a method and a device for navigating and positioning an object relative to a patient during surgery in an operating room. According to the invention, the position and orientation of the object and the patient in the room or a respective area of the patient relative to a reference system are determined quasi continuously in accordance with a scanning rate by means of a three-dimensional inertial sensor system, the momentary position and orientation of the object relative to the patient are determined therefrom, said position and orientation are compared to a desired predetermined position and orientation, and an indication is made as to how the position of the object has to be modified in order to reach the desired predetermined position and orientation.

Description

  • The invention relates to a method and an apparatus for navigating and positioning an object relative to a patient during a medical operation in the operating room. The navigating and positioning process may, for example, be the placement of a hip prosthesis in an exactly predetermined position relative to the femur of the patient. Until now, correct positioning has been checked visually using cameras in the operating room. For this purpose, visually recognizable marks were applied to the object or device to be navigated, and corresponding marks were also provided on the patient. DE 4225112 describes such an optical positioning system. Although a system of this kind operates with the required accuracy, it has numerous disadvantages. A system with several cameras and a control unit entails very high costs in the range of several tens of thousands of euros. Such a system is reference-based, i.e. it can in principle only be used stationarily. If the system is to be used at a different location, the cameras must be set up again at exactly predetermined positions. It is furthermore disadvantageous that only a limited working volume with dimensions of about 1 m is available. Particularly disadvantageous is the problem of shadowing: the system only works if the relevant marks can be captured simultaneously by all required cameras. If operating-room staff or other operating equipment is positioned between such a mark and a camera, the system does not work.
  • The object of the present invention is to improve a method and an apparatus of the type described above in such a way that the aforementioned disadvantages do not occur or are largely overcome.
  • This object is achieved by a method and an apparatus having the features of independent claims 1 and 3.
  • By applying the method according to the invention and the device according to the invention, the object, for example a prosthesis part or a surgical device, can be positioned in a predetermined position on the patient; in other words, the operating-room staff can change the position and orientation of the relevant object until it reaches the desired predetermined position relative to the patient or relative to a region of the patient. The claimed device involves much lower costs than the optical detection technique described above. The system according to the invention is largely transportable; apart from the sensor devices attachable to the object and to the patient, it requires no larger equipment that would have to be mounted laboriously at exactly predetermined stationary positions. The sensor devices comprising three-dimensional inertial sensors are also largely miniaturized; they can be realized in volumes of a few millimetres in dimension. These sensor devices can be fastened, and released again, at a predetermined location and in a predetermined orientation on the object on the one hand and at an equally predetermined position on the patient on the other hand. For this purpose, the sensor devices advantageously have a preferably visually recognizable orientation aid which allows correct attachment to the object or to the patient.
  • It also proves to be advantageous if the sensor devices have fastening means for detachably fixing the sensor device to the object and to the patient. This may be, for example, clamping or clip-like fastening means.
  • In order to be able to compare position data derived from the measured values of the first sensor device and the second sensor device in the course of carrying out the method, it is only necessary that a referencing of the sensor devices be carried out at the beginning of the positioning, by bringing both sensor devices to rest at a common location or at two locations with a known, predetermined orientation relative to each other. Starting from this reference, the displacement of each sensor device is then determined using the three-dimensional sensor system and fed to the further data processing.
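  • As an illustration of this common-referencing step, the following minimal sketch (Python/NumPy; all function and variable names are hypothetical and not taken from the patent) shows how the momentary pose of the object relative to the patient could be derived from the two sensor-device tracks once both devices have been referenced to the same starting point, and how a correction hint towards the stored target position could be produced:

```python
import numpy as np

def relative_pose(p_obj, R_obj, p_pat, R_pat):
    """Pose of the object expressed in the patient frame.

    p_obj, p_pat: 3-vectors obtained by double integration of the two sensor
    devices, both referenced to the same starting point before positioning.
    R_obj, R_pat: 3x3 rotation matrices giving each device's orientation in space.
    """
    R_rel = R_pat.T @ R_obj            # orientation of the object relative to the patient
    p_rel = R_pat.T @ (p_obj - p_pat)  # position of the object in the patient frame
    return p_rel, R_rel

def correction_hint(p_rel, p_target, tol=1e-3):
    """Remaining displacement (patient frame) needed to reach the stored target position."""
    delta = p_target - p_rel
    return delta if np.linalg.norm(delta) > tol else np.zeros(3)
```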
  • The first and in particular also the second sensor device preferably have three acceleration sensors whose signals can be used to calculate a translatory movement, and additionally three rotation rate sensors whose measured values can be used to determine the orientation in space.
  • In addition to the rotation rate sensors, magnetic field sensors can advantageously be provided for determining the orientation of the object in space. The magnetic field sensors detect components of the earth's magnetic field and are in turn able to provide information about the orientation of the sensor device in space.
  • In such a case, it proves to be advantageous if means are provided for comparing the orientation in space determined from measured values of the magnetic field sensors with the orientation in space determined from measured values of the rotation rate sensors. In addition, an acceleration sensor for measuring the gravitational acceleration can be provided, which can likewise be used to determine the orientation of the considered sensor device in space.
  • Furthermore, it proves to be particularly advantageous if the computing means of the device according to the invention comprise means for executing a quaternion algorithm known per se from DE 103 12 154 A1.
  • Furthermore, it proves to be advantageous if the computing means have means for applying a previously determined and stored compensation matrix, which compensation matrix accounts for a deviation of the orientation of the axes of the three rotation rate sensors from an assumed mutual orientation of the axes and, by its application, compensates for the error that would otherwise result in the calculation of the rotation angles.
  • Furthermore, it proves to be advantageous if the computing means have means for executing a Kalman filter algorithm.
  • Since inertial sensors deliver acceleration-based measurements, these measured values are integrated twice in the case of the acceleration sensors to determine position data, and integrated once in the case of the rotation rate sensors. In the course of these integration processes, errors gradually add up. In this respect, it proves to be advantageous that the method, and the device related to it, is designed so that before the start of a data acquisition, and then again whenever a resting of the device is detected for a certain period of time, an offset value of the output signal of the rotation rate sensors is determined and is subsequently subtracted, until the next determination of this offset value for the respective rotation rate sensor, so that it does not enter into the integration. In this way it is ensured that a current offset value is determined again and again, so that the greatest possible accuracy is achieved.
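  • A minimal sketch of this repeated offset determination (Python/NumPy; names, window sizes and thresholds are illustrative assumptions, not values from the patent): whenever a rest phase is detected, the mean sensor outputs are stored as the current offsets and subtracted from every subsequent sample before integration.

```python
import numpy as np

def is_at_rest(gyro_window, accel_window, w_thresh=0.01, a_thresh=0.05):
    """Crude rest detector: low scatter over a short window (thresholds assumed)."""
    return (np.std(gyro_window, axis=0).max() < w_thresh and
            np.std(accel_window, axis=0).max() < a_thresh)

def estimate_offsets(gyro_window, accel_window):
    """Mean output of each sensor triad during rest; the rows of the drift matrix D."""
    return gyro_window.mean(axis=0), accel_window.mean(axis=0)

def corrected(sample, offset):
    """Subtract the most recently determined offset before the sample is integrated."""
    return sample - offset
```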
  • It was also found that the unavoidable deviation of the orientation of the axes of the three rotation rate sensors from an assumed orientation very quickly leads to inaccuracies in the calculation of the rotation angles. By compensating for this error using the compensation matrix determined for the relevant sensor device, an increase in accuracy that meets the requirements can be achieved. To determine the compensation matrix, the relevant sensor device is brought into a rotating movement about one axis at a time before the object tracking is started, while the other two axes are held still. From the rotation rate sensor signals obtained in this way, the compensation matrix is calculated and stored in a memory of the claimed device. An industrial robot can be used for the rotary drive of the sensor device. In this way, the 3 × 3 non-orthogonality matrix can be recorded successively by rotating about the individual spatial axes:
    $$N = \begin{pmatrix} N_{11} & N_{12} & N_{13} \\ N_{21} & N_{22} & N_{23} \\ N_{31} & N_{32} & N_{33} \end{pmatrix}$$
  • In an ideal system, the off-diagonal elements of the non-orthogonality matrix would be zero. The deviations are due to manufacturing tolerances which cause the axes of the rotation rate sensors to be neither exactly orthogonal to one another nor in the predetermined orientation relative to the housing of the sensor device.
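  • As an illustration only, and assuming the compensation amounts to inverting the recorded non-orthogonality matrix (the patent does not spell out the arithmetic), applying the stored compensation matrix to each raw rotation-rate sample might look like this; the numerical values of N are invented:

```python
import numpy as np

# Non-orthogonality matrix recorded once per sensor device, e.g. by rotating it
# about each axis in turn on an industrial robot (values here are invented).
N = np.array([[ 1.000, 0.012, -0.008],
              [ 0.009, 1.000,  0.015],
              [-0.011, 0.007,  1.000]])

# Compensation matrix: maps the skewed rate-sensor triad back onto orthogonal
# body axes before the rotation angles are integrated.
C_comp = np.linalg.inv(N)

def compensate(omega_raw):
    """Apply the stored compensation matrix to a raw rotation-rate sample (rad/s)."""
    return C_comp @ omega_raw
```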
  • Where it was said above that an offset value is determined whenever a resting of the device is detected for a certain period of time, this amounts to the determination of a drift vector, for the three preferably orthogonal rotation rate sensors and likewise for the three preferably orthogonally oriented acceleration sensors. In this way a matrix D can be determined whose rows are the offsets of the individual sensors. These offsets are preferably determined anew at the beginning of an object tracking and then repeatedly whenever a rest position is detected, and are used in the further data processing:
    $$D = \begin{pmatrix} D_1 \\ D_2 \\ D_3 \end{pmatrix}$$
  • According to an embodiment of the invention, the accuracy of the orientation determination is further increased by applying a quaternion algorithm of the type defined below to the three rotation angles at each measurement data acquisition, in order to calculate the current orientation of the object in space. The further improvement is due to the following circumstance: if the orientation change of the object were determined at each infinitesimal step of the scan, i.e. at each measurement data acquisition, by simply applying the three infinitesimal rotation angles obtained by integration as successive rotations about the respective axes, an error would result. The measurement data of the three sensors are taken at the same time, because a rotation usually takes place about all three axes simultaneously. If, however, the three determined rotation angles were subsequently applied one after the other as rotations about the respective axes in order to determine the change of orientation, an error would result for the rotations about the second and the third axis, since these axes were already brought into a different orientation during the first rotation. This is countered by applying the quaternion algorithm to the three rotation angles: the three rotations are replaced by a single transformation. The quaternion algorithm is defined as follows:
  • The quaternion is defined as
    $$q = q_0 + q_1 i + q_2 j + q_3 k$$
    with the conditions of Eqs. 4 to 8:
    $$q_0, \ldots, q_3 \in \mathbb{R}$$
    $$i^2 = j^2 = k^2 = -1$$
    $$ij = -ji = k$$
    $$jk = -kj = i$$
    $$ki = -ik = j$$
  • By combining the complex parts into a vector v and setting q_0 = w, one obtains the notation
    $$q = \begin{pmatrix} w & v \end{pmatrix}^{T}$$
    with the conditions of Eqs. 10 and 11:
    $$w \in \mathbb{R}$$
    $$v \in \mathbb{R}^{3}$$
  • For the use of the quaternions, the definitions of Eq. 12 to Eq. 17 apply.
  • Conjugate quaternion:
    $$q^{k} = \begin{pmatrix} w & -v \end{pmatrix}$$
  • Magnitude:
    $$|q| = \sqrt{w^2 + v \cdot v}$$
  • Inversion:
    $$q^{-1} = \frac{q^{k}}{|q|^{2}}$$
  • Multiplication:
    $$q_1 \otimes q_2 = \begin{pmatrix} w_1 \\ v_1 \end{pmatrix} \otimes \begin{pmatrix} w_2 \\ v_2 \end{pmatrix} = \begin{pmatrix} w_1 w_2 - v_1 \cdot v_2 \\ w_1 v_2 + w_2 v_1 + v_1 \times v_2 \end{pmatrix}$$
  • Representation of a vector:
    $$q_v = \begin{pmatrix} 0 & v \end{pmatrix}$$
  • Representation of a scalar:
    $$q_w = \begin{pmatrix} w & 0 \end{pmatrix}$$
  • Of special importance for inertial object tracking is the multiplication: it represents a rotation expressed by a quaternion. For this purpose, the rotation quaternion of Eq. 18 is introduced:
    $$q_{rot} = \begin{pmatrix} \cos\dfrac{|\varphi|}{2} \\[4pt] \sin\dfrac{|\varphi|}{2}\,\dfrac{\varphi}{|\varphi|} \end{pmatrix}$$
  • The vector φ consists of the individual rotations around the coordinate axes.
  • A rotation of a point or vector can now be calculated as follows. First, the coordinates of the point or vector are converted into a quaternion according to Eq. 16, and then the multiplication with the rotation quaternion of Eq. 18 is carried out. The resulting quaternion contains the rotated vector in the same notation. Provided the magnitude of the rotation quaternion equals one (Eq. 20), the inverted quaternion can be replaced by the conjugate, giving Eq. 19:
    $$q_{v'} = q_{rot} \otimes q_v \otimes q_{rot}^{-1} = q_{rot} \otimes q_v \otimes q_{rot}^{k}$$
    $$|q_{rot}| = 1$$
  • How can this operation be visualized? The vector φ is the normal vector of the plane in which the rotation is performed, and the rotation angle corresponds to the magnitude of the vector φ; see Fig. 1.
  • From Fig. 1 it can be seen that a rotation in an arbitrary plane can be performed with only a single angle. This also shows the particular advantages of this approach. Further advantages are the small number of necessary parameters and of trigonometric functions, which for small angles can be replaced entirely by approximations. For the rotation with the angular-rate vector ω, the differential equation 21 applies:
    $$\frac{d}{dt}\, q_{rot} = \frac{1}{2}\, q_{rot} \otimes \begin{pmatrix} 0 \\ \omega \end{pmatrix}$$
  • The concrete implementation of the quaternion algorithm is shown in FIG. 2 and proceeds as follows. The entire calculation is carried out using unit vectors. Starting from the start orientation, the start unit vectors E_x, E_y and E_z are determined.
  • From the unit vectors, the rotation matrix is calculated according to Eq. 22:
    $$R = \begin{pmatrix} q_0^2 + q_1^2 - q_2^2 - q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & q_0^2 - q_1^2 + q_2^2 - q_3^2 & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2 - q_1^2 - q_2^2 + q_3^2 \end{pmatrix}$$
  • Starting from a start orientation of the coordinate system connected to the object, in particular from the so-called start unit vectors, the rotation matrix R, a 3 × 3 matrix, is calculated according to Equation 22. By inverting this equation, a rotation quaternion q_rot(k) can be obtained. With the help of the zero quaternion, which results from the start unit vectors, the start quaternion is calculated by a multiplication with the rotation quaternion. At the next sampling step, i.e. at the next measurement data acquisition and integration yielding the current infinitesimal rotation angles A, B, C, the rotation quaternion q_rot(k) to be used for this step is calculated. The quaternion q_akt(k-1) formed in the previous step is then multiplied by this rotation quaternion q_rot(k) according to the quaternion multiplication defined above, giving the current quaternion of the present k-th step, q_akt(k). From this current quaternion, the current orientation of the object can be specified in any desired representation for the current sampling step.
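  • The following sketch (Python/NumPy) illustrates one possible implementation of this per-step quaternion update; the function names and the multiplication order q_akt(k) = q_akt(k-1) ⊗ q_rot(k) follow the description above, but sign and frame conventions are assumptions rather than a verbatim reproduction of the patented algorithm:

```python
import numpy as np

def quat_mult(q1, q2):
    """Quaternion product for q = [w, x, y, z] (multiplication rule above)."""
    w1, v1 = q1[0], q1[1:]
    w2, v2 = q2[0], q2[1:]
    return np.hstack((w1 * w2 - np.dot(v1, v2),
                      w1 * v2 + w2 * v1 + np.cross(v1, v2)))

def rotation_quaternion(phi):
    """Rotation quaternion for the incremental rotation vector phi (Eq. 18)."""
    angle = np.linalg.norm(phi)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = phi / angle
    return np.hstack((np.cos(angle / 2.0), np.sin(angle / 2.0) * axis))

def propagate(q_akt, omega, dt):
    """One sampling step: update q_akt(k-1) with the offset-corrected rates omega (rad/s)."""
    q_rot = rotation_quaternion(omega * dt)   # infinitesimal rotation angles A, B, C
    q_new = quat_mult(q_akt, q_rot)           # single transformation instead of three rotations
    return q_new / np.linalg.norm(q_new)      # keep |q| = 1 so the inverse equals the conjugate

def to_rotation_matrix(q):
    """Rotation matrix from the current quaternion (Eq. 22)."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),             q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),             2*(q2*q3 + q0*q1),             q0*q0 - q1*q1 - q2*q2 + q3*q3]])
```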
  • It has already been pointed out that a Kalman filter algorithm can be used to increase the accuracy in determining or calculating the position data. The concept of Kalman filtering, in particular of indirect Kalman filtering, presupposes the existence of support information. The difference between the information obtained from the measured values of the sensors and this support information then serves as input to the Kalman filter. However, since the method and the device according to the invention do not receive any continuous information from a reference system, support information for determining the position is fundamentally unavailable. In order nevertheless to enable the use of indirect Kalman filtering, it is proposed to use a second acceleration sensor arranged in parallel. The difference of the sensor signals of the parallel acceleration sensors then serves as the input signal for the Kalman filter. Figures 3, 4 and 5 schematically show the inventive concept of a redundant parallel system for Kalman filtering, in which two sensors are arranged so that their sensitive sensor axes extend parallel to each other (FIG. 4).
  • Advantageously, both integration steps are included in the modelling. One thus obtains an estimate of the position error which inevitably results from the two-fold integration. FIG. 5 illustrates this schematically in a feed-forward configuration as a concrete realization of a general indirect Kalman filter.
  • In this concept, a first-order Gauss-Markov process driven by white noise models the acceleration error. The model is based on the fact that the position error results from the acceleration error by double integration. This yields equations 23 to 25:
    $$\dot{e}_s(t) = e_v(t)$$
    $$\dot{e}_v(t) = e_a(t)$$
    $$\dot{e}_a(t) = -\beta\, e_a(t) + w_a(t)$$
  • Based on the general stochastic state-space description of a time-continuous system model with state vector x(t), system matrix F, stochastic input matrix G and process noise w(t), the system equations 26 and 27 result:
    $$\dot{x}(t) = F\, x(t) + G\, w(t)$$
    $$\begin{pmatrix} \dot{e}_s(t) \\ \dot{e}_v(t) \\ \dot{e}_a(t) \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & -\beta \end{pmatrix} \begin{pmatrix} e_s(t) \\ e_v(t) \\ e_a(t) \end{pmatrix} + \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} w_s(t) \\ w_v(t) \\ w_a(t) \end{pmatrix}$$
  • For the process noise w(t), equations 28 and 29 apply:
    $$E\{w(t)\} = 0$$
    $$E\{w(t)\, w(\tau)^{T}\} = Q_d\, \delta(t - \tau)$$
  • The general stochastic state-space description of the equivalent time-discrete system model is given by equations 30 and 31:
    $$x(k+1 \mid k) = \Phi(T)\, x(k \mid k) + w_d(k \mid k)$$
    $$\begin{pmatrix} e_s(k+1 \mid k) \\ e_v(k+1 \mid k) \\ e_a(k+1 \mid k) \end{pmatrix} = \Phi(T) \begin{pmatrix} e_s(k \mid k) \\ e_v(k \mid k) \\ e_a(k \mid k) \end{pmatrix} + w_d(k \mid k)$$
  • Equations 32 and 33 give the required time-discrete measurement equation:
    $$y(k) = C\, x(k) + v(k)$$
    $$y(k) = C \begin{pmatrix} e_s(k) \\ e_v(k) \\ e_a(k) \end{pmatrix} + v(k)$$
  • In Equation 33, v(k) is a vectorial white-noise process. The difference between the two sensor signals serves as the input for the Kalman filter. This yields equations 34 to 36 for the measurement equation:
    $$y(k) = \Delta e_a(k) = \bigl(a_2(k) + e_{a2}(k)\bigr) - \bigl(a_1(k) + e_{a1}(k)\bigr)$$
    $$y(k) = e_{a2}(k) - e_{a1}(k)$$
    $$y(k) = \begin{pmatrix} 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} e_{s1}(k) \\ e_{v1}(k) \\ e_{a1}(k) \end{pmatrix} + e_{a2}(k)$$
  • In this model, both e_{a1}(k) and e_{a2}(k) are modelled as first-order Gauss-Markov processes. The approach of equation 37 serves this purpose:
    $$\dot{e}_{a2}(t) = -\beta\, e_{a2}(t) + w_{a2}(t)$$
  • The equivalent time-discrete approach follows according to Equation 38:
    $$e_{a2}(k+1 \mid k) = e^{-\beta T}\, e_{a2}(k \mid k) + w_{a2}(k \mid k)$$
  • Considering e_{a2}(k) as a further state, the extended system model of equations 39 and 40 results:
    $$x_e(k+1 \mid k) = \Phi_e(T)\, x_e(k \mid k) + w_{de}(k \mid k)$$
    $$\begin{pmatrix} e_{s1}(k+1 \mid k) \\ e_{v1}(k+1 \mid k) \\ e_{a1}(k+1 \mid k) \\ e_{a2}(k+1 \mid k) \end{pmatrix} = \Phi_e(T) \begin{pmatrix} e_{s1}(k \mid k) \\ e_{v1}(k \mid k) \\ e_{a1}(k \mid k) \\ e_{a2}(k \mid k) \end{pmatrix} + w_{de}(k \mid k)$$
  • Equations 41 to 43 apply here:
    $$\Phi_e(T) = \begin{pmatrix} \Phi(T) & 0 \\ 0 & e^{-\beta_2 T} \end{pmatrix}$$
    $$w_{de} = \begin{pmatrix} w_{a1}(k) \\ w_{a2}(k) \end{pmatrix}$$
    $$Q_{de} = \begin{pmatrix} Q_d & 0 \\ 0 & q_{a2} \end{pmatrix}$$
  • For the extended measurement model, equations 44 to 47 apply:
    $$y(k) = \bigl(a_2(k) + e_{a2}(k)\bigr) - \bigl(a_1(k) + e_{a1}(k)\bigr)$$
    $$y(k) = e_{a2}(k) - e_{a1}(k)$$
    $$y(k) = C\, x(k) + v(k)$$
    $$y(k) = \begin{pmatrix} 0 & 0 & -1 & \big| & 1 \end{pmatrix} \begin{pmatrix} e_{s1}(k) \\ e_{v1}(k) \\ e_{a1}(k) \\ e_{a2}(k) \end{pmatrix} + v(k)$$
  • This measurement equation represents a perfect, i.e. error-free, measurement; no measurement noise v(k) occurs. This requires the modelling of Equation 48:
    $$R(k) = r(k) = 0$$
  • The covariance matrix R of the measurement noise is thus singular, i.e. R⁻¹ does not exist. The existence of R⁻¹ is a sufficient but not a necessary condition for the stability or the stochastic observability of the Kalman filter. There are two ways to deal with the singularity:
  1. Use of R = 0. The filter can still be stable; since only short-term stability is required anyway, long-term stability can be dispensed with.
  2. Use of a reduced-order observer.
  • In this concept, the variance R = 0 is used. The filters used are thus sufficiently stable.
  • The Kalman filter equations for the one-dimensional system in discrete form are given by equations 49 to 55.
  • Determination of the Kalman gain according to equation 49:
    $$K(k+1) = P(k+1 \mid k)\, C^{T}(k)\, \bigl[\, C(k)\, P(k+1 \mid k)\, C^{T}(k) + R(k) \,\bigr]^{-1}$$
  • Update of the state estimate according to Equation 50:
    $$\hat{x}(k+1 \mid k+1) = \hat{x}(k+1 \mid k) + K(k+1)\, \tilde{y}(k+1)$$
  • where equation 51 holds:
    $$\tilde{y}(k) = y(k) - \hat{y}(k) = y(k) - C(k)\, \hat{x}(k)$$
  • Update of the covariance matrix of the estimation error according to equations 52 and 53:
    $$P(k+1 \mid k+1) = \bigl[I - K(k+1)\, C(k)\bigr]\, P(k+1 \mid k)\, \bigl[I - K(k+1)\, C(k)\bigr]^{T} + K(k+1)\, R(k)\, K^{T}(k+1)$$
    $$P(k+1 \mid k+1) = \bigl[I - K(k+1)\, C(k)\bigr]\, P(k+1 \mid k)$$
  • Determination of the prediction value of the system state according to equation 54:
    $$\hat{x}(k+2 \mid k+1) = \Phi(T)\, \hat{x}(k+1 \mid k+1)$$
  • Determination of the prediction value of the covariance matrix of the estimation error according to equation 55:
    $$P(k+2 \mid k+1) = \Phi(T)\, P(k+1 \mid k+1)\, \Phi^{T}(T) + Q(k)$$
  • With this, the filter cycle is complete and begins again with the next measured value. The filter works recursively, passing through the prediction and correction steps for each measurement.
  • The system used describes a three-dimensional translation in three orthogonal spatial axes. These translations are described by the path s, the velocity v and the acceleration a. For an indirect Kalman filter, an additional acceleration sensor also supplies acceleration information for each spatial direction.
  • The basic algorithm of the structure is shown in Figure 5. The actual measurement signal comes from an acceleration sensor in the form of acceleration a for each spatial axis. With the aid of the additional information in the form of the sensor signal of the second acceleration sensor for each spatial axis, the Kalman filter algorithm provides an estimated value for the deviation of the acceleration signal ea for the three spatial directions x, y and z.
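  • The following sketch (Python/NumPy/SciPy) illustrates one possible per-axis realization of this indirect, error-state Kalman filter, with the difference of the two parallel accelerometer signals as the measurement (Eqs. 34 to 55). The numerical parameters and the simplified process-noise covariance are assumptions for illustration, not values from the patent:

```python
import numpy as np
from scipy.linalg import expm

# --- model parameters (illustrative values) ---
T    = 1.0 / 20.0   # sampling period, approx. 20 Hz
beta = 0.5          # Gauss-Markov time constant of the acceleration error
q_a1 = 1e-3         # driving-noise variances of the two accelerometer error models
q_a2 = 1e-3

# Continuous error model (Eq. 27): d/dt [e_s, e_v, e_a1]
F = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, -beta]])
Phi = expm(F * T)                       # discrete transition matrix Phi(T)

# Extended model (Eqs. 39-43): fourth state e_a2 with its own Gauss-Markov model
Phi_e = np.zeros((4, 4))
Phi_e[:3, :3] = Phi
Phi_e[3, 3]   = np.exp(-beta * T)
Q_e = np.diag([0.0, 0.0, q_a1, q_a2])   # simplified process-noise covariance (assumption)
C   = np.array([[0.0, 0.0, -1.0, 1.0]]) # measurement matrix (Eq. 47)
R   = np.array([[0.0]])                 # perfect measurement, R = 0 (Eq. 48)

def kalman_step(x, P, a1, a2):
    """One filter cycle: prediction (Eqs. 54, 55) and correction (Eqs. 49-53)."""
    # prediction
    x = Phi_e @ x
    P = Phi_e @ P @ Phi_e.T + Q_e
    # correction with the difference of the parallel accelerometer signals (Eq. 34)
    y = np.array([a2 - a1])
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    x = x + (K @ (y - C @ x)).ravel()
    P = (np.eye(4) - K @ C) @ P
    return x, P

# usage: x = [e_s, e_v, e_a1, e_a2], run once per sample and per spatial axis
x, P = np.zeros(4), np.eye(4) * 1e-2
x, P = kalman_step(x, P, a1=0.020, a2=0.018)
```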
  • Further features, details and advantages of the invention emerge from the claims, from the drawings and from the following description of the invention. The drawings show:
  • FIG. 1
    the representation of the rotation of a vector by means of quaternions;
    FIG. 2
    a flow chart illustrating the application of the quaternion algorithm;
    FIG. 3
    a flow chart illustrating the implementation of the method according to the invention;
    FIG. 4
    the schematic representation of an acceleration sensor and a parallel thereto arranged redundant acceleration sensor;
    FIG. 5
    the schematic representation of a Kalman filter with INS error modelling in feed-forward configuration;
    FIG. 6
    a schematic representation of the result of the application of a Kalman filtering.
  • The FIGS. 1, 2 and 4 to 6 have already been explained in advance.
  • As already mentioned, at the beginning of the object tracking the object to be positioned is provided with the first sensor device at a predetermined location. The object is then brought to rest in the room and thereby referenced to a stationary coordinate system, for example that of the operating table, in that the angles and accelerations determined from the signals of the rotation rate sensors and acceleration sensors are initially set to zero. During an absolute rest phase of the object, an offset value is then determined which is taken into account at each sampling step, i.e. at each data acquisition. This is a drift vector whose components comprise the determined offset values of the sensors.
  • It proves to be particularly advantageous that, each time a resting of the sensor device is detected, the offset values of the sensors are determined anew and taken as the basis for the further calculation of position and orientation.
  • Furthermore, the aforementioned compensation matrix is determined, which corresponds to the deviation of the axes of the rotation rate sensors from an assumed orientation relative to one another and to the housing of the sensor device, and which compensates for this deviation exactly.
  • The above explanations apply correspondingly to the second sensor device which is to be fixed on the patient.
  • When carrying out the positioning and orientation, the sensor signals are detected at successive times, for example with a sampling rate of 10 to 30 Hz, in particular of approximately 20 Hz, and converted into infinitesimal rotation angles or into position data by single or double integration. In doing so, the compensation matrix for the non-orthogonality of the rotation rate sensors is taken into account in order to achieve increased accuracy in determining the orientation.
  • Based on the infinitesimally small angle changes determined in each case from the measurement signals of the three rotation rate sensors, the orientation of the rotated coordinate system of the object relative to the reference coordinate system could be determined by the Euler method, i.e. by specifying three angles. Instead, it proves advantageous to use a quaternion algorithm of the type described above to determine the orientation. As a result, a single transformation takes the place of three rotations executed one after the other, which further increases the accuracy of the resulting orientation of the object system.
  • As a result of performing the quaternion algorithm, the orientation of the object in space is given.
  • As indicated in FIG. 3, it is also possible to determine, by means of further sensors, for example a three-dimensional magnetic field sensor, the magnetic field acting on the object at arbitrary times; this is the geomagnetic field, which is known at a given location. Furthermore, it is conceivable to additionally provide a three-dimensional acceleration sensor for measuring the gravitational acceleration. The measurement signals of the magnetic field sensors and of the acceleration sensors can be combined to form an electronic three-dimensional compass, which can indicate the orientation of the object in space with high accuracy provided no disturbing effects are present, preferably if the measured values were taken while the object was at rest. The orientation of the object in space obtained in this way can be used as support information for the orientation obtained solely from the signals of the three rotation rate sensors. First, the measurement signals of the magnetic field sensors and of the acceleration sensors are examined to see whether or not they are affected by disturbances. If they are not, they are used as support information and compared with the orientation information obtained from the three rotation rate sensors. For this purpose, a Kalman filter algorithm is advantageously used: an estimation algorithm in which the orientation of the object determined from the aforementioned three-dimensional compass is used as correct support information when compared with the orientation obtained from the rotation rate sensors.
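  • A minimal sketch of such an electronic three-dimensional compass (Python/NumPy) is given below; it uses the common tilt-compensation formulas with an aerospace-style axis convention, which is an assumption, since the patent does not prescribe how the magnetic-field and gravity measurements are combined:

```python
import numpy as np

def compass_orientation(accel, mag):
    """Roll and pitch from the gravity vector, tilt-compensated yaw from the magnetic field.

    Intended as support information only while the object is at rest and the
    measurements are free of disturbances. accel and mag are 3-vectors in the
    sensor frame (x forward, y right, z down assumed). Returns angles in radians.
    """
    ax, ay, az = accel / np.linalg.norm(accel)
    roll  = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, ay * np.sin(roll) + az * np.cos(roll))

    mx, my, mz = mag / np.linalg.norm(mag)
    bx = (mx * np.cos(pitch) + my * np.sin(pitch) * np.sin(roll)
          + mz * np.sin(pitch) * np.cos(roll))   # horizontal field component (north)
    by = my * np.cos(roll) - mz * np.sin(roll)   # horizontal field component (east)
    yaw = np.arctan2(-by, bx)
    return roll, pitch, yaw
```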
  • As described in detail above, the measured values of the acceleration sensors can also be improved by Kalman filtering. As a substitute for support information that is otherwise not accessible, a redundant acceleration sensor arranged in parallel is preferably provided for each acceleration sensor. With the aid of this additional information, in the form of the measured-value signal of the second acceleration sensor for each spatial axis, an estimated value for the error-related deviation of the acceleration measurement signal for the respective spatial direction can be determined.
  • Claims (14)

    1. Method for navigating and positioning an object relative to a patient during surgery in an operating theatre, characterized in that the position and orientation in the room of both the object and the patient or of a relevant area of the patient relative to a referencing framework is determined quasi continuously according to a sensing rate by means of three-dimensional inertial sensors, and that from this the current position and orientation of the object relative to the patient is determined, that this position and orientation are compared with a desired, predetermined position and orientation, and that there is an indication as to how the position of the object should be modified in order to be placed in the desired predetermined position and orientation.
    2. Method according to Claim 1, characterized in that the sensing rate is 10 - 50 Hz, especially 10 - 40 Hz, especially 10 - 30 Hz and further especially 15 - 25 Hz.
    3. Apparatus for the implementation of the method according to Claim 1 or 2 comprising a first sensor device with acceleration sensors and rotational rate sensors that is attachable to and again removable from a first predetermined area of the object, and a second sensor device with acceleration sensors and rotational rate sensors that is attachable to and again removable from a patient, a memory, where the desired predetermined position and orientation of the object relative to the patient is stored, and computing means to determine the position and orientation from the measured sensor values, and computing means to compare the determined position and orientation with the predetermined position and orientation, and indication means to indicate how the position of the object should be modified in order to be placed into the desired predetermined position and orientation.
    4. Apparatus according to Claim 3, characterized in that the measured values of the sensor device can be acquired and processed at a sensing rate of 10 - 50 Hz, especially 10 - 40 Hz, especially 10 - 30 Hz and further especially 15 - 25 Hz.
    5. Apparatus according to Claim 3 or 4, characterized in that the sensor devices have an orientation aid, which allows correct fastening to the object and to the patient, respectively.
    6. Apparatus according to Claim 3, 4 or 5, characterized in that the sensor devices have means of fastening for the removable attachment of the sensor device to the object and the patient.
    7. Apparatus according to one or more of Claims 3 - 6, characterized in that the first and, especially also the second sensor device comprise three acceleration sensors, whose signals may be used for the calculation of translational movements, and in addition three rotational rate sensors, whose measured values may be used for the determination of the orientation in the room.
    8. Apparatus according to one or more of Claims 3 - 7, characterized in that the computing means comprise means for the execution of a quaternion algorithm.
    9. Apparatus according to one or more of Claims 3 - 8, characterized in that the computing means comprise means for the application of a compensation matrix that is determined and stored prior to the start of positioning, said compensation matrix allowing for a deviation of the axial orientation of the three rotational rate sensors from an assumed orientation of the axes toward each other and compensating for errors resulting from the calculation of the rotation angles.
    10. Apparatus according to one or more of Claims 3 - 9, characterized in that the computing means have means for executing a Kalman filter algorithm.
    11. Apparatus according to one or more of claims 3 - 10, characterized in that in addition to the rotational rate sensors, magnetic field sensors for the determination of the space orientation of the object are provided.
    12. Apparatus according to Claim 11, characterized in that means for comparing the space orientation determined from the values measured by the magnetic field sensors with the space orientation determined by the values measured by the rotational rate sensors are provided.
    13. Apparatus according to Claim 11 or 12, characterized in that means for comparing the space orientation determined from the values measured by the magnetic field sensors with the space orientation determined by the values measured by a gravitational acceleration sensor are provided.
    14. Apparatus according to one or more of Claims 3 - 13, characterized in that for each acceleration sensor a redundant acceleration sensor arranged parallel to it is provided for the implementation of Kalman filtering.
    EP05813615A 2004-12-01 2005-11-22 Method and device for navigating and positioning an object relative to a patient Active EP1817547B1 (en)

    Priority Applications (2)

    Application Number Priority Date Filing Date Title
    DE102004057933A DE102004057933A1 (en) 2004-12-01 2004-12-01 Method and apparatus for navigating and positioning of an object relative to a patient
    PCT/EP2005/012473 WO2006058633A1 (en) 2004-12-01 2005-11-22 Method and device for navigating and positioning an object relative to a patient

    Publications (2)

    Publication Number Publication Date
    EP1817547A1 EP1817547A1 (en) 2007-08-15
    EP1817547B1 true EP1817547B1 (en) 2012-04-18

    Family

    ID=35691583

    Family Applications (1)

    Application Number Title Priority Date Filing Date
    EP05813615A Active EP1817547B1 (en) 2004-12-01 2005-11-22 Method and device for navigating and positioning an object relative to a patient

    Country Status (5)

    Country Link
    US (1) US20070287911A1 (en)
    EP (1) EP1817547B1 (en)
    AT (1) AT554366T (en)
    DE (1) DE102004057933A1 (en)
    WO (1) WO2006058633A1 (en)

    Families Citing this family (37)

    * Cited by examiner, † Cited by third party
    Publication number Priority date Publication date Assignee Title
    WO2004091419A2 (en) 2003-04-08 2004-10-28 Wasielewski Ray C Use of micro-and miniature position sensing devices for use in tka and tha
    US8057482B2 (en) 2003-06-09 2011-11-15 OrthAlign, Inc. Surgical orientation device and method
    EP2818187A3 (en) * 2005-02-18 2015-04-15 Zimmer, Inc. Smart joint implant sensors
    DE102006032127B4 (en) * 2006-07-05 2008-04-30 Aesculap Ag & Co. Kg Calibration and calibration device for a surgical referencing unit
    WO2009114829A2 (en) * 2008-03-13 2009-09-17 Thornberry Robert L Computer-guided system for orienting the acetabular cup in the pelvis during total hip replacement surgery
    CN101978243B (en) * 2008-03-25 2013-04-24 奥索瑟夫特公司 Tracking system and method
    AU2009227957B2 (en) 2008-03-25 2014-07-10 Orthosoft Inc. Method and system for planning/guiding alterations to a bone
    US8029566B2 (en) 2008-06-02 2011-10-04 Zimmer, Inc. Implant sensors
    DE102008030534A1 (en) * 2008-06-27 2009-12-31 Bort Medical Gmbh Apparatus for determining the stability of a knee joint
    US8223121B2 (en) * 2008-10-20 2012-07-17 Sensor Platforms, Inc. Host system and method for determining an attitude of a device undergoing dynamic acceleration
    US8515707B2 (en) * 2009-01-07 2013-08-20 Sensor Platforms, Inc. System and method for determining an attitude of a device undergoing dynamic acceleration using a Kalman filter
    US8587519B2 (en) 2009-01-07 2013-11-19 Sensor Platforms, Inc. Rolling gesture detection using a multi-dimensional pointing device
    WO2011088541A1 (en) * 2010-01-19 2011-07-28 Orthosoft Inc. Tracking system and method
    US9901405B2 (en) * 2010-03-02 2018-02-27 Orthosoft Inc. MEMS-based method and system for tracking a femoral frame of reference
    US9706948B2 (en) * 2010-05-06 2017-07-18 Sachin Bhandari Inertial sensor based surgical navigation system for knee replacement surgery
    US8551108B2 (en) 2010-08-31 2013-10-08 Orthosoft Inc. Tool and method for digital acquisition of a tibial mechanical axis
    US8957909B2 (en) 2010-10-07 2015-02-17 Sensor Platforms, Inc. System and method for compensating for drift in a display of a user interface state
    GB201021675D0 (en) 2010-12-20 2011-02-02 Taylor Nicola J Orthopaedic navigation system
    US9921712B2 (en) 2010-12-29 2018-03-20 Mako Surgical Corp. System and method for providing substantially stable control of a surgical tool
    DE102011050240A1 (en) 2011-05-10 2012-11-15 Medizinische Hochschule Hannover Apparatus and method for determining the relative position and orientation of objects
    US10342619B2 (en) 2011-06-15 2019-07-09 Brainlab Ag Method and device for determining the mechanical axis of a bone
    US9459276B2 (en) 2012-01-06 2016-10-04 Sensor Platforms, Inc. System and method for device self-calibration
    WO2013104006A2 (en) 2012-01-08 2013-07-11 Sensor Platforms, Inc. System and method for calibrating sensors for different operating environments
    US9228842B2 (en) 2012-03-25 2016-01-05 Sensor Platforms, Inc. System and method for determining a uniform external magnetic field
    US9539112B2 (en) * 2012-03-28 2017-01-10 Robert L. Thornberry Computer-guided system for orienting a prosthetic acetabular cup in the acetabulum during total hip replacement surgery
    US9561387B2 (en) 2012-04-12 2017-02-07 University of Florida Research Foundation, Inc. Ambiguity-free optical tracking system
    KR20150039801A (en) 2012-08-03 2015-04-13 스트리커 코포레이션 Systems and methods for robotic surgery
    US9820818B2 (en) 2012-08-03 2017-11-21 Stryker Corporation System and method for controlling a surgical manipulator based on implant parameters
    US9119655B2 (en) 2012-08-03 2015-09-01 Stryker Corporation Surgical manipulator capable of controlling a surgical instrument in multiple modes
    US9226796B2 (en) 2012-08-03 2016-01-05 Stryker Corporation Method for detecting a disturbance as an energy applicator of a surgical instrument traverses a cutting path
    US9649160B2 (en) 2012-08-14 2017-05-16 OrthAlign, Inc. Hip replacement navigation system and method
    US9008757B2 (en) 2012-09-26 2015-04-14 Stryker Corporation Navigation system including optical and non-optical sensors
    US9726498B2 (en) 2012-11-29 2017-08-08 Sensor Platforms, Inc. Combining monitoring sensor measurements and system signals to determine device context
    US9652591B2 (en) 2013-03-13 2017-05-16 Stryker Corporation System and method for arranging objects in an operating room in preparation for surgical procedures
    JP6461082B2 (en) 2013-03-13 2019-01-30 ストライカー・コーポレイション Surgical system
    US10363149B2 (en) 2015-02-20 2019-07-30 OrthAlign, Inc. Hip replacement navigation system and method
    DE102015217449B3 (en) * 2015-09-11 2016-12-29 Dialog Semiconductor B.V. Sensor combination method for determining the orientation of an object

    Family Cites Families (10)

    * Cited by examiner, † Cited by third party
    Publication number Priority date Publication date Assignee Title
    DE4205869A1 (en) * 1992-02-26 1993-09-02 Teldix Gmbh Means for determining the relative orientation of a body
    DE4225112C1 (en) * 1992-07-30 1993-12-09 Bodenseewerk Geraetetech Instrument position relative to processing object measuring apparatus - has measuring device for measuring position of instrument including inertia sensor unit
    US5645077A (en) * 1994-06-16 1997-07-08 Massachusetts Institute Of Technology Inertial orientation tracker apparatus having automatic drift compensation for tracking human head and other similarly sized body
    US6122538A (en) * 1997-01-16 2000-09-19 Acuson Corporation Motion--Monitoring method and system for medical devices
    DE19830359A1 (en) * 1998-07-07 2000-01-20 Helge Zwosta Spatial position and movement determination of body and body parts for remote control of machine and instruments
    DE19946948A1 (en) * 1999-09-30 2001-04-05 Philips Corp Intellectual Pty Method and arrangement for determining the position of a medical instrument
    AU3057802A (en) * 2000-10-30 2002-05-15 Naval Postgraduate School Method and apparatus for motion tracking of an articulated rigid body
    DE10239673A1 (en) * 2002-08-26 2004-03-11 Peter Pott Device for processing parts
    DE10312154B4 (en) * 2003-03-17 2007-05-31 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Method and apparatus for performing object tracking
    US20060258938A1 (en) * 2005-05-16 2006-11-16 Intuitive Surgical Inc. Methods and system for performing 3-D tool tracking by fusion of sensor and/or camera derived data during minimally invasive robotic surgery

    Cited By (13)

    * Cited by examiner, † Cited by third party
    Publication number Priority date Publication date Assignee Title
    US8974467B2 (en) 2003-06-09 2015-03-10 OrthAlign, Inc. Surgical orientation system and method
    US8911447B2 (en) 2008-07-24 2014-12-16 OrthAlign, Inc. Systems and methods for joint replacement
    US9572586B2 (en) 2008-07-24 2017-02-21 OrthAlign, Inc. Systems and methods for joint replacement
    US8998910B2 (en) 2008-07-24 2015-04-07 OrthAlign, Inc. Systems and methods for joint replacement
    US9192392B2 (en) 2008-07-24 2015-11-24 OrthAlign, Inc. Systems and methods for joint replacement
    US10206714B2 (en) 2008-07-24 2019-02-19 OrthAlign, Inc. Systems and methods for joint replacement
    US10321852B2 (en) 2008-09-10 2019-06-18 OrthAlign, Inc. Hip surgery systems and methods
    US9931059B2 (en) 2008-09-10 2018-04-03 OrthAlign, Inc. Hip surgery systems and methods
    US8974468B2 (en) 2008-09-10 2015-03-10 OrthAlign, Inc. Hip surgery systems and methods
    US9271756B2 (en) 2009-07-24 2016-03-01 OrthAlign, Inc. Systems and methods for joint replacement
    US10238510B2 (en) 2009-07-24 2019-03-26 OrthAlign, Inc. Systems and methods for joint replacement
    US9339226B2 (en) 2010-01-21 2016-05-17 OrthAlign, Inc. Systems and methods for joint replacement
    US9549742B2 (en) 2012-05-18 2017-01-24 OrthAlign, Inc. Devices and methods for knee arthroplasty

    Also Published As

    Publication number Publication date
    AT554366T (en) 2012-05-15
    US20070287911A1 (en) 2007-12-13
    EP1817547A1 (en) 2007-08-15
    DE102004057933A1 (en) 2006-06-08
    WO2006058633A1 (en) 2006-06-08

    Similar Documents

    Publication Publication Date Title
    Madgwick An efficient orientation filter for inertial and inertial/magnetic sensor arrays
    US7089148B1 (en) Method and apparatus for motion tracking of an articulated rigid body
    US6409687B1 (en) Motion tracking system
    DK169045B1 (en) A method for three-dimensional monitoring of the object space
    Yun et al. Design, implementation, and experimental results of a quaternion-based Kalman filter for human body motion tracking
    US7895761B2 (en) Measurement method and measuring device for use in measurement systems
    US6912475B2 (en) System for three dimensional positioning and tracking
    US7587295B2 (en) Image processing device and method therefor and program codes, storing medium
    JP5237723B2 (en) System and method for gyrocompass alignment using dynamically calibrated sensor data and iterative extended Kalman filter in a navigation system
    US7233872B2 (en) Difference correcting method for posture determining instrument and motion measuring instrument
    Jurman et al. Calibration and data fusion solution for the miniature attitude and heading reference system
    Pittelkau Kalman filtering for spacecraft system alignment calibration
    EP2542177B1 (en) Mems -based method and system for tracking a femoral frame of reference
    US8548766B2 (en) Systems and methods for gyroscope calibration
    US5728935A (en) Method and apparatus for measuring gravity with lever arm correction
    US20060227211A1 (en) Method and apparatus for measuring position and orientation
    Weiss et al. Real-time metric state estimation for modular vision-inertial systems
    CN104736963B (en) Mapping system and method
    Ren et al. Investigation of attitude tracking using an integrated inertial and magnetic navigation system for hand-held surgical instruments
    Martinelli et al. Simultaneous localization and odometry self calibration for mobile robot
    US8676498B2 (en) Camera and inertial measurement unit integration with navigation data feedback for feature tracking
    AU2004276459B2 (en) Method and system for determining the spatial position of a hand-held measuring appliance
    EP1154234B1 (en) Electronic time piece with electronic azimuth meter and correcting mechanism for the electronic azimuth meter
    JP2012507011A (en) Handheld positioning interface for spatial query
    Roetenberg et al. Estimating body segment orientation by applying inertial and magnetic sensing near ferromagnetic materials

    Legal Events

    Date Code Title Description
    AK Designated contracting states:

    Kind code of ref document: A1

    Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

    17P Request for examination filed

    Effective date: 20070524

    DAX Request for extension of the european patent (to any country) deleted
    RIN1 Inventor (correction)

    Inventor name: VON LUEBTOW, KAI

    Inventor name: HAID, MARKUS

    Inventor name: SCHNEIDER, URS

    AK Designated contracting states:

    Kind code of ref document: B1

    Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

    REG Reference to a national code

    Ref country code: GB

    Ref legal event code: FG4D

    Free format text: NOT ENGLISH

    REG Reference to a national code

    Ref country code: CH

    Ref legal event code: EP

    REG Reference to a national code

    Ref country code: IE

    Ref legal event code: FG4D

    Free format text: LANGUAGE OF EP DOCUMENT: GERMAN

    REG Reference to a national code

    Ref country code: AT

    Ref legal event code: REF

    Ref document number: 554366

    Country of ref document: AT

    Kind code of ref document: T

    Effective date: 20120515

    REG Reference to a national code

    Ref country code: DE

    Ref legal event code: R096

    Ref document number: 502005012651

    Country of ref document: DE

    Effective date: 20120621

    REG Reference to a national code

    Ref country code: NL

    Ref legal event code: VDEP

    Effective date: 20120418

    LTIE Lt: invalidation of european patent or patent extension

    Effective date: 20120418

    PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

    Ref country code: PL

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120418

    Ref country code: CY

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120418

    Ref country code: FI

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120418

    Ref country code: SE

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120418

    Ref country code: IS

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120818

    Ref country code: LT

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120418

    PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

    Ref country code: PT

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120820

    Ref country code: SI

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120418

    Ref country code: GR

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120719

    Ref country code: LV

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120418

    PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

    Ref country code: DK

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120418

    Ref country code: NL

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120418

    Ref country code: RO

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120418

    Ref country code: CZ

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120418

    Ref country code: EE

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120418

    Ref country code: SK

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120418

    PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

    Ref country code: IT

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120418

    26N No opposition filed

    Effective date: 20130121

    PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

    Ref country code: ES

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120729

    REG Reference to a national code

    Ref country code: DE

    Ref legal event code: R097

    Ref document number: 502005012651

    Country of ref document: DE

    Effective date: 20130121

    BERE Be: lapsed

    Owner name: FRAUNHOFER-GESELLSCHAFT ZUR FORDERUNG DER ANGEWAN

    Effective date: 20121130

    REG Reference to a national code

    Ref country code: CH

    Ref legal event code: PL

    GBPC Gb: european patent ceased through non-payment of renewal fee

    Effective date: 20121122

    PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

    Ref country code: LI

    Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

    Effective date: 20121130

    Ref country code: CH

    Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

    Effective date: 20121130

    Ref country code: BG

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120718

    REG Reference to a national code

    Ref country code: IE

    Ref legal event code: MM4A

    REG Reference to a national code

    Ref country code: FR

    Ref legal event code: ST

    Effective date: 20130731

    PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

    Ref country code: BE

    Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

    Effective date: 20121130

    PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

    Ref country code: IE

    Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

    Effective date: 20121122

    PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

    Ref country code: FR

    Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

    Effective date: 20121130

    Ref country code: GB

    Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

    Effective date: 20121122

    REG Reference to a national code

    Ref country code: AT

    Ref legal event code: MM01

    Ref document number: 554366

    Country of ref document: AT

    Kind code of ref document: T

    Effective date: 20121130

    PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

    Ref country code: AT

    Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

    Effective date: 20121130

    PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

    Ref country code: MC

    Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

    Effective date: 20121130

    Ref country code: TR

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20120418

    PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

    Ref country code: LU

    Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

    Effective date: 20121122

    PG25 Lapsed in a contracting state announced via postgrant inform. from nat. office to epo

    Ref country code: HU

    Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

    Effective date: 20051122

    REG Reference to a national code

    Ref country code: DE

    Ref legal event code: R084

    Ref document number: 502005012651

    Country of ref document: DE

    PGFP Postgrant: annual fees paid to national office

    Ref country code: DE

    Payment date: 20181122

    Year of fee payment: 14