EP1910997B1 - Methods, systems and computer program for the 3D acquisition of three-dimensional data sets generated preferably by optical coherence tomography, based on the alignment of projection images or fundus images - Google Patents
Methods, systems and computer program for the 3D acquisition of three-dimensional data sets generated preferably by optical coherence tomography, based on the alignment of projection images or fundus images
- Publication number
- EP1910997B1 (application number EP06788865A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- dimensional data
- data set
- rotated
- vip
- registered
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/12—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
- A61B3/1225—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes using coherent radiation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/102—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for optical coherence tomography [OCT]
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/17—Systems in which incident light is modified in accordance with the properties of the material investigated
- G01N21/47—Scattering, i.e. diffuse reflection
- G01N21/4795—Scattering, i.e. diffuse reflection spatially resolved investigating of object in scattering medium
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Definitions
- the present invention relates to imaging systems and, more particularly, to optical coherence imaging systems.
- Optical Coherence Tomography (OCT) may be used to image samples such as tissue, glass and the like.
- Recent advances in OCT have increased the imaging speed, allowing large image sets, such as three dimensional volumes, to be generated relatively quickly.
- Because OCT is typically high-speed, non-contact and non-destructive, it may be useful for imaging dynamics over short time scales, for example, well below 1.0 second, such as the beating of a heart tube in a fruit fly, and for imaging physiological changes that occur over long time scales, for example, over days or even longer, such as the time it takes tissues to develop or to respond to interventions.
- OCT implementations include time domain OCT (TD-OCT) and Fourier domain OCT (FD-OCT). FD-OCT generally includes swept source (SS) and spectral domain (SD) variants, where SD systems generally use a broadband source in conjunction with a spectrometer rather than a swept laser source and a photodiode(s).
- TD systems generally rely on movement of a mirror or reference source over time to control imaging depth by providing coherence depth gating for the photons returning from the sample being imaged.
- Each system uses broadband optical sources, producing a low effective coherence that dictates the achievable resolution in the depth, or axial, direction.
- These imaging techniques are derived from the general field of Optical Low Coherence Reflectometry (OLCR): time domain techniques are derived from Optical Coherence Domain Reflectometry, swept source techniques from Optical Frequency Domain Reflectometry, and spectral domain techniques have been referred to as "spectral radar."
- In FD-OCT, the imaging depth may be determined by Fourier transform relationships applied to the acquired spectrum, rather than by the range of a physically scanned mirror, thereby allowing concurrent acquisition of photons from all imaged depths in the sample.
- the optical frequency interval between sampled elements of the spectrum may be used to control the imaging depth, with a narrower sampling interval providing a deeper imaging capability.
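The dependence of imaging depth on spectral sampling can be made concrete. The following helper is an illustrative sketch, not part of the patent; it uses the commonly cited FD-OCT Nyquist relation z_max = λ0² / (4·n·δλ), where δλ is the wavelength interval between sampled spectral elements:

```python
def fd_oct_max_depth(center_wavelength_m, sampling_interval_m, refractive_index=1.0):
    """Maximum single-sided FD-OCT imaging depth.

    Standard relation z_max = lambda0**2 / (4 * n * d_lambda): a narrower
    spectral sampling interval d_lambda gives a deeper imaging capability,
    as described in the text above.
    """
    return center_wavelength_m ** 2 / (4.0 * refractive_index * sampling_interval_m)

# Example: an 840 nm retinal system sampled at 0.05 nm spectral intervals
print(fd_oct_max_depth(840e-9, 0.05e-9))  # about 3.5e-3 m, i.e. ~3.5 mm in air
```

Halving the sampling interval doubles the imaging range, which is the trade-off the bullet above describes.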
- Some embodiments of the present invention provide methods of analyzing three dimensional data sets obtained from a sample over time.
- a first three dimensional data set is obtained from the sample at a first time.
- a first volume intensity projection (VIP) image is created from the first three dimensional data set.
- One or more first landmarks are identified and registered in the first VIP image.
- a second three dimensional data set is obtained from the sample at a second time, different from the first time.
- a second VIP image is created from the second three dimensional data set.
- the one or more first landmarks are identified and registered in the second VIP image.
- the first and second VIP images are aligned based on the registered one or more first landmarks in the first and second VIP images according to the claims.
- one or more subject areas within the three dimensional data set may be registered to the first VIP image.
- the first and second VIP images may be aligned based on the registered at least one first landmark to locate the registered subject area of the first three dimensional data set in the second three dimensional data set so as to allow comparison of the registered subject area in the first and second three dimensional data sets at the respective first and the second times.
- an attribute of the registered subject area of the first three dimensional data set may be measured and an attribute of the located subject area of the second three dimensional data set may be measured.
- the measured attributes of the registered and located subject areas may be compared so as to allow comparison of the subject areas at the first and second times.
- the first and second three dimensional data sets may be optical coherence tomography (OCT) data sets.
- the second three dimensional data set may be rotated to align an axis of the second three dimensional data set with an axis of the first three dimensional data set to obtain a rotated three dimensional data set.
- a rotated VIP image may be created based on the rotated three dimensional data set.
- one or more subject areas may be registered within the first three dimensional data set to the first VIP image.
- the one or more first landmarks may be registered and identified on the rotated VIP image.
- the first and rotated VIP images may be aligned based on the registered at least one first landmark in the first and rotated VIP images.
- the first and rotated VIP images may be aligned based on the registered at least one first landmark to locate the registered subject area of the first three dimensional data set in the rotated three dimensional data set so as to allow comparison of the registered subject area and the located subject area of the first and rotated images, respectively.
- an attribute of the registered subject area of the first three dimensional data set may be measured and an attribute of the located subject area of the rotated three dimensional data set may be measured.
- the measured attributes of the registered and located common subject areas may be compared so as to allow comparison of the subject areas in the first and rotated three dimensional data sets.
- Some embodiments of the present invention provide methods for analyzing three dimensional data sets obtained from a sample, including obtaining a first three dimensional data set from the sample at a first time.
- a first volume intensity projection (VIP) image is created from the first three dimensional data set.
- a second three dimensional data set is obtained from the sample at a second time, different from the first time.
- the second three dimensional data set is rotated to align an axis of the second three dimensional data set with an axis of the first three dimensional data set to obtain a rotated three dimensional data set.
- a rotated VIP image is created based on the rotated three dimensional data set.
- one or more first landmarks may be identified and registered in the first VIP image.
- One or more subject areas in the first three dimensional data set may be registered to the first VIP image.
- One or more of the first landmarks may be identified and registered on the rotated VIP image.
- the first and rotated VIP images may be aligned based on the registered one or more first landmarks in the first and rotated VIP images.
- the first and rotated VIP images may be aligned based on the registered at least one first landmark to locate the registered subject area in the first three dimensional data set in the rotated three dimensional data set so as to allow comparison of the registered and located subject areas at the first and second times.
- an attribute of the registered subject area of the first three dimensional data set may be measured and an attribute of the located subject area of the rotated three dimensional data set may be measured.
- the measured attributes of the registered and located subject areas are compared so as to allow comparison of the subject areas in the first and rotated three dimensional data sets.
- the first, second and rotated three dimensional data sets are optical coherence tomography (OCT) data sets.
- Some embodiments of the present invention provide methods of analyzing data sets obtained from a sample over time, including identifying and registering one or more landmarks in first and second volume intensity projection (VIP) images created from first and second three dimensional data sets, respectively.
- the first and second VIP images may be aligned based on the registered at least one first landmark to locate a common subject area in the first and second three dimensional data sets so as to allow comparison of the common subject area in the first and second three dimensional data sets at the first and the second times, respectively.
- the present invention may be embodied as methods, systems and/or computer program products. Accordingly, the present invention may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.). Furthermore, the present invention may take the form of a computer program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system.
- a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
- the computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a nonexhaustive list) of the computer-readable medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, and a portable compact disc read-only memory (CD-ROM).
- the computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
- VIP images may also be termed summed voxel projections, Fundus images, and the like without departing from the scope of the present invention.
- Using methods, systems and computer program products according to some embodiments of the present invention may increase the likelihood or possibly ensure that measurements of a sample taken at different times are taken from the same or substantially the same location in the sample.
- Various embodiments of the present invention are discussed below including hardware and/or software for an OCT system that provides the capability to generate VIP images from OCT datasets, align datasets taken at different times and/or rotate the images to obtain a different view.
- OCT imaging systems may be categorized in two general categories: time domain OCT (TD-OCT), where a moving mirror or prism in the reference arm determines the current imaging depth location in the sample, and Fourier domain OCT (FD-OCT), where the reference arm is fixed in length and data are acquired over a spectrum of wavelengths to change the imaging depth location in the sample.
- FD-OCT is typically further categorized into two categories, swept source OCT (SS-OCT) and spectral domain OCT (SD-OCT).
- In SS-OCT, a narrow-linewidth laser is typically swept in wavelength over time to interrogate the sample at different wavelengths.
- In SD-OCT, a broadband (low coherence) source such as a superluminescent diode (SLD) is typically used in conjunction with a spectrometer.
- any three dimensional data set may be used without departing from the scope of the present invention.
- ultrasound data and/or magnetic resonance imaging (MRI) data may be used in some embodiments.
- OCT systems typically operate by acquiring depth data at a particular lateral position on the sample, which may be called an A-scan.
- the OCT beam is moved relative to the sample by any of the various depth adjustment approaches described above and another set of depth data is acquired.
- This series of depth scans may be combined to form a 2-D image, which may be called a B-scan.
- Any scan pattern can generally be used without departing from the scope of the present invention.
- Commonly used scan patterns include linear and circular scan patterns. By scanning in two directions instead of just one, a three dimensional volume of data can be acquired. Again, any scan pattern can generally be used to create the three dimensional image; for example, commonly used three dimensional scan patterns include rectangular patterns, sets of radial lines, and sets of concentric circles.
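The A-scan/B-scan/volume hierarchy described above can be sketched as follows. The `acquire_a_scan` helper is a hypothetical stand-in for real hardware, shown here only so the assembly logic is runnable:

```python
import numpy as np

def acquire_a_scan(x, y, depth_points=512):
    """Placeholder for one depth profile at lateral position (x, y); a real
    OCT system would return backscattered reflectivity versus depth."""
    rng = np.random.default_rng(x * 1000 + y)
    return rng.random(depth_points)

def raster_volume(n_x=64, n_y=64, depth_points=512):
    """Rectangular raster scan: each row of A-scans forms a B-scan, and the
    stack of B-scans forms a three dimensional volume with axes (y, x, z)."""
    volume = np.empty((n_y, n_x, depth_points))
    for y in range(n_y):
        for x in range(n_x):
            volume[y, x, :] = acquire_a_scan(x, y, depth_points)
    return volume
```

Radial or concentric-circle patterns would change only the set of (x, y) positions visited, not the assembly into a volume.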
- OCT data is a measurement of the backscattered reflectivity at each depth in the sample at a given point.
- the contrast in the image is generally due to variations in the backscattered reflectivity in the sample.
- a desirable image set that may be extracted is a surface projection of the sub-surface scattering data.
- One way of generating this type of image is by summing the OCT data over an A-scan. This value is the total reflectivity at that particular lateral position. By applying this over a volume scan, a 2-D image may be created.
- This type of image may be referred to as a Fundus image when generated from OCT data sets of retina scans.
- this type of image may be referred to as a VIP image.
- this image may be, essentially, a black and white picture of the sample.
- Because VIP images are created from the OCT data, there is a direct correlation between pixels on the VIP image and A-scans in the OCT data set.
- Other algorithms to generate a useful VIP-like image may be used with some embodiments of the present invention as well, such as by summing over a limited subset of an A-scan, and/or by weighting the sum over the A-scan with some selected function suited to a particular use of the scan information.
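The summation described above (full A-scan sum, limited depth subset, or weighted sum) can be sketched in a few lines. Array-axis conventions and parameter names here are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def vip_image(volume, z_range=None, weights=None):
    """Volume intensity projection: sum each A-scan along the depth axis.

    volume  : (y, x, z) array of backscattered reflectivity
    z_range : optional (start, stop) indices to sum over a limited subset
              of each A-scan
    weights : optional 1-D per-depth weighting function; its length must
              match the (possibly sliced) depth axis
    """
    data = volume if z_range is None else volume[:, :, z_range[0]:z_range[1]]
    if weights is not None:
        data = data * weights  # broadcasts over the depth axis
    return data.sum(axis=2)
```

Each output pixel is the total (or weighted) reflectivity of one A-scan, which is why every VIP pixel maps directly back to an A-scan in the data set.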
- the VIP image can be used to align the OCT system with respect to the sample in some embodiments when the VIP image is generated in nearly real time.
- the alignment VIP image may be acquired at a lower lateral resolution, which may increase the rate at which the VIP images are created. This image may allow the user to align the system based on OCT data, thus providing a preview of the OCT dataset. This approach in some embodiments may be more accurate than trying to visually align the sample to the OCT system or using a video camera for alignment.
- the VIP images can be used to align OCT datasets taken at different times and possibly ensure that subject pathologies (targets) observed within various datasets taken at different times are from the same location in the sample.
- one or more landmarks in the sample may be identified and used.
- landmarks refer to elements of the sample, the locations of which do not significantly change over time, for example, a branch point of a retinal blood vessel in an eye sample may be a landmark. Since the locations of the landmarks do not significantly change over time, the location(s) of targets may be referenced with respect to the landmarks and, therefore, these same or similar location(s) can be located in the future.
- The VIP or Fundus image typically clearly shows the location of blood vessels, the optic nerve head, and the fovea.
- the degrees of freedom for alignment of the samples may include, for example, translational in X & Y, rotational in theta, and/or scaling in X & Y.
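These degrees of freedom (X & Y translation, rotation in theta, uniform scaling) can be recovered from matched landmark coordinates, such as vessel branch points found in two VIP images. The sketch below is one standard least-squares approach (an SVD-based similarity fit); it is an illustration, not the patent's prescribed algorithm:

```python
import numpy as np

def fit_similarity_2d(src, dst):
    """Least-squares similarity transform (uniform scale, rotation, X & Y
    translation) mapping matched landmark points src -> dst.

    src, dst : (N, 2) arrays of corresponding landmark coordinates.
    Returns (scale, rotation_matrix, translation) such that
    dst_i ~= scale * rotation_matrix @ src_i + translation.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # SVD of the cross-covariance gives the optimal rotation; the sign
    # correction d guards against reflections.
    u, s, vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.sign(np.linalg.det(u @ vt))
    rot = u @ np.diag([1.0, d]) @ vt
    scale = (s * np.array([1.0, d])).sum() / (src_c ** 2).sum()
    trans = dst.mean(axis=0) - scale * rot @ src.mean(axis=0)
    return scale, rot, trans
```

With at least two non-coincident landmarks the fit is determined; additional landmarks average out localization error.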
- the VIP plane is orthogonal to the individual A-scans. However, in some embodiments, any other plane may be defined by a three-degree-of-freedom rotation about the scan axis. This plane may then become a reference plane for landmark identification, and subsequent images may be aligned with an original image applying these three additional degrees of freedom.
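A three-degree-of-freedom rotation of this kind can be represented by a composed rotation matrix applied to voxel coordinates about the volume centroid. The parameterization below (x-y-z Euler angles) is one common choice, assumed here for illustration:

```python
import numpy as np

def rotation_matrix(rx, ry, rz):
    """Three-degree-of-freedom rotation (radians about the x, y and z axes)
    used to define an alternative projection/reference plane for a volume."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def rotate_points(points, R):
    """Apply R to (N, 3) voxel coordinates about their centroid, so the
    volume is re-oriented without being translated."""
    center = points.mean(axis=0)
    return (points - center) @ R.T + center
```

Projecting the rotated coordinates along the new z axis then yields a VIP in the rotated reference plane.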
- one or more measurements can be made, generally in a direction orthogonal to the reference Fundus plane, on one or more datasets at the same location in each dataset for a particular measurement.
- These measurements can include almost any value of interest, such as relative scattering strength, layer thickness, the distance between two points, the volume of a cavity or feature, time-rate-of-change measurements, and/or Doppler flow measurements.
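Two of the listed measurements, layer thickness orthogonal to the Fundus plane and feature volume, reduce to simple arithmetic once the relevant surfaces or voxels have been segmented. The helpers below are illustrative sketches; segmentation itself and the voxel dimensions are assumed given:

```python
import numpy as np

def layer_thickness_map(top_surface, bottom_surface, voxel_depth_um):
    """Thickness along each A-scan, i.e. orthogonal to the reference Fundus
    plane: the difference of two segmented surfaces (as depth indices per
    (y, x) position) times the axial voxel size in micrometers."""
    return (bottom_surface - top_surface) * voxel_depth_um

def feature_volume_um3(mask, voxel_volume_um3):
    """Volume of a segmented cavity or feature: the number of voxels in the
    boolean mask times the volume of a single voxel."""
    return int(mask.sum()) * voxel_volume_um3
```

Because the VIP alignment locates the same (y, x) positions in data sets taken at different times, these per-position measurements can be compared longitudinally.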
- the landmarks in the sample can either be part of the sample, such as blood vessels in the retina, or artificially introduced landmarks, such as holes drilled into a MEMS sample, surgically introduced pellets in a tissue sample, or painted landmarks, without departing from the scope of the present invention.
- the location of the OCT image acquisition may be separated in time and space from the generation of the VIP image and again from the alignment of multiple images and again from the acquisition of measurements of interest from the datasets.
- a portable OCT imaging system could be used in an animal facility to acquire daily images.
- the daily images may be transferred over a network to a central server, where once a week all the data is processed and longitudinal measurements of retinal thickness are generated.
- the level of automation in the process may vary.
- all the operations described herein for image acquisition may be automated in software, but varying degrees of reduced automation may be provided in some embodiments without departing from the scope of the present invention.
- the user may align the multiple VIP images on the computer screen including the X & Y translation, rotation, and/or X & Y scaling.
- the determination of the measurement of interest may be based on user input and/or may happen automatically in software.
- the data processing system 100 typically includes a user interface 144, such as a keyboard, keypad, touchpad or the like, I/O data ports 146 and a memory 136 that communicate with a processor 138.
- the I/O data ports 146 can be used to transfer information between the data processing system 100 and another computer system or a network.
- These components may be conventional components, such as those used in many conventional data processing systems, which may be configured to operate as described herein.
- the processor 138 communicates with the memory 136 via an address/data bus 248 and with the I/O data ports 146 via an address/data bus 249.
- the processor 138 can be any commercially available or custom microprocessor.
- the memory 136 is representative of the overall hierarchy of memory devices containing the software and data used to implement the functionality of the data processing system 100.
- the memory 136 can include, but is not limited to, the following types of devices: cache, ROM, PROM, EPROM, EEPROM, flash memory, SRAM, and DRAM.
- the memory 136 may include several categories of software and data used in the data processing system 100: an operating system 252; application programs 254; input/output (I/O) device drivers 258; and data 256.
- the operating system 252 may be any operating system suitable for use with a data processing system, such as OS/2, AIX or zOS from International Business Machines Corporation, Armonk, NY, Windows95, Windows98, Windows2000 or WindowsXP from Microsoft Corporation, Redmond, WA, Unix or Linux.
- the I/O device drivers 258 typically include software routines accessed through the operating system 252 by the application programs 254 to communicate with devices such as the I/O data port(s) 146 and certain memory 136 components.
- the application programs 254 are illustrative of the programs that implement the various features of the data processing system 100 and preferably include at least one application that supports operations according to some embodiments of the present invention.
- the data 256 represents the static and dynamic data used by the application programs 254, the operating system 252, the I/O device drivers 258, and other software programs that may reside in the memory 136.
- the data 256 may include three dimensional data sets 250 and 255 obtained from a sample, for example, an eye. Although the data 256 as illustrated only includes two data sets 250 and 255, embodiments of the present invention are not limited to this configuration. One data set or more than two data sets may be present without departing from the scope of the present invention.
- the application programs 254 may include a data set acquisition module 221, a volume intensity projection (VIP) module 222, a registration module 223, an alignment module 224, a comparison module 225 and a rotation module 226 according to some embodiments of the present invention. While the present invention is illustrated, for example, with reference to the data set acquisition module 221, the VIP module 222, the registration module 223, the alignment module 224, the comparison module 225 and the rotation module 226 being application programs in Figure 2 , as will be appreciated by those of skill in the art, other configurations may also be utilized while still benefiting from the teachings of the present invention.
- the data set acquisition module 221, the VIP module 222, the registration module 223, the alignment module 224, the comparison module 225 and the rotation module 226 may also be incorporated into the operating system 252 or other such logical division of the data processing system 100.
- the present invention should not be construed as limited to the configuration of Figure 2 , but is intended to encompass any configuration capable of carrying out the operations described herein.
- the data set acquisition module 221, the VIP module 222, the registration module 223, the alignment module 224, the comparison module 225 and the rotation module 226 are illustrated in a single data processing system, as will be appreciated by those of skill in the art, such functionality may be distributed across one or more data processing systems.
- the present invention should not be construed as limited to the configuration illustrated in Figures 1 through 2 , but may be provided by other arrangements and/or divisions of function between data processing systems.
- the data set acquisition module 221 is configured to obtain three dimensional data sets from a sample.
- the three dimensional data sets can be any type of three dimensional data, for example, sonogram data, MRI data and/or OCT data.
- the data sets may be obtained from the sample at different times. Having data sets of the sample, for example, the human eye, taken at different times may allow comparison of the sample to determine if anything in the sample has changed over time. For example, a first three dimensional data set may be obtained from the sample at a first time and a second three dimensional data set may be obtained from the sample at a second time, different from the first time.
- the volume intensity projection (VIP) module 222 may be configured to create a VIP image from the three dimensional data set. For example, a first VIP image may be created from the first three dimensional data set and a second VIP image may be created from the second three dimensional data set.
- the registration module 223 may be configured to identify and register one or more landmarks in the VIP image(s).
- landmarks refer to elements of the sample, the locations of which do not significantly change over time; for example, a branch point of a retinal blood vessel in an eye sample may be a landmark. Since the locations of the landmarks do not significantly change over time, the location(s) of targets may be referenced with respect to the landmarks and, therefore, these same or similar location(s) can be located in the future, as will be discussed further below.
- the registration module 223 may be further configured to register one or more subject areas in the first VIP image.
- a "subject area" refers to any area of interest in the sample, for example, an area of the sample that includes cancer cells. This subject area may be located in the VIP images taken at various times by the subject area's relation to the registered landmarks on the VIP images.
- the alignment module 224 may be configured to align the first and second VIP images based on the registered one or more landmarks in the VIP image(s) as will be discussed further below with respect to Figure 5 .
- an alignment module configured to align the first and second VIP images based on the registered one or more landmarks in the first and second VIP images.
- the alignment module 224 may be further configured to align the first and second VIP images based on the registered one or more landmarks to locate the registered subject area of the first three dimensional data set in the second three dimensional data set so as to allow comparison of the registered subject area in the first and second three dimensional data sets at the respective first and the second times.
- a change in the subject area of the sample may be monitored over time so as to allow a determination of whether the condition being monitored is the same, better or worse.
- the comparison module 225 may be configured to measure an attribute of the registered subject area of the first three dimensional data set.
- an attribute of the subject area can be any aspect of the subject area that may be of interest.
- an attribute of the subject area may be the size of the area affected by cancer.
- the comparison module 225 may be further configured to measure an attribute of the subject area located in the second three dimensional data set based on the registered subject area in the first three dimensional data set.
- the comparison module 225 may be configured to compare the measured attributes of the registered and located subject areas so as to allow comparison of the subject areas at the first and second times.
- three dimensional data sets created at different times containing the subject area of the sample may be compared. This comparison may be used to, for example, determine if the monitored condition is the same, worse or better.
- the rotation module 226 may be configured to rotate a three dimensional data set to align an axis of the three dimensional data set with an axis of the second three dimensional data set.
- an axis of the first three dimensional data set may be rotated to align an axis of the first three dimensional data set with an axis of the second three dimensional data set to obtain a rotated three dimensional data set.
- the VIP module 222 may be further configured to create a rotated VIP image based on the rotated three dimensional data set.
- the registration module 223 may be further configured to register one or more subject areas in the first VIP image and identify and register the one or more landmarks on the rotated VIP image.
- the alignment module 224 may be further configured to align the first and rotated VIP images based on the registered one or more landmarks in the first and rotated VIP image and align the first and rotated VIP images based on the registered one or more landmarks to locate the registered subject area in the first three dimensional data set in the rotated three dimensional data set so as to allow comparison of the registered subject area and the located subject area of the first and rotated three dimensional data set, respectively.
- a Fundus image is a VIP image originating from an OCT image of a retina.
- although embodiments of the present invention are discussed herein with respect to Fundus images, embodiments of the present invention are not limited to this configuration.
- any VIP image could be used without departing from the scope of the present invention.
- the OCT system includes a computer 300 running software and an OCT imaging system 301.
- Figure 1 also illustrates the components of the software, resident in memory 302, running on the computer 300.
- the computer 300 is connected to the OCT imaging system 301.
- the computer executes software that may be resident in memory 302.
- Various software modules in the memory 302 may, among other things, process raw data to create OCT image datasets, for example, A-scans, B-scans and/or volume images 302a, generate Fundus or VIP images from the OCT datasets 302b, display data to a user display, for example, a monitor, or the like 302c, and/or store data 302d.
- the OCT dataset 400 may be converted into a Fundus image 401 by taking the raw spectral data 410, subtracting the DC spectrum to provide the spectrum after DC subtraction 411, and squaring and summing the remaining spectrum to arrive at a number that is the measure of total reflectivity 412. This is repeated for each A-scan in the three dimensional dataset to generate a 2D image (the Fundus image) 401.
- the same procedure can be used to generate 1D line image 403 from a 2D B-scan 402.
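The per-A-scan reduction described above can be sketched as follows, assuming the raw data is a 3D array of spectral samples indexed (x, y, wavelength); the array names and sizes are illustrative, not from the patent.

```python
import numpy as np

def fundus_from_spectra(raw):
    """Collapse a volume of raw spectral A-scans (x, y, k) to a 2D
    Fundus/VIP image of total reflectivity, as in Figure 4:
      1. subtract the DC spectrum (here: mean spectrum over all A-scans),
      2. square the remainder,
      3. sum over the spectral axis of each A-scan.
    """
    dc = raw.mean(axis=(0, 1), keepdims=True)   # DC spectrum 411
    ac = raw - dc                               # spectrum after DC subtraction
    return np.sum(ac ** 2, axis=-1)             # total reflectivity 412 per A-scan

def line_from_bscan(bscan):
    """Same recipe applied to a single 2D B-scan (y, k) -> 1D line image."""
    dc = bscan.mean(axis=0, keepdims=True)
    return np.sum((bscan - dc) ** 2, axis=-1)

rng = np.random.default_rng(0)
vol = rng.normal(size=(64, 64, 512))   # hypothetical raw OCT spectra
fundus = fundus_from_spectra(vol)       # shape (64, 64)
line = line_from_bscan(vol[0])          # shape (64,)
```

The same code path thus serves both the 3D-to-2D (Fundus image 401) and the 2D-to-1D (line image 403) cases.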
- Referring now to FIG. 5, a schematic diagram illustrating alignment of two images according to some embodiments of the present invention will be discussed.
- multiple OCT datasets can be aligned using landmarks 510 on the Fundus image.
- a measurement is taken in the Fundus image 1 500 from a particular A-scan 501 at a point relative to two landmarks.
- a second OCT dataset is acquired at a later time and a Fundus image 2 502 is generated from that dataset.
- the same measurement location can be determined and the second A-scan 503 from the same location can be selected.
- the number of landmarks 510 can vary and the measurement to be acquired can be almost anything in various embodiments, including thickness measurements, distance measurements, volume measurements, Doppler flow measurements, and/or other measurements.
- Referring now to FIG. 6, a flowchart illustrating methods for aligning two or more images according to some embodiments of the present invention will be discussed.
- operations begin with retrieving and/or acquiring OCT image datasets (block 600).
- Fundus images may be generated at block 600.
- Landmarks are identified, for example, as described above using Fundus images (block 601).
- the images are moved and scaled to provide alignment based on the identified landmarks (block 602).
- Data is extracted from the aligned images (block 603).
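The move-and-scale alignment of blocks 601 and 602 can be sketched with a similarity transform estimated from two landmark pairs; the coordinates below are hypothetical, and the use of complex arithmetic is merely a compact way to express scale, rotation and translation at once.

```python
import numpy as np

def align_from_landmarks(lm1, lm2):
    """Estimate a similarity transform (uniform scale, rotation,
    translation) mapping landmark coordinates in Fundus image 1 onto
    Fundus image 2. Two landmark pairs suffice; points are treated as
    complex numbers z = x + iy so the transform is w = a*z + b.
    """
    z1 = lm1[:, 0] + 1j * lm1[:, 1]
    z2 = lm2[:, 0] + 1j * lm2[:, 1]
    a = (z2[1] - z2[0]) / (z1[1] - z1[0])   # scale * rotation
    b = z2[0] - a * z1[0]                    # translation
    def transform(p):
        w = a * (p[0] + 1j * p[1]) + b
        return np.array([w.real, w.imag])
    return transform

# Hypothetical landmarks (e.g., vessel branch points) in both images;
# image 2 is shifted and scaled by 2x relative to image 1.
lm1 = np.array([[10.0, 10.0], [40.0, 10.0]])
lm2 = np.array([[15.0, 20.0], [75.0, 20.0]])
to_image2 = align_from_landmarks(lm1, lm2)
# Map a measurement location (an A-scan position) from image 1 to image 2
p2 = to_image2(np.array([25.0, 10.0]))
```

With more than two landmarks, a least-squares fit of `a` and `b` over all pairs would be the natural generalization.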
- Referring now to FIG. 7, a schematic diagram illustrating Fundus images orthogonal to the A-scan and rotated relative to the A-scan according to some embodiments of the present invention will be discussed.
- the embodiments illustrated in Figure 7 may, in some respects, correspond to those described with reference to Figure 4.
- the Fundus image plane 701 is rotated about one, two and/or three axes relative to the A-scan 700.
- the Fundus image plane 701 is illustrated as containing landmarks 702 for reference and alignment of multiple OCT datasets, and the procedure for longitudinal data extraction may remain generally the same.
- Operations begin at block 800 by obtaining a first three dimensional data set from the sample at a first time.
- the three dimensional data set may be any three dimensional data set without departing from the scope of the present invention.
- the three dimensional data set may be sonogram data, MRI data and/or OCT data.
- a first VIP image is created from the first three dimensional data set (block 810).
- One or more first landmarks may be registered on the first VIP image (block 820).
- Landmarks refer to elements of the sample, the locations of which do not significantly change over time, for example, a retina in an eye sample may be a landmark.
- a second three dimensional data set is obtained from the sample at a second time, different from the first time (block 830).
- a second VIP image is created from the second three dimensional data set (block 840).
- the one or more landmarks are identified and registered in the second VIP image (block 850).
- the first and second VIP images may be aligned based on the registered one or more landmarks in the first and second VIP images as discussed above with respect to Figure 5 (block 860).
- one or more subject areas may be registered in the first VIP image (block 925).
- a subject area is any area of interest in the sample.
- the second three dimensional data set may be rotated to align an axis of the second three dimensional data set with an axis of the first three dimensional data set to obtain a rotated three dimensional data set (block 935).
- a rotated VIP image may be created based on the rotated three dimensional data set (block 945).
- the one or more first landmarks may be identified and registered in the rotated VIP image (block 955).
- the first and rotated VIP images may be aligned based on the registered one or more first landmarks in the first and rotated VIP images (block 965).
- alignment may include aligning the first and rotated VIP images based on the registered one or more first landmarks to locate the registered subject area in the first three dimensional data set in the rotated three dimensional data set so as to allow comparison of the registered subject area and the located subject area of the first and rotated three dimensional data set, respectively.
- An attribute of the registered subject area of the first three dimensional data set may be measured and an attribute of the located subject area of the rotated three dimensional data set may be measured (block 975).
- the measured attributes of the registered and located common subject areas may be compared so as to allow comparison of the subject areas in the first and rotated three dimensional data set (block 985).
- Operations for analyzing three dimensional data sets obtained from a sample will now be discussed with respect to the flowchart of Figure 10 .
- Operations begin at block 1000 by obtaining a first three dimensional data set from the sample at a first time.
- a first volume intensity projection (VIP) image is created from the first three dimensional data set (block 1010).
- a second three dimensional data set is obtained from the sample at a second time, different from the first time (block 1023).
- the second three dimensional data set is rotated to align an axis of the second three dimensional data set with an axis of the first three dimensional data set to obtain a rotated three dimensional data set (block 1033).
- a rotated VIP image is created based on the rotated three dimensional data set (block 1043).
- Operations for analyzing data sets obtained from a sample over time will now be discussed with respect to the flowchart of Figure 11 .
- Operations begin at block 1120 by identifying and registering one or more landmarks in first and second VIP images created from first and second three dimensional data sets, respectively.
- the first and second VIP images are aligned based on the registered at least one first landmark to locate a common subject area in the first and second three dimensional data sets so as to allow comparison of the common subject area in the first and second three dimensional data sets at the first and the second times, respectively (block 1147).
- Operations for analyzing data sets obtained from a sample over time will now be discussed with respect to the flowchart of Figure 12 .
- Operations begin at block 1200 by acquiring a first volumetric image from a first three dimensional data set. It is determined if the volumetric image corresponds to a desired viewing axis (block 1205). If it is determined that the volumetric image does not correspond to a desired viewing axis (block 1205), the three dimensional data set is rotated and interpolated about the imaging axis until the desired viewing axis is obtained (block 1210) and operations proceed to block 1215 discussed below.
- a VIP image is created having an en face plane that is normal to the viewing axis (block 1215).
- One or more landmarks may be identified on the VIP image (block 1220).
- the locations of one or more subject areas (target pathologies) may be registered to the one or more landmarks on the VIP image (block 1225).
- a second or next volumetric image is acquired (block 1230). It is determined if the second or next volumetric image corresponds to a desired viewing axis (block 1235). If it is determined that the second or next volumetric image does not correspond to a desired viewing axis (block 1235), the three dimensional data set is rotated and interpolated about the imaging axis until the desired viewing axis is obtained (block 1240) and operations proceed to block 1245 discussed below.
- a second or next VIP image is created having an en face plane that is normal to the viewing axis (block 1245).
- the one or more landmarks may be identified on the second or next VIP image (block 1250).
- the locations of one or more subject areas (target pathologies) may be registered to the one or more landmarks on the VIP image (block 1255). Attributes of subject areas of the first and second VIP images may be compared as discussed above (block 1260). Operations of blocks 1230 through 1260 may repeat until all images have been processed.
- Operations for analyzing data sets obtained from a sample over time will now be discussed with respect to the flowchart of Figure 13 .
- Operations begin at block 1300 by acquiring a first volumetric image from a first three dimensional data set.
- the axis of the three dimensional data set may be rotated (block 1305) and interpolated to a regular grid (block 1310).
- a first VIP image is created (block 1315).
- One or more landmarks may be identified and registered on the first VIP image (block 1320).
- the locations of one or more subject areas (target pathologies) may be registered to the one or more landmarks on the first VIP image (block 1325).
- a second or next volumetric image is acquired (block 1330).
- the axis of the second or next three dimensional data set may be rotated to match the orientation of the first VIP image (block 1335).
- the three dimensional data of the second or next volumetric image may be interpolated to a regular grid (block 1340).
- a second or next VIP image is created (block 1345).
- the one or more landmarks may be identified and registered on the second or next VIP image (block 1350).
- the locations of one or more subject areas (target pathologies) may be located in the second or next VIP image based on the registered subject area in the first VIP image (block 1355). Attributes of subject areas of the first and second VIP images may be compared as discussed above (block 1360). Operations of blocks 1330 through 1360 may repeat until all images have been processed.
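The rotate-and-regrid step (blocks 1335 and 1340) can be sketched for the simplest case of a rotation about the imaging (A-scan) axis, resampled back onto the original regular grid; nearest-neighbor interpolation and the array layout (x, y, z with z along the A-scan) are assumptions for brevity.

```python
import numpy as np

def rotate_volume_about_depth_axis(vol, theta):
    """Rotate a volume (x, y, z) about the z (A-scan) axis by angle
    theta and resample onto the original regular grid using
    nearest-neighbor interpolation and inverse mapping.
    """
    nx, ny, nz = vol.shape
    cx, cy = (nx - 1) / 2.0, (ny - 1) / 2.0
    x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    # Inverse mapping: for each output voxel, find its source location
    xs = np.cos(theta) * (x - cx) + np.sin(theta) * (y - cy) + cx
    ys = -np.sin(theta) * (x - cx) + np.cos(theta) * (y - cy) + cy
    xi = np.clip(np.rint(xs).astype(int), 0, nx - 1)
    yi = np.clip(np.rint(ys).astype(int), 0, ny - 1)
    out = vol[xi, yi, :]
    # Zero out voxels whose source fell outside the original grid
    valid = (xs >= 0) & (xs <= nx - 1) & (ys >= 0) & (ys <= ny - 1)
    out[~valid] = 0
    return out

vol = np.zeros((16, 16, 4))
vol[12, 8, :] = 1.0                       # hypothetical bright feature
rot = rotate_volume_about_depth_axis(vol, np.pi / 2)
vip = rot.sum(axis=-1)                    # VIP of the rotated, regridded volume
```

A full implementation would support rotation about arbitrary axes and higher-order interpolation, but the structure (rotate, interpolate to a regular grid, then project) matches the flowcharts above.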
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Veterinary Medicine (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Ophthalmology & Optometry (AREA)
- Public Health (AREA)
- Biophysics (AREA)
- General Physics & Mathematics (AREA)
- Radiology & Medical Imaging (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Theoretical Computer Science (AREA)
- Optics & Photonics (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Biochemistry (AREA)
- Immunology (AREA)
- Pathology (AREA)
- Investigating Or Analysing Materials By Optical Means (AREA)
- Eye Examination Apparatus (AREA)
Claims (15)
- Method for analyzing three dimensional data sets obtained from a sample over time using volume intensity projection (VIP) images generated by summing over an A-scan of optical coherence tomography (OCT) data, comprising: obtaining a first three dimensional data set (250) from the sample at a first time; creating a first volume intensity projection (VIP) image (500) from the first three dimensional data set; identifying and registering at least one first landmark (510) in the first VIP image; obtaining a second three dimensional data set (255) from the sample at a second time, different from the first time; creating a second VIP image (502) from the second three dimensional data set; identifying and registering the at least one first landmark (510) in the second VIP image; and aligning the first and second VIP images (500, 502) based on the registered at least one landmark in the first and second VIP images; wherein the first and second three dimensional data sets (250, 255) are optical coherence tomography (OCT) data sets; wherein identifying and registering the at least one first landmark (510) in the first VIP image (500) is followed by registering at least one subject area in the first three dimensional data set; and wherein aligning further comprises aligning the first and second VIP images (500, 502) based on the registered at least one first landmark to locate the registered subject area of the first three dimensional data set in the second three dimensional data set so as to allow comparison of the registered subject area in the first and second three dimensional data sets at the respective first and second times.
- Method of Claim 1, further comprising: measuring an attribute of the registered subject area of the first VIP image (500); measuring an attribute of the located subject area of the second VIP image (502); and comparing the measured attributes of the registered and located subject areas so as to allow comparison of the subject areas at the first and second times.
- Method of Claim 1, further comprising: rotating the second three dimensional data set (255) to align an axis of the second three dimensional data set with an axis of the first three dimensional data set to obtain a rotated three dimensional data set; and creating a rotated VIP image based on the rotated three dimensional data set.
- Method of Claim 3, wherein identifying and registering at least one first landmark (510) in the first VIP image (250) is followed by registering at least one subject area in the first three dimensional data set, the method further comprising: identifying and registering the at least one first landmark on the rotated three dimensional data set; and aligning the first and rotated VIP images based on the registered at least one first landmark in the first and rotated VIP images, wherein aligning comprises aligning the first and rotated VIP images based on the registered at least one first landmark to locate the registered subject area in the first three dimensional data set in the rotated three dimensional data set so as to allow comparison of the registered subject area and the located subject area of the first and rotated three dimensional data sets, respectively.
- Method of Claim 4, further comprising: measuring an attribute of the registered subject area of the first three dimensional data set; measuring an attribute of the located subject area of the rotated three dimensional data set; and comparing the measured attributes of the registered and located common subject areas so as to allow comparison of the subject areas in the first and rotated three dimensional data sets.
- System for analyzing three dimensional data sets (250, 255) obtained from a sample over time, comprising: a data set acquisition module (221) configured to obtain a first three dimensional data set (250) from the sample at a first time and a second three dimensional data set (255) from the sample at a second time, different from the first time; a volume intensity projection (VIP) module (222) configured to create a first volume intensity projection (VIP) image (500) from the first three dimensional data set and a second VIP image (502) from the second three dimensional data set, the VIP images being generated by summing over an A-scan of optical coherence tomography (OCT) data; a registration module (223) configured to identify and register at least one first landmark (510) in the first VIP image (500) and the at least one first landmark in the second VIP image (502); and an alignment module (224) configured to align the first and second VIP images (500, 502) based on the registered at least one first landmark (510) in the first and second VIP images; wherein the first and second three dimensional data sets (250, 255) are optical coherence tomography (OCT) data sets; wherein the registration module (223) is further configured to register at least one subject area in the first three dimensional data set; and wherein the alignment module (224) is further configured to align the first and second VIP images (500, 502) based on the registered at least one first landmark to locate the registered subject area of the first three dimensional data set in the second three dimensional data set so as to allow comparison of the registered subject area in the first and second VIP images at the respective first and second times.
- System of Claim 6, further comprising a comparison module (225) configured to: measure an attribute of the registered subject area of the first three dimensional data set (250); measure an attribute of the located subject area of the second three dimensional data set (255); and compare the measured attributes of the registered and located subject areas so as to allow comparison of the subject areas at the first and second times.
- System of Claim 6, further comprising: a rotation module (226) configured to rotate the second three dimensional data set (255) to align an axis of the second three dimensional data set with an axis of the first three dimensional data set (250) to obtain a rotated three dimensional data set, wherein the VIP module (222) is further configured to create a rotated VIP image based on the rotated three dimensional data set.
- System of Claim 8, wherein the registration module (223) is further configured to register at least one subject area in the first three dimensional data set and to identify and register the at least one first landmark on the rotated VIP image; and wherein the alignment module (224) is further configured to align the first and rotated VIP images based on the registered at least one landmark in the first and rotated VIP images, and to align the first and rotated VIP images based on the registered at least one first landmark to locate the registered subject area in the first three dimensional data set in the rotated three dimensional data set so as to allow comparison of the registered subject area and the located subject area of the first and rotated three dimensional data sets, respectively.
- System of Claim 9, further comprising a comparison module (225) configured to: measure an attribute of the registered subject area of the first three dimensional data set; measure an attribute of the located subject area of the rotated three dimensional data set; and compare the measured attributes of the registered and located subject areas so as to allow comparison of the subject areas in the first and rotated three dimensional data sets.
- Computer program product for analyzing three dimensional data sets obtained from a sample over time, the computer program product comprising: a computer readable storage medium having computer readable program code embodied in the medium, the computer readable program code comprising: computer readable program code configured to carry out the steps of the method of any of Claims 1 to 6.
- Method of Claim 1, further comprising: rotating the first three dimensional data set (250) about an axis; interpolating the rotated first three dimensional data set to a regular grid; creating a rotated VIP image from the rotated first three dimensional data set; identifying and registering at least one second landmark on the rotated VIP image; registering a second subject area in the rotated three dimensional data set to the landmark on the rotated VIP image; obtaining a third three dimensional data set at a third time, different from the first and second times; rotating and aligning the orientation of the third three dimensional data set with the first rotated three dimensional data set; creating a third VIP image from the rotated and aligned third three dimensional data set; identifying and registering the at least one second landmark on the third VIP image; deriving an alignment between the at least one second landmark registered on the rotated VIP and third VIP images; and locating the subject area registered in the rotated three dimensional data set in the third three dimensional data set based on the registration of the at least one second landmark derived from the rotated three dimensional data set and the alignment between the rotated and third VIP images.
- Method of Claim 12, further comprising: measuring an attribute of the registered subject area in the rotated three dimensional data set; measuring an attribute of the located subject area in the third three dimensional data set; and comparing the attributes of the registered and located subject areas of the rotated and third data sets, respectively.
- Method of Claim 1, wherein the first three dimensional data set (250) is rotated about an axis to provide a rotated first three dimensional data set, wherein the first VIP image is created from the rotated first three dimensional data set, wherein the second three dimensional data set is rotated about an axis aligned with the rotated first three dimensional data set to provide a rotated second three dimensional data set, and wherein the second VIP image is created from the rotated second three dimensional data set.
- Method of Claim 14, further comprising: measuring an attribute of the registered subject area in the rotated first three dimensional data set; measuring an attribute of the located subject area in the second three dimensional data set; and comparing the attributes of the registered and located subject areas of the rotated first and second data sets, respectively.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US70434305P | 2005-08-01 | 2005-08-01 | |
PCT/US2006/029535 WO2007016397A2 (en) | 2005-08-01 | 2006-07-31 | Methods, systems and computer program for 3d-registration of three dimensional data sets obtained by preferably optical coherence tomography based on the alignment of projection images or fundus images, respectively |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1910997A2 EP1910997A2 (de) | 2008-04-16 |
EP1910997B1 true EP1910997B1 (de) | 2019-11-20 |
Family
ID=37497864
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP06788865.1A Active EP1910997B1 (de) | 2005-08-01 | 2006-07-31 | Verfahren, systeme und computerprogramm zur 3d-erfassung von dreidimensionalen datensätzen, die durch vorzugsweise optische kohärenztomographie auf der basis der ausrichtung von projektionsbildern bzw. grundbildern erzeugt wurden |
Country Status (5)
Country | Link |
---|---|
US (2) | US7869663B2 (de) |
EP (1) | EP1910997B1 (de) |
JP (1) | JP2009503544A (de) |
CN (1) | CN101288102B (de) |
WO (1) | WO2007016397A2 (de) |
Families Citing this family (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7668342B2 (en) | 2005-09-09 | 2010-02-23 | Carl Zeiss Meditec, Inc. | Method of bioimage data processing for revealing more meaningful anatomic features of diseased tissues |
US7768652B2 (en) | 2006-03-16 | 2010-08-03 | Carl Zeiss Meditec, Inc. | Methods for mapping tissue with optical coherence tomography data |
US8223143B2 (en) | 2006-10-27 | 2012-07-17 | Carl Zeiss Meditec, Inc. | User interface for efficiently displaying relevant OCT imaging data |
US8401257B2 (en) | 2007-01-19 | 2013-03-19 | Bioptigen, Inc. | Methods, systems and computer program products for processing images generated using Fourier domain optical coherence tomography (FDOCT) |
JP5058627B2 (ja) * | 2007-02-26 | 2012-10-24 | 株式会社トプコン | 眼底観察装置 |
US8180131B2 (en) * | 2007-05-04 | 2012-05-15 | Bioptigen, Inc. | Methods, systems and computer program products for mixed-density optical coherence tomography (OCT) imaging |
DE102007023270A1 (de) * | 2007-05-18 | 2008-11-20 | Linos Photonics Gmbh & Co. Kg | Funduskamera |
US8175352B2 (en) * | 2007-09-21 | 2012-05-08 | Siemens Aktiengesellschaft | System and method for automated magnetic resonance scan prescription for optic nerves |
FR2924255A1 (fr) * | 2007-11-27 | 2009-05-29 | Gen Electric | Procede de traitement d'images cardiaques radiographiques en vue d'obtenir une image soustraite et recalee |
JP5568725B2 (ja) * | 2007-12-10 | 2014-08-13 | オプトス・ピーエルシー | 大容量の網膜像及び良く記録された眼底像に基づいてマイクロ視野測定試験を行うための方法 |
WO2010017356A2 (en) * | 2008-08-08 | 2010-02-11 | University Of Pittsburgh - Of The Commonwealth System Of Higher Education | Establishing compatibility between two-and three dimensional optical coherence tomography scans |
JP5566657B2 (ja) * | 2008-10-15 | 2014-08-06 | 株式会社東芝 | 3次元画像処理装置及びx線診断装置 |
DE102009010467A1 (de) * | 2009-02-26 | 2010-09-09 | Carl Zeiss Vision Gmbh | Verfahren und Vorrichtung zur Bestimmung der Augendrehpunktlage |
US9089331B2 (en) | 2009-07-31 | 2015-07-28 | Case Western Reserve University | Characterizing ablation lesions using optical coherence tomography (OCT) |
JP5698465B2 (ja) * | 2010-04-22 | 2015-04-08 | キヤノン株式会社 | 眼科装置、表示制御方法及びプログラム |
EP2678826B1 (de) * | 2011-02-23 | 2018-09-19 | Koninklijke Philips N.V. | Automatisierte projektion von markierungen zur erzeugung zusätzlicher korrespondenzen in einer bildregistrierung |
JP5827024B2 (ja) | 2011-03-31 | 2015-12-02 | 株式会社吉田製作所 | 光干渉断層画像生成装置の制御装置、制御方法及び制御プログラム |
EP2508842B1 (de) * | 2011-04-06 | 2014-08-13 | Agfa HealthCare N.V. | Verfahren und System zur optischen Kohärenztomographie |
US9226654B2 (en) | 2011-04-29 | 2016-01-05 | Carl Zeiss Meditec, Inc. | Systems and methods for automated classification of abnormalities in optical coherence tomography images of the eye |
JP2013075035A (ja) * | 2011-09-30 | 2013-04-25 | Canon Inc | 光断層像撮像方法、光断層像撮像装置およびプログラム |
US8944597B2 (en) | 2012-01-19 | 2015-02-03 | Carl Zeiss Meditec, Inc. | Standardized display of optical coherence tomography imaging data |
US9677869B2 (en) | 2012-12-05 | 2017-06-13 | Perimeter Medical Imaging, Inc. | System and method for generating a wide-field OCT image of a portion of a sample |
WO2014123395A1 (en) | 2013-02-08 | 2014-08-14 | Ewoosoft Co., Ltd. | Image display to display 3d image and sectional images |
US9351698B2 (en) | 2013-03-12 | 2016-05-31 | Lightlab Imaging, Inc. | Vascular data processing and image registration systems, methods, and apparatuses |
US9420945B2 (en) | 2013-03-14 | 2016-08-23 | Carl Zeiss Meditec, Inc. | User interface for acquisition, display and analysis of ophthalmic diagnostic data |
US9471975B2 (en) * | 2013-10-22 | 2016-10-18 | Bioptigen, Inc. | Methods, systems and computer program products for dynamic optical histology using optical coherence tomography |
US10307056B2 (en) | 2013-12-05 | 2019-06-04 | Bioptigen, Inc. | Systems and methods for quantitative doppler optical coherence tomography |
US10499813B2 (en) | 2014-09-12 | 2019-12-10 | Lightlab Imaging, Inc. | Methods, systems and apparatus for temporal calibration of an intravascular imaging system |
SG11201705408QA (en) * | 2014-12-30 | 2017-08-30 | Agency Science Tech & Res | Method and apparatus for aligning a two-dimensional image with a predefined axis |
US10105107B2 (en) | 2015-01-08 | 2018-10-23 | St. Jude Medical International Holding S.À R.L. | Medical system having combined and synergized data output from multiple independent inputs |
US9519949B2 (en) | 2015-03-13 | 2016-12-13 | Koninklijke Philips N.V. | Determining transformation between different coordinate systems |
US10646198B2 (en) | 2015-05-17 | 2020-05-12 | Lightlab Imaging, Inc. | Intravascular imaging and guide catheter detection methods and systems |
US10109058B2 (en) | 2015-05-17 | 2018-10-23 | Lightlab Imaging, Inc. | Intravascular imaging system interfaces and stent detection methods |
US10222956B2 (en) | 2015-05-17 | 2019-03-05 | Lightlab Imaging, Inc. | Intravascular imaging user interface systems and methods |
US9996921B2 (en) | 2015-05-17 | 2018-06-12 | LIGHTLAB IMAGING, lNC. | Detection of metal stent struts |
CN107847141B (zh) * | 2015-07-09 | 2020-08-28 | 佳能株式会社 | 用于获取与多个图像数据集的位置位移有关的信息的设备、方法和程序 |
CN107920745B (zh) | 2015-07-25 | 2022-01-28 | 光学实验室成像公司 | 血管内数据可视化方法 |
WO2017020932A1 (en) * | 2015-07-31 | 2017-02-09 | Ismeca Semiconductor Holding Sa | An assembly and method for handling components |
WO2017087821A2 (en) | 2015-11-18 | 2017-05-26 | Lightlab Imaging, Inc. | X-ray image feature detection and registration systems and methods |
EP3381014B1 (de) | 2015-11-23 | 2020-12-16 | Lightlab Imaging, Inc. | Nachweis und validierung von schatten auf intravaskulären bildern |
EP3443536B1 (de) | 2016-04-14 | 2021-12-15 | Lightlab Imaging, Inc. | Identifikation von verzweigungen in blutgefässen |
WO2017201026A1 (en) | 2016-05-16 | 2017-11-23 | Lightlab Imaging, Inc. | Intravascular absorbable stent detection and diagnostic methods and systems |
US10839515B2 (en) | 2017-04-28 | 2020-11-17 | Massachusetts Institute Of Technology | Systems and methods for generating and displaying OCT angiography data using variable interscan time analysis |
WO2018204748A1 (en) * | 2017-05-05 | 2018-11-08 | Massachusetts Institute Of Technology | Systems and methods for generating and displaying oct blood flow speeds by merging mutiple integrated spatial samplings government license rights |
WO2019014767A1 (en) | 2017-07-18 | 2019-01-24 | Perimeter Medical Imaging, Inc. | SAMPLE CONTAINER FOR STABILIZING AND ALIGNING EXCISED ORGANIC TISSUE SAMPLES FOR EX VIVO ANALYSIS |
US20220198689A1 (en) * | 2019-04-10 | 2022-06-23 | The Board Of Trustees Of The Leland Stanford Junior University | High Resolution Alignment of 3D Imaging with 2D Imaging |
CN115112701A (zh) * | 2021-03-23 | 2022-09-27 | 武汉中科牛津波谱技术有限公司 | 核磁共振样品检测系统 |
GB2613246B (en) | 2021-10-15 | 2024-05-29 | Rtx Corp | Lubrication system for turbine engine electric machine |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008053420A2 (en) * | 2006-10-31 | 2008-05-08 | Koninklijke Philips Electronics N.V. | Combined intensity projection |
Family Cites Families (64)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4580219A (en) * | 1983-05-02 | 1986-04-01 | General Electric Company | Method for reducing image artifacts due to projection measurement inconsistencies |
US5226113A (en) * | 1989-10-30 | 1993-07-06 | General Electric Company | Method and apparatus for volumetric projection rendering using reverse ray casting |
JP2808773B2 (ja) * | 1990-01-09 | 1998-10-08 | 株式会社日立製作所 | 階調変換自動化装置 |
US5204627A (en) * | 1991-03-14 | 1993-04-20 | Wisconsin Alumni Research Foundation | Adaptive NMR angiographic reprojection method |
US5233299A (en) * | 1991-03-25 | 1993-08-03 | General Electric Company | Projection methods for producing two-dimensional images from three-dimensional data |
US5297551A (en) * | 1992-08-06 | 1994-03-29 | Picker International, Inc. | Weighted ray projection imaging for MR angiography |
US5368033A (en) * | 1993-04-20 | 1994-11-29 | North American Philips Corporation | Magnetic resonance angiography method and apparatus employing an integration projection |
JPH09512937A (ja) * | 1994-09-06 | 1997-12-22 | ザ リサーチ ファウンデーション オブ ステイト ユニヴァーシティ オブ ニューヨーク | ボリュームを実時間で視覚化する装置及び方法 |
DE19620371A1 (de) * | 1996-05-21 | 1997-12-04 | Philips Patentverwaltung | Röntgenaufnahme-Verfahren |
US5946425A (en) * | 1996-06-03 | 1999-08-31 | Massachusetts Institute Of Technology | Method and apparatus for automatic alingment of volumetric images containing common subject matter |
US5912720A (en) * | 1997-02-13 | 1999-06-15 | The Trustees Of The University Of Pennsylvania | Technique for creating an ophthalmic augmented reality environment |
US6102864A (en) * | 1997-05-07 | 2000-08-15 | General Electric Company | Three-dimensional ultrasound imaging of velocity and power data using average or median pixel projections |
US6249616B1 (en) * | 1997-05-30 | 2001-06-19 | Enroute, Inc | Combining digital images based on three-dimensional relationships between source image data sets |
US6094163A (en) * | 1998-01-21 | 2000-07-25 | Min-I James Chang | Ins alignment method using a doppler sensor and a GPS/HVINS |
AU6145499A (en) * | 1998-09-17 | 2000-04-03 | Brigham And Women's Hospital | Method and apparatus for projecting mr angiographic data |
US6112112A (en) * | 1998-09-18 | 2000-08-29 | Arch Development Corporation | Method and system for the assessment of tumor extent in magnetic resonance images |
US6904163B1 (en) | 1999-03-19 | 2005-06-07 | Nippon Telegraph And Telephone Corporation | Tomographic image reading method, automatic alignment method, apparatus and computer readable medium |
JP4408988B2 (ja) * | 1999-05-31 | 2010-02-03 | 株式会社東芝 | 超音波診断装置 |
US6819318B1 (en) * | 1999-07-23 | 2004-11-16 | Z. Jason Geng | Method and apparatus for modeling via a three-dimensional image mosaic system |
US6671538B1 (en) * | 1999-11-26 | 2003-12-30 | Koninklijke Philips Electronics, N.V. | Interface system for use with imaging devices to facilitate visualization of image-guided interventional procedure planning |
FR2802002B1 (fr) * | 1999-12-02 | 2002-03-01 | Ge Medical Syst Sa | Procede de recalage automatique d'images tridimensionnelles |
US7224357B2 (en) * | 2000-05-03 | 2007-05-29 | University Of Southern California | Three-dimensional modeling based on photographic images |
US6909792B1 (en) * | 2000-06-23 | 2005-06-21 | Litton Systems, Inc. | Historical comparison of breast tissue by image processing |
US6907281B2 (en) * | 2000-09-07 | 2005-06-14 | Ge Medical Systems | Fast mapping of volumetric density data onto a two-dimensional screen |
US6459094B1 (en) * | 2000-12-20 | 2002-10-01 | Eastman Kodak Company | Method for stitching partial radiation images to reconstruct a full image |
US7020318B2 (en) * | 2001-05-22 | 2006-03-28 | Advanced Mri Technologies, Llc | Translucent intensity projection imaging |
US7219034B2 (en) * | 2001-09-13 | 2007-05-15 | Opnet Technologies, Inc. | System and methods for display of time-series data distribution |
DE10149556A1 (de) * | 2001-10-08 | 2003-04-24 | Siemens Ag | Verfahren zur Erzeugung eines zweidimensionalen Bildes aus einem 3D-Datensatz eines Tomographie-Geräts und medizinisches Tomographie-Gerät |
US7010158B2 (en) * | 2001-11-13 | 2006-03-07 | Eastman Kodak Company | Method and apparatus for three-dimensional scene modeling and reconstruction |
US6490335B1 (en) * | 2001-11-21 | 2002-12-03 | Ge Medical Systems Global Technologies Company Llc | Helical segment image reconstruction |
US6885764B2 (en) * | 2001-11-21 | 2005-04-26 | Ge Medical Systems Global Technology Company, Llc | High Speed Z-smoothing method and apparatus for CT imaging system |
US7355716B2 (en) * | 2002-01-24 | 2008-04-08 | The General Hospital Corporation | Apparatus and method for ranging and noise reduction of low coherence interferometry LCI and optical coherence tomography OCT signals by parallel detection of spectral bands |
US7532750B2 (en) * | 2002-04-17 | 2009-05-12 | Sony Corporation | Image processing apparatus and method, program, and image processing system |
CA2390072C (en) * | 2002-06-28 | 2018-02-27 | Adrian Gh Podoleanu | Optical mapping apparatus with adjustable depth resolution and multiple functionality |
US7321699B2 (en) * | 2002-09-06 | 2008-01-22 | Rytec Corporation | Signal intensity range transformation apparatus and method |
US7170517B2 (en) * | 2002-11-27 | 2007-01-30 | The Board Of Trustees Of The Leland Stanford Junior University | Curved-slab maximum intensity projections |
US7570791B2 (en) * | 2003-04-25 | 2009-08-04 | Medtronic Navigation, Inc. | Method and apparatus for performing 2D to 3D registration |
WO2004111929A2 (en) * | 2003-05-28 | 2004-12-23 | Duke University | Improved system for fourier domain optical coherence tomography |
EP1491150A1 (de) * | 2003-06-27 | 2004-12-29 | Universite Libre De Bruxelles | Verfahren zur Erfassung Informationen um eine Schraube in einem Loch eines metallischen Gegenstand zu verriegeln |
US7620229B2 (en) * | 2003-08-14 | 2009-11-17 | Fujifilm Corporation | Method and apparatus for aiding image interpretation and computer-readable recording medium storing program therefor |
US7204640B2 (en) * | 2003-08-29 | 2007-04-17 | Accuray, Inc. | Apparatus and method for registering 2D radiographic images with images reconstructed from 3D scan data |
US7935055B2 (en) * | 2003-09-19 | 2011-05-03 | Siemens Medical Solutions Usa, Inc. | System and method of measuring disease severity of a patient before, during and after treatment |
US20050089213A1 (en) * | 2003-10-23 | 2005-04-28 | Geng Z. J. | Method and apparatus for three-dimensional modeling via an image mosaic system |
US20050096515A1 (en) * | 2003-10-23 | 2005-05-05 | Geng Z. J. | Three-dimensional surface image guided adaptive therapy system |
US7486812B2 (en) * | 2003-11-25 | 2009-02-03 | Icad, Inc. | Shape estimates and temporal registration of lesions and nodules |
US7145661B2 (en) * | 2003-12-31 | 2006-12-05 | Carl Zeiss Meditec, Inc. | Efficient optical coherence tomography (OCT) system and method for rapid imaging in three dimensions |
US7142633B2 (en) * | 2004-03-31 | 2006-11-28 | General Electric Company | Enhanced X-ray imaging system and method |
CN1961340B (zh) * | 2004-05-28 | 2012-08-15 | 皇家飞利浦电子股份有限公司 | 用于图像处理的方法、计算机程序、设备和成像系统 |
US7616799B2 (en) * | 2004-06-18 | 2009-11-10 | Siemens Medical Solutions Usa, Inc. | System and method for monitoring disease progression or response to therapy using multi-modal visualization |
DE102004032914A1 (de) * | 2004-07-07 | 2006-02-02 | Siemens Ag | Verfahren zur Bestimmung einer Koordinatentransformation von Bildkoordinaten verschiedener Bilder eines Objekts |
WO2006054191A1 (en) * | 2004-11-17 | 2006-05-26 | Koninklijke Philips Electronics N.V. | Improved elastic image registration functionality |
US7301644B2 (en) * | 2004-12-02 | 2007-11-27 | University Of Miami | Enhanced optical coherence tomography for anatomical mapping |
EP1844446B1 (de) * | 2005-01-28 | 2011-03-16 | Koninklijke Philips Electronics N.V. | Benutzerschnittstelle zur bewegungsanalyse bei kinematischen mr-studien |
US10492749B2 (en) * | 2005-05-03 | 2019-12-03 | The Regents Of The University Of California | Biopsy systems for breast computed tomography |
US7623736B2 (en) * | 2005-05-06 | 2009-11-24 | Stereotaxis, Inc. | Registration of three dimensional image data with patient in a projection imaging system |
DE102005023195A1 (de) * | 2005-05-19 | 2006-11-23 | Siemens Ag | Verfahren zur Erweiterung des Darstellungsbereiches einer Volumenaufnahme eines Objektbereiches |
US7653263B2 (en) * | 2005-06-30 | 2010-01-26 | General Electric Company | Method and system for volumetric comparative image analysis and diagnosis |
US7391520B2 (en) * | 2005-07-01 | 2008-06-24 | Carl Zeiss Meditec, Inc. | Fourier domain optical coherence tomography employing a swept multi-wavelength laser and a multi-channel receiver |
US20070066880A1 (en) * | 2005-09-09 | 2007-03-22 | Warren Lee | Image-based probe guidance system |
US7817836B2 (en) * | 2006-06-05 | 2010-10-19 | Varian Medical Systems, Inc. | Methods for volumetric contouring with expert guidance |
US20070291277A1 (en) * | 2006-06-20 | 2007-12-20 | Everett Matthew J | Spectral domain optical coherence tomography system |
KR100763239B1 (ko) * | 2006-06-27 | 2007-10-04 | 삼성전자주식회사 | 디스플레이되는 영상의 시인성 향상을 위한 영상 처리 장치및 방법 |
EP2066225B1 (de) * | 2006-09-26 | 2014-08-27 | Oregon Health and Science University | In-vivo-struktur- und flussdarstellung |
DE102009014764B4 (de) * | 2008-05-28 | 2019-05-23 | Siemens Healthcare Gmbh | Verfahren zur Visualisierung tubulärer anatomischer Strukturen, insbesondere Gefäßstrukturen, in medizinischen 3D-Bildaufnahmen |
-
2006
- 2006-07-31 WO PCT/US2006/029535 patent/WO2007016397A2/en active Application Filing
- 2006-07-31 JP JP2008525059A patent/JP2009503544A/ja active Pending
- 2006-07-31 CN CN2006800366115A patent/CN101288102B/zh active Active
- 2006-07-31 US US11/461,083 patent/US7869663B2/en active Active
- 2006-07-31 EP EP06788865.1A patent/EP1910997B1/de active Active
-
2010
- 2010-12-03 US US12/959,722 patent/US8442356B2/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008053420A2 (en) * | 2006-10-31 | 2008-05-08 | Koninklijke Philips Electronics N.V. | Combined intensity projection |
Non-Patent Citations (2)
Title |
---|
DR. NEUMAN, JOEL, M.D: "Volume Intensity Projection - FadeMIP", 29 July 2005 (2005-07-29), Retrieved from the Internet <URL:http://clinical.netforum.healthcare.philips.com/global/Explore/Presentations/CT/Volume-Intensity-Projection-FadeMIP> [retrieved on 20160414] * |
LI A ET AL: "Methods for efficient, high quality volume resampling in the frequency domain", IEEE VISUALIZATION 2004 - PROCEEDINGS, VIS 2004 - IEEE VISUALIZATION 2004 - PROCEEDINGS, VIS 2004 2004 INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS INC. US, 2004, pages 3 - 10 * |
Also Published As
Publication number | Publication date |
---|---|
CN101288102A (zh) | 2008-10-15 |
EP1910997A2 (de) | 2008-04-16 |
CN101288102B (zh) | 2013-03-20 |
WO2007016397A2 (en) | 2007-02-08 |
US8442356B2 (en) | 2013-05-14 |
US7869663B2 (en) | 2011-01-11 |
US20110075946A1 (en) | 2011-03-31 |
WO2007016397A3 (en) | 2008-05-08 |
US20070025642A1 (en) | 2007-02-01 |
JP2009503544A (ja) | 2009-01-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1910997B1 (de) | Verfahren, systeme und computerprogramm zur 3d-erfassung von dreidimensionalen datensätzen, die durch vorzugsweise optische kohärenztomographie auf der basis der ausrichtung von projektionsbildern bzw. grundbildern erzeugt wurden | |
US10271822B2 (en) | Sensor coordinate calibration in an ultrasound system | |
US9418423B2 (en) | Motion correction and normalization of features in optical coherence tomography | |
US10219782B2 (en) | Position correlated ultrasonic imaging | |
US9514513B2 (en) | Establishing compatibility between two- and three-dimensional optical coherence tomography scans | |
US8319974B2 (en) | Enhanced optical coherence tomography for anatomical mapping | |
US8180131B2 (en) | Methods, systems and computer program products for mixed-density optical coherence tomography (OCT) imaging | |
WO2010129544A1 (en) | Methods and computer program products for quantitative three-dimensional image correction and clinical parameter computation in optical coherence tomography | |
US9471975B2 (en) | Methods, systems and computer program products for dynamic optical histology using optical coherence tomography | |
CN105011900B (zh) | 用于生成宽视场光学相干体层析图的方法和装置 | |
US11278259B2 (en) | Thrombus detection during scanning | |
JP2021007666A (ja) | 光コヒーレンストモグラフィ(oct)イメージング方法、octデータ処理方法、oct装置、その制御方法、octデータ処理装置、その制御方法、プログラム、及び、記録媒体 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20080225 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
AX | Request for extension of the european patent |
Extension state: AL BA HR MK RS |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G06T 7/00 20060101AFI20080630BHEP |
|
DAX | Request for extension of the european patent (deleted) | ||
R17D | Deferred search report published (corrected) |
Effective date: 20080508 |
|
17Q | First examination report despatched |
Effective date: 20101126 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R079 Ref document number: 602006058846 Country of ref document: DE Free format text: PREVIOUS MAIN CLASS: G06T0007000000 Ipc: G06T0007330000 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: A61B 3/12 20060101ALI20190228BHEP Ipc: G06T 7/33 20170101AFI20190228BHEP Ipc: A61B 3/10 20060101ALI20190228BHEP |
|
INTG | Intention to grant announced |
Effective date: 20190322 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602006058846 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 1205028 Country of ref document: AT Kind code of ref document: T Effective date: 20191215 |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20191120 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191120 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191120 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191120 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191120 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191120 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200221 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191120 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200220 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200320 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191120 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191120 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200412 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191120 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191120 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1205028 Country of ref document: AT Kind code of ref document: T Effective date: 20191120 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602006058846 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191120 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20200821 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191120 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191120 Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191120 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191120 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191120 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
REG | Reference to a national code |
Ref country code: BE Ref legal event code: MM Effective date: 20200731 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200731 Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200731 Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200731 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200731 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20200731 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191120 Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20191120 |
|
P01 | Opt-out of the competence of the unified patent court (upc) registered |
Effective date: 20230414 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20240730 Year of fee payment: 19 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20240724 Year of fee payment: 19 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20240725 Year of fee payment: 19 |