WO2011048748A1 - Fundus image processing apparatus and fundus observation apparatus - Google Patents

Fundus image processing apparatus and fundus observation apparatus

Info

Publication number: WO2011048748A1
Authority: WO (WIPO, PCT)
Prior art keywords: fundus, image, cross-sectional position
Application number: PCT/JP2010/005633
Other languages: English (en), French (fr), Japanese (ja)
Inventor: 林 健史
Original Assignee: 株式会社トプコン
Application filed by 株式会社トプコン
Publication of WO2011048748A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/102 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for optical coherence tomography [OCT]

Description

  • The present invention relates to a fundus image processing apparatus that processes a fundus photographed image obtained by photographing the fundus and a three-dimensional image of the fundus formed using optical coherence tomography (OCT), and to a fundus observation apparatus capable of forming such a fundus photographed image and/or a three-dimensional image of the fundus.
  • OCT, which forms an image representing the surface form and internal form of an object to be measured using a light beam from a laser light source or the like, has attracted attention. Since OCT is not invasive to the human body, unlike X-ray CT, it is expected to be applied particularly in the medical and biological fields. In the field of ophthalmology, for example, apparatuses for forming images of the fundus, the cornea, and the like have reached a practical stage.
  • Patent Document 1 discloses an apparatus to which OCT is applied. In this apparatus, the measuring arm scans the object with a rotary turning mirror (galvanometer mirror), a reference mirror is installed on the reference arm, and an interferometer is provided at the exit so that the intensity of the interference light produced by the light beams from the measuring arm and the reference arm is analyzed by a spectrometer. Further, the reference arm is configured to change the phase of the reference light beam stepwise in discontinuous values.
  • The apparatus of Patent Document 1 uses the technique of so-called "Fourier domain OCT". That is, a beam of low-coherence light is irradiated onto the object to be measured, the reflected light and the reference light are superimposed to generate interference light, and the spectral intensity distribution of the interference light is acquired and subjected to a Fourier transform, whereby the form of the object to be measured in the depth direction (z direction) is imaged. This type of technique is also called the spectral domain type.
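  • As a concrete illustration of the Fourier domain principle just described (a minimal sketch, not the specific processing of Patent Document 1), the following Python/NumPy function turns one spectral interferogram into a depth profile (A-scan); the assumption that the spectrum is already sampled uniformly in wavenumber is ours:

    import numpy as np

    def a_scan_from_spectrum(spectrum, background):
        """Reconstruct a depth profile (A-scan) from one spectral interferogram.

        Assumes the spectrum is already sampled on a uniform wavenumber (k)
        grid; real instruments first resample from wavelength to wavenumber.
        """
        # Remove the non-interferometric background (DC terms) and window
        # the fringes to suppress spectral leakage.
        fringes = (spectrum - background) * np.hanning(len(spectrum))
        # The inverse Fourier transform of the k-space fringes gives the
        # reflectivity profile along depth (z); keep one half, since the
        # real-valued input makes the transform conjugate-symmetric.
        profile = np.abs(np.fft.ifft(fringes))
        return profile[: len(spectrum) // 2]

    # A two-dimensional tomogram (B-scan) is many A-scans side by side:
    # b_scan = np.stack([a_scan_from_spectrum(s, bg) for s in spectra], axis=1)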
  • The apparatus described in Patent Document 1 further includes a galvanometer mirror that scans the light beam (signal light), thereby forming an image of a desired measurement target region of the object to be measured. Since this apparatus is configured to scan the light beam only in one direction (x direction) orthogonal to the z direction, the image formed by this apparatus is a two-dimensional tomogram in the depth direction (z direction) along the scanning direction (x direction) of the light beam.
  • There is also disclosed a technique in which a plurality of two-dimensional tomographic images in the horizontal direction are formed by scanning the signal light in the horizontal direction (x direction) and the vertical direction (y direction), and three-dimensional tomographic information of the measurement range is acquired and imaged based on the plurality of tomographic images. Examples of such three-dimensional imaging include a method of arranging a plurality of tomographic images side by side in the vertical direction (referred to as stack data) and a method of rendering a plurality of tomographic images to form a three-dimensional image.
  • Patent Documents 3 and 4 disclose other types of OCT apparatuses.
  • Patent Document 3 describes an OCT apparatus that sweeps the wavelength of the light applied to the object to be measured, acquires a spectral intensity distribution based on the interference light obtained by superimposing the reflected light of each wavelength and the reference light, and images the form of the object to be measured by performing a Fourier transform on the distribution. Such an OCT apparatus is called the swept source type. The swept source type is a kind of Fourier domain type.
  • Patent Document 4 describes an OCT apparatus that irradiates the object to be measured with light having a predetermined beam diameter and analyzes the components of the interference light obtained by superimposing the reflected light and the reference light, thereby forming an image of the object to be measured in a cross-section orthogonal to the traveling direction of the light. Such an OCT apparatus is called the full-field type or the en-face type.
  • Patent Document 5 discloses a configuration in which OCT is applied to the ophthalmic field.
  • Prior to the application of OCT, fundus cameras, slit lamps, and the like were used as devices for observing the eye to be examined (see, for example, Patent Documents 6 and 7).
  • A fundus camera is a device that photographs the fundus by illuminating the eye to be examined with illumination light and receiving the fundus reflection light.
  • A slit lamp is a device that acquires an image of an optical section of the cornea by cutting the cornea with slit light.
  • An apparatus using OCT has an advantage over a fundus camera or the like in that high-definition images can be acquired and, further, in that tomographic images and three-dimensional images can be acquired.
  • Since an apparatus using OCT can be applied to the observation of various parts of the eye to be examined and can acquire high-definition images, it has been applied to the diagnosis of various ophthalmic diseases.
  • In image-based diagnosis, there is a diagnostic method (comparative observation) that grasps changes over time in the state of a site of interest (an affected area, a characteristic site, etc.) by comparing images acquired at different timings.
  • For example, comparative observation is performed in the diagnosis of glaucoma and macular diseases.
  • In comparative observation, a physical quantity related to the site of interest may also be compared quantitatively. Examples of physical quantities to be compared include the size of the affected area (radius, diameter, area, volume, etc.) and the cup size, disc size, and rim size of the optic nerve head.
  • The present invention has been made to solve the above-described problems, and an object thereof is to provide a fundus image processing apparatus and a fundus observation apparatus that enable comparative observation of the fundus to be performed with high accuracy.
  • The invention according to claim 1 is a fundus image processing apparatus comprising: storage means for storing in advance a first fundus photographed image of an eye to be examined and a first three-dimensional image of the fundus of the eye acquired at a first examination timing, and a second fundus photographed image of the eye and a second three-dimensional image of the fundus acquired at a second examination timing different from the first examination timing; first calculating means for calculating, based on the first fundus photographed image and the second fundus photographed image, a positional deviation amount in the fundus surface direction between the first fundus photographed image and the second fundus photographed image; designating means for designating, based on the calculated positional deviation amount, a cross-sectional position at substantially the same position on the fundus depicted in each of the first fundus photographed image and the second fundus photographed image; and second calculating means for calculating, based on a first tomographic image at the cross-sectional position of the first three-dimensional image corresponding to the cross-sectional position designated in the first fundus photographed image and on a second tomographic image at the cross-sectional position of the second three-dimensional image corresponding to the cross-sectional position designated in the second fundus photographed image, an inclination deviation amount between the first tomographic image and the second tomographic image in a plane spanned by the direction along the designated cross-sectional position and the fundus depth direction.
  • The invention according to claim 2 is the fundus image processing apparatus according to claim 1, wherein the designating means designates a plurality of the cross-sectional positions, and the second calculating means calculates, for each of the cross-sectional positions of the first and second three-dimensional images corresponding to the plurality of designated cross-sectional positions, the inclination deviation amount in the plane spanned by the direction along that cross-sectional position and the fundus depth direction, based on the first tomographic image and the second tomographic image at that cross-sectional position.
  • The invention according to claim 3 is the fundus image processing apparatus according to claim 2, wherein the designating means designates, as the plurality of cross-sectional positions, a pair of linear cross-sectional positions intersecting at right angles, and the second calculating means calculates the inclination deviation amount for each of the cross-sectional positions of the first and second three-dimensional images corresponding to the pair of cross-sectional positions.
  • The invention according to claim 4 is the fundus image processing apparatus according to claim 2, wherein the designating means designates, as the plurality of cross-sectional positions, two or more linear cross-sectional positions arranged radially and intersecting each other, and the second calculating means calculates the inclination deviation amount for each of the cross-sectional positions of the first and second three-dimensional images corresponding to the two or more cross-sectional positions; the apparatus further comprises correction means that selects the maximum value among the inclination deviation amounts corresponding to the two or more cross-sectional positions calculated by the second calculating means and corrects the deviation in inclination between the first three-dimensional image and the second three-dimensional image, in the plane spanned by the direction along the corresponding cross-sectional position and the fundus depth direction, so as to cancel the selected inclination deviation amount.
  • The invention according to claim 5 is the fundus image processing apparatus according to any one of claims 1 to 3, further comprising alignment means for performing alignment between the first three-dimensional image and the second three-dimensional image so as to cancel the positional deviation amount calculated by the first calculating means and the inclination deviation amount calculated by the second calculating means.
  • The invention according to claim 6 is the fundus image processing apparatus according to any one of claims 1 to 3, further comprising analyzing means for analyzing, based on the positional deviation amount calculated by the first calculating means and/or the inclination deviation amount calculated by the second calculating means, the first three-dimensional image to calculate a first value of a predetermined physical quantity and the second three-dimensional image to calculate a second value of the predetermined physical quantity.
  • The invention according to claim 7 is the fundus image processing apparatus according to claim 1, wherein the first calculating means calculates, as the positional deviation amount, a parallel movement amount and a rotational movement amount in the fundus surface direction.
  • The invention according to claim 8 is a fundus observation apparatus comprising: photographing means for photographing the fundus of an eye to be examined; an optical system that divides low-coherence light into signal light and reference light, and generates and detects interference light by superimposing the signal light passing through the fundus and the reference light passing through a reference optical path; image forming means for forming a three-dimensional image of the fundus based on the detection result of the interference light; storage means for storing a first fundus photographed image of the eye photographed by the photographing means at a first examination timing, a first three-dimensional image of the fundus formed by the image forming means at the first examination timing, a second fundus photographed image of the eye photographed at a second examination timing different from the first examination timing, and a second three-dimensional image of the fundus formed at the second examination timing; first calculating means for calculating, based on the first fundus photographed image and the second fundus photographed image, a positional deviation amount in the fundus surface direction between the first fundus photographed image and the second fundus photographed image; designating means for designating, based on the calculated positional deviation amount, a cross-sectional position at substantially the same position on the fundus depicted in each of the first fundus photographed image and the second fundus photographed image; and second calculating means for calculating, based on a first tomographic image at the cross-sectional position of the first three-dimensional image corresponding to the cross-sectional position designated in the first fundus photographed image and on a second tomographic image at the cross-sectional position of the second three-dimensional image corresponding to the cross-sectional position designated in the second fundus photographed image, an inclination deviation amount between the first tomographic image and the second tomographic image in a plane spanned by the direction along the designated cross-sectional position and the fundus depth direction.
  • The invention according to claim 9 is a fundus observation apparatus comprising: an optical system that divides low-coherence light into signal light and reference light, and generates and detects interference light by superimposing the signal light passing through the fundus of an eye to be examined and the reference light passing through a reference optical path; image forming means for forming a three-dimensional image of the fundus based on the detection result of the interference light; storage means for storing in advance a first fundus photographed image of the eye acquired at a first examination timing and a second fundus photographed image of the eye acquired at a second examination timing different from the first examination timing, and for storing a first three-dimensional image of the fundus formed by the image forming means at the first examination timing and a second three-dimensional image of the fundus formed at the second examination timing; first calculating means for calculating, based on the first fundus photographed image and the second fundus photographed image, a positional deviation amount in the fundus surface direction between the first fundus photographed image and the second fundus photographed image; designating means for designating, based on the calculated positional deviation amount, a cross-sectional position at substantially the same position on the fundus depicted in each of the first fundus photographed image and the second fundus photographed image; and second calculating means for calculating, based on a first tomographic image at the cross-sectional position of the first three-dimensional image corresponding to the cross-sectional position designated in the first fundus photographed image and on a second tomographic image at the cross-sectional position of the second three-dimensional image corresponding to the cross-sectional position designated in the second fundus photographed image, an inclination deviation amount between the first tomographic image and the second tomographic image in a plane spanned by the direction along the designated cross-sectional position and the fundus depth direction.
  • The invention according to claim 10 is a fundus observation apparatus comprising: photographing means for photographing the fundus of an eye to be examined; storage means for storing in advance a first three-dimensional image of the fundus acquired at a first examination timing and a second three-dimensional image of the fundus acquired at a second examination timing different from the first examination timing, and for storing a first fundus photographed image of the eye photographed by the photographing means at the first examination timing and a second fundus photographed image of the eye photographed at the second examination timing; first calculating means for calculating, based on the first fundus photographed image and the second fundus photographed image, a positional deviation amount in the fundus surface direction between the first fundus photographed image and the second fundus photographed image; designating means for designating, based on the calculated positional deviation amount, a cross-sectional position at substantially the same position on the fundus depicted in each of the first fundus photographed image and the second fundus photographed image; and second calculating means for calculating, based on a first tomographic image at the cross-sectional position of the first three-dimensional image corresponding to the cross-sectional position designated in the first fundus photographed image and on a second tomographic image at the cross-sectional position of the second three-dimensional image corresponding to the cross-sectional position designated in the second fundus photographed image, an inclination deviation amount between the first tomographic image and the second tomographic image in a plane spanned by the direction along the designated cross-sectional position and the fundus depth direction.
  • According to the present invention, it is possible to obtain the positional deviation amount and the inclination deviation amount that exist between the images to be compared in comparative observation of the fundus. Therefore, by performing comparative observation with reference to the obtained positional deviation amount and inclination deviation amount, the images to be compared can be treated as images acquired under substantially the same conditions. This makes it possible to perform comparative observation of the fundus with high accuracy.
  • This fundus observation device is equipped with the fundus image processing device. That is, this fundus observation device itself acquires part or all of the images to be processed by the fundus image processing device.
  • Hereinafter, an embodiment of the fundus observation device will be described in detail.
  • the fundus oculi observation device forms a tomographic image or a three-dimensional image of the fundus using OCT.
  • images acquired by OCT may be collectively referred to as OCT images.
  • a measurement operation for forming an OCT image may be referred to as OCT measurement.
  • In this description, a fundus observation device capable of acquiring both a fundus OCT image and a fundus photographed image, like the device disclosed in Patent Document 5, is taken up.
  • However, the fundus observation device may be configured to acquire only one of the OCT image and the fundus photographed image. That is, the present invention also includes the following two types of devices: (1) a fundus observation device (OCT device) that by itself can acquire only an OCT image, configured to receive and store a fundus photographed image acquired by another device (a fundus camera, a slit lamp microscope (slit lamp), a scanning laser ophthalmoscope (SLO), etc.); and (2) a fundus observation device (a fundus camera, a slit lamp microscope, a scanning laser ophthalmoscope, etc.) that by itself can acquire only a fundus photographed image, configured to receive and store an OCT image acquired by another device (an OCT device).
  • the fundus oculi observation device 1 includes a fundus camera unit 2, an OCT unit 100, and an arithmetic control unit 200.
  • The fundus camera unit 2 has almost the same optical system as a conventional fundus camera.
  • the OCT unit 100 is provided with an optical system for acquiring an OCT image of the fundus.
  • the arithmetic control unit 200 includes a computer that executes various arithmetic processes and control processes.
  • the arithmetic control unit 200 is an example of the “fundus image processing apparatus” of the present invention.
  • the fundus camera unit 2 shown in FIG. 1 is provided with an optical system for acquiring a two-dimensional image (fundus photographed image) representing the surface form of the fundus oculi Ef of the eye E to be examined.
  • the fundus photographed image includes an observation image and a photographed image.
  • the observation image is, for example, a monochrome moving image formed at a predetermined frame rate using near infrared light.
  • The photographed image is, for example, a color image obtained by flash emission of visible light.
  • The fundus camera unit 2 may be configured to be able to acquire images other than these, such as a fluorescein fluorescent image, an indocyanine green fluorescent image, and an autofluorescent image.
  • the fundus photographed image used in the present invention is mainly a photographed image.
  • the fundus camera unit 2 is an example of the “photographing means” of the present invention.
  • The fundus camera unit 2 is provided with a chin rest and a forehead rest for supporting the subject's face so that the face does not move, as in a conventional fundus camera. The fundus camera unit 2 is further provided with an illumination optical system 10 and a photographing optical system 30, as in a conventional fundus camera.
  • the illumination optical system 10 irradiates the fundus oculi Ef with illumination light.
  • the photographing optical system 30 guides the fundus reflection light of the illumination light to the imaging device (CCD image sensors 35 and 38).
  • The photographing optical system 30 also guides the signal light coming from the OCT unit 100 to the fundus oculi Ef and guides the signal light returning from the fundus oculi Ef to the OCT unit 100.
  • the observation light source 11 of the illumination optical system 10 is composed of, for example, a halogen lamp.
  • The light (observation illumination light) output from the observation light source 11 is reflected by the reflection mirror 12 having a curved reflection surface, passes through the condensing lens 13, and becomes near-infrared light after passing through the visible cut filter 14. Further, the observation illumination light is once converged in the vicinity of the photographing light source 15, reflected by the mirror 16, and passes through the relay lenses 17 and 18, the diaphragm 19, and the relay lens 20. Then, the observation illumination light is reflected at the peripheral part (the region surrounding the hole) of the perforated mirror 21 and illuminates the fundus oculi Ef via the objective lens 22.
  • The fundus reflection light of the observation illumination light is refracted by the objective lens 22, passes through the hole formed in the central region of the perforated mirror 21, passes through the dichroic mirror 55 and the focusing lens 31, and is reflected by the dichroic mirror 32. Further, the fundus reflection light passes through the half mirror 40, is reflected by the dichroic mirror 33, and forms an image on the light receiving surface of the CCD image sensor 35 by the condenser lens 34.
  • the CCD image sensor 35 detects fundus reflected light at a predetermined frame rate, for example.
  • the display device 3 displays an image (observation image) K based on fundus reflected light detected by the CCD image sensor 35.
  • the photographing light source 15 is constituted by, for example, a xenon lamp.
  • the light (imaging illumination light) output from the imaging light source 15 is applied to the fundus oculi Ef through the same path as the observation illumination light.
  • The fundus reflection light of the imaging illumination light is guided to the dichroic mirror 33 through the same path as the observation illumination light, passes through the dichroic mirror 33, is reflected by the mirror 36, and forms an image on the light receiving surface of the CCD image sensor 38 by the condenser lens 37.
  • On the display device 3, an image (photographed image) H based on the fundus reflection light detected by the CCD image sensor 38 is displayed.
  • The display device 3 that displays the observation image K and the display device 3 that displays the photographed image H may be the same or different.
  • the LCD 39 displays a fixation target and a visual target for visual acuity measurement.
  • the fixation target is a target for fixing the eye E to be examined, and is used at the time of fundus photographing or OCT measurement.
  • A part of the light output from the LCD 39 is reflected by the half mirror 40, reflected by the dichroic mirror 32, passes through the focusing lens 31 and the dichroic mirror 55, passes through the hole of the perforated mirror 21, is refracted by the objective lens 22, and is projected onto the fundus oculi Ef.
  • the fixation position of the eye E can be changed by changing the display position of the fixation target on the screen of the LCD 39.
  • Examples of the fixation position of the eye E include, as in a conventional fundus camera, a position for acquiring an image centered on the macula of the fundus oculi Ef, a position for acquiring an image centered on the optic disc, and a position for acquiring an image centered on the fundus center between the macula and the optic disc.
  • the fundus camera unit 2 is provided with an alignment optical system 50 and a focus optical system 60 as in the conventional fundus camera.
  • the alignment optical system 50 generates a visual target (alignment visual target) for performing alignment (alignment) of the apparatus optical system with respect to the eye E.
  • the focus optical system 60 generates a visual target (split visual target) for focusing on the fundus oculi Ef.
  • the light (alignment light) output from the LED (Light Emitting Diode) 51 of the alignment optical system 50 is reflected by the dichroic mirror 55 via the apertures 52 and 53 and the relay lens 54, and passes through the hole portion of the perforated mirror 21. It passes through and is projected onto the cornea of the eye E by the objective lens 22.
  • The corneal reflection light of the alignment light passes through the objective lens 22 and the hole of the perforated mirror 21; a part of it passes through the dichroic mirror 55 and the focusing lens 31, is reflected by the dichroic mirror 32, passes through the half mirror 40, is reflected by the dichroic mirror 33, and is projected onto the light receiving surface of the CCD image sensor 35 by the condenser lens 34.
  • a light reception image (alignment target) by the CCD image sensor 35 is displayed on the display device 3 together with the observation image K.
  • The user performs alignment by carrying out the same operation as with a conventional fundus camera. Alternatively, the arithmetic control unit 200 may perform alignment by analyzing the position of the alignment target and moving the optical system.
  • When focus adjustment is performed, the reflecting surface of the reflection rod 67 is set obliquely in the optical path of the illumination optical system 10.
  • The light (focus light) output from the LED 61 of the focus optical system 60 passes through the relay lens 62, is separated into two light beams by the split target plate 63, passes through the two-hole diaphragm 64, is reflected by the mirror 65, is once imaged on the reflecting surface of the reflection rod 67 by the condenser lens 66, and is reflected there. Further, the focus light passes through the relay lens 20, is reflected by the perforated mirror 21, and forms an image on the fundus oculi Ef via the objective lens 22.
  • the fundus reflection light of the focus light is detected by the CCD image sensor 35 through the same path as the corneal reflection light of the alignment light.
  • a light reception image (split target) by the CCD image sensor 35 is displayed on the display device 3 together with the observation image.
  • The arithmetic control unit 200 analyzes the position of the split target and moves the focusing lens 31 and the focus optical system 60 to perform focusing, as in a conventional device. Alternatively, focusing may be performed manually while visually checking the split target.
  • An optical path including a mirror 41, a collimator lens 42, and galvanometer mirrors 43 and 44 is provided behind the dichroic mirror 32. This optical path is guided to the OCT unit 100.
  • the galvanometer mirror 44 scans the signal light LS from the OCT unit 100 in the x direction.
  • the galvanometer mirror 43 scans the signal light LS in the y direction.
  • the OCT unit 100 is provided with an optical system for acquiring an OCT image of the fundus oculi Ef (see FIG. 2).
  • This optical system has the same configuration as a conventional Fourier domain type OCT apparatus. That is, this optical system divides low-coherence light into reference light and signal light, generates interference light by causing the signal light passing through the fundus oculi Ef and the reference light passing through the reference optical path to interfere, and detects the spectral components of this interference light. The detection result (detection signal) is sent to the arithmetic control unit 200.
  • the light source unit 101 outputs a broadband low-coherence light L0.
  • The low-coherence light L0 includes, for example, wavelengths in the near-infrared band (about 800 nm to 900 nm) and has a temporal coherence length of about several tens of micrometers. Note that near-infrared light in a wavelength band invisible to the human eye, for example with a center wavelength of about 1050 to 1060 nm, may also be used as the low-coherence light L0.
  • The light source unit 101 includes a light output device such as a super luminescent diode (SLD), an LED, or an SOA (Semiconductor Optical Amplifier).
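  • The quoted coherence length can be related to the center wavelength and spectral bandwidth by the standard Gaussian-spectrum estimate l_c = (2 ln 2 / π) · λ0² / Δλ. A small sketch follows; the bandwidth value is an illustrative assumption, not taken from the patent:

    import numpy as np

    def coherence_length_um(center_wavelength_nm, bandwidth_nm):
        """Coherence length for a Gaussian spectrum, in micrometers."""
        l_c_nm = (2 * np.log(2) / np.pi) * center_wavelength_nm ** 2 / bandwidth_nm
        return l_c_nm / 1000.0

    # e.g. an 850 nm source with an assumed 20 nm bandwidth:
    print(coherence_length_um(850.0, 20.0))  # ~16 um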
  • the low coherence light L0 output from the light source unit 101 is guided to the fiber coupler 103 by the optical fiber 102, and is divided into the signal light LS and the reference light LR.
  • the fiber coupler 103 functions as both a means for splitting light (splitter) and a means for combining light (coupler), but here it is conventionally referred to as a “fiber coupler”.
  • The signal light LS is guided by the optical fiber 104 and becomes a parallel light beam via the collimator lens unit 105. The signal light LS is then reflected by the galvanometer mirrors 44 and 43, condensed by the collimator lens 42, reflected by the mirror 41, transmitted through the dichroic mirror 32, and irradiated onto the fundus oculi Ef through the same path as the light from the LCD 39. The signal light LS is scattered and reflected at the fundus oculi Ef; the scattered light and reflected light may be collectively referred to as the fundus reflection light of the signal light LS. The fundus reflection light of the signal light LS travels back along the same path in the opposite direction and is guided to the fiber coupler 103.
  • The reference light LR is guided by the optical fiber 106 and becomes a parallel light beam via the collimator lens unit 107. The reference light LR is then reflected by the mirrors 108, 109, and 110, attenuated by the ND (Neutral Density) filter 111, reflected by the mirror 112, and imaged on the reflection surface of the reference mirror 114 by the collimator lens 113. The reference light LR reflected by the reference mirror 114 travels back along the same path in the opposite direction and is guided to the fiber coupler 103.
  • In the optical path of the reference light LR, an optical element for dispersion compensation (such as a pair prism) and an optical element for polarization correction (such as a wavelength plate) may be provided.
  • the fiber coupler 103 combines the fundus reflection light of the signal light LS and the reference light LR reflected by the reference mirror 114.
  • the interference light LC thus generated is guided by the optical fiber 115 and emitted from the emission end 116. Further, the interference light LC is converted into a parallel light beam by the collimator lens 117, dispersed (spectral decomposition) by the diffraction grating 118, condensed by the condenser lens 119, and projected onto the light receiving surface of the CCD image sensor 120.
  • the diffraction grating 118 shown in FIG. 2 is a transmission type, but a reflection type diffraction grating may be used.
  • the CCD image sensor 120 is, for example, a line sensor, and detects each spectral component of the split interference light LC and converts it into electric charges.
  • the CCD image sensor 120 accumulates this electric charge and generates a detection signal. Further, the CCD image sensor 120 sends this detection signal to the arithmetic control unit 200.
  • In this embodiment, a Michelson-type interferometer is used, but any type of interferometer, such as a Mach-Zehnder type, can be used as appropriate.
  • Instead of the CCD image sensor, another type of image sensor, for example a CMOS (Complementary Metal Oxide Semiconductor) image sensor, can be used.
  • the configuration of the arithmetic control unit 200 will be described.
  • the arithmetic control unit 200 analyzes the detection signal input from the CCD image sensor 120 and forms an OCT image of the fundus oculi Ef.
  • the arithmetic processing for this is the same as that of a conventional Fourier domain type OCT apparatus.
  • the arithmetic control unit 200 controls each part of the fundus camera unit 2, the display device 3, and the OCT unit 100.
  • the arithmetic and control unit 200 displays an OCT image such as a tomographic image G (see FIG. 2) of the fundus oculi Ef on the display device 3.
  • For the fundus camera unit 2, the arithmetic control unit 200 performs operation control of the observation light source 11, the imaging light source 15, and the LEDs 51 and 61, operation control of the LCD 39, movement control of the focusing lens 31, movement control of the reflection rod 67, movement control of the focus optical system 60, operation control of the galvanometer mirrors 43 and 44, and the like.
  • the arithmetic control unit 200 performs operation control of the light source unit 101, movement control of the reference mirror 114 and collimator lens 113, operation control of the CCD image sensor 120, and the like.
  • the arithmetic control unit 200 includes, for example, a microprocessor, a RAM, a ROM, a hard disk drive, a communication interface, and the like, as in a conventional computer.
  • a computer program for controlling the fundus oculi observation device 1 is stored in a storage device such as a hard disk drive.
  • the arithmetic control unit 200 may include a dedicated circuit board that forms an OCT image based on a detection signal from the CCD image sensor 120.
  • the arithmetic control unit 200 may include an operation device (input device) such as a keyboard and a mouse, and a display device such as an LCD.
  • the fundus camera unit 2, the display device 3, the OCT unit 100, and the arithmetic control unit 200 may be configured integrally (that is, in a single casing) or may be configured separately.
  • Control system The configuration of the control system of the fundus oculi observation device 1 will be described with reference to FIG.
  • the control system of the fundus oculi observation device 1 is configured around the control unit 210 of the arithmetic control unit 200.
  • the control unit 210 includes, for example, the aforementioned microprocessor, RAM, ROM, hard disk drive, communication interface, and the like.
  • the control unit 210 is provided with a main control unit 211 and a storage unit 212.
  • the main control unit 211 performs the various controls described above.
  • The main control unit 211 controls the scanning drive unit 70 and the focusing drive unit 80 of the fundus camera unit 2, as well as the light source unit 101 and the reference drive unit 130 of the OCT unit 100.
  • the scanning drive unit 70 includes a servo motor, for example, and independently changes the directions of the galvanometer mirrors 43 and 44.
  • the focusing drive unit 80 includes, for example, a pulse motor, and moves the focusing lens 31 in the optical axis direction. Thereby, the focus position of the light toward the fundus oculi Ef is changed.
  • the reference driving unit 130 includes, for example, a pulse motor, and moves the collimator lens 113 and the reference mirror 114 integrally along the traveling direction of the reference light LR.
  • the main control unit 211 performs a process of writing data to the storage unit 212 and a process of reading data from the storage unit 212.
  • the storage unit 212 stores various data.
  • the data stored in the storage unit 212 includes, for example, image data of an OCT image, image data of a fundus image, and eye information to be examined.
  • the eye information includes information about the subject such as patient ID and name, and information about the eye such as left / right eye identification information.
  • The storage unit 212 is an example of the "storage means" of the present invention and stores the images used in the processing according to this embodiment.
  • These images include fundus photographed images and three-dimensional images of the fundus.
  • Specifically, the storage unit 212 stores the first fundus photographed image H1 and the first three-dimensional image M1 acquired at the first examination timing, and the second fundus photographed image H2 and the second three-dimensional image M2 acquired at the second examination timing.
  • The fundus photographed images H1 and H2 are acquired by fundus photographing, and the three-dimensional images M1 and M2 are acquired by OCT measurement.
  • Here, the examination "timing" means the date and time of an examination conducted as a series of steps.
  • The "date and time" of two acquisitions may be the same day and the same time, the same day and a different time, a different day and the same time, or a different day and a different time. That is, a fundus photographed image and an OCT measurement belonging to the same examination timing are acquired by an examination performed as a series of steps under any one of these four combinations.
  • Since this description considers the fundus observation device 1, which can perform both fundus photographing and OCT measurement, the case where the fundus photographed image and the three-dimensional image are acquired on the same day and at approximately the same time will be described.
  • When two separate devices (for example, a fundus camera and an OCT device) are used, the two images may be acquired on different days and/or at different times. For example, it is conceivable that the fundus is photographed at a medical institution that has no OCT apparatus, and the patient then moves to a medical institution that has one, where OCT measurement is performed.
  • Even in such a case, it is desirable that the fundus photographed image and the three-dimensional image related to the same examination timing be acquired within a predetermined period. This predetermined period is determined based on, for example, the progression rate or cure rate of the disease.
  • Since comparative observation compares images and analysis results between timings at which the degree of disease or healing differs, the fundus photographed image and the three-dimensional image at each timing need to reflect substantially the same degree of disease or healing; that is, the fundus photographed image and the three-dimensional image related to the same timing must be acquired in a state where the degree of disease or healing is substantially the same. The "predetermined period" is therefore determined so as to secure this "substantially the same state". In general, fundus photographing and OCT measurement are performed on the same day.
  • On the other hand, the first examination timing and the second examination timing are separated by a period over which the degree of disease or healing changes. That is, comparative observation is performed for the purpose of grasping the progress of disease or healing.
  • The first fundus photographed image H1 and the first three-dimensional image M1 are stored in the storage unit 212 when they are acquired at the first examination timing, for example. Similarly, the second fundus photographed image H2 and the second three-dimensional image M2 are stored in the storage unit 212 when they are acquired at the second examination timing.
  • These images may also be stored in a storage medium other than the storage unit 212. Examples of such other storage media include a portable storage medium such as a DVD-R and a storage medium installed on a communication line, such as a NAS (Network Attached Storage).
  • the image forming unit 220 forms tomographic image data of the fundus oculi Ef based on the detection signal from the CCD image sensor 120.
  • This processing includes, as in a conventional Fourier domain type optical coherence tomography apparatus, processes such as noise removal (noise reduction), filter processing, and FFT (Fast Fourier Transform).
  • the image forming unit 220 includes, for example, the above-described circuit board and communication interface.
  • In this description, "image data" and the "image" presented based on that image data may be identified with each other.
  • the image processing unit 230 performs various types of image processing and analysis processing on the image formed by the image forming unit 220. For example, the image processing unit 230 executes various correction processes such as image brightness correction and dispersion correction.
  • the three-dimensional image forming unit 231 of the image processing unit 230 executes known image processing such as interpolation processing for interpolating pixels between tomographic images formed by the image forming unit 220, and performs a three-dimensional image of the fundus oculi Ef. Form image data.
  • The image forming unit 220 and the three-dimensional image forming unit 231 are examples of the "image forming means" of the present invention.
  • the image data of a three-dimensional image means image data in which pixel positions are defined by a three-dimensional coordinate system.
  • As an example of image data of a three-dimensional image, there is image data composed of three-dimensionally arranged voxels. Such image data is called volume data or voxel data.
  • When displaying an image based on volume data, the image processing unit 230 performs rendering processing (volume rendering, MIP (Maximum Intensity Projection), etc.) on the volume data to form image data of a pseudo three-dimensional image viewed from a specific line-of-sight direction. This pseudo three-dimensional image is displayed on a display device such as the display unit 240.
  • Stack data of a plurality of tomographic images is also image data of a three-dimensional image. Stack data is image data obtained by arranging, in three dimensions, a plurality of tomographic images obtained along a plurality of scanning lines, based on the positional relationship of the scanning lines. That is, stack data is image data obtained by expressing a plurality of tomographic images, originally defined in individual two-dimensional coordinate systems, in a single three-dimensional coordinate system (that is, by embedding them in one three-dimensional space).
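  • To make the notions of stack data and rendering concrete, here is a minimal sketch; the array shapes and axis order are our assumptions, not the patent's:

    import numpy as np

    def build_stack(b_scans):
        """Embed B-scans, each a (z, x) tomogram taken along one scanning
        line, into a single (y, z, x) three-dimensional coordinate system
        (stack data). Assumes equally spaced scanning lines of equal shape."""
        return np.stack(b_scans, axis=0)

    def mip(volume):
        """Maximum intensity projection along depth (z): one simple rendering
        that yields a pseudo three-dimensional (front) view of the volume."""
        return volume.max(axis=1)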
  • the image processing unit 230 is further provided with a positional deviation amount calculation unit 232, a cross-sectional position designation unit 233, an inclination deviation amount calculation unit 235, an alignment processing unit 236, an inclination deviation correction unit 237, and an image analysis unit 238.
  • The positional deviation amount calculation unit 232 calculates, based on the first fundus photographed image H1 and the second fundus photographed image H2, the positional deviation amount between the first fundus photographed image H1 and the second fundus photographed image H2 in the fundus surface direction.
  • the positional deviation amount calculation unit 232 is an example of the “first calculation means” in the present invention. This positional deviation amount is a vector amount.
  • fundus surface direction means a direction on the xy plane shown in FIG. 1, that is, a direction defined by the x coordinate and the y coordinate.
  • the fundus surface direction is a direction orthogonal to the optical axis direction (z direction) of the apparatus optical system in a state where the apparatus optical system is aligned with the fundus oculi Ef.
  • the optical axis direction at this time is the traveling direction of the signal light LS toward the fundus oculi Ef.
  • this optical axis direction is referred to as “fundus depth direction”.
  • The processing executed by the positional deviation amount calculation unit 232 will now be described more specifically. Since the first fundus photographed image H1 and the second fundus photographed image H2 are obtained by photographing the same fundus oculi Ef at different timings, the position of the fundus within the frame varies depending on differences in conditions such as rotation of the eye E, fixation position shift, and alignment shift. For example, when the rotation state of the eye E differs, a displacement in the rotational direction about the rotation point in the fundus surface direction occurs between the first fundus photographed image H1 and the second fundus photographed image H2. When there is a shift in the fixation position or the alignment, a displacement due to parallel movement in the fundus surface direction occurs between the first fundus photographed image H1 and the second fundus photographed image H2.
  • To calculate the positional deviation amount, the positional deviation amount calculation unit 232 first detects the position, within each image, of a characteristic part of the fundus oculi Ef depicted in the fundus photographed images H1 and H2.
  • Examples of the characteristic part include an affected part, the optic disc, the macula, a blood vessel, and a branch point of a blood vessel.
  • The characteristic part in an image is identified, for example, by analyzing the pixel value of each pixel of the image and finding pixels whose pixel values correspond to the characteristic part to be detected.
  • For example, the optic disc is generally depicted brighter (that is, with higher luminance) than other parts of a fundus photographed image, so the image region corresponding to the optic disc can be identified by searching for pixels whose luminance is equal to or greater than a predetermined value.
  • Conversely, an affected part and the macula are generally depicted darker than other parts, so it suffices to search for pixels whose luminance is equal to or less than a predetermined value.
  • For a blood vessel, pixel values may be analyzed by known image processing to identify the image region corresponding to the blood vessel (at this time, thinning processing or the like may be performed as necessary). A sketch of such a search follows this list.
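  • A minimal sketch of the characteristic-part search by thresholding; the threshold value is an assumption to be tuned per device:

    import numpy as np

    def find_region_center(image, threshold, bright=True):
        """Locate a characteristic part by simple thresholding and return the
        centroid of the detected region as (row, col). With bright=True this
        suits the optic disc (high luminance); with bright=False it suits
        dark parts such as the macula or an affected part."""
        mask = image >= threshold if bright else image <= threshold
        rows, cols = np.nonzero(mask)
        if rows.size == 0:
            return None  # no pixels matched; the threshold needs adjustment
        return rows.mean(), cols.mean()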
  • The positional deviation amount calculation unit 232 then calculates the displacement, within the frame, of the image regions thus obtained in the fundus photographed images H1 and H2, thereby calculating the positional deviation amount (in particular, the parallel movement amount) between the first fundus photographed image H1 and the second fundus photographed image H2. The rotational movement amount can be calculated in a similar manner.
  • For example, an affine transformation that best matches the positions of the image regions of the characteristic parts in the two images can be obtained, with the parallel movement component of this affine transformation taken as the desired parallel movement amount and the rotational movement component taken as the desired rotational movement amount. Note that the positional deviation amount between the two fundus photographed images H1 and H2 may also be obtained by methods other than the above.
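  • One way to obtain the parallel and rotational movement amounts from matched characteristic parts is a least-squares rigid fit; this is a sketch under the assumption that N corresponding feature points have already been paired (the patent leaves the matching method open):

    import numpy as np

    def estimate_rigid_shift(points1, points2):
        """Fit rotation theta and translation t mapping feature points of the
        first fundus photographed image onto those of the second (2-D Kabsch
        / Procrustes fit). points1, points2: (N, 2) arrays of matched points."""
        c1, c2 = points1.mean(axis=0), points2.mean(axis=0)
        p, q = points1 - c1, points2 - c2
        u, _, vt = np.linalg.svd(p.T @ q)
        if np.linalg.det((u @ vt).T) < 0:   # guard against a reflection
            vt[-1] *= -1
        r = (u @ vt).T                      # 2x2 rotation matrix
        theta = np.arctan2(r[1, 0], r[0, 0])   # rotational movement amount
        t = c2 - r @ c1                        # parallel movement (dx, dy)
        return t, theta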
  • FIG. 4 shows an example of the positional deviation between the fundus photographed images H1 and H2.
  • FIG. 4A represents the first fundus photographed image H1, and FIG. 4B represents the second fundus photographed image H2.
  • FIG. 4C shows the first fundus photographed image H1 (solid lines) and the second fundus photographed image H2 (broken lines) overlapped, in order to show the positional relationship between the two fundus photographed images H1 and H2.
  • The xy coordinate system shown in FIG. 4C corresponds to the xy coordinate system included in the xyz coordinate system shown in FIG. 1. Further, "Δ" represents the positional deviation of the second fundus photographed image H2 with respect to the first fundus photographed image H1: Δx represents the parallel movement amount in the x direction, Δy the parallel movement amount in the y direction, and Δθ the rotational movement amount.
  • Based on the positional deviation amount calculated by the positional deviation amount calculation unit 232, the cross-sectional position designation unit 233 designates a cross-sectional position at substantially the same position on the fundus oculi Ef depicted in each of the first fundus photographed image H1 and the second fundus photographed image H2.
  • The cross-sectional position designation unit 233 is an example of the "designating means" of the present invention.
  • Once the positional deviation amount between the two fundus photographed images H1 and H2 has been obtained by the positional deviation amount calculation unit 232, the fundus photographed images H1 and H2 can be aligned in the fundus surface direction. This alignment can be performed by relatively moving the two fundus photographed images H1 and H2 so as to cancel the obtained positional deviation amount. Alternatively, instead of moving the images, pixels in the fundus photographed image H1 and pixels in the fundus photographed image H2 may be associated with each other in consideration of the positional deviation amount. In either case, positions in the fundus photographed image H1 and positions in the fundus photographed image H2 can be associated with each other by taking the positional deviation amount into account.
  • As described above, the positional deviation amount is obtained as the positional deviation of image regions corresponding to characteristic parts of the fundus oculi Ef. Therefore, this alignment associates, for each part of the fundus oculi Ef, the position of the part in the first fundus photographed image H1 with the position of the part in the second fundus photographed image H2.
  • The cross-sectional position designation unit 233 designates the cross-sectional position at substantially the same position on the fundus oculi Ef depicted in each of the first fundus photographed image H1 and the second fundus photographed image H2 associated by such alignment.
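  • Given the deviation amounts, carrying a cross-sectional position designated on the first image over to the second image reduces to applying the same rigid motion; a sketch (the end-point representation of a linear cross-section is our assumption):

    import numpy as np

    def map_cross_section(endpoints_h1, t, theta):
        """Map the two end points (a (2, 2) array) of a linear cross-sectional
        position from image H1 into image H2 using translation t and rotation
        theta, so both sections lie at substantially the same fundus position."""
        c, s = np.cos(theta), np.sin(theta)
        r = np.array([[c, -s], [s, c]])
        return endpoints_h1 @ r.T + t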
  • The cross-sectional position designated by the cross-sectional position designation unit 233 will now be described.
  • The number of designated cross-sectional positions may be one or more.
  • When a plurality of cross-sectional positions are designated, the subsequent processing is executed for each cross-sectional position.
  • Examples of the plurality of cross-sectional positions include a cross-shaped cross-sectional position and a radial cross-sectional position.
  • The cross-shaped cross-sectional position is composed of a pair of linear cross-sectional positions intersecting each other at right angles.
  • An example of the cross-sectional position of the cross shape is shown in FIG.
  • A cross-shaped cross-sectional position U1 shown in FIG. 5A is designated for the first fundus photographed image H1, and a cross-shaped cross-sectional position U2 shown in FIG. 5B is designated for the second fundus photographed image H2.
  • FIG. 5C shows the first fundus photographed image H1 with the cross-sectional position U1 (solid lines) and the second fundus photographed image H2 with the cross-sectional position U2 (broken lines) superimposed, in order to show the positional relationship between the two cross-sectional positions U1 and U2.
  • Both cross-sectional positions U1 and U2 are designated at substantially the same position on the fundus oculi Ef.
  • the cross-shaped cross-sectional positions U1 and U2 have the same form as a scanning line for cross scanning described later.
  • the radial cross-sectional position is composed of two or more linear cross-sectional positions arranged radially and intersecting each other (illustration is omitted).
  • When two of the cross-sectional positions in the radial arrangement are set so as to be orthogonal to each other, the above-described cross-shaped cross-sectional position is obtained.
  • the radial cross-sectional position has the same form as the scanning line of the radiation scan described later.
  • The cross-sectional position designated by the cross-sectional position designation unit 233 is not limited to the above examples.
  • For example, it is possible to designate the cross-sectional position so as to pass through a site of the fundus oculi Ef having a characteristic shape in the fundus depth direction (z direction).
  • As a site having such a characteristic shape, there is a site having irregularities in the fundus depth direction (an uneven part). Examples of such an uneven part include the optic disc and the macula, as well as an affected area with irregularities.
  • In order to designate a cross-sectional position passing through an uneven part, the cross-sectional position designation unit 233 executes, for example, the following processing.
  • That is, the cross-sectional position designation unit 233 identifies the image region corresponding to the uneven part and designates the cross-sectional position so as to pass through that image region.
  • The size (length, etc.) of the cross-sectional position may be set in advance, or may be set according to the size of the image region.
  • The identification of the image region corresponding to the uneven part is the same kind of processing as that performed by the positional deviation amount calculation unit 232.
  • For this purpose, the cross-sectional position designation unit 233 first generates a two-dimensional image (integrated image) by integrating each of the three-dimensional images M1 and M2 in the fundus depth direction.
  • The integrated image is formed by adding up the pixel values (luminance values) of each A-scan image (depth-direction image) in the depth direction. That is, the dot-like image obtained by integrating an A-scan image has a luminance value obtained by adding the luminance values at the respective z positions of that A-scan image in the depth direction (if the sum exceeds the maximum gradation value of the luminance, appropriate image processing is performed).
  • The integrated image is described in detail in Japanese Patent Application No. 2005-337628 by the present applicant.
  • The integrated image depicts the surface of the fundus in a manner similar to a fundus photographed image, but its image quality is not as high as that of a fundus photographed image.
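  • A minimal sketch of forming such an integrated image from a volume; the (y, z, x) axis order and 8-bit rescaling are our assumptions:

    import numpy as np

    def integrated_image(volume):
        """Sum each A-scan's luminance values along the depth (z) axis and
        rescale into the 8-bit range so that the sums do not exceed the
        maximum gradation value. The result is a front view of the fundus
        resembling, but of lower quality than, a fundus photographed image."""
        summed = volume.astype(np.float64).sum(axis=1)
        summed -= summed.min()
        if summed.max() > 0:
            summed *= 255.0 / summed.max()
        return summed.astype(np.uint8)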
  • the cross-sectional position specifying unit 233 specifies an image region corresponding to a characteristic part of the fundus oculi Ef in the three-dimensional image M1 based on the pixel value of the accumulated image based on the first three-dimensional image M1, and performs first fundus photographing. An image region corresponding to the characteristic part is specified in the image H1. Then, the cross-section position specifying unit 233 performs alignment between these characteristic portions in the same manner as the positional deviation amount calculation unit 232.
  • the cross-section position specifying unit 233 analyzes the first three-dimensional image M1 and specifies an image region corresponding to a predetermined uneven portion. Since the first three-dimensional image M1 depicts the three-dimensional form of the fundus oculi Ef, for example, by searching for unevenness in the z direction in the image region corresponding to the fundus surface, the uneven portion can be easily identified. . Subsequently, the cross-sectional position designating unit 233 specifies an image region in the first fundus photographic image H1 corresponding to the image region of the uneven portion based on the above-described alignment result of the characteristic portion, and this image region is determined. Specify the cross-sectional position to pass. The process of designating the cross-sectional position for the second fundus photographic image H2 can be executed in the same manner.
  • the process of designating the cross-sectional position for each fundus photographic image H1, H2 based on each three-dimensional image M1, M2 can also be performed without interposing the accumulated image.
  • In this case, the cross-section position specifying unit 233 first specifies an image region corresponding to a characteristic part of the fundus oculi Ef in each of the first fundus photographed image H1 and the first three-dimensional image M1, and obtains the amount of positional deviation in the xy direction between these image regions. Since both images are defined using the x coordinate and the y coordinate, this process can easily be executed in the same manner as by the positional deviation amount calculation unit 232.
  • Next, the cross-section position specifying unit 233 specifies an image region corresponding to a predetermined uneven part in the first three-dimensional image M1. Further, based on the positional deviation amount, the cross-section position specifying unit 233 specifies the image region in the first fundus photographed image H1 corresponding to the image region of the uneven part, and designates the cross-sectional position so as to pass through that image region. The process of designating the cross-sectional position for the second fundus photographed image H2 can be executed in the same manner.
  • Instead of the image region corresponding to the uneven part, it is also possible to set a cross-sectional position passing through an image region corresponding to an arbitrary part of the fundus oculi Ef.
  • the designated cross-sectional position is used to determine the amount of tilt deviation in the fundus depth direction. Therefore, if the cross-sectional position is specified so that the surface of the fundus oculi Ef or the deep layer of the fundus oculi Ef is clearly depicted in the tomographic image, the relative inclination of the surface or layer can be obtained. That is, on the condition that such a tomographic image is obtained, it is possible to designate a cross-sectional position that passes through an image region corresponding to an arbitrary part of the fundus oculi Ef.
  • the image processing unit 230 stores position information (cross-section position information) of a cross-sectional position designated on each fundus photographic image H1, H2.
  • This cross-sectional position information is, for example, the coordinate value of the xy coordinate system in each fundus photographed image H1, H2. Since the optical axis (imaging optical axis) of the imaging optical system 30 is aligned with the eye E during fundus imaging, the frame centers of the respective fundus images H1 and H2 substantially coincide with the imaging optical axis. Considering this, it is also possible to store the displacement of the cross-sectional position with respect to the photographing optical axis as cross-sectional position information.
  • the image processing unit 230 forms a tomographic image at the cross-sectional position specified by the cross-sectional position specifying unit 233 based on the three-dimensional images M1 and M2. Instead of actually forming the tomographic image, it is also possible to execute the subsequent processing with reference to the pixels at the cross-sectional positions of the three-dimensional images M1 and M2 corresponding to the designated cross-sectional positions.
  • the image processing unit 230 first performs alignment between the first fundus image H1 and the first three-dimensional image M1. This process can be performed, for example, through the above-described integrated image, or can be performed by aligning the image region of the characteristic part.
  • Next, the image processing unit 230 specifies the position in the first three-dimensional image M1 corresponding to the cross-sectional position designated in the first fundus photographed image H1 based on the result of the above alignment. Then, based on the first three-dimensional image M1, the image processing unit 230 forms a tomographic image (first tomographic image) along this specified position (having the same form as the cross-sectional position). This processing is performed by specifying the pixels (voxels, etc.) positioned along the specified position and arranging them two-dimensionally to form an image representing the form of the fundus oculi Ef in that cross-section. The same applies to the process of forming a tomographic image (second tomographic image) at the cross-sectional position designated in the second fundus photographed image H2.
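  • The following sketch illustrates how a tomographic image can be cut from a three-dimensional image along a designated linear cross-sectional position by collecting one A-scan per sampled point; the (y, x, z) array layout and nearest-neighbour sampling are assumptions of this example.

```python
import numpy as np

def tomogram_along_line(volume, p0, p1, n_samples):
    """volume: (y, x, z) array; p0, p1: (x, y) endpoints of a linear
    cross-sectional position. Returns an (n_samples, z) tomographic image."""
    # Sample the cross-sectional position at n_samples points and round to
    # the nearest voxel (nearest-neighbour lookup).
    xs = np.linspace(p0[0], p1[0], n_samples).round().astype(int)
    ys = np.linspace(p0[1], p1[1], n_samples).round().astype(int)
    xs = np.clip(xs, 0, volume.shape[1] - 1)
    ys = np.clip(ys, 0, volume.shape[0] - 1)
    # One A-scan per sampled point, arranged two-dimensionally.
    return volume[ys, xs, :]
```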
  • FIG. 6A shows a cross-shaped cross-sectional position U1 designated in the first fundus photographic image H1.
  • FIG. 6B shows a cross-shaped cross-sectional position U1 ′ set on the first three-dimensional image M1 according to the cross-sectional position U1.
  • two tomographic images G1a and G1b along the cross-sectional position U1 ′ are shown in FIG. 6B.
  • These tomographic images G1a and G1b are examples of the first tomographic image described above.
  • the second tomogram is obtained in the same manner.
  • For each cross-sectional position designated by the cross-section position designation unit 233, the inclination shift amount calculation unit 235 calculates, based on the first tomographic image and the second tomographic image, the inclination shift amount between them within the surface (target surface) spanned by the direction along the cross-sectional position (cross-sectional direction) and the fundus depth direction (z direction).
  • the tilt deviation amount calculation unit 235 is an example of the “second calculation unit” in the present invention.
  • Since the first cross-sectional position and the second cross-sectional position are designated at substantially the same position on the fundus oculi Ef, the first tomographic image and the second tomographic image are images of substantially the same cross-section of the fundus oculi Ef. Therefore, it can be said that the inclination shift amount calculation unit 235 compares the state at the first examination timing with the state at the second examination timing for a cross-section at substantially the same position.
  • The cross-sectional position designated by the cross-sectional position designation unit 233 can be arranged along an arbitrary direction on the xy plane.
  • When the cross-sectional positions are along the x direction and the y direction, the corresponding target surfaces are the xz plane and the yz plane.
  • In a radial arrangement, for example, the first cross-sectional position is along the x direction, the second cross-sectional position is rotated 45 degrees from the x direction toward the y direction, the third cross-sectional position is arranged in the 90-degree direction, and the fourth in the 135-degree direction.
  • When the cross-sectional position is linear, the cross-sectional direction is constant at each point on the cross-sectional position, and the surface spanned by the cross-sectional direction and the z direction is a plane.
  • When the cross-sectional position is not linear, such as a curved line or a polygonal line, the cross-sectional direction changes from point to point on the cross-sectional position, and the resulting surface is not flat.
  • In general, the target surface is the two-dimensional region obtained by extending the cross-sectional position in the depth direction; for example, a circular cross-sectional position yields a cylindrical target surface whose axis is along the z direction.
  • the tilt deviation amount calculation unit 235 specifies an image area (comparison area) in which a predetermined part of the fundus oculi Ef is depicted for each of the first tomographic image and the second tomographic image.
  • the predetermined part is desirably a part that is clearly depicted in the tomographic image or a part that is depicted in a characteristic shape in the tomographic image.
  • Further, the predetermined part is desirably a part whose shape does not substantially change with time. Examples of such a part include the optic disc, the macula, the retinal surface, and a predetermined layered tissue of the retina or choroid.
  • the comparison region corresponding to the predetermined part is identified by well-known image processing such as analysis of the pixel value of the tomographic image and shape analysis of the image region.
  • The inclination shift amount calculation unit 235 compares the comparison area (first comparison area) in the first tomographic image with the comparison area (second comparison area) in the second tomographic image, calculates their relative inclination, and sets this as the inclination shift amount between the first tomographic image and the second tomographic image.
  • FIG. 7A shows the first tomogram G1a shown in FIG. 6B.
  • the first tomographic image G1a is a tomographic image along a linear cross-sectional position (target cross-sectional position) substantially along the x direction at the cross-shaped cross-sectional position U1 shown in FIG. 5A.
  • FIG. 7B shows a second tomographic image G2a along the straight cross-sectional position corresponding to the target cross-sectional position at the cross-sectional position U2 shown in FIG. 5B.
  • The first and second tomographic images G1a and G2a depict the macula (corresponding image area YS) of the fundus oculi Ef and its vicinity.
  • Symbols A1 and A2 in FIGS. 7A and 7B indicate comparison regions corresponding to the retina surface (the predetermined portion) of the fundus oculi Ef, respectively.
  • FIG. 7C shows the tomographic images G1a and G2a superimposed on each other in order to show that the first and second tomographic images G1a and G2a are relatively inclined.
  • FIG. 8 is an enlarged view of only the comparison areas A1 and A2 shown in FIG. 7C.
  • The tilt shift amount calculation unit 235 calculates the tilt shift amount between the first tomographic image G1a and the second tomographic image G2a by executing the following processing based on the comparison areas A1 and A2.
  • the tilt deviation amount calculation unit 235 first identifies the feature position in each of the comparison areas A1 and A2. This feature position may be one point in the comparison area or an area having a spread.
  • In this example, the image positions P1 and P2 corresponding to the deepest part (fovea) of the macula YS in the comparison areas A1 and A2 are specified as the feature positions, respectively.
  • the inclination shift amount calculation unit 235 obtains a curve that optimally approximates each of the comparison areas A1 and A2.
  • this approximate curve an appropriate one such as a spline curve or a Bezier curve is used.
  • The inclination shift amount calculation unit 235 analyzes the shape of each approximate curve, for example by calculating the first-order and second-order differential coefficients at each point of the curve, and thereby specifies the image positions P1 and P2 corresponding to the feature position (fovea).
  • In this analysis, the characteristics (position, shape, etc.) of the feature position within the comparison area can be used.
  • For example, the concave part (macula) is identified based on the differential coefficient at each point, and the deepest part is then specified with reference to the differential coefficients of the points on this concave part.
  • the feature position corresponding to the fovea may be obtained as follows. First, a line segment connecting the edges of the concave portion is obtained. Next, the midpoint of the line segment is obtained. Further, a straight line orthogonal to the line segment and passing through the midpoint is obtained. Then, the intersection of the straight line and the concave part is obtained. This intersection is taken as the target feature position.
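  • As an illustration of the spline-based variant described above, the sketch below fits a smoothing spline to the comparison region and locates the fovea as the deepest stationary point of the curve; the function name, the smoothing factor, and the convention that larger z means deeper are assumptions of this example.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def fovea_position(xs, zs):
    """xs: strictly increasing x coordinates of the comparison region;
    zs: corresponding z coordinates (larger z assumed to be deeper).
    Returns the (x, z) of the deepest stationary point of a fitted spline."""
    # Degree-4 spline so its derivative is cubic, for which roots() works.
    spline = UnivariateSpline(xs, zs, k=4, s=len(xs))
    stationary = spline.derivative(1).roots()   # points where dz/dx = 0
    if len(stationary) == 0:
        return None
    depths = spline(stationary)
    x_fovea = stationary[np.argmax(depths)]     # deepest stationary point
    return float(x_fovea), float(spline(x_fovea))
```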
  • the inclination deviation amount calculation unit 235 determines the direction (or inclination) of the tomographic image based on the feature position.
  • the tangent lines T1 and T2 of the comparison areas A1 and A2 at the image positions P1 and P2 corresponding to the fovea are obtained, and the tangent direction is set as the direction of each tomographic image G1a and G2a.
  • In this processing example, the approximate curve is used so that differentiation is possible at each point of the comparison area, but this is not always necessary.
  • The direction of the tomographic image can be determined in the same manner as in this processing example as long as differentiation is possible at least at the feature position.
  • Alternatively, a differentiable point may be chosen as the feature position.
  • When the feature position is a non-differentiable point, for example, a differentiable point separated from the feature position by a predetermined distance (path length) along the comparison area can be obtained, and the same processing as in this example can be performed at that point.
  • After determining the directions of the two tomographic images in this way, the inclination shift amount calculation unit 235 calculates the angle formed by these directions.
  • In this example, the inclination shift amount calculation unit 235 calculates the angle ΔθV formed by the tangent line T1 and the tangent line T2, and this angle ΔθV is employed as the amount of tilt deviation between the tomographic images G1a and G2a.
  • Note that the tilt deviation amount is a vector quantity, that is, an angle having a direction.
  • In this example, the tangent line T1 of the first tomographic image G1a is used as the reference (angle 0), and the counterclockwise direction about the center of angle measurement (the intersection of the tangent lines T1 and T2) is taken as the positive direction. In the case shown in FIG. 8, the tangent T2 is displaced in the positive direction by ΔθV relative to the tangent T1.
  • Alternatively, the angle of the tangent line T1 with respect to the tangent line T2 may be obtained, or the angle of each tangent line T1, T2 with respect to a predetermined reference direction may be obtained; in any case, it suffices that the relative inclination shift amount between the tangent lines T1 and T2 can be expressed.
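  • A minimal sketch of the angle computation: given the slopes dz/dx of the tangent lines T1 and T2 at the feature positions, the signed tilt deviation with T1 as the zero reference can be computed as follows (counterclockwise positive, as in the text).

```python
import numpy as np

def tilt_deviation(slope1, slope2):
    """slope1, slope2: dz/dx of the tangents T1 and T2 at the feature
    positions. Returns the signed angle (radians) from T1 to T2,
    counterclockwise positive, with T1 as the zero reference."""
    return np.arctan(slope2) - np.arctan(slope1)
```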
  • the inclination shift amount calculation unit 235 performs the above-described processing for each cross-sectional position. Thereby, for each cross-sectional position, an inclination shift amount between the first tomographic image and the second tomographic image along the cross-sectional position is obtained.
  • When a radial cross-sectional position is designated, the above-described processing is executed for each of the plurality of cross-sectional positions forming it. Thereby, a plurality of tilt shift amounts corresponding to the plurality of cross-sectional positions forming the radial cross-sectional position are obtained.
  • The alignment processing unit 236 aligns the first three-dimensional image M1 with the second three-dimensional image M2 based on the positional deviation amount calculated by the positional deviation amount calculation unit 232 and the inclination deviation amount calculated by the inclination deviation amount calculation unit 235.
  • the alignment processing unit 236 is an example of the “alignment unit” in the present invention.
  • FIG. 9 shows a positional shift state between the first three-dimensional image M1 and the second three-dimensional image M2.
  • FIG. 9C shows the three-dimensional images M1 and M2 superimposed on each other in order to show the positional shift state between the first three-dimensional image M1 shown in FIG. 9A and the second three-dimensional image M2 shown in FIG. 9B.
  • The positional deviation amount calculated by the positional deviation amount calculation unit 232 includes the parallel movement amounts Δx and Δy and the rotational movement amount Δθ.
  • The inclination deviation amount calculated by the inclination shift amount calculation unit 235 is ΔθV.
  • The cross-shaped cross-sectional positions U1 and U2 are each formed by a horizontal cross-sectional position and a vertical cross-sectional position.
  • The inclination deviation amount ΔθV therefore includes an inclination deviation amount ΔθV1 corresponding to the horizontal cross-sectional position and an inclination deviation amount ΔθV2 corresponding to the vertical cross-sectional position (not shown).
  • The alignment processing unit 236 aligns the three-dimensional images M1 and M2 by relatively moving them so as to cancel the values Δx, Δy, Δθ, and ΔθV.
  • For example, the alignment processing unit 236 translates the second three-dimensional image M2 by Δx in the x direction and by Δy in the y direction, and rotates it by Δθ within the xy plane.
  • Further, the alignment processing unit 236 rotates the second three-dimensional image M2 by ΔθV1 in the plane spanned by the direction along the horizontal cross-sectional position on the second three-dimensional image M2 and the fundus depth direction (z direction).
  • Similarly, the alignment processing unit 236 rotates the second three-dimensional image M2 by ΔθV2 in the plane spanned by the direction along the vertical cross-sectional position on the second three-dimensional image M2 and the fundus depth direction (z direction).
  • The alignment processing of the three-dimensional images M1 and M2 is not limited to the above. That is, any alignment process may be applied as long as one or both of the two three-dimensional images M1 and M2 can be moved so as to cancel the values Δx, Δy, Δθ, and ΔθV.
  • Through this alignment, the image of the fundus oculi Ef depicted in the first three-dimensional image M1 and the image of the fundus oculi Ef depicted in the second three-dimensional image M2 are relatively translated and rotated so that the positional shift between the three-dimensional images M1 and M2 is cancelled.
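  • The sketch below shows one way such an alignment could be applied to the second volume with scipy, cancelling Δx, Δy, Δθ and the two tilt deviations in turn; the axis order, sign conventions, and use of degree-valued angles are assumptions of this example rather than the device's actual processing.

```python
import numpy as np
from scipy import ndimage

def align_volume(vol2, dx, dy, dtheta, dtheta_v1, dtheta_v2):
    """vol2: second 3-D image, indexed (y, x, z); angles in degrees.
    Moves vol2 so that the measured offsets cancel."""
    out = ndimage.shift(vol2, shift=(-dy, -dx, 0))                     # cancel dx, dy
    out = ndimage.rotate(out, -dtheta, axes=(0, 1), reshape=False)     # xy plane
    out = ndimage.rotate(out, -dtheta_v1, axes=(1, 2), reshape=False)  # xz plane
    out = ndimage.rotate(out, -dtheta_v2, axes=(0, 2), reshape=False)  # yz plane
    return out
```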
  • the inclination shift correction unit 237 operates when a radial cross-sectional position is specified by the cross-sectional position specifying unit 233.
  • the tilt shift amount calculation unit 235 calculates the tilt shift amount for each cross-sectional position forming a radial cross-sectional position.
  • The inclination deviation correction unit 237 first selects the maximum of the inclination deviation amounts corresponding to these cross-sectional positions. This is done by selecting, from among the inclination deviation amounts (each a vector quantity, that is, an angle having a direction), the value with the maximum absolute value (the maximum inclination deviation amount).
  • Then, so as to cancel the selected maximum inclination deviation amount, the inclination deviation correction unit 237 corrects the inclination deviation between the first and second three-dimensional images M1 and M2 in the plane spanned by the direction along the cross-sectional position corresponding to the maximum inclination deviation amount and the fundus depth direction (z direction).
  • For example, when the cross-sectional position corresponding to the maximum inclination deviation amount ΔθVmax lies along the x direction, the inclination deviation correction unit 237 relatively rotates the first and second three-dimensional images M1 and M2 in the xz plane so as to cancel ΔθVmax.
  • Specifically, the inclination shift correction unit 237 rotates the second three-dimensional image M2 by ΔθVmax in this plane. Thereby, the relative inclination (maximum inclination) of the images of the fundus oculi Ef depicted in the first and second three-dimensional images M1 and M2 is corrected.
  • Such an inclination shift correcting unit 237 is an example of the “correcting unit” of the present invention.
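  • Selecting the maximum inclination deviation amount amounts to picking the angle with the largest absolute value while keeping its sign, for example:

```python
import numpy as np

def max_tilt_deviation(tilts):
    """tilts: signed tilt deviation amounts, one per radial cross-section.
    Returns the one with the largest absolute value, sign preserved."""
    tilts = np.asarray(tilts, dtype=float)
    return float(tilts[np.argmax(np.abs(tilts))])
```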
  • the image analysis unit 238 analyzes the first three-dimensional image M1 and calculates a first value of a predetermined physical quantity. Similarly, the image analysis unit 238 analyzes the second three-dimensional image M2 and calculates a second value of a predetermined physical quantity. Note that the three-dimensional images M1 and M2 analyzed by the image analysis unit 238 have not been subjected to the above-described alignment process or tilt shift correction process. Further, “analyze the three-dimensional images M1 and M2” includes a case where an OCT image (tomographic image or the like) based on the three-dimensional images M1 and M2 is analyzed. The image analysis unit 238 is an example of the “analysis unit” in the present invention.
  • The “predetermined physical quantity” to be calculated is an arbitrary test result (obtained as a numerical value) referred to in the diagnosis of fundus disease. Specific examples include the size of a lesion (radius, diameter, area, volume, etc.), the depth of a lesion from the retinal surface, the size of the optic disc (cup size, disc size, rim size, etc.), the retinal pigment epithelium (RPE) spacing, and the retinal thickness.
  • the images used for the processing of the image analysis unit 238 include the three-dimensional images M1 and M2 of the fundus oculi Ef and the tomographic images obtained from these three-dimensional images M1 and M2.
  • the process of forming a tomographic image from the three-dimensional images M1 and M2 is executed in the same manner as described above.
  • The process of forming a tomographic image at an arbitrary cross-sectional position based on a three-dimensional image is called multi-planar reconstruction (MPR).
  • the fundus photographic images H1 and H2 may be analyzed to calculate a predetermined physical quantity.
  • a tomographic image J1 shown in FIG. 11A is obtained by designating a linear cross-sectional position along the x direction through the image region OP corresponding to the optic disc for the first three-dimensional image M1.
  • the tomographic image J2 shown in FIG. 11B is obtained by designating the same cross-sectional position for the second three-dimensional image M2. That is, the tomographic images J1 and J2 are two-dimensional images on the xz plane.
  • Reference numerals B1 and B2 denote retinal surfaces (corresponding image areas).
  • First, the image analysis unit 238 analyzes the tomographic image J1 and calculates the RPE interval d1. More specifically, the image analysis unit 238 analyzes the pixel values of the pixels constituting the tomographic image J1, and specifies the image region OP1 corresponding to the optic nerve head and the image region C1 corresponding to the RPE.
  • The image region OP1 can be specified, for example, by specifying the image region B1 corresponding to the retinal surface and analyzing the shape of the image region B1 to find the concave image region corresponding to the optic disc.
  • The image region C1 corresponding to the RPE can be specified, for example, by specifying the image regions of the various layered tissues (and their boundaries) of the fundus oculi Ef depicted in the tomographic image J1, and selecting the one corresponding to the RPE based on the luminance of these image regions and their depth from the retinal surface B1. Since the RPE is positioned so as to surround the optic disc, the image region C1 is specified on both the +x side and the −x side of the image region OP1 in the tomographic image J1.
  • Next, the image analysis unit 238 specifies the pixel (closest pixel) located closest to the image region OP1 among the pixels of the image region C1 on the +x side of the image region OP1.
  • As a first example, for each pixel in the image region C1, the distance to each pixel in the image region OP1 (for example, the Euclidean distance in the xz coordinate system) is calculated, and the shortest of these distances is taken as that pixel's distance to the image region OP1.
  • The shortest of the per-pixel distances obtained in this way over the image region C1 is then selected, and the pixel corresponding to this shortest distance becomes the target pixel.
  • As a second example, for each pixel in the image region C1, the radius of a circle centered on that pixel is gradually enlarged until the circle touches (or intersects) the image region OP1; the radius at that moment is the distance between the pixel and the image region OP1, and the pixel with the shortest such distance is the target pixel.
  • the closest pixel is similarly specified for the image region C1 corresponding to the RPE located on the ⁇ x side with respect to the image region OP1.
  • the image analysis unit 238 calculates a distance d1 between the nearest pixel located on the + x side and the nearest pixel located on the ⁇ x side with respect to the image region OP1.
  • the distance d1 may be a Euclidean distance in the xz coordinate system, or may be a distance along the x direction (that is, an x component of the Euclidean distance). This distance d1 is the RPE interval based on the tomographic image J1.
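  • A compact sketch of this distance computation: given the pixel coordinates of the optic-disc region OP1 and of the RPE regions on the two sides, the closest pixel on each side is found by exhaustive distance comparison (the first of the two methods above) and the Euclidean interval d1 is returned. Array names and shapes are illustrative assumptions.

```python
import numpy as np

def rpe_interval(disc_pts, rpe_plus_pts, rpe_minus_pts):
    """disc_pts: (N, 2) array of (x, z) pixels of the disc region OP1;
    rpe_plus_pts / rpe_minus_pts: RPE pixels on the +x / -x side of OP1.
    Returns the Euclidean distance d1 between the two closest pixels."""
    def closest_to_disc(pts):
        # Distance from every candidate pixel to every disc pixel.
        d = np.linalg.norm(pts[:, None, :] - disc_pts[None, :, :], axis=2)
        # Pixel whose shortest distance to the disc region is smallest.
        return pts[np.argmin(d.min(axis=1))]

    p_plus = closest_to_disc(rpe_plus_pts)
    p_minus = closest_to_disc(rpe_minus_pts)
    return float(np.linalg.norm(p_plus - p_minus))
```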
  • The tomographic image J2 shown in FIG. 11B is relatively inclined with respect to the tomographic image J1 shown in FIG. 11A. If processing similar to that for the tomographic image J1 is executed as in the prior art, this inclination is not reflected in the calculation of the RPE interval, so large errors may occur when the analysis results of the tomographic images J1 and J2 acquired under different conditions are compared. For example, in the case shown in FIG. 11, if the distances along the x direction, that is, the distance d1 shown in FIG. 11A and the distance d2′ shown in FIG. 11B, are obtained as the respective RPE intervals and compared, the comparison accuracy is greatly reduced.
  • Note that the tomographic images J1 and J2 may both be inclined with respect to the true direction due to movement of the eye E, and obtaining the true value of the RPE interval is extremely difficult.
  • In comparative observation, however, it suffices to compare the two analysis results under (substantially) the same conditions, so accuracy can be improved by taking the relative inclination into account.
  • To this end, the image analysis unit 238 performs the following processing to reflect the inclination in the analysis result.
  • the image analysis unit 238 specifies an image region OP2 corresponding to the optic disc and an image region C2 corresponding to RPE in the same manner as the processing related to the tomographic image J1. Further, the image analysis unit 238 specifies the closest pixel in the image region C2 corresponding to the RPE located on the + x side and the ⁇ x side with respect to the image region OP2.
  • Next, the image analysis unit 238 inclines the measurement direction of the distance d1 used when obtaining the RPE interval for the tomographic image J1 (that is, the direction of the double-headed arrow indicating the RPE interval in FIG. 11A) by the inclination deviation amount ΔθV calculated by the inclination deviation amount calculation unit 235. Then, the distance d2 between the closest pixel on the +x side and the closest pixel on the −x side is calculated along this inclined direction. This distance d2 is the RPE interval based on the tomographic image J2. Thereby, the RPE intervals d1 and d2 are measured in substantially the same direction with respect to the tomographic images J1 and J2, and the RPE intervals can be compared with high accuracy.
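  • A sketch of this tilt-compensated measurement: the vector between the two closest pixels of the second tomogram is projected onto the first tomogram's measurement direction inclined by ΔθV; measuring d1 along the x axis is an assumption of this example.

```python
import numpy as np

def tilt_compensated_interval(p_plus, p_minus, theta_v):
    """p_plus, p_minus: (x, z) closest pixels of the second tomogram;
    theta_v: tilt deviation in radians. Projects the vector between the
    pixels onto the first tomogram's measurement direction (the x axis,
    by assumption) inclined by theta_v, giving the interval d2."""
    direction = np.array([np.cos(theta_v), np.sin(theta_v)])
    diff = np.asarray(p_plus, float) - np.asarray(p_minus, float)
    return float(abs(np.dot(diff, direction)))
```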
  • the image processing unit 230 that functions as described above includes, for example, the aforementioned microprocessor, RAM, ROM, hard disk drive, circuit board, and the like.
  • In a storage device such as a hard disk drive, a computer program for causing the microprocessor to execute the above functions is stored in advance.
  • the display unit 240 includes the display device of the arithmetic control unit 200 described above.
  • the operation unit 250 includes the operation device of the arithmetic control unit 200 described above.
  • the operation unit 250 may include various buttons and keys provided on the housing of the fundus oculi observation device 1 or outside.
  • the operation unit 250 may include a joystick, an operation panel, or the like provided on the housing.
  • the display unit 240 may include various display devices such as a touch panel monitor provided on the housing of the fundus camera unit 2.
  • the display unit 240 and the operation unit 250 need not be configured as individual devices.
  • a device in which a display function and an operation function are integrated, such as a touch panel monitor, can be used.
  • Examples of the scanning modes of the signal light LS by the fundus oculi observation device 1 include a horizontal scan, a vertical scan, a cross scan, a radial scan, a circle scan, a concentric scan, and a spiral (vortex) scan. These scanning modes are selectively used as appropriate in consideration of the observation site of the fundus, the analysis target (such as retinal thickness), the time required for scanning, the precision of scanning, and the like.
  • the horizontal scan is to scan the signal light LS in the horizontal direction (x direction).
  • the horizontal scan also includes an aspect in which the signal light LS is scanned along a plurality of horizontal scanning lines arranged in the vertical direction (y direction). In this aspect, it is possible to arbitrarily set the scanning line interval. Further, the above-described three-dimensional image can be formed by sufficiently narrowing the interval between adjacent scanning lines (three-dimensional scanning). The same applies to the vertical scan.
  • the cross scan scans the signal light LS along a cross-shaped trajectory composed of two linear trajectories (straight trajectories) orthogonal to each other.
  • In the radial scan, the signal light LS is scanned along a radial trajectory composed of a plurality of linear trajectories arranged at predetermined angular intervals.
  • The cross scan can be regarded as an example of a radial scan.
  • the circle scan scans the signal light LS along a circular locus.
  • In the concentric scan, the signal light LS is scanned along a plurality of circular trajectories arranged concentrically around a predetermined center position.
  • A circle scan can be regarded as an example of a concentric scan.
  • In the spiral scan, the signal light LS is scanned along a spiral (vortex) locus while the radius of rotation is gradually reduced (or increased).
  • the galvanometer mirrors 43 and 44 are configured to scan the signal light LS in directions orthogonal to each other, the signal light LS can be scanned independently in the x direction and the y direction, respectively. Furthermore, by simultaneously controlling the directions of the galvanometer mirrors 43 and 44, it is possible to scan the signal light LS along an arbitrary locus on the xy plane. Thereby, various scanning modes as described above can be realized.
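  • As an illustration, the following sketch generates (x, y) target coordinates for some of these trajectories, which could then be fed to the two galvanometer mirrors; the function names and parameters are hypothetical.

```python
import numpy as np

def horizontal_scan(y, x_range, n):
    """A single horizontal scan line at height y."""
    xs = np.linspace(x_range[0], x_range[1], n)
    return np.column_stack([xs, np.full(n, y)])

def radial_scan(n_lines, half_length, n):
    """n_lines straight trajectories through the origin at equal angular
    intervals; a cross scan is the special case n_lines = 2."""
    lines = []
    for k in range(n_lines):
        a = np.pi * k / n_lines
        t = np.linspace(-half_length, half_length, n)
        lines.append(np.column_stack([t * np.cos(a), t * np.sin(a)]))
    return lines

def circle_scan(radius, n):
    """A single circular trajectory around the origin."""
    a = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.column_stack([radius * np.cos(a), radius * np.sin(a)])
```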
  • By scanning the signal light LS in these modes, a tomographic image in the fundus depth direction (z direction) along each scanning line (scanning locus) can be formed.
  • When the interval between scanning lines is sufficiently narrow, the above-described three-dimensional image can also be formed.
  • the region on the fundus oculi Ef to be scanned with the signal light LS as described above, that is, the region on the fundus oculi Ef to be subjected to OCT measurement is referred to as a scanning region.
  • the scanning area in the three-dimensional scan is a rectangular area in which a plurality of horizontal scans are arranged.
  • the scanning area in the concentric scan is a disk-shaped area surrounded by the locus of the circular scan with the maximum diameter.
  • the scanning area in the radial scan is a disk-shaped (or polygonal) area connecting both end positions of each scan line.
  • As described above, the fundus oculi observation device 1 stores the first fundus photographed image H1 and the first three-dimensional image M1 acquired at the first examination timing, and the second fundus photographed image H2 and the second three-dimensional image M2 acquired at the second examination timing.
  • Based on the first fundus photographed image H1 and the second fundus photographed image H2, the fundus oculi observation device 1 calculates the positional deviation amounts Δx, Δy, and Δθ between them in the fundus surface direction (xy direction).
  • Based on the calculated positional deviation amounts Δx, Δy, Δθ, the fundus oculi observation device 1 then designates a cross-sectional position at substantially the same position on the fundus oculi Ef depicted in each of the fundus photographed images H1 and H2.
  • the fundus oculi observation device 1 forms a first tomographic image at the cross-sectional position designated in the first fundus photographic image H1 based on the first three-dimensional image M1, and also the second three-dimensional image. Based on M2, a second tomographic image is formed at the cross-sectional position designated in the second fundus photographic image H2.
  • Alternatively, the subsequent processing may be executed with reference to the pixels at the cross-sectional positions of the three-dimensional images M1 and M2 corresponding to the designated cross-sectional positions.
  • Using the first tomographic image and the second tomographic image, the fundus oculi observation device 1 calculates the inclination shift amount ΔθV between them in the plane spanned by the direction along the designated cross-sectional position and the fundus depth direction (z direction).
  • In this way, the positional deviation amounts Δx, Δy, Δθ and the inclination deviation amount ΔθV interposed between the images to be compared in the comparative observation (particularly the three-dimensional images M1, M2) are obtained.
  • the fundus oculi observation device 1 can designate a plurality of cross-sectional positions for the respective fundus oculi images H1 and H2.
  • In this case, the fundus oculi observation device 1 forms a first tomographic image and a second tomographic image for each of the plurality of designated cross-sectional positions, and calculates, for each cross-sectional position, the amount of tilt deviation between the first and second tomographic images in the surface spanned by the direction along that cross-sectional position and the fundus depth direction. Thereby, a plurality of pairs of tomographic images, and a plurality of tilt deviation amounts, corresponding to the plurality of cross-sectional positions are obtained.
  • the fundus oculi observation device 1 can designate a cross-shaped cross-sectional position, that is, a pair of linear cross-sectional positions intersecting at right angles to each other.
  • the fundus oculi observation device 1 calculates a tilt shift amount by forming a first tomographic image and a second tomographic image for each designated cross-sectional position. Thereby, the amount of inclination shift in each of a pair of planes orthogonal to each other is obtained.
  • The overall tilt shift can then be calculated as a vector sum of the pair of tilt shift amounts.
  • The fundus oculi observation device 1 can also designate a radial cross-sectional position, that is, two or more linear cross-sectional positions arranged radially and intersecting each other.
  • the fundus oculi observation device 1 calculates a tilt shift amount by forming a first tomographic image and a second tomographic image for each designated cross-sectional position. Thereby, the same number of inclination deviation amounts as the number of cross-sectional positions (two or more) can be obtained. Further, the fundus oculi observation device 1 selects the maximum value (maximum inclination deviation amount) of these two or more inclination deviation amounts.
  • Then, so as to cancel the maximum tilt shift amount, the fundus oculi observation device 1 corrects the inclination deviation between the first three-dimensional image M1 and the second three-dimensional image M2 in the plane spanned by the direction along the cross-sectional position corresponding to the maximum tilt shift amount and the fundus depth direction.
  • Since the inclination deviation is corrected in the direction with the largest inclination, it is possible to correct the inclination deviation in the direction that has the greatest influence on the comparative observation while saving the resources of the arithmetic control unit 200.
  • the cross-sectional position may be specified manually.
  • the first and second fundus photographed images H1 and H2 can be displayed on the display unit 240, and the operation unit 250 can be operated to specify the cross-sectional position on these display images.
  • It is also possible to manually designate the cross-sectional position on one fundus photographed image (for example, the first fundus photographed image H1) and to have a position substantially the same as the designated cross-sectional position automatically designated on the other fundus photographed image (for example, the second fundus photographed image H2).
  • That is, the designation of the cross-sectional positions in the fundus photographed images H1 and H2 may be performed entirely automatically, partly automatically, or entirely manually.
  • In the manual case, the doctor can designate the desired cross-sectional position in both fundus photographed images H1 and H2; even when the target region is difficult to extract by image processing, the cross-sectional position can be specified by drawing on the doctor's knowledge and experience.
  • Which of these methods is adopted is arbitrary, and the device can also be configured so that these methods can be used selectively.
  • The fundus oculi observation device 1 can align the first three-dimensional image M1 with the second three-dimensional image M2 so as to cancel the calculated positional deviation amounts Δx, Δy, Δθ and the inclination shift amount ΔθV. As a result, when the three-dimensional images M1 and M2 are visually compared, their common points and differences can be easily grasped, and the accuracy of comparative observation can thus be improved.
  • Further, based on one or both of the calculated positional deviation amounts Δx, Δy, Δθ and the inclination deviation amount ΔθV, the fundus oculi observation device 1 can calculate a first analysis result (first value) of a predetermined physical quantity based on the first three-dimensional image M1 and a second analysis result (second value) of the predetermined physical quantity based on the second three-dimensional image M2.
  • In the prior art, these deviation amounts were not taken into consideration, so the comparison accuracy of the analysis results could be lowered by the difference in conditions between the first examination timing and the second examination timing.
  • In contrast, since the analysis results are obtained here in consideration of the above-described deviation amounts caused by the difference in conditions, the analysis results can be compared with higher accuracy than before.
  • In the above embodiment, the positional deviation amounts Δx, Δy, Δθ in the fundus surface direction (xy direction) and the inclination deviation amount ΔθV in a plane including the fundus depth direction (z direction) are obtained.
  • In addition, a positional shift amount in the z direction may be obtained.
  • As a method for obtaining the amount of positional deviation in the z direction, for example, the following is available. First, the z coordinate values, within the frame, of a predetermined part of the fundus oculi Ef are obtained in each of the first tomographic image and the second tomographic image. Then, the difference between these z coordinate values (that is, the positional deviation amount in the z direction) is obtained.
  • the predetermined part of the fundus oculi Ef includes the retina surface, the layered tissue of the fundus oculi Ef, and the like. Note that three-dimensional images may be compared instead of comparing tomographic images.
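  • A minimal sketch of this z-shift computation, using the median z coordinate of the retinal surface in each tomographic image as the representative depth (the choice of the median is an assumption of this example):

```python
import numpy as np

def z_shift(surface_z_1, surface_z_2):
    """surface_z_1, surface_z_2: z coordinates (in the frame) of the same
    predetermined part, e.g. the retinal surface, in the first and second
    tomographic images. Returns the z-direction positional deviation."""
    return float(np.median(surface_z_2) - np.median(surface_z_1))
```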
  • In the above embodiment, the positional deviation amount and the inclination deviation amount are expressed using the xyz coordinate system, and the image alignment and analysis processing are performed accordingly; however, these processes may also be performed using other coordinate systems.
  • For example, the central position (papilla center) OPa of the optic disc (corresponding image area OP) and the central position (macula center, that is, the fovea) YSa of the macula (corresponding image area YS) are specified in the fundus photographed image; this process can be executed using a conventional fundus image analysis technique.
  • Then, a two-dimensional polar coordinate system is defined in which the papilla center OPa is the origin and the direction connecting the papilla center OPa and the fovea YSa is the reference direction S.
  • The unit of distance from the origin OPa can be set, for example, based on the distance between the papilla center OPa and the fovea YSa.
  • With this coordinate system, an arbitrary position P in the fundus photographed image H1 can be expressed by coordinate values (r, θ).
  • Here, r is the distance between the origin OPa and the position P.
  • θ is the angle that the line segment connecting the origin OPa and the position P forms with the reference direction S.
  • The counterclockwise direction, as viewed from the origin OPa looking toward the fovea YSa, is defined as the positive direction of the angle.
  • This two-dimensional polar coordinate system can be used instead of the above xy coordinate system.
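  • A sketch of the conversion into this polar coordinate system; the function name and the angle-wrapping convention are illustrative assumptions.

```python
import numpy as np

def to_disc_polar(p, disc_center, fovea):
    """p: (x, y) position in the fundus photographed image;
    disc_center: papilla center OPa; fovea: macula center YSa.
    Returns (r, theta): r in units of the OPa-YSa distance, theta measured
    counterclockwise from the reference direction S."""
    p, c, f = (np.asarray(v, dtype=float) for v in (p, disc_center, fovea))
    s = f - c                                  # reference direction S
    v = p - c
    r = np.linalg.norm(v) / np.linalg.norm(s)  # unit = disc-to-fovea distance
    theta = np.arctan2(v[1], v[0]) - np.arctan2(s[1], s[0])
    theta = (theta + np.pi) % (2.0 * np.pi) - np.pi   # wrap into (-pi, pi]
    return float(r), float(theta)
```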
  • In the above embodiment, the fundus photographed image is described as an image photographed by a fundus camera or the like, but the fundus photographed image is not limited to this.
  • An arbitrary image depicting the two-dimensional form of the fundus surface (for example, the above-described accumulated image) can also be used as the fundus photographed image.
  • In the above embodiment, the position of the reference mirror 114 is changed to vary the optical path length difference between the optical path of the signal light LS and the optical path of the reference light LR, but the method of changing the optical path length difference is not limited to this.
  • the optical path length difference can be changed by moving the fundus camera unit 2 or the OCT unit 100 with respect to the eye E to change the optical path length of the signal light LS. It is also effective to change the optical path length difference by moving the measurement object in the depth direction (z direction), particularly when the measurement object is not a living body part.
  • the computer program in the above embodiment can be stored in any recording medium readable by a computer.
  • As this recording medium, for example, an optical disk or magneto-optical disk (CD-ROM, DVD-RAM, DVD-ROM, MO, etc.) or a magnetic storage medium (hard disk, floppy (registered trademark) disk, ZIP, etc.) can be used. The program can also be stored in a storage device such as a hard disk drive or memory.

PCT/JP2010/005633 2009-10-21 2010-09-15 眼底画像処理装置及び眼底観察装置 WO2011048748A1 (ja)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009242036A JP2011087672A (ja) 2009-10-21 2009-10-21 眼底画像処理装置及び眼底観察装置
JP2009-242036 2009-10-21

Publications (1)

Publication Number Publication Date
WO2011048748A1 true WO2011048748A1 (ja) 2011-04-28

Family

ID=43899993

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/005633 WO2011048748A1 (ja) 2009-10-21 2010-09-15 眼底画像処理装置及び眼底観察装置

Country Status (2)

Country Link
JP (1) JP2011087672A
WO (1) WO2011048748A1

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013048717A (ja) 2011-08-31 2013-03-14 Sony Corp 画像処理装置及び方法、記録媒体、並びにプログラム
JP5926533B2 (ja) 2011-10-27 2016-05-25 キヤノン株式会社 眼科装置
DK2797493T3 (en) * 2011-12-28 2018-09-03 Wavelight Gmbh PROCEDURE FOR OPTICAL COHESE TOMOGRAPHY AND FITTING FOR OPTICAL COHESE TOMOGRAPHY
JP6115007B2 (ja) * 2012-01-31 2017-04-19 株式会社ニデック 眼科画像処理装置及びプログラム
US9357916B2 (en) * 2012-05-10 2016-06-07 Carl Zeiss Meditec, Inc. Analysis and visualization of OCT angiography data
JP6402879B2 (ja) * 2013-08-06 2018-10-10 株式会社ニデック 眼科撮影装置
JP2015102537A (ja) * 2013-11-28 2015-06-04 キヤノン株式会社 光干渉断層計
JP6499398B2 (ja) * 2014-04-01 2019-04-10 キヤノン株式会社 眼科装置および制御方法

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007252692A (ja) * 2006-03-24 2007-10-04 Topcon Corp 眼底観察装置
JP2008005987A (ja) * 2006-06-28 2008-01-17 Topcon Corp 眼底観察装置及びそれを制御するプログラム
JP2010110393A (ja) * 2008-11-05 2010-05-20 Nidek Co Ltd 眼科撮影装置

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014023933A (ja) * 2012-07-30 2014-02-06 Canon Inc 断層撮影方法及び装置
US9986679B2 (en) 2013-10-31 2018-06-05 Fmc Corporation Alginate coating for sett treatment
WO2019016319A1 (en) * 2017-07-19 2019-01-24 Charité Universitätsmedizin Berlin METHOD FOR ESTIMATING FAPEA SHAPE PARAMETERS BY OPTICAL COHERENCE TOMOGRAPHY
JP2020527066A (ja) * 2017-07-19 2020-09-03 シャリテ−ウニベルジテーツメディツィン ベルリン 光干渉断層法によって中心窩の形状パラメータを推定するための方法
US11733029B2 (en) 2017-07-19 2023-08-22 Charité Universitätsmedizin Berlin Method for estimating shape parameters of the fovea by optical coherence tomography

Also Published As

Publication number Publication date
JP2011087672A (ja) 2011-05-06

Similar Documents

Publication Publication Date Title
WO2011048748A1 (ja) 眼底画像処理装置及び眼底観察装置
JP4971872B2 (ja) 眼底観察装置及びそれを制御するプログラム
US8622547B2 (en) Fundus observation apparatus
JP5437755B2 (ja) 眼底観察装置
JP5912358B2 (ja) 眼底観察装置
JP5628636B2 (ja) 眼底画像処理装置及び眼底観察装置
WO2016039187A1 (ja) 眼科撮影装置および眼科情報処理装置
WO2011013314A1 (ja) 眼科観察装置
JP5543171B2 (ja) 光画像計測装置
JP5937163B2 (ja) 眼底解析装置及び眼底観察装置
JP5415902B2 (ja) 眼科観察装置
JP5936254B2 (ja) 眼底観察装置及び眼底画像解析装置
WO2017065146A1 (ja) 眼科撮影装置及び眼科情報処理装置
JP5706506B2 (ja) 眼科装置
JP5956518B2 (ja) 眼科解析装置及び眼科撮影装置
JP5514026B2 (ja) 眼底画像処理装置及び眼底観察装置
JP6378795B2 (ja) 眼科撮影装置
JP6158535B2 (ja) 眼底解析装置
JP6374549B2 (ja) 眼科解析装置
JP6186453B2 (ja) 眼科解析装置及び眼科撮影装置
JP6186454B2 (ja) 眼科解析装置
WO2016039188A1 (ja) 眼底解析装置及び眼底観察装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10824603

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 10824603

Country of ref document: EP

Kind code of ref document: A1