WO2014203901A1 - Ophthalmologic imaging apparatus and ophthalmologic image display apparatus - Google Patents

Info

Publication number
WO2014203901A1
WO2014203901A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit, image, region, display, eye
Application number
PCT/JP2014/066046
Other languages
English (en)
Japanese (ja)
Inventor
祥聖 森口
秋葉 正博
Original Assignee
株式会社トプコン
Application filed by 株式会社トプコン
Priority to JP2015522939A (patent JP6046250B2)
Publication of WO2014203901A1

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 - Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/102 - Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for optical coherence tomography [OCT]

Definitions

  • The present invention relates to an ophthalmologic imaging apparatus that acquires an image of an eye to be examined using optical coherence tomography (OCT), and to an ophthalmic image display apparatus that displays an image of the eye to be examined acquired using OCT.
  • OCT, which forms an image representing the surface form and internal form of an object to be measured using a light beam from a laser light source or the like, has attracted attention. Unlike X-ray CT, OCT is not invasive to the human body, so its application is expected particularly in the medical and biological fields. In the field of ophthalmology, for example, apparatuses for forming images of the fundus and cornea have been put into practical use.
  • Patent Document 1 discloses an apparatus using the so-called Fourier domain OCT technique. This apparatus irradiates the object to be measured with a beam of low-coherence light, superimposes the reflected light and the reference light to generate interference light, acquires the spectral intensity distribution of the interference light, and performs a Fourier transform on it, thereby imaging the form of the object to be measured in the depth direction (z direction). The apparatus further includes a galvanometer mirror that scans the light beam (signal light) in one direction (x direction) orthogonal to the z direction, so that an image of a desired measurement target region of the object can be formed. An image formed by this apparatus is a two-dimensional cross-sectional image in the depth direction (z direction) along the scanning direction (x direction) of the light beam. This method is also called the spectral domain type.
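As an aside, the core spectral-domain reconstruction described above, Fourier-transforming the spectral intensity distribution of the interference light to obtain a depth profile along z, can be sketched in a few lines. This is an illustrative toy model, not the patented implementation; the wavenumber grid, reflector depth, and interferogram are all assumptions:

```python
import numpy as np

def a_scan_from_spectrum(spectral_intensity):
    """Recover a depth (z-direction) reflectivity profile from the spectral
    intensity distribution of the interference light, as in spectral-domain
    (Fourier-domain) OCT."""
    # Remove the DC component before transforming.
    spectrum = spectral_intensity - np.mean(spectral_intensity)
    # The (inverse) Fourier transform of the spectrum yields the depth
    # profile; the result is symmetric, so only half of it is kept.
    depth_profile = np.abs(np.fft.ifft(spectrum))
    return depth_profile[: len(depth_profile) // 2]

# Toy interferogram: a single reflector modulates the spectrum with a
# cosine whose frequency encodes the reflector's depth.
k = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)  # wavenumber axis
true_depth_bin = 100
interferogram = 1.0 + np.cos(true_depth_bin * k)

profile = a_scan_from_spectrum(interferogram)
peak_bin = int(np.argmax(profile))  # recovered depth of the reflector
```

In this toy setup the peak of the recovered profile falls at the depth bin of the simulated reflector, mirroring how each A-line of a cross-sectional image is obtained.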
  • A technique is also disclosed in which the signal light is scanned in the horizontal (x) and vertical (y) directions to form a plurality of two-dimensional cross-sectional images, and three-dimensional cross-sectional information of the measurement range is acquired and imaged based on these cross-sectional images.
  • Examples of such three-dimensional imaging include a method of displaying a plurality of cross-sectional images side by side (referred to as stack data), and a method of generating volume data (voxel data) based on the stack data and performing rendering processing on the volume data to form a three-dimensional image.
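In outline, building volume (voxel) data from stack data and rendering it can look like the following sketch; the array sizes and the use of a maximum-intensity projection as the "rendering processing" are assumptions for illustration:

```python
import numpy as np

# Stack data: one 2-D cross-sectional image (depth z by scan position x)
# per y position, obtained by repeating the x-direction scan at
# successive y positions.
rng = np.random.default_rng(42)
n_y, n_z, n_x = 8, 64, 64
stack_data = [rng.random((n_z, n_x)) for _ in range(n_y)]

# Volume (voxel) data is generated by stacking the cross-sectional
# images along the y axis.
volume = np.stack(stack_data, axis=0)  # shape (y, z, x)

# A simple rendering: a maximum-intensity projection along the depth
# axis produces an en-face view of the volume.
en_face = volume.max(axis=1)           # shape (y, x)
```

Real renderers (ray casting, surface extraction) are more involved, but they operate on the same stacked voxel array.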
  • Patent Documents 3 and 4 disclose other types of OCT apparatuses.
  • Patent Document 3 describes an apparatus in which the wavelength of the light irradiated onto the object to be measured is scanned (wavelength sweep), the interference intensity obtained by superimposing the reflected light of each wavelength and the reference light is detected to obtain the spectral intensity distribution, and the form of the object to be measured is imaged by performing a Fourier transform on it.
  • Such an apparatus is called a swept source type. The swept source type is a kind of Fourier domain type.
  • Patent Document 4 describes an OCT apparatus that irradiates the object to be measured with light having a predetermined beam diameter and analyzes the components of the interference light obtained by superimposing the reflected light and the reference light, thereby forming an image of the object to be measured in a cross section orthogonal to the traveling direction of the light. Such an OCT apparatus is called a full-field type or en-face type.
  • Patent Document 5 discloses a configuration in which OCT is applied to the ophthalmic field.
  • Prior to the application of OCT, fundus cameras, slit lamps, SLOs (Scanning Laser Ophthalmoscopes), and the like were used as devices for observing the eye to be examined (for example, Patent Documents 6 to 8).
  • A fundus camera is a device that photographs the fundus by illuminating the eye to be examined with illumination light and receiving the fundus reflection light.
  • A slit lamp is a device that acquires an image of a cross section of the cornea by optically sectioning the cornea with slit light.
  • An SLO is a device that images the fundus surface by scanning the fundus with laser light and detecting the reflected light with a highly sensitive element such as a photomultiplier tube.
  • An apparatus using OCT has an advantage over a fundus camera or the like in that a high-definition image can be acquired and a cross-sectional image or a three-dimensional image can be acquired.
  • Since an apparatus using OCT can be applied to the observation of various parts of the eye to be examined and can acquire high-definition images, it has been applied to the diagnosis of various ophthalmic diseases.
  • Vitreous observation using OCT has also made progress.
  • The vitreous body is a transparent, jelly-like tissue that fills the lumen of the eyeball, and its form can be depicted by OCT.
  • Examples of vitreous observation targets include the course of the vitreous fibers, the shape and arrangement of vitreous boundary surfaces (for example, posterior vitreous detachment), the shape and arrangement of Cloquet's canal, and the distribution of vestiges of the vitreous blood vessels.
  • In OCT, the image quality changes according to the position in the depth direction (the direction in which the signal light travels), so measurement is performed with the main observation site in focus.
  • Typical main observation sites are the retina, the choroid, and the vitreous body, and focus is adjusted to whichever of these is of interest.
  • However, the image quality of the image region corresponding to an out-of-focus site is lower than that of the image region corresponding to the in-focus site, so observation of the relatively low-quality image region may be hindered. That is, the conventional OCT technique provides sufficient performance for local observation of the eye to be examined, but it has been difficult to suitably perform global observation.
  • An object of the present invention is to provide a technique capable of suitably performing not only local observation of a subject eye but also global observation.
  • the invention according to claim 1 divides the light from the light source into signal light and reference light, and the signal light passing through the eye to be examined and the reference light passing through the reference light path
  • The invention according to claim 2 is the ophthalmologic photographing apparatus according to claim 1, wherein, when the image formed by the forming unit includes a two-dimensional image, the dividing unit includes: an image region specifying unit that specifies, in the two-dimensional image formed by the forming unit, an image region corresponding to a predetermined part of the eye to be examined; and a partial region specifying unit that specifies, as the partial regions, two-dimensional image regions having the image region specified by the image region specifying unit as a boundary.
  • The invention according to claim 3 is the ophthalmologic photographing apparatus according to claim 1, wherein, when the image formed by the forming unit includes a three-dimensional image, the dividing unit includes: an image region specifying unit that specifies, in the three-dimensional image formed by the forming unit, an image region corresponding to a predetermined part of the eye to be examined; and a partial region specifying unit that specifies, as the partial regions, three-dimensional image regions having the image region specified by the image region specifying unit as a boundary.
  • The invention according to claim 4 is the ophthalmologic imaging apparatus according to claim 2 or claim 3, wherein, when the predetermined part includes the inner limiting membrane, the image region specifying unit specifies an inner limiting membrane region corresponding to the inner limiting membrane of the eye to be examined, and the partial region specifying unit specifies, as the partial regions, a retinal region corresponding to the retina of the eye to be examined and a vitreous region corresponding to the vitreous body, based on the result of specifying the inner limiting membrane region.
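The division in claims 2 and 4, splitting a two-dimensional cross-sectional image into partial regions bounded by a segmented layer such as the inner limiting membrane, might be sketched as follows; the image size and the per-column boundary depths are hypothetical values for illustration:

```python
import numpy as np

def split_at_boundary(image, boundary_rows):
    """Divide a 2-D cross-sectional image into two partial regions using a
    segmented boundary (e.g. the inner limiting membrane): for each image
    column (A-line), pixels above the boundary belong to one region (the
    vitreous side) and pixels at or below it to the other (the retina side).

    boundary_rows[i] is the boundary depth for column i.
    Returns boolean masks for the two partial regions."""
    n_rows, n_cols = image.shape
    rows = np.arange(n_rows)[:, None]            # column vector of depths
    upper_mask = rows < boundary_rows[None, :]   # vitreous-side region
    lower_mask = ~upper_mask                     # retina-side region
    return upper_mask, lower_mask

# Toy B-scan (10 depth pixels x 5 A-lines) with a slanted boundary.
img = np.zeros((10, 5))
ilm = np.array([2, 3, 4, 5, 6])
vitreous, retina = split_at_boundary(img, ilm)
```

The same masked-split pattern extends to three-dimensional images (claim 3) by adding a y axis, and to the Bruch's membrane and choroid-sclera boundaries of claims 5 and 6 by swapping in a different segmented surface.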
  • The invention according to claim 5 is the ophthalmologic imaging apparatus according to claim 2 or claim 3, wherein, when the predetermined part includes Bruch's membrane, the image region specifying unit specifies a Bruch's membrane region corresponding to Bruch's membrane of the eye to be examined, and the partial region specifying unit specifies, as the partial regions, a retinal region corresponding to the retina of the eye to be examined and a choroid region corresponding to the choroid, based on the result of specifying the Bruch's membrane region.
  • The invention according to claim 6 is the ophthalmologic imaging apparatus according to claim 2 or 3, wherein, when the predetermined part includes the choroid-sclera boundary, the image region specifying unit specifies a choroid-sclera boundary region corresponding to the choroid-sclera boundary of the eye to be examined, and the partial region specifying unit specifies, as the partial regions, a choroid region corresponding to the choroid of the eye to be examined and a scleral region corresponding to the sclera, based on the result of specifying the choroid-sclera boundary region.
  • A seventh aspect of the present invention is the ophthalmologic photographing apparatus according to any one of the first to sixth aspects, wherein the setting unit sets, as the display condition for each of the plurality of partial regions acquired by the dividing unit, a parameter value for changing the pixel values of a plurality of pixels included in that partial region.
  • The invention according to claim 8 is the ophthalmologic photographing apparatus according to claim 7, wherein the parameter includes at least one of: a first parameter for changing the pixel values of the plurality of pixels to pseudo-color values; a second parameter for changing the luminance values in the pixel values of the plurality of pixels; a third parameter for changing the contrast based on the luminance values in the pixel values of the plurality of pixels; a fourth parameter for smoothing the pixel values of the plurality of pixels; and a fifth parameter for emphasizing at least some of the plurality of pixels.
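A minimal sketch of applying such a parameter value within one partial region only (here the luminance-offset and contrast parameters; pseudo-color, smoothing, and emphasis would follow the same masked pattern). The function and parameter names are made up for illustration:

```python
import numpy as np

def apply_display_condition(image, mask, brightness=0.0, contrast=1.0):
    """Change pixel values only inside one partial region (given by a
    boolean mask): a luminance offset (the second parameter in the claim)
    and a contrast gain about the region mean (the third parameter)."""
    out = image.astype(float).copy()
    region = out[mask]
    mean = region.mean()
    # Scale about the region mean, then shift; pixels outside the mask
    # keep their original values.
    out[mask] = (region - mean) * contrast + mean + brightness
    return np.clip(out, 0.0, 1.0)

img = np.full((4, 4), 0.5)            # flat toy image
mask = np.zeros((4, 4), dtype=bool)
mask[:2] = True                       # upper half is the target region
result = apply_display_condition(img, mask, brightness=0.2)
```

Storing one (brightness, contrast, ...) tuple per anatomical part, as claim 9's correspondence information does, then reduces display setup to a mask lookup plus this call.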
  • The invention according to claim 9 is the ophthalmologic imaging apparatus according to claim 7 or claim 8, wherein the setting unit includes: a correspondence information storage unit in which correspondence information associating parameter values with each of a plurality of parts of the eye is stored in advance; a part selection unit that selects, for each of the plurality of partial regions acquired by the dividing unit, the part corresponding to that partial region from the plurality of parts; and a parameter specifying unit that specifies, for each of the plurality of partial regions, the parameter value associated with the part selected by the part selection unit, based on the correspondence information; and the parameter value specified by the parameter specifying unit is set as the display condition.
  • A tenth aspect of the present invention is the ophthalmologic photographing apparatus according to any one of the first to ninth aspects, wherein the setting unit sets, as the display condition, the partial region, among the plurality of partial regions acquired by the dividing unit, to which predetermined image processing is applied.
  • An eleventh aspect of the present invention is the ophthalmologic photographing apparatus according to the tenth aspect, wherein the image processing includes superimposition processing in which two or more images are superimposed to form a single image; the optical system includes a scanning unit that scans substantially the same cross section of the eye to be examined a plurality of times with the signal light; the forming unit forms a plurality of images based on the detection results of the interference light acquired by the optical system with the plurality of scans; the dividing unit divides each of the plurality of images formed by the forming unit into the plurality of partial regions; and the display control unit performs the superimposition processing on the plurality of images only for the partial region set by the setting unit as the application target of the superimposition processing, among the plurality of partial regions.
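A sketch of this region-restricted superimposition: several images of substantially the same cross section are averaged, but only within the partial region set as the application target. The toy scans, mask, and function name are assumptions for illustration:

```python
import numpy as np

def selective_average(images, mask):
    """Superimposition processing restricted to one partial region: the
    repeated scans of the same cross section are averaged only where the
    mask is True; elsewhere the first image is kept unchanged."""
    stack = np.stack(images, axis=0)
    averaged = stack.mean(axis=0)
    out = images[0].astype(float).copy()
    out[mask] = averaged[mask]
    return out

# Four noisy repeat scans of the same cross section (toy data).
rng = np.random.default_rng(0)
scans = [np.ones((6, 6)) + 0.1 * rng.standard_normal((6, 6))
         for _ in range(4)]
mask = np.zeros((6, 6), dtype=bool)
mask[:3] = True                       # average only the upper region
result = selective_average(scans, mask)
```

Averaging suppresses speckle in the selected region (for instance a weakly scattering vitreous region) while leaving the rest of the image as a single fast scan.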
  • the invention according to claim 12 is the ophthalmologic photographing apparatus according to claim 11, wherein the display control unit is configured to apply the plurality of partial areas to which the superimposition process is not applied among the plurality of partial areas.
  • A thirteenth aspect of the present invention is the ophthalmologic photographing apparatus according to any one of the first to twelfth aspects, wherein the optical system includes a scanning unit that scans the eye to be examined with the signal light, and the apparatus further comprises: a timing unit that starts measuring time at a predetermined timing; and a control unit that, after a predetermined time has been measured by the timing unit, controls the optical system to scan substantially the same cross section of the eye to be examined a plurality of times with the signal light. The forming unit forms a plurality of images of the cross section based on the detection results of the interference light acquired by the optical system with the plurality of scans, the display control unit includes a superimposition processing unit that forms a single image by superimposing the plurality of images formed by the forming unit, and the display control unit displays the single image formed by the superimposition processing unit on the display means.
  • A fourteenth aspect of the invention is the ophthalmologic photographing apparatus according to any one of the first to twelfth aspects, wherein the optical system includes a scanning unit that repeatedly scans substantially the same cross section of the eye to be examined with the signal light; the forming unit sequentially forms images of the cross section based on the detection results of the interference light acquired by the optical system with the repeated scanning; movement state information indicating the movement state of a specific part of the eye depicted in the images formed by the forming unit is sequentially obtained; and a control unit controls the optical system, based on the sequentially obtained movement state information, to scan the cross section a plurality of times with the signal light. The forming unit forms a plurality of images of the cross section based on the detection results of the interference light acquired by the optical system in association with the plurality of scans, and the display control unit includes a superimposition processing unit that forms a single image by superimposing the plurality of images formed by the forming unit, and displays the single image formed by the superimposition processing unit on the display means.
  • A fifteenth aspect of the present invention is the ophthalmologic photographing apparatus according to any one of the first to fourteenth aspects, characterized by having an analysis unit that calculates the size of a partial region by analyzing any of the plurality of partial regions to which the display conditions set by the setting unit are applied.
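The analysis in claim 15 amounts to measuring the size of a partial region. A minimal sketch, assuming the region is given as a boolean mask and the physical pixel pitch is known (the pitch values below are made up for illustration):

```python
import numpy as np

def region_size_mm2(mask, pixel_height_mm, pixel_width_mm):
    """Calculate the size (area) of a partial region by counting the
    pixels in its mask and scaling by the physical size of one pixel.
    The pixel dimensions would come from the scan geometry."""
    return float(mask.sum()) * pixel_height_mm * pixel_width_mm

mask = np.zeros((100, 100), dtype=bool)
mask[20:40, 10:60] = True                     # 20 x 50 = 1000 pixels
area = region_size_mm2(mask, 0.01, 0.02)      # 10 um x 20 um pixels
```

For a three-dimensional partial region the same count-and-scale step with a voxel volume yields a volume instead of an area.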
  • The invention according to claim 16 is an ophthalmologic image display apparatus comprising: a reception unit that receives an image of an eye to be examined formed using optical coherence tomography; a dividing unit that divides the image received by the reception unit into a plurality of partial regions; a setting unit that sets display conditions for each of the plurality of partial regions acquired by the dividing unit; and a display control unit that causes display means to display the image received by the reception unit based on the display conditions set by the setting unit. This ophthalmologic image display apparatus may have any function or configuration of the ophthalmologic photographing apparatus according to the embodiment.
  • the ophthalmologic imaging apparatus forms a cross-sectional image or a three-dimensional image of the eye to be examined using OCT.
  • images acquired by OCT may be collectively referred to as OCT images.
  • a measurement operation for forming an OCT image may be referred to as OCT measurement.
  • The configuration according to the embodiment can also be applied to an ophthalmologic imaging apparatus using another type of OCT (for example, a swept source type).
  • In the following, an apparatus combining an OCT apparatus and a fundus camera is described in detail.
  • It is also possible to combine an OCT apparatus having the configuration according to the embodiment with an imaging apparatus other than a fundus camera, for example an SLO, a slit lamp, or an ophthalmic surgical microscope.
  • the configuration according to the embodiment can be incorporated into a single OCT apparatus.
  • the configuration according to the embodiment can be applied to an ophthalmologic imaging apparatus including an OCT apparatus that can image an arbitrary part of an eye to be examined such as a cornea, an iris, and a crystalline lens.
  • the ophthalmologic photographing apparatus 1 includes a fundus camera unit 2, an OCT unit 100, and an arithmetic control unit 200.
  • The fundus camera unit 2 has almost the same optical system as a conventional fundus camera.
  • the OCT unit 100 is provided with an optical system for acquiring an OCT image of the fundus.
  • the arithmetic control unit 200 includes a computer that executes various arithmetic processes and control processes.
  • the fundus camera unit 2 shown in FIG. 1 is provided with an optical system for obtaining a two-dimensional image (fundus image) representing the surface form of the fundus oculi Ef of the eye E to be examined.
  • the fundus image includes an observation image and a captured image.
  • the observation image is, for example, a monochrome moving image formed at a predetermined frame rate using near infrared light.
  • the captured image may be, for example, a color image obtained by flashing visible light, or a monochrome still image using near infrared light or visible light as illumination light.
  • the fundus camera unit 2 may be configured to be able to acquire images other than these, such as a fluorescein fluorescent image, an indocyanine green fluorescent image, a spontaneous fluorescent image, and the like.
  • The fundus camera unit 2 is provided with a chin rest and a forehead rest for supporting the subject's face. The fundus camera unit 2 is also provided with an illumination optical system 10 and a photographing optical system 30.
  • the illumination optical system 10 irradiates the fundus oculi Ef with illumination light.
  • The photographing optical system 30 guides the fundus reflection light of the illumination light to imaging devices (CCD image sensors 35 and 38; a CCD image sensor is sometimes simply referred to as a CCD).
  • the imaging optical system 30 guides the signal light from the OCT unit 100 to the fundus oculi Ef and guides the signal light passing through the fundus oculi Ef to the OCT unit 100.
  • the observation light source 11 of the illumination optical system 10 is composed of, for example, a halogen lamp.
  • The light (observation illumination light) output from the observation light source 11 is reflected by the reflection mirror 12, which has a curved reflection surface, passes through the condensing lens 13 and the visible cut filter 14, and becomes near-infrared light. The observation illumination light is then once converged in the vicinity of the photographing light source 15, reflected by the mirror 16, and passes through the relay lenses 17 and 18, the diaphragm 19, and the relay lens 20. The observation illumination light is then reflected at the peripheral portion (the region around the hole) of the perforated mirror 21, passes through the dichroic mirror 46, is refracted by the objective lens 22, and illuminates the fundus oculi Ef.
  • An LED (Light Emitting Diode) may also be used as the observation light source.
  • The fundus reflection light of the observation illumination light is refracted by the objective lens 22, passes through the dichroic mirror 46, passes through the hole formed in the central region of the perforated mirror 21, passes through the dichroic mirror 55, travels through the focusing lens 31, and is reflected by the mirror 32. Further, the fundus reflection light passes through the half mirror 39A, is reflected by the dichroic mirror 33, and forms an image on the light receiving surface of the CCD image sensor 35 via the condenser lens.
  • the CCD image sensor 35 detects fundus reflected light at a predetermined frame rate, for example. On the display device 3, an image (observation image) based on fundus reflection light detected by the CCD image sensor 35 is displayed. When the photographing optical system is focused on the anterior segment, an observation image of the anterior segment of the eye E is displayed.
  • the photographing light source 15 is constituted by, for example, a xenon lamp.
  • the light (imaging illumination light) output from the imaging light source 15 is applied to the fundus oculi Ef through the same path as the observation illumination light.
  • The fundus reflection light of the imaging illumination light is guided to the dichroic mirror 33 through the same path as the observation illumination light, passes through the dichroic mirror 33, is reflected by the mirror 36, and forms an image on the light receiving surface of the CCD image sensor 38 via the condenser lens 37.
  • On the display device 3, an image (captured image) based on the fundus reflection light detected by the CCD image sensor 38 is displayed.
  • the display device 3 that displays the observation image and the display device 3 that displays the captured image may be the same or different.
  • When near-infrared light is used as the imaging illumination light, an infrared captured image is displayed. It is also possible to use an LED as the photographing light source.
  • The LCD (Liquid Crystal Display) 39 displays a fixation target and a visual acuity measurement target.
  • the fixation target is an index for fixing the eye E to be examined, and is used at the time of fundus photographing or OCT measurement.
  • Part of the light output from the LCD 39 is reflected by the half mirror 39A, reflected by the mirror 32, passes through the focusing lens 31 and the dichroic mirror 55, passes through the hole of the perforated mirror 21, passes through the dichroic mirror 46, is refracted by the objective lens 22, and is projected onto the fundus oculi Ef.
  • the fixation position of the eye E can be changed by changing the display position of the fixation target on the screen of the LCD 39.
  • Examples of the fixation position of the eye E include, as in conventional fundus cameras, a position for acquiring an image centered on the macula of the fundus oculi Ef, a position for acquiring an image centered on the optic disc, and a position for acquiring an image centered on the fundus center between the macula and the optic disc. The display position of the fixation target can also be changed arbitrarily.
  • the fundus camera unit 2 is provided with an alignment optical system 50 and a focus optical system 60 as in the conventional fundus camera.
  • The alignment optical system 50 generates an index (alignment index) for aligning the apparatus optical system with the eye E.
  • the focus optical system 60 generates an index (split index) for focusing on the fundus oculi Ef.
  • The light (alignment light) output from the LED 51 of the alignment optical system 50 passes through the apertures 52 and 53 and the relay lens 54, is reflected by the dichroic mirror 55, passes through the hole of the perforated mirror 21, passes through the dichroic mirror 46, and is projected onto the cornea of the eye E by the objective lens 22.
  • The corneal reflection light of the alignment light passes through the objective lens 22, the dichroic mirror 46, and the hole of the perforated mirror 21; part of it passes through the dichroic mirror 55, travels through the focusing lens 31, is reflected by the mirror 32, passes through the half mirror 39A, is reflected by the dichroic mirror 33, and is projected onto the light receiving surface of the CCD image sensor 35 by the condenser lens.
  • the light reception image (alignment index) by the CCD image sensor 35 is displayed on the display device 3 together with the observation image.
  • the user performs alignment by performing the same operation as that of a conventional fundus camera. Further, the arithmetic control unit 200 may perform alignment by analyzing the position of the alignment index and moving the optical system (auto-alignment function).
  • the reflecting surface of the reflecting rod 67 is obliquely provided on the optical path of the illumination optical system 10.
  • The light (focus light) output from the LED 61 of the focus optical system 60 passes through the relay lens 62, is separated into two light beams by the split indicator plate 63, passes through the two-hole aperture 64, is reflected by the mirror 65, and is focused on the reflecting surface of the reflecting rod 67 by the condenser lens 66 and reflected. The focus light then passes through the relay lens 20, is reflected by the perforated mirror 21, passes through the dichroic mirror 46, is refracted by the objective lens 22, and is projected onto the fundus oculi Ef.
  • the fundus reflection light of the focus light is detected by the CCD image sensor 35 through the same path as the corneal reflection light of the alignment light.
  • a light reception image (split index) by the CCD image sensor 35 is displayed on the display device 3 together with the observation image.
  • the arithmetic and control unit 200 analyzes the position of the split index and moves the focusing lens 31 and the focus optical system 60 to perform focusing as in the conventional case (autofocus function). Alternatively, focusing may be performed manually while visually checking the split indicator.
  • the dichroic mirror 46 branches the optical path for OCT measurement from the optical path for fundus imaging.
  • the dichroic mirror 46 reflects light in a wavelength band used for OCT measurement and transmits light for fundus photographing.
  • In the optical path for OCT measurement, a collimator lens unit 40, an optical path length changing unit 41, a galvano scanner 42, a focusing lens 43, a mirror 44, and a relay lens 45 are provided in this order from the OCT unit 100 side.
  • the optical path length changing unit 41 is movable in the direction of the arrow shown in FIG. 1, and changes the optical path length of the optical path for OCT measurement. This change in the optical path length is used for correcting the optical path length according to the axial length of the eye E or adjusting the interference state.
  • the optical path length changing unit 41 includes, for example, a corner cube and a mechanism for moving the corner cube.
  • the galvano scanner 42 changes the traveling direction of light (signal light LS) passing through the optical path for OCT measurement. Thereby, the fundus oculi Ef can be scanned with the signal light LS.
  • the galvano scanner 42 includes, for example, a galvano mirror that scans the signal light LS in the x direction, a galvano mirror that scans in the y direction, and a mechanism that drives these independently. Thereby, the signal light LS can be scanned in an arbitrary direction on the xy plane.
  • the OCT unit 100 is provided with an optical system for acquiring an OCT image of the fundus oculi Ef.
  • This optical system has the same configuration as a conventional spectral domain type OCT apparatus. That is, this optical system divides low-coherence light into reference light and signal light, and generates interference light by causing interference between the signal light passing through the fundus oculi Ef and the reference light passing through the reference optical path. It is configured to detect spectral components. This detection result (detection signal) is sent to the arithmetic control unit 200.
  • In the case of a swept source type, a wavelength-swept light source is provided instead of a light source that outputs low-coherence light, and an optical member that spectrally decomposes the interference light is not provided.
  • a known technique according to the type of optical coherence tomography can be arbitrarily applied.
  • the light source unit 101 outputs broadband low-coherence light.
  • The low-coherence light includes, for example, light in a near-infrared wavelength band (about 800 nm to 900 nm) and has a temporal coherence length of about several tens of micrometers. Note that near-infrared light in a wavelength band that cannot be visually recognized by the human eye, for example with a center wavelength of about 1040 to 1060 nm, may also be used as the low-coherence light.
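For orientation, the quoted coherence length follows from the source's center wavelength and bandwidth; for a Gaussian spectrum the standard relation is l_c = (2 ln 2 / pi) * lambda0^2 / (delta lambda). The bandwidth figures below are illustrative assumptions, not values from this document:

```python
import math

def coherence_length_um(center_wavelength_nm, bandwidth_nm):
    """Temporal coherence length of a Gaussian-spectrum source,
    l_c = (2 ln 2 / pi) * lambda0^2 / (delta lambda), in micrometers.
    A broader bandwidth gives a shorter coherence length and hence a
    finer axial (depth) resolution in OCT."""
    lam = center_wavelength_nm * 1e-3    # convert to micrometers
    dlam = bandwidth_nm * 1e-3
    return (2.0 * math.log(2.0) / math.pi) * lam ** 2 / dlam

# Example: an 850 nm source with a 10 nm bandwidth gives a coherence
# length of roughly 30 micrometers, consistent in order of magnitude
# with the "several tens of micrometers" quoted above.
lc = coherence_length_um(850.0, 10.0)
```

The relation also shows why swept-source systems near 1050 nm need comparable fractional bandwidths to reach similar depth resolution.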
  • The light source unit 101 includes a light output device such as a super luminescent diode (SLD), an LED, or an SOA (Semiconductor Optical Amplifier).
  • the low coherence light output from the light source unit 101 is guided to the fiber coupler 103 by the optical fiber 102 and split into the signal light LS and the reference light LR.
  • the reference light LR is guided by the optical fiber 104 and reaches an optical attenuator (attenuator) 105.
  • the optical attenuator 105 automatically adjusts the amount of the reference light LR guided to the optical fiber 104 under the control of the arithmetic control unit 200 using a known technique.
  • the reference light LR whose light amount has been adjusted by the optical attenuator 105 is guided by the optical fiber 104 and reaches the polarization adjuster (polarization controller) 106.
  • the polarization adjuster 106 is, for example, a device that adjusts the polarization state of the reference light LR guided in the optical fiber 104 by applying a stress from the outside to the optical fiber 104 in a loop shape.
  • the configuration of the polarization adjuster 106 is not limited to this, and any known technique can be used.
  • the reference light LR whose polarization state is adjusted by the polarization adjuster 106 reaches the fiber coupler 109.
  • the signal light LS generated by the fiber coupler 103 is guided by the optical fiber 107 and converted into a parallel light beam by the collimator lens unit 40. Further, the signal light LS reaches the dichroic mirror 46 via the optical path length changing unit 41, the galvano scanner 42, the focusing lens 43, the mirror 44, and the relay lens 45. The signal light LS is reflected by the dichroic mirror 46, is refracted by the objective lens 22, and is applied to the fundus oculi Ef. The signal light LS is scattered (including reflection) at various depth positions of the fundus oculi Ef. The backscattered light of the signal light LS from the fundus oculi Ef travels in the same direction as the forward path in the reverse direction, is guided to the fiber coupler 103, and reaches the fiber coupler 109 via the optical fiber 108.
  • the fiber coupler 109 causes the backscattered light of the signal light LS and the reference light LR that has passed through the optical fiber 104 to interfere with each other.
  • the interference light LC generated thereby is guided by the optical fiber 110 and emitted from the emission end 111. Further, the interference light LC is converted into a parallel light beam by the collimator lens 112, dispersed (spectral decomposition) by the diffraction grating 113, condensed by the condenser lens 114, and projected onto the light receiving surface of the CCD image sensor 115.
  • the diffraction grating 113 shown in FIG. 2 is a transmission type, other types of spectroscopic elements such as a reflection type diffraction grating may be used.
  • the CCD image sensor 115 is a line sensor, for example, and detects each spectral component of the split interference light LC and converts it into electric charges.
  • the CCD image sensor 115 accumulates this electric charge, generates a detection signal, and sends it to the arithmetic control unit 200.
  • In this embodiment, a Michelson-type interferometer is used, but any type of interferometer, such as a Mach-Zehnder type, can be used as appropriate.
  • Instead of the CCD image sensor, another type of image sensor, for example a CMOS (Complementary Metal Oxide Semiconductor) image sensor, can be used.
  • the configuration of the arithmetic control unit 200 will be described.
  • the arithmetic control unit 200 analyzes the detection signal input from the CCD image sensor 115 and forms an OCT image of the fundus oculi Ef.
  • the arithmetic processing for this is the same as that of a conventional spectral domain type OCT apparatus.
  • the arithmetic control unit 200 controls each part of the fundus camera unit 2, the display device 3, and the OCT unit 100. For example, the arithmetic control unit 200 displays an OCT image of the fundus oculi Ef on the display device 3.
  • the arithmetic control unit 200 controls the operation of the observation light source 11, the imaging light source 15 and the LEDs 51 and 61, the operation control of the LCD 39, the movement control of the focusing lenses 31 and 43, and the reflector 67. Movement control, movement control of the focus optical system 60, movement control of the optical path length changing unit 41, operation control of the galvano scanner 42, and the like are performed.
  • the arithmetic control unit 200 performs operation control of the light source unit 101, operation control of the optical attenuator 105, operation control of the polarization adjuster 106, operation control of the CCD image sensor 115, and the like.
  • the arithmetic control unit 200 includes, for example, a microprocessor, a RAM, a ROM, a hard disk drive, a communication interface, and the like, as in a conventional computer.
  • a computer program for controlling the ophthalmologic photographing apparatus 1 is stored in a storage device such as a hard disk drive.
  • the arithmetic control unit 200 may include various circuit boards, for example, a circuit board for forming an OCT image.
  • the arithmetic control unit 200 may include an operation device (input device) such as a keyboard and a mouse, and a display device such as an LCD.
  • The fundus camera unit 2, the display device 3, the OCT unit 100, and the arithmetic control unit 200 may be configured integrally (that is, in a single housing) or separated into two or more housings.
  • Control system: The configuration of the control system of the ophthalmologic photographing apparatus 1 will be described with reference to FIGS.
  • the control system of the ophthalmologic photographing apparatus 1 is configured around the control unit 210.
  • the control unit 210 includes, for example, the aforementioned microprocessor, RAM, ROM, hard disk drive, communication interface, and the like.
  • the control unit 210 is provided with a main control unit 211 and a storage unit 212.
  • the main control unit 211 performs the various controls described above.
  • the main control unit 211 includes focusing drive units 31A and 43A of the fundus camera unit 2, an optical path length changing unit 41, a galvano scanner 42, a light source unit 101 of the OCT unit 100, an optical attenuator 105, and a polarization adjuster 106. To control.
  • the focusing drive unit 31A moves the focusing lens 31 in the optical axis direction. Thereby, the focus position of the photographic optical system 30 is changed.
  • the focusing drive unit 43A moves the focusing lens 43 provided in the optical path of the signal light LS in the optical axis direction. Thereby, the focus position of the signal light LS is changed.
  • The main control unit 211 can also move the optical system provided in the fundus camera unit 2 three-dimensionally by controlling an optical system drive unit (not shown). This control is used in alignment and tracking. Tracking moves the apparatus optical system in accordance with the movement of the eye E. When tracking is performed, alignment and focusing are carried out in advance; tracking then maintains this suitable positional relationship, in which alignment and focus are achieved, by causing the position of the apparatus optical system to follow the eye movement.
  • the main control unit 211 performs a process of writing data to the storage unit 212 and a process of reading data from the storage unit 212.
  • the storage unit 212 stores various data. Examples of the data stored in the storage unit 212 include OCT image image data, fundus image data, and examined eye information.
  • the eye information includes information about the subject such as patient ID and name, and information about the eye such as left / right eye identification information.
  • the storage unit 212 stores various programs and data for operating the ophthalmologic photographing apparatus 1.
  • the image forming unit 220 forms image data of a cross-sectional image of the fundus oculi Ef based on the detection signal from the CCD image sensor 115. This process includes processes such as noise removal (noise reduction), filter processing, dispersion compensation, and FFT (Fast Fourier Transform) as in the conventional spectral domain type optical coherence tomography. In the case of another type of OCT apparatus, the image forming unit 220 executes a known process corresponding to the type. The image forming unit 220 functions as a “forming unit”.
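  • The spectral-domain image formation described above (noise reduction, dispersion compensation, FFT) can be illustrated with a minimal sketch. The following function and synthetic spectrum are illustrative assumptions only, not the actual processing of the image forming unit 220; dispersion compensation is omitted and a simple DC subtraction stands in for noise reduction:

```python
# Hypothetical sketch: reconstructing one A-line (depth profile) from a
# spectral-domain OCT interference spectrum via FFT.
import numpy as np

def a_line_from_spectrum(spectrum: np.ndarray) -> np.ndarray:
    """Convert one detected spectrum (one CCD line) into a depth profile."""
    # Remove the DC background (a simple stand-in for noise reduction).
    spectrum = spectrum - spectrum.mean()
    # Apply a window to suppress FFT side lobes.
    windowed = spectrum * np.hanning(spectrum.size)
    # The magnitude of the Fourier transform gives reflectivity vs. depth;
    # only the first half is kept because the input spectrum is real-valued.
    return np.abs(np.fft.fft(windowed))[: spectrum.size // 2]

# Synthetic spectrum: a cosine fringe corresponds to a reflector at one depth.
k = np.arange(1024)
spec = 1.0 + 0.5 * np.cos(2 * np.pi * 100 * k / 1024)
profile = a_line_from_spectrum(spec)
print(int(np.argmax(profile)))  # peak near depth bin 100
```

A higher fringe frequency in the detected spectrum corresponds to a reflector at a greater depth, which is why the FFT bin index serves as a depth coordinate.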
  • The image forming unit 220 includes, for example, the circuit board described above. In this specification, “image data” and the “image” based on it may be used interchangeably.
  • the data processing unit 230 executes various data processing. For example, the data processing unit 230 performs various types of image processing and analysis processing on the image formed by the image forming unit 220. As a specific example, the data processing unit 230 executes correction processing such as image luminance correction. Further, the data processing unit 230 performs various types of image processing and analysis processing on the image (fundus image, anterior eye image, etc.) obtained by the fundus camera unit 2.
  • the data processing unit 230 executes known image processing such as interpolation processing for interpolating pixels between cross-sectional images to form image data of a three-dimensional image of the fundus oculi Ef.
  • image data of a three-dimensional image means image data in which pixel positions are defined by a three-dimensional coordinate system.
  • As an example of image data of a three-dimensional image, there is image data composed of three-dimensionally arranged voxels. Such image data is called volume data or voxel data.
  • When displaying an image based on volume data, the data processing unit 230 performs a rendering process (such as volume rendering or MIP (Maximum Intensity Projection)) on the volume data to form image data of a pseudo three-dimensional image viewed from a specific line-of-sight direction. This pseudo three-dimensional image is displayed on a display device such as the display unit 241.
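  • As a minimal illustration of the MIP rendering mentioned above (not the apparatus's actual renderer), a volume can be collapsed to a pseudo three-dimensional view by keeping the brightest voxel along each ray of an axis-aligned line of sight:

```python
# Hypothetical sketch of MIP (Maximum Intensity Projection): project a voxel
# volume along the viewing axis by keeping the brightest voxel on each ray.
import numpy as np

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Collapse a 3-D volume to a 2-D view along `axis`."""
    return volume.max(axis=axis)

vol = np.zeros((4, 3, 3))
vol[2, 1, 1] = 7.0          # one bright voxel
image = mip(vol, axis=0)
print(image[1, 1])          # 7.0 — the bright voxel survives the projection
```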
  • the data processing unit 230 can form a two-dimensional cross-sectional image from the volume data.
  • As image processing for that purpose, there is multi-planar reconstruction (MPR).
  • MPR forms a two-dimensional cross-sectional image of the cross section based on the voxel group located in the cross section set for the volume data.
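  • For an axis-aligned cross section, the MPR operation described above amounts to extracting the voxels lying in the set plane. The sketch below is illustrative only (oblique planes would additionally require interpolation):

```python
# Hypothetical sketch of multi-planar reconstruction (MPR) for an
# axis-aligned cross section of a volume.
import numpy as np

def mpr_slice(volume: np.ndarray, axis: int, index: int) -> np.ndarray:
    """Return the 2-D cross-sectional image at `index` along `axis`."""
    return np.take(volume, index, axis=axis)

vol = np.arange(2 * 3 * 4).reshape(2, 3, 4)
section = mpr_slice(vol, axis=0, index=1)
print(section.shape)  # (3, 4)
```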
  • Stack data of a plurality of cross-sectional images is image data obtained by three-dimensionally arranging a plurality of cross-sectional images obtained along a plurality of scanning lines, based on the positional relationship of the scanning lines. That is, stack data is image data obtained by expressing a plurality of cross-sectional images, originally defined by individual two-dimensional coordinate systems, in one three-dimensional coordinate system (that is, by embedding them in one three-dimensional space).
  • the data processing unit 230 includes an image dividing unit 231 and a display condition setting unit 232.
  • the image dividing unit 231 divides the OCT image of the eye E to be examined into a plurality of partial areas.
  • the image dividing unit 231 functions as a “dividing unit”. In this embodiment, the case where the image dividing unit 231 divides the two-dimensional image (cross-sectional image) of the eye E into a plurality of two-dimensional partial regions will be described. Examples of the two-dimensional image include a cross-sectional image formed by the image forming unit 220 and a cross-sectional image (MPR image) based on the three-dimensional image.
  • In that case, a functional unit that forms a three-dimensional image (three-dimensional image forming unit) and a functional unit that forms a two-dimensional image from the three-dimensional image (two-dimensional image forming unit), together with the image forming unit 220, function as the “forming unit”.
  • The case where the image dividing unit 231 divides a three-dimensional image will be described later as a modified example.
  • “Dividing an image” means classifying the image into a plurality of partial areas, that is, dividing the set of pixels constituting the image into a plurality of subsets (or, equivalently, assigning to each pixel constituting the image a label corresponding to one of the partial regions).
  • the plurality of pixels to be subjected to the classification process may be all the pixels constituting the image or some of the pixels.
  • The process of “dividing the image” may also include a process of extracting the partial areas obtained by the classification process from the original image. In this case, in the subsequent display process, the plurality of extracted partial areas are combined again to form a display image.
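  • The labeling interpretation of “dividing an image” can be sketched as follows. The row-range boundaries used here are illustrative assumptions; the apparatus uses segmented layer boundaries instead:

```python
# Hypothetical sketch of "dividing an image": assign each pixel a label
# identifying the partial region it belongs to, using horizontal row
# boundaries as stand-ins for segmented layer boundaries.
import numpy as np

def divide_by_boundaries(shape, boundaries):
    """Return a label map: pixels above boundaries[0] get label 0, pixels
    between boundaries[0] and boundaries[1] get label 1, and so on."""
    labels = np.zeros(shape, dtype=int)
    for b in boundaries:
        labels[b:, :] += 1   # everything below boundary b moves up one label
    return labels

labels = divide_by_boundaries((8, 4), boundaries=[2, 5])
print(labels[0, 0], labels[3, 0], labels[6, 0])  # 0 1 2
```

Extracting one partial region then amounts to masking the image with `labels == k`, matching the extraction process described above.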
  • the image dividing unit 231 includes an image area specifying unit 2311 and a partial area specifying unit 2312.
  • the image region specifying unit 2311 specifies a one-dimensional image region corresponding to a predetermined part of the eye E by analyzing a two-dimensional image of the eye E.
  • This predetermined part may be an arbitrary part of the eye E to be examined.
  • For example, the predetermined part is a layer of the retina (retinal pigment epithelium layer, photoreceptor layer, outer limiting membrane, outer nuclear layer, outer plexiform layer, inner nuclear layer, inner plexiform layer, ganglion cell layer, nerve fiber layer, inner limiting membrane) or a boundary thereof, Bruch's membrane, a choroidal boundary, a scleral boundary, or a vitreous boundary.
  • the predetermined part may be a boundary of a lesioned part.
  • The predetermined part may also be the boundary of the cornea, a layer of the cornea (corneal epithelium, Bowman's membrane, corneal stroma, Descemet's membrane, corneal endothelium) or its boundary, the boundary of the crystalline lens, or the boundary of the iris.
  • The predetermined part is not limited to an anatomical tissue or a lesioned part; the concept may also include a region obtained by analyzing an image of the eye to be examined, a region manually set with reference to such an image, and the like. Examples of regions obtained by analyzing images include image regions (layer regions, layer boundary regions, etc.) obtained by the segmentation described below.
  • the image region specifying unit 2311 specifies a plurality of pixels included in the image region corresponding to the predetermined part based on the pixel value of the two-dimensional image of the eye E.
  • the image area specifying unit 2311 obtains an approximate curve based on the specified plurality of pixels.
  • This approximate curve can be obtained by an arbitrary method, and examples thereof include a linear approximate curve, a logarithmic approximate curve, a polynomial approximate curve, a power approximate curve, an exponential approximate curve, and a moving average approximate curve.
  • the approximate curve acquired in this way can be used as a one-dimensional image region corresponding to a predetermined part of the eye E to be examined. Such processing is called “segmentation” or the like.
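  • The segmentation described above (pixel selection by pixel value, then an approximate curve through the selected pixels) can be sketched as below. The threshold, the first-bright-pixel criterion, and the polynomial degree are illustrative assumptions, not the apparatus's actual criteria:

```python
# Hypothetical sketch of segmentation: find boundary pixels by a pixel-value
# criterion, then fit a polynomial approximate curve through them to obtain
# a smooth one-dimensional image region.
import numpy as np

def boundary_curve(image: np.ndarray, threshold: float, degree: int = 3):
    """For each column, take the first row whose value exceeds `threshold`,
    then fit a polynomial through the resulting (column, row) points."""
    cols, rows = [], []
    for x in range(image.shape[1]):
        hits = np.where(image[:, x] > threshold)[0]
        if hits.size:
            cols.append(x)
            rows.append(hits[0])
    coeffs = np.polyfit(cols, rows, degree)
    # Evaluate the fitted curve at every column of the image.
    return np.polyval(coeffs, np.arange(image.shape[1]))

# Synthetic B-scan: a bright band whose top edge sits at row 10.
img = np.zeros((32, 16))
img[10:14, :] = 1.0
curve = boundary_curve(img, threshold=0.5)
print(int(round(curve[8])))  # 10
```

Any of the other approximation types listed above (logarithmic, power, exponential, moving average) could be substituted for the polynomial fit.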
  • the number of predetermined parts specified by the image area specifying unit 2311 is arbitrary.
  • One or more predetermined parts to be specified are set in advance.
  • As setting methods, there are a default setting, a selective setting according to the analysis method or the observation site, an arbitrary setting by the user, and the like.
  • the number of partial areas obtained by the image dividing unit 231 is arbitrary.
  • the partial region specifying unit 2312 specifies a two-dimensional image region having the one-dimensional image region specified by the image region specifying unit 2311 as a boundary as a partial region that is a specific target by the image dividing unit 231.
  • Each partial area may be a predetermined tissue of the eye E or a part thereof, or may be two or more tissues or a part thereof.
  • Each partial area may be a lesion or a part thereof.
  • The image area specifying unit 2311 analyzes the cross-sectional image G to specify a one-dimensional image region (retina-vitreous boundary region) g1 corresponding to the boundary between the retina and the vitreous body (inner limiting membrane), a one-dimensional image region (retina-choroid boundary region) g2 corresponding to the boundary between the retina and the choroid (Bruch's membrane), and a one-dimensional image region (choroid-sclera boundary region) g3 corresponding to the boundary between the choroid and the sclera.
  • The partial region specifying unit 2312 specifies the vitreous region G1 bounded by the retina-vitreous boundary region g1, the retinal region G2 bounded by the retina-vitreous boundary region g1 and the retina-choroid boundary region g2, the choroidal region G3 bounded by the retina-choroid boundary region g2 and the choroid-sclera boundary region g3, and the scleral region G4 bounded by the choroid-sclera boundary region g3.
  • the display condition setting unit 232 sets the display conditions for each of the partial areas acquired by the image dividing unit 231.
  • the display condition setting unit 232 functions as a “setting unit”.
  • the display condition is a condition applied to display the OCT image of the eye E.
  • the display condition includes a parameter for changing the pixel value of the OCT image.
  • This parameter may include, for example, at least one of the following three parameters: (1) a parameter for changing the pixel value of the OCT image to a pseudo color value (pseudo color parameter); (2) OCT Parameters for changing the luminance values in the pixel values of the image (luminance parameters); (3) Parameters for changing the contrast based on the luminance values in the pixel values of the OCT image (contrast parameters).
  • the display in the pseudo color is a display method in which the tissue is expressed by an arbitrarily assigned hue instead of the actual color of the tissue of the eye E.
  • the luminance of the OCT image (grayscale image) is expressed in a predetermined gradation range (for example, 256 gradations from 0 to 255).
  • the luminance is one of the components of the color space, and is a parameter that expresses the color together with the two color difference components.
  • the contrast indicates a difference between the minimum value and the maximum value of the luminance in a predetermined image area (entire or part of the image).
  • the display condition setting unit 232 includes a correspondence information storage unit 2321, a part selection unit 2322, and a parameter identification unit 2323.
  • In the correspondence information storage unit 2321, correspondence information is stored in advance.
  • the correspondence information is information in which the value of the parameter is associated with each of a plurality of parts of the eye.
  • the correspondence information storage unit 2321 may be configured as a part of the storage unit 212.
  • The correspondence information 2321a is table information that associates eye parts with display conditions. That is, the correspondence information 2321a is a lookup table in which display parameters for each eye part are defined. The eye part item includes the retina, the choroid, the sclera, and the vitreous body.
  • the display condition item includes a pseudo color value, a luminance value, and a contrast.
  • the pseudo color value indicates a coordinate value in a predetermined color space (for example, RGB color system), for example.
  • As pseudo color values, the value “A1” is associated with the retina, “A2” with the choroid, “A3” with the sclera, and “A4” with the vitreous body.
  • each of the values A1 to A4 is set according to the gradation of the pixel value of the original OCT image. That is, each of the values A1 to A4 is not a single value, but associates the gradation range of the pixel value with the gradation range of the pseudo color display color.
  • the luminance value indicates a value in a predetermined gradation range, for example.
  • As luminance values, the value “B1” is associated with the retina, “B2” with the choroid, “B3” with the sclera, and “B4” with the vitreous body.
  • However, each of the values B1 to B4 is not a single value but associates the gradation range of the pixel values of the original OCT image with a gradation range of display luminance.
  • at least one of the values B1 to B4 may correspond to a change amount of zero with respect to the luminance value of the original OCT image. That is, the display condition (luminance value) may not be changed for a certain partial area.
  • Contrast indicates a value in a predetermined range, for example.
  • As contrast values, the value “C1” is associated with the retina, “C2” with the choroid, “C3” with the sclera, and “C4” with the vitreous body.
  • at least one of the values C1 to C4 may correspond to a change amount of zero with respect to the contrast of the original OCT image. That is, the display condition (contrast) may not be changed for a certain partial area.
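  • The correspondence information 2321a described above can be sketched as a simple lookup table. The keys and the symbolic values A1–C4 below simply mirror the table entries named in the text; concrete numeric parameters would replace them in practice:

```python
# Hypothetical sketch of the correspondence information: a lookup table
# mapping each eye part to its display parameters, plus the lookup step
# performed by the parameter specifying unit.
correspondence = {
    "retina":   {"pseudo_color": "A1", "luminance": "B1", "contrast": "C1"},
    "choroid":  {"pseudo_color": "A2", "luminance": "B2", "contrast": "C2"},
    "sclera":   {"pseudo_color": "A3", "luminance": "B3", "contrast": "C3"},
    "vitreous": {"pseudo_color": "A4", "luminance": "B4", "contrast": "C4"},
}

def display_condition(part: str) -> dict:
    """Look up the display parameters associated with an eye part."""
    return correspondence[part]

print(display_condition("choroid")["luminance"])  # B2
```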
  • Part selection unit: For each of the plurality of partial regions acquired by the image dividing unit 231, the part selection unit 2322 selects the eye part corresponding to that partial region from among the plurality of parts included in the eye part item of the correspondence information 2321a. The part selection unit 2322 selects the eye part by analyzing the OCT image or according to the content of an instruction from the user.
  • an example of selecting an eye part by analyzing an OCT image will be described.
  • By using the result of the segmentation described above, the part corresponding to a specified image region can be recognized.
  • The part selection unit 2322 analyzes each partial area of the OCT image and can specify the eye part corresponding to the partial area based on information indicating how eye parts are depicted, such as the size, the outline shape, the shape of characteristic parts, the distribution of pixel values, or the positional relationship with other partial areas (parts specified by segmentation, etc.). The information indicating how eye parts are depicted is created in advance, for example by analyzing a plurality of OCT images, and is stored in the part selection unit 2322 (or the storage unit 212).
  • the main control unit 211 causes the display unit 241 to display the OCT image. At this time, the main control unit 211 can display the OCT image so as to clearly indicate the image region specified by the image region specifying unit 2311 or the partial region specified by the partial region specifying unit 2312.
  • the user designates a partial area of the displayed OCT image and designates identification information (part name, etc.) of the eye part corresponding to the partial area.
  • the specification of the identification information of the eye part is executed by, for example, displaying a list of identification information of the eye part by the main control unit 211 and selecting desired identification information from the list via the operation unit 242. .
  • the part selection unit 2322 selects the part of the eye corresponding to the partial area by associating the partial area specified by the user with the identification information specified corresponding to the partial area.
  • the parameter specifying unit 2323 specifies, for each partial area acquired by the image dividing unit 231, the parameter value associated with the part selected by the part selecting unit 2322 based on the correspondence information 2321 a.
  • the correspondence information 2321a is information in which a plurality of parts of the eye are associated with parameter values.
  • the parameter specifying process includes a process of searching for the value of the parameter associated with the part selected by the part selecting unit 2322 from the correspondence information 2321a.
  • Assume that the vitreous region G1, the retinal region G2, the choroidal region G3, and the scleral region G4 shown in FIG. 5B have been obtained by the image dividing unit 231, and that the eye parts corresponding to the image regions G1 to G4 have been selected by the part selection unit 2322.
  • In this case, the parameter specifying unit 2323 obtains the following parameter values for the image regions G1 to G4: for the vitreous region G1, pseudo color parameter A4, luminance parameter B4, and contrast parameter C4; for the retinal region G2, pseudo color parameter A1, luminance parameter B1, and contrast parameter C1; for the choroidal region G3, pseudo color parameter A2, luminance parameter B2, and contrast parameter C2; and for the scleral region G4, pseudo color parameter A3, luminance parameter B3, and contrast parameter C3.
  • the value of the parameter specified in this way is set as a display condition.
  • the data processing unit 230 that functions as described above includes, for example, the aforementioned microprocessor, RAM, ROM, hard disk drive, circuit board, and the like.
  • In a storage device such as a hard disk drive, a computer program for causing the microprocessor to execute the above functions is stored in advance.
  • the user interface 240 includes a display unit 241 and an operation unit 242.
  • the display unit 241 includes the display device of the arithmetic control unit 200 and the display device 3 described above.
  • the operation unit 242 includes the operation device of the arithmetic control unit 200 described above.
  • the operation unit 242 may include various buttons and keys provided on the housing of the ophthalmologic photographing apparatus 1 or outside.
  • the operation unit 242 may include a joystick, an operation panel, or the like provided on the housing.
  • the display unit 241 may include various display devices such as a touch panel provided in the housing of the fundus camera unit 2.
  • the display unit 241 and the operation unit 242 do not need to be configured as individual devices.
  • A device in which a display function and an operation function are integrated, such as a touch panel, can also be used. In that case, the operation unit 242 includes the touch panel and a computer program.
  • the operation content for the operation unit 242 is input to the control unit 210 as an electrical signal. Further, operations and information input may be performed using the graphical user interface (GUI) displayed on the display unit 241 and the operation unit 242.
  • the scanning mode of the signal light LS by the ophthalmologic photographing apparatus 1 includes, for example, a line scan (horizontal scan, vertical scan), a cross scan, a radial scan, a circular scan, a concentric scan, and a spiral (spiral) scan. These scanning modes are selectively used as appropriate in consideration of the observation site of the fundus, the analysis target (such as retinal thickness), the time required for scanning, the precision of scanning, and the like.
  • the horizontal scan is to scan the signal light LS in the horizontal direction (x direction).
  • the horizontal scan also includes an aspect in which the signal light LS is scanned along a plurality of horizontal scanning lines arranged in the vertical direction (y direction). In this aspect, it is possible to arbitrarily set the scanning line interval. Further, the above-described three-dimensional image can be formed by sufficiently narrowing the interval between adjacent scanning lines (three-dimensional scanning). The same applies to the vertical scan.
  • the cross scan scans the signal light LS along a cross-shaped trajectory composed of two linear trajectories (straight trajectories) orthogonal to each other.
  • In the radial scan, the signal light LS is scanned along a radial trajectory composed of a plurality of linear trajectories arranged at predetermined angles.
  • The cross scan is an example of a radial scan.
  • the circle scan scans the signal light LS along a circular locus.
  • In the concentric scan, the signal light LS is scanned along a plurality of circular trajectories arranged concentrically around a predetermined center position.
  • a circle scan is an example of a concentric scan.
  • In the spiral scan, the signal light LS is scanned along a spiral trajectory while the rotation radius is gradually reduced (or increased).
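  • The scan modes listed above reduce to generating sequences of (x, y) deflection points for the two-axis galvano scanner. The following generators are a rough sketch under assumed parameterizations (sample counts, normalized coordinates), not the apparatus's drive waveforms:

```python
# Hypothetical sketch: (x, y) sample points for a few of the scan modes.
import numpy as np

def line_scan(n, length=1.0):
    """Horizontal line scan along the x direction."""
    x = np.linspace(-length / 2, length / 2, n)
    return x, np.zeros(n)

def circle_scan(n, radius=1.0):
    """Circular trajectory of the given radius."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return radius * np.cos(t), radius * np.sin(t)

def spiral_scan(n, turns=5, max_radius=1.0):
    """Spiral trajectory whose rotation radius gradually increases."""
    t = np.linspace(0, 2 * np.pi * turns, n)
    r = max_radius * t / t[-1]          # radius grows with angle
    return r * np.cos(t), r * np.sin(t)

x, y = circle_scan(360)
print(round(float(np.hypot(x, y).max()), 6))  # 1.0 — all points on the circle
```

A radial scan would simply repeat `line_scan` under successive rotations, and a concentric scan would repeat `circle_scan` with decreasing radii.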
  • Because the galvano scanner 42 is configured to scan the signal light LS in mutually orthogonal directions, the signal light LS can be scanned independently in the x direction and the y direction. Further, by simultaneously controlling the directions of the two galvanometer mirrors included in the galvano scanner 42, the signal light LS can be scanned along an arbitrary locus on the xy plane. Thereby, various scanning modes as described above can be realized.
  • By performing scanning as described above, a cross-sectional image in the plane spanned by the direction along the scanning line (scanning locus) and the fundus depth direction (z direction) can be acquired.
  • the above-described three-dimensional image can be acquired particularly when the scanning line interval is narrow.
  • the region on the fundus oculi Ef to be scanned with the signal light LS as described above, that is, the region on the fundus oculi Ef to be subjected to OCT measurement is called a scanning region.
  • the scanning area in the three-dimensional scan is a rectangular area in which a plurality of horizontal scans are arranged.
  • the scanning area in the concentric scan is a disk-shaped area surrounded by the locus of the circular scan with the maximum diameter.
  • the scanning area in the radial scan is a disk-shaped (or polygonal) area connecting both end positions of each scan line.
  • FIG. 7 shows an example of the operation of the ophthalmologic photographing apparatus 1.
  • the main control unit 211 causes the display unit 241 to display a screen for selecting a shooting mode.
  • This screen is provided with a GUI for selecting a plurality of shooting modes.
  • The imaging mode is provided, for example, for each part (particularly each part to be observed), for each disease, and/or for each examination technique.
  • As a case where an imaging mode is provided for each part of the eye, the case where a vitreous imaging mode, a retinal imaging mode, and a choroidal imaging mode are provided will be described.
  • the vitreous body, the retina, and the choroid differ in the position in the depth direction (z direction).
  • A near-infrared moving image of the fundus oculi Ef is acquired by continuously illuminating the fundus oculi Ef with illumination light from the observation light source 11 (converted to near-infrared light by the visible cut filter 14). This near-infrared moving image is obtained in real time until the continuous illumination ends.
  • An alignment index, a split index, and a fixation target are projected onto the eye E to be examined.
  • An alignment index and a split index are drawn on the near-infrared moving image. Using these indexes, auto alignment and auto focus of an optical system for acquiring a fundus image are performed.
  • When these are completed, the process shifts to autofocus of the optical system for acquiring an OCT image.
  • This autofocus is executed as follows, for example.
  • First, a line scan is repeatedly executed, whereby cross-sectional images of substantially the same cross section of the fundus oculi Ef are sequentially obtained.
  • the data processing unit 230 obtains an image quality evaluation value by analyzing an image region in the cross-sectional image corresponding to the imaging mode selected in step S2.
  • When the vitreous imaging mode is selected, the vitreous region G1 of FIG. 5B is analyzed; when the retinal imaging mode is selected, the retinal region G2 is analyzed; and when the choroidal imaging mode is selected, the choroidal region G3 is analyzed.
  • the image quality evaluation method is arbitrary.
  • a histogram can be created based on the pixel values of the target image area (or the OCT signal before image formation), and the image quality evaluation value can be obtained based on the histogram.
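The patent leaves the evaluation method open ("the image quality evaluation method is arbitrary"), so the following sketch illustrates only one simple possibility, with assumed names and an assumed [0, 1] intensity range: score a sub-region by the spread of its intensity histogram, since an in-focus region shows stronger contrast between bright tissue and dark background, widening the distribution.

```python
import numpy as np

def quality_score(region, bins=256):
    """Hypothetical histogram-based image quality score for an OCT sub-region.

    Builds an intensity histogram of the region (pixel values assumed in
    [0, 1]) and returns the histogram's standard deviation: an in-focus
    region shows stronger contrast, so its distribution is wider.
    """
    hist, edges = np.histogram(np.ravel(region), bins=bins, range=(0.0, 1.0))
    centers = (edges[:-1] + edges[1:]) / 2.0
    total = hist.sum()
    if total == 0:
        return 0.0
    mean = (hist * centers).sum() / total
    var = (hist * (centers - mean) ** 2).sum() / total
    return float(np.sqrt(var))
```

The autofocus loop would then adjust the focusing lens so that this score, recomputed for each new cross-sectional image, is maximized.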
  • the main control unit 211 adjusts the position of the focusing lens 43 so that the evaluation value of the image quality obtained sequentially is optimized.
  • as another autofocus method for the optical system for acquiring an OCT image, there is a method of applying a predetermined focus state corresponding to the imaging mode (eye part).
  • the control state of the focusing lens 43 for each eye part (for each imaging mode) is stored in advance, and the focusing lens 43 can be controlled to a state corresponding to the selection result of the imaging mode.
  • for example, as the focus state in the vitreous imaging mode, a state defocused by 4 diopters from the focus state in the retinal imaging mode can be applied.
  • the main control unit 211 continuously executes sequential acquisition of cross-sectional images of the same cross-section and sequential calculation of image quality evaluation values. Further, the main control unit 211 causes the display unit 241 to display cross-sectional images and evaluation values acquired sequentially. At this time, the cross-sectional image is displayed as a moving image, and the evaluation value is switched and displayed in synchronization with the frame rate of the cross-sectional image. As a display mode of the evaluation value, there are a bar graph display and a numerical display.
  • the user adjusts the position of the focusing lens 43 while referring to the moving image and the evaluation value displayed on the display unit 241. This manual adjustment is performed via the operation unit 242.
  • the display unit 241 is a touch panel, a configuration in which the main control unit 211 moves the focusing lens 43 so as to focus on a position in the cross-sectional image touched by the user can be applied.
  • Step S5: Taking an image of the eye to be examined
  • the main control unit 211 causes the ophthalmologic imaging apparatus 1 to perform OCT measurement in the focus state adjusted in step S4.
  • the scanning mode (scan pattern) at this time is determined according to, for example, the photographing mode set in step S2.
  • the main control unit 211 controls the fundus camera unit 2 to execute fundus photographing.
  • the acquired OCT image and fundus image are stored in the storage unit 212.
  • the OCT image is sent to the data processing unit 230.
  • the image region specifying unit 2311 specifies a one-dimensional image region corresponding to a predetermined part of the eye E by executing segmentation of the OCT image.
  • the retina-vitreous boundary region g1, the retina-choroidal boundary region g2, and the choroid-sclera boundary region g3 in the cross-sectional image G are specified.
  • the partial area specifying unit 2312 specifies a two-dimensional image area having the one-dimensional image area specified in step S6 as a boundary. Thereby, a partial region which is a specific target by the image dividing unit 231 is obtained.
  • FIG. 5B it is assumed that the vitreous region G1, the retina region G2, the choroid region G3, and the sclera region G4 are obtained.
  • the part selection unit 2322 selects, for each partial area acquired in step S7, an eye part corresponding to the partial area from among a plurality of parts included in the eye part item of the correspondence information 2321a.
  • “vitreous” is selected for the vitreous region G1
  • “retinal” is selected for the retinal region G2
  • “choroid” is selected for the choroid region G3
  • “sclera” is selected for the scleral region G4.
  • the parameter specifying unit 2323 specifies the value of the parameter associated with the part selected in step S8 for each partial region acquired in step S7 based on the correspondence information 2321a.
  • the parameter specifying unit 2323 may be configured to specify only display conditions (parameters) according to, for example, a shooting mode or a user instruction.
  • a case where only the parameters for pseudo color display are applied will be described.
  • assume that the parameter values for pseudo color display are specified as “A4” for the vitreous region G1, “A1” for the retinal region G2, “A2” for the choroid region G3, and “A3” for the sclera region G4.
  • the main control unit 211 causes the display unit 241 to display the OCT image by applying the display condition specified in step S9.
  • the cross-sectional image G shown in FIG. 5A is displayed in pseudo color.
  • the vitreous body region G1 is displayed with the first display color of the gradation “A4”
  • the retinal region G2 is displayed with the second display color of the gradation “A1”
  • the choroid region G3 is displayed with the third display color of the gradation “A2”, and the sclera region G4 is displayed with the fourth display color of the gradation “A3”.
  • the first to fourth display colors may all be different or a part of them may be the same.
  • the first to fourth display colors may all be the same. The same applies to the case where arbitrary display conditions are applied.
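The per-region pseudo color display of step S10 can be sketched as follows, assuming NumPy, a per-pixel label map produced by the segmentation, and a toy gradation table standing in for the gradations “A1”–“A4” (the actual color values are not specified in the text; the names here are illustrative).

```python
import numpy as np

# Hypothetical gradation table: one base RGB color per eye part, standing in
# for the gradations "A1"-"A4" of the correspondence information.
GRADATIONS = {
    "vitreous": (0.2, 0.4, 1.0),   # "A4"
    "retina":   (1.0, 0.3, 0.3),   # "A1"
    "choroid":  (0.3, 1.0, 0.3),   # "A2"
    "sclera":   (1.0, 1.0, 0.3),   # "A3"
}

def pseudo_color(image, labels):
    """Tint each partial region of a grayscale OCT image with its own color.

    image  : 2-D float array in [0, 1]
    labels : 2-D array of region names (same shape), e.g. "retina"
    """
    rgb = np.zeros(image.shape + (3,))
    for part, color in GRADATIONS.items():
        mask = labels == part
        # Scale the base color by the original intensity so the tissue
        # texture stays visible inside each colored region.
        rgb[mask] = image[mask][:, None] * np.asarray(color)
    return rgb
```

Because the tint is applied per label, choosing identical colors for some or all parts (as the text allows) only requires changing the table entries.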
  • the ophthalmologic photographing apparatus includes an optical system, a forming unit, a dividing unit, a setting unit, and a display control unit.
  • the optical system divides the light from the light source (light source unit 101) into signal light (LS) and reference light (LR), and the signal light passing through the eye to be examined (E) and the reference light passing through the reference light path. Interference light (LC) is detected.
  • the optical system includes an element stored in the OCT unit 100 and an element that forms an optical path of signal light among the elements stored in the fundus camera unit 2.
  • the forming unit forms an image of the eye to be examined based on the detection result of the interference light by the optical system.
  • the forming unit includes an image forming unit 220 and may include a data processing unit 230.
  • the dividing unit (image dividing unit 231) divides the image (G) formed by the forming unit into a plurality of partial regions (G1 to G4).
  • the setting unit sets display conditions for each of the plurality of partial areas acquired by the dividing unit.
  • the display control unit (main control unit 211) causes the display means (display unit 241) to display the image formed by the forming unit based on the display conditions set by the setting unit.
  • the display means may be included in the ophthalmologic photographing apparatus or an external device.
  • according to such an ophthalmologic photographing apparatus, each partial region of the OCT image (including image regions corresponding to unfocused regions) can be displayed under appropriate display conditions. Therefore, it is possible to improve visibility over the entire OCT image, not just the focused portion. Thereby, not only local observation of the eye to be examined but also global observation can be suitably performed.
  • the forming unit forms a two-dimensional image (two-dimensional cross-sectional image).
  • the dividing unit may include an image region specifying unit and a partial region specifying unit.
  • the image region specifying unit (2311) specifies the one-dimensional image region (g1 to g3) corresponding to the predetermined part of the eye to be examined by analyzing the two-dimensional image formed by the forming unit.
  • the partial region specifying unit (2312) specifies two-dimensional image regions (G1 to G4) having the one-dimensional image region specified by the image region specifying unit as a boundary. These two-dimensional image regions are used as partial regions to be acquired by the dividing unit.
  • the two-dimensional cross-sectional image may be any of a vertical cross-sectional image (B cross-sectional image, B-scan image), a horizontal cross-sectional image (front image, C cross-sectional image, C-scan image), and an arbitrary cross-sectional image (MPR image, etc.).
  • the image region specifying unit can specify the inner boundary membrane region (g1) corresponding to the inner boundary membrane of the eye to be examined.
  • the partial region specifying unit can specify the retina region (G2) corresponding to the retina of the eye to be examined and the vitreous region (G1) corresponding to the vitreous based on the result of specifying the inner boundary membrane region.
  • the image area specifying unit can specify the Bruch's membrane region (g2) corresponding to the Bruch's membrane of the eye to be examined.
  • the partial region specifying unit can specify the retina region (G2) corresponding to the retina of the eye to be examined and the choroid region (G3) corresponding to the choroid based on the result of specifying the Bruch's membrane region.
  • the image region specifying unit can specify a choroid-sclera boundary region corresponding to the choroid-sclera boundary of the eye to be examined.
  • the partial region specifying unit determines a choroid region (G3) corresponding to the choroid of the eye to be examined and a sclera region (G4) corresponding to the sclera based on the result of specifying the choroid-sclera boundary region. Can be identified.
  • the vitreous region (G1), retinal region (G2), choroid region (G3), and sclera region (G4) are typical observation targets in the diagnosis of the posterior segment. Therefore, these structures contribute to the diagnosis of the posterior eye segment.
  • the image area specified by the image area specifying unit from the two-dimensional image need not be a one-dimensional image area.
  • the image region specifying unit can specify a two-dimensional image region corresponding to a predetermined part of the eye to be examined in the two-dimensional image.
  • This two-dimensional image region may be a predetermined part having a free curve as an outline, for example.
  • the setting unit can set a parameter value for changing the pixel values of a plurality of pixels included in the partial area as a display condition for each partial area acquired by the dividing unit.
  • This parameter may include any of the following: (1) a first parameter for changing the pixel values of the plurality of pixels to pseudo color values; (2) a second parameter for changing the luminance value among the pixel values of the plurality of pixels; (3) a third parameter for changing the contrast based on the luminance values of the plurality of pixels.
  • the parameters set as display conditions are not limited to the above.
  • the setting unit can set parameters relating to arbitrary filtering, such as smoothing processing (smoothing) and enhancement processing (enhancement), as display conditions.
  • Smoothing processing is image processing for smoothing pixel values of pixels in a certain image area, and is used for noise removal in the image area.
  • the smoothing process is executed using, for example, a moving average filter or a Gaussian filter.
  • the setting unit sets the parameters applied in the smoothing process (for example, the kernel size of the moving average filter or Gaussian filter, the weight values set in the kernel, and the like) as display conditions.
  • Such a parameter relating to the smoothing process corresponds to a “fourth parameter”.
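As a hedged illustration of how the kernel-size parameter (the "fourth parameter") governs the smoothing, here is a minimal moving average filter in NumPy; the function name and edge-padding choice are assumptions, not part of the patent.

```python
import numpy as np

def moving_average_smooth(image, kernel_size=3):
    """Box (moving average) smoothing of a 2-D image.

    `kernel_size` plays the role of the per-region display-condition
    parameter ("fourth parameter"): a larger kernel smooths (and thus
    denoises) more strongly.
    """
    k = kernel_size
    kernel = np.ones((k, k)) / (k * k)       # uniform weights
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")  # replicate border pixels
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = (padded[i:i + k, j:j + k] * kernel).sum()
    return out
```

A Gaussian filter differs only in the weight values set in the kernel; the same per-region parameterization applies.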
  • the enhancement process is an image process that emphasizes part or all of a certain image area, and is used to make the target area clear.
  • As the enhancement process there is an outline enhancement process (unsharp masking) for enhancing the outline of an image region.
  • the contour enhancement process is performed, for example, by performing a smoothing process on the original image data, obtaining a difference between the original image data and the smoothed image data, and combining the difference with the original image data.
  • as parameters for the contour enhancement process, parameters similar to those of the smoothing process, Laplacian filter parameters, and the like can be considered.
  • the setting unit sets the above parameters applied in the smoothing process, the kernel size of the Laplacian filter, the weight values set in the kernel, and the like as display conditions.
  • Such a parameter relating to enhancement processing corresponds to a “fifth parameter”.
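The unsharp masking procedure described above (smooth the original, take the difference, combine it with the original) can be sketched directly; the `amount` parameter is an illustrative addition standing in for the "fifth parameter" controlling enhancement strength.

```python
import numpy as np

def unsharp_mask(image, kernel_size=3, amount=1.0):
    """Contour enhancement as described in the text:
    (1) smooth the original, (2) take the difference between the original
    and the smoothed image, (3) combine the difference with the original.
    """
    k = kernel_size
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    blurred = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            blurred[i, j] = padded[i:i + k, j:j + k].mean()
    detail = image - blurred          # difference = high-frequency content
    return image + amount * detail    # combine with the original
```

At a step edge the output overshoots on both sides, which is exactly the perceived sharpening effect of unsharp masking.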
  • the setting unit may include a correspondence information storage unit, a part selection unit, and a parameter identification unit.
  • in the correspondence information storage unit (2321), correspondence information (2321a) in which a parameter value is associated with each of a plurality of parts of the eye is stored in advance.
  • the part selection unit (2322) selects a part corresponding to the partial region from the plurality of parts for each partial region acquired by the dividing unit.
  • the parameter specifying unit (2323) specifies the value of the parameter associated with the part selected by the part selecting unit for each partial region based on the correspondence information.
  • the setting unit can set the parameter value specified by the parameter specifying unit as a display condition.
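The lookup chain of the setting unit (part selection, then parameter specification from the correspondence information) can be sketched as follows; the dict literal is a hypothetical stand-in for the correspondence information 2321a, reusing the gradation names “A1”–“A4” from the pseudo color example in the text.

```python
# Hypothetical stand-in for correspondence information 2321a:
# eye part -> parameter values.
CORRESPONDENCE_INFO = {
    "vitreous": {"pseudo_color": "A4"},
    "retina":   {"pseudo_color": "A1"},
    "choroid":  {"pseudo_color": "A2"},
    "sclera":   {"pseudo_color": "A3"},
}

def set_display_conditions(partial_regions, correspondence=CORRESPONDENCE_INFO):
    """For each partial region (identified here simply by its part name),
    select the corresponding eye part and look up its parameter values."""
    conditions = {}
    for region in partial_regions:
        part = region                               # part selection step
        conditions[region] = correspondence[part]   # parameter specification step
    return conditions
```

In the apparatus the "region -> part" step is the nontrivial one (it relies on the segmentation); here it is the identity only to keep the sketch self-contained.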
  • the forming unit includes an image forming unit 220 and a functional unit (three-dimensional image forming unit) that forms a three-dimensional image in the data processing unit 230.
  • the dividing unit includes an image region specifying unit and a partial region specifying unit.
  • the image region specifying unit (2311) specifies a two-dimensional image region corresponding to a predetermined part of the eye to be examined by analyzing the three-dimensional image formed by the forming unit.
  • the predetermined portion may include any of the inner boundary membrane, the Bruch's membrane, and the choroid-sclera boundary.
  • the partial area specifying unit (2312) specifies a three-dimensional image area having the two-dimensional image area specified by the image area specifying unit as a boundary as a partial area.
  • the processing executed by the dividing unit is obtained by extending the processing executed on the two-dimensional image in the above embodiment to a three-dimensional image.
  • the image region specifying unit specifies the inner boundary membrane region corresponding to the inner boundary membrane of the eye to be examined. Further, the partial region specifying unit specifies a retina region corresponding to the retina of the eye to be examined and a vitreous region corresponding to the vitreous body based on the result of specifying the inner boundary membrane region.
  • the image area specifying unit specifies the Bruch's membrane region corresponding to the Bruch's membrane of the eye to be examined. Further, the partial region specifying unit specifies a retina region corresponding to the retina of the eye to be examined and a choroid region corresponding to the choroid based on the result of specifying the Bruch's membrane region.
  • the image region specifying unit specifies the choroid-sclera boundary region corresponding to the choroid-sclera boundary of the eye to be examined. Further, the partial region specifying unit specifies the choroid region corresponding to the choroid and the sclera region corresponding to the sclera of the eye to be examined based on the result of specifying the choroid-sclera boundary region.
  • the image region specified by the image region specifying unit from the three-dimensional image need not be a two-dimensional image region.
  • the image region specifying unit can specify a three-dimensional image region corresponding to a predetermined part of the eye to be examined in the three-dimensional image.
  • This three-dimensional image region may be a predetermined part having a free curved surface as an outline, for example.
  • the setting unit sets display conditions for each of the plurality of partial areas acquired by the dividing unit. This process is the same as in the above embodiment. That is, the setting unit can set a parameter value similar to that in the above embodiment as a display condition. Further, this parameter may include at least one of a first parameter for changing to pseudo color values, a second parameter for changing the luminance value, and a third parameter for changing the contrast.
  • the setting unit may include a correspondence information storage unit (2321), a part selection unit (2322), and a parameter identification unit (2323) similar to those in the above embodiment.
  • the display control unit causes the display unit (display unit 241) to display the image formed by the forming unit based on the display conditions set by the setting unit.
  • This display means may be an external device.
  • the display control unit may include a main control unit 211 and a functional unit (a two-dimensional image forming unit) that forms a two-dimensional image from a three-dimensional image in the data processing unit 230.
  • the two-dimensional image forming unit performs processing for forming an image for display from a three-dimensional image, such as MPR processing and volume rendering.
  • FIG. 8 shows an example of the operation of the ophthalmologic photographing apparatus.
  • In step S24, OCT measurement and fundus imaging of the fundus oculi Ef are executed.
  • three-dimensional scanning is applied in OCT measurement, and volume data is acquired.
  • the image area specifying unit 2311 specifies a two-dimensional image area corresponding to a predetermined part of the eye E by performing segmentation of volume data.
  • the two-dimensional image region is, for example, a retina-vitreous boundary region, a retina-choroid boundary region, a choroid-sclera boundary region, or the like in volume data. These two-dimensional image regions typically have a curved surface shape.
  • the partial area specifying unit 2312 specifies a three-dimensional image area having the two-dimensional image area specified in step S26 as a boundary.
  • the three-dimensional image region is, for example, a vitreous region, a retina region, a choroid region, or a sclera region in volume data.
  • the part selection unit 2322 selects, for each partial area acquired in step S27, an eye part corresponding to the partial area from among a plurality of parts included in the eye part item of the correspondence information 2321a.
  • “vitreous body” is selected as the vitreous region
  • “retinal” is selected as the retina region
  • “choroid” is selected as the choroid region
  • “sclera” is selected as the scleral region.
  • the parameter specifying unit 2323 specifies, for each partial region acquired in step S27, the parameter value associated with the part selected in step S28 based on the correspondence information 2321a. This process is executed, for example, in the same manner as in the first embodiment.
  • the parameter values for pseudo color display are “A4” for the vitreous region, “A1” for the retina region, “A2” for the choroid region, and “A3” for the sclera region. Are identified.
  • the main control unit 211 causes the display unit 241 to display the OCT image by applying the display condition specified in step S29.
  • This processing includes processing for forming a display image by performing MPR processing or volume rendering on the volume data, and processing for displaying the display image by applying the display condition acquired in step S29. It is.
  • for example, the data processing unit 230 performs MPR processing on the volume data to form the cross-sectional image G shown in FIG. 5A, and the main control unit 211 displays the cross-sectional image G in pseudo color. In this pseudo color display, the vitreous body region G1 shown in FIG. 5B is displayed in the first display color of gradation “A4”, the retina region G2 is displayed in the second display color of gradation “A1”, the choroid region G3 is displayed in the third display color of gradation “A2”, and the sclera region G4 is displayed in the fourth display color of gradation “A3”.
  • according to this modification, each partial region of the three-dimensional image (including image regions corresponding to regions that are not in focus) can be displayed under appropriate display conditions. Therefore, it is possible to improve visibility over the entire three-dimensional image, not just the focused portion. Thereby, not only local observation of the eye to be examined but also global observation can be suitably performed.
  • the pixel value of each partial region can be changed to a desired value.
  • the size of the eye to be examined may be measured.
  • the measurement target includes an arbitrary part of the eye to be examined and a lesioned part.
  • the data processing unit 230 can obtain the size of the partial area by analyzing any of the plurality of partial areas to which the display condition set by the display condition setting unit 232 is applied. The portion of the data processing unit 230 that executes this processing corresponds to an “analysis unit”.
  • when the partial area is one-dimensional, its size is a length.
  • when the partial area is two-dimensional, its size is a length (maximum width, minimum width, average width, etc.) or an area.
  • when the partial area is three-dimensional, its size is a length (maximum width, minimum width, average width, etc.), an area, or a volume.
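Such measurements by the analysis unit can be sketched for the two-dimensional case; the boolean region mask as input and the square-pixel scale `pixel_mm` are assumptions for illustration, not details fixed by the text.

```python
import numpy as np

def region_metrics(mask, pixel_mm=0.01):
    """Size of a 2-D partial region from its boolean mask, assuming square
    pixels of side `pixel_mm` millimetres (hypothetical calibration)."""
    mask = np.asarray(mask, dtype=bool)
    area = mask.sum() * pixel_mm ** 2        # area in mm^2
    widths = mask.sum(axis=0) * pixel_mm     # region thickness per column (mm)
    present = widths[widths > 0]
    return {
        "area_mm2": float(area),
        "max_width_mm": float(widths.max()) if present.size else 0.0,
        "mean_width_mm": float(present.mean()) if present.size else 0.0,
    }
```

For volume data the same idea extends to a 3-D mask, with volume obtained as voxel count times voxel volume.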
  • predetermined image processing is applied to the OCT image.
  • a typical example of this image processing is image overlay processing.
  • This process is a process of superimposing a plurality of cross-sectional images acquired for substantially the same cross-section.
  • the image superimposition processing includes, as in the conventional art, alignment processing (registration) of a plurality of cross-sectional images, and synthesis processing that combines the plurality of aligned cross-sectional images to form a single cross-sectional image.
  • the combining process is, for example, an averaging process. When the measurement time is short enough that eye movement can be ignored, registration need not be performed.
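The registration-then-averaging pipeline can be sketched as follows; the patent does not fix a registration algorithm, so this sketch assumes a simple integer axial shift estimated by circular cross-correlation of the brightness profiles, which is one common lightweight choice.

```python
import numpy as np

def superimpose(frames):
    """Image superimposition sketch: register each frame to the first by
    an estimated axial (vertical) shift, then average the aligned frames.

    frames : list of 2-D arrays, repeated scans of the same cross section.
    """
    ref = frames[0].mean(axis=1)              # axial brightness profile
    aligned = []
    for f in frames:
        prof = f.mean(axis=1)
        # Integer shift maximizing circular cross-correlation with ref.
        corr = [np.dot(ref, np.roll(prof, s)) for s in range(len(prof))]
        shift = int(np.argmax(corr))
        aligned.append(np.roll(f, shift, axis=0))
    return np.mean(aligned, axis=0)           # combining = averaging process
```

Averaging N aligned frames suppresses uncorrelated speckle noise roughly by a factor of sqrt(N), which is why the superimposed image looks cleaner than any single scan.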
  • conventionally, such image processing has been applied to the entire cross-sectional image.
  • in this embodiment, it is possible to perform image processing selectively on a part of the cross-sectional image.
  • the image processing can be performed by setting a parameter value for each partial region of the cross-sectional image. This embodiment also includes such a configuration.
  • the ophthalmologic photographing apparatus has the same configuration as the first embodiment (particularly, the optical system and the forming unit).
  • FIG. 9 shows a configuration example of the ophthalmologic photographing apparatus according to this embodiment.
  • the configuration shown in FIG. 9 can be applied instead of FIG. 4 of the first embodiment.
  • the correspondence information storage unit 2321, the region selection unit 2322, and the parameter specifying unit 2323 shown in FIG. 4 can be added to the configuration shown in FIG. In that case, it is possible to execute a combination of the process described in the first embodiment and the process according to this embodiment described below.
  • the data processing unit 230 of this embodiment includes an overlay processing unit 233 in addition to the image dividing unit 231 and the display condition setting unit 232.
  • substantially the same cross section of the eye to be examined is scanned with the signal light a plurality of times. Further, the forming unit forms a plurality of cross-sectional images of the cross-section based on the detection result of the interference light acquired by the optical system with the plurality of scans.
  • This cross-sectional image may be a two-dimensional image or a three-dimensional image.
  • the image dividing unit 231 divides each cross-sectional image formed by the forming unit into a plurality of partial areas. This process is executed, for example, by performing the following two-stage process on each of the plurality of cross-sectional images formed by the forming unit.
  • the image region specifying unit 2311 analyzes the cross-sectional image formed by the forming unit, thereby specifying an image region corresponding to a predetermined part of the eye to be examined.
  • the partial area specifying unit 2312 specifies an image area having the image area specified by the image area specifying unit 2311 as a boundary as a partial area.
  • the display condition setting unit 232 sets, as a display condition, the partial regions to which predetermined image processing (image superimposition processing or the like) is applied among the plurality of partial regions acquired by the image dividing unit 231. That is, the display condition setting unit 232 classifies the plurality of partial areas of the cross-sectional image into partial areas to which the image processing is applied and partial areas to which it is not applied.
  • Such classification of the partial areas is performed according to the shooting mode, for example.
  • for example, second correspondence information, in which one or both of the type of the partial area to which the image processing is applied and the type of the non-application target partial area are associated with each shooting mode, is stored in advance in the display condition setting unit 232 (or the storage unit 212).
  • Another example of processing for classifying partial areas is a method based on user instructions.
  • the user classifies a plurality of partial areas of the cross-sectional image displayed on the display unit 241 via the operation unit 242.
  • This operation includes, for example, an operation of clicking a partial area in the cross-sectional image, or an operation of selecting a type of partial area (such as an eye part name) from a drop-down list.
  • the superimposition processing unit 233 performs a process of superimposing a plurality of tomographic images only on a partial region set as an application target of the image superimposition processing among the plurality of partial regions acquired by the image dividing unit 231.
  • the overlay processing unit 233 includes an image alignment unit 2331 and an image composition unit 2332.
  • the image alignment unit 2331 performs registration of a plurality of cross-sectional images.
  • the image composition unit 2332 extracts a partial region set as an application target of the image overlay process from each of the plurality of cross-sectional images that have been registered.
  • the extracted partial areas are classified according to their positions in the frame (that is, classified according to the part of the eye to be examined). Furthermore, the image composition unit 2332 combines the partial regions of each type (the number of partial regions in each type equals the number of cross-sectional images).
  • An image obtained by combining a plurality of partial areas corresponding to each class is referred to as a combined partial area.
  • for a partial region to which the image superimposition process is not applied, one or more of the plurality of partial regions extracted from the plurality of cross-sectional images are selected.
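The assembly of the composite cross-sectional image from combined and selected partial regions can be sketched as follows, assuming registered frames that share one per-pixel label map (in the apparatus each frame is segmented; a shared map is a simplification for illustration).

```python
import numpy as np

def compose(frames, labels, apply_parts, select_index=0):
    """Build a composite cross-sectional image: pixels of 'application
    target' parts are averaged over all frames (combined partial regions);
    pixels of other parts are taken from one selected frame (selected
    partial region).

    frames      : list of registered 2-D arrays of the same cross section
    labels      : 2-D array of part names, shared by all frames (assumption)
    apply_parts : part names to which image superimposition is applied
    """
    frames = np.asarray(frames, dtype=float)
    averaged = frames.mean(axis=0)
    composite = frames[select_index].copy()   # start from the selected frame
    for part in apply_parts:
        mask = labels == part
        composite[mask] = averaged[mask]      # overwrite with combined region
    return composite
```

With `apply_parts` covering the retina, choroid, and sclera but not the vitreous, this reproduces the example in the text: denoised still regions beside an unaveraged vitreous region.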
  • the main control unit 211 causes the display unit 241 to display a single cross-sectional image (composite cross-sectional image) of the same cross-section as the plurality of cross-sectional images.
  • the partial area to which the image superimposition process is applied is composed of the combined partial areas.
  • the partial area to be unapplied in the composite cross-sectional image is composed of one selected partial area (selected partial area).
  • the retinal region G2, the choroid region G3, and the sclera region G4 are “partial regions to which the image overlay process is applied”.
  • the vitreous region G1 can be set to “a partial region to which image overlay processing is not applied”. This is because, if the vitreous body moves during the plurality of scans, the form of the image drawn in the corresponding partial areas differs between scans, so a desired composite image cannot be obtained.
  • a composite partial region can be presented for the retinal region G2, choroid region G3, and sclera region G4 in the composite tomogram, and a selected partial region can be presented for the vitreous region G1.
  • a combined cross-sectional image composed entirely of still images is displayed.
  • a combined partial region can be presented, and for the vitreous region G1, two or more partial regions can be switched and presented.
  • moving image display can be performed on the vitreous area G1. According to this display mode, a combined cross-sectional image is presented in which the non-application target partial region is displayed as a moving image and the other partial regions are displayed as still images. Note that two or more partial areas as non-application target partial areas can be switched automatically or manually.
  • FIG. 10 shows an example of the operation of the ophthalmologic photographing apparatus.
  • Step S45: Repeat OCT measurement
  • the ophthalmologic photographing apparatus repeatedly performs OCT measurement (line scan, circle scan, etc.) of the fundus oculi Ef. Instead of repeating the line scan or the like, the three-dimensional scan may be repeatedly executed. Further, fundus imaging can be performed after repeated OCT measurement.
  • the image area specifying unit 2311 specifies a one-dimensional image area corresponding to a predetermined part of the eye E by executing segmentation of each cross-sectional image formed sequentially in step S45.
  • the one-dimensional image region specified for each cross-sectional image is, for example, a retina-vitreous boundary region g1, a retina-choroidal boundary region g2, and a choroid-sclera boundary region g3 as shown in FIG. 5A. Note that when the three-dimensional scan is repeatedly executed in step S45, a two-dimensional image region in each volume data is specified as in the modification of the first embodiment.
  • the partial area specifying unit 2312 specifies a two-dimensional image area having the one-dimensional image area specified in step S46 as a boundary.
  • the two-dimensional image region is, for example, a vitreous region G1, a retina region G2, a choroid region G3, a sclera region G4, etc. as shown in FIG. 5B. Note that when the three-dimensional scan is repeatedly executed in step S45, the three-dimensional image region in each volume data is specified as in the modification of the first embodiment.
  • the display condition setting unit 232 selects a partial area to which the image overlay process or the like is applied from among the plurality of partial areas acquired in step S47.
  • assume that display conditions are set so that the image superposition processing is performed on the retina region G2, the choroid region G3, and the sclera region G4, and is not performed on the vitreous region G1.
  • the image alignment unit 2331 performs registration of the plurality of cross-sectional images acquired in step S45.
  • the image composition unit 2332 extracts the partial area selected in step S48 (that is, the partial area set as the application target of the image superimposition process) from each of the plurality of cross-sectional images that have been registered. Further, the image composition unit 2332 synthesizes the extracted partial areas for each type (the above-mentioned class), thereby forming each type of combined partial area. In this example, it is assumed that all of a plurality of cross-sectional images are selected for the partial area to which the image overlay process is not applied.
  • the main control unit 211 causes the display unit 241 to display a combined cross-sectional image in the same cross section as the plurality of cross-sectional images.
• in the combined cross-sectional image, the main control unit 211 displays, for the retina region G2, the choroid region G3, and the sclera region G4, a still image of the combined partial regions acquired in step S50, and performs, for the vitreous region G1, a moving image display based on the plurality of cross-sectional images.
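The region-selective superposition just described can be sketched as follows. This is a simplified illustration, not the patent's implementation: the function name `selective_overlay`, the use of a plain mean as the superposition, and the choice of the latest frame for the non-overlay region are assumptions.

```python
import numpy as np

def selective_overlay(frames, apply_mask):
    """Average repeated B-scans only where `apply_mask` is True.

    frames: registered B-scans (depth x width) of substantially the same
    cross section.  apply_mask: True for partial regions set as overlay
    targets (e.g. retina, choroid, sclera); False where the most recent
    frame is kept instead (e.g. the vitreous region, which can then be
    refreshed frame by frame as a moving image).
    """
    stack = np.stack(frames)
    averaged = stack.mean(axis=0)        # noise-reduced combined partial regions
    latest = frames[-1]                  # most recent frame for the moving region
    return np.where(apply_mask, averaged, latest)

# Toy example: three registered frames with values 1, 2, 3.
frames = [np.full((4, 4), v, dtype=float) for v in (1.0, 2.0, 3.0)]
mask = np.zeros((4, 4), dtype=bool)
mask[2:] = True                          # bottom half is the overlay target
combined = selective_overlay(frames, mask)
print(combined[0, 0], combined[3, 0])    # latest value on top, mean below
```

The top half keeps the latest frame's value (3.0) while the bottom half holds the average (2.0), mirroring a still superimposed retina shown alongside a live vitreous.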
  • the ophthalmologic photographing apparatus includes an optical system, a forming unit, a dividing unit, a setting unit, and a display control unit.
• the optical system divides the light from the light source (light source unit 101) into signal light (LS) and reference light (LR), and detects the interference light (LC) of the signal light having passed through the eye to be examined (E) and the reference light having passed through the reference light path.
  • the forming unit forms an image of the eye to be examined based on the detection result of the interference light by the optical system.
  • the dividing unit (image dividing unit 231) divides the image (G) formed by the forming unit into a plurality of partial regions (G1 to G4).
• the setting unit (display condition setting unit 232) sets, as a display condition, the partial region to which predetermined image processing is applied (the application-target partial region) among the plurality of partial regions acquired by the dividing unit.
  • the display control unit causes the display unit (display unit 241) to display the image formed by the forming unit based on the display conditions set by the setting unit. That is, the display control unit causes the display unit 241 to display an OCT image obtained by applying image processing only to the “application target” partial region.
  • the display means may be included in the ophthalmologic photographing apparatus or an external device.
• according to such an ophthalmologic photographing apparatus, it is possible to determine, for each partial region of the OCT image, whether or not to apply image processing, and to display an OCT image obtained by spatially selective image processing. Therefore, visibility can be improved over the entire OCT image. Thereby, not only local observation of the eye to be examined but also global observation can be suitably performed.
• the optical system scans substantially the same cross section of the eye to be examined multiple times with the signal light; a scanning unit (galvano scanner 42) is used for this scanning.
• the forming unit forms a plurality of images of the cross section based on the detection results of the interference light acquired by the optical system in association with the plurality of scans.
  • the dividing unit divides each of the plurality of images formed by the forming unit into a plurality of partial regions.
  • the display control unit includes an overlay processing unit (233).
  • the superimposition processing unit performs a process of superimposing a plurality of images only on a partial area set as an application target of the superimposition process by the setting unit among the plurality of partial areas as the superimposition process on the plurality of images.
  • the display control unit causes the display unit to display a single image formed by the overlay processing unit.
  • the partial area suitable for the superimposition processing is, for example, an image area in which a part that does not substantially move, that is, a part that is substantially stationary is drawn.
• in the above example, the partial areas corresponding to the retina, choroid, and sclera are applied as partial areas suitable for the overlay process, and the partial area corresponding to the vitreous body is applied as a partial area unsuitable for the overlay process.
  • the display control unit can perform moving image display based on a plurality of images for a partial region to which the overlay process is not applied among the plurality of partial regions. Thereby, the motion state of the partial area can be grasped. In the above example, it is possible to observe the motion state of the vitreous body.
• <Third Embodiment> Another embodiment to which the same image overlay processing as that of the second embodiment is applied will be described.
  • the ophthalmologic imaging apparatus according to this embodiment is suitably used when a region that may move during OCT measurement, such as a vitreous body, is included in the imaging range.
  • the configuration according to this embodiment can be added to the configuration according to another embodiment.
  • the ophthalmologic photographing apparatus has the same configuration as that of the second embodiment.
  • differences from the second embodiment will be described in detail with reference to the drawings of the first embodiment and the second embodiment as appropriate.
  • FIG. 11 shows a configuration example of the ophthalmologic photographing apparatus according to this embodiment.
  • the configuration shown in FIG. 11 is applied instead of FIG. 9 of the second embodiment.
  • the control unit 210 is provided with a timer unit 213.
  • the timer unit 213 starts timing at a predetermined timing.
  • This timing start timing is arbitrary.
  • the timing at which the eye E is brought into the examination position can be applied.
• this timing is, for example, the input timing of a signal from a sensor that detects that the subject's face has contacted the chin rest or the forehead rest, a sensor that detects that the subject has sat on the chair, or a sensor (or a camera and an image analysis unit) that detects that the subject's face has approached the ophthalmologic imaging apparatus.
  • the timing at which the reflected light of the illumination light output from the fundus camera unit 2 starts to be detected, the timing at which the signal light LS output from the OCT unit 100 starts to be detected, or the like may be set as the timing start timing.
  • the timing at which the user performs a predetermined operation via the operation unit 242 may be set as the timing start timing.
  • the timer unit 213 starts timing at the timing as described above, and outputs a signal indicating that a predetermined time has been reached.
• this timed duration takes into account, for a part of the eye with passive motion (such as the vitreous body), the time it takes for that part to become substantially stationary after the subject (and thus the eye to be examined) has become substantially still.
  • the time keeping time is set to 15 seconds, for example.
  • the timed time may be set in consideration of the inspection throughput.
• the timed duration is, for example, calculated statistically (for example, as the average value, mode value, maximum value, etc.) from results obtained by actually measuring, for a plurality of eyes, the time from the state in which the part is moving until it substantially stops. It is also possible to selectively apply one of a plurality of timed durations prepared according to the characteristics of the subject or the eye (age, gender, disease name, disease severity, etc.).
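The statistical derivation of the timed duration can be sketched as follows. This is an illustrative sketch only: the function name `timed_duration`, the sample values, and the safety `margin` are assumptions; the text names only the statistics themselves (average, mode, maximum).

```python
import statistics

def timed_duration(settle_times, method="mean", margin=1.0):
    """Derive the timer duration from settle times measured for many eyes.

    settle_times: seconds from the eye being placed at the examination
    position until the part (e.g. the vitreous) substantially stopped,
    measured in advance for a plurality of eyes.
    method: the statistic to apply ("mean", "mode", or "max"), as
    mentioned in the text.  margin: a small safety allowance added on
    top (an assumption of this sketch, not stated in the source).
    """
    stats = {
        "mean": statistics.mean,
        "mode": statistics.mode,
        "max": max,
    }
    return stats[method](settle_times) + margin

measured = [12.0, 14.0, 14.0, 13.5, 15.0]   # hypothetical settle times (s)
print(timed_duration(measured, "max"))       # 16.0
```

Separate duration tables could be kept per subject category (age, disease, etc.) and selected before timing starts.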
  • FIG. 12 illustrates an example of the operation of the ophthalmologic photographing apparatus.
  • the eye to be examined is placed at the examination position
  • the eye E to be examined is placed at the examination position. This process is executed, for example, in the same manner as in the first embodiment. Any of the above-described sensors detects that the eye E has been placed at the examination position, and sends a signal to the control unit 210.
  • the main control unit 211 receives a signal from the sensor, and starts time measurement by the time measurement unit 213.
  • steps S63 and S64 and steps S65 to S67 are executed in parallel.
  • Steps S63 and S64 are processing for shifting to an apparatus state in which OCT measurement can be performed (that is, processing for the purpose of waiting until the vitreous body or the like stops).
  • steps S65 to S67 are preparation processes for executing the OCT measurement.
  • the main control unit 211 performs control so that OCT measurement cannot be performed unless both processes are completed.
  • the time measuring unit 213 measures time for a predetermined time and sends a signal to the main control unit 211.
  • the main control unit 211 receives the signal output from the time measuring unit 213 in step S63, and shifts the apparatus state of the ophthalmologic photographing apparatus to an apparatus state in which OCT measurement can be performed.
  • the apparatus state in which OCT measurement can be performed refers to an apparatus state in which an OCT measurement start instruction can be received and OCT measurement can be performed.
• if an instruction to start OCT measurement (an instruction made manually, or an automatic instruction issued when another ready state is reached) is given before this transition of the apparatus state, the main control unit 211 rejects the start instruction.
• at this time, a predetermined message can be output.
  • the main control unit 211 can cause the display unit 241 to display a message that prompts the user to wait until the apparatus state transitions.
• the progress of the timing by the timer unit 213 can also be presented.
• for example, the main control unit 211 can cause the display unit 241 to display a numerical value indicating the timed time as a countdown or count-up display.
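The gating behaviour described in this section (start timing when the eye is placed, reject premature start instructions with a message, expose the remaining time for a countdown display) can be sketched as a small state holder. This is a hypothetical sketch: the class name `MeasurementGate` and its method names are inventions for illustration, not names from the source.

```python
import time

class MeasurementGate:
    """Reject OCT start requests until a fixed wait has elapsed."""

    def __init__(self, wait_seconds):
        self.wait = wait_seconds
        self.start = None

    def eye_placed(self):
        """Sensor signal: the eye has reached the examination position."""
        self.start = time.monotonic()

    def remaining(self):
        """Seconds left; usable for a countdown display."""
        if self.start is None:
            return self.wait
        return max(0.0, self.wait - (time.monotonic() - self.start))

    def request_start(self):
        """Accept or reject an OCT measurement start instruction."""
        if self.remaining() > 0:
            return (False, f"Please wait {self.remaining():.0f} s")
        return (True, "OCT measurement can be performed")

gate = MeasurementGate(wait_seconds=0.01)   # e.g. 15 s in the text; short here
gate.eye_placed()
print(gate.request_start()[0])              # rejected right after placement
time.sleep(0.02)
print(gate.request_start()[0])              # accepted once the wait has elapsed
```

A real apparatus would run the preparation steps (S65 to S67) in parallel and require both to finish, as described above.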
  • the user selects a shooting mode. This process is executed, for example, in the same manner as in the first embodiment.
  • the image alignment unit 2331 registers a plurality of images (cross-sectional images or volume data) acquired in step S68.
  • the image composition unit 2332 forms a single image by compositing a plurality of registered images. This image synthesis process is performed on the entire image, unlike the second embodiment in which the partial areas are synthesized.
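The registration-then-composition of whole images can be sketched as follows. This is a stand-in under assumptions: the brute-force integer-shift search and the plain mean are illustrative choices, not the alignment or synthesis actually performed by the image alignment unit 2331 and image composition unit 2332.

```python
import numpy as np

def register_and_average(frames, max_shift=2):
    """Align repeated B-scans to the first frame, then average them all.

    Registration here is a brute-force search over small integer (dy, dx)
    shifts minimising the squared difference to the reference frame; the
    synthesis is a mean over the aligned stack, applied to the entire
    image (not per partial region).
    """
    ref = frames[0]
    aligned = [ref]
    for f in frames[1:]:
        best = min(
            ((dy, dx) for dy in range(-max_shift, max_shift + 1)
                      for dx in range(-max_shift, max_shift + 1)),
            key=lambda s: float(((np.roll(f, s, axis=(0, 1)) - ref) ** 2).sum()),
        )
        aligned.append(np.roll(f, best, axis=(0, 1)))
    return np.stack(aligned).mean(axis=0)   # the single combined image

# Toy example: the second frame is the first one shifted by (1, 1).
base = np.zeros((8, 8)); base[3:5, 3:5] = 1.0
shifted = np.roll(base, (1, 1), axis=(0, 1))
combined = register_and_average([base, shifted])
print(np.allclose(combined, base))          # shifted frame snaps back onto base
```

With real OCT data, subpixel registration and outlier rejection would normally replace this exhaustive integer search.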
  • the main control unit 211 causes the display unit 241 to display the image formed in step S70.
• a process of specifying partial areas (S27), a process of selecting the eye part corresponding to each partial area (S28), a process of specifying display conditions (S29), and a process of displaying the image based on the display conditions (S30) can also be executed.
• the optical system includes a scanning unit (galvano scanner 42) that scans the eye to be examined with the signal light. Further, the ophthalmologic photographing apparatus includes a timer unit (213) that starts timing at a predetermined timing, and a control unit (main control unit 211) that controls the optical system so as to scan substantially the same cross section of the eye to be examined with the signal light a plurality of times after a predetermined time has been measured by the timer unit.
  • the forming unit (the image forming unit 220 (and the data processing unit 230)) forms a plurality of images of this cross section based on the detection result of the interference light acquired by the optical system during a plurality of scans.
  • the display control unit includes an overlay processing unit (233) that forms a single image by superimposing a plurality of images formed by the forming unit. Further, the display control unit (main control unit 211) causes the display unit to display a single image formed by the overlay processing unit.
• according to such an ophthalmologic photographing apparatus, even when a region with movement is included in the imaging range, a plurality of images obtained by performing repetitive OCT measurement after the region has become substantially stationary can be superimposed. Therefore, a suitable superimposed image can be obtained. Further, by adding a configuration that applies display conditions for each partial region as in the first embodiment, for example, a combined image can be acquired with which not only local observation of the eye to be examined but also global observation can be suitably performed.
• <Fourth Embodiment> Like the third embodiment, the fourth embodiment is suitably used when a region that may move during OCT measurement is included in the imaging range.
• in the third embodiment, OCT measurement is performed after waiting a predetermined time for the part to become stationary.
• in this embodiment, in contrast, the movement of the part is monitored using images acquired in real time, and OCT measurement is performed after the part has substantially stopped.
  • the configuration according to this embodiment can be added to the configuration according to another embodiment.
  • the ophthalmologic photographing apparatus has the same configuration as that of the second embodiment.
  • differences from the second embodiment will be described in detail with reference to the drawings of the first embodiment and the second embodiment as appropriate.
  • FIG. 13 shows a configuration example of the ophthalmologic photographing apparatus according to this embodiment.
  • the configuration shown in FIG. 13 is applied instead of FIG. 9 of the second embodiment.
• in this embodiment, a motion state determination unit 234 is provided in the data processing unit 230.
  • the overlay processing unit 233 is provided with an image alignment unit 2331 and an image composition unit 2332 as in the second embodiment.
• the motion state determination unit 234 determines the motion state of a specific part of the eye E. In particular, the motion state determination unit 234 can determine whether the specific part is substantially stationary.
• the motion state determination unit 234 functions as a "determination unit".
• the motion state determination unit 234 includes a motion state information acquisition unit 2341 and a determination processing unit 2342.
• here, the specific part is a site of the eye E to be examined, such as the vitreous body.
• the motion state information acquisition unit 2341 sequentially analyzes the OCT images (two-dimensional or three-dimensional images) acquired in real time by repetitive OCT measurement, thereby acquiring information indicating the motion state of the specific part of the eye E depicted in these OCT images.
• the motion state information acquisition unit 2341 executes, for example, the following process. First, it analyzes the OCT image and specifies the image region corresponding to the specific part. Next, it acquires position information of the specified image region. This position information is, for example, the coordinates of the contour region corresponding to the contour of the image region. Alternatively, a feature position in the image region (such as the center of gravity or a feature point) may be obtained as the position information.
• the motion state information acquisition unit 2341 performs the above processing for each OCT image acquired by the repetitive OCT measurement, or for the OCT images obtained by thinning out the acquired OCT images.
• the position information sequentially acquired by the motion state information acquisition unit 2341 represents a time-series change in the position of the specific part, and is an example of motion state information.
• shape information indicating the shape of the image region may also be obtained.
• the shape information sequentially acquired by the motion state information acquisition unit 2341 represents a time-series change in the shape of the specific part, and is another example of motion state information.
• in general, the motion state information includes an arbitrary parameter that changes with the movement of the specific part, and the motion state information acquisition unit 2341 has a function of analyzing the OCT images and acquiring that parameter.
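One concrete form of motion state information, a time series of the specific part's position, can be sketched by taking the centroid of the segmented image region in each frame. The function name `region_centroid` and the binary-mask representation of the segmented region are assumptions of this sketch.

```python
import numpy as np

def region_centroid(mask):
    """Centroid (center of gravity) of the segmented image region.

    mask: boolean array, True where the OCT frame depicts the specific
    part.  The sequence of centroids over successive frames forms a
    time series of the part's position, i.e. motion state information.
    """
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())

# Toy example: a 2x4 block of "specific part" pixels in a 6x6 frame.
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 1:5] = True
print(region_centroid(mask))   # (2.5, 2.5)
```

Contour coordinates or shape descriptors could be extracted in the same loop to serve as the shape-based motion state information mentioned above.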
  • the determination processing unit 2342 determines whether the specific part of the eye E is substantially stationary based on the motion state information sequentially acquired by the motion state information acquisition unit 2341.
• the determination processing unit 2342 executes, for example, the following processing. First, it compares the position information obtained from a first OCT image with the position information obtained from a second OCT image (for example, by calculating a coordinate difference). Note that the determination processing unit 2342 can perform registration between these OCT images prior to this processing. If the positions (coordinates) indicated by these pieces of position information differ, the specific part moved during the OCT measurements corresponding to these OCT images. Conversely, if the positions indicated by these pieces of position information are (substantially) the same, the specific part did not (substantially) move during these OCT measurements. The difference between the pieces of position information (coordinates) is called "displacement information".
• each piece of displacement information represents the displacement of the specific part over a predetermined time interval (the repetition period of the OCT measurement or an integer multiple thereof). Further, the N − 1 pieces of displacement information obtained from N OCT images as a whole represent a time-series change in the displacement of the specific part.
• the determination processing unit 2342 determines whether the sequentially acquired displacement information is equal to or less than a predetermined threshold. When the displacement information exceeds the threshold, the specific part is regarded as moving; when the displacement information is equal to or less than the threshold, the specific part is regarded as substantially stationary. That is, the threshold is set in advance as the displacement that can be permitted while the specific part is regarded as being in a stationary state. This threshold can be set arbitrarily.
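The threshold decision on sequential displacement information can be sketched as follows. This is an illustrative sketch: the function name, the Manhattan-distance displacement, and the requirement that the last few displacements all stay under the threshold (the `window` parameter) are assumptions; the source specifies only a threshold comparison.

```python
def is_substantially_stationary(positions, threshold, window=3):
    """Decide whether the specific part has substantially stopped.

    positions: (y, x) positions of the part from successive OCT frames.
    Displacement information is the frame-to-frame coordinate difference;
    the part is regarded as stationary once the last `window`
    displacements are all at or below `threshold`.
    """
    if len(positions) < window + 1:
        return False                      # not enough frames to decide
    recent = positions[-(window + 1):]
    disps = [abs(a[0] - b[0]) + abs(a[1] - b[1])
             for a, b in zip(recent[:-1], recent[1:])]
    return all(d <= threshold for d in disps)

# Hypothetical centroid track: large motion at first, then settling.
track = [(10, 10), (8, 9), (6, 9), (5.9, 9.0), (5.9, 9.1), (6.0, 9.1)]
print(is_substantially_stationary(track, threshold=0.5))       # True
print(is_substantially_stationary(track[:3], threshold=0.5))   # False
```

Once this returns True, the apparatus would trigger the repetitive OCT measurement of steps S88 onward.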
• when shape information is used as the motion state information, the determination processing unit 2342 can likewise determine whether the specific part is substantially stationary by executing similar processing.
  • FIG. 14 illustrates an example of the operation of the ophthalmologic photographing apparatus.
  • step S84 Perform OCT measurement
  • the ophthalmologic photographing apparatus performs OCT measurement with a predetermined scan pattern.
  • the motion state information acquisition unit 2341 sequentially analyzes the OCT image acquired by the OCT measurement in step S85, thereby acquiring motion state information regarding the specific part of the eye E that is depicted in the OCT image.
  • the determination processing unit 2342 determines whether or not the specific part of the eye E is substantially stationary based on the motion state information acquired in step S86. If it is determined that the specific part is not stationary (S87: No), the process returns to the OCT measurement in step S85. Steps S85 to S87 are repeated until it is determined that the specific part is stationary (S87: Yes).
• if it is determined in step S87 that the specific part is stationary (S87: Yes), the main control unit 211 controls the OCT unit 100 and the galvano scanner 42 so as to scan substantially the same cross section of the fundus oculi Ef a plurality of times with the signal light LS.
  • the scan pattern in this repetitive OCT measurement may be the same as or different from that in step S85. Further, fundus imaging may be performed after this repetitive OCT measurement.
  • the image alignment unit 2331 performs registration of a plurality of images (cross-sectional images or volume data) acquired in step S88.
  • the image composition unit 2332 forms a single image by compositing a plurality of registered images. This image composition processing is performed on the entire image, as in the third embodiment.
  • the main control unit 211 causes the display unit 241 to display the image formed in step S90.
• a process of specifying partial areas (S27), a process of selecting the eye part corresponding to each partial area (S28), a process of specifying display conditions (S29), and a process of displaying the image based on the display conditions (S30) can also be executed.
  • the optical system includes a scanning unit (galvano scanner 42) that repeatedly scans substantially the same cross section of the eye to be examined with signal light.
  • the forming unit (the image forming unit 220 (and the data processing unit 230)) sequentially forms an image of this cross section based on the detection result of the interference light acquired by the optical system with this repeated scanning.
  • the ophthalmologic photographing apparatus includes a determination unit (motion state determination unit 234) and a control unit (main control unit 211). The determination unit sequentially analyzes the images formed by the forming unit, thereby acquiring motion state information indicating the motion state of the specific part of the eye to be examined depicted in the image.
  • the determination unit determines whether the specific part of the eye to be examined is substantially stationary based on the motion state information acquired sequentially.
• in response to the determination by the determination unit that the specific part is substantially stationary, the control unit controls the optical system so as to scan substantially the same cross section with the signal light a plurality of times.
  • the forming unit forms a plurality of images of the cross section based on the detection result of the interference light acquired by the optical system during the plurality of scans.
  • the display control unit includes an overlay processing unit (233) that forms a single image by superimposing a plurality of images formed by the forming unit.
• the display control unit (main control unit 211) causes the display unit (display unit 241) to display the single image formed by the overlay processing unit.
• according to such an ophthalmologic photographing apparatus, even when a region with movement is included in the imaging range, the motion state of the region is monitored, repetitive OCT measurement can be performed after the region has become substantially stationary, and the plurality of images obtained thereby can be superimposed. Therefore, a suitable superimposed image can be obtained. Further, by adding a configuration that applies display conditions for each partial region as in the first embodiment, for example, a combined image can be acquired with which not only local observation of the eye to be examined but also global observation can be suitably performed.
• <Fifth Embodiment> The ophthalmologic imaging apparatus according to the fifth embodiment performs OCT measurement a plurality of times at different focal positions, and forms an image by combining partial regions of the plurality of OCT images thus obtained.
  • Each partial area used for the combination includes a position focused in the OCT measurement.
• in the following, a case will be described in which two partial regions based on two OCT images, obtained by performing OCT measurement twice, are combined.
• however, the number of OCT measurements and the number of partial regions to be combined are arbitrary.
  • the ophthalmologic photographing apparatus has the same configuration as that of the first embodiment. Therefore, the drawings of the first embodiment are appropriately referred to.
  • the ophthalmologic photographing apparatus includes a focal position changing unit for changing the focal position of the signal light LS.
  • the focal position changing unit includes, for example, a focusing lens 43 (and a focusing driving unit 43A).
  • the main control unit 211 moves the focusing lens 43 in the optical axis direction of the signal light LS by controlling the focusing driving unit 43A. Thereby, the focus position of the signal light LS is moved in the depth direction (z direction) of the eye E to be examined.
  • FIG. 15 illustrates an example of the operation of the ophthalmologic photographing apparatus.
  • the user selects a shooting mode.
  • the image pasting mode is selected as the shooting mode.
  • the image pasting mode is a mode for operating the ophthalmologic photographing apparatus so that the OCT measurement is performed a plurality of times at different focal positions and an image is formed by combining partial regions of the obtained plurality of OCT images.
  • the process for selecting the shooting mode is executed, for example, as in the first embodiment.
  • Step S104 Finely adjust the focus state to match the first focus position
  • the user finely adjusts the focus state. This process is executed, for example, in the same manner as in the first embodiment.
• in this example, the focus is set to an arbitrary position (the first focal position) in the vitreous body of the eye E. This process can be performed automatically. Alternatively, the focus may be adjusted to the first focal position by the autofocus in step S104.
  • step S107 Perform second OCT measurement
  • the ophthalmologic photographing apparatus performs OCT measurement with a predetermined scan pattern. Thereby, a second image in a state where the focus is in the second focal position is obtained.
• the scan pattern applied in step S107 may be the same as or different from the scan pattern applied in step S105. However, it is assumed that the two scan areas at least partially overlap. The subsequent processing can be performed on this overlapping area.
• the image area specifying unit 2311 performs segmentation of each of the first image and the second image. This segmentation is performed on the same part of the eye E to be examined. Thereby, a one-dimensional image region (or a two-dimensional image region) corresponding to a predetermined part of the eye E is specified in the first image, and a one-dimensional image region (or a two-dimensional image region) corresponding to the same predetermined part is specified in the second image.
• FIG. 16A schematically shows the first image H1, and FIG. 16B schematically shows the second image H2.
• symbol V indicates the posterior surface of the vitreous body, and symbol P indicates a vitreous pocket.
• in FIG. 16A, the portion corresponding to the vitreous body, which was in focus in the first OCT measurement, is indicated by a solid line.
• in FIG. 16B, the portion corresponding to the retina or choroid, which was in focus in the second OCT measurement, is indicated by a solid line.
• in step S108, for example, the retina-vitreous boundary region h1 in the first image H1 and the retina-vitreous boundary region h2 in the second image H2 are specified.
• the partial area specifying unit 2312 specifies two-dimensional image regions having the one-dimensional image regions specified in step S108 as boundaries. Thereby, the partial regions to be specified by the image dividing unit 231 are obtained: the image region above h1 (on the vitreous side) in the first image H1, that is, the vitreous region, and the image region below h2 (on the retina side) in the second image H2, that is, the retinal region, choroid region, and sclera region.
  • the display condition setting unit 232 sets display conditions for the first partial region (vitreous region) of the first image (H1).
  • the display condition setting unit 232 sets display conditions for a second partial region (retinal region) different from the first partial region among the partial regions of the second image (H2).
  • This setting process is performed, for example, by a process of selecting an eye part corresponding to a partial region (step S8) and a process of specifying a display condition (step S9), as in the first embodiment.
• the main control unit 211 causes the display unit 241 to display a single image including the first partial region of the first image and the second partial region of the second image, applying the display conditions specified in step S110.
• this display processing is performed, for example, by extracting (trimming) the first partial region from the first image, extracting (trimming) the second partial region from the second image, and pasting the first partial region and the second partial region together at their boundary position (the retina-vitreous boundary region).
• alternatively, the display processing of step S111 can be performed by superimposing, on the display unit 241, a first layer in which the first image is displayed and a second layer in which the second image is displayed.
• by the above processing, an image in which the first partial region (vitreous region) of the first image H1 and the second partial region (retinal region, choroid region, and sclera region) of the second image H2 are combined is obtained.
• this image is shown schematically in FIG. 16C.
• the image H is obtained by pasting together the first partial region (vitreous region) K1 of the first image H1 and the second partial region (retinal region, choroid region, and sclera region) K2 of the second image H2.
• the focus is set at an arbitrary position in the first partial region K1 and at an arbitrary position in the second partial region K2. Therefore, compared with the original images H1 and H2, the image H is a clear image that is in focus over its entire area.
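The pasting of the vitreous-focused image and the retina-focused image at the retina-vitreous boundary can be sketched as follows. This is a minimal illustration under assumptions: the function name `paste_at_boundary`, the per-column boundary depths, and the uniform toy images are hypothetical; real processing would also restrict itself to the overlapping scan area and blend across the seam.

```python
import numpy as np

def paste_at_boundary(first, second, boundary):
    """Paste two OCT images of the same cross section at a boundary.

    first: image acquired with the focus in the vitreous (first focal
    position).  second: image acquired with the focus in the retina,
    choroid, and sclera (second focal position).  boundary: per-column
    depth of the retina-vitreous boundary obtained by segmenting both
    images.  Rows above the boundary come from `first`, rows at or
    below it from `second`.
    """
    depth = first.shape[0]
    rows = np.arange(depth)[:, None]
    above = rows < np.asarray(boundary)      # vitreous side of the boundary
    return np.where(above, first, second)

# Toy example: vitreous-focused pixels are 1.0, retina-focused are 2.0.
first = np.full((6, 3), 1.0)
second = np.full((6, 3), 2.0)
h = paste_at_boundary(first, second, boundary=[2, 3, 4])
print(h[:, 0])   # [1. 1. 2. 2. 2. 2.]
```

Each column of the result takes its upper part from the vitreous-focused image and its lower part from the retina-focused image, like the combined image H in FIG. 16C.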
  • the ophthalmologic photographing apparatus includes a focal position changing unit (the focusing lens 43 (and the focusing driving unit 43A)) for changing the focal position of the signal light.
  • the optical system detects the first interference light based on the signal light at the first focal position and detects the second interference light based on the signal light at the second focal position.
  • the forming unit (the image forming unit 220 (and the data processing unit 230)) forms the first image based on the detection result of the first interference light, and forms the second image based on the detection result of the second interference light.
  • the dividing unit (image dividing unit 231) divides the first image and the second image into a plurality of substantially identical partial regions, respectively.
• the setting unit sets display conditions for the first partial region among the plurality of partial regions of the first image, and sets display conditions for a second partial region, different from the first partial region, among the plurality of partial regions of the second image.
• the display control unit (main control unit 211) causes the display means (display unit 241) to display a single image including the first partial region of the first image and the second partial region of the second image, based on the display conditions set by the setting unit.
• according to such an ophthalmologic photographing apparatus, a single image can be formed by combining the in-focus portions of a plurality of images having different focal positions, so a clear image can be acquired. Furthermore, since display conditions can be set for each partial region, not only local observation of the eye to be examined but also global observation can be suitably performed.
• <Ophthalmologic Image Display Apparatus> An embodiment of an ophthalmologic image display apparatus will be described. A configuration example of the ophthalmologic image display apparatus is shown in FIG.
  • the ophthalmologic image display apparatus 1000 includes a control unit 1210, a data processing unit 1230, a user interface (UI) 1240, and a data receiving unit 1250.
  • the control unit 1210 has the same function as the control unit 210 of the ophthalmologic photographing apparatus 1 according to the first embodiment, for example, and includes a main control unit 1211 and a storage unit 1212.
  • the main control unit 1211 and the storage unit 1212 have the same functions as the main control unit 211 and the storage unit 212, respectively.
• the data processing unit 1230 has the same function as, for example, the data processing unit 230 of the ophthalmologic photographing apparatus 1 according to the first embodiment, and includes an image dividing unit 1231 and a display condition setting unit 1232.
  • the image dividing unit 1231 and the display condition setting unit 1232 have the same functions as the image dividing unit 231 and the display condition setting unit 232, respectively.
• the user interface 1240 has the same function as, for example, the user interface 240 of the ophthalmologic photographing apparatus 1 according to the first embodiment, and includes a display unit 1241 and an operation unit 1242.
  • the display unit 1241 and the operation unit 1242 have the same functions as the display unit 241 and the operation unit 242, respectively.
  • the data reception unit 1250 receives an image of the eye to be examined formed using OCT.
  • the data receiving unit 1250 includes a configuration corresponding to a data receiving mode, such as a communication interface or a drive device.
  • the data receiving unit 1250 functions as a “receiving unit”.
  • FIG. 18 illustrates an example of the operation of the ophthalmologic image display apparatus.
  • the data receiving unit 1250 receives an OCT image from an external device or a recording medium.
  • the accepted OCT image is stored in the storage unit 1212.
  • the OCT image is sent to the data processing unit 1230.
  • the image dividing unit 1231 divides the OCT image received in step S121 into a plurality of partial areas. This process includes, for example, segmentation (step S6) and partial area specifying process (step S7) as in the first embodiment.
  • the display condition setting unit 1232 sets a display condition for each of the plurality of partial areas acquired in step S122. This process includes, for example, a process of selecting an eye part corresponding to the partial region (step S8) and a process of specifying display conditions (step S9) as in the first embodiment.
  • (Step S124) An image is displayed based on the display conditions.
  • the main control unit 1211 applies the display condition specified in step S123 and causes the display unit 1241 to display the OCT image. This process is executed, for example, in the same manner as step S10 of the first embodiment.
  • Processes included in any one of the second to fifth embodiments, or in modifications thereof, can also be applied to this ophthalmologic image display apparatus.
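The flow of steps S121 to S124 can be sketched as follows. This is a minimal illustrative sketch, not the apparatus's implementation: the intensity-threshold division stands in for the segmentation performed by the image dividing unit, the (gain, offset) pairs stand in for display conditions, and all function names are hypothetical.

```python
import numpy as np

def divide_into_regions(image, threshold=0.5):
    """Stand-in for step S122: divide the image into two partial regions
    by intensity, returning a label map (the real apparatus segments
    anatomical layers instead)."""
    return (image >= threshold).astype(np.uint8)

def set_display_conditions(labels):
    """Stand-in for step S123: assign a (gain, offset) display condition
    to each partial region label."""
    return {0: (1.5, 0.0),   # raise the contrast of the dark region
            1: (1.0, 0.1)}   # slightly brighten the bright region

def apply_display_conditions(image, labels, conditions):
    """Step S124: render the image, applying each region's condition
    only inside that region."""
    out = np.empty_like(image)
    for label, (gain, offset) in conditions.items():
        mask = labels == label
        out[mask] = np.clip(image[mask] * gain + offset, 0.0, 1.0)
    return out

image = np.array([[0.2, 0.8],
                  [0.4, 0.9]])
labels = divide_into_regions(image)                          # step S122
conditions = set_display_conditions(labels)                  # step S123
shown = apply_display_conditions(image, labels, conditions)  # step S124
```

In the apparatus, the division would follow anatomical boundaries and the conditions would be selected per eye part; only the per-region application of the conditions is the same as shown here.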
  • the ophthalmologic image display apparatus includes a reception unit, a division unit, a setting unit, and a display control unit.
  • the reception unit receives an image of the eye to be examined formed using OCT.
  • the dividing unit (image dividing unit 1231) divides the image received by the receiving unit into a plurality of partial areas.
  • the setting unit sets display conditions for each of the plurality of partial areas acquired by the dividing unit.
  • the display control unit (main control unit 1211) causes the display unit (display unit 1241) to display the image received by the receiving unit based on the display conditions set by the setting unit.
  • the display means may be included in the ophthalmic image display device or an external device.
  • With such an ophthalmologic image display apparatus, it is possible to improve visibility over the entire OCT image, not just a focused portion of it. Thereby, not only local observation of the eye to be examined but also global observation can be suitably performed.
  • When a wide-area eyeball image (for example, a full-eye image) is displayed, display conditions can be set individually for each part depicted in the wide-area eyeball image. Thereby, a wide-area eyeball image is obtained in which a plurality of regions of the eye to be examined can all be suitably observed.
  • The same applies when a composite image of a plurality of images, obtained by individually executing OCT on a plurality of regions of the eye to be examined, is displayed. For example, after setting a first display condition for the anterior segment OCT image and a second display condition for the posterior segment OCT image, the composite image can be displayed. It is also possible to set two or more display conditions for the anterior segment OCT image and two or more for the posterior segment OCT image, and then display a composite image thereof.
  • As a specific example, after setting a first display condition for the region where the cornea is depicted in the anterior segment OCT image, a second display condition for the region where the crystalline lens is depicted in the anterior segment OCT image, a third display condition for the region where the vitreous body is depicted in the posterior segment OCT image, and a fourth display condition for the region where the fundus is depicted in the posterior segment OCT image, it is possible to display a composite image of the anterior segment OCT image and the posterior segment OCT image.
  • In general, the posterior surface of the cornea is less clearly depicted than its anterior surface, and the crystalline lens is likewise difficult to depict. It is therefore possible to increase the contrast of, and thereby highlight, the regions of the anterior segment OCT image where the posterior corneal surface and the crystalline lens are depicted.
  • eyelashes may appear in the anterior segment OCT image. In such a case, the area can be emphasized or conversely suppressed (blurred).
  • display conditions can be individually set for each identified layer region (or for each layer boundary region).
  • a lesioned part When a lesioned part is depicted in the OCT image, it is possible to set two or more display conditions for the lesioned part. For example, different display conditions can be set for the central region of the lesion and its peripheral region. Further, when a blood vessel is traveling in a lesioned part, different display conditions can be set for the lesioned part and the blood vessel. According to such a configuration, it is possible to preferably observe the structure of the lesion.
  • Together with the image, an interface can be displayed for changing at least one of various parameters such as pseudo-color value, luminance value, contrast, smoothing, and enhancement. When the user operates this interface (for example, with a pointing device such as a mouse) to change a parameter, the display condition of the corresponding partial region is changed in real time according to the changed content. The user can guide the display state of the partial region to a desired state by adjusting the parameter while monitoring that display state.
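The real-time behavior described above can be sketched as follows. This is a hypothetical sketch: `RegionDisplayController` and its parameter names are illustrative, and the display conditions are assumed to be simple gain/offset pairs.

```python
import numpy as np

class RegionDisplayController:
    """Hypothetical controller: holds the source image, the region label
    map, and per-region display parameters."""
    def __init__(self, image, labels):
        self.image = image
        self.labels = labels
        self.params = {}                 # region label -> {"gain", "offset"}
        self.rendered = image.copy()     # what the display unit currently shows

    def change_parameter(self, label, **updates):
        """Called from the UI (e.g. a slider dragged with the mouse):
        updates one region's parameters and re-renders only that region,
        leaving the other regions untouched."""
        p = self.params.setdefault(label, {"gain": 1.0, "offset": 0.0})
        p.update(updates)
        mask = self.labels == label
        self.rendered[mask] = np.clip(
            self.image[mask] * p["gain"] + p["offset"], 0.0, 1.0)

img = np.array([[0.2, 0.8]])
lab = np.array([[0, 1]])
ctrl = RegionDisplayController(img, lab)
ctrl.change_parameter(0, gain=2.0)   # user raises contrast of region 0 only
```

Re-rendering only the affected region is what keeps the feedback loop fast enough to feel real-time while the user drags the control.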
  • The manner of displaying an image in which display conditions are set for each partial region may differ depending on the display mode of the image.
  • Image display modes include still image display, moving image display (for example, live moving image display), slide show display, comparison display of two or more images, overlay display (superimposition display), and display of images used for analysis processing. .
  • When a plurality of images relating to the same part or different parts of the eye to be examined are displayed while being sequentially switched, as in moving image display or slide show display, the display conditions applied to one of them (a reference image, for example the first image) can be applied to the other images.
  • It is also possible to compare the original display condition of the reference image with the current display condition of another image, and to set a new display condition for that image based on the comparison result and the current display condition of the reference image.
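As one concrete rule for such comparison-based setting, the reference condition could be rescaled by the mean-brightness ratio between frames. This rule is an assumption for illustration; the text leaves the comparison method open.

```python
import numpy as np

def propagate_condition(ref_image, ref_gain, next_image):
    """Scale the reference image's display gain by the mean-brightness
    ratio, so the next frame is shown at roughly the same brightness
    as the reference frame."""
    ratio = np.mean(ref_image) / np.mean(next_image)
    return ref_gain * ratio

ref = np.full((2, 2), 0.4)      # reference frame, displayed with gain 1.0
nxt = np.full((2, 2), 0.2)      # next frame happens to be darker
gain = propagate_condition(ref, 1.0, nxt)
```

With this rule, a darker frame receives a proportionally larger gain, so successive frames of a moving image or slide show appear with consistent brightness.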
  • As comparison display of two or more images, there are cases where two or more images acquired at different timings are displayed side by side (or switched), as in follow-up observation or pre- and post-operative observation.
  • the superimposed display includes a superimposed display of an OCT image and an OCT image, a superimposed display of an OCT image and another type of image, and the like.
  • the same (or similar) display conditions can be set for partial regions corresponding to the same part of the eye to be examined.
  • It is also possible to set display conditions for the image based on an analysis result. For example, display conditions for a partial region to be analyzed in the image can be set based on the analysis result.
  • For example, when the thickness of a predetermined layer is analyzed, a display color corresponding to the thickness can be assigned to each part of the partial region corresponding to that layer in the OCT image (or in an image obtained from it, or in a combination thereof with another OCT image). Thereby, an OCT image including a partial image in which the thickness distribution of the predetermined layer is expressed in color is obtained.
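The thickness-to-color assignment can be sketched as follows. The sketch is illustrative: the boundary positions are assumed to come from segmentation, and the three-color scale and its thresholds are arbitrary choices, not values from the source.

```python
import numpy as np

def thickness_map(upper, lower):
    """Per-A-line thickness of a layer, given its two boundary rows
    (assumed to be obtained by segmentation)."""
    return lower - upper

def thickness_to_color(thickness, thin=5, thick=10):
    """Map each thickness to an RGB pseudo-color:
    blue = thin, green = within [thin, thick], red = thick."""
    colors = np.zeros(thickness.shape + (3,))
    colors[thickness < thin] = (0.0, 0.0, 1.0)
    colors[(thickness >= thin) & (thickness <= thick)] = (0.0, 1.0, 0.0)
    colors[thickness > thick] = (1.0, 0.0, 0.0)
    return colors

upper = np.array([10, 10, 10])       # upper boundary row per A-line
lower = np.array([13, 18, 22])       # lower boundary row per A-line
t = thickness_map(upper, lower)
c = thickness_to_color(t)
```

Overlaying `c` on the partial region corresponding to the layer yields the kind of color-coded thickness distribution described above.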
  • Display conditions can also be set for blood vessel regions in OCT images. When the OCT image is a front sectional image (C cross-sectional image) or a three-dimensional image, the blood vessel region is specified by arbitrary image processing such as region growing. When the OCT image is a longitudinal cross-sectional image (B cross-sectional image) or an arbitrary cross-sectional image, the blood vessel region is specified based on, for example, a phase image representing a time-series change in phase.
  • Alternatively, the blood vessel region can be specified by the following processing: a blood vessel region (reference blood vessel region) is specified in a separately acquired fundus image of the eye to be examined; registration between the fundus image and a front image obtained by processing the three-dimensional image is performed; and, based on the result of the registration (for example, the correspondence between coordinate positions in the two images), the image region corresponding to the reference blood vessel region is set as the blood vessel region.
  • Examples of a front image based on a three-dimensional image include: a projection image obtained by projecting the three-dimensional image in the depth direction; a shadowgram obtained by projecting a partial region of the three-dimensional image (for example, an image region corresponding to a predetermined layer) in the depth direction; and a flattened image obtained by flattening an image region corresponding to a predetermined layer in the three-dimensional image (for example, obtained by segmentation) and displaying it from the front.
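The registration-based transfer of the reference blood vessel region can be sketched as follows, assuming for simplicity that the registration result is a pure integer translation (a real registration would generally be affine or nonrigid); the function name is hypothetical.

```python
import numpy as np

def map_vessel_region(ref_mask, shift):
    """Transfer a reference blood-vessel mask (found in the fundus image)
    into OCT front-image coordinates; `shift` = (dy, dx) is the
    registration result relating the two coordinate systems."""
    dy, dx = shift
    mapped = np.zeros_like(ref_mask)
    ys, xs = np.nonzero(ref_mask)          # vessel pixels in the fundus image
    ys, xs = ys + dy, xs + dx              # corresponding OCT coordinates
    inside = (ys >= 0) & (ys < mapped.shape[0]) & (xs >= 0) & (xs < mapped.shape[1])
    mapped[ys[inside], xs[inside]] = 1     # drop pixels that fall outside
    return mapped

fundus_vessels = np.zeros((4, 4), dtype=np.uint8)
fundus_vessels[1, 1] = 1                   # one vessel pixel in the fundus image
oct_vessels = map_vessel_region(fundus_vessels, (1, 2))
```

The resulting mask marks the blood vessel region of the OCT front image, to which a dedicated display condition can then be applied.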
  • Information indicating the type of blood vessel may be assigned to each blood vessel image. Examples of such information include identification information distinguishing arteries from veins, blood vessels associated with a lesion from those not associated with one, and blood vessels of different diameters.
  • Such identification information is acquired by analyzing the OCT image itself, or by analyzing another image (such as a fundus image or a fluorescence image that can be registered with the OCT image). Alternatively, the user may input such identification information manually. According to such a configuration, the blood vessel region and its type in the OCT image can be easily recognized.
  • Contrast can be increased, or enhancement processing can be performed, for image regions that are weakly depicted in this way. Similar contrast processing and enhancement processing can be performed on a region hidden by a surgical instrument or treatment instrument.
  • A technique for calculating an evaluation value of the image quality of an OCT image is in general use.
  • Display conditions can be set based on this evaluation value. For example, it is possible to calculate an evaluation value for any one of a plurality of partial regions of the OCT image and to set the display condition so as to increase that value. Further, by alternately executing calculation of the evaluation value and changing of the display condition, that is, by adjusting the display condition while monitoring the evaluation value, it is possible to search for a suitable (for example, optimal) display condition.
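The alternating evaluate-and-adjust search can be sketched as follows, using the standard deviation of the displayed region as a stand-in evaluation value; the source does not fix a particular metric, so both the metric and the candidate-gain search are assumptions.

```python
import numpy as np

def evaluation_value(displayed):
    """Stand-in image-quality score: contrast (standard deviation) of the
    displayed values."""
    return float(np.std(displayed))

def search_display_gain(region, candidates):
    """Alternate evaluation and adjustment: try each candidate gain and
    keep the one whose displayed result scores highest."""
    best_gain, best_score = None, -1.0
    for gain in candidates:
        displayed = np.clip(region * gain, 0.0, 1.0)   # apply the condition
        score = evaluation_value(displayed)            # monitor the score
        if score > best_score:
            best_gain, best_score = gain, score
    return best_gain

region = np.array([0.1, 0.2, 0.3, 0.4])
gain = search_display_gain(region, [0.5, 1.0, 2.0, 5.0])
# a gain of 5.0 clips the brighter pixels, so 2.0 gives the best contrast here
```

A finer search (e.g. over a denser gain grid, or per parameter) would follow the same monitor-then-adjust loop.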
  • In the above embodiments, the optical path length difference between the optical path of the signal light LS and the optical path of the reference light LR is changed by changing the position of the optical path length changing unit 41; however, the method for changing this difference is not limited thereto.
  • it is possible to change the optical path length difference by disposing a reflection mirror (reference mirror) in the optical path of the reference light and moving the reference mirror in the traveling direction of the reference light to change the optical path length of the reference light.
  • the optical path length difference may be changed by moving the fundus camera unit 2 or the OCT unit 100 with respect to the eye E to change the optical path length of the signal light LS.
  • the optical path length difference can be changed by moving the measured object in the depth direction (z direction).
  • the computer program for realizing the above embodiment can be stored in an arbitrary recording medium readable by a computer.
  • As this recording medium, for example, a semiconductor memory, an optical disk, a magneto-optical disk (CD-ROM / DVD-RAM / DVD-ROM / MO, etc.), or a magnetic storage medium (hard disk / floppy (registered trademark) disk / ZIP, etc.) can be used.


Abstract

The invention relates to a technique whereby not only local observation of a subject's eye but also global observation can be suitably performed. An ophthalmologic imaging apparatus according to one embodiment comprises an optical system, a forming unit, a dividing unit, a setting unit, and a display control unit. The optical system splits light from a light source into signal light and reference light, and detects interference light between the signal light that has passed through the subject's eye and the reference light that has traveled along a reference light path. The forming unit forms an image of the subject's eye based on the result of the detection of the interference light by the optical system. The dividing unit divides the image formed by the forming unit into a plurality of partial regions. The setting unit sets display conditions for each of the partial regions obtained by the dividing unit. The display control unit causes a display means to display the image formed by the forming unit in accordance with the display conditions set by the setting unit.
PCT/JP2014/066046 2013-06-19 2014-06-17 Dispositif d'imagerie ophtalmologique et dispositif d'affichage d'image ophtalmologique WO2014203901A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2015522939A JP6046250B2 (ja) 2013-06-19 2014-06-17 眼科撮影装置および眼科画像表示装置

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013128102 2013-06-19
JP2013-128102 2013-06-19

Publications (1)

Publication Number Publication Date
WO2014203901A1 true WO2014203901A1 (fr) 2014-12-24

Family

ID=52104633

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/066046 WO2014203901A1 (fr) 2013-06-19 2014-06-17 Dispositif d'imagerie ophtalmologique et dispositif d'affichage d'image ophtalmologique

Country Status (2)

Country Link
JP (3) JP6046250B2 (fr)
WO (1) WO2014203901A1 (fr)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015098912A1 (fr) * 2013-12-25 2015-07-02 興和株式会社 Dispositif de tomographie
JP2016049368A (ja) * 2014-09-01 2016-04-11 株式会社ニデック 眼科撮影装置
JP2016202249A (ja) * 2015-04-15 2016-12-08 株式会社ニデック 眼底撮像装置及び眼底撮像プログラム
KR20170008172A (ko) * 2015-07-13 2017-01-23 캐논 가부시끼가이샤 화상처리장치, 화상처리방법 및 광간섭 단층촬영 장치
JP2017093854A (ja) * 2015-11-25 2017-06-01 株式会社トプコン 眼科撮影装置及び眼科画像表示装置
JP2017127397A (ja) * 2016-01-18 2017-07-27 キヤノン株式会社 画像処理装置、推定方法、システム及びプログラム
JP2017185057A (ja) * 2016-04-06 2017-10-12 キヤノン株式会社 眼科撮影装置及びその制御方法、並びに、プログラム
JP2018504994A (ja) * 2015-02-16 2018-02-22 ノバルティス アーゲー 硝子体および網膜のデュアル撮像のためのシステムおよび方法
JP2018057828A (ja) * 2016-10-05 2018-04-12 キヤノン株式会社 画像処理装置及び画像処理方法
JP2018094056A (ja) * 2016-12-13 2018-06-21 キヤノン株式会社 眼科装置、眼科撮影方法、及びプログラム
JP2018140049A (ja) * 2017-02-28 2018-09-13 キヤノン株式会社 撮像装置、撮像方法およびプログラム
JPWO2017135278A1 (ja) * 2016-02-02 2018-11-29 株式会社ニデック 断層画像撮影装置
WO2019146582A1 (fr) * 2018-01-25 2019-08-01 国立研究開発法人産業技術総合研究所 Dispositif de capture d'image, système de capture d'image, et procédé de capture d'image
WO2019156139A1 (fr) * 2018-02-08 2019-08-15 興和株式会社 Dispositif, procédé et programme de traitement d'images
US10916012B2 (en) 2016-10-05 2021-02-09 Canon Kabushiki Kaisha Image processing apparatus and image processing method
JP2021097790A (ja) * 2019-12-20 2021-07-01 株式会社トプコン 眼科情報処理装置、眼科装置、眼科情報処理方法、及びプログラム
WO2021153087A1 (fr) * 2020-01-30 2021-08-05 株式会社トプコン Dispositif ophthalmique, procédé de commande associé et support de stockage
JP2021183280A (ja) * 2017-02-28 2021-12-02 キヤノン株式会社 撮像装置、撮像装置の作動方法およびプログラム
EP4074244A1 (fr) * 2021-04-13 2022-10-19 Leica Instruments (Singapore) Pte. Ltd. Reconnaissance des caractéristiques et guidage en profondeur à l'aide de l'oct peropératoire

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6768624B2 (ja) * 2017-01-11 2020-10-14 キヤノン株式会社 画像処理装置、光干渉断層撮像装置、画像処理方法、及びプログラム
JP6740177B2 (ja) 2017-06-14 2020-08-12 キヤノン株式会社 画像処理装置、画像処理方法及びプログラム
WO2019172043A1 (fr) * 2018-03-05 2019-09-12 キヤノン株式会社 Dispositif de traitement d'image et son procédé de commande
JP7086683B2 (ja) * 2018-04-06 2022-06-20 キヤノン株式会社 画像処理装置、画像処理方法及びプログラム
WO2020066456A1 (fr) * 2018-09-25 2020-04-02 ソニー株式会社 Dispositif de traitement d'image, procédé de traitement d'image et programme
JP7250653B2 (ja) * 2018-10-10 2023-04-03 キヤノン株式会社 画像処理装置、画像処理方法及びプログラム
JP7439419B2 (ja) 2019-09-04 2024-02-28 株式会社ニデック 眼科画像処理プログラムおよび眼科画像処理装置
JP7517903B2 (ja) 2020-08-20 2024-07-17 株式会社トプコン スリットランプ顕微鏡システム

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011024930A (ja) * 2009-07-29 2011-02-10 Topcon Corp 眼科観察装置
JP2012045299A (ja) * 2010-08-30 2012-03-08 Canon Inc 画像処理装置、画像処理方法、プログラム及びプログラム記録媒体
JP2012071113A (ja) * 2010-08-31 2012-04-12 Canon Inc 画像処理装置、画像処理装置の制御方法及びプログラム
JP2012228544A (ja) * 2012-07-23 2012-11-22 Canon Inc 光断層画像撮像装置
WO2013085042A1 (fr) * 2011-12-09 2013-06-13 株式会社トプコン Dispositif d'observation de fond d'œil

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4068369B2 (ja) * 2002-03-15 2008-03-26 株式会社東芝 X線画像診断装置
JP5095167B2 (ja) * 2006-09-19 2012-12-12 株式会社トプコン 眼底観察装置、眼底画像表示装置及び眼底観察プログラム
JP4971872B2 (ja) * 2007-05-23 2012-07-11 株式会社トプコン 眼底観察装置及びそれを制御するプログラム
JP5790002B2 (ja) * 2011-02-04 2015-10-07 株式会社ニデック 眼科撮影装置
JP5921068B2 (ja) * 2010-03-02 2016-05-24 キヤノン株式会社 画像処理装置、制御方法及び光干渉断層撮影システム
JP5801577B2 (ja) * 2010-03-25 2015-10-28 キヤノン株式会社 光断層撮像装置及び光断層撮像装置の制御方法
JP6039156B2 (ja) * 2010-06-08 2016-12-07 キヤノン株式会社 画像処理装置、画像処理方法、及びプログラム
JP5822485B2 (ja) * 2011-02-25 2015-11-24 キヤノン株式会社 画像処理装置、画像処理方法、画像処理システム、slo装置、およびプログラム
JP5733565B2 (ja) * 2011-03-18 2015-06-10 ソニー株式会社 画像処理装置および方法、並びにプログラム
JP6023406B2 (ja) * 2011-06-29 2016-11-09 キヤノン株式会社 眼科装置、評価方法および当該方法を実行するプログラム
JP6157818B2 (ja) * 2011-10-07 2017-07-05 東芝メディカルシステムズ株式会社 X線診断装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011024930A (ja) * 2009-07-29 2011-02-10 Topcon Corp 眼科観察装置
JP2012045299A (ja) * 2010-08-30 2012-03-08 Canon Inc 画像処理装置、画像処理方法、プログラム及びプログラム記録媒体
JP2012071113A (ja) * 2010-08-31 2012-04-12 Canon Inc 画像処理装置、画像処理装置の制御方法及びプログラム
WO2013085042A1 (fr) * 2011-12-09 2013-06-13 株式会社トプコン Dispositif d'observation de fond d'œil
JP2012228544A (ja) * 2012-07-23 2012-11-22 Canon Inc 光断層画像撮像装置

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2015098912A1 (ja) * 2013-12-25 2017-03-23 興和株式会社 断層像撮影装置
WO2015098912A1 (fr) * 2013-12-25 2015-07-02 興和株式会社 Dispositif de tomographie
JP2016049368A (ja) * 2014-09-01 2016-04-11 株式会社ニデック 眼科撮影装置
JP2018504994A (ja) * 2015-02-16 2018-02-22 ノバルティス アーゲー 硝子体および網膜のデュアル撮像のためのシステムおよび方法
JP2016202249A (ja) * 2015-04-15 2016-12-08 株式会社ニデック 眼底撮像装置及び眼底撮像プログラム
KR20170008172A (ko) * 2015-07-13 2017-01-23 캐논 가부시끼가이샤 화상처리장치, 화상처리방법 및 광간섭 단층촬영 장치
JP2017018435A (ja) * 2015-07-13 2017-01-26 キヤノン株式会社 画像処理装置、画像処理方法及び光干渉断層撮影装置
US10299675B2 (en) 2015-07-13 2019-05-28 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and optical coherence tomography apparatus
KR102049242B1 (ko) * 2015-07-13 2019-11-28 캐논 가부시끼가이샤 화상처리장치, 화상처리방법 및 광간섭 단층촬영 장치
JP2017093854A (ja) * 2015-11-25 2017-06-01 株式会社トプコン 眼科撮影装置及び眼科画像表示装置
WO2017090549A1 (fr) * 2015-11-25 2017-06-01 株式会社トプコン Dispositif de capture d'image ophtalmologique et dispositif d'affichage d'image ophtalmologique
JP2017127397A (ja) * 2016-01-18 2017-07-27 キヤノン株式会社 画像処理装置、推定方法、システム及びプログラム
JP7104516B2 (ja) 2016-02-02 2022-07-21 株式会社ニデック 断層画像撮影装置
JPWO2017135278A1 (ja) * 2016-02-02 2018-11-29 株式会社ニデック 断層画像撮影装置
JP2017185057A (ja) * 2016-04-06 2017-10-12 キヤノン株式会社 眼科撮影装置及びその制御方法、並びに、プログラム
US10916012B2 (en) 2016-10-05 2021-02-09 Canon Kabushiki Kaisha Image processing apparatus and image processing method
JP2018057828A (ja) * 2016-10-05 2018-04-12 キヤノン株式会社 画像処理装置及び画像処理方法
EP3335622A3 (fr) * 2016-12-13 2018-06-27 Canon Kabushiki Kaisha Appareil ophtalmologique, procédé d'imagerie ophtalmologique et programme
US10653309B2 (en) 2016-12-13 2020-05-19 Canon Kabushiki Kaisha Ophthalmologic apparatus, and ophthalmologic imaging method
JP2018094056A (ja) * 2016-12-13 2018-06-21 キヤノン株式会社 眼科装置、眼科撮影方法、及びプログラム
JP2018140049A (ja) * 2017-02-28 2018-09-13 キヤノン株式会社 撮像装置、撮像方法およびプログラム
JP2021183280A (ja) * 2017-02-28 2021-12-02 キヤノン株式会社 撮像装置、撮像装置の作動方法およびプログラム
WO2019146582A1 (fr) * 2018-01-25 2019-08-01 国立研究開発法人産業技術総合研究所 Dispositif de capture d'image, système de capture d'image, et procédé de capture d'image
WO2019156139A1 (fr) * 2018-02-08 2019-08-15 興和株式会社 Dispositif, procédé et programme de traitement d'images
JP2021097790A (ja) * 2019-12-20 2021-07-01 株式会社トプコン 眼科情報処理装置、眼科装置、眼科情報処理方法、及びプログラム
JP7384656B2 (ja) 2019-12-20 2023-11-21 株式会社トプコン 眼科情報処理装置、眼科装置、眼科情報処理方法、及びプログラム
WO2021153087A1 (fr) * 2020-01-30 2021-08-05 株式会社トプコン Dispositif ophthalmique, procédé de commande associé et support de stockage
EP4074244A1 (fr) * 2021-04-13 2022-10-19 Leica Instruments (Singapore) Pte. Ltd. Reconnaissance des caractéristiques et guidage en profondeur à l'aide de l'oct peropératoire
WO2022219006A1 (fr) * 2021-04-13 2022-10-20 Leica Instruments (Singapore) Pte. Ltd. Reconnaissance de caractéristiques et guidage de profondeur à l'aide d'oct per-opératoire

Also Published As

Publication number Publication date
JP6378724B2 (ja) 2018-08-22
JP2016195878A (ja) 2016-11-24
JPWO2014203901A1 (ja) 2017-02-23
JP6046250B2 (ja) 2016-12-14
JP2018130595A (ja) 2018-08-23
JP6586196B2 (ja) 2019-10-02

Similar Documents

Publication Publication Date Title
JP6586196B2 (ja) 眼科撮影装置および眼科画像表示装置
JP5867719B2 (ja) 光画像計測装置
EP3730035B1 (fr) Appareil ophtalmologique
JP5937163B2 (ja) 眼底解析装置及び眼底観察装置
JP6045895B2 (ja) 眼科観察装置
JP5936254B2 (ja) 眼底観察装置及び眼底画像解析装置
JP5941761B2 (ja) 眼科撮影装置及び眼科画像処理装置
JP2016041221A (ja) 眼科撮影装置およびその制御方法
JP6392275B2 (ja) 眼科撮影装置、眼科画像表示装置および眼科画像処理装置
JP6411728B2 (ja) 眼科観察装置
JP6101475B2 (ja) 眼科観察装置
JP6159454B2 (ja) 眼科観察装置
JP6099782B2 (ja) 眼科撮影装置
JP2018023818A (ja) 眼科観察装置
JP6158535B2 (ja) 眼底解析装置
JP6503040B2 (ja) 眼科観察装置
JP6021289B2 (ja) 血流情報生成装置、血流情報生成方法、及びプログラム
WO2016039188A1 (fr) Dispositif d'analyse de fond d'œil et dispositif d'observation de fond d'œil
JP6404431B2 (ja) 眼科観察装置
JP6954831B2 (ja) 眼科撮影装置、その制御方法、プログラム、及び記録媒体
JP6942627B2 (ja) 眼科撮影装置、その制御方法、プログラム、及び記録媒体
JP2018023815A (ja) 眼科観察装置
JP2018023819A (ja) 眼科観察装置
JP2017140401A (ja) 眼科撮影装置
JP2016104305A (ja) 眼科撮影装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14813520

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2015522939

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14813520

Country of ref document: EP

Kind code of ref document: A1