WO2023042577A1 - Ophthalmic information processing device, ophthalmic device, ophthalmic information processing method, and program - Google Patents


Info

Publication number
WO2023042577A1
Authority
WO
WIPO (PCT)
Prior art keywords: unit, eye, image, information processing, depth
Application number
PCT/JP2022/030396
Other languages
French (fr)
Japanese (ja)
Inventor
Tatsuo Yamaguchi
Yoko Hirohara
Masahiro Akiba
Original Assignee
Topcon Corporation
Application filed by Topcon Corporation
Publication of WO2023042577A1


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14: Arrangements specially adapted for eye photography

Definitions

  • the present invention relates to an ophthalmic information processing device, an ophthalmic device, an ophthalmic information processing method, and a program.
  • fundus observation is useful for diagnosing fundus diseases and for estimating the state of systemic arteriosclerosis (especially of the cerebral blood vessels).
  • a fundus image acquired by an ophthalmologic apparatus such as a fundus camera or a scanning laser ophthalmoscope (SLO) is used.
  • Patent Literature 1 and Patent Literature 2 disclose an ophthalmologic apparatus that acquires a spectral fundus image.
  • Non-Patent Document 1 and Non-Patent Document 2 disclose methods of applying hyperspectral imaging to the retina to obtain spectral fundus images.
  • Patent Literature 3 discloses a method of accurately identifying a site based on spectral characteristics from a spectral fundus image.
  • Spectral distribution data such as spectral images are acquired based on the return light of the illumination light from the measurement target site. Since the detected light includes reflected and scattered light from various tissues along the depth direction of the measurement target site, it is unclear from which tissue within the site the light originates. If the tissue of origin of the return light could be identified, a more detailed analysis of the spectral distribution data would become possible.
  • the present invention has been made in view of such circumstances, and one of its purposes is to provide a new technique for analyzing spectral distribution data in more detail.
  • a first aspect includes a characteristic region identifying unit that identifies a characteristic region in spectral distribution data acquired by receiving return light in a predetermined wavelength range from an eye to be inspected illuminated with illumination light, and a depth information specifying unit that specifies depth information of the characteristic region based on measurement data of the eye to be inspected that has higher resolution in the depth direction than the spectral distribution data.
  • in a second aspect, the characteristic region identifying unit identifies the characteristic region in any one of a plurality of spectral distribution data obtained by illuminating the eye to be inspected with illumination light and receiving return light from the eye in mutually different wavelength ranges.
  • in a third aspect, the measurement data is OCT data obtained by performing optical coherence tomography on the eye to be examined.
  • in a fourth aspect, the depth information specifying unit includes a search unit that searches, among a plurality of front images formed based on the OCT data and having mutually different depth positions, for the front image having the highest degree of correlation with the spectral distribution data, and the depth information is specified based on the front image found by the search unit.
  • in a fifth aspect, the depth information specifying unit includes a search unit that searches, among a plurality of front images formed based on the OCT data and having mutually different depth positions, for the front image containing an image region having the highest degree of correlation with an image region that includes the characteristic region, and the depth information is specified based on the front image found by the search unit.
  • a sixth aspect according to the embodiment, in the fourth aspect or the fifth aspect, includes an estimating unit that estimates the presence or absence of a disease, the probability of a disease, or the type of a disease based on the front image found by the search unit.
  • a seventh aspect according to the embodiment includes, in the sixth aspect, a display control unit that causes the display means to display disease information including the presence or absence of the disease, the probability of the disease, or the type of the disease estimated by the estimation unit.
  • An eighth aspect according to the embodiment, in the fourth aspect or the fifth aspect, includes a display control unit that causes a display means to display the front image found by the search unit and the depth information.
  • a ninth aspect according to the embodiment, in the fourth aspect or the fifth aspect, includes a display control unit that superimposes the spectral distribution data on the front image found by the search unit and causes a display means to display the result.
  • in a tenth aspect, the display control unit causes the display means to display, in an identifiable manner, the area in the front image that corresponds to the characteristic region.
  • An eleventh aspect according to the embodiment, in any one of the first to seventh aspects, includes a display control unit that causes a display means to display the spectral distribution data and the depth information.
  • in a twelfth aspect, the depth information includes at least one of information representing a depth position, information representing a depth range, and information representing a layer region, each relative to a reference portion of the subject's eye.
  • a thirteenth aspect according to the embodiment is an ophthalmic apparatus including an illumination optical system that illuminates the eye to be inspected with illumination light, a light receiving optical system that receives return light of the illumination light from the eye to be inspected in mutually different wavelength ranges, and the ophthalmic information processing apparatus according to any one of the first to twelfth aspects.
  • a fourteenth aspect according to the embodiment is an ophthalmic information processing method including a characteristic region identifying step of identifying a characteristic region in spectral distribution data obtained by receiving return light in a predetermined wavelength range from an eye to be inspected illuminated with illumination light, and a depth information specifying step of specifying depth information of the characteristic region based on measurement data of the eye to be inspected that has higher resolution in the depth direction than the spectral distribution data.
  • in a fifteenth aspect, the characteristic region identifying step identifies the characteristic region in any one of a plurality of spectral distribution data obtained by illuminating the eye to be inspected with illumination light and receiving return light from the eye in mutually different wavelength ranges.
  • in a sixteenth aspect, the measurement data is OCT data obtained by performing optical coherence tomography on the eye to be examined.
  • in a seventeenth aspect, the depth information specifying step includes a search step of searching, among a plurality of front images formed based on the OCT data and having mutually different depth positions, for the front image having the highest degree of correlation with the spectral distribution data, and the depth information is specified based on the front image found in the search step.
  • in an eighteenth aspect, the depth information specifying step includes a search step of searching, among a plurality of front images formed based on the OCT data and having mutually different depth positions, for the front image containing an image region having the highest degree of correlation with an image region that includes the characteristic region, and the depth information is specified based on the front image found in the search step.
  • a nineteenth aspect according to the embodiment, in the seventeenth aspect or the eighteenth aspect, includes an estimation step of estimating the presence or absence of a disease, the probability of a disease, or the type of a disease based on the front image found in the search step.
  • a twentieth aspect according to the embodiment, in the nineteenth aspect, includes a display control step of causing the display means to display the disease information including the presence or absence of the disease, the probability of the disease, or the type of the disease estimated in the estimation step.
  • a twenty-first aspect according to the embodiment, in the seventeenth aspect or the eighteenth aspect, includes a display control step of causing a display means to display the front image found in the search step and the depth information.
  • a 22nd aspect according to the embodiment, in the 17th aspect or the 18th aspect, includes a display control step of superimposing the spectral distribution data on the front image found in the search step and causing a display means to display the result.
  • in a twenty-third aspect, the display control step causes the display means to display, in an identifiable manner, the area in the front image that corresponds to the characteristic region.
  • a twenty-fourth aspect according to the embodiment, in any one of the fourteenth to nineteenth aspects, includes a display control step of displaying the spectral distribution data and the depth information on a display means.
  • in a twenty-fifth aspect, the depth information includes at least one of information representing a depth position, information representing a depth range, and information representing a layer region, each relative to a reference portion of the subject's eye.
  • a twenty-sixth aspect according to the embodiment is a program that causes a computer to execute each step of the ophthalmologic information processing method according to any one of the fourteenth to twenty-fifth aspects.
  • FIG. 1 is a schematic diagram showing an example of the configuration of a control system of an ophthalmologic apparatus according to an embodiment
  • A schematic diagram for explaining the operation of the ophthalmologic apparatus according to the embodiment.
  • An ophthalmologic information processing apparatus according to an embodiment acquires spectral distribution data of an eye to be examined and, based on measurement data of the eye that has higher resolution in the depth direction than the spectral distribution data, specifies information representing the depth of the spectral distribution data (depth information).
  • the ophthalmologic information processing apparatus can identify a characteristic region in the spectral distribution data of the subject's eye and specify information representing the depth of the identified characteristic region based on measurement data of the subject's eye that has higher resolution in the depth direction than the spectral distribution data.
  • the spectral distribution data is obtained by receiving return light in a predetermined wavelength range from the subject's eye (for example, fundus, anterior segment) illuminated with illumination light.
  • spectral distribution data include a spectral image (spectral fundus image, spectral anterior segment image) as a two-dimensional spectral distribution.
  • spectral images include hyperspectral images, multispectral images, and RGB color images.
  • characteristic regions include blood vessels, optic discs, diseased regions, and abnormal regions.
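  • As an illustration of the data shapes involved, a spectral image such as a hyperspectral image can be viewed as a stack of per-band two-dimensional images, with a full spectrum available at each pixel; characteristic regions are then found from these per-pixel spectra. The sketch below shows only this data layout; the function name and the toy cube are illustrative, not from this publication.

```python
def pixel_spectrum(cube, row, col):
    """Per-pixel spectrum from a hyperspectral cube stored as
    cube[band][row][col] (a list of 2-D images, one per wavelength band)."""
    return [band_image[row][col] for band_image in cube]


# Toy cube: 3 wavelength bands of a 2x2 image.
cube = [
    [[0.1, 0.2], [0.3, 0.4]],   # band 0 (e.g. shortest wavelength)
    [[0.5, 0.6], [0.7, 0.8]],   # band 1
    [[0.9, 1.0], [1.1, 1.2]],   # band 2
]
print(pixel_spectrum(cube, 1, 0))  # [0.3, 0.7, 1.1]
```

A multispectral image has the same layout with fewer bands, and an RGB color image is the three-band special case.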
  • an eye to be inspected is illuminated with illumination light having two or more wavelength components whose wavelength ranges are different from each other, and a plurality of spectral distribution data are acquired by selecting, from the light returning from the eye, return light having wavelength components in a predetermined wavelength range.
  • the subject's eye is sequentially illuminated with illumination light having two or more wavelength components whose wavelength ranges are different from each other, and a plurality of spectral distribution data are acquired by sequentially selecting, from the return light from the subject's eye, return light having wavelength components in a predetermined wavelength range.
  • illumination light having wavelength components in a predetermined wavelength range is sequentially selected from illumination light having two or more wavelength components whose wavelength ranges are different from each other, the eye to be examined is sequentially illuminated with the selected illumination light, and a plurality of spectral distribution data are acquired by sequentially receiving the return light from the subject's eye.
  • illumination light having two or more wavelength components with mutually different wavelength ranges is sequentially emitted using a light source whose wavelength range can be arbitrarily changed, the emitted illumination light sequentially illuminates the subject's eye, and a plurality of spectral distribution data are acquired by sequentially receiving the return light from the subject's eye.
  • a plurality of spectral distribution data are acquired by illuminating the subject's eye with illumination light, sequentially changing the wavelength range in which the light receiving device has high light receiving sensitivity, and sequentially selecting the return light from the subject's eye.
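  • The sequential-acquisition variants described above share a common pattern: select a wavelength band (by the light source, the filter, or the detector sensitivity), capture one frame, and repeat over the analysis wavelength region. The following sketch illustrates that loop only; `set_filter_band`, `capture`, and the band values are illustrative assumptions, not part of this publication.

```python
from dataclasses import dataclass


@dataclass
class SpectralFrame:
    """One spectral distribution datum: a 2-D image plus its wavelength band (nm)."""
    band: tuple   # (lower_nm, upper_nm)
    image: list   # 2-D pixel array


def acquire_spectral_stack(set_filter_band, capture, bands):
    """Sequentially select each wavelength band of the return light and capture one frame.

    `set_filter_band` and `capture` are hypothetical device callbacks standing in for
    a wavelength-selecting element (e.g. a liquid crystal tunable filter) and the
    image sensor; the loop itself is the shared acquisition pattern.
    """
    stack = []
    for band in bands:
        set_filter_band(band)          # restrict transmitted return light to this band
        stack.append(SpectralFrame(band=band, image=capture()))
    return stack


# Example with stub hardware: five 10 nm bands spanning 500-550 nm.
bands = [(500 + 10 * i, 510 + 10 * i) for i in range(5)]
frames = acquire_spectral_stack(lambda b: None, lambda: [[0.0]], bands)
print(len(frames), frames[0].band)  # 5 (500, 510)
```

In a real device the callbacks would drive the tunable filter (or tunable light source) and the sensor readout, and each frame would be a full fundus image.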
  • the depth direction may be the traveling direction of the illumination light that illuminates the subject's eye, the depth direction of the subject's eye, the direction from the superficial layer toward the deep layer of the fundus, or the direction of the measurement optical axis (imaging optical axis) with respect to the subject's eye.
  • OCT: optical coherence tomography
  • AO: adaptive optics
  • the OCT data is obtained, for example, by dividing light from an OCT light source into measurement light and reference light, projecting the measurement light onto the eye to be inspected, and detecting the interference light between the return light of the measurement light from the eye and the reference light that has passed through the reference optical path.
  • the ophthalmic information processing device is configured to acquire OCT data obtained by an externally provided OCT device.
  • the functionality of the ophthalmic information processing device is implemented by an ophthalmic device capable of acquiring OCT data.
  • the ophthalmic information processing device is configured to acquire measurement data obtained by an externally provided AO-SLO device.
  • the functionality of the ophthalmic information processing device is implemented by an ophthalmic device having AO-SLO functionality.
  • a plurality of spectral distribution data are acquired by sequentially receiving return light of the illumination light in mutually different wavelength ranges within a predetermined analysis wavelength region. For a first return light and a second return light whose wavelength ranges are adjacent to each other among the sequentially received return lights, part of the wavelength range of the first return light may overlap the wavelength range of the second return light.
  • characteristic region identification processing is executed for each of the plurality of spectral distribution data. For example, depth information is obtained for the characteristic region in the spectral distribution data in which the characteristic region can be identified with the highest accuracy among the plurality of spectral distribution data, or for the characteristic region in desired spectral distribution data selected by the user or the like from the plurality of spectral distribution data.
  • the ophthalmologic information processing apparatus searches a plurality of front images (en-face images, C-scan images, projection images, OCT angiography images), formed based on OCT data of the subject's eye by projecting or integrating the data over mutually different depth ranges, for the front image having the highest degree of correlation with the spectral distribution data. The ophthalmologic information processing apparatus then identifies the depth information of the found front image as the depth information of the spectral distribution data.
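  • As a concrete illustration of the correlation-based search described above, the sketch below scores each en-face image against the spectral image with a Pearson correlation coefficient and returns the depth information of the best match. This is a minimal sketch: the layer names and pixel values are illustrative assumptions, and a real implementation would operate on registered 2-D images rather than short flat lists.

```python
def pearson(a, b):
    """Pearson correlation coefficient between two equal-length pixel sequences."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / ((va * vb) ** 0.5) if va and vb else 0.0


def find_best_depth(spectral_image, enface_stack):
    """Return the depth info key of the en-face image most correlated with the
    spectral image. `enface_stack` maps depth information (e.g. a depth range or
    layer name) to a front image; images are flattened pixel lists."""
    best = max(enface_stack.items(), key=lambda kv: pearson(spectral_image, kv[1]))
    return best[0]


# Toy example: three en-face "images" at different (illustrative) depths.
spectral = [1, 2, 3, 4, 5, 6]
stack = {
    "ILM-IPL": [6, 5, 4, 3, 2, 1],     # anti-correlated
    "IPL-INL": [2, 4, 6, 8, 10, 12],   # perfectly correlated
    "RPE":     [1, 1, 2, 2, 1, 1],
}
print(find_best_depth(spectral, stack))  # IPL-INL
```

The returned key then serves directly as the depth information assigned to the spectral distribution data (or to the characteristic region within it, for the region-wise variant).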
  • An ophthalmologic information processing method includes one or more steps executed by the ophthalmologic information processing apparatus described above.
  • a program according to an embodiment causes a computer (processor) to execute each step of an ophthalmologic information processing method according to an embodiment.
  • a recording medium according to the embodiment is a non-transitory recording medium (storage medium) in which the program according to the embodiment is recorded.
  • a processor is, for example, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an ASIC (Application Specific Integrated Circuit), or a programmable logic device (for example, an SPLD (Simple Programmable Logic Device), a CPLD (Complex Programmable Logic Device), or an FPGA (Field Programmable Gate Array)).
  • the processor implements the functions according to the embodiment by, for example, reading and executing a program stored in a memory circuit or memory device.
  • a memory circuit or device may be included in the processor. Also, a memory circuit or memory device may be provided external to the processor.
  • the following describes the specification of depth information for a spectral fundus image as spectral distribution data of the fundus of the subject's eye; however, the configuration according to the embodiment is not limited to this. The following embodiments are also applicable to specifying depth information for a spectral anterior segment image as spectral distribution data of a site other than the fundus, such as the anterior segment.
  • the ophthalmologic information processing apparatus is configured to acquire spectral distribution data of an eye to be examined that is externally acquired through a communication function.
  • an ophthalmologic apparatus capable of acquiring spectral distribution data of an eye to be examined has the function of an ophthalmologic information processing apparatus.
  • An ophthalmologic apparatus including the functions of the ophthalmologic information processing apparatus according to the embodiment will be described as an example.
  • An ophthalmologic apparatus according to an embodiment includes an ophthalmologic imaging apparatus.
  • the ophthalmic imaging device included in the ophthalmic device of some embodiments is, for example, any one or more of a fundus camera, a scanning laser ophthalmoscope, a slit lamp ophthalmoscope, a surgical microscope, and the like.
  • An ophthalmic device according to some embodiments includes any one or more of an ophthalmic measurement device and an ophthalmic treatment device in addition to an ophthalmic imaging device.
  • the ophthalmic measurement device included in the ophthalmic device of some embodiments is, for example, any one or more of an eye refractometer, a tonometer, a specular microscope, a wavefront analyzer, a perimeter, a microperimeter, and the like.
  • the ophthalmic treatment device included in the ophthalmic device of some embodiments is, for example, any one or more of a laser treatment device, a surgical device, a surgical microscope, and the like.
  • the ophthalmic device includes an optical coherence tomography (OCT) system and a fundus camera.
  • swept source OCT is applied to this optical coherence tomography; however, the type of OCT is not limited to this, and other types of OCT (spectral domain OCT, time domain OCT, en-face OCT, etc.) may be applied.
  • the x direction is the direction (horizontal direction) perpendicular to the optical axis direction of the objective lens
  • the y direction is the direction (vertical direction) perpendicular to the optical axis direction of the objective lens.
  • the z-direction is assumed to be the optical axis direction of the objective lens.
  • the ophthalmologic apparatus 1 includes a fundus camera unit 2, an OCT unit 100, and an arithmetic control unit 200.
  • the fundus camera unit 2 is provided with an optical system and a mechanism for acquiring a front image of the eye E to be examined.
  • the OCT unit 100 is provided with part of the optical system and mechanisms for performing OCT. The remaining part of the OCT optical system and mechanisms is provided in the fundus camera unit 2.
  • the arithmetic control unit 200 includes one or more processors that perform various arithmetic operations and controls.
  • the ophthalmologic apparatus 1 includes a pair of anterior eye cameras 5A and 5B.
  • the fundus camera unit 2 is provided with an optical system for photographing the fundus Ef of the eye E to be examined.
  • the acquired image of the fundus Ef (referred to as a fundus image, fundus photograph, etc.) is a front image such as an observation image or a captured image. Observation images are obtained by moving-image capture using near-infrared light.
  • the captured image is a still image using flash light or a spectral image (spectral fundus image, spectral anterior segment image).
  • the fundus camera unit 2 can photograph the anterior segment Ea of the subject's eye E to obtain a front image (anterior segment image).
  • the fundus camera unit 2 includes an illumination optical system 10 and an imaging optical system 30.
  • the illumination optical system 10 irradiates the eye E to be inspected with illumination light.
  • the imaging optical system 30 detects return light of the illumination light from the eye E to be examined.
  • the measurement light from the OCT unit 100 is guided to the subject's eye E through the optical path in the fundus camera unit 2, and its return light is guided to the OCT unit 100 through the same optical path.
  • observation illumination light output from the observation light source 11 of the illumination optical system 10 is reflected by a reflecting mirror 12 having a curved reflecting surface, passes through a condenser lens 13, and becomes near-infrared light after passing through a visible light cut filter 14. Furthermore, the observation illumination light is once converged near the photographing light source 15, reflected by a mirror 16, and passes through relay lenses 17 and 18, a diaphragm 19, and a relay lens 20. Then, the observation illumination light is reflected by the periphery of the perforated mirror 21 (the area around its aperture), passes through a dichroic mirror 46, is refracted by the objective lens 22, and illuminates the fundus Ef (or the anterior segment Ea).
  • the return light of the observation illumination light from the subject's eye E is refracted by the objective lens 22, passes through the dichroic mirror 46, passes through the aperture formed in the central region of the perforated mirror 21, passes through the photographing focusing lens 31, and is reflected by a mirror 32. Further, this return light passes through a half mirror 33A, is reflected by a dichroic mirror 33, and is imaged on the light receiving surface of the image sensor 35 by a condenser lens 34. The image sensor 35 detects the return light at a predetermined frame rate. The focus of the imaging optical system 30 is adjusted to the fundus Ef or the anterior segment Ea.
  • the light (imaging illumination light) output from the imaging light source 15 irradiates the fundus oculi Ef through the same path as the observation illumination light.
  • the return light of the photographing illumination light from the subject's eye E is guided to the dichroic mirror 33 through the same path as the return light of the observation illumination light, passes through the dichroic mirror 33, is reflected by a mirror 36, and is guided to the wavelength tunable filter 80.
  • the wavelength tunable filter 80 is a filter that can select the wavelength range of transmitted light in a predetermined analysis wavelength region.
  • the wavelength range of light transmitted through the wavelength tunable filter 80 can be arbitrarily selected.
  • the tunable filter 80 is similar to the liquid crystal tunable filter disclosed in Japanese Patent Application Laid-Open No. 2006-158546, for example.
  • the wavelength tunable filter 80 can arbitrarily select the wavelength selection range of transmitted light by changing the voltage applied to the liquid crystal.
  • the wavelength tunable filter 80 may include two or more wavelength selection filters having mutually different wavelength selection ranges for transmitted light, configured so that the two or more wavelength selection filters can be selectively placed in the optical path of the return light of the illumination light.
  • the wavelength tunable filter 80 is a filter that can select the wavelength range of reflected light in a predetermined analysis wavelength region.
  • Return light from the mirror 36 that has passed through the wavelength tunable filter 80 is imaged on the light receiving surface of the image sensor 38 by the condenser lens 37.
  • tunable filter 80 is placed between dichroic mirror 33 and condenser lens 34 .
  • the tunable filter 80 is configured to be insertable/removable with respect to the optical path between the dichroic mirror 33 or mirror 36 and the condenser lens 37 .
  • by sequentially obtaining, from the image sensor 38, the results of receiving the return light while the wavelength selection range of the wavelength tunable filter 80 is changed, the ophthalmologic apparatus 1 can acquire multiple spectral fundus images.
  • when the wavelength tunable filter 80 is retracted from the optical path between the dichroic mirror 33 and the condenser lens 37, the ophthalmologic apparatus 1 can acquire normal still images (fundus images, anterior segment images) from the results of receiving the return light obtained by the image sensor 38.
  • An image (observation image) based on the fundus reflected light detected by the image sensor 35 is displayed on the display device 3.
  • the display device 3 also displays an image (captured image, spectral fundus image) based on the fundus reflected light detected by the image sensor 38.
  • the display device 3 that displays the observed image and the display device 3 that displays the captured image may be the same or different.
  • An LCD (Liquid Crystal Display) 39 displays a fixation target and a visual acuity measurement target.
  • a part of the light flux output from the LCD 39 is reflected by the half mirror 33A, reflected by the mirror 32, passes through the photographing focusing lens 31, and passes through the aperture of the perforated mirror 21.
  • the light flux that has passed through the aperture of the perforated mirror 21 is transmitted through the dichroic mirror 46, refracted by the objective lens 22, and projected onto the fundus Ef.
  • By changing the display position of the fixation target on the screen of the LCD 39, the fixation position of the subject's eye E can be changed.
  • fixation positions include a fixation position for acquiring an image centered on the macula, a fixation position for acquiring an image centered on the optic disc, a fixation position for acquiring an image centered on the fundus center between the macula and the optic disc, and a fixation position for acquiring an image of a site far from the macula (fundus periphery).
  • the ophthalmologic apparatus 1 includes a GUI (Graphical User Interface) or the like for designating at least one of such fixation positions.
  • the ophthalmologic apparatus 1 includes a GUI or the like for manually moving the fixation position (the display position of the fixation target).
  • a movable fixation target can be generated by selectively lighting multiple light sources in a light source array (such as a light emitting diode (LED) array). Also, one or more movable light sources can generate a movable fixation target.
  • the focus optical system 60 generates a split index used for focus adjustment of the eye E to be examined.
  • the focus optical system 60 is moved along the optical path (illumination optical path) of the illumination optical system 10 in conjunction with the movement of the imaging focusing lens 31 along the optical path (imaging optical path) of the imaging optical system 30.
  • the reflecting bar 67 can be inserted into and removed from the illumination optical path. When performing focus adjustment, the reflecting surface of the reflecting bar 67 is arranged at an angle in the illumination optical path.
  • Focus light output from the LED 61 passes through a relay lens 62, is split into two light beams by a split index plate 63, passes through a two-hole diaphragm 64, is reflected by a mirror 65, is once imaged on the reflecting surface of the reflecting rod 67 by a condenser lens 66, and is then reflected. Further, the focus light passes through the relay lens 20, is reflected by the perforated mirror 21, passes through the dichroic mirror 46, is refracted by the objective lens 22, and is projected onto the fundus Ef. The fundus reflected light of the focus light is guided to the image sensor 35 through the same path as the return light of the observation illumination light. Manual focus and autofocus can be performed based on the received light image (split index image).
  • the dichroic mirror 46 synthesizes the fundus imaging optical path and the OCT optical path.
  • the dichroic mirror 46 reflects light in the wavelength band used for OCT and transmits light for fundus imaging.
  • the optical path for OCT (the optical path of the measurement light) includes, in order from the OCT unit 100 side toward the dichroic mirror 46 side, a collimator lens unit 40, an optical path length changing section 41, an optical scanner 42, an OCT focusing lens 43, a mirror 44, and a relay lens 45 are provided.
  • the optical path length changing unit 41 is movable in the direction of the arrow shown in FIG. 1, and changes the length of the OCT optical path. This change in optical path length is used for optical path length correction according to the axial length of the eye, adjustment of the state of interference, and the like.
  • the optical path length changing section 41 includes a corner cube and a mechanism for moving it.
  • the optical scanner 42 is arranged at a position optically conjugate with the pupil of the eye E to be examined.
  • the optical scanner 42 deflects the measurement light LS passing through the OCT optical path.
  • the optical scanner 42 is, for example, a galvanometer scanner capable of two-dimensional scanning.
  • the OCT focusing lens 43 is moved along the optical path of the measurement light LS in order to adjust the focus of the OCT optical system. Movement of the imaging focusing lens 31, movement of the focusing optical system 60, and movement of the OCT focusing lens 43 can be controlled in a coordinated manner.
  • anterior segment cameras 5A and 5B are used to determine the relative position between the optical system of the ophthalmologic apparatus 1 and the subject's eye E, similar to the invention disclosed in Japanese Patent Laid-Open No. 2013-248376.
  • the anterior eye cameras 5A and 5B are provided on the surface, on the subject's eye E side, of the housing (fundus camera unit 2, etc.) that houses the optical system.
  • the ophthalmologic apparatus 1 obtains the three-dimensional relative position between the optical system and the subject's eye E by analyzing two anterior segment images obtained substantially simultaneously from different directions by the anterior segment cameras 5A and 5B.
  • the analysis of the two anterior segment images may be similar to the analysis disclosed in Japanese Patent Application Laid-Open No. 2013-248376.
  • the number of anterior segment cameras may be any number of two or more.
  • the method of obtaining the position of the eye E to be examined (that is, the relative position between the eye E to be examined and the optical system) is not limited to the use of two or more anterior segment cameras.
  • the position of the eye E to be examined can be obtained by analyzing a front image of the eye E to be examined (for example, an observed image of the anterior segment Ea).
  • means for projecting an index onto the cornea of the subject's eye E can be provided, and the position of the subject's eye E can be obtained based on the projection position of this index (that is, the detection state of the corneal reflected light flux of this index).
  • the OCT unit 100 is provided with an optical system for performing swept-source OCT.
  • This optical system includes an interference optical system.
  • This interference optical system has a function of dividing light from a wavelength tunable light source (wavelength swept light source) into measurement light and reference light, a function of generating interference light by superposing the return light of the measurement light from the subject's eye E on the reference light that has passed through the reference optical path, and a function of detecting this interference light.
  • a detection result (detection signal) of the interference light obtained by the interference optical system is a signal indicating the spectrum of the interference light, and is sent to the arithmetic control unit 200 .
  • the light source unit 101 includes, for example, a near-infrared tunable laser that changes the wavelength of emitted light at high speed.
  • the light L0 output from the light source unit 101 is guided to the polarization controller 103 by the optical fiber 102, and the polarization state is adjusted.
  • the light L0 whose polarization state has been adjusted is guided by the optical fiber 104 to the fiber coupler 105 and split into the measurement light LS and the reference light LR.
  • the reference light LR is guided to the collimator 111 by the optical fiber 110, converted into a parallel beam, passed through the optical path length correction member 112 and the dispersion compensation member 113, and guided to the corner cube 114.
  • the optical path length correction member 112 acts to match the optical path length of the reference light LR and the optical path length of the measurement light LS.
  • the dispersion compensation member 113 acts to match the dispersion characteristics between the reference light LR and the measurement light LS.
  • the corner cube 114 is movable in the incident direction of the reference light LR, thereby changing the optical path length of the reference light LR.
  • the reference light LR that has passed through the corner cube 114 passes through the dispersion compensating member 113 and the optical path length correcting member 112 , is converted by the collimator 116 from a parallel beam into a converged beam, and enters the optical fiber 117 .
  • the reference light LR incident on the optical fiber 117 is guided to the polarization controller 118 to have its polarization state adjusted, guided to the attenuator 120 via the optical fiber 119 to have its light amount adjusted, and guided to the fiber coupler 122 via the optical fiber 121.
  • the measurement light LS generated by the fiber coupler 105 is guided by the optical fiber 127, converted into a parallel light beam by the collimator lens unit 40, and travels via the optical path length changing unit 41, the optical scanner 42, the OCT focusing lens 43, the mirror 44, and the relay lens 45.
  • the measurement light LS that has passed through the relay lens 45 is reflected by the dichroic mirror 46, refracted by the objective lens 22, and enters the eye E to be examined.
  • the measurement light LS is scattered and reflected at various depth positions of the eye E to be examined.
  • the return light of the measurement light LS from the subject's eye E travels in the opposite direction along the same path as the forward path, is guided to the fiber coupler 105 , and reaches the fiber coupler 122 via the optical fiber 128 .
  • the incident end of the optical fiber 127 into which the measurement light LS enters is arranged at a position substantially conjugate with the fundus oculi Ef of the eye E to be examined.
  • the fiber coupler 122 combines (interferes) the measurement light LS that has entered via the optical fiber 128 and the reference light LR that has entered via the optical fiber 121 to generate interference light.
  • the fiber coupler 122 generates a pair of interference lights LC by splitting the interference light at a predetermined splitting ratio (for example, 1:1).
  • a pair of interference lights LC are guided to detector 125 through optical fibers 123 and 124, respectively.
  • the detector 125 is, for example, a balanced photodiode.
  • a balanced photodiode includes a pair of photodetectors that respectively detect a pair of interference lights LC, and outputs a difference between a pair of detection results obtained by these photodetectors.
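The common-mode rejection performed by such a balanced photodiode can be illustrated with a minimal numerical sketch (illustrative only; the signal shapes and amplitudes are assumed, not taken from the disclosure):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
fringe = np.cos(2 * np.pi * 5 * np.arange(n) / n)   # interference (fringe) term
dc = 2.0                                            # background common to both ports
noise = 0.05 * rng.standard_normal(n)               # intensity noise common to both ports

# The two outputs of a 1:1 coupler carry the fringe with opposite sign,
# while the DC background and intensity noise appear identically in both.
port1 = dc + noise + 0.5 * fringe
port2 = dc + noise - 0.5 * fringe

balanced = port1 - port2   # difference output of the balanced photodiode

# The common-mode terms cancel and the full fringe amplitude remains.
assert np.allclose(balanced, fringe)
```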
  • the detector 125 sends this output (detection signal) to a DAQ (Data Acquisition System) 130 .
  • a clock KC is supplied from the light source unit 101 to the DAQ 130 .
  • the clock KC is generated in the light source unit 101 in synchronization with the output timing of each wavelength swept within a predetermined wavelength range by the wavelength tunable light source.
  • the light source unit 101, for example, optically delays one of two branched lights obtained by branching the light L0 of each output wavelength, and then generates the clock KC based on the result of detecting the combined light of these.
  • the DAQ 130 samples the detection signal input from the detector 125 based on the clock KC.
  • DAQ 130 sends the sampling result of the detection signal from detector 125 to arithmetic control unit 200 .
  • an optical path length changing unit 41 for changing the length of the optical path of the measurement light LS (measurement optical path, measurement arm) and a corner cube 114 for changing the length of the optical path of the reference light LR (reference optical path, reference arm) are provided.
  • only one of the optical path length changing unit 41 and the corner cube 114 may be provided. It is also possible to change the difference between the measurement optical path length and the reference optical path length by using an optical member other than these.
  • Control system: FIGS. 3 to 5 show configuration examples of the control system of the ophthalmologic apparatus 1.
  • In FIGS. 3 to 5, some of the components included in the ophthalmologic apparatus 1 are omitted.
  • the same parts as those in FIGS. 1 and 2 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
  • the control section 210, the image forming section 220 and the data processing section 230 are provided in the arithmetic control unit 200, for example.
  • Control unit 210 executes various controls.
  • Control unit 210 includes main control unit 211 and storage unit 212 .
  • the main controller 211 includes a processor (e.g., a control processor) and controls each part of the ophthalmologic apparatus 1 (including each element shown in FIGS. 1 to 5).
  • the main control unit 211 controls each part of the optical system of the retinal camera unit 2 and the OCT unit 100 shown in FIGS. 1 and 2, as well as the movement mechanism 150, the image forming section 220, the data processing section 230, and the user interface (UI) 240.
  • the control over the retinal camera unit 2 includes control over the focus driving units 31A and 43A, control over the wavelength tunable filter 80, control over the image sensors 35 and 38, control over the optical path length changing unit 41, and control over the optical scanner 42.
  • the control for the focus drive unit 31A includes control for moving the photographing focus lens 31 in the optical axis direction.
  • the control for the focus drive unit 43A includes control for moving the OCT focus lens 43 in the optical axis direction.
  • the control over the wavelength tunable filter 80 includes selection control of the wavelength range of transmitted light (for example, control of voltage applied to the liquid crystal).
  • the control of the image sensors 35 and 38 includes control of the light receiving sensitivity of the imaging element, control of the frame rate (light receiving timing, exposure time), control of the light receiving area (position, shape, size), readout control of the light receiving results of the imaging element, and the like.
  • the image sensors 35 and 38 are controlled by changing the exposure time according to the wavelength range of the return light so that the received light intensity is uniform across the wavelength ranges of the analysis wavelength range in which the plurality of spectral fundus images are acquired.
  • the main control unit 211 controls the intensity of the wavelength components of the illumination light in each wavelength range so that the received light intensity is uniform across the wavelength ranges of the analysis wavelength range in which the plurality of spectral fundus images are acquired.
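The exposure-time compensation described above can be sketched as follows; the per-band relative intensities are hypothetical values chosen for illustration:

```python
import numpy as np

# Hypothetical relative return-light intensities per wavelength band
# (illustrative values, not from the specification).
relative_intensity = np.array([1.0, 0.8, 0.5, 0.25])

base_exposure_ms = 10.0
# An exposure time inversely proportional to the intensity equalizes
# the received light energy across the analysis wavelength range.
exposure_ms = base_exposure_ms / relative_intensity

received = relative_intensity * exposure_ms
assert np.allclose(received, received[0])   # uniform received intensity
```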
  • Control over the LCD 39 includes control of the fixation position.
  • the main control unit 211 displays the fixation target at a position on the screen of the LCD 39 corresponding to the fixation position set manually or automatically. Further, the main control unit 211 can change (continuously or stepwise) the display position of the fixation target displayed on the LCD 39 . Thereby, the fixation target can be moved (that is, the fixation position can be changed).
  • the display position and movement mode of the fixation target are set manually or automatically. Manual setting is performed using, for example, a GUI. Automatic setting is performed by the data processing unit 230, for example.
  • the control over the optical path length changing unit 41 includes control for changing the optical path length of the measurement light LS.
  • the main control unit 211 moves the corner cube of the optical path length changing unit 41 along the optical path of the measurement light LS by controlling its driving unit, thereby changing the optical path length of the measurement light LS.
  • Control of the optical scanner 42 includes control of scan mode, scan range (scan start position, scan end position), scan speed, and the like.
  • the main control unit 211 can perform an OCT scan with the measurement light LS on a desired region of the measurement site (imaging site).
  • the main control unit 211 also controls the observation light source 11, the photographing light source 15, the focus optical system 60, and the like.
  • Control over the OCT unit 100 includes control over the light source unit 101, control over the reference driver 114A, control over the detector 125, and control over the DAQ 130.
  • the control of the light source unit 101 includes control of turning the light source on and off, control of the amount of light emitted from the light source, control of the wavelength sweep range and wavelength sweep speed, control of the emission timing of light of each wavelength component, and the like.
  • the control over the reference driver 114A includes control to change the optical path length of the reference light LR.
  • the main control unit 211 moves the corner cube 114 along the optical path of the reference light LR by controlling the reference driving unit 114A to change the optical path length of the reference light LR.
  • the control of the detector 125 includes control of the light receiving sensitivity of the detecting element, control of the frame rate (light receiving timing), control of the light receiving area (position, shape, size), readout control of the light receiving results of the detecting element, and the like.
  • Control over the DAQ 130 includes fetch control (fetch timing, sampling timing) of the detection result of interference light obtained by the detector 125, readout control of the interference signal corresponding to the detection result of the fetched interference light, and the like.
  • the control for the anterior eye cameras 5A and 5B includes control of the light receiving sensitivity of each camera, frame rate (light receiving timing) control, synchronization control of the anterior eye cameras 5A and 5B, and the like.
  • the movement mechanism 150 for example, three-dimensionally moves at least the retinal camera unit 2 (optical system).
  • the movement mechanism 150 includes at least a mechanism for moving the retinal camera unit 2 in the x direction (horizontal direction), a mechanism for moving it in the y direction (vertical direction), and a mechanism for moving it in the z direction (depth direction, back-and-forth direction).
  • the mechanism for moving in the x-direction includes, for example, an x-stage movable in the x-direction and an x-moving mechanism for moving the x-stage.
  • the mechanism for moving in the y-direction includes, for example, a y-stage movable in the y-direction and a y-moving mechanism for moving the y-stage.
  • the mechanism for moving in the z-direction includes, for example, a z-stage movable in the z-direction and a z-moving mechanism for moving the z-stage.
  • Each movement mechanism includes a pulse motor as an actuator and operates under control from the main control unit 211 .
  • the control over the moving mechanism 150 is used in alignment and tracking. Tracking is a function of moving the apparatus optical system to follow the eye movement of the eye E to be examined, thereby maintaining the suitable positional relationship in which alignment and focus have been achieved; alignment and focus adjustment are performed before tracking is started. Some embodiments are configured to control the movement mechanism 150 to change the optical path length of the reference light (and thus the optical path length difference between the optical path of the measurement light and the optical path of the reference light).
  • the user relatively moves the optical system and the subject's eye E by operating the user interface 240 so that the displacement of the subject's eye E with respect to the optical system is cancelled.
  • the main control unit 211 controls the moving mechanism 150 to move the optical system relative to the eye E by outputting a control signal corresponding to the operation content of the user interface 240 to the moving mechanism 150 .
  • the main control unit 211 controls the movement mechanism 150 so that the displacement of the eye E to be examined with respect to the optical system is canceled, thereby moving the optical system relative to the eye E to be examined.
  • arithmetic processing using trigonometry based on the positional relationship between the pair of anterior eye cameras 5A and 5B and the subject's eye E is performed, and the main control unit 211 controls the moving mechanism 150 so that the eye E to be examined has a predetermined positional relationship with respect to the optical system.
  • the main controller 211 outputs a control signal such that the optical axis of the optical system substantially coincides with the axis of the eye E to be examined and the distance of the optical system from the eye E to be examined is a predetermined working distance.
  • the working distance is a predetermined value, also called the working distance of the objective lens 22, and corresponds to the distance between the subject's eye E and the optical system at the time of measurement (at the time of photographing) using the optical system.
  • the main control unit 211, acting as a display control unit, can display various information on the display unit 240A.
  • the main control unit 211 causes the display unit 240A to display a plurality of spectral fundus images in association with wavelength ranges.
  • the main control unit 211 causes the display unit 240A to display analysis processing results obtained by the analysis unit 231, which will be described later.
  • the storage unit 212 stores various data.
  • the function of the storage unit 212 is implemented by a storage device such as a memory or a storage device.
  • the data stored in the storage unit 212 includes, for example, control parameters, fundus image data, anterior segment image data, OCT data (including OCT images), spectral image data of the fundus, spectral image data of the anterior segment, information on the eye to be examined, and the like.
  • Control parameters include hyperspectral imaging control data and the like.
  • the hyperspectral imaging control data is control data for acquiring a plurality of fundus images based on return lights with different central wavelengths within a predetermined analysis wavelength range.
  • examples of the hyperspectral imaging control data include the analysis wavelength range in which the plurality of spectral fundus images are acquired, the wavelength range in which each spectral fundus image is acquired, the center wavelength, the center wavelength step, control data for the wavelength tunable filter 80 corresponding to each center wavelength, and the like.
  • the eye information to be examined includes information about the subject such as patient ID and name, information about the eye to be examined such as left/right eye identification information, and electronic medical record information.
  • the storage unit 212 stores programs executed by the various processors (control processor, image forming processor, data processing processor).
  • the image forming unit 220 includes a processor (for example, an image forming processor), and forms an OCT image (image data) of the subject's eye E based on the output from the DAQ 130 (the sampling results of the detection signal). For example, as in conventional swept-source OCT, the image forming unit 220 performs signal processing on the spectral distribution based on the sampling results for each A-line, forms a reflection intensity profile for each A-line, images these A-line profiles, and arranges them along the scan lines.
  • the signal processing includes noise removal (noise reduction), filtering, FFT (Fast Fourier Transform), and the like.
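The per-A-line processing described above (DC removal, filtering, FFT) can be sketched as follows; the synthetic interferogram and the window choice are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

def a_line_profile(spectral_samples: np.ndarray) -> np.ndarray:
    """Turn one A-line's sampled spectrum into a reflection intensity profile."""
    s = spectral_samples - spectral_samples.mean()   # remove the DC term
    s = s * np.hanning(len(s))                       # window to suppress sidelobes
    depth = np.fft.fft(s)
    return np.abs(depth[: len(s) // 2])              # keep the positive-depth half

# Synthetic interferogram: a single reflector produces one spectral fringe.
k = np.arange(1024)
interferogram = 1.0 + 0.5 * np.cos(2 * np.pi * 100 * k / 1024)
profile = a_line_profile(interferogram)
assert profile.argmax() == 100   # peak appears at the reflector's depth bin
```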
  • when another OCT technique is employed, the image forming section 220 performs known processing according to that technique.
  • the data processing unit 230 includes a processor (for example, a data processing processor) and performs image processing and analysis processing on the image formed by the image forming unit 220 . At least two of the processor included in the main control unit 211, the processor included in the data processing unit 230, and the processor included in the image forming unit 220 may be configured by a single processor.
  • the data processing unit 230 executes known image processing such as interpolation processing for interpolating pixels between tomographic images to form image data of a three-dimensional image of the fundus oculi Ef or the anterior segment Ea.
  • image data of a three-dimensional image means image data in which pixel positions are defined by a three-dimensional coordinate system.
  • Image data of a three-dimensional image includes image data composed of voxels arranged three-dimensionally. This image data is called volume data or voxel data.
  • the data processing unit 230 can perform rendering processing (volume rendering, MIP (Maximum Intensity Projection), etc.) on this volume data to form image data of a pseudo three-dimensional image. This pseudo three-dimensional image is displayed on a display device such as the display unit 240A.
  • stack data of a plurality of tomographic images is another example of image data of a three-dimensional image.
  • Stacked data is image data obtained by three-dimensionally arranging a plurality of tomographic images obtained along a plurality of scan lines based on the positional relationship of the scan lines. That is, stack data is image data obtained by expressing a plurality of tomographic images, which were originally defined by individual two-dimensional coordinate systems, by one three-dimensional coordinate system (that is, embedding them in one three-dimensional space).
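The embedding of individual two-dimensional tomographic images into a single three-dimensional coordinate system can be sketched as follows (array sizes are hypothetical):

```python
import numpy as np

# Hypothetical B-scan frames: 5 parallel scan lines, each a 64x128 tomographic
# image (depth x transverse). Stacking them along the scan-line axis, in the
# order given by the scan-line positions, embeds the individually-defined
# 2-D images in one 3-D coordinate system.
b_scans = [np.zeros((64, 128)) for _ in range(5)]
volume = np.stack(b_scans, axis=0)   # shape: (scan line, depth, transverse)

assert volume.shape == (5, 64, 128)
```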
  • the data processing unit 230 generates a B-scan image by arranging the A-scan images in the B-scan direction. In some embodiments, the data processing unit 230 performs various renderings on the acquired three-dimensional data set (volume data, stack data, etc.) to form a B-mode image (B-scan image; longitudinal cross-sectional image, axial cross-sectional image) at an arbitrary cross section, a C-mode image (C-scan image; transverse cross-sectional image, horizontal cross-sectional image) at an arbitrary cross section, a projection image, a shadowgram, and the like.
  • An arbitrary cross-sectional image such as a B-scan image or a C-scan image, is formed by selecting pixels (pixels, voxels) on a specified cross-section from a three-dimensional data set.
  • a projection image is formed by projecting a three-dimensional data set in a predetermined direction (z direction, depth direction, axial direction).
  • a shadowgram is formed by projecting a portion of the three-dimensional data set (for example, partial data corresponding to a specific layer) in a predetermined direction. By changing the depth range in the layer direction to be integrated, it is possible to form two or more different shadowgrams.
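Projection images and shadowgrams as described above can be sketched as depth-axis integrations over the full or a partial depth range (the volume and depth range are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
volume = rng.random((64, 256, 256))   # (depth z, y, x) three-dimensional data set

# Projection image: integrate the full depth range along z.
projection = volume.sum(axis=0)

# Shadowgram: integrate only a partial depth range (e.g. a specific layer).
z0, z1 = 10, 20
shadowgram = volume[z0:z1].sum(axis=0)

assert projection.shape == (256, 256) and shadowgram.shape == (256, 256)
```

Changing `z0` and `z1` changes the integrated layer, which is how two or more different shadowgrams are formed from one data set.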
  • An image such as a C-scan image, a projection image, or a shadowgram whose viewpoint is the front side of the subject's eye is called an en-face image.
  • the data processing unit 230 can construct B-scan images and front images (blood-vessel-enhanced images, angiograms) in which retinal vessels and choroidal vessels are emphasized, based on data (for example, B-scan image data) collected in time series by OCT.
  • time-series OCT data can be collected by repeatedly scanning substantially the same portion of the eye E to be examined.
  • the data processing unit 230 compares time-series B-scan images obtained by B-scans of substantially the same site, and converts the pixel values of the portions where the signal intensity changes to the pixel values corresponding to the changes.
  • An enhanced image in which the changed portion is emphasized is constructed by the conversion.
  • the data processing unit 230 extracts information for a predetermined thickness in a desired region from the constructed multiple enhanced images and constructs an en-face image to form an OCTA (angiography) image.
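The motion-contrast idea behind this OCTA processing, emphasizing pixels whose signal changes between repeated B-scans of substantially the same site, can be sketched as follows (the synthetic data and the use of temporal variance as the change measure are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
h, w, repeats = 64, 128, 4
static = rng.random((h, w))                # stationary tissue signal
vessel = np.zeros((h, w), dtype=bool)
vessel[30:34, :] = True                    # hypothetical vessel rows

# Repeated B-scans of substantially the same site: only vessel pixels
# fluctuate between acquisitions (here, due to simulated flow).
frames = np.stack([
    static + 0.3 * rng.standard_normal((h, w)) * vessel
    for _ in range(repeats)
])

# Motion contrast: temporal variance emphasizes pixels whose signal changed.
contrast = frames.var(axis=0)
assert contrast[vessel].mean() > contrast[~vessel].mean()
```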
  • Such a data processing unit 230 includes an analysis unit 231.
  • the analysis section 231 includes a characteristic site identification section 231A, a three-dimensional position calculation section 231B, and a spectral distribution data processing section 231C.
  • the analysis unit 231 can analyze the image (including the spectroscopic fundus image) of the subject's eye E to identify the characteristic regions depicted in the image. For example, the analysis unit 231 obtains the three-dimensional position of the subject's eye E based on the positions of the anterior eye cameras 5A and 5B and the positions of the specified characteristic regions.
  • the main control unit 211 aligns the optical system with respect to the eye to be examined E by relatively moving the optical system with respect to the eye to be examined E based on the determined three-dimensional position.
  • the analysis unit 231 can perform predetermined analysis processing on a plurality of spectral fundus images.
  • examples of the predetermined analysis processing include comparison processing of any two images among the plurality of spectral fundus images, processing of extracting a common region or a difference region specified by the comparison processing, processing of specifying a region of interest in at least one of the plurality of spectral fundus images, and the like.
  • the analysis unit 231 identifies a characteristic region in any of the plurality of spectral fundus images, and identifies depth information of the identified characteristic region based on OCT data as measurement data. In some embodiments, the analysis unit 231 aligns the plurality of spectral fundus images based on the OCT data so that corresponding parts of the spectral fundus images are registered in the z direction, and can identify a characteristic region in any of the aligned spectral fundus images.
  • the characteristic site identification unit 231A analyzes each of the captured images obtained by the anterior segment cameras 5A and 5B to identify positions (referred to as characteristic positions) in the captured images that correspond to a characteristic site of the anterior segment Ea. For example, the pupil region of the subject's eye E, the pupil center position, the corneal center position, the corneal vertex position, the center position of the subject's eye, or the iris is used as the characteristic site. A specific example of processing for identifying the pupil center position of the eye E to be examined is described below.
  • the characteristic part specifying unit 231A specifies an image region (pupil region) corresponding to the pupil of the subject's eye E based on the distribution of pixel values (such as luminance values) of the captured image. Since the pupil is generally drawn with lower luminance than other parts, the pupil region can be identified by searching for the low-luminance image region. At this time, the pupil region may be specified in consideration of the shape of the pupil. In other words, the pupil region can be identified by searching for a substantially circular low-brightness image region.
  • the characteristic part identifying section 231A identifies the center position of the identified pupil region. Since the pupil is substantially circular as described above, it is possible to specify the contour of the pupil region, specify the center position of this contour (as an approximate circle or approximate ellipse), and set this as the pupil center position. Alternatively, the center of gravity of the pupil region may be obtained and its position specified as the pupil center position.
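The centroid-based variant of this pupil-center identification can be sketched as follows (the threshold value and the synthetic image are hypothetical):

```python
import numpy as np

def pupil_center(image: np.ndarray, threshold: float) -> tuple[float, float]:
    """Center of gravity of the low-luminance (pupil) region of an image."""
    mask = image < threshold        # the pupil is darker than surrounding tissue
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()     # centroid of the pupil region

# Synthetic anterior-segment image: bright background, dark disc as the pupil.
yy, xx = np.mgrid[0:200, 0:200]
image = np.full((200, 200), 200.0)
image[(xx - 120) ** 2 + (yy - 80) ** 2 < 30 ** 2] = 20.0

cx, cy = pupil_center(image, threshold=100.0)
assert abs(cx - 120) < 1 and abs(cy - 80) < 1
```

In practice the low-luminance search would also consider the roughly circular shape of the region, as described above.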
  • the characteristic part identifying unit 231A can sequentially identify characteristic positions corresponding to characteristic parts in the captured images sequentially obtained by the anterior eye cameras 5A and 5B. In addition, the characteristic part identification unit 231A may identify the characteristic position every one or more frames of the captured images sequentially obtained by the anterior eye cameras 5A and 5B.
  • the three-dimensional position calculation unit 231B identifies, as the three-dimensional position of the subject's eye E, the three-dimensional position of the characteristic site based on the positions of the anterior eye cameras 5A and 5B and the characteristic positions corresponding to the characteristic site identified by the characteristic site identification unit 231A.
  • the three-dimensional position calculation unit 231B calculates the three-dimensional position of the subject's eye E by applying a known trigonometric method to the (known) positions of the two anterior eye cameras 5A and 5B and to the characteristic positions corresponding to the characteristic site in the two captured images.
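The trigonometric calculation can be illustrated for the simplified case of a rectified pair of cameras with a known baseline; the geometry and numbers are hypothetical, and the actual calculation depends on the camera arrangement:

```python
def triangulate(xl: float, xr: float, y: float,
                focal_px: float, baseline_mm: float) -> tuple[float, float, float]:
    """Recover a 3-D point from a rectified stereo pair with known geometry.

    xl, xr: horizontal image coordinates of the same feature in the left/right
    camera (pixels, relative to each optical axis); y: vertical coordinate.
    """
    disparity = xl - xr
    z = focal_px * baseline_mm / disparity        # depth from similar triangles
    x = xl * z / focal_px - baseline_mm / 2.0     # relative to the baseline midpoint
    yc = y * z / focal_px
    return x, yc, z

x, y, z = triangulate(xl=60.0, xr=40.0, y=10.0, focal_px=1000.0, baseline_mm=50.0)
assert abs(z - 2500.0) < 1e-9   # a 20 px disparity places the feature at z = 2500 mm
```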
  • the three-dimensional position calculated by the three-dimensional position calculator 231B is sent to the main controller 211.
  • the main control unit 211 controls the moving mechanism 150 so that the x- and y-direction positions of the optical axis of the optical system coincide with the x- and y-direction positions of the calculated three-dimensional position, and so that the distance in the z direction becomes a predetermined working distance.
  • the spectral distribution data processing unit 231C executes a process of specifying depth information of the spectral distribution data based on the OCT data.
  • the spectral distribution data processing unit 231C executes a process of identifying a characteristic region in the spectral fundus image as the spectral distribution data and identifying depth information of the identified characteristic region.
  • the spectral distribution data processing unit 231C can estimate the presence or absence of a disease, the probability of the disease, or the type of the disease based on the characteristic regions specified by the above processing.
  • the spectral distribution data processing unit 231C can highly accurately estimate the presence or absence of a disease based on the feature region for which the depth information has been specified by the above processing.
  • the spectral distribution data processing section 231C includes a characteristic region identifying section 2311C, a depth information identifying section 2312C, and a disease estimating section 2314C.
  • the depth information specifying section 2312C includes a searching section 2313C.
  • the characteristic region identifying section 2311C identifies a characteristic region in the spectral fundus image.
  • characteristic regions include blood vessels, diseased regions, optic nerve papilla, abnormal regions, regions characterized by changes in pixel luminance, and the like.
  • the characteristic region identifying section 2311C may identify the characteristic region designated using the operation section 240B of the user interface 240 as the characteristic region in the spectral distribution data.
  • the characteristic region identifying unit 2311C identifies characteristic regions for each of a plurality of spectral fundus images.
  • a feature region may be two or more regions.
  • the characteristic region identifying unit 2311C identifies characteristic regions for one or more spectral fundus images selected from a plurality of spectral fundus images.
  • the characteristic region identifying unit 2311C performs principal component analysis on the spectral fundus image and identifies characteristic regions using the principal component analysis results. For example, in the principal component analysis of a spectroscopic fundus image, principal components of one or more dimensions are sequentially identified so as to maximize the variance (variation). Each principal component reflects a characteristic region (characteristic part).
  • the characteristic region specifying unit 2311C first calculates the center of gravity (average value) of all the data of the spectral fundus image, and specifies the direction in which the variance of the data from the calculated center of gravity is maximum as the first principal component. It then specifies, as the second principal component, the direction with the maximum variance orthogonal to the identified first principal component. Subsequently, the characteristic region identifying unit 2311C identifies the (n+1)-th principal component having the maximum variance in the direction orthogonal to the most recently identified n-th principal component (n being an integer of 2 or more), and thus identifies principal components sequentially up to a predetermined dimension.
  • a method of specifying a characteristic region by applying principal component analysis to such a spectroscopic fundus image is exemplified in Japanese Patent Application Laid-Open No. 2007-330558, for example.
  • In one example, the first principal component reflects the underlying retinal shape, the second principal component reflects the choroidal vessels, the third principal component reflects the retinal veins, and the fifth principal component reflects the entire retinal blood vessels.
  • In this case, the component representing the retinal arteries can be extracted by removing the third principal component, representing the retinal veins, from the fifth principal component, representing the entire retinal blood vessels. That is, each principal component obtained by the principal component analysis reflects a characteristic region (characteristic part) in the spectral distribution data, and a characteristic region in the spectral distribution data can be specified using the principal component analysis results.
  • the characteristic region specifying unit 2311C uses at least one of an eigenvalue, a contribution rate, and a cumulative contribution rate corresponding to each principal component obtained by principal component analysis of the spectral fundus image to identify characteristic regions in the image.
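The sequential variance-maximization procedure described above can be sketched in a few lines (an illustrative Python sketch computing principal components of a spectral image stack via SVD; the function name, array layout, and use of SVD in place of explicit iteration are assumptions for illustration, not the disclosed implementation):

```python
import numpy as np

def spectral_principal_components(spectral_stack, n_components=5):
    """Decompose a stack of spectral fundus images into principal components.

    spectral_stack: array of shape (n_wavelengths, height, width); each slice
    is the fundus image acquired in one wavelength range (assumed layout).
    Returns component images of shape (n_components, height, width).
    """
    n_wl, h, w = spectral_stack.shape
    # Treat each pixel as an observation with n_wavelengths variables.
    data = spectral_stack.reshape(n_wl, -1).T           # (pixels, wavelengths)
    data = data - data.mean(axis=0)                     # center on the centroid
    # SVD yields the variance-maximizing directions, ordered by variance,
    # each orthogonal to the previously identified ones.
    _, _, vt = np.linalg.svd(data, full_matrices=False)
    # Project pixels onto each component and reshape back into images.
    scores = data @ vt[:n_components].T                 # (pixels, n_components)
    return scores.T.reshape(n_components, h, w)
```

Each returned component image highlights one characteristic part of the fundus (e.g. vessels), consistent with the example components listed above.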
  • the characteristic region identifying unit 2311C identifies characteristic regions based on comparison results obtained by comparing a plurality of spectral fundus images. For example, the characteristic region identifying unit 2311C identifies a characteristic region by comparing two spectral fundus images whose wavelength ranges are adjacent to each other. For example, the characteristic region identifying unit 2311C identifies a characteristic region by comparing spectral fundus images in two predetermined wavelength ranges.
  • the depth information specifying section 2312C specifies depth information of the feature area specified by the feature area specifying section 2311C.
  • Examples of depth information include information representing a position in the depth direction (the direction of the measurement optical axis) relative to a predetermined reference site, information representing a range of positions in the depth direction, information representing a layer region, and information representing a tissue.
  • Examples of the predetermined reference site include the fundus surface of the eye to be examined, a predetermined layer region forming the retina of the eye to be examined, the corneal vertex of the eye to be examined, the site where the reflected light intensity of the eye to be examined is maximized, and the anterior segment of the eye to be examined.
  • the depth information identifying unit 2312C identifies depth information of the characteristic region identified by the characteristic region identifying unit 2311C using OCT data with higher resolution in the depth direction than the spectral distribution data. Specifically, the depth information specifying unit 2312C searches the OCT data for a region having the highest degree of correlation with the feature region specified by the feature region specifying unit 2311C. The depth information specifying unit 2312C specifies the depth information in the searched OCT data area as the depth information of the feature area specified by the feature area specifying unit 2311C.
  • the main control unit 211 causes the display unit 240A to display the spectral fundus image (spectral distribution data) and the depth information specified by the depth information specifying unit 2312C. At this time, the main control unit 211 can display the OCT data corresponding to the depth information together with the spectral fundus image and the depth information on the display unit 240A.
  • the searching unit 2313C searches the OCT data (for example, three-dimensional OCT data) for a region having the highest degree of correlation with the spectroscopic fundus image in a predetermined wavelength range.
  • the searching unit 2313C obtains a plurality of degrees of correlation between each of the plurality of regions of the OCT data and the spectral fundus image, and identifies the region of the OCT data with the highest degree of correlation from among the obtained degrees of correlation.
  • the searching unit 2313C obtains a plurality of degrees of correlation between each of the plurality of front images and the spectral distribution image in the predetermined wavelength range.
  • the search unit 2313C identifies the front image with the highest degree of correlation, and identifies the depth information of the identified front image as the depth information of the spectral fundus image.
  • the search unit 2313C obtains the plurality of degrees of correlation with respect to a three-dimensional OCT image of the eye to be examined E formed based on the OCT data of the eye to be examined E, and identifies the region of the three-dimensional image with the highest degree of correlation from among the obtained degrees of correlation.
  • the depth information specifying unit 2312C specifies depth information in the specified region of the three-dimensional image as depth information of the spectral fundus image.
  • the search unit 2313C can search the OCT data for an area that has the highest degree of correlation with the characteristic region (spectral distribution data in a broad sense) identified by the characteristic region identification unit 2311C.
  • the searching unit 2313C obtains a plurality of degrees of correlation between each of the plurality of regions of the OCT data and the characteristic region identified by the characteristic region identifying unit 2311C, and identifies the region of the OCT data with the highest degree of correlation from among the obtained degrees of correlation.
  • the searching unit 2313C obtains, for each of the plurality of front images, a plurality of degrees of correlation between each of the plurality of regions of the front image and the feature regions specified by the feature region specifying unit 2311C.
  • the searching unit 2313C identifies, in each front image, the region having the highest degree of correlation with the characteristic region, and then identifies, from among the plurality of front images, the front image containing the region with the highest degree of correlation overall.
  • the searching unit 2313C identifies the depth information of the identified front image as the depth information of the characteristic region identified by the characteristic region identifying unit 2311C.
  • the search unit 2313C obtains the plurality of degrees of correlation with respect to a three-dimensional OCT image of the eye to be examined E formed based on the OCT data of the eye to be examined E, and identifies the region of the three-dimensional image with the highest degree of correlation from among the obtained degrees of correlation.
  • the depth information specifying unit 2312C specifies depth information in the specified region of the three-dimensional image as depth information of the feature region specified by the feature region specifying unit 2311C.
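The correlation search by the searching unit 2313C and the subsequent depth identification by the depth information specifying unit 2312C can be illustrated as follows (a hypothetical sketch using normalized correlation over a stack of en-face images at different depths; the function name, data layout, and choice of correlation measure are assumptions, not the disclosed implementation):

```python
import numpy as np

def find_depth_by_correlation(feature_patch, enface_stack, depths):
    """Identify the depth of a feature region by correlating it against
    en-face OCT images at different depth positions.

    feature_patch: 2-D array cut from the spectral fundus image.
    enface_stack:  array (n_depths, h, w) of en-face images covering the
                   same fundus area at the same pixel scale as the patch.
    depths:        sequence of depth values, one per en-face slice.
    Returns the depth of the best-matching slice and its correlation.
    """
    f = feature_patch - feature_patch.mean()
    best_corr, best_idx = -np.inf, None
    for i, enface in enumerate(enface_stack):
        e = enface - enface.mean()
        denom = np.sqrt((f ** 2).sum() * (e ** 2).sum())
        corr = (f * e).sum() / denom if denom > 0 else 0.0
        if corr > best_corr:
            best_corr, best_idx = corr, i
    return depths[best_idx], best_corr
```

The depth value of the winning slice then serves as the depth information of the characteristic region, as described above.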
  • the disease estimating unit 2314C estimates the presence or absence of a disease, the probability of a disease, or the type of a disease based on the front image searched by the searching unit 2313C (or the region in the front image corresponding to the characteristic region identified by the characteristic region identifying unit 2311C). In some embodiments, the disease estimator 2314C estimates the presence or absence of a disease, the probability of a disease, or the type of a disease based on two or more frontal images in a predetermined depth range including the searched frontal image.
  • In the disease estimating unit 2314C, a plurality of image patterns corresponding to disease types are registered in advance.
  • the disease estimating unit 2314C obtains the degree of correlation between the searched front image (or the above-described region in the front image) and each of the plurality of image patterns, and when a degree of correlation is equal to or greater than a predetermined threshold, generates disease information indicating that the subject's eye E is estimated to have a disease.
  • the disease estimating unit 2314C can generate disease information including the fact that the eye is estimated to have a disease and the type of disease corresponding to the image pattern whose degree of correlation is equal to or greater than the threshold.
  • When the degree of correlation is less than the predetermined threshold, the disease estimating unit 2314C generates disease information indicating that the subject's eye E is estimated not to have a disease.
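The threshold-based pattern matching described above can be sketched as follows (hypothetical Python; the pattern registry, threshold value, and return format are illustrative assumptions rather than the disclosed design):

```python
import numpy as np

def estimate_disease(front_image, registered_patterns, threshold=0.8):
    """Estimate disease presence/type by correlating a front image against
    pre-registered disease image patterns.

    registered_patterns: dict mapping disease name -> 2-D template image of
    the same shape as front_image (assumed representation).
    Returns a dict with the estimate and the best-matching correlation.
    """
    x = front_image - front_image.mean()
    best_name, best_corr = None, -np.inf
    for name, pattern in registered_patterns.items():
        p = pattern - pattern.mean()
        denom = np.sqrt((x ** 2).sum() * (p ** 2).sum())
        corr = (x * p).sum() / denom if denom > 0 else 0.0
        if corr > best_corr:
            best_name, best_corr = name, corr
    if best_corr >= threshold:
        return {"disease": True, "type": best_name, "correlation": best_corr}
    return {"disease": False, "type": None, "correlation": best_corr}
```

A correlation at or above the threshold yields disease information including the matched disease type; below the threshold, the eye is estimated not to have a disease, mirroring the two cases above.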
  • the main control unit 211 can cause the display unit 240A to display disease information including the presence or absence of a disease, the probability of disease, or the type of disease.
  • the main control unit 211 causes the display unit 240A to display the disease information together with at least one of the searched front image (or the front image including the searched region), the spectral distribution image, and the specified depth information. Further, the main control unit 211 may superimpose the spectral distribution image on the searched front image and display it on the display unit 240A.
  • the main control unit 211 causes the display unit 240A to identifiably display an area corresponding to the characteristic region in the front image corresponding to the characteristic area identified by the characteristic area identification unit 2311C.
  • User interface 240 includes a display section 240A and an operation section 240B.
  • Display unit 240A includes the display device 3.
  • the operation unit 240B includes various operation devices and input devices.
  • the user interface 240 may include a device such as a touch panel that combines a display function and an operation function. In other embodiments, at least a portion of the user interface may not be included in the ophthalmic device.
  • the display device may be an external device connected to the ophthalmic equipment.
  • the communication unit 250 has a function for communicating with an external device (not shown).
  • the communication unit 250 has a communication interface according to a connection form with an external device.
  • external devices include server devices, OCT devices, scanning optical ophthalmoscopes, slit lamp ophthalmoscopes, ophthalmic measurement devices, and ophthalmic treatment devices.
  • ophthalmic measurement devices include eye refractometers, tonometers, specular microscopes, wavefront analyzers, perimeters, microperimeters, and the like.
  • Examples of ophthalmic treatment devices include laser treatment devices, surgical devices, surgical microscopes, and the like.
  • the external device may be a device (reader) that reads information from a recording medium, or a device (writer) that writes information to a recording medium. Further, the external device may be a hospital information system (HIS) server, a DICOM (Digital Imaging and Communication in Medicine) server, a doctor terminal, a mobile terminal, a personal terminal, a cloud server, or the like.
  • the arithmetic control unit 200 (the control unit 210, the image forming unit 220, and the data processing unit 230) is an example of the "ophthalmic information processing apparatus" according to the embodiment.
  • a spectral image (spectral fundus image, spectral anterior segment image) is an example of "spectral distribution data” according to the embodiment.
  • OCT data is an example of "measurement data” according to the embodiment.
  • the disease estimator 2314C is an example of the "estimator” according to the embodiment.
  • the control unit 210 (main control unit 211) is an example of a "display control unit” according to the embodiment.
  • the imaging optical system 30 is an example of a "light receiving optical system” according to the embodiment.
  • the optical system from the OCT unit 100 to the objective lens 22 is an example of the "OCT optical system” according to the embodiment.
  • the ophthalmologic apparatus 1 acquires a plurality of spectroscopic fundus images by illuminating the fundus oculi Ef with illumination light and receiving return light from the fundus oculi Ef having different wavelength ranges within a predetermined analysis wavelength range.
  • FIG. 6 shows an example of a plurality of spectral fundus images according to the embodiment.
  • FIG. 6 shows an example of a spectral fundus image displayed on the display unit 240A.
  • the main control unit 211 arranges horizontally, on the display unit 240A, a plurality of spectral fundus images acquired by sequentially receiving the return light with the image sensor 38. At this time, the main control unit 211 can cause the display unit 240A to display each of the plurality of spectral fundus images in association with its wavelength range. This makes it possible to easily grasp the spectral distribution of the fundus corresponding to each wavelength range.
  • FIG. 7 shows an explanatory diagram of an operation example of the ophthalmologic apparatus 1 according to the embodiment.
  • the spectral distribution data processing unit 231C obtains the degree of correlation between one spectral fundus image IMG1 of the plurality of spectral fundus images and each of a plurality of en-face images having different depth positions, and identifies the en-face image with the highest degree of correlation.
  • the spectral distribution data processing unit 231C specifies the depth information of the specified en-face image as the depth information of the spectral fundus image IMG1.
  • the spectral fundus image IMG1 may be a spectral fundus image to be analyzed in which a characteristic region or a region of interest is drawn.
  • Thereby, the depth position or layer region of the spectral fundus image IMG1 can be specified with high precision, and the spectral distribution of the spectral fundus image IMG1 can be analyzed while grasping the tissue, site, etc. depicted in it. Therefore, at least one of the spectral distribution and the depth position (layer region) can be used to improve the accuracy of disease estimation.
  • FIG. 8 shows an explanatory diagram of another operation example of the ophthalmologic apparatus 1 according to the embodiment.
  • the spectral distribution data processing unit 231C analyzes one spectral fundus image IMG2 of the plurality of spectral fundus images to identify a characteristic region CS, obtains the degree of correlation between a characteristic region image IMG3 including the identified characteristic region CS and each of a plurality of en-face images having different depth positions, and identifies the en-face image with the highest degree of correlation. The spectral distribution data processing unit 231C specifies the depth information of the identified en-face image as the depth information of the characteristic region image IMG3.
  • the spectral fundus image IMG2 may be a spectral fundus image in which the characteristic region is most clearly depicted among the plurality of spectral fundus images.
  • Thereby, the depth position or layer region of the characteristic region CS in the spectral fundus image IMG2 can be specified with high accuracy, and the spectral distribution of the spectral fundus image IMG2 can be analyzed while grasping the tissue, site, etc. in the characteristic region CS. Therefore, at least one of the spectral distribution and the depth position (layer region) can be used to improve the accuracy of disease estimation.
  • the spectral fundus image specified by the spectral distribution data processing unit 231C may be superimposed on the en-face image and displayed on the display unit 240A.
  • the spectral distribution data processing unit 231C obtains the degree of correlation between the spectral fundus image and each of a plurality of en-face images with different depth positions, and identifies the en-face image with the highest degree of correlation.
  • the main control unit 211 causes the display unit 240A to display the spectroscopic fundus image superimposed on the specified en-face image.
  • the spectral fundus image may be a desired spectral fundus image among the plurality of spectral fundus images, or a spectral fundus image in which the characteristic region is most clearly rendered among the plurality of spectral fundus images.
  • In some embodiments, the en-face image corresponding to the spectral fundus image specified by the spectral distribution data processing unit 231C is displayed on the display unit 240A in a display mode corresponding to the spectral fundus image. For example, the en-face image is displayed on the display unit 240A with color information corresponding to the luminance values of the spectral fundus image assigned to each pixel, each predetermined region, or each site.
  • the spectral distribution data processing unit 231C obtains the degree of correlation between the spectral fundus image and each of a plurality of en-face images with different depth positions, and identifies the en-face image with the highest degree of correlation.
  • the main control unit 211 assigns color information corresponding to the spectral fundus image to the specified en-face image, and causes the display unit 240A to display it.
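The assignment of color information derived from the spectral fundus image to the en-face image can be illustrated as follows (a hypothetical sketch; the specific color mapping is an assumption, since the source only states that color information corresponding to luminance values is assigned per pixel, region, or site):

```python
import numpy as np

def colorize_enface(enface, spectral_image):
    """Assign color information derived from spectral-image luminance to an
    en-face image, pixel by pixel.

    Returns an RGB image in which brightness comes from the en-face image
    and the red/green balance encodes the spectral luminance (one simple
    assumed scheme among many possible mappings).
    """
    def normalize(img):
        rng = img.max() - img.min()
        return (img - img.min()) / rng if rng > 0 else np.zeros_like(img, dtype=float)

    lum = normalize(spectral_image.astype(float))   # 0..1 spectral luminance
    base = normalize(enface.astype(float))          # 0..1 en-face brightness
    rgb = np.empty(enface.shape + (3,))
    rgb[..., 0] = base * lum                        # red weighted by spectral value
    rgb[..., 1] = base * (1.0 - lum)                # green for low spectral values
    rgb[..., 2] = base * 0.5                        # constant blue contribution
    return rgb
```

The resulting RGB image can then be shown on the display unit, letting the viewer read the spectral distribution and the OCT morphology in a single view.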
  • FIGS. 9 to 11 show operation examples of the ophthalmologic apparatus 1 according to the embodiment.
  • FIG. 9 shows a flowchart of an operation example of the ophthalmologic apparatus 1 when acquiring a plurality of spectral fundus images.
  • FIG. 10 shows a flow diagram of an operation example of the ophthalmologic apparatus 1 when estimating a disease using a spectral fundus image.
  • FIG. 11 shows a flowchart of an operation example of the ophthalmologic apparatus 1 when displaying a spectral fundus image superimposed on an OCT image.
  • the storage unit 212 stores computer programs for realizing the processes shown in FIGS. 9 to 11.
  • the main control unit 211 executes the processes shown in FIGS. 9 to 11 by operating according to this computer program.
  • the main control unit 211 controls the anterior segment cameras 5A and 5B to photograph the anterior segment Ea of the subject's eye E substantially simultaneously.
  • the characteristic site identification unit 231A, under control from the main control unit 211, analyzes a pair of anterior segment images obtained substantially simultaneously by the anterior segment cameras 5A and 5B, and identifies the center position of the pupil of the subject's eye E as a characteristic site.
  • the three-dimensional position calculator 231B obtains the three-dimensional position of the eye E to be examined. This processing includes arithmetic processing using trigonometry based on the positional relationship between the pair of anterior eye cameras 5A and 5B and the subject's eye E, as described in Japanese Patent Application Laid-Open No. 2013-248376, for example.
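The trigonometric calculation of the eye's three-dimensional position from the pair of anterior segment cameras can be illustrated with an idealized parallel-stereo model (a simplified sketch; the actual geometry depends on the device-specific layout of cameras 5A and 5B described in Japanese Patent Application Laid-Open No. 2013-248376):

```python
import numpy as np

def triangulate_pupil(p_left, p_right, baseline, focal_length):
    """Estimate the 3-D position of the pupil center from a stereo pair.

    Idealized assumptions: two identical pinhole cameras with parallel
    optical axes, separated horizontally by `baseline`.
    p_left, p_right: (x, y) image coordinates of the pupil center.
    Returns (X, Y, Z) in the left camera's coordinate system.
    """
    disparity = p_left[0] - p_right[0]
    if disparity == 0:
        raise ValueError("zero disparity: point at infinity")
    # Depth from similar triangles, then back-project the left image point.
    z = focal_length * baseline / disparity
    x = p_left[0] * z / focal_length
    y = p_left[1] * z / focal_length
    return np.array([x, y, z])
```

The computed 3-D position is what the main control unit then uses as the target for moving the optical system into alignment.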
  • the main control unit 211 moves the optical system (for example, the fundus camera unit 2) based on the three-dimensional position of the subject's eye E obtained by the three-dimensional position calculator 231B so that the optical system and the subject's eye E have a predetermined positional relationship.
  • the predetermined positional relationship is a positional relationship that enables imaging and examination of the subject's eye E using an optical system.
  • For example, a position where the x-coordinate and y-coordinate of the optical axis of the objective lens 22 coincide with those of the subject's eye E, and where the difference between the z-coordinate of the objective lens 22 (front lens surface) and the z-coordinate of the subject's eye E (corneal surface) equals a predetermined distance (working distance), is set as the destination of the optical system.
  • the main control unit 211 controls the focus optical system 60 to project the split index on the eye E to be examined.
  • the analysis unit 231 extracts a pair of split index images by analyzing the observed image of the fundus oculi Ef on which the split indices are projected, and calculates the relative deviation between the pair of split index images.
  • the main control unit 211 controls the focus driving unit 31A and the focus driving unit 43A based on the calculated deviation (direction of deviation, amount of deviation).
  • the main controller 211 controls the wavelength tunable filter 80 to set the wavelength selection range of transmitted light to a predetermined wavelength range.
  • a predetermined wavelength range is an initial wavelength range when sequentially repeating the selection of wavelength ranges to cover the analysis wavelength range.
  • the main control unit 211 acquires the image data of a spectral fundus image. Specifically, the main control unit 211 controls the illumination optical system 10 to illuminate the subject's eye E with illumination light, captures the light reception result of the reflected light obtained by the image sensor 38, and acquires the image data of the spectral fundus image.
  • the main control unit 211 determines whether or not to acquire a spectral fundus image in the next wavelength range. For example, when the wavelength selection is changed sequentially in predetermined wavelength range steps within the analysis wavelength range, the main control unit 211 can determine whether or not to acquire the next spectral fundus image based on the number of times the wavelength range has been changed. For example, the main control unit 211 can determine whether or not to acquire the next spectral fundus image by determining whether or not all of a plurality of predetermined wavelength ranges have been selected.
  • When it is determined in step S5 that the next spectral fundus image is to be acquired (step S5: Y), the operation of the ophthalmologic apparatus 1 proceeds to step S6.
  • When it is determined in step S5 not to acquire the next spectral fundus image (step S5: N), the operation of the ophthalmologic apparatus 1 ends (end).
  • In step S6 (change of wavelength range), the main control unit 211 controls the wavelength tunable filter 80 to change the wavelength selection range of the transmitted light to the next range. Subsequently, the operation of the ophthalmologic apparatus 1 proceeds to step S4.
  • the ophthalmologic apparatus 1 can acquire a plurality of spectral fundus images corresponding to a plurality of wavelength ranges within a predetermined analysis wavelength range.
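The acquisition loop of steps S4 to S6 can be sketched as follows (hypothetical Python; `set_filter_range` and `capture_image` stand in for control of the wavelength tunable filter 80 and the image sensor 38, which are device-specific):

```python
def acquire_spectral_series(set_filter_range, capture_image, wavelength_ranges):
    """Acquire one spectral fundus image per wavelength range.

    Mirrors steps S4-S6: tune the filter, illuminate and capture, then
    decide whether further wavelength ranges remain within the analysis
    wavelength range (here, the loop condition itself plays the role of
    the step S5 decision).
    Returns a dict mapping each wavelength range to its captured image.
    """
    images = {}
    for wl_range in wavelength_ranges:
        set_filter_range(wl_range)          # step S3 / step S6: select range
        images[wl_range] = capture_image()  # step S4: capture the image
    return images
```

Iterating over all predetermined wavelength ranges corresponds to answering "Y" in step S5 until every range has been selected, after which the operation ends.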
  • FIG. 10 shows an operation example in the case of estimating a disease using either one of a plurality of spectral fundus images acquired according to the operation example shown in FIG. 9 or a plurality of acquired spectral fundus images.
  • the main control unit 211 controls the characteristic region specifying unit 2311C to specify a characteristic region in the spectral fundus image.
  • the characteristic region identification unit 2311C performs characteristic region identification processing on the spectral fundus image as described above.
  • the characteristic region identifying unit 2311C identifies a characteristic region in a spectral fundus image selected in advance from among a plurality of spectral fundus images. In some embodiments, the characteristic region identifying section 2311C selects one characteristic region from the plurality of characteristic regions identified in each of the plurality of spectral fundus images.
  • the main controller 211 acquires an OCT image.
  • It is assumed that the OCT data of the eye E to be examined has been obtained by performing OCT on the eye E in advance, and that a three-dimensional OCT image or a plurality of en-face images with different depth positions have been formed based on the OCT data. In this case, the main controller 211 acquires the three-dimensional OCT image or the plurality of en-face images.
  • In step S12, the main control unit 211 controls the OCT unit 100 and the like to perform OCT on the subject's eye E and acquire OCT data.
  • the data processing unit 230 forms a three-dimensional OCT image or a plurality of en-face images having different depth positions based on the acquired OCT data.
  • the main control unit 211 controls the depth information specifying unit 2312C (searching unit 2313C) to search the OCT image acquired in step S12 for the image region having the highest degree of correlation with the image including the characteristic region specified in step S11, or for an en-face image containing that image region.
  • the main control unit 211 controls the depth information specifying unit 2312C to specify a part (layer region, depth position, etc.) on the fundus corresponding to the characteristic region specified in step S11.
  • the depth information specifying unit 2312C specifies the depth information from the image region having the highest degree of correlation with the image including the characteristic region searched in step S13, or from the en-face image including that image region, and identifies the site on the fundus from the depth information.
  • the main control unit 211 controls the disease estimating unit 2314C to estimate the presence or absence of a disease, the probability of a disease, or the type of a disease at the site on the fundus identified in step S14.
  • the disease estimation unit 2314C performs disease estimation processing as described above.
  • the disease estimating unit 2314C estimates the presence or absence of a disease, the probability of a disease, or the type of a disease based on the spectral distribution (spectral characteristics) of the spectral fundus image in which the characteristic region was identified in step S11, the en-face image (OCT image) searched in step S13, and the site on the fundus identified in step S14.
  • the main control unit 211 causes the display unit 240A to display at least one of: the spectral fundus image in which the characteristic region was identified in step S11, the characteristic region identified in step S11, the en-face image (OCT image) searched in step S13, the depth information of the characteristic region, the site on the fundus identified in step S14, and the presence or absence of a disease, the probability of a disease, or the type of a disease estimated in step S15.
  • In step S16, the main control unit 211 superimposes the spectral fundus image in which the characteristic region was specified in step S11 on the en-face image searched in step S13, and displays them on the display unit 240A.
  • the main control unit 211 may cause the display unit 240A to display a composite fundus image generated by assigning color components and changeable transparency information to each of the plurality of spectral fundus images and superimposing them.
  • the ophthalmologic apparatus 1 can identify a region corresponding to a characteristic region in the spectral fundus image of the eye E to be examined based on the OCT data of the eye E to be examined, and estimate a disease.
  • FIG. 11 shows an operation example in which one of the plurality of spectral fundus images acquired according to the operation example shown in FIG. 9, or the acquired plurality of spectral fundus images, is superimposed on an OCT image and displayed.
  • the main control unit 211 controls the characteristic region identifying unit 2311C to identify a characteristic region in the spectral fundus image, as in step S11.
  • the main control unit 211 acquires an OCT image as in step S12.
  • It is assumed that the OCT data of the eye E to be examined has been obtained by performing OCT on the eye E in advance, and that a three-dimensional OCT image or a plurality of en-face images with different depth positions have been formed based on the OCT data. In this case, the main controller 211 acquires the three-dimensional OCT image or the plurality of en-face images.
  • the main control unit 211 controls the depth information specifying unit 2312C (searching unit 2313C) to search the OCT image acquired in step S22 for the en-face image (or the image region in the three-dimensional OCT image) having the highest degree of correlation with the spectral fundus image in which the characteristic region was specified in step S21.
  • In step S24, the main control unit 211 causes the display unit 240A to display the spectral fundus image in which the characteristic region was specified in step S21, superimposed on the en-face image searched in step S23.
  • In step S24, the main control unit 211 causes the display unit 240A to display the characteristic region identified in step S21 in an identifiable manner.
  • the ophthalmologic apparatus 1 can superimpose the spectral fundus image of the subject's eye E on the OCT data of the subject's eye E and display them.
  • the ophthalmologic information processing apparatus (control unit 210, image forming unit 220, and data processing unit 230) according to some embodiments includes a characteristic region specifying unit (2311C) and a depth information specifying unit (2312C).
  • the characteristic region identifying unit identifies a characteristic region in the spectral distribution data acquired by receiving return light in a predetermined wavelength range from the subject's eye (E) illuminated with the illumination light.
  • the depth information specifying unit specifies the depth information of the characteristic region based on the measurement data of the subject's eye having higher resolution in the depth direction than the spectral distribution data.
  • In some embodiments, the characteristic region identifying unit identifies characteristic regions in each of a plurality of spectral distribution data acquired by illuminating the subject's eye with illumination light and receiving return light from the subject's eye in mutually different wavelength ranges.
  • According to such a configuration, a characteristic region having a characteristic spectral distribution is identified, and it is possible to specify with high accuracy which tissue in the depth direction at the measurement site the identified characteristic region belongs to.
  • the measurement data is OCT data obtained by performing optical coherence tomography on the eye to be examined.
  • the depth information specifying unit includes a search unit (2313C) that searches, from among a plurality of front images formed based on the OCT data and having mutually different depth positions, for the front image having the highest degree of correlation with the spectral distribution data, and specifies the depth information based on the front image searched by the search unit.
  • in this way, highly accurate depth information of the spectral distribution data can be easily identified.
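The global-correlation search described above can be illustrated with a minimal sketch (the function name, array shapes, and the use of Pearson correlation as the correlation measure are assumptions for illustration; a real implementation would also register the spectral image to the OCT en-face plane first):

```python
import numpy as np

def find_best_enface(volume, spectral_img):
    """Return the depth index z of the en-face slice volume[z] whose
    pixels correlate most strongly with the spectral image.

    volume:       3D OCT data, shape (depth, height, width)
    spectral_img: 2D spectral fundus image, shape (height, width),
                  assumed already registered to the en-face plane.
    """
    s = spectral_img - spectral_img.mean()
    s_norm = np.linalg.norm(s)
    best_z, best_r = 0, -2.0
    for z in range(volume.shape[0]):
        e = volume[z] - volume[z].mean()
        denom = np.linalg.norm(e) * s_norm
        # Pearson correlation between the slice and the spectral image
        r = float((e * s).sum() / denom) if denom > 0 else 0.0
        if r > best_r:
            best_z, best_r = z, r
    return best_z
```

The depth information would then be derived from the returned slice index, for example converted to a depth relative to a reference layer of the eye.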
  • the depth information specifying unit includes a search unit (2313C) that searches, from among a plurality of front images formed based on the OCT data and having mutually different depth positions, for the front image containing the image region having the highest degree of correlation with an image containing the characteristic region, and specifies the depth information based on the front image searched by the search unit.
  • in this way, the front image containing the image region most highly correlated with the image containing the characteristic region in the spectral distribution data is specified by search processing over the plurality of front images formed based on the OCT data, so highly accurate depth information of the characteristic region can be easily identified.
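The image-region variant amounts to template matching of the characteristic-region image against each depth slice. A brute-force normalized cross-correlation sketch follows (function and variable names are hypothetical; production code would use an optimized routine such as FFT-based matching):

```python
import numpy as np

def ncc_map(image, template):
    """Normalized cross-correlation of `template` at every valid
    position of `image` (brute force; fine for small patches)."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    ih, iw = image.shape
    out = np.full((ih - th + 1, iw - tw + 1), -1.0)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            w = image[y:y + th, x:x + tw]
            wc = w - w.mean()
            denom = np.linalg.norm(wc) * t_norm
            if denom > 0:
                out[y, x] = float((wc * t).sum() / denom)
    return out

def find_best_slice_and_region(volume, region_img):
    """Return (z, y, x, score): the depth slice and position whose
    sub-region best matches the characteristic-region image."""
    best = (-1, -1, -1, -2.0)
    for z in range(volume.shape[0]):
        m = ncc_map(volume[z], region_img)
        y, x = np.unravel_index(np.argmax(m), m.shape)
        if m[y, x] > best[3]:
            best = (z, int(y), int(x), float(m[y, x]))
    return best
```

The returned slice index plays the same role as in the whole-image variant, while (y, x) localizes the matched region within the en-face plane.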
  • Some embodiments include an estimating unit (disease estimating unit 2314C) that estimates the presence or absence of disease, the probability of disease, or the type of disease based on the front image searched by the searching unit.
  • Some embodiments include a display control unit (control unit 210, main control unit 211) that causes display means (display unit 240A) to display disease information including the presence or absence of disease, the probability of disease, or the type of disease estimated by the disease estimating unit 2314C.
  • Some embodiments include a display control unit (control unit 210, main control unit 211) that causes display means (display unit 240A) to display the front image and depth information searched by the search unit.
  • Some embodiments include a display control unit (control unit 210, main control unit 211) that superimposes the spectral distribution data on the front image searched by the search unit and displays it on display means (display unit 240A).
  • the spectral distribution data can be superimposed on the front image and displayed, and the distribution data and the front image can be associated with each other.
  • the display control unit causes the display means to display, in an identifiable manner, the region of the front image corresponding to the characteristic site that corresponds to the characteristic region.
  • Some embodiments include a display control unit (control unit 210, main control unit 211) that displays spectral distribution data and depth information on display means (display unit 240A).
  • the depth information includes at least one of information representing a depth position, a depth range, and a layer area relative to a reference portion of the subject's eye.
  • An ophthalmologic apparatus (1) includes an illumination optical system (10) that illuminates the eye to be examined with illumination light, a light receiving optical system that receives return light of the illumination light from the eye to be examined in mutually different wavelength ranges, an OCT optical system (an optical system from the OCT unit to the objective lens) that performs optical coherence tomography on the eye to be examined, and any one of the ophthalmic information processing apparatuses described above.
  • An ophthalmologic information processing method includes a characteristic region identifying step and a depth information identifying step.
  • the characteristic region identifying step identifies a characteristic region in the spectral distribution data acquired by receiving return light in a predetermined wavelength range from the eye (E) illuminated with the illumination light.
  • the depth information specifying step specifies the depth information of the characteristic region based on the measurement data of the subject's eye having higher resolution in the depth direction than the spectral distribution data.
  • the characteristic region identifying step identifies the characteristic region in any of a plurality of spectral distribution data acquired by illuminating the eye to be examined with illumination light and receiving return light from the eye in mutually different wavelength ranges.
  • in this way, a characteristic region having a characteristic spectral distribution is identified, and it becomes possible to specify with high accuracy to which tissue in the depth direction of the measurement site the identified characteristic region belongs.
  • the measurement data is OCT data obtained by performing optical coherence tomography on the eye to be examined.
  • the depth information specifying step includes a search step of searching, from among a plurality of front images formed based on the OCT data and having mutually different depth positions, for the front image having the highest degree of correlation with the spectral distribution data, and specifies the depth information based on the front image searched in the search step.
  • in this way, highly accurate depth information of the spectral distribution data can be easily identified.
  • the depth information specifying step includes a search step of searching, from among a plurality of front images formed based on the OCT data and having mutually different depth positions, for the front image containing the image region having the highest degree of correlation with an image containing the characteristic region, and specifies the depth information based on the front image searched in the search step.
  • in this way, the front image containing the image region most highly correlated with the image containing the characteristic region in the spectral distribution data is specified by search processing over the plurality of front images formed based on the OCT data, so highly accurate depth information of the characteristic region can be easily identified.
  • Some embodiments include an estimation step of estimating the presence or absence of disease, the probability of disease, or the type of disease based on the front image searched in the search step.
  • Some embodiments include a display control step of displaying disease information including the presence or absence of a disease, the probability of disease, or the type of disease estimated in the estimation step on the display means (display unit 240A).
  • the disease information estimated from the spectral distribution data can be displayed and the disease information can be notified to the outside.
  • Some embodiments include a display control step of displaying the front image and depth information searched in the search step on display means (display unit 240A).
  • the front image and depth information corresponding to the spectral distribution data can be displayed, and the front image and depth information can be notified to the outside.
  • Some embodiments include a display control step of superimposing the spectral distribution data on the front image searched in the search step and displaying it on the display means (display unit 240A).
  • the spectral distribution data can be superimposed on the front image and displayed, and the distribution data and the front image can be associated with each other.
  • the display control step causes the display means to display, in an identifiable manner, the region of the front image corresponding to the characteristic site that corresponds to the characteristic region.
  • Some embodiments include a display control step of displaying spectral distribution data and depth information on display means (display unit 240A).
  • the spectral distribution data and the depth information can be displayed and notified to the outside.
  • the depth information includes at least one of information representing a depth position, a depth range, and a layer area relative to a reference portion of the subject's eye.
  • a program causes a computer to execute each step of the ophthalmologic information processing method described above.
  • the storage unit 212 stores a program that causes a computer to execute the ophthalmologic information processing method.
  • a program may be stored in any computer-readable recording medium.
  • the recording medium may be electronic media using magnetism, light, magneto-optics, semiconductors, and the like.
  • recording media are magnetic tapes, magnetic disks, optical disks, magneto-optical disks, flash memories, solid state drives, and the like.
  • Reference signs: 1 ophthalmologic apparatus; 2 retinal camera unit; 10 illumination optical system; 22 objective lens; 30 imaging optical system; 80 wavelength tunable filter; 100 OCT unit; 210 control unit; 211 main control unit; 220 image forming unit; 230 data processing unit; 231 analysis unit; 231A characteristic site identification unit; 231B dimensional position calculation unit; 231C spectral distribution data processing unit; 2311C characteristic region identification unit; 2312C depth information identification unit; 2313C search unit; 2314C disease estimation unit; E eye to be examined; Ef fundus; LS measurement light

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

This ophthalmic information processing device includes a feature region identifying unit and a depth information identifying unit. The feature region identifying unit identifies a feature region in spectral distribution data obtained by receiving return light, within a prescribed wavelength range, of illumination light projected onto an eye under examination. The depth information identifying unit identifies depth information of the feature region on the basis of measurement data of the eye that has a higher resolution in the depth direction than the spectral distribution data.

Description

Ophthalmic information processing device, ophthalmic device, ophthalmic information processing method, and program

The present invention relates to an ophthalmic information processing device, an ophthalmic device, an ophthalmic information processing method, and a program.
Fundus observation, in which the retina, blood vessels, optic nerve, and so on are observed through the pupil, is useful for diagnosing fundus diseases and for estimating the sclerotic state of the body as a whole (in particular, of the cerebral blood vessels). For fundus observation, fundus images acquired by an ophthalmologic apparatus (fundus imaging apparatus) such as a fundus camera or a scanning light ophthalmoscope (SLO) are used, for example.

It is known that, in fundus observation, acquiring a plurality of spectral fundus images over a wide analysis wavelength range may make it possible to extract various features of the fundus that are difficult to grasp from ordinary fundus images. For example, Patent Document 1 and Patent Document 2 disclose ophthalmologic apparatuses that acquire spectral fundus images. Non-Patent Document 1 and Non-Patent Document 2 disclose techniques for applying hyperspectral images, as spectral fundus images, to the retina. Patent Document 3 discloses a technique for accurately identifying a site from a spectral fundus image based on its spectral characteristics.
Patent Document 1: JP-A-2006-158546; Patent Document 2: JP 2010-200916 A; Patent Document 3: JP-A-2007-330558
Spectral distribution data such as spectral images are acquired based on the reflection of illumination light from the measurement target site. Because the detected reflected light includes reflected and scattered light from various tissues along the depth direction of the measurement target site, it is unclear from which tissue in the site the light originates. If the tissue of origin of the reflected light could be specified, more detailed analysis of the spectral distribution data would become possible.

The present invention has been made in view of such circumstances, and one of its objects is to provide a new technique for analyzing spectral distribution data in more detail.
According to a first aspect of the embodiments, an ophthalmic information processing device includes: a characteristic region identifying unit that identifies a characteristic region in spectral distribution data acquired by receiving return light in a predetermined wavelength range from an eye to be examined illuminated with illumination light; and a depth information identifying unit that identifies depth information of the characteristic region based on measurement data of the eye having a higher resolution in the depth direction than the spectral distribution data.

In a second aspect, in the first aspect, the characteristic region identifying unit identifies the characteristic region in any of a plurality of spectral distribution data acquired by illuminating the eye with illumination light and receiving return light from the eye in mutually different wavelength ranges.

In a third aspect, in the first or second aspect, the measurement data is OCT data obtained by performing optical coherence tomography on the eye.

In a fourth aspect, in the third aspect, the depth information identifying unit includes a search unit that searches, from among a plurality of front images formed based on the OCT data and having mutually different depth positions, for the front image having the highest degree of correlation with the spectral distribution data, and identifies the depth information based on the front image searched by the search unit.

In a fifth aspect, in the third aspect, the depth information identifying unit includes a search unit that searches, from among a plurality of front images formed based on the OCT data and having mutually different depth positions, for the front image containing the image region having the highest degree of correlation with an image containing the characteristic region, and identifies the depth information based on the front image searched by the search unit.

A sixth aspect, in the fourth or fifth aspect, includes an estimating unit that estimates the presence or absence of a disease, the probability of a disease, or the type of a disease based on the front image searched by the search unit.

A seventh aspect, in the sixth aspect, includes a display control unit that causes a display means to display disease information including the presence or absence of the disease, the probability of the disease, or the type of the disease estimated by the estimating unit.

An eighth aspect, in the fourth or fifth aspect, includes a display control unit that causes a display means to display the front image searched by the search unit together with the depth information.

A ninth aspect, in the fourth or fifth aspect, includes a display control unit that causes a display means to display the spectral distribution data superimposed on the front image searched by the search unit.

In a tenth aspect, in the eighth or ninth aspect, the display control unit causes the display means to display, in an identifiable manner, the region of the front image corresponding to the characteristic site that corresponds to the characteristic region.

An eleventh aspect, in any of the first to seventh aspects, includes a display control unit that causes a display means to display the spectral distribution data and the depth information.

In a twelfth aspect, in any of the first to eleventh aspects, the depth information includes at least one of information representing a depth position, a depth range, and a layer region with reference to a reference site of the eye.

According to a thirteenth aspect, an ophthalmic apparatus includes: an illumination optical system that illuminates the eye to be examined with illumination light; a light receiving optical system that receives return light of the illumination light from the eye in mutually different wavelength ranges; an OCT optical system that performs optical coherence tomography on the eye; and the ophthalmic information processing device according to any of the first to twelfth aspects.
According to a fourteenth aspect of the embodiments, an ophthalmic information processing method includes: a characteristic region identifying step of identifying a characteristic region in spectral distribution data acquired by receiving return light in a predetermined wavelength range from an eye to be examined illuminated with illumination light; and a depth information identifying step of identifying depth information of the characteristic region based on measurement data of the eye having a higher resolution in the depth direction than the spectral distribution data.

In a fifteenth aspect, in the fourteenth aspect, the characteristic region identifying step identifies the characteristic region in any of a plurality of spectral distribution data acquired by illuminating the eye with illumination light and receiving return light from the eye in mutually different wavelength ranges.

In a sixteenth aspect, in the fourteenth or fifteenth aspect, the measurement data is OCT data obtained by performing optical coherence tomography on the eye.

In a seventeenth aspect, in the sixteenth aspect, the depth information identifying step includes a search step of searching, from among a plurality of front images formed based on the OCT data and having mutually different depth positions, for the front image having the highest degree of correlation with the spectral distribution data, and identifies the depth information based on the front image searched in the search step.

In an eighteenth aspect, in the sixteenth aspect, the depth information identifying step includes a search step of searching, from among a plurality of front images formed based on the OCT data and having mutually different depth positions, for the front image containing the image region having the highest degree of correlation with an image containing the characteristic region, and identifies the depth information based on the front image searched in the search step.

A nineteenth aspect, in the seventeenth or eighteenth aspect, includes an estimation step of estimating the presence or absence of a disease, the probability of a disease, or the type of a disease based on the front image searched in the search step.

A twentieth aspect, in the nineteenth aspect, includes a display control step of causing a display means to display disease information including the presence or absence of the disease, the probability of the disease, or the type of the disease estimated in the estimation step.

A twenty-first aspect, in the seventeenth or eighteenth aspect, includes a display control step of causing a display means to display the front image searched in the search step together with the depth information.

A twenty-second aspect, in the seventeenth or eighteenth aspect, includes a display control step of causing a display means to display the spectral distribution data superimposed on the front image searched in the search step.

In a twenty-third aspect, in the twenty-first or twenty-second aspect, the display control step causes the display means to display, in an identifiable manner, the region of the front image corresponding to the characteristic site that corresponds to the characteristic region.

A twenty-fourth aspect, in any of the fourteenth to nineteenth aspects, includes a display control step of causing a display means to display the spectral distribution data and the depth information.

In a twenty-fifth aspect, in any of the fourteenth to twenty-fourth aspects, the depth information includes at least one of information representing a depth position, a depth range, and a layer region with reference to a reference site of the eye.

A twenty-sixth aspect of the embodiments is a program that causes a computer to execute each step of the ophthalmic information processing method according to any of the fourteenth to twenty-fifth aspects.
The configurations according to the plurality of aspects described above can be combined arbitrarily.

According to the present invention, a new technique for analyzing spectral distribution data in more detail can be provided.
The drawings comprise: two schematic diagrams showing an example of the configuration of the optical system of the ophthalmologic apparatus according to the embodiment; three schematic diagrams showing an example of the configuration of the control system of the ophthalmologic apparatus according to the embodiment; three schematic diagrams for explaining the operation of the ophthalmologic apparatus according to the embodiment; and three flowcharts showing an example of the operation of the ophthalmologic apparatus according to the embodiment.
An example embodiment of the ophthalmic information processing device, ophthalmic device, ophthalmic information processing method, and program according to the present invention will be described in detail with reference to the drawings. In the embodiments, the techniques described in the documents cited in this specification can be incorporated arbitrarily.

The ophthalmic information processing device according to the embodiment acquires spectral distribution data of an eye to be examined and, based on measurement data of the eye having a higher resolution in the depth direction than the spectral distribution data, identifies information representing the depth of the spectral distribution data (depth information). In particular, the device can identify a characteristic region in the spectral distribution data of the eye and identify information representing the depth of the identified characteristic region (depth information) based on measurement data of the eye having a higher resolution in the depth direction than the spectral distribution data.

The spectral distribution data is acquired by receiving return light in a predetermined wavelength range from the eye (for example, the fundus or the anterior segment) illuminated with illumination light. Examples of spectral distribution data include spectral images (spectral fundus images, spectral anterior segment images) as two-dimensional spectral distributions. Examples of spectral images include hyperspectral images, multispectral images, and RGB color images. Examples of characteristic regions include blood vessels, the optic disc, diseased sites, and abnormal sites.
In some embodiments, a plurality of spectral distribution data are acquired by illuminating the eye with illumination light having two or more wavelength components in mutually different wavelength ranges and selecting, from the return light from the eye, return light having wavelength components in a predetermined wavelength range. In some embodiments, a plurality of spectral distribution data are acquired by sequentially illuminating the eye with illumination light having two or more wavelength components in mutually different wavelength ranges and sequentially selecting, from the return light from the eye, return light having wavelength components in a predetermined wavelength range.

In some embodiments, a plurality of spectral distribution data are acquired by sequentially selecting, from illumination light having two or more wavelength components in mutually different wavelength ranges, illumination light having wavelength components in a predetermined wavelength range, sequentially illuminating the eye with the selected illumination light, and sequentially receiving the return light from the eye.

In some embodiments, a plurality of spectral distribution data are acquired by using a light source whose wavelength range can be changed arbitrarily to sequentially emit illumination light having two or more wavelength components in mutually different wavelength ranges, sequentially illuminating the eye with the emitted illumination light, and sequentially receiving the return light from the eye.

In some embodiments, a plurality of spectral distribution data are acquired by illuminating the eye with illumination light and sequentially selecting the return light from the eye while sequentially changing the wavelength range in which the light receiving device has high sensitivity.
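Once such band-by-band data has been captured, selecting "return light in a predetermined wavelength range" can be done in software by pooling the bands of a hyperspectral cube that fall in the requested range. A minimal sketch (function name, shape convention, and the use of a simple mean over bands are assumptions for illustration):

```python
import numpy as np

def band_image(cube, wavelengths_nm, lo_nm, hi_nm):
    """Average the bands whose center wavelength lies in [lo_nm, hi_nm],
    yielding one spectral distribution image for that wavelength range.

    cube:           hyperspectral data, shape (height, width, bands)
    wavelengths_nm: center wavelength of each band, length = bands
    """
    wl = np.asarray(wavelengths_nm)
    sel = (wl >= lo_nm) & (wl <= hi_nm)
    if not sel.any():
        raise ValueError("no band in requested wavelength range")
    return cube[:, :, sel].mean(axis=2)
```

Each such band image corresponds to one of the "plurality of spectral distribution data" with mutually different wavelength ranges described above.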
 深さ方向は、被検眼を照明する照明光の進行方向、被検眼の奥行方向、眼底の浅層から深層に向かう方向、又は被検眼に対する測定光軸(撮影光軸)の方向であってよい。分光分布データより深さ方向の分解能が高い被検眼の測定データの例として、被検眼に対して光コヒーレンストモグラフィ(Optical Coherence Tomography:OCT)を実行することにより得られるOCTデータ、AO(Adaptive Optics)-SLOを用いて得られた被検眼の測定データなどがある。 The depth direction may be the traveling direction of the illumination light that illuminates the subject's eye, the depth direction of the subject's eye, the direction from the superficial layers toward the deep layers of the fundus, or the direction of the measurement optical axis (imaging optical axis) with respect to the subject's eye. Examples of measurement data of the subject's eye having higher resolution in the depth direction than the spectral distribution data include OCT data obtained by performing optical coherence tomography (OCT) on the subject's eye, and measurement data of the subject's eye obtained using AO (Adaptive Optics)-SLO.
 OCTデータは、例えば、OCT光源からの光を測定光と参照光とに分割し、被検眼に測定光を投射し、被検眼からの測定光の戻り光と参照光路を経由した参照光との干渉光を検出することにより取得される。いくつかの実施形態では、眼科情報処理装置は、外部に設けられたOCT装置により得られたOCTデータを取得するように構成される。いくつかの実施形態では、眼科情報処理装置の機能は、OCTデータを取得可能な眼科装置により実現される。 The OCT data is acquired, for example, by dividing light from an OCT light source into measurement light and reference light, projecting the measurement light onto the subject's eye, and detecting interference light between the return light of the measurement light from the subject's eye and the reference light that has traveled along a reference light path. In some embodiments, the ophthalmic information processing device is configured to acquire OCT data obtained by an externally provided OCT device. In some embodiments, the functionality of the ophthalmic information processing device is implemented by an ophthalmic device capable of acquiring OCT data.
 いくつかの実施形態では、眼科情報処理装置は、外部に設けられたAO-SLO装置により得られた測定データを取得するように構成される。いくつかの実施形態では、眼科情報処理装置の機能は、AO-SLOの機能を有する眼科装置により実現される。 In some embodiments, the ophthalmic information processing device is configured to acquire measurement data obtained by an externally provided AO-SLO device. In some embodiments, the functionality of the ophthalmic information processing device is implemented by an ophthalmic device having AO-SLO functionality.
 いくつかの実施形態では、所定の解析波長領域において互いに波長範囲が異なる照明光の戻り光を順次に受光することにより複数の分光分布データが取得される。順次に受光される戻り光のうち波長範囲が隣接する第1戻り光及び第2戻り光について、第1戻り光の波長範囲の一部は、第2戻り光の波長範囲に重複してよい。この場合、複数の分光分布データのそれぞれに対して特徴領域の特定処理が実行される。例えば、複数の分光分布データのうち最も精度良く特徴領域を特定可能な分光分布データにおける特徴領域の深さ情報が求められる。例えば、複数の分光分布データのうちユーザー等により選択された所望の分光分布データにおける特徴領域の深さ情報が求められる。 In some embodiments, a plurality of spectral distribution data are acquired by sequentially receiving return light of illumination light with different wavelength ranges in a predetermined analysis wavelength region. Regarding the first returned light and the second returned light whose wavelength ranges are adjacent to each other among the sequentially received returned lights, part of the wavelength range of the first returned light may overlap with the wavelength range of the second returned light. In this case, characteristic region identification processing is executed for each of the plurality of spectral distribution data. For example, the depth information of the characteristic region in the spectral distribution data that can specify the characteristic region with the highest accuracy among the plurality of spectral distribution data is obtained. For example, depth information of a characteristic region in desired spectral distribution data selected by a user or the like from a plurality of spectral distribution data is obtained.
 いくつかの実施形態では、眼科情報処理装置は、被検眼のOCTデータに基づいて形成され互いに異なる深さ範囲で投影又は積分された複数の正面画像(en-face画像、Cスキャン画像、プロジェクション画像、OCTアンギオグラフィ)から分光分布データと最も相関度が高い正面画像を探索する。眼科情報処理装置は、探索された正面画像の深さ情報を、当該分光分布データの深さ情報として特定する。 In some embodiments, the ophthalmologic information processing apparatus searches a plurality of front images (en-face images, C-scan images, projection images, OCT angiography images), which are formed based on OCT data of the subject's eye and projected or integrated over mutually different depth ranges, for the front image having the highest degree of correlation with the spectral distribution data. The ophthalmologic information processing apparatus identifies the depth information of the found front image as the depth information of the spectral distribution data.
 これにより、計測部位の分光分布データから特定された特徴領域が、計測部位における深さ方向のどの組織のものであるかを高精度に特定することが可能になる。また、眼球運動に起因した分光分布データの位置ずれ(例えば、xz方向)も補正することができる。それにより、分光分布データに対してより詳細な解析(例えば、特徴領域のより詳細な観察、疾患の推定)を行うことが可能になる。 This makes it possible to identify with high accuracy which tissue in the depth direction of the measurement site the characteristic region identified from the spectral distribution data of the measurement site belongs to. In addition, it is also possible to correct the positional deviation (for example, in the xz direction) of the spectral distribution data caused by eye movement. Thereby, it becomes possible to perform more detailed analysis (for example, more detailed observation of characteristic regions, estimation of disease) on the spectral distribution data.
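The correlation search over the en-face stack described above can be sketched as follows. This is an illustrative sketch, not the patent's prescribed implementation: the similarity metric (zero-shift normalized cross-correlation), the toy images, and the depth ranges are all assumptions.

```python
import numpy as np

def find_depth_by_correlation(spectral_img, enface_stack, depth_ranges):
    """Return the depth range of the en-face image that is most strongly
    correlated with the spectral image (zero-shift normalized
    cross-correlation)."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom else 0.0

    scores = [ncc(spectral_img, ef) for ef in enface_stack]
    best = int(np.argmax(scores))
    return depth_ranges[best], scores[best]

# Toy data: three "en-face" images; only the middle one shares the
# pattern of the spectral image.
rng = np.random.default_rng(0)
pattern = rng.random((8, 8))
stack = [rng.random((8, 8)),
         pattern + 0.01 * rng.random((8, 8)),
         rng.random((8, 8))]
ranges = [(0, 50), (50, 100), (100, 150)]  # depth ranges (illustrative units)
depth, score = find_depth_by_correlation(pattern, stack, ranges)
```

A practical implementation would additionally search over lateral shifts (e.g. by phase correlation) to absorb eye-movement displacement before scoring each depth candidate.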
 実施形態に係る眼科情報処理方法は、上記の眼科情報処理装置により実行される1以上のステップを含む。実施形態に係るプログラムは、実施形態に係る眼科情報処理方法の各ステップをコンピュータ(プロセッサ)に実行させる。実施形態に係る記録媒体は、実施形態に係るプログラムが記録された非一時的な記録媒体(記憶媒体)である。 The ophthalmologic information processing method according to the embodiments includes one or more steps executed by the ophthalmologic information processing apparatus described above. A program according to the embodiments causes a computer (processor) to execute each step of the ophthalmologic information processing method according to the embodiments. A recording medium according to the embodiments is a non-transitory recording medium (storage medium) on which the program according to the embodiments is recorded.
 本明細書において、プロセッサは、例えば、CPU(Central Processing Unit)、GPU(Graphics Processing Unit)、ASIC(Application Specific Integrated Circuit)、プログラマブル論理デバイス(例えば、SPLD(Simple Programmable Logic Device)、CPLD(Complex Programmable Logic Device)、FPGA(Field Programmable Gate Array))等の回路を含む。プロセッサは、例えば、記憶回路又は記憶装置に格納されているプログラムを読み出し実行することで、実施形態に係る機能を実現する。記憶回路又は記憶装置がプロセッサに含まれていてよい。また、記憶回路又は記憶装置がプロセッサの外部に設けられていてよい。 In this specification, a processor includes, for example, circuits such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an ASIC (Application Specific Integrated Circuit), and programmable logic devices (for example, an SPLD (Simple Programmable Logic Device), a CPLD (Complex Programmable Logic Device), and an FPGA (Field Programmable Gate Array)). The processor implements the functions according to the embodiments by, for example, reading and executing a program stored in a storage circuit or storage device. The storage circuit or storage device may be included in the processor, or may be provided outside the processor.
 以下、被検眼の眼底の分光分布データとしての分光眼底画像に対する深さ情報を特定する場合について説明するが、実施形態に係る構成はこれに限定されるものではない。例えば、以下の実施形態は、眼底以外の前眼部の分光分布データとしての分光前眼部画像に対する深さ情報を特定する場合にも適用可能である。 A case of specifying depth information for a spectral fundus image as spectral distribution data of the fundus of the subject's eye will be described below, but the configuration according to the embodiment is not limited to this. For example, the following embodiments are also applicable to specifying depth information for a spectral anterior segment image as spectral distribution data of an anterior segment other than the fundus.
 いくつかの実施形態では、眼科情報処理装置は、通信機能により外部で取得された被検眼の分光分布データを取得するように構成される。いくつかの実施形態では、被検眼の分光分布データを取得可能な眼科装置が、眼科情報処理装置の機能を有する。 In some embodiments, the ophthalmologic information processing apparatus is configured to acquire spectral distribution data of an eye to be examined that is externally acquired through a communication function. In some embodiments, an ophthalmologic apparatus capable of acquiring spectral distribution data of an eye to be examined has the function of an ophthalmologic information processing apparatus.
 以下の実施形態では、実施形態に係る眼科情報処理装置の機能を含む眼科装置を例に説明する。実施形態に係る眼科装置は、眼科撮影装置を含む。いくつかの実施形態の眼科装置に含まれる眼科撮影装置は、例えば、眼底カメラ、走査型光検眼鏡、スリットランプ検眼鏡、手術用顕微鏡等のうちのいずれか1つ以上である。いくつかの実施形態に係る眼科装置は、眼科撮影装置に加えて、眼科測定装置及び眼科治療装置のうちのいずれか1つ以上を含む。いくつかの実施形態の眼科装置に含まれる眼科測定装置は、例えば、眼屈折検査装置、眼圧計、スペキュラーマイクロスコープ、ウェーブフロントアナライザ、視野計、マイクロペリメータ等のうちのいずれか1つ以上である。いくつかの実施形態の眼科装置に含まれる眼科治療装置は、例えば、レーザー治療装置、手術装置、手術用顕微鏡等のうちのいずれか1つ以上である。 In the following embodiments, an ophthalmologic apparatus including the functions of the ophthalmologic information processing apparatus according to the embodiments will be described as an example. The ophthalmologic apparatus according to the embodiments includes an ophthalmologic imaging apparatus. The ophthalmologic imaging apparatus included in the ophthalmologic apparatus of some embodiments is, for example, any one or more of a fundus camera, a scanning optical ophthalmoscope, a slit lamp ophthalmoscope, a surgical microscope, and the like. The ophthalmologic apparatus according to some embodiments includes any one or more of an ophthalmologic measurement apparatus and an ophthalmologic treatment apparatus in addition to the ophthalmologic imaging apparatus. The ophthalmologic measurement apparatus included in the ophthalmologic apparatus of some embodiments is, for example, any one or more of an eye refraction test apparatus, a tonometer, a specular microscope, a wavefront analyzer, a perimeter, a microperimeter, and the like. The ophthalmologic treatment apparatus included in the ophthalmologic apparatus of some embodiments is, for example, any one or more of a laser treatment apparatus, a surgical apparatus, a surgical microscope, and the like.
 以下の実施形態では、眼科装置は、光干渉断層計と眼底カメラとを含む。この光干渉断層計にはスウェプトソースOCTが適用されているが、OCTのタイプはこれに限定されず、他のタイプのOCT(スペクトラルドメインOCT、タイムドメインOCT、アンファスOCT等)が適用されてもよい。 In the following embodiments, the ophthalmologic apparatus includes an optical coherence tomography apparatus and a fundus camera. Swept-source OCT is applied to this optical coherence tomography apparatus, but the type of OCT is not limited to this, and other types of OCT (spectral-domain OCT, time-domain OCT, en-face OCT, etc.) may be applied.
 以下、x方向は、対物レンズの光軸方向に直交する方向(左右方向)であり、y方向は、対物レンズの光軸方向に直交する方向(上下方向)であるものとする。z方向は、対物レンズの光軸方向であるものとする。 Hereinafter, the x direction is the direction (horizontal direction) perpendicular to the optical axis direction of the objective lens, and the y direction is the direction (vertical direction) perpendicular to the optical axis direction of the objective lens. The z-direction is assumed to be the optical axis direction of the objective lens.
<構成>
〔光学系〕
 図1に示すように、眼科装置1は、眼底カメラユニット2、OCTユニット100及び演算制御ユニット200を含む。眼底カメラユニット2には、被検眼Eの正面画像を取得するための光学系や機構が設けられている。OCTユニット100には、OCTを実行するための光学系や機構の一部が設けられている。OCTを実行するための光学系や機構の他の一部は、眼底カメラユニット2に設けられている。演算制御ユニット200は、各種の演算や制御を実行する1以上のプロセッサを含む。これらに加え、被検者の顔を支持するための部材(顎受け、額当て等)や、OCTの対象部位を切り替えるためのレンズユニット(例えば、前眼部OCT用アタッチメント)等の任意の要素やユニットが眼科装置1に設けられてもよい。更に、眼科装置1は、一対の前眼部カメラ5A及び5Bを備える。
<Configuration>
〔Optical system〕
As shown in FIG. 1, the ophthalmologic apparatus 1 includes a fundus camera unit 2, an OCT unit 100, and an arithmetic control unit 200. The fundus camera unit 2 is provided with an optical system and a mechanism for acquiring a front image of the subject's eye E. The OCT unit 100 is provided with part of an optical system and a mechanism for performing OCT. Another part of the optical system and the mechanism for performing OCT is provided in the fundus camera unit 2. The arithmetic control unit 200 includes one or more processors that perform various arithmetic operations and controls. In addition to these, arbitrary elements or units such as a member for supporting the subject's face (chin rest, forehead rest, etc.) and a lens unit for switching the target site of OCT (for example, an attachment for anterior segment OCT) may be provided in the ophthalmologic apparatus 1. Furthermore, the ophthalmologic apparatus 1 includes a pair of anterior eye cameras 5A and 5B.
[眼底カメラユニット2]
 眼底カメラユニット2には、被検眼Eの眼底Efを撮影するための光学系が設けられている。取得される眼底Efの画像(眼底画像、眼底写真等と呼ばれる)は、観察画像、撮影画像等の正面画像である。観察画像は、近赤外光を用いた動画撮影により得られる。撮影画像は、フラッシュ光を用いた静止画像、又は分光画像(分光眼底画像、分光前眼部画像)である。更に、眼底カメラユニット2は、被検眼Eの前眼部Eaを撮影して正面画像(前眼部画像)を取得することができる。
[Fundus camera unit 2]
The fundus camera unit 2 is provided with an optical system for photographing the fundus Ef of the subject's eye E. The acquired image of the fundus Ef (called a fundus image, fundus photograph, etc.) is a front image such as an observation image or a photographed image. The observation image is obtained by moving-image photography using near-infrared light. The photographed image is a still image using flash light, or a spectral image (spectral fundus image, spectral anterior segment image). Furthermore, the fundus camera unit 2 can photograph the anterior segment Ea of the subject's eye E to acquire a front image (anterior segment image).
 眼底カメラユニット2は、照明光学系10と撮影光学系30とを含む。照明光学系10は被検眼Eに照明光を照射する。撮影光学系30は、被検眼Eからの照明光の戻り光を検出する。OCTユニット100からの測定光は、眼底カメラユニット2内の光路を通じて被検眼Eに導かれ、その戻り光は、同じ光路を通じてOCTユニット100に導かれる。 The fundus camera unit 2 includes an illumination optical system 10 and an imaging optical system 30. The illumination optical system 10 irradiates the subject's eye E with illumination light. The imaging optical system 30 detects the return light of the illumination light from the subject's eye E. The measurement light from the OCT unit 100 is guided to the subject's eye E through an optical path in the fundus camera unit 2, and its return light is guided to the OCT unit 100 through the same optical path.
 照明光学系10の観察光源11から出力された光(観察照明光)は、曲面状の反射面を有する反射ミラー12により反射され、集光レンズ13を経由し、可視カットフィルタ14を透過して近赤外光となる。更に、観察照明光は、撮影光源15の近傍にて一旦集束し、ミラー16により反射され、リレーレンズ17、18、絞り19及びリレーレンズ20を経由する。そして、観察照明光は、孔開きミラー21の周辺部(孔部の周囲の領域)にて反射され、ダイクロイックミラー46を透過し、対物レンズ22により屈折されて被検眼E(眼底Ef又は前眼部Ea)を照明する。被検眼Eからの観察照明光の戻り光は、対物レンズ22により屈折され、ダイクロイックミラー46を透過し、孔開きミラー21の中心領域に形成された孔部を通過し、撮影合焦レンズ31を経由し、ミラー32により反射される。更に、この戻り光は、ハーフミラー33Aを透過し、ダイクロイックミラー33により反射され、集光レンズ34によりイメージセンサ35の受光面に結像される。イメージセンサ35は、所定のフレームレートで戻り光を検出する。なお、撮影光学系30のフォーカスは、眼底Ef又は前眼部Eaに合致するように調整される。 Light (observation illumination light) output from the observation light source 11 of the illumination optical system 10 is reflected by a reflecting mirror 12 having a curved reflecting surface, passes through a condenser lens 13, and passes through a visible-cut filter 14 to become near-infrared light. Furthermore, the observation illumination light is once converged near the photographing light source 15, reflected by a mirror 16, and passes through relay lenses 17 and 18, a diaphragm 19, and a relay lens 20. The observation illumination light is then reflected at the peripheral part of an apertured mirror 21 (the region around the aperture), passes through a dichroic mirror 46, and is refracted by the objective lens 22 to illuminate the subject's eye E (the fundus Ef or the anterior segment Ea). The return light of the observation illumination light from the subject's eye E is refracted by the objective lens 22, passes through the dichroic mirror 46, passes through the aperture formed in the central region of the apertured mirror 21, passes through a photographing focusing lens 31, and is reflected by a mirror 32. Furthermore, this return light passes through a half mirror 33A, is reflected by a dichroic mirror 33, and is imaged on the light receiving surface of an image sensor 35 by a condenser lens 34. The image sensor 35 detects the return light at a predetermined frame rate. The focus of the imaging optical system 30 is adjusted to the fundus Ef or the anterior segment Ea.
 撮影光源15から出力された光(撮影照明光)は、観察照明光と同様の経路を通って眼底Efに照射される。被検眼Eからの撮影照明光の戻り光は、観察照明光の戻り光と同じ経路を通ってダイクロイックミラー33まで導かれ、ダイクロイックミラー33を透過し、ミラー36により反射され、波長可変フィルタ80に導かれる。 The light (photographing illumination light) output from the photographing light source 15 irradiates the fundus Ef through the same path as the observation illumination light. The return light of the photographing illumination light from the subject's eye E is guided to the dichroic mirror 33 through the same path as the return light of the observation illumination light, passes through the dichroic mirror 33, is reflected by a mirror 36, and is guided to a wavelength tunable filter 80.
 波長可変フィルタ80は、所定の解析波長領域において透過光の波長範囲を選択可能なフィルタである。波長可変フィルタ80を透過する光の波長範囲は、任意に選択可能である。 The wavelength tunable filter 80 is a filter that can select the wavelength range of transmitted light in a predetermined analysis wavelength region. The wavelength range of light transmitted through the wavelength tunable filter 80 can be arbitrarily selected.
 いくつかの実施形態では、波長可変フィルタ80は、例えば、特開2006-158546号公報に開示された液晶波長可変フィルタと同様である。この場合、波長可変フィルタ80は、液晶への印可電圧を変化させることにより透過光の波長選択範囲を任意に選択することができる。 In some embodiments, the tunable filter 80 is similar to the liquid crystal tunable filter disclosed in Japanese Patent Application Laid-Open No. 2006-158546, for example. In this case, the wavelength tunable filter 80 can arbitrarily select the wavelength selection range of transmitted light by changing the voltage applied to the liquid crystal.
 いくつかの実施形態では、波長可変フィルタ80は、互いに透過光の波長選択範囲が異なる2以上の波長選択フィルタを含み、2以上の波長選択フィルタを選択的に照明光の戻り光の光路に配置可能に構成されてよい。 In some embodiments, the wavelength tunable filter 80 includes two or more wavelength selection filters whose wavelength selection ranges for transmitted light differ from each other, and may be configured such that the two or more wavelength selection filters can be selectively placed in the optical path of the return light of the illumination light.
 いくつかの実施形態では、波長可変フィルタ80は、所定の解析波長領域において反射光の波長範囲を選択可能なフィルタである。 In some embodiments, the wavelength tunable filter 80 is a filter that can select the wavelength range of reflected light in a predetermined analysis wavelength region.
 波長可変フィルタ80を透過したミラー36からの戻り光は、集光レンズ37によりイメージセンサ38の受光面に結像される。 Return light from the mirror 36 that has passed through the wavelength tunable filter 80 is imaged on the light receiving surface of the image sensor 38 by the condenser lens 37 .
 いくつかの実施形態では、波長可変フィルタ80は、ダイクロイックミラー33と集光レンズ34との間に配置される。 In some embodiments, tunable filter 80 is placed between dichroic mirror 33 and condenser lens 34 .
 いくつかの実施形態では、波長可変フィルタ80は、ダイクロイックミラー33又はミラー36と集光レンズ37との間の光路に対して挿脱可能に構成される。例えば、波長可変フィルタ80がダイクロイックミラー33と集光レンズ37との間の光路に配置されたとき、眼科装置1は、イメージセンサ38により得られた戻り光の受光結果を順次に取得することで複数の分光眼底画像を取得することができる。例えば、波長可変フィルタ80がダイクロイックミラー33と集光レンズ37との間の光路から退避されたとき、眼科装置1は、イメージセンサ38により得られた戻り光の受光結果を取得することで、通常の静止画像(眼底画像、前眼部画像)を取得することができる。 In some embodiments, the wavelength tunable filter 80 is configured to be insertable into and removable from the optical path between the dichroic mirror 33 or the mirror 36 and the condenser lens 37. For example, when the wavelength tunable filter 80 is placed in the optical path between the dichroic mirror 33 and the condenser lens 37, the ophthalmologic apparatus 1 can acquire a plurality of spectral fundus images by sequentially obtaining the light reception results of the return light obtained by the image sensor 38. For example, when the wavelength tunable filter 80 is retracted from the optical path between the dichroic mirror 33 and the condenser lens 37, the ophthalmologic apparatus 1 can acquire a normal still image (fundus image, anterior segment image) by obtaining the light reception result of the return light obtained by the image sensor 38.
 表示装置3には、イメージセンサ35により検出された眼底反射光に基づく画像(観察画像)が表示される。なお、撮影光学系30のピントが前眼部に合わせられている場合、被検眼Eの前眼部の観察画像が表示される。また、表示装置3には、イメージセンサ38により検出された眼底反射光に基づく画像(撮影画像、分光眼底画像)が表示される。なお、観察画像を表示する表示装置3と撮影画像を表示する表示装置3は、同一のものであってもよいし、異なるものであってもよい。被検眼Eを赤外光で照明して同様の撮影を行う場合には、赤外の撮影画像が表示される。 An image (observation image) based on the fundus reflected light detected by the image sensor 35 is displayed on the display device 3 . Note that when the imaging optical system 30 is focused on the anterior segment, an observation image of the anterior segment of the subject's eye E is displayed. The display device 3 also displays an image (captured image, spectral fundus image) based on the fundus reflected light detected by the image sensor 38 . The display device 3 that displays the observed image and the display device 3 that displays the captured image may be the same or different. When the subject's eye E is illuminated with infrared light and photographed in the same manner, an infrared photographed image is displayed.
 LCD(Liquid Crystal Display)39は固視標や視力測定用視標を表示する。LCD39から出力された光束は、その一部がハーフミラー33Aにて反射され、ミラー32に反射され、撮影合焦レンズ31を経由し、孔開きミラー21の孔部を通過する。孔開きミラー21の孔部を通過した光束は、ダイクロイックミラー46を透過し、対物レンズ22により屈折されて眼底Efに投射される。 An LCD (Liquid Crystal Display) 39 displays a fixation target and a visual acuity measurement target. A part of the light flux output from the LCD 39 is reflected by the half mirror 33 A, reflected by the mirror 32 , passes through the photographing focusing lens 31 , and passes through the aperture of the apertured mirror 21 . The luminous flux that has passed through the aperture of the perforated mirror 21 is transmitted through the dichroic mirror 46, refracted by the objective lens 22, and projected onto the fundus oculi Ef.
 LCD39の画面上における固視標の表示位置を変更することにより、被検眼Eの固視位置を変更できる。固視位置の例として、黄斑を中心とする画像を取得するための固視位置や、視神経乳頭を中心とする画像を取得するための固視位置や、黄斑と視神経乳頭との間の眼底中心を中心とする画像を取得するための固視位置や、黄斑から大きく離れた部位(眼底周辺部)の画像を取得するための固視位置などがある。いくつかの実施形態に係る眼科装置1は、このような固視位置の少なくとも1つを指定するためのGUI(Graphical User Interface)等を含む。いくつかの実施形態に係る眼科装置1は、固視位置(固視標の表示位置)をマニュアルで移動するためのGUI等を含む。 By changing the display position of the fixation target on the screen of the LCD 39, the fixation position of the subject's eye E can be changed. Examples of fixation positions include a fixation position for acquiring an image centered on the macula, a fixation position for acquiring an image centered on the optic disc, a fixation position for acquiring an image centered on the fundus center between the macula and the optic disc, and a fixation position for acquiring an image of a site far away from the macula (fundus periphery). The ophthalmologic apparatus 1 according to some embodiments includes a GUI (Graphical User Interface) or the like for designating at least one of such fixation positions. The ophthalmologic apparatus 1 according to some embodiments includes a GUI or the like for manually moving the fixation position (the display position of the fixation target).
 移動可能な固視標を被検眼Eに提示するための構成はLCD等の表示装置には限定されない。例えば、光源アレイ(発光ダイオード(LED)アレイ等)における複数の光源を選択的に点灯させることにより、移動可能な固視標を生成することができる。また、移動可能な1以上の光源により、移動可能な固視標を生成することができる。 The configuration for presenting a movable fixation target to the eye to be examined E is not limited to a display device such as an LCD. For example, a movable fixation target can be generated by selectively lighting multiple light sources in a light source array (such as a light emitting diode (LED) array). Also, one or more movable light sources can generate a movable fixation target.
 フォーカス光学系60は、被検眼Eに対するフォーカス調整に用いられるスプリット指標を生成する。フォーカス光学系60は、撮影光学系30の光路(撮影光路)に沿った撮影合焦レンズ31の移動に連動して、照明光学系10の光路(照明光路)に沿って移動される。反射棒67は、照明光路に対して挿脱可能である。フォーカス調整を行う際には、反射棒67の反射面が照明光路に傾斜配置される。LED61から出力されたフォーカス光は、リレーレンズ62を通過し、スプリット指標板63により2つの光束に分離され、二孔絞り64を通過し、ミラー65により反射され、集光レンズ66により反射棒67の反射面に一旦結像されて反射される。更に、フォーカス光は、リレーレンズ20を経由し、孔開きミラー21に反射され、ダイクロイックミラー46を透過し、対物レンズ22により屈折されて眼底Efに投射される。フォーカス光の眼底反射光は、観察照明光の戻り光と同じ経路を通ってイメージセンサ35に導かれる。その受光像(スプリット指標像)に基づいてマニュアルフォーカスやオートフォーカスを実行できる。 The focus optical system 60 generates a split indicator used for focus adjustment with respect to the subject's eye E. The focus optical system 60 is moved along the optical path (illumination optical path) of the illumination optical system 10 in conjunction with the movement of the photographing focusing lens 31 along the optical path (photographing optical path) of the imaging optical system 30. The reflecting rod 67 can be inserted into and removed from the illumination optical path. When performing focus adjustment, the reflecting surface of the reflecting rod 67 is placed at an angle in the illumination optical path. Focus light output from an LED 61 passes through a relay lens 62, is split into two light beams by a split indicator plate 63, passes through a two-hole diaphragm 64, is reflected by a mirror 65, and is once imaged on the reflecting surface of the reflecting rod 67 by a condenser lens 66 and reflected. Furthermore, the focus light passes through the relay lens 20, is reflected by the apertured mirror 21, passes through the dichroic mirror 46, and is refracted by the objective lens 22 to be projected onto the fundus Ef. The fundus reflection light of the focus light is guided to the image sensor 35 through the same path as the return light of the observation illumination light. Manual focus and autofocus can be performed based on the received light image (split indicator image).
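Autofocus based on a split indicator can be sketched as a simple closed loop that drives the focusing lens until the two indicator images align; this is an illustrative sketch, with `measure_offset` and `move_lens` as hypothetical callbacks into the instrument and the gain and tolerance chosen arbitrarily.

```python
def autofocus(measure_offset, move_lens, tol=0.1, gain=0.8, max_iter=50):
    """Drive the focusing lens until the two split-indicator line images
    align (offset ~ 0). measure_offset/move_lens are hypothetical
    callbacks into the instrument; gain/tol are illustrative."""
    for _ in range(max_iter):
        off = measure_offset()
        if abs(off) < tol:
            return True          # in focus
        move_lens(-gain * off)   # proportional correction
    return False                 # failed to converge

# Simulated optics: the indicator offset is proportional to defocus,
# and moving the lens reduces the defocus.
state = {"defocus": 5.0}

def measure_offset():
    return 2.0 * state["defocus"]

def move_lens(delta):
    state["defocus"] += delta / 2.0

ok = autofocus(measure_offset, move_lens)
```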
 ダイクロイックミラー46は、眼底撮影用光路とOCT用光路とを合成する。ダイクロイックミラー46は、OCTに用いられる波長帯の光を反射し、眼底撮影用の光を透過させる。OCT用光路(測定光の光路)には、OCTユニット100側からダイクロイックミラー46側に向かって順に、コリメータレンズユニット40、光路長変更部41、光スキャナ42、OCT合焦レンズ43、ミラー44、及びリレーレンズ45が設けられている。 The dichroic mirror 46 synthesizes the fundus imaging optical path and the OCT optical path. The dichroic mirror 46 reflects light in the wavelength band used for OCT and transmits light for fundus imaging. The optical path for OCT (the optical path of the measurement light) includes, in order from the OCT unit 100 side toward the dichroic mirror 46 side, a collimator lens unit 40, an optical path length changing section 41, an optical scanner 42, an OCT focusing lens 43, a mirror 44, and a relay lens 45 are provided.
 光路長変更部41は、図1に示す矢印の方向に移動可能とされ、OCT用光路の長さを変更する。この光路長の変更は、眼軸長に応じた光路長補正や、干渉状態の調整などに利用される。光路長変更部41は、コーナーキューブと、これを移動する機構とを含む。 The optical path length changing unit 41 is movable in the direction of the arrow shown in FIG. 1, and changes the length of the OCT optical path. This change in optical path length is used for optical path length correction according to the axial length of the eye, adjustment of the state of interference, and the like. The optical path length changing section 41 includes a corner cube and a mechanism for moving it.
 光スキャナ42は、被検眼Eの瞳孔と光学的に共役な位置に配置される。光スキャナ42は、OCT用光路を通過する測定光LSを偏向する。光スキャナ42は、例えば、2次元走査が可能なガルバノスキャナである。 The optical scanner 42 is arranged at a position optically conjugate with the pupil of the eye E to be examined. The optical scanner 42 deflects the measurement light LS passing through the OCT optical path. The optical scanner 42 is, for example, a galvanometer scanner capable of two-dimensional scanning.
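A two-dimensional scan by such a scanner is typically driven as a raster of deflection targets; the following sketch generates the target grid, with the scan dimensions chosen purely for illustration.

```python
import numpy as np

def raster_scan_coords(x_range, y_range, n_points, n_lines):
    """Deflection targets for a 2-D raster scan: n_lines scan lines
    (B-scans), each with n_points A-scan positions, centred on the
    optical axis."""
    xs = np.linspace(-x_range / 2, x_range / 2, n_points)
    ys = np.linspace(-y_range / 2, y_range / 2, n_lines)
    return [(float(x), float(y)) for y in ys for x in xs]

# e.g. a 6 mm x 6 mm raster (sizes and sampling are illustrative)
coords = raster_scan_coords(6.0, 6.0, n_points=5, n_lines=3)
```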
 OCT合焦レンズ43は、OCT用の光学系のフォーカス調整を行うために、測定光LSの光路に沿って移動される。撮影合焦レンズ31の移動、フォーカス光学系60の移動、及びOCT合焦レンズ43の移動を連係的に制御することができる。 The OCT focusing lens 43 is moved along the optical path of the measurement light LS in order to adjust the focus of the OCT optical system. Movement of the imaging focusing lens 31, movement of the focusing optical system 60, and movement of the OCT focusing lens 43 can be controlled in a coordinated manner.
[前眼部カメラ5A及び5B]
 前眼部カメラ5A及び5Bは、特開2013-248376号公報に開示された発明と同様に、眼科装置1の光学系と被検眼Eとの間の相対位置を求めるために用いられる。前眼部カメラ5A及び5Bは、光学系が格納された筐体(眼底カメラユニット2等)の被検眼E側の面に設けられている。眼科装置1は、前眼部カメラ5A及び5Bにより異なる方向から実質的に同時に取得された2つの前眼部画像を解析することにより、光学系と被検眼Eとの間の3次元的な相対位置を求める。2つの前眼部画像の解析は、特開2013-248376号公報に開示された解析と同様であってよい。また、前眼部カメラの個数は2以上の任意の個数であってよい。
[ Anterior segment cameras 5A and 5B]
The anterior eye cameras 5A and 5B are used to determine the relative position between the optical system of the ophthalmologic apparatus 1 and the subject's eye E, as in the invention disclosed in Japanese Patent Application Laid-Open No. 2013-248376. The anterior eye cameras 5A and 5B are provided on the subject's-eye-side surface of the housing (the fundus camera unit 2, etc.) in which the optical system is housed. The ophthalmologic apparatus 1 determines the three-dimensional relative position between the optical system and the subject's eye E by analyzing two anterior segment images acquired substantially simultaneously from different directions by the anterior eye cameras 5A and 5B. The analysis of the two anterior segment images may be the same as the analysis disclosed in Japanese Patent Application Laid-Open No. 2013-248376. The number of anterior segment cameras may be any number of two or more.
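One standard way to recover a three-dimensional position from two camera views is midpoint triangulation of the two viewing rays. The sketch below assumes calibrated camera positions and ray directions, which the patent itself does not specify; the geometry and units are illustrative.

```python
import numpy as np

def triangulate_midpoint(p1, d1, p2, d2):
    """Midpoint triangulation: the point halfway between the closest
    points of the two viewing rays p1 + t1*d1 and p2 + t2*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Normal equations for minimising |(p1 + t1*d1) - (p2 + t2*d2)|
    a = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(a, b)
    return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2.0

# Toy geometry: two cameras 30 mm either side of the optical axis,
# both observing a pupil at (0, 0, 100) (millimetres, illustrative).
target = np.array([0.0, 0.0, 100.0])
c1 = np.array([-30.0, 0.0, 0.0])
c2 = np.array([30.0, 0.0, 0.0])
pos = triangulate_midpoint(c1, target - c1, c2, target - c2)
```

With noisy ray directions the two rays no longer intersect, and the midpoint of their closest approach serves as the position estimate.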
 本例では、2以上の前眼部カメラを利用して被検眼Eの位置(つまり被検眼Eと光学系との相対位置)を求めているが、被検眼Eの位置を求めるための手法はこれに限定されない。例えば、被検眼Eの正面画像(例えば前眼部Eaの観察画像)を解析することにより、被検眼Eの位置を求めることができる。或いは、被検眼Eの角膜に指標を投影する手段を設け、この指標の投影位置(つまり、この指標の角膜反射光束の検出状態)に基づいて被検眼Eの位置を求めることができる。 In this example, the position of the subject's eye E (that is, the relative position between the subject's eye E and the optical system) is determined using two or more anterior segment cameras, but the method for determining the position of the subject's eye E is not limited to this. For example, the position of the subject's eye E can be determined by analyzing a front image of the subject's eye E (for example, an observation image of the anterior segment Ea). Alternatively, means for projecting an indicator onto the cornea of the subject's eye E can be provided, and the position of the subject's eye E can be determined based on the projection position of this indicator (that is, the detection state of the corneal reflection light flux of this indicator).
[OCTユニット100]
 図2に例示するように、OCTユニット100には、スウェプトソースOCTを実行するための光学系が設けられている。この光学系は、干渉光学系を含む。この干渉光学系は、波長可変光源(波長掃引型光源)からの光を測定光と参照光とに分割する機能と、被検眼Eからの測定光の戻り光と参照光路を経由した参照光とを重ね合わせて干渉光を生成する機能と、この干渉光を検出する機能とを備える。干渉光学系により得られた干渉光の検出結果(検出信号)は、干渉光のスペクトルを示す信号であり、演算制御ユニット200に送られる。
[OCT unit 100]
As illustrated in FIG. 2, the OCT unit 100 is provided with an optical system for performing swept-source OCT. This optical system includes an interference optical system. This interference optical system has a function of dividing light from a wavelength tunable light source (wavelength swept light source) into measurement light and reference light, a function of superposing the return light of the measurement light from the subject's eye E and the reference light that has traveled along a reference light path to generate interference light, and a function of detecting this interference light. The detection result (detection signal) of the interference light obtained by the interference optical system is a signal representing the spectrum of the interference light, and is sent to the arithmetic control unit 200.
 光源ユニット101は、例えば、出射光の波長を高速で変化させる近赤外波長可変レーザーを含む。光源ユニット101から出力された光L0は、光ファイバ102により偏波コントローラ103に導かれてその偏光状態が調整される。偏光状態が調整された光L0は、光ファイバ104によりファイバカプラ105に導かれて測定光LSと参照光LRとに分割される。 The light source unit 101 includes, for example, a near-infrared tunable laser that changes the wavelength of emitted light at high speed. The light L0 output from the light source unit 101 is guided to the polarization controller 103 by the optical fiber 102, and the polarization state is adjusted. The light L0 whose polarization state has been adjusted is guided by the optical fiber 104 to the fiber coupler 105 and split into the measurement light LS and the reference light LR.
 参照光LRは、光ファイバ110によりコリメータ111に導かれて平行光束に変換され、光路長補正部材112及び分散補償部材113を経由し、コーナーキューブ114に導かれる。光路長補正部材112は、参照光LRの光路長と測定光LSの光路長とを合わせるよう作用する。分散補償部材113は、参照光LRと測定光LSとの間の分散特性を合わせるよう作用する。コーナーキューブ114は、参照光LRの入射方向に移動可能であり、それにより参照光LRの光路長が変更される。 The reference light LR is guided to the collimator 111 by the optical fiber 110, converted into a parallel beam, passed through the optical path length correction member 112 and the dispersion compensation member 113, and guided to the corner cube 114. The optical path length correction member 112 acts to match the optical path length of the reference light LR and the optical path length of the measurement light LS. The dispersion compensation member 113 acts to match the dispersion characteristics between the reference light LR and the measurement light LS. The corner cube 114 is movable in the incident direction of the reference light LR, thereby changing the optical path length of the reference light LR.
 コーナーキューブ114を経由した参照光LRは、分散補償部材113及び光路長補正部材112を経由し、コリメータ116によって平行光束から集束光束に変換され、光ファイバ117に入射する。光ファイバ117に入射した参照光LRは、偏波コントローラ118に導かれてその偏光状態が調整され、光ファイバ119によりアッテネータ120に導かれて光量が調整され、光ファイバ121によりファイバカプラ122に導かれる。 The reference light LR that has passed through the corner cube 114 passes through the dispersion compensation member 113 and the optical path length correction member 112, is converted from a parallel light beam into a converging light beam by a collimator 116, and enters an optical fiber 117. The reference light LR that has entered the optical fiber 117 is guided to a polarization controller 118 where its polarization state is adjusted, guided to an attenuator 120 by an optical fiber 119 where its light amount is adjusted, and guided to the fiber coupler 122 by an optical fiber 121.
Meanwhile, the measurement light LS generated by the fiber coupler 105 is guided by the optical fiber 127, converted into a parallel beam by the collimator lens unit 40, and passes through the optical path length changing unit 41, the optical scanner 42, the OCT focusing lens 43, the mirror 44, and the relay lens 45. The measurement light LS that has passed through the relay lens 45 is reflected by the dichroic mirror 46, refracted by the objective lens 22, and enters the subject's eye E. The measurement light LS is scattered and reflected at various depth positions of the eye E. The return light of the measurement light LS from the eye E travels in the reverse direction along the same path as the forward path, is guided to the fiber coupler 105, and reaches the fiber coupler 122 via the optical fiber 128. Note that the incident end of the optical fiber 127 on which the measurement light LS is incident is arranged at a position substantially conjugate with the fundus Ef of the eye E.
The fiber coupler 122 combines (causes interference between) the measurement light LS that has entered via the optical fiber 128 and the reference light LR that has entered via the optical fiber 121 to generate interference light. The fiber coupler 122 generates a pair of interference lights LC by splitting the interference light at a predetermined splitting ratio (for example, 1:1). The pair of interference lights LC are guided to the detector 125 through the optical fibers 123 and 124, respectively.
The detector 125 is, for example, a balanced photodiode. The balanced photodiode includes a pair of photodetectors that respectively detect the pair of interference lights LC, and outputs the difference between the pair of detection results obtained by these photodetectors. The detector 125 sends this output (detection signal) to a DAQ (Data Acquisition System) 130.
A clock KC is supplied from the light source unit 101 to the DAQ 130. The clock KC is generated in the light source unit 101 in synchronization with the output timing of each wavelength swept within a predetermined wavelength range by the wavelength-tunable light source. For example, the light source unit 101 splits the light L0 of each output wavelength into two branched lights, optically delays one of them, and generates the clock KC based on the result of detecting the combined light. The DAQ 130 samples the detection signal input from the detector 125 based on the clock KC. The DAQ 130 sends the sampling results of the detection signal from the detector 125 to the arithmetic control unit 200.
In this example, both the optical path length changing unit 41 for changing the length of the optical path of the measurement light LS (measurement optical path, measurement arm) and the corner cube 114 for changing the length of the optical path of the reference light LR (reference optical path, reference arm) are provided. However, only one of the optical path length changing unit 41 and the corner cube 114 may be provided. It is also possible to change the difference between the measurement optical path length and the reference optical path length using an optical member other than these.
[Control system]
FIGS. 3 to 5 show configuration examples of the control system of the ophthalmologic apparatus 1. In FIGS. 3 to 5, some of the components included in the ophthalmologic apparatus 1 are omitted. In FIG. 3, the same parts as those in FIGS. 1 and 2 are denoted by the same reference numerals, and descriptions thereof are omitted as appropriate. The control unit 210, the image forming unit 220, and the data processing unit 230 are provided, for example, in the arithmetic control unit 200.
<Control unit 210>
The control unit 210 executes various controls. The control unit 210 includes a main control unit 211 and a storage unit 212.
<Main control unit 211>
The main control unit 211 includes a processor (for example, a control processor) and controls each part of the ophthalmologic apparatus 1 (including the elements shown in FIGS. 1 to 5). For example, the main control unit 211 controls each part of the optical system of the retinal camera unit 2 shown in FIGS. 1 and 2, each part of the optical system of the OCT unit 100, the anterior segment cameras 5A and 5B, the movement mechanism 150 that moves these optical systems, the image forming unit 220, the data processing unit 230, and the user interface (UI) 240.
The control over the retinal camera unit 2 includes control over the focusing drive units 31A and 43A, control over the wavelength-tunable filter 80, control over the image sensors 35 and 38, control over the optical path length changing unit 41, and control over the optical scanner 42.
The control over the focusing drive unit 31A includes control for moving the photographing focusing lens 31 in the optical axis direction. The control over the focusing drive unit 43A includes control for moving the OCT focusing lens 43 in the optical axis direction.
The control over the wavelength-tunable filter 80 includes, for example, selection control of the wavelength range of transmitted light (for example, control of the voltage applied to the liquid crystal).
The control over the image sensors 35 and 38 includes control of the light receiving sensitivity of the imaging elements, control of the frame rate (light receiving timing, exposure time), control of the light receiving area (position, size), readout control of the light receiving results of the imaging elements, and the like. In some embodiments, the image sensors 35 and 38 are controlled by changing the exposure time according to the wavelength range of the return light so that the received light intensity is uniform in each wavelength range of the analysis wavelength range in which the plurality of spectral fundus images are acquired. In some embodiments, the main control unit 211 controls the intensity of the wavelength components of the illumination light in each wavelength range so that the received light intensity is uniform in each wavelength range of the analysis wavelength range in which the plurality of spectral fundus images are acquired.
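For illustration only (this is an assumption, not the control law disclosed for the apparatus), one simple way to realize the uniform-received-intensity control described above is to scale each band's exposure time inversely to its expected signal rate:

```python
def exposure_times(band_rates, target):
    """Exposure time per wavelength band so that every band yields
    roughly the same received counts: t_band = target / rate_band."""
    return {band: target / rate for band, rate in band_rates.items()}

# Hypothetical per-band signal rates (counts per ms) for three bands;
# the band names and numbers are illustrative, not from the patent.
rates = {"550nm": 200.0, "600nm": 400.0, "650nm": 100.0}
times = exposure_times(rates, target=2000.0)
print(times["650nm"])  # 20.0: the weakest band gets the longest exposure
```

Each band then accumulates approximately the same number of counts, which is the uniformity condition stated above.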
The control over the LCD 39 includes control of the fixation position. For example, the main control unit 211 displays the fixation target at a position on the screen of the LCD 39 corresponding to a manually or automatically set fixation position. The main control unit 211 can also change (continuously or stepwise) the display position of the fixation target displayed on the LCD 39. This allows the fixation target to be moved (that is, the fixation position to be changed). The display position and movement mode of the fixation target are set manually or automatically. Manual setting is performed, for example, using a GUI. Automatic setting is performed, for example, by the data processing unit 230.
The control over the optical path length changing unit 41 includes control for changing the optical path length of the measurement light LS. The main control unit 211 moves the optical path length changing unit 41 along the optical path of the measurement light LS by controlling a drive unit that drives the corner cube of the optical path length changing unit 41, thereby changing the optical path length of the measurement light LS.
The control over the optical scanner 42 includes control of the scan mode, the scan range (scan start position, scan end position), the scan speed, and the like. By controlling the optical scanner 42, the main control unit 211 can perform an OCT scan with the measurement light LS on a desired region of the measurement site (imaging site).
The main control unit 211 also controls the observation light source 11, the photographing light source 15, the focus optical system 60, and the like.
The control over the OCT unit 100 includes control over the light source unit 101, control over the reference drive unit 114A, control over the detector 125, and control over the DAQ 130.
The control over the light source unit 101 includes on/off control of the light source, control of the amount of light emitted from the light source, control of the wavelength sweep range, control of the wavelength sweep speed, control of the emission timing of the light of each wavelength component, and the like.
The control over the reference drive unit 114A includes control for changing the optical path length of the reference light LR. The main control unit 211 moves the corner cube 114 along the optical path of the reference light LR by controlling the reference drive unit 114A, thereby changing the optical path length of the reference light LR.
The control over the detector 125 includes control of the light receiving sensitivity of the detection element, control of the frame rate (light receiving timing), control of the light receiving area (position, size), readout control of the light receiving results of the detection element, and the like.
The control over the DAQ 130 includes acquisition control (acquisition timing, sampling timing) of the detection results of the interference light obtained by the detector 125, readout control of the interference signal corresponding to the acquired detection results of the interference light, and the like.
The control over the anterior segment cameras 5A and 5B includes control of the light receiving sensitivity of each camera, control of the frame rate (light receiving timing), synchronization control of the anterior segment cameras 5A and 5B, and the like.
The movement mechanism 150, for example, moves at least the retinal camera unit 2 (optical system) three-dimensionally. In a typical example, the movement mechanism 150 includes a mechanism for moving at least the retinal camera unit 2 in the x direction (left-right direction), a mechanism for moving it in the y direction (up-down direction), and a mechanism for moving it in the z direction (depth direction, front-back direction). The mechanism for moving in the x direction includes, for example, an x stage movable in the x direction and an x movement mechanism that moves the x stage. The mechanism for moving in the y direction includes, for example, a y stage movable in the y direction and a y movement mechanism that moves the y stage. The mechanism for moving in the z direction includes, for example, a z stage movable in the z direction and a z movement mechanism that moves the z stage. Each movement mechanism includes a pulse motor as an actuator and operates under the control of the main control unit 211.
The control over the movement mechanism 150 is used in alignment and tracking. Tracking is to move the apparatus optical system in accordance with the eye movement of the subject's eye E. When tracking is performed, alignment and focus adjustment are performed in advance. Tracking is a function of maintaining a suitable positional relationship, in which alignment and focus are achieved, by making the position of the apparatus optical system follow the eye movement. Some embodiments are configured to control the movement mechanism 150 in order to change the optical path length of the reference light (and thus the optical path length difference between the optical path of the measurement light and the optical path of the reference light).
In the case of manual alignment, the user operates the user interface 240 to move the optical system and the subject's eye E relative to each other so that the displacement of the subject's eye E with respect to the optical system is canceled. For example, the main control unit 211 controls the movement mechanism 150 by outputting a control signal corresponding to the operation content on the user interface 240 to the movement mechanism 150, thereby moving the optical system relative to the subject's eye E.
In the case of auto alignment, the main control unit 211 controls the movement mechanism 150 so that the displacement of the subject's eye E with respect to the optical system is canceled, thereby moving the optical system relative to the subject's eye E. Specifically, as described in Japanese Unexamined Patent Application Publication No. 2013-248376, arithmetic processing using trigonometry based on the positional relationship between the pair of anterior segment cameras 5A and 5B and the subject's eye E is performed, and the main control unit 211 controls the movement mechanism 150 so that the positional relationship of the subject's eye E with respect to the optical system becomes a predetermined positional relationship. In some embodiments, the main control unit 211 controls the movement mechanism 150 by outputting a control signal to the movement mechanism 150 so that the optical axis of the optical system substantially coincides with the axis of the subject's eye E and the distance of the optical system from the subject's eye E becomes a predetermined working distance, thereby moving the optical system relative to the subject's eye E. Here, the working distance is a default value, also called the working distance of the objective lens 22, and corresponds to the distance between the subject's eye E and the optical system at the time of measurement (photographing) using the optical system.
As a display control unit, the main control unit 211 can display various types of information on the display unit 240A. For example, the main control unit 211 causes the display unit 240A to display a plurality of spectral fundus images in association with wavelength ranges. As another example, the main control unit 211 causes the display unit 240A to display analysis processing results obtained by the analysis unit 231 described later.
<Storage unit 212>
The storage unit 212 stores various types of data. The function of the storage unit 212 is realized by a storage device such as a memory or a storage apparatus. The data stored in the storage unit 212 includes, for example, control parameters, image data of fundus images, image data of anterior segment images, OCT data (including OCT images), spectral image data of fundus images, spectral image data of anterior segment images, subject's eye information, and the like. The control parameters include hyperspectral imaging control data and the like. The hyperspectral imaging control data is control data for acquiring a plurality of fundus images based on return lights having mutually different center wavelengths within a predetermined analysis wavelength range. Examples of the hyperspectral imaging control data include the analysis wavelength range in which the plurality of spectral fundus images are acquired, the wavelength range in which each spectral fundus image is acquired, the center wavelength, the center wavelength step, and control data of the wavelength-tunable filter 80 corresponding to the center wavelength. The subject's eye information includes information about the subject, such as patient ID and name, and information about the subject's eye, such as left eye/right eye identification information and electronic medical record information. The storage unit 212 stores programs to be executed by the various processors (control processor, image forming processor, data processing processor).
<Image forming unit 220>
The image forming unit 220 includes a processor (for example, an image forming processor) and forms an OCT image (image data) of the subject's eye E based on the output from the DAQ 130 (the sampling results of the detection signal). For example, as in conventional swept-source OCT, the image forming unit 220 applies signal processing to the spectral distribution based on the sampling results for each A-line to form a reflection intensity profile for each A-line, and converts these A-line profiles into images arranged along the scan lines. The signal processing includes noise removal (noise reduction), filtering, FFT (Fast Fourier Transform), and the like. When another type of OCT is performed, the image forming unit 220 executes known processing according to that type.
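The per-A-line processing described above (signal processing on the sampled spectral interferogram, then conversion to a reflection intensity profile) can be sketched as follows. This is a minimal illustration, not the actual implementation of the image forming unit 220; the window choice, array length, and synthetic input are assumptions.

```python
import numpy as np

def a_line_profile(interferogram: np.ndarray) -> np.ndarray:
    """Convert one sampled spectral interferogram into a reflection
    intensity profile (depth profile) via windowing and FFT."""
    # Remove the DC component, then suppress FFT side lobes with a window.
    sig = interferogram - interferogram.mean()
    sig = sig * np.hanning(len(sig))
    # The FFT maps wavenumber samples to depth; keep the positive-depth half.
    spectrum = np.fft.fft(sig)
    return np.abs(spectrum[: len(sig) // 2])

# Synthetic interferogram: a single reflector produces one spectral
# fringe frequency, which the FFT turns into a single depth peak.
n = 1024
k = np.arange(n)
interf = np.cos(2 * np.pi * 100 * k / n)  # reflector at depth bin 100
profile = a_line_profile(interf)
print(int(np.argmax(profile)))  # 100
```

Arranging such profiles side by side along the scan line yields a B-scan image, as described in the paragraph above.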
<Data processing unit 230>
The data processing unit 230 includes a processor (for example, a data processing processor) and applies image processing and analysis processing to the images formed by the image forming unit 220. At least two of the processor included in the main control unit 211, the processor included in the data processing unit 230, and the processor included in the image forming unit 220 may be configured as a single processor.
The data processing unit 230 executes known image processing, such as interpolation processing for interpolating pixels between tomographic images, to form image data of a three-dimensional image of the fundus Ef or the anterior segment Ea. Note that image data of a three-dimensional image means image data in which pixel positions are defined by a three-dimensional coordinate system. Image data of a three-dimensional image includes image data composed of three-dimensionally arranged voxels. Such image data is called volume data or voxel data. When displaying an image based on volume data, the data processing unit 230 applies rendering processing (volume rendering, MIP (Maximum Intensity Projection), etc.) to the volume data to form image data of a pseudo three-dimensional image as viewed from a specific viewing direction. This pseudo three-dimensional image is displayed on a display device such as the display unit 240A.
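Of the rendering methods named above, MIP is the simplest to illustrate: each pixel of the output image is the maximum voxel value along the viewing direction. The sketch below (an illustration under assumed axis conventions, not the disclosed rendering code) shows this for a volume indexed as (z, y, x):

```python
import numpy as np

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum intensity projection of a voxel volume along one axis."""
    return volume.max(axis=axis)

# A single bright voxel dominates its projection column regardless of
# its depth position, which is what makes MIP useful for display.
vol = np.zeros((4, 3, 3))
vol[2, 1, 1] = 5.0           # bright voxel at depth index 2
image = mip(vol, axis=0)     # project along z -> (y, x) image
print(image[1, 1])           # 5.0
```

Projecting along a different `axis` corresponds to choosing a different viewing direction for the pseudo three-dimensional image.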
It is also possible to form stack data of a plurality of tomographic images as image data of a three-dimensional image. Stack data is image data obtained by three-dimensionally arranging a plurality of tomographic images obtained along a plurality of scan lines based on the positional relationship of the scan lines. That is, stack data is image data obtained by expressing a plurality of tomographic images, originally defined by individual two-dimensional coordinate systems, in a single three-dimensional coordinate system (that is, embedding them in a single three-dimensional space).
In some embodiments, the data processing unit 230 generates a B-scan image by arranging A-scan images in the B-scan direction. In some embodiments, the data processing unit 230 applies various kinds of rendering to the acquired three-dimensional data set (volume data, stack data, etc.) to form a B-mode image (B-scan image) at an arbitrary cross section (longitudinal cross-sectional image, axial cross-sectional image), a C-mode image (C-scan image) at an arbitrary cross section (transverse cross-sectional image, horizontal cross-sectional image), a projection image, a shadowgram, and the like. An image at an arbitrary cross section, such as a B-scan image or a C-scan image, is formed by selecting pixels (voxels) on the specified cross section from the three-dimensional data set. A projection image is formed by projecting the three-dimensional data set in a predetermined direction (z direction, depth direction, axial direction). A shadowgram is formed by projecting part of the three-dimensional data set (for example, partial data corresponding to a specific layer) in a predetermined direction. Two or more mutually different shadowgrams can be formed by changing the depth range in the layer direction over which the integration is performed. An image whose viewpoint is the front side of the subject's eye, such as a C-scan image, a projection image, or a shadowgram, is called a front image (en-face image).
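The shadowgram formation described above, projecting only a selected depth sub-range of the volume along the depth axis, can be sketched as follows (an illustration under the assumption of a (z, y, x) volume layout and summation as the projection; the disclosed processing is not limited to this):

```python
import numpy as np

def shadowgram(volume: np.ndarray, z_start: int, z_end: int) -> np.ndarray:
    """Project the depth sub-range [z_start, z_end) of a (z, y, x)
    volume along the depth axis to form an en-face image."""
    return volume[z_start:z_end].sum(axis=0)

vol = np.arange(24, dtype=float).reshape(4, 3, 2)  # toy (z, y, x) volume
en_face = shadowgram(vol, 1, 3)  # integrate depth indices 1 and 2 only
print(en_face.shape)  # (3, 2)
```

Changing `z_start` and `z_end` (for example, to the boundaries of a segmented retinal layer) yields the two or more mutually different shadowgrams mentioned above.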
Based on data collected in time series by OCT (for example, B-scan image data), the data processing unit 230 can construct B-scan images and front images (blood-vessel-enhanced images, angiograms) in which retinal blood vessels and choroidal blood vessels are enhanced. For example, time-series OCT data can be collected by repeatedly scanning substantially the same site of the subject's eye E.
In some embodiments, the data processing unit 230 compares time-series B-scan images obtained by B-scans of substantially the same site, and constructs an enhanced image in which changed portions are enhanced by converting the pixel values of portions where the signal intensity changes into pixel values corresponding to the amount of change. Furthermore, the data processing unit 230 forms an OCTA (angiography) image by extracting information for a predetermined thickness at a desired site from the plurality of constructed enhanced images and constructing it as an en-face image.
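The change-enhancement step described above can be illustrated by mapping each pixel to the spread of its intensity across repeated B-scans. Using the standard deviation as the "amount of change" is one common choice and an assumption here, not the specific conversion disclosed:

```python
import numpy as np

def change_enhanced_image(b_scans: np.ndarray) -> np.ndarray:
    """Given repeated B-scans of the same site stacked along axis 0,
    map each pixel to the magnitude of its temporal intensity change.
    Static tissue yields ~0; moving blood yields large values."""
    return b_scans.std(axis=0)

# Three repeats of a 2x2 B-scan: one pixel fluctuates (flow), rest static.
scans = np.array([
    [[10.0, 3.0], [7.0, 7.0]],
    [[10.0, 9.0], [7.0, 7.0]],
    [[10.0, 6.0], [7.0, 7.0]],
])
enhanced = change_enhanced_image(scans)
print(enhanced[0, 0])  # 0.0 for a static pixel
```

Integrating such enhanced images over a chosen depth range, as in the shadowgram processing above, gives the en-face OCTA image.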
Such a data processing unit 230 includes an analysis unit 231.
<Analysis unit 231>
As shown in FIG. 4, the analysis unit 231 includes a characteristic site identification unit 231A, a three-dimensional position calculation unit 231B, and a spectral distribution data processing unit 231C.
The analysis unit 231 can analyze an image of the subject's eye E (including a spectral fundus image) and identify a characteristic site depicted in the image. For example, the analysis unit 231 obtains the three-dimensional position of the subject's eye E based on the positions of the anterior segment cameras 5A and 5B and the position of the identified characteristic site. The main control unit 211 aligns the optical system with the subject's eye E by moving the optical system relative to the subject's eye E based on the obtained three-dimensional position.
The analysis unit 231 can also execute predetermined analysis processing on the plurality of spectral fundus images. Examples of the predetermined analysis processing include comparison processing of any two of the plurality of spectral fundus images, extraction processing of a common region or a difference region identified by the comparison processing, identification processing of a site of interest or a characteristic site in at least one of the plurality of spectral fundus images, identification display processing of the common region, the difference region, the site of interest, or the characteristic site in a spectral fundus image, and composition processing of at least two of the plurality of spectral fundus images.
The analysis unit 231 also identifies a characteristic region in any of the plurality of spectral fundus images, and identifies depth information of the identified characteristic region based on OCT data serving as measurement data. In some embodiments, the analysis unit 231 aligns the plurality of spectral fundus images based on the OCT data so that each site in each spectral fundus image coincides in the z direction, and can identify a characteristic region in any of the aligned plurality of spectral fundus images.
<Characteristic site identification unit 231A>
The characteristic site identification unit 231A analyzes each captured image obtained by the anterior segment cameras 5A and 5B to identify the position in the captured image (referred to as a characteristic position) corresponding to a characteristic site of the anterior segment Ea. As the characteristic site, for example, the pupil region of the subject's eye E, the pupil center position, the pupil center-of-gravity position, the corneal center position, the corneal apex position, the center position of the subject's eye, or the iris is used. A specific example of processing for identifying the pupil center position of the subject's eye E will be described below.
First, the characteristic site identification unit 231A identifies an image region (pupil region) corresponding to the pupil of the subject's eye E based on the distribution of pixel values (luminance values, etc.) of the captured image. Since the pupil is generally depicted with lower luminance than other sites, the pupil region can be identified by searching for a low-luminance image region. At this time, the pupil region may be identified in consideration of the shape of the pupil. That is, the configuration may be such that the pupil region is identified by searching for a substantially circular, low-luminance image region.
 次に、特徴部位特定部231Aは、特定された瞳孔領域の中心位置を特定する。上記のように瞳孔は略円形であるので、瞳孔領域の輪郭を特定し、この輪郭(の近似円または近似楕円)の中心位置を特定し、これを瞳孔中心位置とすることができる。また、瞳孔領域の重心を求め、この重心位置を瞳孔重心位置として特定してもよい。 Next, the characteristic part identifying section 231A identifies the central position of the identified pupil region. Since the pupil is substantially circular as described above, it is possible to specify the outline of the pupil region, specify the center position of this outline (the approximate circle or approximate ellipse), and set this as the pupil center position. Alternatively, the center of gravity of the pupil region may be obtained and the position of the center of gravity may be specified as the position of the center of gravity of the pupil.
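The pupil-detection steps above (low-luminance thresholding followed by centroid computation) can be sketched as follows. This is a minimal illustration rather than the patent's implementation; the threshold value and function names are assumptions, and the circularity check is omitted for brevity.

```python
import numpy as np

def find_pupil_center(image, threshold=50):
    """Locate a dark (low-luminance) region and return its centroid.

    The pupil is rendered darker than surrounding tissue, so we
    threshold for low pixel values and take the centroid of the
    resulting region as the pupil center/centroid position.
    """
    mask = image < threshold                   # candidate pupil pixels
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                            # no dark region found
    return float(xs.mean()), float(ys.mean())  # (x, y) centroid

# Synthetic example: bright background with a dark disc at (x=40, y=60)
img = np.full((100, 100), 200, dtype=np.uint8)
yy, xx = np.ogrid[:100, :100]
img[(xx - 40) ** 2 + (yy - 60) ** 2 <= 10 ** 2] = 10
print(find_pupil_center(img))  # → (40.0, 60.0)
```

In practice the centroid of the thresholded mask corresponds to the "pupil centroid position" variant described above; the contour-fitting variant would instead fit a circle or ellipse to the mask boundary.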
Even when identifying characteristic positions corresponding to other characteristic sites, those positions can be identified based on the distribution of pixel values in the captured image in the same manner as described above.
The characteristic site identification unit 231A can sequentially identify characteristic positions in the captured images sequentially obtained by the anterior segment cameras 5A and 5B. Alternatively, it may identify the characteristic position only every arbitrary number (one or more) of frames of those images.
<Three-dimensional position calculation unit 231B>
The three-dimensional position calculation unit 231B identifies the three-dimensional position of the characteristic site, taken as the three-dimensional position of the subject's eye E, based on the positions of the anterior segment cameras 5A and 5B and the characteristic positions identified by the characteristic site identification unit 231A. As disclosed in Japanese Unexamined Patent Application Publication No. 2013-248376, the three-dimensional position calculation unit 231B calculates the three-dimensional position of the subject's eye E by applying known trigonometry to the (known) positions of the two anterior segment cameras 5A and 5B and the positions corresponding to the characteristic site in the two captured images. The calculated three-dimensional position is sent to the main control unit 211. Based on this three-dimensional position, the main control unit 211 controls the movement mechanism 150 so that the x- and y-positions of the optical axis of the optical system coincide with the x- and y-positions of the three-dimensional position and the distance in the z direction equals a predetermined working distance.
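The trigonometric calculation can be illustrated with a simplified rectified-stereo model: two identical pinhole cameras displaced along x, both looking along +z. This is a hypothetical simplification for illustration only; the method of JP 2013-248376 is more general, and all names and numbers here are assumptions.

```python
import numpy as np

def triangulate(p_left, p_right, baseline, focal):
    """Recover a 3D point from matched image positions in two cameras.

    Simplified stereo model: cameras separated by `baseline` along x.
    The disparity between the two image x-coordinates gives depth,
    and x, y follow by similar triangles. Assumes the point is not at
    infinity (nonzero disparity).
    """
    xl, yl = p_left
    xr, yr = p_right
    disparity = xl - xr
    z = focal * baseline / disparity   # depth from disparity
    x = xl * z / focal                 # back-project via similar triangles
    y = yl * z / focal
    return np.array([x, y, z])

# A point at (10, 5, 100) seen with baseline 20 and focal length 50:
# left image (5.0, 2.5), right image (-5.0, 2.5)
point = triangulate((5.0, 2.5), (-5.0, 2.5), baseline=20.0, focal=50.0)
print(point)  # recovers the original 3-D point (10, 5, 100)
```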
<Spectral distribution data processing unit 231C>
The spectral distribution data processing unit 231C executes processing that identifies depth information for the spectral distribution data based on the OCT data. In particular, it identifies a characteristic region in a spectral fundus image serving as the spectral distribution data and identifies depth information for the identified characteristic region. The spectral distribution data processing unit 231C can also estimate the presence or absence of a disease, the probability of a disease, or the type of a disease based on the characteristic region identified by this processing. In particular, because depth information has been identified for the characteristic region, the presence or absence of a disease and the like can be estimated with high accuracy.
As shown in FIG. 5, the spectral distribution data processing unit 231C includes a characteristic region identification unit 2311C, a depth information identification unit 2312C, and a disease estimation unit 2314C. The depth information identification unit 2312C includes a search unit 2313C.
<Characteristic region identification unit 2311C>
The characteristic region identification unit 2311C identifies a characteristic region in the spectral fundus image. Examples of characteristic regions include blood vessels, diseased sites, the optic nerve head, abnormal sites, and regions with characteristic changes in pixel luminance. The characteristic region identification unit 2311C may also identify, as the characteristic region in the spectral distribution data, a region designated by the user via the operation unit 240B of the user interface 240.
In some embodiments, the characteristic region identification unit 2311C identifies a characteristic region for each of the plurality of spectral fundus images. Two or more characteristic regions may be identified. In some embodiments, the characteristic region identification unit 2311C identifies characteristic regions for one or more spectral fundus images selected from the plurality of spectral fundus images.
In some embodiments, the characteristic region identification unit 2311C performs principal component analysis on the spectral fundus images and identifies the characteristic region using the result of the analysis. In principal component analysis of spectral fundus images, for example, principal components in one or more dimensions are identified sequentially so that each maximizes the variance of the data. Each principal component reflects a characteristic region (characteristic site).
Specifically, the characteristic region identification unit 2311C first calculates the centroid (mean) of all the data in the spectral fundus images, identifies as the first principal component the direction from that centroid in which the variance of the data is largest, and then identifies as the second principal component the direction of maximum variance orthogonal to the first. Subsequently, it identifies the (n+1)-th principal component as the direction of maximum variance orthogonal to the most recently identified n-th principal component (n being an integer of 2 or more), identifying principal components sequentially up to a predetermined dimension.
A method of identifying characteristic regions by applying principal component analysis to spectral fundus images in this way is exemplified in, for example, Japanese Unexamined Patent Application Publication No. 2007-330558. In the exemplified method, the first principal component reflects the basic shape of the retina, the second reflects the choroidal vessels, the third reflects the retinal veins, and the fifth reflects the retinal vasculature as a whole. For example, a component representing the retinal arteries can be extracted by removing the third principal component, which represents the retinal veins, from the fifth principal component, which represents the retinal vasculature as a whole. In other words, each principal component obtained by the analysis reflects a characteristic region (characteristic site) in the spectral distribution data, and the analysis result can be used to identify characteristic regions in the spectral distribution data.
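The sequential variance-maximizing procedure described above is equivalent to taking the leading eigenvectors of the data covariance matrix. A minimal sketch follows, assuming (this is an assumption, not the patent's data layout) that the spectral fundus images are stacked as an array of shape (bands, height, width), with each pixel treated as a sample whose features are its values across the wavelength-range images.

```python
import numpy as np

def principal_components(spectral_stack, n_components=3):
    """Sequentially extract principal components from a spectral image stack.

    Each pixel is a sample with one feature per wavelength band. After
    centering on the mean, the top eigenvectors of the covariance matrix
    give the components: each maximizes variance in a direction orthogonal
    to the previously identified components.
    """
    n_bands, h, w = spectral_stack.shape
    X = spectral_stack.reshape(n_bands, -1).T      # (pixels, bands)
    X = X - X.mean(axis=0)                         # center on the centroid
    cov = np.cov(X, rowvar=False)                  # (bands, bands)
    eigvals, eigvecs = np.linalg.eigh(cov)         # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]
    components = eigvecs[:, order]                 # (bands, n_components)
    scores = X @ components                        # per-pixel projections
    return scores.T.reshape(n_components, h, w), eigvals[order]

# Each component image highlights a different source of variance across
# the wavelength ranges (e.g. vasculature vs. background in the text above).
rng = np.random.default_rng(0)
stack = rng.normal(size=(8, 16, 16))               # 8 synthetic band images
comp_imgs, variances = principal_components(stack)
print(comp_imgs.shape, variances.shape)            # (3, 16, 16) (3,)
```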
In some embodiments, the characteristic region identification unit 2311C identifies the characteristic region in a spectral fundus image using at least one of the eigenvalue, contribution ratio, and cumulative contribution ratio corresponding to each principal component obtained by the principal component analysis.
In some embodiments, the characteristic region identification unit 2311C identifies the characteristic region based on the result of comparing a plurality of spectral fundus images. For example, it may identify the characteristic region by comparing two spectral fundus images whose wavelength ranges are adjacent to each other, or by comparing spectral fundus images in two predetermined wavelength ranges.
<Depth information identification unit 2312C>
The depth information identification unit 2312C identifies depth information for the characteristic region identified by the characteristic region identification unit 2311C.
Examples of depth information include information representing a position in the depth direction (the direction of the measurement optical axis) relative to a predetermined reference site, information representing a range of positions in the depth direction, information representing a layer region, and information representing a tissue. Examples of the predetermined reference site include the fundus surface of the subject's eye, a predetermined layer region of the retina of the subject's eye, the corneal apex of the subject's eye, the site where the intensity of light reflected from the subject's eye is maximal, and a predetermined site of the anterior segment of the subject's eye.
The depth information identification unit 2312C identifies the depth information of the characteristic region identified by the characteristic region identification unit 2311C using the OCT data, which has higher resolution in the depth direction than the spectral distribution data. Specifically, it searches the OCT data for the region with the highest degree of correlation with the identified characteristic region, and takes the depth information of that region of the OCT data as the depth information of the characteristic region.
In some embodiments, the main control unit 211, acting as a display control unit, causes the display unit 240A to display the spectral fundus image (spectral distribution data) together with the depth information identified by the depth information identification unit 2312C. At this time, the main control unit 211 can also cause the display unit 240A to display the OCT data corresponding to that depth information along with the spectral fundus image and the depth information.
<Search unit 2313C>
The search unit 2313C searches the OCT data (for example, three-dimensional OCT data) for the region with the highest degree of correlation with the spectral fundus image in a predetermined wavelength range. In some embodiments, the search unit 2313C calculates a degree of correlation between each of a plurality of regions of the OCT data and the spectral fundus image, and identifies the region of the OCT data with the highest of the calculated degrees of correlation.
For example, a plurality of front images differing in depth position (en-face images, C-scan images, projection images, OCT angiography images) are formed in advance based on the OCT data of the subject's eye E. In this case, the search unit 2313C calculates a degree of correlation between each of the plurality of front images and the spectral fundus image in the predetermined wavelength range, identifies the front image with the highest degree of correlation, and takes the depth information of that front image as the depth information of the spectral fundus image.
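The correlation search over depth-resolved front images can be sketched as follows. This is a minimal illustration: the use of Pearson correlation over whole images and the function and parameter names are assumptions, not details specified by the patent.

```python
import numpy as np

def find_depth(spectral_img, enface_images, depths):
    """Assign a depth to a spectral image by correlating with en-face images.

    Computes the Pearson correlation between the spectral fundus image and
    each en-face image (one per known depth position) and returns the depth
    of the best match together with its correlation value.
    """
    flat = spectral_img.ravel().astype(float)
    correlations = [
        np.corrcoef(flat, ef.ravel().astype(float))[0, 1]
        for ef in enface_images
    ]
    best = int(np.argmax(correlations))
    return depths[best], correlations[best]

# Synthetic check: the en-face image at the middle depth is a noisy copy
# of the spectral image, so the search should select that depth.
rng = np.random.default_rng(1)
spec = rng.normal(size=(32, 32))
enfaces = [rng.normal(size=(32, 32)) for _ in range(3)]
enfaces[1] = spec + 0.1 * rng.normal(size=(32, 32))
depth, corr = find_depth(spec, enfaces, depths=[60, 120, 180])
print(depth, corr)  # the noisy-copy depth (120) wins with high correlation
```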
In some embodiments, the search unit 2313C calculates the above degrees of correlation against a three-dimensional OCT image of the subject's eye E formed based on the OCT data of the subject's eye E, and identifies the region of the three-dimensional image with the highest degree of correlation. The depth information identification unit 2312C takes the depth information of the identified region of the three-dimensional image as the depth information of the spectral fundus image.
The search unit 2313C can also search the OCT data for the region with the highest degree of correlation with the characteristic region (in the broad sense, the spectral distribution data) identified by the characteristic region identification unit 2311C. In some embodiments, the search unit 2313C calculates a degree of correlation between each of a plurality of regions of the OCT data and the identified characteristic region, and identifies the region of the OCT data with the highest of the calculated degrees of correlation.
For example, a plurality of front images differing in depth position are formed in advance based on the OCT data of the subject's eye E. In this case, for each front image, the search unit 2313C calculates a degree of correlation between each of a plurality of regions of the front image and the characteristic region identified by the characteristic region identification unit 2311C. The search unit 2313C identifies, in each front image, the region with the highest degree of correlation with the characteristic region, and then, from among the front images whose best-matching regions have been identified, the front image containing the region with the highest degree of correlation overall. The search unit 2313C takes the depth information of that front image as the depth information of the characteristic region identified by the characteristic region identification unit 2311C.
In some embodiments, the search unit 2313C calculates the above degrees of correlation against a three-dimensional OCT image of the subject's eye E formed based on the OCT data of the subject's eye E, and identifies the region of the three-dimensional image with the highest degree of correlation. The depth information identification unit 2312C takes the depth information of the identified region of the three-dimensional image as the depth information of the characteristic region identified by the characteristic region identification unit 2311C.
<Disease estimation unit 2314C>
The disease estimation unit 2314C estimates the presence or absence of a disease, the probability of a disease, or the type of a disease based on the front image found by the search unit 2313C (or the region in the front image corresponding to the characteristic region identified by the characteristic region identification unit 2311C). In some embodiments, the disease estimation unit 2314C performs this estimation based on two or more front images in a predetermined depth range that includes the found front image.
For example, a plurality of image patterns corresponding to disease types are registered in the disease estimation unit 2314C in advance. The disease estimation unit 2314C calculates a degree of correlation between the found front image (or the above region in the front image) and each of the image patterns; when a degree of correlation is equal to or greater than a predetermined threshold, it generates disease information indicating that the subject's eye E is estimated to have a disease. This disease information can include both the estimate that a disease is present and the disease type corresponding to the image pattern whose correlation reached the threshold. When the degree of correlation is below the threshold, the disease estimation unit 2314C generates disease information indicating that the subject's eye E is estimated to be free of disease.
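The pattern-matching estimation described above can be sketched as follows. The pattern names, threshold value, and output format are illustrative assumptions; the patent does not specify them.

```python
import numpy as np

def estimate_disease(front_image, patterns, threshold=0.5):
    """Screen a front image against pre-registered disease image patterns.

    Correlates the image with each registered pattern; if the best
    correlation reaches the threshold, disease is estimated to be present
    with the corresponding type, otherwise absence of disease is estimated.
    """
    flat = front_image.ravel().astype(float)
    best_name, best_corr = None, -1.0
    for name, pattern in patterns.items():
        corr = np.corrcoef(flat, pattern.ravel().astype(float))[0, 1]
        if corr > best_corr:
            best_name, best_corr = name, corr
    if best_corr >= threshold:
        return {"disease": True, "type": best_name, "correlation": best_corr}
    return {"disease": False, "type": None, "correlation": best_corr}

# Synthetic check with two illustrative (hypothetical) disease patterns:
# the observed image is a noisy copy of pattern A, so type A is reported.
rng = np.random.default_rng(2)
pattern_a = rng.normal(size=(32, 32))
pattern_b = rng.normal(size=(32, 32))
patterns = {"pattern-A": pattern_a, "pattern-B": pattern_b}
observed = pattern_a + 0.2 * rng.normal(size=(32, 32))
print(estimate_disease(observed, patterns))
```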
The main control unit 211 can cause the display unit 240A to display disease information including the presence or absence of a disease, the probability of a disease, or the type of a disease. In some embodiments, the main control unit 211 causes the display unit 240A to display the disease information together with at least one of the found front image (or a front image including the found region), the spectral distribution image, and the identified depth information. The main control unit 211 may also cause the display unit 240A to display the spectral distribution image superimposed on the found front image. In some embodiments, the main control unit 211 causes the display unit 240A to display, in an identifiable manner, the region in the front image corresponding to the characteristic region identified by the characteristic region identification unit 2311C.
<User interface 240>
The user interface 240 includes a display unit 240A and an operation unit 240B. The display unit 240A includes the display device 3. The operation unit 240B includes various operation devices and input devices.
The user interface 240 may include a device in which display and operation functions are integrated, such as a touch panel. In other embodiments, at least part of the user interface need not be included in the ophthalmic apparatus. For example, the display device may be an external device connected to the ophthalmic apparatus.
<Communication unit 250>
The communication unit 250 has a function for communicating with an external device (not shown) and is provided with a communication interface according to the form of connection with the external device. Examples of external devices include server devices, OCT devices, scanning optical ophthalmoscopes, slit lamp ophthalmoscopes, ophthalmic measurement devices, and ophthalmic treatment devices. Examples of ophthalmic measurement devices include eye refractometers, tonometers, specular microscopes, wavefront analyzers, perimeters, and microperimeters. Examples of ophthalmic treatment devices include laser treatment devices, surgical devices, and surgical microscopes. The external device may also be a device that reads information from a recording medium (a reader) or a device that writes information to a recording medium (a writer). Furthermore, the external device may be a hospital information system (HIS) server, a DICOM (Digital Imaging and Communications in Medicine) server, a doctor terminal, a mobile terminal, a personal terminal, a cloud server, or the like.
The arithmetic and control unit 200 (the control unit 210, the image forming unit 220, and the data processing unit 230) is an example of the "ophthalmic information processing apparatus" according to the embodiment. A spectral image (spectral fundus image or spectral anterior segment image) is an example of the "spectral distribution data" according to the embodiment. The OCT data is an example of the "measurement data" according to the embodiment. The disease estimation unit 2314C is an example of the "estimation unit" according to the embodiment. The control unit 210 (main control unit 211) is an example of the "display control unit" according to the embodiment. The imaging optical system 30 is an example of the "light receiving optical system" according to the embodiment. The optical system from the OCT unit 100 to the objective lens 22 is an example of the "OCT optical system" according to the embodiment.
<Operation>
An operation example of the ophthalmic apparatus 1 will now be described.
The ophthalmic apparatus 1 acquires a plurality of spectral fundus images by illuminating the fundus Ef with illumination light and receiving return light from the fundus Ef in mutually different wavelength ranges within a predetermined analysis wavelength range.
FIG. 6 shows an example of a plurality of spectral fundus images according to the embodiment, as displayed on the display unit 240A.
For example, the main control unit 211 causes the display unit 240A to display, arranged horizontally, the plurality of spectral fundus images acquired as the image sensor 38 sequentially receives the return light. At this time, the main control unit 211 can display each spectral fundus image in association with its wavelength range on the display unit 240A, so that the spectral distribution of the fundus for each wavelength range can easily be grasped.
FIG. 7 is an explanatory diagram of an operation example of the ophthalmic apparatus 1 according to the embodiment.
The spectral distribution data processing unit 231C calculates the degree of correlation between one spectral fundus image IMG1 among the plurality of spectral fundus images and each of a plurality of en-face images differing in depth position, and identifies the en-face image with the highest degree of correlation. It then takes the depth information of the identified en-face image as the depth information of the spectral fundus image IMG1. The spectral fundus image IMG1 may be the spectral fundus image to be analyzed, in which a characteristic region or a site of interest is depicted.
This makes it possible to identify the depth position or layer region of the spectral fundus image IMG1 with high accuracy and to analyze its spectral distribution while knowing which tissues and sites are depicted in it. Accordingly, at least one of the spectral distribution and the depth position (layer region) can be used to improve the accuracy of disease estimation.
FIG. 8 is an explanatory diagram of another operation example of the ophthalmic apparatus 1 according to the embodiment.
The spectral distribution data processing unit 231C analyzes one spectral fundus image IMG2 among the plurality of spectral fundus images to identify a characteristic region CS, calculates the degree of correlation between a characteristic region image IMG3 including the identified characteristic region CS and each of a plurality of en-face images differing in depth position, and identifies the en-face image with the highest degree of correlation. It then takes the depth information of the identified en-face image as the depth information of the characteristic region image IMG3. The spectral fundus image IMG2 may be the one, among the plurality of spectral fundus images, in which the characteristic region is most clearly depicted.
This makes it possible to identify the depth position or layer region of the characteristic region CS in the spectral fundus image IMG2 with high accuracy and to analyze the spectral distribution of the image while knowing the tissues and sites within the characteristic region CS. Accordingly, at least one of the spectral distribution and the depth position (layer region) can be used to improve the accuracy of disease estimation.
As yet another operation example of the ophthalmic apparatus 1 according to the embodiment, the spectral fundus image identified by the spectral distribution data processing unit 231C may be superimposed on the en-face image and displayed on the display unit 240A. Specifically, the spectral distribution data processing unit 231C calculates the degree of correlation between the spectral fundus image and each of a plurality of en-face images differing in depth position and identifies the en-face image with the highest degree of correlation; the main control unit 211 then causes the display unit 240A to display the spectral fundus image superimposed on the identified en-face image. The spectral fundus image here may be any desired one of the plurality of spectral fundus images, or the one in which the characteristic region is most clearly depicted.
This makes it possible to grasp the correlation between sites detected using light in a predetermined wavelength range and sites depicted in the en-face image.
As still another operation example of the ophthalmic apparatus 1 according to the embodiment, the en-face image corresponding to the spectral fundus image identified by the spectral distribution data processing unit 231C may be displayed on the display unit 240A in a display mode corresponding to that spectral fundus image. For example, the en-face image may be displayed with color information corresponding to the luminance values of the spectral fundus image assigned per pixel, per predetermined region, or per site. Specifically, the spectral distribution data processing unit 231C calculates the degree of correlation between the spectral fundus image and each of a plurality of en-face images differing in depth position and identifies the en-face image with the highest degree of correlation; the main control unit 211 then assigns the color information corresponding to the spectral fundus image to the identified en-face image and causes the display unit 240A to display it.
FIGS. 9 to 11 show operation examples of the ophthalmologic apparatus 1 according to the embodiment. FIG. 9 is a flowchart of an operation example of the ophthalmologic apparatus 1 when acquiring a plurality of spectral fundus images. FIG. 10 is a flowchart of an operation example of the ophthalmologic apparatus 1 when estimating a disease using a spectral fundus image. FIG. 11 is a flowchart of an operation example of the ophthalmologic apparatus 1 when displaying a spectral fundus image superimposed on an OCT image.
The storage unit 212 stores computer programs for realizing the processes shown in FIGS. 9 to 11. The main control unit 211 executes the processes shown in FIGS. 9 to 11 by operating according to these computer programs.
First, the operation example shown in FIG. 9 will be described.
(S1: Alignment)
First, the main control unit 211 executes alignment.
For example, the main control unit 211 controls the anterior segment cameras 5A and 5B to photograph the anterior segment Ea of the subject's eye E substantially simultaneously. Under the control of the main control unit 211, the characteristic site identifying unit 231A analyzes the pair of anterior segment images acquired substantially simultaneously by the anterior segment cameras 5A and 5B, and identifies the pupil center position of the subject's eye E as a characteristic site. The three-dimensional position calculating unit 231B then obtains the three-dimensional position of the subject's eye E. This processing includes, for example, arithmetic processing using trigonometry based on the positional relationship between the pair of anterior segment cameras 5A and 5B and the subject's eye E, as described in Japanese Unexamined Patent Application Publication No. 2013-248376.
The main control unit 211 controls the movement mechanism 150, based on the three-dimensional position of the subject's eye E obtained by the three-dimensional position calculating unit 231B, so that the optical system (for example, the fundus camera unit 2) and the subject's eye E assume a predetermined positional relationship. Here, the predetermined positional relationship is one that enables imaging and examination of the subject's eye E using the optical system. As a typical example, when the three-dimensional position (x coordinate, y coordinate, z coordinate) of the subject's eye E is obtained by the three-dimensional position calculating unit 231B, the movement destination of the optical system is set to the position where the x and y coordinates of the optical axis of the objective lens 22 coincide with the x and y coordinates of the subject's eye E, respectively, and where the difference between the z coordinate of the objective lens 22 (front lens surface) and the z coordinate of the subject's eye E (corneal surface) equals a predetermined distance (working distance).
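The trigonometric calculation itself is detailed in JP 2013-248376 and is not reproduced here; purely as an illustrative sketch (not the referenced method), a point such as the pupil center can be triangulated from two camera rays as the midpoint of their closest approach:

```python
import numpy as np

def triangulate(origin_a, ray_a, origin_b, ray_b):
    """Estimate a 3-D point (e.g. the pupil center) from two camera rays
    as the midpoint of their closest approach.

    origin_a, origin_b : 3-vector camera positions
    ray_a, ray_b       : 3-vector directions toward the imaged point
    """
    oa, ob = np.asarray(origin_a, float), np.asarray(origin_b, float)
    a, b = np.asarray(ray_a, float), np.asarray(ray_b, float)
    w0 = oa - ob
    # Solve for ray parameters s, t so that (oa + s*a) - (ob + t*b)
    # is perpendicular to both ray directions.
    A = np.array([[a @ a, -(a @ b)],
                  [a @ b, -(b @ b)]])
    rhs = np.array([-(a @ w0), -(b @ w0)])
    s, t = np.linalg.solve(A, rhs)
    return ((oa + s * a) + (ob + t * b)) / 2.0
```

In practice the ray directions would come from the camera calibration and the pupil-center pixel coordinates identified in each anterior segment image.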
(S2: Autofocus)
Subsequently, the main control unit 211 starts autofocus.
For example, the main control unit 211 controls the focus optical system 60 to project a split indicator onto the subject's eye E. Under the control of the main control unit 211, the analysis unit 231 analyzes the observation image of the fundus Ef onto which the split indicator is projected, extracts a pair of split indicator images, and calculates the relative deviation of the pair of split indicator images. The main control unit 211 controls the focusing drive unit 31A and the focusing drive unit 43A based on the calculated deviation (deviation direction and deviation amount).
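As a hedged sketch of how such a deviation might be computed from the observation image (the image layout and all names here are assumptions, not the disclosed analysis), the lateral shift between the two split indicator images can be taken as the difference of their column intensity centroids:

```python
import numpy as np

def split_indicator_deviation(observation):
    """Relative lateral shift between the two split-indicator images.

    observation : 2-D array whose upper half contains one indicator
    image and whose lower half contains the other.  Returns the signed
    column shift in pixels; zero means the halves are aligned (in focus).
    """
    h = observation.shape[0] // 2

    def centroid_col(img):
        w = img.sum(axis=0).astype(float)      # column intensity profile
        cols = np.arange(img.shape[1])
        return (w * cols).sum() / (w.sum() + 1e-12)

    return centroid_col(observation[:h]) - centroid_col(observation[h:])
```

The sign of the returned value corresponds to the deviation direction, and its magnitude to the deviation amount used to drive the focusing units.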
(S3: Set wavelength range)
Next, the main control unit 211 controls the wavelength tunable filter 80 to set the wavelength selection range of the transmitted light to a predetermined wavelength range. An example of the predetermined wavelength range is the initial wavelength range used when the selection of wavelength ranges is repeated sequentially so as to cover the analysis wavelength range.
(S4: Acquire image data)
Next, the main control unit 211 causes image data of a spectral fundus image to be acquired.
For example, the main control unit 211 controls the illumination optical system 10 to illuminate the subject's eye E with illumination light, captures the light reception result of the reflected illumination light obtained by the image sensor 38, and thereby acquires the image data of the spectral fundus image.
(S5: Next?)
Subsequently, the main control unit 211 determines whether or not to acquire a spectral fundus image in the next wavelength range. For example, when the wavelength selection is changed sequentially in predetermined wavelength-range steps within the analysis wavelength range, the main control unit 211 can determine whether or not to acquire the next spectral fundus image based on the number of times the wavelength range has been changed. Alternatively, the main control unit 211 can make this determination by judging whether or not all of a plurality of predetermined wavelength ranges have been selected.
When it is determined in step S5 that the next spectral fundus image is to be acquired (step S5: Y), the operation of the ophthalmologic apparatus 1 proceeds to step S6. When it is determined in step S5 that the next spectral fundus image is not to be acquired (step S5: N), the operation of the ophthalmologic apparatus 1 ends (End).
(S6: Change wavelength range)
When it is determined in step S5 that the next spectral fundus image is to be acquired (step S5: Y), the main control unit 211 controls the wavelength tunable filter 80 to change the selection range of the transmitted light to the range to be selected next. The operation of the ophthalmologic apparatus 1 then returns to step S4.
As described above, the ophthalmologic apparatus 1 according to the embodiment can acquire a plurality of spectral fundus images corresponding to a plurality of wavelength ranges within a predetermined analysis wavelength range.
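Steps S3 to S6 above amount to a simple acquisition loop; the controller interfaces below (`FilterStub`, `CameraStub`, `acquire_spectral_series`) are hypothetical stand-ins for the wavelength tunable filter 80 and the illumination/imaging path, not the disclosed implementation:

```python
class FilterStub:
    """Stand-in for the wavelength tunable filter 80 (illustrative)."""
    def __init__(self):
        self.band = None
        self.history = []

    def set_band(self, band):
        self.band = band
        self.history.append(band)

class CameraStub:
    """Stand-in for the imaging path; returns a tag per selected band."""
    def __init__(self, filter_ctrl):
        self.filter_ctrl = filter_ctrl

    def capture(self):
        return f"image@{self.filter_ctrl.band}"

def acquire_spectral_series(filter_ctrl, camera, wavelength_ranges):
    """Steps S3-S6 of FIG. 9: select each wavelength range in turn
    (S3 on the first pass, S6 afterwards), acquire one spectral fundus
    image per range (S4), and stop once every predetermined range has
    been used (S5: N)."""
    images = {}
    for band in wavelength_ranges:
        filter_ctrl.set_band(band)       # S3 / S6: set or change range
        images[band] = camera.capture()  # S4: acquire image data
    return images                        # S5: N -> end
```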
Next, the operation example shown in FIG. 10 will be described. FIG. 10 shows an operation example in which a disease is estimated using the plurality of spectral fundus images acquired according to the operation example shown in FIG. 9, or any one of them.
(S11: Identify characteristic region)
First, the main control unit 211 controls the characteristic region identifying unit 2311C to identify a characteristic region in the spectral fundus image. The characteristic region identifying unit 2311C performs the characteristic region identification processing on the spectral fundus image as described above.
In some embodiments, the characteristic region identifying unit 2311C identifies a characteristic region in a spectral fundus image selected in advance from among the plurality of spectral fundus images. In some embodiments, the characteristic region identifying unit 2311C selects one characteristic region from among the plurality of characteristic regions identified in each of the plurality of spectral fundus images.
(S12: Acquire OCT image)
Subsequently, the main control unit 211 acquires an OCT image. In this embodiment, it is assumed that OCT data of the subject's eye E has been acquired in advance by performing OCT on the subject's eye E, and that a three-dimensional OCT image, or a plurality of en-face images having mutually different depth positions, has been formed based on the OCT data. In this case, the main control unit 211 acquires the three-dimensional OCT image or the plurality of en-face images.
In some embodiments, in step S12, the main control unit 211 controls the OCT unit 100 and the like to perform OCT on the subject's eye E and acquire OCT data. The data processing unit 230 forms a three-dimensional OCT image, or a plurality of en-face images having mutually different depth positions, based on the acquired OCT data.
(S13: Search)
Next, the main control unit 211 controls the depth information identifying unit 2312C (search unit 2313C) to search the OCT image acquired in step S12 for the image region having the highest degree of correlation with the image containing the characteristic region identified in step S11, or for the en-face image containing that image region.
(S14: Identify site on fundus)
Next, the main control unit 211 controls the depth information identifying unit 2312C to identify the site on the fundus (layer region, depth position, etc.) corresponding to the characteristic region identified in step S11. The depth information identifying unit 2312C identifies depth information from the image region found in step S13 (the region having the highest degree of correlation with the image containing the characteristic region), or from the en-face image containing that image region, and identifies the site on the fundus from the identified depth information.
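As one illustrative sketch of steps S13 and S14 (assuming the spectral and en-face images are already registered to each other; the names and the depth table are assumptions, not disclosed details), the characteristic-region patch can be compared against each depth's en-face image and mapped to a depth position:

```python
import numpy as np

def locate_feature_depth(patch, enface_stack, top_left, depth_positions):
    """Compare the characteristic-region patch against the same
    registered location in each en-face image and return the depth
    position of the best-matching one.

    patch           : 2-D feature-region image (h, w)
    enface_stack    : 3-D array (N, H, W) of registered en-face images
    top_left        : (row, col) of the patch within each en-face image
    depth_positions : length-N sequence of depth values per en-face image
    """
    r, c = top_left
    h, w = patch.shape
    p = patch.ravel().astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12)
    corrs = []
    for enface in enface_stack:
        q = enface[r:r + h, c:c + w].ravel().astype(float)
        q = (q - q.mean()) / (q.std() + 1e-12)
        corrs.append(float(np.mean(p * q)))        # Pearson correlation
    best = int(np.argmax(corrs))
    return depth_positions[best], corrs[best]
```

The returned depth value would then be mapped to a layer region or other site on the fundus by the depth information identifying unit.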
(S15: Estimate disease)
Next, the main control unit 211 controls the disease estimating unit 2314C to identify the presence or absence of a disease, the probability of a disease, or the type of a disease at the site on the fundus identified in step S14.
For example, the disease estimating unit 2314C performs the disease estimation processing as described above. In some embodiments, the disease estimating unit 2314C identifies the presence or absence of a disease, the probability of a disease, or the type of a disease based on the spectral distribution (spectral characteristics) of the spectral fundus image in which the characteristic region was identified in step S11, the en-face image (OCT image) found in step S13, and the site on the fundus identified in step S14.
(S16: Display)
Subsequently, the main control unit 211 causes the display unit 240A to display at least one of: the spectral fundus image in which the characteristic region was identified in step S11; the characteristic region identified in step S11; the en-face image (OCT image) identified in step S13; the depth information corresponding to the characteristic region; the site on the fundus identified in step S14; and the presence or absence of a disease, the probability of a disease, or the type of a disease estimated in step S15.
In some embodiments, in step S16, the main control unit 211 causes the display unit 240A to display the spectral fundus image in which the characteristic region was identified in step S11 superimposed on the en-face image identified in step S13. Also in step S16, the main control unit 211 may cause the display unit 240A to display a composite fundus image generated by assigning a color component and changeable transparency information to each of the plurality of spectral fundus images and superimposing them.
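The composite display described above can be sketched as simple alpha blending; the blending rule and all names below are illustrative assumptions, not the disclosed processing:

```python
import numpy as np

def composite_fundus(spectral_images, colors, alphas):
    """Composite several spectral fundus images: each image is assigned
    a fixed RGB color component and a changeable transparency, and the
    layers are alpha-blended ("over" operator) onto a black background.

    spectral_images : list of 2-D arrays with values in [0, 1]
    colors          : list of (r, g, b) tuples in [0, 1]
    alphas          : list of opacities in [0, 1] (user-changeable)
    Returns an (H, W, 3) RGB image.
    """
    out = np.zeros(spectral_images[0].shape + (3,))
    for img, color, alpha in zip(spectral_images, colors, alphas):
        layer = img[..., None] * np.asarray(color)  # tint by assigned color
        a = alpha * img[..., None]                  # brighter pixels more opaque
        out = (1.0 - a) * out + a * layer           # "over" blending
    return np.clip(out, 0.0, 1.0)
```

Changing an entry of `alphas` corresponds to the "changeable transparency information" assigned to each spectral fundus image.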
This concludes the flow shown in FIG. 10 (End).
As described above, the ophthalmologic apparatus 1 according to the embodiment can identify the site corresponding to a characteristic region in a spectral fundus image of the subject's eye E based on the OCT data of the subject's eye E, and can estimate a disease.
Next, the operation example shown in FIG. 11 will be described. FIG. 11 shows an operation example in which the plurality of spectral fundus images acquired according to the operation example shown in FIG. 9, or any one of them, is displayed superimposed on an OCT image.
(S21: Identify characteristic region)
First, as in step S11, the main control unit 211 controls the characteristic region identifying unit 2311C to identify a characteristic region in the spectral fundus image.
(S22: Acquire OCT image)
Subsequently, as in step S12, the main control unit 211 acquires an OCT image. In this embodiment, it is assumed that OCT data of the subject's eye E has been acquired in advance by performing OCT on the subject's eye E, and that a three-dimensional OCT image, or a plurality of en-face images having mutually different depth positions, has been formed based on the OCT data. In this case, the main control unit 211 acquires the three-dimensional OCT image or the plurality of en-face images.
(S23: Search)
Next, the main control unit 211 controls the depth information identifying unit 2312C (search unit 2313C) to search the OCT image acquired in step S22 for the en-face image (or the image region in the three-dimensional OCT image) having the highest degree of correlation with the spectral fundus image in which the characteristic region was identified in step S21.
(S24: Superimposed display)
Subsequently, the main control unit 211 causes the display unit 240A to display the spectral fundus image in which the characteristic region was identified in step S21 superimposed on the en-face image found in step S23. In some embodiments, in step S24, the main control unit 211 causes the display unit 240A to display the characteristic region identified in step S21 in an identifiable manner.
This concludes the flow shown in FIG. 11 (End).
As described above, the ophthalmologic apparatus 1 according to the embodiment can display the spectral fundus image of the subject's eye E superimposed on the OCT data of the subject's eye E.
<Actions>
An ophthalmologic information processing apparatus, an ophthalmologic apparatus, an ophthalmologic information processing method, and a program according to the embodiments will be described.
The ophthalmologic information processing apparatus (control unit 210, image forming unit 220, and data processing unit 230) according to some embodiments includes a characteristic region identifying unit (2311C) and a depth information identifying unit (2312C). The characteristic region identifying unit identifies a characteristic region in spectral distribution data acquired by receiving return light in a predetermined wavelength range from the subject's eye (E) illuminated with illumination light. The depth information identifying unit identifies depth information of the characteristic region based on measurement data of the subject's eye having a higher resolution in the depth direction than the spectral distribution data.
With such a configuration, it is possible to identify with high accuracy which tissue in the depth direction at the measurement site the characteristic region in the spectral distribution data belongs to. In addition, positional deviation of the spectral distribution data caused by eye movement can be corrected. This makes it possible to perform a more detailed analysis of the spectral distribution data.
In some embodiments, the characteristic region identifying unit identifies a characteristic region in any of a plurality of spectral distribution data acquired by illuminating the subject's eye with illumination light and receiving return light, in mutually different wavelength ranges, from the subject's eye.
With such a configuration, since the site of the subject's eye that appears in the spectral distribution data differs depending on the wavelength range, a characteristic region having a characteristic spectral distribution can be identified, and it becomes possible to identify with high accuracy which tissue in the depth direction at the measurement site the identified characteristic region belongs to.
In some embodiments, the measurement data is OCT data obtained by performing optical coherence tomography on the subject's eye.
With such a configuration, using a configuration capable of performing OCT on the subject's eye, it is possible to identify with high accuracy which tissue in the depth direction at the measurement site the characteristic region in the spectral distribution data belongs to.
In some embodiments, the depth information identifying unit includes a search unit (2313C) that searches, from among a plurality of front images formed based on the OCT data and having mutually different depth positions, for the front image having the highest degree of correlation with the spectral distribution data, and identifies the depth information based on the front image found by the search unit.
With such a configuration, since the front image having the highest degree of correlation with the spectral distribution data is identified by a search process over the plurality of front images formed based on the OCT data, highly accurate depth information for the spectral distribution data can be identified simply.
In some embodiments, the depth information identifying unit includes a search unit (2313C) that searches, from among a plurality of front images formed based on the OCT data and having mutually different depth positions, for the front image containing the image region having the highest degree of correlation with the image containing the characteristic region, and identifies the depth information based on the front image found by the search unit.
With such a configuration, since the front image containing the image region having the highest degree of correlation with the image containing the characteristic region in the spectral distribution data is identified by a search process over the plurality of front images formed based on the OCT data, highly accurate depth information for the characteristic region can be identified simply.
Some embodiments include an estimating unit (disease estimating unit 2314C) that estimates the presence or absence of a disease, the probability of a disease, or the type of a disease based on the front image found by the search unit.
With such a configuration, a disease can be estimated with high accuracy from the spectral distribution data.
Some embodiments include a display control unit (control unit 210, main control unit 211) that causes display means (display unit 240A) to display disease information including the presence or absence of a disease, the probability of a disease, or the type of a disease estimated by the estimating unit.
With such a configuration, the disease information estimated from the spectral distribution data can be displayed, and the disease information can be reported to the outside.
Some embodiments include a display control unit (control unit 210, main control unit 211) that causes display means (display unit 240A) to display the front image found by the search unit and the depth information.
With such a configuration, the front image and depth information corresponding to the spectral distribution data can be displayed, and the front image and depth information can be reported to the outside.
Some embodiments include a display control unit (control unit 210, main control unit 211) that causes display means (display unit 240A) to display the spectral distribution data superimposed on the front image found by the search unit.
With such a configuration, the spectral distribution data can be displayed superimposed on the front image, and the spectral distribution data can be associated with the front image.
In some embodiments, the display control unit causes the display means to display, in an identifiable manner, the region corresponding to the characteristic site in the front image corresponding to the characteristic region.
With such a configuration, the region corresponding to the characteristic site in the front image corresponding to the characteristic region can be grasped easily.
Some embodiments include a display control unit (control unit 210, main control unit 211) that causes display means (display unit 240A) to display the spectral distribution data and the depth information.
With such a configuration, the spectral distribution data and the depth information can be displayed and reported to the outside.
In some embodiments, the depth information includes at least one of information representing a depth position, a depth range, and a layer region with reference to a reference site of the subject's eye.
With such a configuration, at least one of information representing a depth position, a depth range, and a layer region with reference to a reference site of the subject's eye can be identified.
An ophthalmologic apparatus (1) according to some embodiments includes an illumination optical system (10) that illuminates the subject's eye with illumination light, a light receiving optical system (imaging optical system 30) that receives return light, in mutually different wavelength ranges, of the illumination light from the subject's eye, an OCT optical system (the optical system from the OCT unit to the objective lens) that performs optical coherence tomography on the subject's eye, and any of the ophthalmologic information processing apparatuses described above.
With such a configuration, an ophthalmologic apparatus can be provided that is capable of identifying with high accuracy which tissue in the depth direction at the measurement site the characteristic region in the spectral distribution data belongs to.
An ophthalmologic information processing method according to some embodiments includes a characteristic region identifying step and a depth information identifying step. The characteristic region identifying step identifies a characteristic region in spectral distribution data acquired by receiving return light in a predetermined wavelength range from the subject's eye (E) illuminated with illumination light. The depth information identifying step identifies depth information of the characteristic region based on measurement data of the subject's eye having a higher resolution in the depth direction than the spectral distribution data.
With such a method, it is possible to identify with high accuracy which tissue in the depth direction at the measurement site the characteristic region in the spectral distribution data belongs to. In addition, positional deviation of the spectral distribution data caused by eye movement can be corrected. This makes it possible to perform a more detailed analysis of the spectral distribution data.
In some embodiments, the characteristic region identifying step identifies a characteristic region in any of a plurality of spectral distribution data acquired by illuminating the subject's eye with illumination light and receiving return light, in mutually different wavelength ranges, from the subject's eye.
With such a method, since the site of the subject's eye that appears in the spectral distribution data differs depending on the wavelength range, a characteristic region having a characteristic spectral distribution can be identified, and it becomes possible to identify with high accuracy which tissue in the depth direction at the measurement site the identified characteristic region belongs to.
In some embodiments, the measurement data is OCT data obtained by performing optical coherence tomography on the subject's eye.
With such a method, using a configuration capable of performing OCT on the subject's eye, it is possible to identify with high accuracy which tissue in the depth direction at the measurement site the characteristic region in the spectral distribution data belongs to.
 いくつかの実施形態では、深さ情報特定ステップは、OCTデータに基づいて形成され互いに深さ位置が異なる複数の正面画像の中から分光分布データと最も相関度が高い正面画像を探索する探索ステップを含み、探索ステップにおいて探索された正面画像に基づいて深さ情報を特定する。 In some embodiments, the depth information specifying step is a search step of searching for a front image having the highest degree of correlation with the spectral distribution data from among a plurality of front images formed based on OCT data and having different depth positions. and identifying depth information based on the front image searched in the searching step.
 このような方法によれば、OCTデータに基づいて形成された複数の正面画像に対する探索処理により分光分布データと最も相関度が高い正面画像を特定するようにしたので、分光分布データの高精度な深さ情報を簡便に特定することが可能になる。 According to this method, since the front image having the highest degree of correlation with the spectral distribution data is specified by searching a plurality of front images formed based on the OCT data, the spectral distribution data can be obtained with high accuracy. Depth information can be easily identified.
 いくつかの実施形態では、深さ情報特定ステップは、OCTデータに基づいて形成され互いに深さ位置が異なる複数の正面画像の中から特徴領域を含む画像と最も相関度が高い画像領域を含む正面画像を探索する探索ステップを含み、探索ステップにおいて探索された正面画像に基づいて深さ情報を特定する。 In some embodiments, the depth information specifying step includes, from among a plurality of front images formed based on OCT data and having different depth positions, an image including a characteristic region and an image region having the highest correlation degree. A search step of searching the image is included, and depth information is identified based on the front image searched in the search step.
 このような方法によれば、OCTデータに基づいて形成された複数の正面画像に対する探索処理により分光分布データにおける特徴領域を含む画像と最も相関度が高い画像領域を含む正面画像を特定するようにしたので、特徴領域の高精度な深さ情報を簡便に特定することが可能になる。 According to such a method, a front image including an image area having the highest degree of correlation with an image including a characteristic area in the spectral distribution data is specified by performing search processing on a plurality of front images formed based on OCT data. Therefore, it is possible to easily identify highly accurate depth information of the characteristic region.
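The correlation-based search described above can be sketched as a template search over depth-sliced en-face (front) images built from an OCT volume. This is a minimal illustration, not the patented implementation; the use of normalized cross-correlation as the correlation measure and all function names are assumptions:

```python
import numpy as np

def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
    # Zero-mean, unit-norm correlation between two equally sized 2-D arrays.
    # Bounded by 1.0 (Cauchy-Schwarz); equals 1.0 for identical images.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def find_best_depth(spectral_map: np.ndarray, en_face_stack: np.ndarray) -> int:
    # en_face_stack has shape (num_depths, H, W); each slice is a front image
    # formed from the OCT data at a different depth position. Return the index
    # of the slice most correlated with the spectral distribution map.
    scores = [normalized_cross_correlation(spectral_map, sl) for sl in en_face_stack]
    return int(np.argmax(scores))
```

The depth position (or layer) associated with the winning slice then serves as the depth information for the spectral distribution data; in practice the comparison would be preceded by registration to compensate for eye movement.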
Some embodiments include an estimation step of estimating the presence or absence of a disease, a probability of a disease, or a type of a disease based on the front image found in the search step.
According to this method, a disease can be estimated with high accuracy from the spectral distribution data.
Some embodiments include a display control step of causing a display means (display unit 240A) to display disease information including the presence or absence of the disease, the probability of the disease, or the type of the disease estimated in the estimation step.
According to this method, the disease information estimated from the spectral distribution data can be displayed, and thereby reported externally.
Some embodiments include a display control step of causing a display means (display unit 240A) to display the front image found in the search step together with the depth information.
According to this method, the front image and the depth information corresponding to the spectral distribution data can be displayed, and thereby reported externally.
Some embodiments include a display control step of causing a display means (display unit 240A) to display the spectral distribution data superimposed on the front image found in the search step.
According to this method, the spectral distribution data is displayed superimposed on the front image, so the spectral distribution data and the front image can be associated with each other.
In some embodiments, the display control step causes the display means to display, in an identifiable manner, the region of the front image that corresponds to the characteristic site associated with the characteristic region.
According to this method, the region of the front image corresponding to the characteristic site associated with the characteristic region can be grasped easily.
Some embodiments include a display control step of causing a display means (display unit 240A) to display the spectral distribution data and the depth information.
According to this method, the spectral distribution data and the depth information can be displayed, and thereby reported externally.
In some embodiments, the depth information includes at least one of a depth position, a depth range, and information representing a layer region, each relative to a reference site of the eye to be examined.
According to this method, at least one of a depth position, a depth range, and information representing a layer region relative to a reference site of the eye to be examined can be identified.
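For illustration, the three forms of depth information mentioned above (depth position, depth range, and layer region, each relative to a reference site) could be carried in a small data structure such as the following sketch; all field names and the example reference site are assumptions, not terms from the disclosure:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DepthInfo:
    # All values are expressed relative to a chosen reference site of the eye
    # (e.g. the inner limiting membrane), in micrometres.
    reference_site: str
    depth_position_um: Optional[float] = None             # a single depth position
    depth_range_um: Optional[Tuple[float, float]] = None  # a (start, end) depth range
    layer_region: Optional[str] = None                    # e.g. "retinal pigment epithelium"

    def is_valid(self) -> bool:
        # At least one of the three representations must be present.
        return any(v is not None for v in
                   (self.depth_position_um, self.depth_range_um, self.layer_region))
```

A depth-information identifying unit could fill in whichever of the three fields the search result supports, and a display control unit would render the populated fields alongside the front image.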
A program according to some embodiments causes a computer to execute each step of any of the ophthalmologic information processing methods described above.
According to such a program, it is possible to identify with high accuracy which tissue in the depth direction of the measurement site a characteristic region in the spectral distribution data belongs to. Positional displacement of the spectral distribution data caused by eye movement can also be corrected. This makes it possible to provide a computer program capable of performing more detailed analysis of the spectral distribution data.
The embodiments described above are merely examples of the present invention. A person who intends to implement the present invention may make any modifications (omissions, substitutions, additions, etc.) within the scope of the gist of the invention.
In some embodiments, a program that causes a computer to execute the ophthalmologic information processing method is stored in the storage unit 212. Such a program may be stored in any computer-readable recording medium. The recording medium may be an electronic medium using magnetism, light, magneto-optics, a semiconductor, or the like. Typical examples are magnetic tapes, magnetic disks, optical disks, magneto-optical disks, flash memories, and solid state drives.
1 ophthalmologic apparatus
2 fundus camera unit
10 illumination optical system
22 objective lens
30 imaging optical system
80 wavelength tunable filter
100 OCT unit
210 control unit
211 main control unit
220 image forming unit
230 data processing unit
231 analysis unit
231A characteristic site identifying unit
231B three-dimensional position calculating unit
231C spectral distribution data processing unit
2311C characteristic region identifying unit
2312C depth information identifying unit
2313C search unit
2314C disease estimating unit
E eye to be examined
Ef fundus
LS measurement light

Claims (26)

1. An ophthalmologic information processing apparatus comprising:
a characteristic region identifying unit configured to identify a characteristic region in spectral distribution data acquired by receiving return light in a predetermined wavelength range from an eye to be examined illuminated with illumination light; and
a depth information identifying unit configured to identify depth information of the characteristic region based on measurement data of the eye to be examined, the measurement data having a higher resolution in a depth direction than the spectral distribution data.
2. The ophthalmologic information processing apparatus according to claim 1, wherein the characteristic region identifying unit identifies the characteristic region in any one of a plurality of pieces of spectral distribution data acquired by illuminating the eye to be examined with illumination light and receiving return light from the eye to be examined in mutually different wavelength ranges.
3. The ophthalmologic information processing apparatus according to claim 1 or 2, wherein the measurement data is OCT data obtained by performing optical coherence tomography on the eye to be examined.
4. The ophthalmologic information processing apparatus according to claim 3, wherein the depth information identifying unit includes a search unit configured to search, from among a plurality of front images formed based on the OCT data and having mutually different depth positions, for the front image having the highest degree of correlation with the spectral distribution data, and identifies the depth information based on the front image found by the search unit.
5. The ophthalmologic information processing apparatus according to claim 3, wherein the depth information identifying unit includes a search unit configured to search, from among a plurality of front images formed based on the OCT data and having mutually different depth positions, for a front image containing an image region having the highest degree of correlation with an image containing the characteristic region, and identifies the depth information based on the front image found by the search unit.
6. The ophthalmologic information processing apparatus according to claim 4 or 5, further comprising an estimating unit configured to estimate the presence or absence of a disease, a probability of a disease, or a type of a disease based on the front image found by the search unit.
7. The ophthalmologic information processing apparatus according to claim 6, further comprising a display control unit configured to cause a display means to display disease information including the presence or absence of the disease, the probability of the disease, or the type of the disease estimated by the estimating unit.
8. The ophthalmologic information processing apparatus according to claim 4 or 5, further comprising a display control unit configured to cause a display means to display the front image found by the search unit and the depth information.
9. The ophthalmologic information processing apparatus according to claim 4 or 5, further comprising a display control unit configured to cause a display means to display the spectral distribution data superimposed on the front image found by the search unit.
10. The ophthalmologic information processing apparatus according to claim 8 or 9, wherein the display control unit causes the display means to display, in an identifiable manner, the region of the front image that corresponds to the characteristic site associated with the characteristic region.
11. The ophthalmologic information processing apparatus according to any one of claims 1 to 7, further comprising a display control unit configured to cause a display means to display the spectral distribution data and the depth information.
12. The ophthalmologic information processing apparatus according to any one of claims 1 to 11, wherein the depth information includes at least one of a depth position, a depth range, and information representing a layer region, each relative to a reference site of the eye to be examined.
13. An ophthalmologic apparatus comprising:
an illumination optical system configured to illuminate the eye to be examined with illumination light;
a light receiving optical system configured to receive return light of the illumination light from the eye to be examined in mutually different wavelength ranges;
an OCT optical system configured to perform optical coherence tomography on the eye to be examined; and
the ophthalmologic information processing apparatus according to any one of claims 1 to 12.
14. An ophthalmologic information processing method comprising:
a characteristic region identifying step of identifying a characteristic region in spectral distribution data acquired by receiving return light in a predetermined wavelength range from an eye to be examined illuminated with illumination light; and
a depth information identifying step of identifying depth information of the characteristic region based on measurement data of the eye to be examined, the measurement data having a higher resolution in a depth direction than the spectral distribution data.
15. The ophthalmologic information processing method according to claim 14, wherein the characteristic region identifying step identifies the characteristic region in any one of a plurality of pieces of spectral distribution data acquired by illuminating the eye to be examined with illumination light and receiving return light from the eye to be examined in mutually different wavelength ranges.
16. The ophthalmologic information processing method according to claim 14 or 15, wherein the measurement data is OCT data obtained by performing optical coherence tomography on the eye to be examined.
17. The ophthalmologic information processing method according to claim 16, wherein the depth information identifying step includes a search step of searching, from among a plurality of front images formed based on the OCT data and having mutually different depth positions, for the front image having the highest degree of correlation with the spectral distribution data, and identifies the depth information based on the front image found in the search step.
18. The ophthalmologic information processing method according to claim 16, wherein the depth information identifying step includes a search step of searching, from among a plurality of front images formed based on the OCT data and having mutually different depth positions, for a front image containing an image region having the highest degree of correlation with an image containing the characteristic region, and identifies the depth information based on the front image found in the search step.
19. The ophthalmologic information processing method according to claim 17 or 18, further comprising an estimation step of estimating the presence or absence of a disease, a probability of a disease, or a type of a disease based on the front image found in the search step.
20. The ophthalmologic information processing method according to claim 19, further comprising a display control step of causing a display means to display disease information including the presence or absence of the disease, the probability of the disease, or the type of the disease estimated in the estimation step.
21. The ophthalmologic information processing method according to claim 17 or 18, further comprising a display control step of causing a display means to display the front image found in the search step and the depth information.
22. The ophthalmologic information processing method according to claim 17 or 18, further comprising a display control step of causing a display means to display the spectral distribution data superimposed on the front image found in the search step.
23. The ophthalmologic information processing method according to claim 21 or 22, wherein the display control step causes the display means to display, in an identifiable manner, the region of the front image that corresponds to the characteristic site associated with the characteristic region.
24. The ophthalmologic information processing method according to any one of claims 14 to 19, further comprising a display control step of causing a display means to display the spectral distribution data and the depth information.
25. The ophthalmologic information processing method according to any one of claims 14 to 24, wherein the depth information includes at least one of a depth position, a depth range, and information representing a layer region, each relative to a reference site of the eye to be examined.
26. A program that causes a computer to execute each step of the ophthalmologic information processing method according to any one of claims 14 to 25.
PCT/JP2022/030396 2021-09-16 2022-08-09 Ophthalmic information processing device, ophthalmic device, ophthalmic information processing method, and program WO2023042577A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021150693A JP2023043212A (en) 2021-09-16 2021-09-16 Ophthalmologic information processing device, ophthalmologic device, ophthalmologic information processing method, and program
JP2021-150693 2021-09-16

Publications (1)

Publication Number Publication Date
WO2023042577A1

Family

ID=85602752

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/030396 WO2023042577A1 (en) 2021-09-16 2022-08-09 Ophthalmic information processing device, ophthalmic device, ophthalmic information processing method, and program

Country Status (2)

Country Link
JP (1) JP2023043212A (en)
WO (1) WO2023042577A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006061328A (en) * 2004-08-26 2006-03-09 Kowa Co Ophthalmologic apparatus
JP2007330557A (en) * 2006-06-15 2007-12-27 Topcon Corp Spectral fundus measuring apparatus and its measuring method
JP2007330558A (en) * 2006-06-15 2007-12-27 Topcon Corp Spectral fundus measuring apparatus and its measuring method
JP2009264787A (en) * 2008-04-22 2009-11-12 Topcon Corp Optical image measuring device

Also Published As

Publication number Publication date
JP2023043212A (en) 2023-03-29

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22869724

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22869724

Country of ref document: EP

Kind code of ref document: A1