WO2017090550A1 - Ophthalmological device - Google Patents

Ophthalmological device

Info

Publication number
WO2017090550A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
graph
characteristic value
distribution
eye
Prior art date
Application number
PCT/JP2016/084403
Other languages
French (fr)
Japanese (ja)
Inventor
釣 滋孝
Original Assignee
株式会社トプコン (Topcon Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社トプコン
Publication of WO2017090550A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17 Systems in which incident light is modified in accordance with the properties of the material investigated

Definitions

  • the present invention relates to an ophthalmologic apparatus.
  • OCT: optical coherence tomography.
  • a predetermined tissue: for example, the retinal nerve fiber layer, ganglion cell layer, or inner plexiform layer.
  • standard data is statistically derived from a large number of normal eye measurement data, and is called normal eye data (normal data) or the like.
  • in the conventional method, the layer thickness distribution of the eye to be examined obtained by OCT is registered with the standard layer thickness distribution, and the layer thickness value at each point of the examined eye's distribution is compared with the standard layer thickness value at the corresponding point. Therefore, the accuracy and precision of the registration greatly affect the comparison result.
  • the thickness of the tissue varies greatly between individuals. For example, some eyes have tissues that are generally thinner or thicker than the standard. In the conventional method, which compares only the magnitudes of the layer thickness values, such individual differences are not taken into consideration, so a sufficiently accurate comparison result may not be obtained for some eyes.
  • the purpose of the ophthalmic apparatus according to the present invention is to improve the accuracy of comparative diagnosis using standard data.
  • the ophthalmologic apparatus of the embodiment includes a storage unit, a data collection unit, a processing unit, a comparison unit, and an output unit.
  • the storage unit stores in advance a standard graph in which a standard distribution of characteristic values is expressed by a coordinate system including a first coordinate axis representing a position in a predetermined range of the eye and a second coordinate axis representing the magnitude of a predetermined characteristic value of the eye. The data collection unit collects data of a region of the eye to be examined, including a target region corresponding to the predetermined range, using optical coherence tomography.
  • the processing unit processes the collected data to create a distribution graph in which the distribution of characteristic values in the target region is expressed by the coordinate system.
  • the comparison unit compares the created distribution graph with the standard graph.
  • the output unit outputs a comparison result between the distribution graph and the standard graph.
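  • the comparison step performed by the units above can be sketched as follows. This is a hedged illustration, not the patent's implementation: the function name, the point-by-point allowable-range check, and all numeric values are assumptions.

```python
import numpy as np

def compare_with_standard(measured, lower, upper):
    """Return a boolean mask that is True where the measured characteristic
    value falls outside the normal-eye allowable range [lower, upper]."""
    measured = np.asarray(measured, dtype=float)
    return (measured < np.asarray(lower, dtype=float)) | \
           (measured > np.asarray(upper, dtype=float))

# Toy layer thickness values (micrometres) at four positions on the
# first coordinate axis, checked against upper/lower limit graphs.
measured = [95.0, 60.0, 110.0, 140.0]
lower = [70.0, 70.0, 80.0, 80.0]      # lower limit graph
upper = [130.0, 130.0, 140.0, 140.0]  # upper limit graph

abnormal = compare_with_standard(measured, lower, upper)
print(abnormal.tolist())  # [False, True, False, False]
```

Only the second point (60 µm against a 70 µm lower limit) falls outside the band, so only that position would be flagged in the output of the comparison result.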
  • Brief description of the drawings: schematics showing and illustrating examples of the structure of the ophthalmologic apparatus according to the embodiment.
  • the ophthalmologic apparatus of the embodiment has a function of executing optical coherence tomography (OCT) of the eye to be examined.
  • the embodiment is not limited thereto.
  • the type of OCT is not limited to swept source OCT, and may be spectral domain OCT or full-field OCT (en-face OCT).
  • the ophthalmologic apparatus of the embodiment may or may not have a function of acquiring a photograph (digital photograph) of the eye to be examined like a fundus camera.
  • a slit lamp microscope, an anterior ocular segment photographing camera, or a surgical microscope may be provided instead of the fundus camera.
  • the ophthalmologic apparatus 1 shown in FIG. 1 is used for taking a photograph of the eye E and OCT.
  • the ophthalmologic apparatus 1 includes a fundus camera unit 2, an OCT unit 100, and an arithmetic control unit 200.
  • the fundus camera unit 2 is provided with optical systems and mechanisms that are substantially the same as those of a conventional fundus camera.
  • the OCT unit 100 is provided with an optical system and a mechanism for performing OCT.
  • the arithmetic control unit 200 includes a processor.
  • a chin rest and a forehead support for supporting the face of the subject are provided at positions facing the fundus camera unit 2.
  • the “processor” means, for example, a circuit such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an ASIC (Application Specific Integrated Circuit), or a programmable logic device (e.g., an SPLD (Simple Programmable Logic Device), a CPLD (Complex Programmable Logic Device), or an FPGA (Field Programmable Gate Array)).
  • the processor implements the functions according to the embodiment by reading and executing a program stored in a storage circuit or a storage device.
  • the fundus camera unit 2 is provided with an optical system and mechanism for acquiring a digital photograph of the eye E to be examined.
  • the digital image of the eye E includes an observation image and a captured image.
  • the observation image is obtained, for example, by moving image shooting using near infrared light.
  • the captured image is, for example, a color image or monochrome image obtained using visible flash light, or a monochrome image obtained using near-infrared flash light.
  • the fundus camera unit 2 may be able to acquire a fluorescein fluorescent image, an indocyanine green fluorescent image, a spontaneous fluorescent image, or the like.
  • the fundus camera unit 2 includes an illumination optical system 10 and a photographing optical system 30.
  • the illumination optical system 10 projects illumination light onto the eye E to be examined.
  • the imaging optical system 30 detects the return light of the illumination light from the eye E.
  • the measurement light from the OCT unit 100 is guided to the eye E through the optical path in the fundus camera unit 2, and the return light is guided to the OCT unit 100 through the same optical path.
  • the observation light source 11 of the illumination optical system 10 is, for example, a halogen lamp or an LED (Light Emitting Diode).
  • the light (observation illumination light) output from the observation light source 11 is reflected by the reflection mirror 12 having a curved reflection surface, passes through the condensing lens 13, passes through the visible cut filter 14, and becomes near-infrared light. The observation illumination light is then once converged in the vicinity of the photographing light source 15, reflected by the mirror 16, and passes through the relay lenses 17 and 18, the diaphragm 19, and the relay lens 20.
  • the observation illumination light is reflected by the peripheral part of the perforated mirror 21 (the area around the perforated part), passes through the dichroic mirror 46, and is refracted by the objective lens 22 to illuminate the eye E (especially the fundus Ef).
  • the return light of the observation illumination light from the eye E is refracted by the objective lens 22, passes through the dichroic mirror 46, passes through the hole formed in the central region of the perforated mirror 21, passes through the dichroic mirror 55, travels through the photographing focusing lens 31, and is reflected by the mirror 32. The return light then passes through the half mirror 33A, is reflected by the dichroic mirror 33, and forms an image on the light receiving surface of the CCD image sensor 35 via the condenser lens.
  • the CCD image sensor 35 detects the return light at a predetermined frame rate, for example. Note that an observation image of the fundus oculi Ef is obtained when the photographing optical system 30 is focused on the fundus oculi Ef, and an anterior eye observation image is obtained when the focus is on the anterior eye segment.
  • the imaging light source 15 is a visible light source including, for example, a xenon lamp or an LED.
  • the light (imaging illumination light) output from the imaging light source 15 is applied to the fundus oculi Ef through the same path as the observation illumination light.
  • the return light of the imaging illumination light from the eye E is guided to the dichroic mirror 33 through the same path as the return light of the observation illumination light, passes through the dichroic mirror 33, is reflected by the mirror 36, and forms an image on the light receiving surface of the CCD image sensor 38 via the condenser lens 37.
  • the LCD 39 displays a fixation target for fixating the eye E to be examined.
  • a part of the light beam (fixation light beam) output from the LCD 39 is reflected by the half mirror 33A, reflected by the mirror 32, passes through the photographing focusing lens 31 and the dichroic mirror 55, passes through the hole of the perforated mirror 21, passes through the dichroic mirror 46, is refracted by the objective lens 22, and is projected onto the fundus Ef.
  • the fixation position of the eye E can be changed.
  • a matrix LED in which a plurality of LEDs are two-dimensionally arranged, or a combination of a light source and a variable diaphragm (liquid crystal diaphragm or the like) can be applied.
  • the fundus camera unit 2 is provided with an alignment optical system 50 and a focus optical system 60.
  • the alignment optical system 50 generates an alignment index used for alignment of the optical system with respect to the eye E.
  • the focus optical system 60 generates a split index used for focus adjustment with respect to the eye E.
  • Alignment light output from the LED 51 of the alignment optical system 50 is reflected by the dichroic mirror 55 via the apertures 52 and 53 and the relay lens 54, and passes through the hole of the perforated mirror 21.
  • the light that has passed through the hole of the perforated mirror 21 passes through the dichroic mirror 46 and is projected onto the eye E by the objective lens 22.
  • the corneal reflection light of the alignment light passes through the objective lens 22, the dichroic mirror 46, and the hole of the perforated mirror 21; a part of it passes through the dichroic mirror 55, travels through the photographing focusing lens 31, is reflected by the mirror 32, passes through the half mirror 33A, is reflected by the dichroic mirror 33, and is projected onto the light receiving surface of the CCD image sensor 35 via the condenser lens. Based on the received light image (alignment index image) from the CCD image sensor 35, manual alignment and auto alignment similar to conventional ones can be performed.
  • the focus optical system 60 is moved along the optical path (illumination optical path) of the illumination optical system 10 in conjunction with the movement of the imaging focusing lens 31 along the optical path (imaging optical path) of the imaging optical system 30.
  • the reflection rod 67 can be inserted into and removed from the illumination optical path.
  • the reflecting surface of the reflection rod 67 is arranged obliquely in the illumination optical path.
  • the focus light output from the LED 61 passes through the relay lens 62, is split into two light beams by the split target plate 63, passes through the two-hole aperture 64, is reflected by the mirror 65, forms an image once on the reflecting surface of the reflection rod 67 via the condenser lens 66, and is reflected. The focus light then passes through the relay lens 20, is reflected by the perforated mirror 21, passes through the dichroic mirror 46, is refracted by the objective lens 22, and is projected onto the fundus Ef.
  • the fundus reflection light of the focus light is detected by the CCD image sensor 35 through the same path as the corneal reflection light of the alignment light. Based on the received image (split index image) from the CCD image sensor 35, manual focusing and autofocusing similar to conventional ones can be performed.
  • the photographing optical system 30 includes diopter correction lenses 70 and 71.
  • the diopter correction lenses 70 and 71 can be selectively inserted into a photographing optical path between the perforated mirror 21 and the dichroic mirror 55.
  • the diopter correction lens 70 is a plus (+) lens for correcting high hyperopia, for example a +20 D (diopter) convex lens.
  • the diopter correction lens 71 is a minus (-) lens for correcting high myopia, for example a -20 D concave lens.
  • the diopter correction lenses 70 and 71 are mounted on, for example, a turret plate. The turret plate is formed with a hole for the case where none of the diopter correction lenses 70 and 71 is applied.
  • the dichroic mirror 46 combines the optical path for fundus imaging and the optical path for OCT.
  • the dichroic mirror 46 reflects light in a wavelength band used for OCT and transmits light for fundus photographing.
  • a collimator lens unit 40, an optical path length changing unit 41, an optical scanner 42, an OCT focusing lens 43, a mirror 44, and a relay lens 45 are provided in this order from the OCT unit 100 side.
  • the optical path length changing unit 41 is movable in the direction of the arrow shown in FIG. 1, and changes the optical path length of the optical path for OCT. This change in the optical path length is used for correcting the optical path length according to the axial length of the eye E or adjusting the interference state.
  • the optical path length changing unit 41 includes, for example, a corner cube and a mechanism for moving the corner cube.
  • the optical scanner 42 is disposed at a position optically conjugate with the pupil of the eye E to be examined.
  • the optical scanner 42 changes the traveling direction of the measurement light LS that passes through the optical path for OCT. Thereby, the eye E is scanned with the measurement light LS.
  • the optical scanner 42 can deflect the measurement light LS in an arbitrary direction on the xy plane, and includes, for example, a galvanometer mirror that deflects the measurement light LS in the x direction and a galvanometer mirror that deflects the measurement light LS in the y direction.
  • the OCT unit 100 is provided with an optical system for performing OCT of the eye E.
  • the configuration of this optical system is the same as that of a conventional swept source OCT. That is, this optical system includes an interference optical system that divides the light from the wavelength-swept (wavelength-scanning) light source into measurement light and reference light, generates interference light by causing the return light of the measurement light from the eye E to interfere with the reference light that has traveled via the reference optical path, and detects this interference light.
  • a detection result (detection signal) obtained by the interference optical system is a signal indicating the spectrum of the interference light, and is sent to the arithmetic control unit 200.
  • the light source unit 101 includes a wavelength swept type (wavelength scanning type) light source that changes the wavelength of the emitted light at high speed, like a general swept source OCT.
  • the wavelength sweep type light source is, for example, a near infrared laser light source.
  • the light L0 output from the light source unit 101 is guided to the polarization controller 103 by the optical fiber 102 and its polarization state is adjusted. Further, the light L0 is guided to the fiber coupler 105 by the optical fiber 104 and is divided into the measurement light LS and the reference light LR.
  • the reference light LR is guided to the collimator 111 by the optical fiber 110, converted into a parallel light beam, and guided to the corner cube 114 via the optical path length correction member 112 and the dispersion compensation member 113.
  • the optical path length correction member 112 acts to match the optical path length of the reference light LR and the optical path length of the measurement light LS.
  • the dispersion compensation member 113 acts to match the dispersion characteristics between the reference light LR and the measurement light LS.
  • the corner cube 114 turns the traveling direction of the incident reference light LR in the reverse direction.
  • the incident direction and the emitting direction of the reference light LR with respect to the corner cube 114 are parallel to each other.
  • the corner cube 114 is movable in the incident direction of the reference light LR, and thereby the optical path length of the reference light LR is changed.
  • in this embodiment, both the optical path length changing unit 41 for changing the length of the optical path of the measurement light LS (measurement optical path, measurement arm) and the corner cube 114 for changing the length of the optical path of the reference light LR (reference optical path, reference arm) are provided; however, only one of the optical path length changing unit 41 and the corner cube 114 may be provided. It is also possible to change the difference between the measurement optical path length and the reference optical path length using an optical member other than these.
  • the reference light LR that has passed through the corner cube 114 is converted from a parallel light beam into a converged light beam by the collimator 116 via the dispersion compensation member 113 and the optical path length correction member 112, and enters the optical fiber 117.
  • the reference light LR incident on the optical fiber 117 is guided to the polarization controller 118 and its polarization state is adjusted.
  • the reference light LR is guided to the attenuator 120 by the optical fiber 119, where the light amount is adjusted, and is then guided to the fiber coupler 122 by the optical fiber 121.
  • the measurement light LS generated by the fiber coupler 105 is guided by the optical fiber 127, converted into a parallel light beam by the collimator lens unit 40, passes through the optical path length changing unit 41, the optical scanner 42, the OCT focusing lens 43, the mirror 44, and the relay lens 45, is reflected by the dichroic mirror 46, is refracted by the objective lens 22, and enters the eye E to be examined.
  • the measurement light LS is scattered and reflected at various depth positions of the eye E.
  • the return light of the measurement light LS from the eye E travels in the reverse direction on the same path as the forward path, is guided to the fiber coupler 105, and reaches the fiber coupler 122 via the optical fiber 128.
  • the fiber coupler 122 combines (causes interference between) the measurement light LS incident through the optical fiber 128 and the reference light LR incident through the optical fiber 121, thereby generating interference light.
  • the fiber coupler 122 generates a pair of interference light LC by branching the interference light at a predetermined branching ratio (for example, 1: 1).
  • the pair of interference lights LC are guided to the detector 125 through optical fibers 123 and 124, respectively.
  • the detector 125 is, for example, a balanced photodiode (Balanced Photo Diode).
  • the balanced photodiode has a pair of photodetectors that respectively detect the pair of interference lights LC, and outputs a difference between detection results obtained by these.
  • the detector 125 sends the detection result (detection signal) to a DAQ (Data Acquisition System) 130.
  • the clock KC is supplied from the light source unit 101 to the DAQ 130.
  • the clock KC is generated in synchronization with the output timing of each wavelength swept within a predetermined wavelength range by the wavelength sweep type light source in the light source unit 101.
  • the light source unit 101 optically delays one of two beams obtained by splitting the light L0 of each output wavelength, and generates the clock KC based on the result of detecting the combined beam.
  • the DAQ 130 samples the detection signal input from the detector 125 based on the clock KC.
  • the DAQ 130 sends the sampling result of the detection signal from the detector 125 to the arithmetic control unit 200.
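  • the role of sampling on the clock KC can be illustrated with synthetic numbers. This is a hedged sketch, not the patent's implementation: the nonlinear sweep model k(t), the tick count, and the fringe frequency are all assumptions. Because a swept source does not sweep wavenumber linearly in time, sampling the detection signal at ticks that are uniform in wavenumber k linearizes the spectrum, so a single reflector produces a single sharp FFT peak.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2001)       # time axis within one sweep
k = 2 * np.pi * (t + 0.3 * t**2)      # assumed nonlinear sweep: k(t)
signal = np.cos(50.0 * k)             # interference fringe, uniform in k

# k-clock: 512 ticks uniform in wavenumber; find the times of those ticks
k_ticks = np.linspace(k[0], k[-1], 512)
t_ticks = np.interp(k_ticks, k, t)    # invert the monotonic k(t) by interpolation
samples = np.interp(t_ticks, t, signal)  # "DAQ" sampling at the clock ticks

# After k-linearization the FFT collapses to one peak; the fringe spans
# 50 * 1.3 = 65 cycles over the sampled record, so the peak lands at bin 65.
spectrum = np.abs(np.fft.rfft(samples * np.hanning(samples.size)))
peak = int(np.argmax(spectrum[1:])) + 1
print(peak)  # 65
```

Sampling the same signal at ticks uniform in time instead would smear this peak across many bins, which is why the clock KC is supplied to the DAQ 130.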
  • the arithmetic control unit 200 controls each part of the fundus camera unit 2, the display device 3, and the OCT unit 100.
  • the arithmetic control unit 200 executes various arithmetic processes.
  • the arithmetic control unit 200 applies signal processing such as a Fourier transform to the spectral distribution based on the detection result obtained by the detector 125 for each series of wavelength sweeps (that is, for each A-line), thereby forming a reflection intensity profile for each A-line.
  • the arithmetic control unit 200 forms image data by imaging the reflection intensity profile of each A line.
  • the arithmetic processing for that is the same as the conventional swept source OCT.
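  • a minimal sketch of this per-A-line processing, under the simplifying assumptions that the sampled spectrum is already uniform in wavenumber and that windowing and background subtraction are omitted; the reflector depths and array size are illustrative values, not from the patent:

```python
import numpy as np

n = 1024
k = np.arange(n) / n                        # normalized wavenumber axis
depths = [100, 300]                         # two reflector "depths" (FFT bins)

# Spectral interferogram: each reflector contributes a cosine fringe whose
# frequency along k is proportional to its depth.
interferogram = sum(np.cos(2 * np.pi * d * k) for d in depths)

# Fourier transform of the spectrum gives the reflection intensity profile;
# peak positions along the transformed axis correspond to depths.
profile = np.abs(np.fft.rfft(interferogram))
found = sorted(int(i) for i in np.argsort(profile)[-2:])
print(found)  # [100, 300]
```

Imaging a B-scan then amounts to arranging the profiles of the A-lines along the scan line side by side.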
  • the arithmetic control unit 200 includes, for example, a processor, a RAM (Random Access Memory), a ROM (Read Only Memory), a hard disk drive, a communication interface, and the like. Various computer programs are stored in a storage device such as a hard disk drive.
  • the arithmetic control unit 200 may include an operation device, an input device, a display device, and the like.
  • <Control system> A configuration example of the control system of the ophthalmologic apparatus 1 is shown in FIG.
  • Control unit 210 controls each unit of the ophthalmologic apparatus 1.
  • Control unit 210 includes a processor.
  • the control unit 210 is provided with a main control unit 211 and a storage unit 212.
  • the main control unit 211 performs various controls.
  • the main control unit 211 controls the imaging focusing lens 31, the CCD image sensors 35 and 38, the LCD 39, the optical path length changing unit 41, the optical scanner 42, the OCT focusing lens 43, the focus optical system 60, the reflection rod 67, the light source unit 101, the reference driving unit 114A, the detector 125, the DAQ 130, and the like.
  • the reference driving unit 114A moves the corner cube 114 provided in the reference optical path. Thereby, the length of the reference optical path is changed.
  • the ophthalmologic apparatus 1 may include an optical system moving mechanism (not shown).
  • the optical system moving mechanism three-dimensionally moves the fundus camera unit 2 (or at least a part of the optical system stored therein) and the OCT unit 100 (or at least a part of the optical system stored therein).
  • the storage unit 212 stores various data. Examples of the data stored in the storage unit 212 include image data of an OCT image, image data of a fundus image, and eye information to be examined.
  • the eye information includes subject information such as patient ID and name, left / right eye identification information, electronic medical record information, and the like.
  • in the storage unit 212, normal eye data 212a is stored in advance.
  • the normal eye data 212a is obtained by collecting measurement data of eyes (normal eyes) diagnosed at least as not suffering from a predetermined disorder or disease, and statistically processing these data.
  • the normal eye data 212a represents a standard value of normal eye measurement data.
  • the normal eye data 212a includes, for example, a representative standard value such as a mean or median, and a value representing a range such as a standard deviation, variance, maximum value, or minimum value, regarding a predetermined characteristic value of normal eyes.
  • the normal eye data 212a includes a threshold value of a predetermined characteristic value of the normal eye.
  • the characteristic value to be measured is a value that is known to change according to the presence or absence and degree of a predetermined injury or illness, and is a value that can be derived from data collected by an OCT scan.
  • the characteristic value may be the size of a fundus tissue referred to in the diagnosis of a fundus disease such as glaucoma. Specific examples include the thickness of a layer tissue composed of the retinal nerve fiber layer, the ganglion cell layer, the inner plexiform layer, or any combination thereof. Further examples of characteristic values include the choroid thickness, the diameter and inclination of the optic disc, the diameter, number, and distribution of the pores of the lamina cribrosa, and the corneal thickness distribution and curvature distribution. The characteristic values are not limited to these, and any characteristic value can be adopted according to the purpose for which the ophthalmologic apparatus 1 is used.
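  • the statistical derivation of such normal eye data can be sketched as follows. This is a hedged illustration with synthetic data: the sample size, the choice of mean ± 2 standard deviations as the range, and all values are assumptions, not figures from the patent.

```python
import numpy as np

# Synthetic characteristic values (e.g. a layer thickness at one measurement
# point, in micrometres) collected from many normal eyes.
rng = np.random.default_rng(0)
normal_thicknesses = rng.normal(loc=100.0, scale=8.0, size=500)

# Representative standard value and a value representing a range, as the
# normal eye data is described to contain.
mean = float(np.mean(normal_thicknesses))
sd = float(np.std(normal_thicknesses))
allowable = (mean - 2 * sd, mean + 2 * sd)  # assumed convention for the range

print(round(mean, 1), round(sd, 1))
```

Stored per measurement point, such pairs of representative value and range are exactly what a point-by-point comparison against an examined eye requires.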
  • the normal eye data 212a of the present embodiment includes graph information (standard graph) representing a standard distribution of characteristic values.
  • the standard graph is defined using a coordinate system including a first coordinate axis that represents a position in a predetermined range of the eye and a second coordinate axis that represents the magnitude of the characteristic value.
  • the standard graph of the present embodiment represents an allowable range of characteristic values for determining a normal eye.
  • the predetermined range of the eye is included in the target range of the OCT scan.
  • the predetermined range and the target range may be the same or different.
  • the predetermined range may be a two-dimensional area composed of a part of a three-dimensional area that is a target of a three-dimensional scan.
  • the first coordinate axis includes one or more coordinate axes.
  • when the first coordinate axis consists of a single coordinate axis, it represents a one-dimensional position, for example a plurality of positions arranged on a straight line or a curve.
  • when the first coordinate axis consists of two coordinate axes, it represents a two-dimensional position, for example a plurality of positions arranged on a plane or a curved surface. The same applies when the first coordinate axis consists of three or more coordinate axes.
  • the standard graph 300 represents a standard distribution of retinal nerve fiber layer (cpRNFL) thickness around the optic disc.
  • the horizontal axis corresponds to the first coordinate axis and represents the definition area of the standard graph 300.
  • the vertical axis corresponds to the second coordinate axis and represents the value of the thickness of the retinal nerve fiber layer.
  • the domain defined by the horizontal axis is the circumference of a cylindrical area surrounding the optic disc and having a predetermined diameter.
  • An example is shown in FIG.
  • a circular scan path P having a predetermined diameter is applied around the optic disc D.
  • OCT is sequentially performed on a plurality of positions on the scan path P.
  • the center P_C of the circular scan path P is set, for example, at the center of gravity of the optic disc D or at the center of an approximate circle (or approximate ellipse) of the optic disc D.
  • the diameter of the scan path P may be a default value.
  • the diameter of the scan path P may be set for each eye E based on the diameter of the optic nerve head D (disk diameter, cup diameter, rim diameter, etc.).
  • the diameter of the scan path P can be obtained by multiplying the measured value of the diameter of the optic nerve head D by a predetermined constant.
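  • this sizing rule is a simple multiplication. The sketch below is illustrative only: the constant 2.0 and the function name are assumptions, since the patent specifies neither a particular constant nor which disc measurement (disc, cup, or rim diameter) to use.

```python
def scan_path_diameter(disc_diameter_mm: float, k: float = 2.0) -> float:
    """Diameter of the circular OCT scan path around the optic disc,
    obtained by multiplying the measured disc diameter by a constant k."""
    return k * disc_diameter_mm

# A 1.5 mm disc with the assumed constant k = 2.0 gives a 3.0 mm scan circle.
print(scan_path_diameter(1.5))  # 3.0
```

Setting the diameter per eye in this way keeps the scan circle at a comparable position relative to the disc margin across eyes of different disc sizes.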
  • in FIG. 5, “T” denotes the temporal side, “S” the superior side, “N” the nasal side, and “I” the inferior side. The labels “T”, “S”, “N”, and “I” on the horizontal axis in FIG. 4 correspond to these directions.
  • the standard graph 300 shown in FIG. 4 represents a standard distribution of the thickness of the retinal nerve fiber layer in a two-dimensional cross section obtained by cutting open, along the z-direction at the temporal position “T”, the cylindrical cross section around the optic nerve head.
  • the standard graph 300 in this example may represent the allowable range of the layer thickness value at each point (each point on the horizontal axis) on the circumference (scan path P).
  • the standard graph 300 includes, for example, a graph (average graph) 301 indicating the average layer thickness value at each point on the circumference, a graph (upper limit graph) 302 indicating the upper limit of the allowable range, and a graph (lower limit graph) 303 indicating the lower limit.
  • it is also possible to form the standard graph 300 by measuring a large number of normal eyes, collecting distribution graphs of the thickness of the retinal nerve fiber layer, and analyzing the forms of these thickness distributions (shape, position, magnitude of values, etc.). In that case, the standard graph 300 represents not a standard distribution of layer thickness values but the forms that a normal-eye distribution graph can take (that is, an allowable range of distribution graph forms). It is also possible to create the standard graph 300 in consideration of both the layer thickness values and the form of the distribution graph.
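  • forming the three curves of such a standard graph from many normal-eye distribution graphs can be sketched as below. All data are synthetic and the mean ± 2 standard deviations band is an assumed convention; the patent does not fix a particular statistic for the upper and lower limit graphs.

```python
import numpy as np

rng = np.random.default_rng(1)
n_eyes, n_points = 200, 256                  # eyes x circumferential positions

# Synthetic cpRNFL-like profile: thicker in the superior/inferior sectors.
angles = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
base = 100.0 + 20.0 * np.sin(2 * angles)
graphs = base + rng.normal(0.0, 6.0, size=(n_eyes, n_points))

average_graph = graphs.mean(axis=0)          # like average graph 301
sd = graphs.std(axis=0)
upper_graph = average_graph + 2 * sd         # like upper limit graph 302
lower_graph = average_graph - 2 * sd         # like lower limit graph 303

print(average_graph.shape)  # (256,)
```

An examined eye's distribution graph plotted over this band makes deviations from normal visible point by point along the circumference.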
  • the image forming unit 220 forms image data of a cross-sectional image of the fundus oculi Ef or the anterior ocular segment based on the sampling result of the detection signal input from the DAQ 130.
  • This processing includes signal processing such as noise removal (noise reduction), filter processing, and FFT (Fast Fourier Transform) as in the case of the conventional swept source OCT.
  • The image data formed by the image forming unit 220 is a group of image data (a group of A scan image data) formed by imaging the reflection intensity profiles in a plurality of A lines (lines in the z direction) arranged along the scan line.
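The reconstruction of one A line's reflection-intensity profile can be illustrated with a minimal Fourier-domain sketch, as is typical of swept source OCT. This is a simplified illustration under stated assumptions (a Hann window stands in for the filtering step; dispersion compensation and other corrections mentioned above are omitted):

```python
import numpy as np

def a_line_profile(interference_samples):
    """Turn one sampled spectral interferogram into a reflection-intensity
    profile along the A line (depth direction), swept-source-OCT style:
    window the samples, apply an FFT, take the magnitude."""
    samples = np.asarray(interference_samples, dtype=float)
    windowed = samples * np.hanning(len(samples))  # sidelobe suppression (filtering)
    spectrum = np.fft.rfft(windowed)               # FFT: wavenumber -> depth
    return np.abs(spectrum)                        # reflection intensity vs. depth
```

A pure interference fringe of k cycles across the sweep maps to a peak at depth bin k.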
  • the image forming unit 220 includes, for example, at least one of a processor and a dedicated circuit board.
  • In this specification, image data and the “image” based on it may be used interchangeably. Likewise, a part of the eye E to be examined and an image representing it may be used interchangeably.
  • the data processing unit 230 performs image processing and analysis processing on the image formed by the image forming unit 220. For example, the data processing unit 230 executes correction processing such as image luminance correction and dispersion correction. Further, the data processing unit 230 performs image processing and analysis processing on an image (fundus image, anterior eye image, etc.) obtained by the fundus camera unit 2.
  • the data processing unit 230 includes, for example, at least one of a processor and a dedicated circuit board.
  • the data processing unit 230 includes a three-dimensional image forming unit 231, a partial image specifying unit 232, a characteristic value calculating unit 233, a distribution graph creating unit 234, and a graph comparing unit 235.
  • (Three-dimensional image forming unit 231) When a three-dimensional scan (raster scan or the like) of the eye E is performed, the image forming unit 220 forms a two-dimensional cross-sectional image (B scan image) corresponding to each scanning line.
  • the three-dimensional image forming unit 231 forms a three-dimensional image based on these two-dimensional cross-sectional images.
  • a three-dimensional image means image data in which pixel positions are defined by a three-dimensional coordinate system. Examples of 3D images include stack data and volume data.
  • Stack data is image data obtained by three-dimensionally arranging a plurality of cross-sectional images corresponding to a plurality of scanning lines based on the positional relationship of those scanning lines. That is, stack data is image data obtained by expressing a plurality of cross-sectional images, originally defined in individual two-dimensional coordinate systems, in a single three-dimensional coordinate system (that is, by embedding them in one three-dimensional space).
  • Volume data (voxel data) is formed by executing known image processing such as interpolation processing for interpolating pixels between a plurality of cross-sectional images included in the stack data to form voxels.
  • a pseudo three-dimensional image (rendered image) is formed by performing rendering processing (volume rendering, MIP (Maximum Intensity Projection), etc.) on the volume data.
  • a two-dimensional cross-sectional image can be formed from a group of pixels arranged in an arbitrary cross section in the volume data. This image processing is called multi-planar reconstruction (MPR).
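As a rough illustration of stack data, interpolation into voxel data, and MPR, consider the following sketch. The function names and the midpoint interpolation are illustrative assumptions, not the prescribed processing of the embodiment:

```python
import numpy as np

def stack_to_volume(b_scans):
    """Embed B scan images (each defined in its own 2-D coordinate system)
    into one 3-D array indexed (y, z, x): stack data."""
    return np.stack([np.asarray(b, dtype=float) for b in b_scans], axis=0)

def interpolate_midslices(volume):
    """A minimal stand-in for forming voxel data: insert one linearly
    interpolated slice between each pair of adjacent cross sections."""
    mids = 0.5 * (volume[:-1] + volume[1:])
    out = np.empty((2 * volume.shape[0] - 1,) + volume.shape[1:], dtype=volume.dtype)
    out[0::2] = volume
    out[1::2] = mids
    return out

def mpr_section(volume, y_index):
    """MPR in its simplest form: the pixel group lying in the plane
    y = const (arbitrary oblique sections would need resampling)."""
    return volume[y_index]
```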
  • (Partial image specifying unit 232) When the three-dimensional image forming unit 231 forms a three-dimensional image of the eye E, the partial image specifying unit 232 analyzes the three-dimensional image to specify, within it, the partial image to be compared with the normal eye data 212a. As described above, the normal eye data 212a includes a standard graph. The partial image specifying unit 232 specifies the area in the three-dimensional image that corresponds to the definition area (the predetermined range of the eye described above) indicated by the first coordinate axis of the standard graph.
  • the partial image specifying unit 232 specifies a partial image corresponding to the scan path P shown in FIG. 5 by analyzing the three-dimensional image.
  • a specific example of this processing is shown in FIG. A symbol V indicates a three-dimensional image.
  • the partial image specifying unit 232 specifies the optic disc D by analyzing the three-dimensional image V, and specifies the scan path P based on the position and diameter of the specified optic disc D.
  • the scanning path P is a circle that surrounds the optic nerve head D and has a predetermined diameter.
  • the partial image corresponding to the scan path P is a cylindrical cross-sectional image G having the scan path P as a circumference and the z direction as an axial direction.
  • the characteristic value calculation unit 233 calculates the characteristic value at each of a plurality of positions in the target region by analyzing the partial image specified by the partial image specifying unit 232. In this example, the characteristic value calculation unit 233 determines the thickness distribution of the retinal nerve fiber layer by analyzing the cylindrical cross-sectional image G. That is, the thickness of the retinal nerve fiber layer is measured at a plurality of positions in the scan path P.
  • The characteristic value calculation unit 233 first cuts open the cylindrical cross-sectional image G along the z direction at the temporal position “T”. A two-dimensional cross-sectional image GA as shown in FIG. 7 is thereby obtained.
  • Next, the characteristic value calculation unit 233 performs segmentation of the two-dimensional cross-sectional image GA to specify the image region (nerve fiber layer image) L corresponding to the retinal nerve fiber layer. Segmentation is performed based on changes in pixel values, as in the conventional case. For example, a characteristic value is specified from the values of the pixel group arranged in each A line, and a pixel having the specified value is selected as a pixel of a predetermined layer (or of its boundary).
  • the characteristic value calculation unit 233 may obtain an approximate curve (a linear approximate curve, a logarithmic approximate curve, a polynomial approximate curve, a power approximate curve, an exponential approximate curve, a moving average approximate curve, etc.) of the layer boundary.
  • In this way, the thicknesses W(p1), W(p2), …, W(pM) of the retinal nerve fiber layer at a plurality of positions p1, p2, …, pM on the scan path P are obtained. That is, the thickness distribution W(P) of the retinal nerve fiber layer along the scan path P is obtained.
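A minimal sketch of the segmentation and thickness measurement described above might look like this. The maximum-gradient boundary picker and the calibration parameter are illustrative assumptions; actual segmentation of the retinal nerve fiber layer is considerably more involved:

```python
import numpy as np

def segment_boundary(section):
    """Toy segmentation: in each A line (column) of a 2-D section, take the
    depth with the largest change in pixel value as a layer boundary -- a
    stand-in for the pixel-value-based segmentation described above."""
    grad = np.abs(np.diff(np.asarray(section, dtype=float), axis=0))
    return np.argmax(grad, axis=0)

def layer_thickness(top_boundary, bottom_boundary, z_scale=1.0):
    """Thickness W(p_1), ..., W(p_M) of a layer at each scan-path position
    from its segmented upper and lower boundary depths (in pixels);
    z_scale is an assumed pixel-to-length calibration factor."""
    return (np.asarray(bottom_boundary, dtype=float)
            - np.asarray(top_boundary, dtype=float)) * z_scale
```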
  • the distribution graph creation unit 234 creates a distribution graph based on the characteristic values calculated for a plurality of positions by the characteristic value calculation unit 233.
  • The distribution graph creation unit 234 plots the correspondences (pm, W(pm)) obtained by the characteristic value calculation unit 233 in a coordinate system in which the position on the scan path P is represented by the horizontal axis (first coordinate axis) and the thickness W of the retinal nerve fiber layer by the vertical axis (second coordinate axis).
  • the distribution graph creation unit 234 can connect a plurality of points plotted in the coordinate system with broken lines or curves. By such processing, a distribution graph H shown in FIG. 8 is created.
  • The graph comparison unit 235 compares the distribution graph H created by the distribution graph creation unit 234 with the standard graph 300 stored in the storage unit 212 (normal eye data 212a). In this example, the graph comparison unit 235 determines whether or not the distribution graph H is included in the allowable range represented by the standard graph 300.
  • An example of comparison between the distribution graph H and the standard graph 300 is shown in FIG. In the example illustrated in FIG. 9, the distribution graph H is included in an allowable range (a region sandwiched between the upper limit graph 302 and the lower limit graph 303 illustrated in FIG. 4) indicated by the standard graph 300. Therefore, the result of this comparative diagnosis is determined to be normal.
  • The method of comparing the graphs is arbitrary. For example, the apparatus can be configured to determine whether the entire distribution graph is included in the allowable range represented by the standard graph, whether at least a part of the distribution graph is included in that allowable range, or whether the proportion of the distribution graph included in that allowable range is equal to or greater than a threshold value.
  • It can also be configured to determine whether or not the distribution graph has the characteristics of the standard graph.
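These comparison policies can be sketched in one routine. The 0.95 default threshold and the return format are illustrative assumptions; the text leaves the threshold unspecified:

```python
import numpy as np

def compare_to_standard(distribution, lower, upper, ratio_threshold=0.95):
    """Compare a distribution graph with the allowable range of a standard
    graph. Returns per-point inclusion, whether the whole graph lies inside
    the range, and whether the included fraction reaches a threshold --
    three of the comparison policies listed in the text."""
    d = np.asarray(distribution, dtype=float)
    inside = (d >= np.asarray(lower)) & (d <= np.asarray(upper))
    return inside, bool(inside.all()), float(inside.mean()) >= ratio_threshold
```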
  • The circumpapillary retinal nerve fiber layer thickness (cpRNFLT) of a normal eye becomes thinner in the order of the inferior, superior, nasal, and temporal sides (the ISNT rule), and the graph has a convex portion near each of “I” and “S” (the so-called double hump). It is known that this double hump shape collapses in glaucomatous eyes. Therefore, a standard graph can be created in consideration of, for example, the allowable range of the interval between the two convex portions, the allowable range of their displacement, and the allowable range of the elevation difference between the convex portions and other portions.
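A toy analysis of the double hump, extracting the quantities that a standard graph could constrain (the positions of the two convex portions, their interval, and their elevation above the rest of the profile), might be sketched as follows; the peak-picking rule is an assumption:

```python
import numpy as np

def double_hump_features(profile):
    """Pick the two tallest strict local maxima of a circumpapillary
    thickness profile as the 'S' and 'I' humps, and report their positions,
    interval, and elevation above the median profile level."""
    p = np.asarray(profile, dtype=float)
    is_peak = np.r_[False, (p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]), False]
    peaks = np.flatnonzero(is_peak)
    i, j = np.sort(peaks[np.argsort(p[peaks])[-2:]])  # two tallest, in order
    return i, j, j - i, p[[i, j]].mean() - np.median(p)
```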
  • Alternatively, both graphs can be treated as images and the image correlation between them obtained. The image correlation value is used as the deviation of the distribution graph from the standard graph and serves as material for determining whether or not the eye E has an abnormality.
  • Both graphs can also be regarded as thickness distribution functions in a space (WP space) spanned by the layer thickness (W) and the position on the scan path (P), and a correlation between these distribution functions can be obtained.
  • the correlation value between the distribution functions is obtained as a cross-correlation function of the graph of the eye E to be examined with respect to the standard graph, and is used as a material for determining the presence / absence and degree of abnormality based on comparison with a standard normal eye.
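As a minimal stand-in for such a correlation measure, a Pearson correlation between the sampled distribution graph and the average graph of the standard data could be computed as below. The choice of Pearson correlation is an assumption; the text leaves the exact correlation measure open:

```python
import numpy as np

def graph_correlation(distribution, standard_average):
    """Correlation between the examined eye's distribution graph and the
    average graph of the standard data, usable as a deviation measure:
    near 1 for a normal-shaped profile, lower when the shape deviates."""
    d = np.asarray(distribution, dtype=float)
    s = np.asarray(standard_average, dtype=float)
    return float(np.corrcoef(d, s)[0, 1])
```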
  • the user interface 240 includes a display unit 241 and an operation unit 242.
  • the display unit 241 includes the display device 3.
  • the main control unit 211 causes the display unit 241 to display information obtained by the data processing unit 230. For example, the comparison result between the standard graph and the distribution graph and information indicating whether or not an abnormality has been found in the eye E can be displayed.
  • the operation unit 242 includes various operation devices and input devices.
  • the user interface 240 may include a device such as a touch panel in which a display function and an operation function are integrated. Embodiments that do not include at least a portion of the user interface 240 can also be constructed.
  • the display device may be an external device connected to the ophthalmologic apparatus.
  • OCT of the eye E is performed.
  • data is collected by scanning a three-dimensional region including the optic papilla D shown in FIG. 5 and its periphery (including at least the scan path P).
  • This three-dimensional scan is, for example, a raster scan in which a plurality of linear scanning lines are arranged in parallel to each other.
  • Each scanning line has, for example, a line segment shape extending in the x direction, and the plurality of scanning lines are arranged at equal intervals in the y direction.
  • the image forming unit 220 forms a plurality of B scan images corresponding to the plurality of scanning lines based on the data collected in step S1.
  • the B scan image of this example is, for example, a two-dimensional cross-sectional image representing an xz cross section.
  • the three-dimensional image forming unit 231 forms a three-dimensional image (three-dimensional image V shown in FIG. 6) based on the plurality of B scan images formed in step S2.
  • The partial image specifying unit 232 analyzes the three-dimensional image V formed in step S3 to specify the cylindrical cross-sectional image G (a partial image of the three-dimensional image V) that surrounds the optic nerve head D and has a predetermined diameter.
  • The characteristic value calculation unit 233 calculates the thickness (characteristic value) of the retinal nerve fiber layer at a plurality of positions on the scan path P by analyzing the cylindrical cross-sectional image G specified in step S4. Specifically, the characteristic value calculation unit 233 forms the two-dimensional cross-sectional image GA shown in FIG. 7 by cutting open the cylindrical cross-sectional image G along the z direction at the temporal position “T”, and identifies the nerve fiber layer image L by performing segmentation of the two-dimensional cross-sectional image GA.
  • The distribution graph creation unit 234 plots the thickness distribution W(P) = {W(p1), W(p2), …, W(pM)} of the retinal nerve fiber layer obtained in step S5 in the coordinate system having the position on the scan path P on the horizontal axis and the thickness W of the retinal nerve fiber layer on the vertical axis. The distribution graph H shown in FIG. 8 is thereby obtained.
  • The graph comparison unit 235 compares the distribution graph H created in step S6 with the standard graph 300 (see FIG. 4) stored in advance in the storage unit 212, thereby determining the presence or absence of a disease (such as glaucoma).
  • the main control unit 211 causes the display unit 241 to display the comparison result (presence / absence of disease, degree, etc.) obtained in step S7.
  • the output method of the comparison result is not limited to display, and may be, for example, transmission to an external computer or storage device, recording on a recording medium, printing on a printing medium, or the like.
  • FIG. 11 shows a part of an ophthalmologic apparatus according to a modification.
  • a data processing unit 230A shown in FIG. 11 is applied instead of the data processing unit 230 (see FIG. 3) of the above embodiment. That is, the ophthalmologic apparatus of the present modification includes the configuration shown in FIGS. 1 and 2, the elements shown in FIG. 3 other than the data processing unit 230, and the data processing unit 230A shown in FIG.
  • the data processing unit 230A includes a three-dimensional image forming unit 231A, a characteristic value calculating unit 232A, a characteristic value selecting unit 233A, a distribution graph creating unit 234A, and a graph comparing unit 235A.
  • the three-dimensional image forming unit 231A is configured to execute the same processing as the three-dimensional image forming unit 231 of the above embodiment.
  • the characteristic value calculation unit 232A calculates the characteristic value at each of a plurality of positions of the three-dimensional image by analyzing the three-dimensional image formed by the three-dimensional image forming unit 231A. For example, the characteristic value calculation unit 232A calculates the thickness of the retinal nerve fiber layer for all the A lines constituting the three-dimensional image. Thereby, the thickness distribution of the retinal nerve fiber layer in the three-dimensional region where the three-dimensional OCT scan is performed is obtained.
  • the characteristic value selection unit 233A selects a characteristic value corresponding to the target part from all the characteristic values obtained by the characteristic value calculation unit 232A by analyzing the three-dimensional image.
  • the characteristic value selection unit 233A specifies, for example, a partial image in the three-dimensional image corresponding to the target part. This process is executed, for example, in the same manner as the partial image specifying unit 232 of the above embodiment. Thereby, for example, characteristic values (thicknesses of the retinal nerve fiber layer) at a plurality of positions in the region represented by the cylindrical cross-sectional image G shown in FIG. 6 are selected.
  • the distribution graph creation unit 234A creates a distribution graph in which the distribution of the characteristic values in the target part is expressed by a predetermined coordinate system based on the characteristic value group selected by the characteristic value selection unit 233A. This process is executed in the same manner as the distribution graph creation unit 234 of the above embodiment. Thereby, for example, a distribution graph H as shown in FIG. 8 is obtained.
  • the graph comparison unit 235A compares the distribution graph created by the distribution graph creation unit 234A with the standard graph stored in advance in the storage unit 212. This process is executed in the same manner as the graph comparison unit 235 of the above embodiment.
  • the combination and order of a series of processes based on a three-dimensional image obtained by a three-dimensional OCT scan may be arbitrary.
  • a partial image (such as a nerve fiber layer image) may be specified and a characteristic value may be calculated.
  • a B-scan image used for forming a three-dimensional image can be analyzed to identify a partial image or calculate a characteristic value.
  • Alternatively, instead of a three-dimensional scan, only the target site of the comparative diagnosis may be scanned by OCT.
  • the ophthalmologic apparatus includes a storage unit, a data collection unit, a processing unit, a comparison unit, and an output unit.
  • The storage unit stores in advance a standard graph in which a standard distribution of characteristic values is expressed by a coordinate system including a first coordinate axis representing a position in a predetermined range of the eye and a second coordinate axis representing the magnitude of a predetermined characteristic value of the eye.
  • the storage unit 212 of the above embodiment is an example.
  • the data collection unit collects data of the region of the eye to be examined including the target region corresponding to the predetermined range using OCT.
  • the elements in the OCT unit 100 and the elements arranged in the measurement arm among the elements in the fundus camera unit 2 are included in the data collection unit.
  • the processing unit processes the collected data to create a distribution graph in which the distribution of characteristic values in the target region is expressed by the coordinate system.
  • the image forming unit 220 and elements (at least the characteristic value calculating unit 233 and the distribution graph creating unit 234) that contribute to creating the distribution graph among the elements in the data processing unit 230 are included in the processing unit.
  • the comparison unit compares the distribution graph created by the processing unit with the standard graph.
  • the comparison unit 235 of the above embodiment is an example.
  • the output unit outputs a comparison result between the distribution graph and the standard graph.
  • the main control unit 211 and the display unit 241 of the above embodiment are examples thereof.
  • the processing unit may include an image forming unit, a characteristic value calculating unit, and a distribution graph creating unit.
  • the image forming unit is configured to form an image based on the data collected by the data collecting unit. This image is, for example, a two-dimensional image (B-scan image or the like) or a three-dimensional image.
  • the image forming unit 220 (and the three-dimensional image forming unit 231) of the above embodiment is an example.
  • the characteristic value calculation unit is configured to calculate characteristic values at each of a plurality of positions in the target site of the comparative diagnosis by analyzing the image formed by the image forming unit.
  • the characteristic value calculation unit 233 of the above embodiment is an example.
  • the distribution graph creation unit is configured to create a distribution graph based on the characteristic values calculated for a plurality of positions (distribution of characteristic values in the target part).
  • the distribution graph creation unit 234 of the above embodiment is an example.
  • the domain of the standard graph (the predetermined range above) may be a cylindrical range surrounding the optic nerve head and having a predetermined diameter.
  • the characteristic value calculation unit may be configured to calculate the thickness of the layer tissue including the retinal nerve fiber layer as the characteristic value.
  • the distribution graph creation unit expresses the distribution of the thickness of the layer structure in a coordinate system in which a plurality of positions in the circumferential direction in the cylindrical range are represented by the first coordinate axis and the thickness of the layer structure is represented by the second coordinate axis. Thus, a distribution graph may be created.
  • the data collection unit may be configured to scan a three-dimensional region of the eye to be examined that includes a target region corresponding to the definition region (the predetermined range) of the standard graph.
  • the image forming unit forms a three-dimensional image based on data collected by scanning the three-dimensional region.
  • the processing unit includes a first specifying unit that specifies a partial image corresponding to the target region by analyzing the formed three-dimensional image.
  • the partial image specifying unit 232 of the above embodiment is an example.
  • the characteristic value calculation unit calculates the characteristic value by analyzing the specified partial image.
  • the distribution graph creation unit creates a distribution graph based on the characteristic values acquired for the partial images.
  • the image forming unit forms a three-dimensional image based on data collected by scanning the three-dimensional region.
  • the characteristic value calculation unit calculates the characteristic value by analyzing the formed three-dimensional image.
  • the processing unit includes a second specifying unit that specifies a characteristic value corresponding to the target part among the characteristic values obtained by the characteristic value calculating unit by analyzing the three-dimensional image.
  • the characteristic value selection unit 233A of the modification is an example.
  • the distribution graph creation unit creates a distribution graph based on the specified characteristic value.
  • According to the embodiment and the modification, a comparative diagnosis of ophthalmic diseases (diagnosis of the presence or absence of a disease, the degree of its progression, etc.) is performed by comparing graphs rather than registered maps, so the diagnosis result is not affected by the precision and accuracy of registration. Moreover, the influence of individual differences on the diagnosis result is small. Therefore, according to the embodiment and the modification, it is possible to improve the accuracy of the comparative diagnosis using the standard data.
  • In addition, whereas the conventional method compares characteristic values only locally, the embodiment and the modification are configured to assess the distribution of the characteristic state of the eye to be examined globally. A global comparative diagnosis of this kind, which cannot be performed with the conventional method, therefore becomes possible.
  • With the embodiment and the modification, it is also possible to detect abnormalities that are difficult to grasp with conventional methods, such as the collapse of the double hump shape in a glaucomatous eye. It is known that the distance between the two humps (convex portions) tends to increase when the eye to be examined is highly myopic. According to the embodiment and the modification, it is possible to make a comparative diagnosis by referring to the distance between the humps, or to determine whether the eye to be examined is myopic based on that distance.
  • the storage unit (for example, the storage unit 212) stores in advance the allowable range of the characteristic value in addition to the standard graph.
  • the allowable range of characteristic values is normal eye data used in conventional comparative diagnosis, and is map information representing, for example, allowable ranges of characteristic values at a plurality of positions of the eye (characteristic value ranges for normal eyes).
  • This makes it possible to combine the conventional comparative diagnosis with a comparative diagnosis of the embodiment or the modification. For example, when the conventional comparative diagnosis determines that the eye is normal, the comparison unit performs the comparative diagnosis of the embodiment (comparison between the distribution graph and the standard graph). The conventional comparative diagnosis is executed by the data processing unit 230 and the like in the same manner as in the past. When an abnormality is found, the main control unit 211 can cause the display unit 241 or the like to output information indicating that the abnormality exists.
  • For example, the data processing unit 230 first performs the conventional comparative diagnosis on the thickness of the retinal nerve fiber layer and determines whether there is a site (abnormal site) whose thickness is outside the allowable range. When no abnormal site is found, the graph comparison unit 235 then compares the distribution graph with the standard graph. Performing the comparative diagnosis in two stages in this way reduces the possibility of false negatives. On the other hand, when it is determined that an abnormal site exists, the comparison between the distribution graph and the standard graph is omitted.
  • the order in which these comparative diagnoses are performed may be reversed.
  • it is possible to determine whether or not to execute the conventional comparative diagnosis according to the result of the comparative diagnosis of the embodiment or the modification.
  • the results may be output by comprehensively considering the results of the comparative diagnosis of the embodiment or the modified example and the results of the conventional comparative diagnosis.
  • it can be configured to output the result “abnormal” when it is determined to be abnormal in one or both of the comparative diagnoses, and to output the result “normal” when it is determined to be normal in both of the comparative diagnoses.
  • the result may be output in consideration of the degree of abnormality in both comparative diagnoses.
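One of the combination policies described here — output “abnormal” when either comparative diagnosis flags an abnormality and “normal” only when both agree — can be stated as a one-line rule (a sketch; degrees of abnormality are ignored):

```python
def combined_result(conventional_abnormal: bool, graph_abnormal: bool) -> str:
    """Combine the conventional map-based comparison and the graph
    comparison: 'abnormal' if either (or both) flags an abnormality,
    'normal' only if both judge the eye normal."""
    return "abnormal" if (conventional_abnormal or graph_abnormal) else "normal"
```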
  • The ophthalmologic apparatus of the above embodiment or modification includes a configuration (data collection unit) for executing OCT, but embodiments are not limited to this. An embodiment may be an ophthalmic modality other than OCT, a computer, or the like, into which data obtained using OCT is input from the outside.
  • Such an ophthalmologic apparatus includes a storage unit, a reception unit, a processing unit, a comparison unit, and an output unit.
  • The storage unit stores in advance a standard graph in which a standard distribution of characteristic values is expressed by a coordinate system including a first coordinate axis representing a position in a predetermined range of the eye and a second coordinate axis representing the magnitude of a predetermined characteristic value of the eye.
  • the accepting unit accepts data collected by scanning the region of the eye to be examined including the target region corresponding to the predetermined range by OCT.
  • the reception unit is, for example, a communication interface in the arithmetic control unit 200 of the above embodiment.
  • the communication interface receives data from an external device via a communication line such as the Internet or a LAN.
  • the external device is, for example, a computer (server, storage device, etc.) on a communication line.
  • the accepting unit is not limited to the communication interface.
  • the reception unit may include a drive device that reads data recorded on the recording medium.
  • the processing unit processes the data received by the receiving unit to create a distribution graph in which the distribution of characteristic values in the target region is expressed by the coordinate system.
  • the comparison unit compares the created distribution graph with the standard graph.
  • the output unit outputs a comparison result between the distribution graph and the standard graph.
  • each of the processing unit, the comparison unit, and the output unit may have the same configuration as in the above embodiment.
  • Reference signs: Ophthalmologic apparatus; 100 OCT unit; 212 storage unit; 212a normal eye data; 220 image forming unit; 230 data processing unit; 234 distribution graph creation unit; 235 graph comparison unit

Abstract

An ophthalmological device of an embodiment of the present invention is provided with a storage unit, a data accumulation unit, a processing unit, a comparison unit, and an output unit. The storage unit stores in advance a standard graph in which a standard distribution of characteristic values is expressed using a coordinate system that includes a first coordinate axis indicating a position of an eye within a prescribed range, and a second coordinate axis indicating the magnitude of a prescribed characteristic value of the eye. The data accumulating unit uses optical coherence tomography to accumulate data of an eye-under-test region which includes a target site corresponding to the prescribed range. The processing unit processes the accumulated data to create a distribution graph in which the distribution of the characteristic values of the target site is expressed using the coordinate system. The comparison unit compares the distribution graph which was created and the standard graph. The output unit outputs the results of the comparison between the distribution graph and the standard graph.

Description

眼科装置Ophthalmic equipment
 本発明は、眼科装置に関する。 The present invention relates to an ophthalmologic apparatus.
 眼科診療において光コヒーレンストモグラフィ(OCT)は必要不可欠な検査となってきている。例えば、緑内障等の眼底疾患の診断では、OCTにより取得された眼底のデータから所定組織(例えば、網膜神経線維層、神経節細胞層、内網状層など)の厚さの分布を取得し、それを標準データと比較することにより、疾患の有無や程度を判定している(例えば特許文献1を参照)。標準データは、多数の正常眼の測定データから統計的に導出され、正常眼データ(ノーマティブデータ)などと呼ばれる。 Optical coherence tomography (OCT) has become an indispensable examination in ophthalmic practice. For example, in the diagnosis of fundus diseases such as glaucoma, the thickness distribution of a predetermined tissue (for example, retinal nerve fiber layer, ganglion cell layer, inner plexiform layer, etc.) is obtained from the fundus data obtained by OCT, Is compared with standard data to determine the presence and extent of the disease (see, for example, Patent Document 1). The standard data is statistically derived from a large number of normal eye measurement data, and is called normal eye data (normal data) or the like.
特開2015-80677号公報JP2015-80677A
 このような従来の比較診断は、OCTにより得られた被検眼の層厚分布と標準層厚分布とのレジストレーションを行い、被検眼の層厚分布における各点の層厚値と標準層厚分布において対応点の標準層厚値とを比較することにより行われる。そのため、レジストレーションの精度や確度が比較結果に大きく影響する。 In such a conventional comparative diagnosis, the layer thickness distribution of the eye to be examined and the standard layer thickness distribution obtained by OCT are registered, and the layer thickness value of each point in the layer thickness distribution of the eye to be examined and the standard layer thickness distribution. Is performed by comparing the standard layer thickness value of the corresponding point. Therefore, the accuracy and accuracy of registration greatly affect the comparison result.
 また、組織の厚さは個人差が大きい。例えば、組織が全体的に標準より薄い眼や厚い眼が存在する。層厚値の大きさのみを比較する従来の手法ではこのような個人差が考慮されないため、被検眼によっては十分な確度の比較結果を得られないおそれがある。 Also, the thickness of the organization varies greatly between individuals. For example, there are eyes whose tissues are generally thinner than standard or thick eyes. In the conventional method that compares only the magnitudes of the layer thickness values, such individual differences are not taken into consideration, and therefore there is a possibility that a comparison result with sufficient accuracy cannot be obtained depending on the eye to be examined.
An object of the ophthalmologic apparatus according to the present invention is to improve the accuracy of comparative diagnosis using standard data.
The ophthalmologic apparatus of an embodiment includes a storage unit, a data collection unit, a processing unit, a comparison unit, and an output unit. The storage unit stores in advance a standard graph in which a standard distribution of a characteristic value is expressed in a coordinate system including a first coordinate axis representing position within a predetermined range of the eye and a second coordinate axis representing the magnitude of a predetermined characteristic value of the eye. The data collection unit uses optical coherence tomography to collect data of a region of the examined eye that includes a target site corresponding to the predetermined range. The processing unit processes the collected data to create a distribution graph in which the distribution of the characteristic value at the target site is expressed in the same coordinate system. The comparison unit compares the created distribution graph with the standard graph. The output unit outputs the result of the comparison.
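As a rough illustration only (not taken from the patent; the function name, units, and threshold values below are assumptions), the comparison unit's task can be sketched as checking, position by position along the first coordinate axis, whether a measured characteristic-value distribution stays inside the allowable band given by a standard graph:

```python
import numpy as np

def compare_with_standard(measured, lower, upper):
    """Return a boolean mask marking positions where the measured
    characteristic value falls outside the standard (normal-eye) band."""
    measured = np.asarray(measured, dtype=float)
    return (measured < lower) | (measured > upper)

# Toy data: layer thickness (micrometres) at 8 positions along the
# first coordinate axis, with an assumed normative band of 80-130 um.
measured = np.array([95, 100, 110, 60, 105, 98, 92, 97], dtype=float)
lower = np.full(8, 80.0)
upper = np.full(8, 130.0)

abnormal = compare_with_standard(measured, lower, upper)
# Only the 4th position (thickness 60 um) lies outside the band.
```

The output mask could then be passed to the output unit, for example to highlight abnormal positions on a displayed graph.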
According to the embodiment, the accuracy of comparative diagnosis using standard data can be improved.
Brief description of the drawings: four schematic diagrams each showing an example of the configuration of the ophthalmologic apparatus according to the embodiment; five schematic diagrams for explaining examples of the configuration of the ophthalmologic apparatus according to the embodiment; a flowchart showing an example of the operation of the ophthalmologic apparatus according to the embodiment; and a schematic diagram showing an example of the configuration of an ophthalmologic apparatus according to a modification.
Several embodiments of the present invention will be described in detail with reference to the drawings. The ophthalmologic apparatus of the embodiments has a function of performing optical coherence tomography (OCT) on the eye to be examined.
Hereinafter, an ophthalmologic apparatus combining swept-source OCT with a fundus camera will be described, but embodiments are not limited to this. For example, the type of OCT is not limited to swept-source OCT and may be spectral-domain OCT or full-field OCT (en-face OCT). In addition, the ophthalmologic apparatus of an embodiment may or may not have a function of acquiring photographs (digital photographs) of the examined eye in the manner of a fundus camera. A slit lamp microscope, an anterior segment camera, or a surgical microscope may be provided instead of the fundus camera.
<Configuration>
The ophthalmologic apparatus 1 shown in FIG. 1 is used for photographing the examined eye E and for OCT. The ophthalmologic apparatus 1 includes a fundus camera unit 2, an OCT unit 100, and an arithmetic control unit 200. The fundus camera unit 2 is provided with optical systems and mechanisms substantially similar to those of a conventional fundus camera. The OCT unit 100 is provided with an optical system and mechanisms for performing OCT. The arithmetic control unit 200 includes a processor. A chin rest and a forehead support for supporting the subject's face are provided at positions facing the fundus camera unit 2.
In this specification, a "processor" means, for example, a circuit such as a CPU (Central Processing Unit), GPU (Graphics Processing Unit), ASIC (Application Specific Integrated Circuit), or programmable logic device (for example, an SPLD (Simple Programmable Logic Device), CPLD (Complex Programmable Logic Device), or FPGA (Field Programmable Gate Array)). The processor realizes the functions according to the embodiment by, for example, reading and executing a program stored in a storage circuit or storage device.
<Fundus camera unit 2>
The fundus camera unit 2 is provided with optical systems and mechanisms for acquiring digital photographs of the examined eye E. Digital images of the eye E include observation images and captured images. An observation image is obtained, for example, by video recording using near-infrared light. A captured image is, for example, a color or monochrome image obtained using visible flash light, or a monochrome image obtained using near-infrared flash light. The fundus camera unit 2 may also be capable of acquiring fluorescein fluorescence images, indocyanine green fluorescence images, autofluorescence images, and the like.
The fundus camera unit 2 includes an illumination optical system 10 and a photographing optical system 30. The illumination optical system 10 projects illumination light onto the examined eye E. The photographing optical system 30 detects the return light of the illumination light from the eye E. Measurement light from the OCT unit 100 is guided to the eye E through an optical path in the fundus camera unit 2, and its return light is guided to the OCT unit 100 through the same optical path.
The observation light source 11 of the illumination optical system 10 is, for example, a halogen lamp or an LED (Light Emitting Diode). Light output from the observation light source 11 (observation illumination light) is reflected by a reflection mirror 12 having a curved reflecting surface, passes through a condenser lens 13, and passes through a visible-cut filter 14 to become near-infrared light. The observation illumination light then converges once in the vicinity of the photographing light source 15, is reflected by a mirror 16, and passes through relay lenses 17 and 18, a diaphragm 19, and a relay lens 20. The observation illumination light is then reflected at the peripheral part of an aperture mirror 21 (the region around its aperture), passes through a dichroic mirror 46, and is refracted by an objective lens 22 to illuminate the examined eye E (in particular, the fundus Ef).
The return light of the observation illumination light from the eye E is refracted by the objective lens 22, passes through the dichroic mirror 46, passes through the aperture formed in the central region of the aperture mirror 21, passes through a dichroic mirror 55, travels via a photographing focusing lens 31, and is reflected by a mirror 32. The return light then passes through a half mirror 33A, is reflected by a dichroic mirror 33, and is imaged on the light receiving surface of a CCD image sensor 35 by a condenser lens 34. The CCD image sensor 35 detects the return light at a predetermined frame rate, for example. Note that an observation image of the fundus Ef is obtained when the photographing optical system 30 is focused on the fundus Ef, and an observation image of the anterior segment is obtained when it is focused on the anterior segment.
The photographing light source 15 is a visible light source including, for example, a xenon lamp or an LED. Light output from the photographing light source 15 (photographing illumination light) is applied to the fundus Ef through the same path as the observation illumination light. The return light of the photographing illumination light from the eye E is guided to the dichroic mirror 33 through the same path as the return light of the observation illumination light, passes through the dichroic mirror 33, is reflected by a mirror 36, and is imaged on the light receiving surface of a CCD image sensor 38 by a condenser lens 37.
The LCD 39 displays a fixation target for fixating the examined eye E. Part of the light flux output from the LCD 39 (the fixation light flux) is reflected by the half mirror 33A, reflected by the mirror 32, travels via the photographing focusing lens 31 and the dichroic mirror 55, passes through the aperture of the aperture mirror 21, passes through the dichroic mirror 46, and is refracted by the objective lens 22 to be projected onto the fundus Ef. The fixation position of the eye E can be changed by changing the display position of the fixation target on the screen of the LCD 39. Instead of the LCD 39, a matrix LED in which a plurality of LEDs are two-dimensionally arranged, or a combination of a light source and a variable diaphragm (such as a liquid crystal diaphragm), can be employed.
The fundus camera unit 2 is provided with an alignment optical system 50 and a focus optical system 60. The alignment optical system 50 generates an alignment indicator used for aligning the optical system with the examined eye E. The focus optical system 60 generates a split indicator used for focus adjustment with respect to the eye E.
Alignment light output from an LED 51 of the alignment optical system 50 travels via diaphragms 52 and 53 and a relay lens 54, is reflected by the dichroic mirror 55, and passes through the aperture of the aperture mirror 21. The light that has passed through the aperture of the aperture mirror 21 passes through the dichroic mirror 46 and is projected onto the eye E by the objective lens 22.
The corneal reflection of the alignment light travels via the objective lens 22, the dichroic mirror 46, and the above aperture; part of it passes through the dichroic mirror 55, passes through the photographing focusing lens 31, is reflected by the mirror 32, passes through the half mirror 33A, is reflected by the dichroic mirror 33, and is projected onto the light receiving surface of the CCD image sensor 35 by the condenser lens 34. Based on the light reception image (alignment indicator image) formed by the CCD image sensor 35, manual alignment and auto alignment can be performed in the same manner as before.
The focus optical system 60 is moved along the optical path of the illumination optical system 10 (the illumination optical path) in conjunction with movement of the photographing focusing lens 31 along the optical path of the photographing optical system 30 (the photographing optical path). The reflection rod 67 can be inserted into and removed from the illumination optical path.
When focus adjustment is performed, the reflecting surface of the reflection rod 67 is placed obliquely in the illumination optical path. Focus light output from an LED 61 passes through a relay lens 62, is split into two light fluxes by a split indicator plate 63, passes through a two-hole diaphragm 64, is reflected by a mirror 65, and is once imaged on and reflected by the reflecting surface of the reflection rod 67 by a condenser lens 66. The focus light then travels via the relay lens 20, is reflected by the aperture mirror 21, passes through the dichroic mirror 46, and is refracted by the objective lens 22 to be projected onto the fundus Ef.
The fundus reflection of the focus light is detected by the CCD image sensor 35 through the same path as the corneal reflection of the alignment light. Based on the light reception image (split indicator image) formed by the CCD image sensor 35, manual focusing and auto focusing can be performed in the same manner as before.
The photographing optical system 30 includes diopter correction lenses 70 and 71. The diopter correction lenses 70 and 71 can be selectively inserted into the photographing optical path between the aperture mirror 21 and the dichroic mirror 55. The diopter correction lens 70 is a plus (+) lens for correcting strong hyperopia, for example a +20 D (diopter) convex lens. The diopter correction lens 71 is a minus (-) lens for correcting strong myopia, for example a -20 D concave lens. The diopter correction lenses 70 and 71 are mounted on, for example, a turret plate. The turret plate also has an aperture for the case where neither diopter correction lens is applied.
The dichroic mirror 46 combines the optical path for fundus photography with the optical path for OCT. The dichroic mirror 46 reflects light in the wavelength band used for OCT and transmits light for fundus photography. In the OCT optical path, a collimator lens unit 40, an optical path length changing unit 41, an optical scanner 42, an OCT focusing lens 43, a mirror 44, and a relay lens 45 are provided in this order from the OCT unit 100 side.
The optical path length changing unit 41 is movable in the direction of the arrow shown in FIG. 1 and changes the length of the OCT optical path. This change in optical path length is used, for example, to correct the optical path length according to the axial length of the examined eye E and to adjust the interference condition. The optical path length changing unit 41 includes, for example, a corner cube and a mechanism for moving it.
The optical scanner 42 is disposed at a position optically conjugate with the pupil of the examined eye E. The optical scanner 42 changes the traveling direction of the measurement light LS passing through the OCT optical path, whereby the eye E is scanned with the measurement light LS. The optical scanner 42 can deflect the measurement light LS in an arbitrary direction in the xy plane and includes, for example, a galvano mirror that deflects the measurement light LS in the x direction and a galvano mirror that deflects it in the y direction.
<OCT unit 100>
As illustrated in FIG. 2, the OCT unit 100 is provided with an optical system for performing OCT of the examined eye E. The configuration of this optical system is the same as that of conventional swept-source OCT. That is, this optical system includes an interference optical system that splits light from a wavelength-swept (wavelength-scanning) light source into measurement light and reference light, causes the return light of the measurement light from the eye E to interfere with the reference light that has traveled along a reference optical path to generate interference light, and detects this interference light. The detection result (detection signal) obtained by the interference optical system is a signal representing the spectrum of the interference light and is sent to the arithmetic control unit 200.
Like that of typical swept-source OCT, the light source unit 101 includes a wavelength-swept (wavelength-scanning) light source that changes the wavelength of the emitted light at high speed. The wavelength-swept light source is, for example, a near-infrared laser light source.
Light L0 output from the light source unit 101 is guided by an optical fiber 102 to a polarization controller 103, where its polarization state is adjusted. The light L0 is then guided by an optical fiber 104 to a fiber coupler 105 and split into measurement light LS and reference light LR.
The reference light LR is guided by an optical fiber 110 to a collimator 111, converted into a parallel light flux, and guided to a corner cube 114 via an optical path length correction member 112 and a dispersion compensation member 113. The optical path length correction member 112 acts to match the optical path length of the reference light LR with that of the measurement light LS. The dispersion compensation member 113 acts to match the dispersion characteristics of the reference light LR and the measurement light LS.
The corner cube 114 folds the traveling direction of the incident reference light LR back in the opposite direction. The incident and exit directions of the reference light LR with respect to the corner cube 114 are parallel to each other. The corner cube 114 is movable in the incident direction of the reference light LR, whereby the optical path length of the reference light LR is changed.
In the configuration shown in FIGS. 1 and 2, both the optical path length changing unit 41 for changing the length of the optical path of the measurement light LS (the measurement optical path, or measurement arm) and the corner cube 114 for changing the length of the optical path of the reference light LR (the reference optical path, or reference arm) are provided; however, only one of the optical path length changing unit 41 and the corner cube 114 may be provided. It is also possible to change the difference between the measurement optical path length and the reference optical path length using an optical member other than these.
The reference light LR that has passed through the corner cube 114 travels via the dispersion compensation member 113 and the optical path length correction member 112, is converted from a parallel light flux into a converging light flux by a collimator 116, and enters an optical fiber 117. The reference light LR that has entered the optical fiber 117 is guided to a polarization controller 118, where its polarization state is adjusted; it is then guided by an optical fiber 119 to an attenuator 120, where its light amount is adjusted, and guided by an optical fiber 121 to a fiber coupler 122.
Meanwhile, the measurement light LS generated by the fiber coupler 105 is guided by an optical fiber 127, converted into a parallel light flux by the collimator lens unit 40, travels via the optical path length changing unit 41, the optical scanner 42, the OCT focusing lens 43, the mirror 44, and the relay lens 45, is reflected by the dichroic mirror 46, is refracted by the objective lens 22, and enters the examined eye E. The measurement light LS is scattered and reflected at various depth positions of the eye E. The return light of the measurement light LS from the eye E travels back along the same path as the forward path, is guided to the fiber coupler 105, and reaches the fiber coupler 122 via an optical fiber 128.
The fiber coupler 122 combines (interferes) the measurement light LS incident via the optical fiber 128 with the reference light LR incident via the optical fiber 121 to generate interference light. The fiber coupler 122 generates a pair of interference lights LC by splitting the interference light at a predetermined splitting ratio (for example, 1:1). The pair of interference lights LC are guided to a detector 125 through optical fibers 123 and 124, respectively.
The detector 125 is, for example, a balanced photodiode. The balanced photodiode has a pair of photodetectors that respectively detect the pair of interference lights LC and outputs the difference between their detection results. The detector 125 sends the detection result (detection signal) to a DAQ (Data Acquisition System) 130.
A clock KC is supplied from the light source unit 101 to the DAQ 130. The clock KC is generated in the light source unit 101 in synchronization with the output timing of each wavelength swept within a predetermined wavelength range by the wavelength-swept light source. For example, the light source unit 101 splits the light L0 of each output wavelength into two branches, optically delays one of them, and generates the clock KC based on the result of detecting the combined light. The DAQ 130 samples the detection signal input from the detector 125 based on the clock KC and sends the sampling results to the arithmetic control unit 200.
<Arithmetic control unit 200>
The arithmetic control unit 200 controls each part of the fundus camera unit 2, the display device 3, and the OCT unit 100. The arithmetic control unit 200 also executes various kinds of arithmetic processing. For example, for each series of wavelength sweeps (that is, for each A-line), the arithmetic control unit 200 applies signal processing such as a Fourier transform to the spectral distribution based on the detection result obtained by the detector 125, thereby forming the reflection intensity profile of that A-line. The arithmetic control unit 200 then forms image data by imaging the reflection intensity profiles of the A-lines. The arithmetic processing for this is the same as in conventional swept-source OCT.
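The per-A-line processing described above can be sketched as follows. This is a minimal synthetic illustration of Fourier-domain OCT reconstruction, not the patent's actual signal chain; practical steps such as spectral windowing, k-linearization via the clock KC, and dispersion correction are omitted:

```python
import numpy as np

# One A-line: the sampled spectral interference signal is Fourier-
# transformed, and its magnitude gives reflectivity versus depth.
n_samples = 1024                 # spectral samples per wavelength sweep
k = np.arange(n_samples)         # wavenumber sample index (assumed linear in k)
depth_bin = 100                  # a single reflector at this depth bin
spectrum = np.cos(2 * np.pi * depth_bin * k / n_samples)  # ideal interferogram

# FFT along the spectral axis; keep the positive-depth half of the profile.
profile = np.abs(np.fft.fft(spectrum))[: n_samples // 2]

peak = int(np.argmax(profile))   # depth bin of the strongest reflection
# peak == 100 for this synthetic single-reflector signal
```

Repeating this for every A-line in a scan and stacking the resulting profiles yields the B-scan or volume image data mentioned above.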
The arithmetic control unit 200 includes, for example, a processor, RAM (Random Access Memory), ROM (Read Only Memory), a hard disk drive, and a communication interface. Various computer programs are stored in a storage device such as the hard disk drive. The arithmetic control unit 200 may include an operation device, an input device, a display device, and the like.
<Control system>
A configuration example of the control system of the ophthalmologic apparatus 1 is shown in FIG. 3.
<Control unit 210>
The control unit 210 controls each part of the ophthalmologic apparatus 1. The control unit 210 includes a processor and is provided with a main control unit 211 and a storage unit 212.
<Main control unit 211>
The main control unit 211 performs various kinds of control. For example, the main control unit 211 controls the photographing focusing lens 31, the CCD image sensors 35 and 38, the LCD 39, the optical path length changing unit 41, the optical scanner 42, the OCT focusing lens 43, the focus optical system 60, the reflection rod 67, the light source unit 101, a reference driving unit 114A, the detector 125, the DAQ 130, and so on. The reference driving unit 114A moves the corner cube 114 provided in the reference optical path, thereby changing the length of the reference optical path.
The ophthalmologic apparatus 1 may include an optical system moving mechanism (not shown). The optical system moving mechanism three-dimensionally moves the fundus camera unit 2 (or at least part of the optical system housed in it) and the OCT unit 100 (or at least part of the optical system housed in it).
<Storage unit 212>
The storage unit 212 stores various kinds of data. The data stored in the storage unit 212 include, for example, image data of OCT images, image data of fundus images, and examined-eye information. The examined-eye information includes subject information such as patient ID and name, left/right eye identification information, electronic medical record information, and the like.
Normal-eye data 212a are stored in advance in the storage unit 212. The normal-eye data 212a are obtained by collecting measurement data of eyes diagnosed as not suffering from at least a predetermined disease or disorder (normal eyes) and processing those data statistically. The normal-eye data 212a represent standard values of normal-eye measurement data. For example, for a predetermined characteristic value of normal eyes, the normal-eye data 212a include a representative standard value such as a mean or median, together with a value representing a range, such as a standard deviation, variance, maximum, or minimum. Alternatively, the normal-eye data 212a include a threshold for a predetermined characteristic value of normal eyes.
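By way of illustration only (the patent does not specify how the normative statistics are computed; the sample sizes and synthetic values below are assumptions), normal-eye data of this kind could be reduced from many measured profiles as follows:

```python
import numpy as np

# Synthetic cpRNFL thickness profiles (micrometres) from 200 normal
# eyes, each sampled at 64 positions along the first coordinate axis.
rng = np.random.default_rng(0)
n_eyes, n_positions = 200, 64
profiles = 100 + 10 * rng.standard_normal((n_eyes, n_positions))

# Representative standard value and range value per position.
mean_profile = profiles.mean(axis=0)   # e.g. the mean as representative value
sd_profile = profiles.std(axis=0)      # e.g. the standard deviation as range

# A normative band such as mean +/- 2 SD could then serve as the
# allowable range encoded in the standard graph.
lower = mean_profile - 2 * sd_profile
upper = mean_profile + 2 * sd_profile
```

Other reductions (median and percentiles, or a single threshold per position) would fit the same storage scheme.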
The characteristic value to be measured is a value known to change according to the presence or severity of a predetermined disease or disorder, and one that can be derived from data collected by an OCT scan. For example, the characteristic value may be the size of a fundus tissue referred to in diagnosing a fundus disease such as glaucoma. Specific examples include the thickness of a layer tissue consisting of the retinal nerve fiber layer, the ganglion cell layer, the inner plexiform layer, or any combination of these. Further examples of characteristic values include choroidal thickness; the diameter and tilt of the optic nerve head; the diameter, number, and distribution of the pores of the lamina cribrosa; and the thickness distribution and curvature distribution of the cornea. The characteristic value is not limited to these, and any characteristic value can be adopted according to the purpose for which the ophthalmologic apparatus 1 is used.
The normal eye data 212a of the present embodiment includes graph information (a standard graph) representing a standard distribution of the characteristic value. The standard graph is defined in a coordinate system including a first coordinate axis representing positions within a predetermined range of the eye and a second coordinate axis representing the magnitude of the characteristic value. The standard graph of the present embodiment represents the allowable range of the characteristic value for an eye to be judged normal. The predetermined range of the eye is included in the target range of the OCT scan; the predetermined range and the target range may be the same or different. For example, the predetermined range may be a two-dimensional region forming part of the three-dimensional region subjected to a three-dimensional scan. The first coordinate axis includes one or more coordinate axes. When the first coordinate axis consists of a single axis, it represents one-dimensional positions, for example a plurality of positions arranged along a straight line or a curve. When the first coordinate axis consists of two axes, it represents two-dimensional positions, for example a plurality of positions arranged on a plane or a curved surface. The same applies when the first coordinate axis consists of three or more axes.
Fig. 4 shows an example of the standard graph. The standard graph 300 represents a standard distribution of the thickness of the retinal nerve fiber layer around the optic disc (cpRNFL). The horizontal axis corresponds to the first coordinate axis and represents the domain of the standard graph 300. The vertical axis corresponds to the second coordinate axis and represents the thickness of the retinal nerve fiber layer.
The domain represented by the horizontal axis is the circumference of a cylindrical region that surrounds the optic disc and has a predetermined diameter. An example is shown in Fig. 5. In this example, a circular scan path P having a predetermined diameter is applied around the optic disc D, and OCT is performed sequentially at a plurality of positions on the scan path P. The center P_C of the circular scan path P is set, for example, at the centroid of the optic disc D or at the center of an approximating circle (or approximating ellipse) of the optic disc D. The diameter of the scan path P may be a default value. Alternatively, the diameter of the scan path P may be set for each eye E based on a diameter of the optic disc D (disc diameter, cup diameter, rim diameter, etc.). As a specific example, the diameter of the scan path P can be obtained by multiplying the measured diameter of the optic disc D by a predetermined constant.
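That last step can be sketched as follows. The scale factor and the default diameter below are illustrative values; the specification only states that a measured disc diameter is multiplied by a predetermined constant, with a default used otherwise:

```python
def scan_path_diameter(disc_diameter_mm, scale=2.2, default_mm=3.4):
    """Return the circle-scan diameter for one eye.

    disc_diameter_mm: measured optic disc diameter, or None if unavailable.
    Both `scale` and `default_mm` are hypothetical example values.
    """
    if disc_diameter_mm is None:
        return default_mm          # fall back to the default diameter
    return disc_diameter_mm * scale  # per-eye diameter from the measurement

print(scan_path_diameter(1.5))   # per-eye diameter from a measured disc
print(scan_path_diameter(None))  # default diameter
```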
In Fig. 5, "T" denotes the temporal side, "S" the superior side, "N" the nasal side, and "I" the inferior side. The labels "T", "S", "N", and "I" on the horizontal axis in Fig. 4 correspond to these directions. The standard graph 300 shown in Fig. 4 represents the standard distribution of the thickness of the retinal nerve fiber layer in a two-dimensional cross section obtained by cutting open the cylindrical cross section around the optic disc along the z direction at the temporal position "T".
The standard graph 300 of this example may represent the allowable range of the layer thickness at each point on the circumference (each point on the horizontal axis, i.e., on the scan path P). In this case, the standard graph 300 may include, for example, a graph (mean graph) 301 indicating the mean layer thickness at each point on the circumference, a graph (upper limit graph) 302 indicating the upper limit of the allowable range, and a graph (lower limit graph) 303 indicating the lower limit. It is also possible to form the standard graph 300 by measuring a large number of normal eyes, collecting distribution graphs of the thickness of the retinal nerve fiber layer, and analyzing the forms of these thickness distributions (shape, position, magnitude of values, etc.). In this case, the standard graph 300 represents not a standard distribution of layer thickness values but the forms that a normal-eye distribution graph can take (that is, an allowable range of forms of the distribution graph). It is also possible to create the standard graph 300 in consideration of both the layer thickness values and the form of the distribution graph.
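A minimal sketch of deriving the mean/upper-limit/lower-limit graphs from a set of normal-eye thickness profiles follows. The mean ± k·SD band is one simple convention for the allowable range; the specification does not mandate this particular form:

```python
from statistics import mean, stdev

def build_standard_graph(profiles, k=2.0):
    """Build mean, upper-limit, and lower-limit graphs.

    profiles: one thickness profile per normal eye, all sampled at the
    same M positions on the scan circle. Returns three lists of length M
    (graphs 301, 302, 303 in the figure's terms).
    """
    m_graph, up_graph, low_graph = [], [], []
    for samples in zip(*profiles):  # values of all eyes at one position
        m, s = mean(samples), stdev(samples)
        m_graph.append(m)
        up_graph.append(m + k * s)
        low_graph.append(m - k * s)
    return m_graph, up_graph, low_graph

# Three hypothetical normal-eye profiles sampled at two positions
m, up, low = build_standard_graph([[100, 120], [104, 124], [96, 116]])
```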
<Image forming unit 220>
The image forming unit 220 forms image data of a cross-sectional image of the fundus Ef or the anterior segment based on the sampling results of the detection signal input from the DAQ 130. As in conventional swept source OCT, this processing includes signal processing such as noise removal (noise reduction), filtering, and FFT (Fast Fourier Transform). The image data formed by the image forming unit 220 is a data set including a group of image data (a group of A-scan image data) formed by imaging the reflection intensity profiles along a plurality of A-lines (lines in the z direction) arranged along the scan line.
The image forming unit 220 includes, for example, at least one of a processor and a dedicated circuit board. In this specification, "image data" and an "image" based on it may be identified with each other. Likewise, a site of the eye E and an image representing it may be identified with each other.
<Data processing unit 230>
The data processing unit 230 applies image processing and analysis processing to the images formed by the image forming unit 220. For example, the data processing unit 230 executes correction processing such as luminance correction and dispersion correction of images. The data processing unit 230 also applies image processing and analysis processing to the images obtained by the fundus camera unit 2 (fundus images, anterior segment images, etc.). The data processing unit 230 includes, for example, at least one of a processor and a dedicated circuit board. The data processing unit 230 includes a three-dimensional image forming unit 231, a partial image specifying unit 232, a characteristic value calculating unit 233, a distribution graph creating unit 234, and a graph comparing unit 235.
<Three-dimensional image forming unit 231>
When a three-dimensional scan (raster scan, etc.) of the eye E is performed, the image forming unit 220 forms a two-dimensional cross-sectional image (B-scan image) corresponding to each scan line. The three-dimensional image forming unit 231 forms a three-dimensional image based on these two-dimensional cross-sectional images. A three-dimensional image means image data in which pixel positions are defined by a three-dimensional coordinate system. Examples of three-dimensional images include stack data and volume data.
Stack data is image data obtained by arranging a plurality of cross-sectional images corresponding to a plurality of scan lines three-dimensionally, based on the positional relationship of the scan lines. That is, stack data is image data obtained by expressing a plurality of cross-sectional images, originally defined in individual two-dimensional coordinate systems, in a single three-dimensional coordinate system (that is, by embedding them in one three-dimensional space).
Volume data (voxel data) is formed by executing known image processing, such as interpolation to fill in pixels between the cross-sectional images included in the stack data, thereby forming voxels. A pseudo three-dimensional image (rendered image) is formed by applying rendering processing (volume rendering, MIP (Maximum Intensity Projection), etc.) to the volume data. A two-dimensional cross-sectional image can be formed from the group of pixels lying in an arbitrary cross section of the volume data; this image processing is called multi-planar reconstruction (MPR).
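The interpolation from stack data toward voxel data can be sketched minimally as a linear blend between adjacent B-scans. The array layout (each B-scan as rows of pixel values, ordered by y) and the interpolation scheme are simplifying assumptions; an actual device may use more elaborate processing:

```python
def stack_to_volume(b_scans, slices_between=1):
    """Fill in voxels between adjacent B-scans by linear interpolation.

    b_scans: list of 2-D images (lists of rows of pixel values), ordered
    by their y position. Inserts `slices_between` interpolated slices
    between each pair of neighbors and returns the resulting volume.
    """
    volume = []
    for a, b in zip(b_scans, b_scans[1:]):
        volume.append(a)
        for i in range(1, slices_between + 1):
            t = i / (slices_between + 1)  # blend weight toward slice b
            volume.append([[(1 - t) * pa + t * pb for pa, pb in zip(ra, rb)]
                           for ra, rb in zip(a, b)])
    volume.append(b_scans[-1])
    return volume
```

For two 1x2 B-scans and `slices_between=1`, the result has three slices, with the middle one the average of its neighbors.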
<Partial image specifying unit 232>
When a three-dimensional image of the eye E has been formed by the three-dimensional image forming unit 231, the partial image specifying unit 232 analyzes this three-dimensional image to specify the partial image in it that is to be compared with the normal eye data 212a. As described above, the normal eye data 212a includes the standard graph. The partial image specifying unit 232 specifies the region in the three-dimensional image corresponding to the domain indicated by the first coordinate axis of the standard graph (the predetermined range of the eye described above).
A specific example will be described. When the standard graph 300 shown in Fig. 4 is applied, an OCT scan of a three-dimensional region including the optic disc D and its periphery is executed to form a three-dimensional image. By analyzing this three-dimensional image, the partial image specifying unit 232 specifies the partial image corresponding to the scan path P shown in Fig. 5. A specific example of this processing is shown in Fig. 6, in which the symbol V denotes the three-dimensional image. The partial image specifying unit 232 specifies the optic disc D by analyzing the three-dimensional image V, and specifies the scan path P based on the position and diameter of the specified optic disc D. The scan path P is a circle that surrounds the optic disc D and has a predetermined diameter. The partial image corresponding to the scan path P is a cylindrical cross-sectional image G whose circumference is the scan path P and whose axial direction is the z direction.
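Extraction of such a cylindrical partial image can be sketched as nearest-neighbor sampling of A-lines along the circle. The volume layout assumed here (volume[y][x] holding the A-line, i.e. the z-profile, at pixel (x, y)) is illustrative, not taken from the specification:

```python
import math

def circle_positions(center_xy, radius, m):
    """Return M (x, y) sample positions on the circular scan path P."""
    cx, cy = center_xy
    return [(cx + radius * math.cos(2 * math.pi * i / m),
             cy + radius * math.sin(2 * math.pi * i / m))
            for i in range(m)]

def cylindrical_section(volume, center_xy, radius, m):
    """Assemble the cylindrical cross-sectional image G.

    For each position on the circle, pick the nearest A-line from the
    3-D image V (nearest-neighbor; a real device might interpolate).
    """
    return [volume[round(y)][round(x)]
            for x, y in circle_positions(center_xy, radius, m)]
```

Usage: with a 5x5 grid of A-lines, a circle of radius 2 about (2, 2) sampled at 4 points picks the A-lines at (4, 2), (2, 4), (0, 2), and (2, 0).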
<Characteristic value calculating unit 233>
The characteristic value calculating unit 233 analyzes the partial image specified by the partial image specifying unit 232 to calculate the characteristic value at each of a plurality of positions within the target site. In this example, the characteristic value calculating unit 233 determines the thickness distribution of the retinal nerve fiber layer by analyzing the cylindrical cross-sectional image G. That is, the thickness of the retinal nerve fiber layer is measured at a plurality of positions on the scan path P.
To do so, the characteristic value calculating unit 233 first cuts open the cylindrical cross-sectional image G along the z direction at the temporal position "T". A two-dimensional cross-sectional image G_A as shown in Fig. 7 is thereby obtained. Next, the characteristic value calculating unit 233 performs segmentation of the two-dimensional cross-sectional image G_A to specify the image region corresponding to the retinal nerve fiber layer (the nerve fiber layer image) L. As in conventional techniques, the segmentation is executed based on changes in pixel values. For example, the segmentation is executed so that a characteristic value is specified among the values of the pixel group arranged along each A-line, and a pixel having the specified value is selected as a pixel of a predetermined layer (or of its boundary). In the segmentation, the characteristic value calculating unit 233 may obtain an approximating curve of a layer boundary (a linear, logarithmic, polynomial, power, exponential, or moving-average approximating curve, etc.).
Further, for each position p_m (m = 1, 2, ..., M) on the scan path P, the characteristic value calculating unit 233 obtains the thickness W(p_m) of the nerve fiber layer image L along the A-line passing through the position p_m. Thereby, the thicknesses W(p_1), W(p_2), ..., W(p_M) of the retinal nerve fiber layer at the positions p_1, p_2, ..., p_M on the scan path P are obtained. That is, the thickness distribution W(P) of the retinal nerve fiber layer along the scan path P is obtained.
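The per-A-line thickness computation can be sketched as follows, assuming the segmentation has yielded the z pixel indices of the inner and outer layer boundaries at each position; the axial pixel spacing `dz_um` is a hypothetical acquisition parameter:

```python
def layer_thickness(upper_boundary, lower_boundary, dz_um):
    """Thickness W(p_m) at each scan-path position from segmented boundaries.

    upper_boundary[m], lower_boundary[m]: z pixel indices of the inner
    and outer boundary of the layer on the A-line through p_m.
    dz_um: axial pixel spacing in micrometers (assumed value).
    """
    return [(lo - up) * dz_um
            for up, lo in zip(upper_boundary, lower_boundary)]

W = layer_thickness([10, 12, 11], [40, 45, 38], dz_um=3.5)
print(W)  # thickness distribution W(P) in micrometers
```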
<Distribution graph creating unit 234>
The distribution graph creating unit 234 creates a distribution graph based on the characteristic values calculated for the plurality of positions by the characteristic value calculating unit 233. In this example, the distribution graph creating unit 234 plots the correspondences (p_m, W(p_m)) obtained by the characteristic value calculating unit 233 in a coordinate system whose horizontal axis (first coordinate axis) represents positions on the scan path P and whose vertical axis (second coordinate axis) represents the thickness W of the retinal nerve fiber layer. The distribution graph creating unit 234 can connect the plotted points with a polyline or a curve. Through such processing, the distribution graph H shown in Fig. 8 is created.
<Graph comparing unit 235>
The graph comparing unit 235 compares the distribution graph H created by the distribution graph creating unit 234 with the standard graph 300 stored in the storage unit 212. In this example, the graph comparing unit 235 determines whether the distribution graph H is contained in the allowable range represented by the standard graph 300. An example of the comparison between the distribution graph H and the standard graph 300 is shown in Fig. 9. In the example of Fig. 9, the distribution graph H is contained in the allowable range indicated by the standard graph 300 (the region sandwiched between the upper limit graph 302 and the lower limit graph 303 shown in Fig. 4). Therefore, the result of this comparative diagnosis is determined to be normal.
The method of comparing the graphs is arbitrary. For example, the apparatus may be configured to determine whether the entire distribution graph is contained in the allowable range represented by the standard graph, to determine whether at least part of the distribution graph is contained in that allowable range, or to determine whether the proportion of the distribution graph contained in that allowable range is equal to or greater than a threshold.
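The proportion-based criterion, for instance, can be sketched as follows (the threshold semantics below are illustrative; with a threshold of 1.0 it reduces to the "entire graph contained" criterion):

```python
def fraction_within(distribution, lower_graph, upper_graph):
    """Fraction of distribution-graph points inside the allowable band."""
    inside = sum(low <= w <= up
                 for w, low, up in zip(distribution, lower_graph, upper_graph))
    return inside / len(distribution)

def judge_normal(distribution, lower_graph, upper_graph, ratio_threshold=1.0):
    """True ('normal') if the contained fraction reaches the threshold."""
    return fraction_within(distribution, lower_graph, upper_graph) >= ratio_threshold
```

With two points of which one lies inside the band, `fraction_within` returns 0.5, so the judgment passes at a 0.5 threshold and fails at 1.0.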
The apparatus can also be configured to determine whether the distribution graph has features that the standard graph has. For example, the circumpapillary retinal nerve fiber layer thickness (cpRNFLT) of a normal eye decreases in the order of inferior, superior, nasal, and temporal (the ISNT rule), and a bump is observed in the graph near each of "I" and "S" (the so-called double hump). It is known that the double-hump shape collapses in glaucomatous eyes and the like. Therefore, the standard graph can be created in consideration of the allowable range of the interval between the two bumps, the allowable range of the displacement of each bump, the allowable range of the height difference between the bumps and other parts, and so on.
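The simplest of these feature checks, the ISNT ordering, can be sketched as follows; the quadrant keys and the example thickness values are illustrative, and a full standard graph would additionally bound the positions and heights of the two humps:

```python
def isnt_rule_holds(sector_means):
    """Check the ISNT ordering of mean cpRNFL thickness.

    sector_means: dict with the mean thickness per quadrant under keys
    'I', 'S', 'N', 'T'. Returns True when
    inferior >= superior >= nasal >= temporal.
    """
    return (sector_means['I'] >= sector_means['S']
            >= sector_means['N'] >= sector_means['T'])

print(isnt_rule_holds({'I': 130.0, 'S': 125.0, 'N': 85.0, 'T': 70.0}))  # True
```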
Both graphs can also be treated as images, and the image correlation between them can be obtained. The value of the image correlation is used as the deviation of the distribution graph from the standard graph, and serves as material for judging whether an abnormality exists in the eye E.
Both graphs can also be regarded as thickness distribution functions in the space spanned by the layer thickness (W) and the position on the scan path (P) (the W-P space), and the correlation between these distribution functions can be obtained. The correlation value between the distribution functions is obtained as a cross-correlation of the graph of the eye E with the standard graph, and serves as material for judging the presence and degree of an abnormality based on a comparison with a standard normal eye.
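As a simple stand-in for such a correlation measure (the specification leaves the exact form open), the zero-lag Pearson correlation between the two thickness profiles can be computed as follows; values near 1 indicate the eye's profile follows the standard shape, lower values suggest deviation:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two thickness profiles of equal length."""
    mx, my = mean(xs), mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5
```

A full cross-correlation as a function of angular shift along P could be built by evaluating this at each cyclic rotation of one profile.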
<User interface 240>
The user interface 240 includes a display unit 241 and an operation unit 242. The display unit 241 includes the display device 3. The main control unit 211 causes the display unit 241 to display information obtained by the data processing unit 230. For example, the result of the comparison between the standard graph and the distribution graph, and information indicating whether an abnormality has been found in the eye E, can be displayed. The operation unit 242 includes various operation devices and input devices. The user interface 240 may include a device in which a display function and an operation function are integrated, such as a touch panel. It is also possible to construct an embodiment that does not include at least part of the user interface 240. For example, the display device may be an external device connected to the ophthalmologic apparatus.
<Operation>
The operation of the ophthalmologic apparatus 1 will be described. An example of the operation is shown in Fig. 10.
(S1: Perform a three-dimensional OCT scan)
First, OCT of the eye E is performed. In this example, data is collected by scanning a three-dimensional region including the optic disc D shown in Fig. 5 and its periphery (including at least the scan path P). This three-dimensional scan is, for example, a raster scan in which a plurality of linear scan lines are arranged parallel to one another. Each scan line is, for example, a line segment extending in the x direction, and the scan lines are arranged at equal intervals in the y direction.
(S2: Form a plurality of B-scan images)
The image forming unit 220 forms a plurality of B-scan images corresponding to the plurality of scan lines based on the data collected in step S1. A B-scan image in this example is, for example, a two-dimensional cross-sectional image representing an xz cross section.
(S3: Form a three-dimensional image)
The three-dimensional image forming unit 231 forms a three-dimensional image (the three-dimensional image V shown in Fig. 6) based on the plurality of B-scan images formed in step S2.
(S4: Specify a partial image)
The partial image specifying unit 232 analyzes the three-dimensional image V formed in step S3 to specify the cylindrical cross-sectional image G (a partial image of the three-dimensional image V) that surrounds the optic disc D and has a predetermined diameter.
(S5: Calculate characteristic values)
The characteristic value calculating unit 233 analyzes the cylindrical cross-sectional image G specified in step S4 to calculate the thickness (characteristic value) of the retinal nerve fiber layer at a plurality of positions on the scan path P. Specifically, the characteristic value calculating unit 233 forms the two-dimensional cross-sectional image G_A shown in Fig. 7 by cutting open the cylindrical cross-sectional image G along the z direction at the temporal position "T", and performs segmentation of the two-dimensional cross-sectional image G_A to specify the nerve fiber layer image L. Further, for each position p_m on the scan path P, the characteristic value calculating unit 233 obtains the thickness W(p_m) of the nerve fiber layer image L along the A-line passing through the position p_m. Thereby, the thickness distribution W(P) = {W(p_1), W(p_2), ..., W(p_M)} of the retinal nerve fiber layer along the scan path P is obtained.
(S6: Create a distribution graph)
The distribution graph creating unit 234 plots the thickness distribution W(P) = {W(p_1), W(p_2), ..., W(p_M)} of the retinal nerve fiber layer obtained in step S5 in a coordinate system whose horizontal axis represents positions on the scan path P and whose vertical axis represents the thickness W of the retinal nerve fiber layer. Thereby, the distribution graph H shown in Fig. 8 is obtained.
(S7: Compare the distribution graph with the standard graph)
The graph comparing unit 235 compares the distribution graph H created in step S6 with the standard graph 300 (see Fig. 4) stored in advance in the storage unit 212, thereby determining the presence and degree of a disease (glaucoma, etc.).
(S8: Output the comparison result)
The main control unit 211 causes the display unit 241 to display the comparison result obtained in step S7 (presence of a disease, its degree, etc.). The output method of the comparison result is not limited to display; it may be, for example, transmission to an external computer or storage device, recording on a recording medium, or printing on a print medium.
<Modifications>
Several modifications of the above embodiment will be described. Unless otherwise noted, in the following descriptions of the modifications, the same reference numerals are used for elements having the same or similar functions as in the above embodiment.
Fig. 11 shows part of an ophthalmologic apparatus according to a modification. The data processing unit 230A shown in Fig. 11 is applied in place of the data processing unit 230 of the above embodiment (see Fig. 3). That is, the ophthalmologic apparatus of this modification includes the configuration shown in Figs. 1 and 2, the elements shown in Fig. 3 other than the data processing unit 230, and the data processing unit 230A shown in Fig. 11.
The data processing unit 230A includes a three-dimensional image forming unit 231A, a characteristic value calculating unit 232A, a characteristic value selecting unit 233A, a distribution graph creating unit 234A, and a graph comparing unit 235A. The three-dimensional image forming unit 231A is configured to execute the same processing as the three-dimensional image forming unit 231 of the above embodiment.
The characteristic value calculating unit 232A analyzes the three-dimensional image formed by the three-dimensional image forming unit 231A to calculate the characteristic value at each of a plurality of positions in the three-dimensional image. For example, the characteristic value calculating unit 232A calculates the thickness of the retinal nerve fiber layer for every A-line constituting the three-dimensional image. Thereby, the thickness distribution of the retinal nerve fiber layer in the three-dimensional region subjected to the three-dimensional OCT scan is obtained.
The characteristic value selecting unit 233A analyzes the three-dimensional image to select, from among all the characteristic values obtained by the characteristic value calculating unit 232A, the characteristic values corresponding to the target site. To do so, the characteristic value selecting unit 233A specifies, for example, a partial image in the three-dimensional image corresponding to the target site. This processing is executed, for example, in the same manner as by the partial image specifying unit 232 of the above embodiment. Thereby, for example, the characteristic values (thicknesses of the retinal nerve fiber layer) at a plurality of positions in the region represented by the cylindrical cross-sectional image G shown in Fig. 6 are selected.
Based on the group of characteristic values selected by the characteristic value selecting unit 233A, the distribution graph creating unit 234A creates a distribution graph in which the distribution of the characteristic values in the target site is expressed in a predetermined coordinate system. This processing is executed in the same manner as by the distribution graph creating unit 234 of the above embodiment. Thereby, for example, a distribution graph H as shown in Fig. 8 is obtained.
The graph comparing unit 235A compares the distribution graph created by the distribution graph creating unit 234A with the standard graph stored in advance in the storage unit 212. This processing is executed in the same manner as by the graph comparing unit 235 of the above embodiment.
Thus, the combination and order of the series of processes based on a three-dimensional image obtained by a three-dimensional OCT scan may be arbitrary. It is also possible to specify a partial image (a nerve fiber layer image, etc.) or to calculate characteristic values before forming a three-dimensional image. For example, the apparatus can be configured to analyze the B-scan images used to form the three-dimensional image in order to specify a partial image or to calculate characteristic values.
 It is also unnecessary to perform a three-dimensional OCT scan. For example, only the target site of the comparative diagnosis may be scanned with OCT. In the case shown in Fig. 5, for example, a circle scan along the scan path P can be applied. In such a case, the formation of a three-dimensional image is unnecessary. Further, when only the target site is scanned with OCT, the specification of a partial image is also unnecessary.
<Actions and Effects>
 The actions and effects of the ophthalmologic apparatus according to the embodiment and its modifications are described below, along with some further modifications.
 The ophthalmologic apparatus includes a storage unit, a data collection unit, a processing unit, a comparison unit, and an output unit. The storage unit stores in advance a standard graph in which a standard distribution of a characteristic value is expressed in a coordinate system including a first coordinate axis representing positions in a predetermined range of an eye and a second coordinate axis representing the magnitude of the predetermined characteristic value of the eye. The storage unit 212 of the above embodiment is an example. The data collection unit collects, using OCT, data of a region of the subject's eye including a target site corresponding to the predetermined range. In the above embodiment, the elements in the OCT unit 100 and those elements of the fundus camera unit 2 arranged in the measurement arm are included in the data collection unit. The processing unit processes the collected data to create a distribution graph in which the distribution of the characteristic value in the target site is expressed in the coordinate system. In the above embodiment, the image forming unit 220 and those elements of the data processing unit 230 that contribute to the creation of the distribution graph (at least the characteristic value calculation unit 233 and the distribution graph creation unit 234) are included in the processing unit. The comparison unit compares the distribution graph created by the processing unit with the standard graph. The comparison unit 235 of the above embodiment is an example. The output unit outputs the result of comparing the distribution graph with the standard graph. The main control unit 211 and the display unit 241 of the above embodiment are examples.
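The five-unit structure just described can be sketched as a small pipeline. The class and parameter names below are hypothetical; the patent names the units and their data flow, not an API:

```python
class ComparativeDiagnosisPipeline:
    """Sketch of the unit structure: storage, data collection,
    processing, comparison, and output (hypothetical class)."""

    def __init__(self, standard_graph, collect, process, compare, output):
        self.standard_graph = standard_graph  # storage unit: pre-stored standard graph
        self.collect = collect                # data collection unit (OCT scan)
        self.process = process                # processing unit -> distribution graph
        self.compare = compare                # comparison unit
        self.output = output                  # output unit (e.g., display)

    def run(self, target_site):
        data = self.collect(target_site)               # scan the target site
        distribution_graph = self.process(data)        # build the distribution graph
        result = self.compare(distribution_graph,
                              self.standard_graph)     # compare with the standard
        return self.output(result)                     # present the result

# Wiring with stand-in callables to show the data flow only.
pipe = ComparativeDiagnosisPipeline(
    standard_graph=[1.0, 2.0],
    collect=lambda site: [1.0, 2.5],
    process=lambda data: data,
    compare=lambda d, s: max(abs(a - b) for a, b in zip(d, s)),
    output=lambda r: "deviation %.1f" % r,
)
report = pipe.run("optic disc")
```

Each stand-in would be replaced by the corresponding concrete unit (OCT scanner interface, graph builder, graph comparator, display driver) in an actual apparatus.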
 The processing unit may include an image forming unit, a characteristic value calculation unit, and a distribution graph creation unit. The image forming unit is configured to form an image based on the data collected by the data collection unit. This image is, for example, a two-dimensional image (such as a B-scan image) or a three-dimensional image. The image forming unit 220 (and the three-dimensional image forming unit 231) of the above embodiment is an example. The characteristic value calculation unit is configured to calculate the characteristic value at each of a plurality of positions within the target site of the comparative diagnosis by analyzing the image formed by the image forming unit. The characteristic value calculation unit 233 of the above embodiment is an example. The distribution graph creation unit is configured to create the distribution graph based on the characteristic values calculated for the plurality of positions (that is, the distribution of the characteristic value in the target site). The distribution graph creation unit 234 of the above embodiment is an example.
 The domain of the standard graph (the predetermined range) may be a cylindrical range that surrounds the optic nerve head and has a predetermined diameter. In that case, the characteristic value calculation unit may be configured to calculate, as the characteristic value, the thickness of a layer tissue including the retinal nerve fiber layer. Further, the distribution graph creation unit may be configured to create the distribution graph by expressing the distribution of the thickness of the layer tissue in a coordinate system in which a plurality of circumferential positions in the cylindrical range are represented along the first coordinate axis and the thickness of the layer tissue is represented along the second coordinate axis.
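A minimal sketch of sampling such a cylindrical range is shown below: given a two-dimensional layer-thickness map, thickness values are read at evenly spaced circumferential positions on a circle of the given radius around the optic nerve head. All names, the pixel-grid representation, and nearest-pixel sampling are assumptions for illustration:

```python
import math

def circumferential_thickness(thickness_map, center, radius, n=360):
    """Sample a layer-thickness map at n circumferential positions on a
    circle of the given radius around the optic nerve head center.

    thickness_map : 2D list [row][col] of layer thickness values
    center        : (row, col) of the optic nerve head
    radius        : circle radius in pixels
    """
    rows, cols = len(thickness_map), len(thickness_map[0])
    profile = []
    for k in range(n):
        theta = 2.0 * math.pi * k / n
        # Nearest-pixel sampling, clamped to the map bounds.
        r = min(max(int(round(center[0] + radius * math.sin(theta))), 0), rows - 1)
        c = min(max(int(round(center[1] + radius * math.cos(theta))), 0), cols - 1)
        profile.append((360.0 * k / n, thickness_map[r][c]))
    return profile
```

The resulting (angle, thickness) profile is precisely the kind of curve that would be plotted against the standard graph along the first and second coordinate axes.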
 The data collection unit may be configured to scan a three-dimensional region of the subject's eye that includes the target site corresponding to the domain of the standard graph (the predetermined range). In that case, the following configuration is possible. The image forming unit forms a three-dimensional image based on the data collected by scanning the three-dimensional region. The processing unit includes a first specifying unit that specifies a partial image corresponding to the target site by analyzing the formed three-dimensional image. The partial image specifying unit 232 of the above embodiment is an example. The characteristic value calculation unit calculates the characteristic value by analyzing the specified partial image. The distribution graph creation unit creates the distribution graph based on the characteristic values acquired for the partial image.
 As another example in which the data collection unit scans a three-dimensional region, the following configuration can be adopted. The image forming unit forms a three-dimensional image based on the data collected by scanning the three-dimensional region. The characteristic value calculation unit calculates the characteristic value by analyzing the formed three-dimensional image. The processing unit includes a second specifying unit that specifies, by analyzing the three-dimensional image, the characteristic values corresponding to the target site among the characteristic values obtained by the characteristic value calculation unit. The characteristic value selection unit 233A of the above modification is an example. The distribution graph creation unit creates the distribution graph based on the specified characteristic values.
 According to the embodiments and modifications described above, a comparative diagnosis of ophthalmic disease (diagnosis of whether the disease is present, its degree of progression, and so on) can be performed based on the degree to which the distribution graph matches the standard graph (the degree of deviation). Accordingly, there is no need to perform registration between the layer thickness distribution of the subject's eye and the standard layer thickness distribution, so the diagnostic result is not affected by the precision or accuracy of registration. In addition, compared with the conventional technique of comparing absolute values such as tissue thickness with standard values, individual differences have less influence on the diagnostic result. Therefore, according to the embodiments and modifications, the accuracy of comparative diagnosis using standard data can be improved.
 In other words, unlike the conventional technique, which determines locally whether a characteristic value of the subject's eye is normal, the embodiments and modifications are configured to assess the distribution of the characteristic globally. When data cannot be acquired locally because of the presence of blood vessels, or when accurate data cannot be acquired locally because of measurement noise or the like, the conventional technique cannot perform a comparative diagnosis at that site; according to the embodiments and modifications, however, a result can still be obtained through a global comparative diagnosis.
 Further, according to the embodiments and modifications, it is also possible to detect abnormalities that are difficult to grasp with the conventional technique, such as the collapse of the double-hump shape seen in glaucomatous eyes. It is known that, when the subject's eye is myopic (highly myopic), the distance between the two humps (peaks) tends to widen. According to the embodiments and modifications, a comparative diagnosis can be performed with reference to the distance between the humps, or conversely, whether the subject's eye is myopic can be determined from the distance between the humps.
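The hump-distance idea can be sketched as a simple peak search on the circular thickness profile. The functions below (hypothetical names; plain local-maximum detection chosen for illustration, not prescribed by the patent) locate the humps and measure their angular separation, wrapping around 360 degrees:

```python
def hump_positions(angles, thicknesses):
    """Find local maxima (humps) in a circular thickness profile and
    return their angular positions. The profile wraps around 360 deg."""
    n = len(thicknesses)
    peaks = []
    for i in range(n):
        prev = thicknesses[(i - 1) % n]
        nxt = thicknesses[(i + 1) % n]
        if thicknesses[i] > prev and thicknesses[i] > nxt:
            peaks.append(angles[i])
    return peaks

def hump_distance(angles, thicknesses):
    """Angular distance between the two humps of a double-hump profile
    (typically near the superior and inferior quadrants)."""
    peaks = hump_positions(angles, thicknesses)
    if len(peaks) != 2:
        return None  # profile does not show a clean double hump
    d = abs(peaks[0] - peaks[1])
    return min(d, 360.0 - d)  # shorter way around the circle
```

A widened hump distance, or the absence of two clear humps, could then feed into the comparative diagnosis or the myopia check mentioned above. On noisy clinical profiles, a smoothing step or a prominence criterion would be needed before the local-maximum test.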
 The conventional technique of locally comparing a characteristic value with a standard value also has its advantages. It is therefore possible to combine the conventional technique with the techniques of the above embodiments and modifications. In that case, the storage unit (for example, the storage unit 212) stores in advance an allowable range of the characteristic value in addition to the standard graph. The allowable range of the characteristic value is the normal-eye data used in the conventional comparative diagnosis; for example, it is map information representing the allowable ranges of the characteristic value at a plurality of positions of the eye (the ranges of the characteristic value in normal eyes).
 The conventional comparative diagnosis and the comparative diagnosis of the embodiments and the like can be performed sequentially. For example, the conventional comparative diagnosis can be performed first, and the comparative diagnosis of the embodiments and the like can then be performed depending on its result. For example, when the characteristic values obtained by the processing unit (or at least some of them) fall within the allowable range (that is, when they are determined to be normal), the comparison unit can be configured to perform the comparative diagnosis of the embodiments (the comparison between the distribution graph and the standard graph). The conventional comparative diagnosis is executed by the data processing unit 230 and the like in the same manner as before. On the other hand, when the characteristic values obtained by the processing unit (or at least some of them) fall outside the allowable range (that is, when they are determined to be abnormal), the main control unit 211 can cause the display unit 241 or the like to output information indicating that an abnormality exists (abnormality information).
 A specific example will be described. First, the data processing unit 230 performs the conventional comparative diagnosis on the thickness of the retinal nerve fiber layer and determines whether there is a site whose thickness falls outside the allowable range (an abnormal site). When it is determined that no abnormal site exists, the graph comparison unit 235 compares the distribution graph with the standard graph. Performing a two-stage comparative diagnosis in this way reduces the possibility of false negatives. On the other hand, when it is determined that an abnormal site exists, the comparison between the distribution graph and the standard graph is omitted.
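The two-stage flow just described can be sketched as follows. The function name, the uniform allowable range, and the RMS metric for the second stage are illustrative assumptions; the patent only specifies the ordering of the local check and the graph comparison:

```python
def two_stage_diagnosis(values, allowed_min, allowed_max,
                        distribution, standard, rms_threshold):
    """Stage 1: conventional local range check on each characteristic
    value. Stage 2: global graph comparison, performed only when every
    value is locally normal (hypothetical metric and threshold)."""
    # Stage 1: any value outside the allowable range short-circuits.
    if any(v < allowed_min or v > allowed_max for v in values):
        return "abnormal site detected"
    # Stage 2: compare the distribution graph with the standard graph.
    rms = (sum((d - s) ** 2 for d, s in zip(distribution, standard))
           / len(standard)) ** 0.5
    return "abnormal" if rms > rms_threshold else "normal"
```

Because stage 2 runs only when stage 1 passes, a shape anomaly (such as a collapsed double hump) can still be caught even when every individual thickness is within its normal range, which is how the combined scheme reduces false negatives.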
 The order in which these comparative diagnoses are performed may be reversed. For example, whether to perform the conventional comparative diagnosis can be decided depending on the result of the comparative diagnosis of the embodiments or modifications. Alternatively, the result of the comparative diagnosis of the embodiments or modifications and the result of the conventional comparative diagnosis may be considered together and a combined result output. For example, the apparatus can be configured to output the result "abnormal" when either or both comparative diagnoses determine an abnormality, and to output the result "normal" when both comparative diagnoses determine normality. The result may also be output in consideration of the degree of abnormality found by the two comparative diagnoses.
 The ophthalmologic apparatus of the above embodiments and modifications includes a configuration for performing OCT (the data collection unit), but embodiments are not limited to this. For example, an embodiment may be an ophthalmic modality other than OCT, a computer, or the like. In this case, data obtained using OCT is input from the outside. Such an ophthalmologic apparatus includes a storage unit, a reception unit, a processing unit, a comparison unit, and an output unit.
 As in the above embodiments, the storage unit stores in advance a standard graph in which a standard distribution of a characteristic value is expressed in a coordinate system including a first coordinate axis representing positions in a predetermined range of an eye and a second coordinate axis representing the magnitude of the predetermined characteristic value of the eye.
 The reception unit receives data collected by scanning, with OCT, a region of the subject's eye including the target site corresponding to the predetermined range. The reception unit is, for example, a communication interface in the arithmetic control unit 200 of the above embodiment. The communication interface receives data from an external apparatus via a communication line such as the Internet or a LAN. The external apparatus is, for example, a computer on the communication line (a server, a storage device, or the like). The reception unit is not limited to a communication interface; for example, it may include a drive device that reads data recorded on a recording medium.
 The processing unit processes the data received by the reception unit to create a distribution graph in which the distribution of the characteristic value in the target site is expressed in the coordinate system. The comparison unit compares the created distribution graph with the standard graph. The output unit outputs the result of comparing the distribution graph with the standard graph. Each of the processing unit, the comparison unit, and the output unit may have the same configuration as in the above embodiments.
 Such an embodiment, too, can improve the accuracy of comparative diagnosis using standard data, as with the above embodiments and modifications.
 The embodiments described above are merely examples of the present invention. A person who intends to implement the present invention can make any modifications (omission, substitution, addition, etc.) within the scope of the gist of the present invention.
1 Ophthalmologic apparatus
100 OCT unit
212 Storage unit
212a Normal eye data
220 Image forming unit
230 Data processing unit
234 Distribution graph creation unit
235 Graph comparison unit

Claims (8)

  1.  An ophthalmologic apparatus comprising:
     a storage unit that stores in advance a standard graph in which a standard distribution of a characteristic value is expressed in a coordinate system including a first coordinate axis representing positions in a predetermined range of an eye and a second coordinate axis representing the magnitude of the predetermined characteristic value of the eye;
     a data collection unit that collects, using optical coherence tomography, data of a region of a subject's eye including a target site corresponding to the predetermined range;
     a processing unit that processes the collected data to create a distribution graph in which the distribution of the characteristic value in the target site is expressed in the coordinate system;
     a comparison unit that compares the created distribution graph with the standard graph; and
     an output unit that outputs a result of comparing the distribution graph with the standard graph.
  2.  The ophthalmologic apparatus according to claim 1, wherein the processing unit includes:
     an image forming unit that forms an image based on the data collected by the data collection unit;
     a characteristic value calculation unit that calculates the characteristic value at each of a plurality of positions within the target site by analyzing the formed image; and
     a distribution graph creation unit that creates the distribution graph based on the characteristic values calculated for the plurality of positions.
  3.  The ophthalmologic apparatus according to claim 2, wherein:
     the predetermined range is a cylindrical range that surrounds the optic nerve head and has a predetermined diameter;
     the characteristic value calculation unit calculates, as the characteristic value, the thickness of a layer tissue including the retinal nerve fiber layer; and
     the distribution graph creation unit creates the distribution graph by expressing the distribution of the thickness of the layer tissue in a coordinate system in which a plurality of circumferential positions in the cylindrical range are represented along the first coordinate axis and the thickness of the layer tissue is represented along the second coordinate axis.
  4.  The ophthalmologic apparatus according to claim 2 or 3, wherein:
     the data collection unit scans a three-dimensional region of the subject's eye including the target site;
     the image forming unit forms a three-dimensional image based on the data collected by scanning the three-dimensional region;
     the processing unit includes a first specifying unit that specifies a partial image corresponding to the target site by analyzing the formed three-dimensional image;
     the characteristic value calculation unit calculates the characteristic value by analyzing the specified partial image; and
     the distribution graph creation unit creates the distribution graph based on the characteristic values acquired for the partial image.
  5.  The ophthalmologic apparatus according to claim 2 or 3, wherein:
     the data collection unit scans a three-dimensional region of the subject's eye including the target site;
     the image forming unit forms a three-dimensional image based on the data collected by scanning the three-dimensional region;
     the characteristic value calculation unit calculates the characteristic value by analyzing the formed three-dimensional image;
     the processing unit includes a second specifying unit that specifies, by analyzing the three-dimensional image, the characteristic values corresponding to the target site among the characteristic values obtained by the characteristic value calculation unit; and
     the distribution graph creation unit creates the distribution graph based on the specified characteristic values.
  6.  The ophthalmologic apparatus according to any one of claims 1 to 5, wherein:
     the storage unit stores an allowable range of the characteristic value in advance; and
     when the characteristic values obtained by the processing unit fall within the allowable range, the comparison unit compares the distribution graph with the standard graph.
  7.  The ophthalmologic apparatus according to claim 6, wherein, when the characteristic values obtained by the processing unit do not fall within the allowable range, the output unit outputs abnormality information.
  8.  An ophthalmologic apparatus comprising:
     a storage unit that stores in advance a standard graph in which a standard distribution of a characteristic value is expressed in a coordinate system including a first coordinate axis representing positions in a predetermined range of an eye and a second coordinate axis representing the magnitude of the predetermined characteristic value of the eye;
     a reception unit that receives data collected by scanning, with optical coherence tomography, a region of a subject's eye including a target site corresponding to the predetermined range;
     a processing unit that processes the received data to create a distribution graph in which the distribution of the characteristic value in the target site is expressed in the coordinate system;
     a comparison unit that compares the created distribution graph with the standard graph; and
     an output unit that outputs a result of comparing the distribution graph with the standard graph.
PCT/JP2016/084403 2015-11-25 2016-11-21 Ophthalmological device WO2017090550A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015-229887 2015-11-25
JP2015229887A JP6637743B2 (en) 2015-11-25 2015-11-25 Ophthalmic equipment

Publications (1)

Publication Number Publication Date
WO2017090550A1 true WO2017090550A1 (en) 2017-06-01

Family

ID=58763189

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/084403 WO2017090550A1 (en) 2015-11-25 2016-11-21 Ophthalmological device

Country Status (2)

Country Link
JP (1) JP6637743B2 (en)
WO (1) WO2017090550A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6940349B2 (en) * 2017-09-21 2021-09-29 株式会社トプコン Ophthalmic equipment
JP2020058647A (en) * 2018-10-11 2020-04-16 株式会社ニコン Image processing method, image processing device and image processing program
WO2020240629A1 (en) * 2019-05-24 2020-12-03 株式会社ニコンヘルスケアジャパン Data creation method, data creation device, and data creation program

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015080678A (en) * 2013-10-24 2015-04-27 キヤノン株式会社 Ophthalmologic apparatus

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5101975B2 (en) * 2007-10-04 2012-12-19 株式会社トプコン Fundus observation apparatus and fundus image processing apparatus
JP5790002B2 (en) * 2011-02-04 2015-10-07 株式会社ニデック Ophthalmic imaging equipment
JP6130723B2 (en) * 2013-05-01 2017-05-17 キヤノン株式会社 Information processing apparatus, information processing apparatus control method, and program
JP6429447B2 (en) * 2013-10-24 2018-11-28 キヤノン株式会社 Information processing apparatus, comparison method, alignment method, and program

Also Published As

Publication number Publication date
JP2017093855A (en) 2017-06-01
JP6637743B2 (en) 2020-01-29

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16868497

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16868497

Country of ref document: EP

Kind code of ref document: A1