WO2020054524A1 - Image processing apparatus, image processing method, and program - Google Patents

Image processing apparatus, image processing method, and program Download PDF

Info

Publication number
WO2020054524A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
luminance
dimensional
motion contrast
image processing
Prior art date
Application number
PCT/JP2019/034685
Other languages
French (fr)
Japanese (ja)
Inventor
裕之 今村
弘樹 内田
律也 富田
Original Assignee
キヤノン株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2019133788A external-priority patent/JP7446730B2/en
Application filed by キヤノン株式会社
Publication of WO2020054524A1 publication Critical patent/WO2020054524A1/en

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions

Definitions

  • the present invention relates to an image processing device, an image processing method, and a program.
  • There is a tomographic imaging apparatus such as an optical coherence tomography (OCT) apparatus.
  • the tomographic imaging apparatus is widely used for ophthalmic medical treatment because it is useful for more accurately diagnosing a disease.
  • In TD-OCT (Time Domain OCT), a broadband light source and a Michelson interferometer are combined; the position of the reference mirror is moved at a constant speed, the interference with the backscattered light acquired by the signal arm is measured, and a reflected light intensity distribution in the depth direction is obtained.
  • However, TD-OCT requires mechanical scanning, so high-speed image acquisition is difficult. Therefore, as higher-speed image acquisition methods, SD-OCT (Spectral Domain OCT), which uses a broadband light source and acquires the interference signal with a spectroscope, and SS-OCT (Swept Source OCT), which disperses the interference signal temporally by using a high-speed wavelength-swept light source, have been developed.
  • There is OCT angiography (hereinafter referred to as OCTA) technology, which uses OCT to non-invasively render three-dimensional fundus blood vessels in order to grasp pathological conditions related to the blood vessels of the eye to be examined.
  • In OCTA, the same position is scanned a plurality of times with the measurement light, and the motion contrast obtained from the interaction between the displacement of red blood cells and the measurement light is imaged.
  • FIG. 4A shows an example of OCTA imaging in which the B-scan, whose fast axis is the horizontal (x-axis) direction, is performed r times consecutively at each position yi (1 ≤ i ≤ n) in the slow axis (y-axis) direction.
  • Scanning a plurality of times at the same position is referred to as cluster scanning.
  • a plurality of tomographic images obtained at the same position is referred to as a cluster
  • a motion contrast image is generated for each cluster.
  • Japanese Patent Application Laid-Open No. H11-163,199 discloses a method of correcting the luminance of an image in order to suppress band-like white-line artifacts extending in the X direction (fast axis direction) caused by fixation disparity: the luminance values of an OCTA image (an en-face image in which the blood vessel region is emphasized) are integrated in the X direction to obtain a one-dimensional luminance profile along the Y direction (slow axis direction), and the image is corrected based on the ratio or difference between this one-dimensional luminance profile and a smoothed one-dimensional luminance profile obtained by smoothing it. That is, Patent Literature 1 discloses a technique for suppressing band-like artifacts that extend over the entire width of the OCTA image along the fast axis direction.
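  • As an illustration of the prior-art correction just described, the following is a minimal sketch (not from the patent itself), assuming the en-face OCTA image is a NumPy array with the slow axis (Y) as axis 0 and the fast axis (X) as axis 1; the window size and epsilon are illustrative choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def prior_art_row_correction(octa: np.ndarray, window: int = 15) -> np.ndarray:
    """Suppress band artifacts that span the full fast-axis width."""
    profile = octa.mean(axis=1)                        # integrate luminance along X
    smoothed = uniform_filter1d(profile, size=window)  # smoothed 1D profile along Y
    gain = smoothed / (profile + 1e-6)                 # ratio-based per-row factor
    return octa * gain[:, np.newaxis]                  # one factor applied per row
```

  • Because a single gain is computed per row, a scheme of this kind can only address artifacts that occupy an entire row, which is exactly the limitation discussed next.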
  • However, the conventional technique cannot reduce band-like artifacts that exist only partially along the fast axis direction. Further, the luminance values of blood vessel regions and the like running along the fast axis direction may be overcorrected or erroneously suppressed.
  • the present invention has been made in view of the above problems, and has as its object to reduce artifacts in an image of an eye to be inspected.
  • The present invention is not limited to the above-described object; providing operational effects that are derived from each configuration shown in the embodiments for carrying out the invention described later, and that cannot be obtained by the conventional technology, can also be positioned as another object of the present invention.
  • One of the image processing apparatuses of the present invention comprises: an obtaining means for obtaining a distribution of correction coefficient values through a calculation between a first approximate value distribution, obtained by performing two-dimensional conversion processing on at least one front image based on a three-dimensional tomographic image or a three-dimensional motion contrast image of an eye to be inspected, and a second approximate value distribution, obtained by performing one-dimensional conversion processing on the at least one front image;
  • a correction means for correcting at least a part of the three-dimensional tomographic image or the three-dimensional motion contrast image using the distribution of correction coefficient values; and
  • a generating means for generating at least a part of the corrected image.
  • artifacts in an image of an eye to be inspected can be reduced.
  • FIG. 1 is a block diagram illustrating a configuration of an image processing device according to a first embodiment of the present invention.
  • FIGS. 2A and 2B are diagrams illustrating an image processing system according to an embodiment of the present invention and the measurement optical system included in the tomographic image capturing apparatus of the system.
  • FIG. 3 is a flowchart of a process that can be executed by the image processing system according to the first embodiment of the present invention.
  • FIGS. 4A to 4F are diagrams illustrating the scanning method of OCTA imaging and luminance step artifacts occurring on OCT tomographic images and OCTA images according to the embodiments of the present invention.
  • FIGS. 5A and 5B are flowcharts of the processing executed in S303 and S304 according to the first embodiment of the present invention.
  • FIGS. 6A to 6E are diagrams illustrating the processing performed in S303 according to the first embodiment of the present invention.
  • FIGS. 7A to 7G are diagrams for explaining the image processing content executed in the first embodiment of the present invention.
  • FIGS. 8A to 8G are diagrams illustrating the effect of the image processing performed in the first embodiment of the present invention.
  • FIG. 9 is a block diagram illustrating the configuration of the image processing apparatus according to a second embodiment of the present invention.
  • FIG. 10 is a flowchart of a process that can be executed by the image processing system according to the second embodiment of the present invention.
  • Diagrams illustrating the image processing contents executed in the second embodiment of the present invention.
  • Diagrams illustrating the effect of the image processing performed in the second embodiment of the present invention.
  • A diagram illustrating the report screen displayed on the display unit in S1007 of the second embodiment of the present invention.
  • A diagram illustrating the confirmation screen in a third embodiment of the present invention.
  • Diagrams illustrating the front motion contrast images displayed on the confirmation screen in the third embodiment of the present invention.
  • The image processing apparatus performs the following image correction processing in order to robustly suppress various luminance step artifacts generated in the slow axis direction of a tomographic image of the subject's eye captured using OCT. That is, distribution information of the blood vessel candidate region is generated based on the luminance attenuation rate between the retinal surface layer and the outer retinal layer of the tomographic image. Next, a distribution of luminance correction coefficient values is generated by dividing the luminance value of the high-dimensionally smoothed tomographic image by the luminance value of the low-dimensionally (fast-axis-direction-only) smoothed tomographic image that has been weighted in the blood vessel candidate region.
  • the luminance step artifact that occurs in the slow axis direction is, for example, a band-like artifact that extends in the X direction (fast axis direction) due to fixation disparity.
  • the fast axis direction is, for example, the axial direction of the main scanning of the measurement light used when acquiring a three-dimensional tomographic image.
  • An example of the luminance step occurring in the slow axis direction of a tomographic image of the subject's eye, which is to be corrected in the present embodiment, will be described with reference to FIG. 4B. If fixation disparity of the subject's eye occurs during imaging of the OCT tomographic image, rescanning is performed. In long-duration imaging, for example, the positions of the eyelashes and the pupil of the subject's eye differ between the first scan and the rescan, so the luminance of the rescanned region becomes low, and a band-like low-luminance step such as that indicated by the white arrow in FIG. 4B easily occurs.
  • the horizontal direction is the fast axis direction
  • the vertical direction is the slow axis direction.
  • A band-shaped luminance step artifact is not always generated over the entire extent of the image in the fast axis direction; a band-shaped luminance step localized in a part of the fast axis direction occurs in the following case. That is, when an area including vitreous opacity is scanned at the time of rescanning and a shadow area occurs, a band-shaped low-luminance step localized in a part of the fast axis direction may occur in the rescanned area on the tomographic image (the low-luminance step indicated by the white arrow in FIG. 4E).
  • FIG. 2 is a diagram illustrating a configuration of an image processing system 10 including the image processing apparatus 101 according to the present embodiment.
  • The image processing apparatus 101 is connected, via interfaces, to a tomographic image capturing apparatus 100 (also referred to as an OCT apparatus), an external storage unit 102, an input unit 103, and a display unit 104.
  • the tomographic image capturing apparatus 100 is an apparatus that captures a tomographic image of the eye to be inspected.
  • SD-OCT is used as the tomographic imaging apparatus 100.
  • the present invention is not limited to this, and may be configured using, for example, SS-OCT.
  • the measurement optical system 100-1 is an optical system for acquiring an anterior ocular segment image, an SLO fundus image of an eye to be examined, and a tomographic image.
  • the stage section 100-2 enables the measurement optical system 100-1 to move forward, backward, left, and right.
  • the base unit 100-3 incorporates a spectroscope described later.
  • the image processing apparatus 101 is a computer that controls the stage unit 100-2, controls the alignment operation, reconstructs a tomographic image, and the like.
  • the external storage unit 102 stores a tomographic imaging program, patient information, imaging data, image data and measurement data of a past examination, and the like.
  • The input unit 103 is used to issue instructions to the computer, and specifically includes a keyboard and a mouse.
  • the display unit 104 includes, for example, a monitor.
  • An objective lens 201 is installed so as to face the subject's eye 200, and a first dichroic mirror 202 and a second dichroic mirror 203 are arranged on its optical axis. These dichroic mirrors split the light, by wavelength band, into the optical path 250 of the OCT optical system, the optical path 251 for the SLO optical system and the fixation lamp, and the optical path 252 for anterior eye observation.
  • The optical path 251 for the SLO optical system and the fixation lamp includes an SLO scanning unit 204, lenses 205 and 206, a mirror 207, a third dichroic mirror 208, an APD (Avalanche Photodiode) 209, an SLO light source 210, and a fixation lamp 211.
  • the mirror 207 is a prism on which a perforated mirror or a hollow mirror is deposited, and separates the illumination light from the SLO light source 210 from the return light from the subject's eye.
  • the third dichroic mirror 208 separates the optical path of the SLO light source 210 and the optical path of the fixation lamp 211 for each wavelength band.
  • the SLO scanning unit 204 scans the light emitted from the SLO light source 210 on the eye 200, and includes an X scanner that scans in the X direction and a Y scanner that scans in the Y direction.
  • Since the X scanner needs to perform high-speed scanning, it is constituted by a polygon mirror, while the Y scanner is constituted by a galvanometer mirror.
  • the lens 205 is driven by a motor (not shown) for focusing the SLO optical system and the fixation lamp 211.
  • the SLO light source 210 generates light having a wavelength near 780 nm.
  • the APD 209 detects the return light from the subject's eye.
  • the fixation lamp 211 generates visible light and urges the subject to fixate.
  • Light emitted from the SLO light source 210 is reflected by the third dichroic mirror 208, passes through the mirror 207, passes through the lenses 206 and 205, and is scanned on the eye 200 by the SLO scanning means 204.
  • the return light from the subject's eye 200 returns along the same path as the illumination light, is reflected by the mirror 207, is guided to the APD 209, and an SLO fundus image is obtained.
  • The light emitted from the fixation lamp 211 passes through the third dichroic mirror 208 and the mirror 207, passes through the lenses 206 and 205, and is formed into a predetermined shape at an arbitrary position on the eye 200 by the SLO scanning unit 204, thereby encouraging the subject to fixate.
  • On the optical path 252, lenses 212 and 213, a split prism 214, and a CCD 215 for anterior eye observation that detects infrared light are arranged.
  • the CCD 215 has sensitivity at the wavelength of irradiation light (not shown) for anterior ocular segment observation, specifically, around 970 nm.
  • The split prism 214 is disposed at a position conjugate with the pupil of the eye 200 to be inspected, and the distance in the Z-axis direction (optical axis direction) from the measurement optical system 100-1 to the eye 200 can be detected as a split image of the anterior eye segment.
  • the optical path 250 of the OCT optical system constitutes an OCT optical system as described above, and is used for capturing a tomographic image of the eye 200 to be inspected. More specifically, an interference signal for forming a tomographic image is obtained.
  • the XY scanner 216 scans light on the eye 200 to be inspected, and is illustrated as a single mirror in FIG. 2B, but is actually a galvano mirror that performs scanning in the XY two-axis directions.
  • Here, the X direction is the fast axis direction and the Y direction is the slow axis direction with respect to the direction in which the measurement light is scanned on the fundus.
  • When the scan is not a raster scan as in the present embodiment (for example, a circle scan or a radial scan), this is not always the case.
  • the lens 217 is driven by a motor (not shown) to focus light from the OCT light source 220 emitted from the fiber 224 connected to the optical coupler 219 to the eye 200 to be inspected.
  • At the same time, the return light from the eye 200 to be examined is focused in a spot shape on the tip of the fiber 224 and enters the fiber.
  • 220 is an OCT light source
  • 221 is a reference mirror
  • 222 is a dispersion compensating glass
  • 223 is a lens
  • 219 is an optical coupler
  • 224 to 227 are single mode optical fibers connected to and integrated with the optical coupler
  • 230 is a spectroscope.
  • the light emitted from the OCT light source 220 passes through an optical fiber 225 and is split via an optical coupler 219 into measurement light on the optical fiber 224 side and reference light on the optical fiber 226 side.
  • the measurement light is applied to the subject's eye 200 to be observed through the above-described OCT optical system optical path, and reaches the optical coupler 219 via the same optical path due to reflection and scattering by the subject's eye 200.
  • the reference light reaches the reference mirror 221 via the optical fiber 226, the lens 223, and the dispersion compensating glass 222 inserted for adjusting the wavelength dispersion of the measurement light and the reference light, and is reflected. Then, the light returns to the same optical path and reaches the optical coupler 219.
  • the measuring light and the reference light are multiplexed by the optical coupler 219 to become interference light.
  • interference occurs when the optical path length of the measurement light and the optical path length of the reference light become substantially the same.
  • the reference mirror 221 is held so as to be adjustable in the optical axis direction by a motor and a driving mechanism (not shown), and can adjust the optical path length of the reference light to the optical path length of the measurement light.
  • the interference light is guided to the spectroscope 230 via the optical fiber 227.
  • the polarization adjustment units 228 and 229 are provided in the optical fibers 224 and 226, respectively, and perform polarization adjustment. These polarization adjusting sections have several portions where the optical fiber is looped. By rotating the loop portion about the longitudinal direction of the fiber, the fiber is twisted, and the polarization states of the measurement light and the reference light can be adjusted and matched.
  • the spectroscope 230 includes lenses 232 and 234, a diffraction grating 233, and a line sensor 231.
  • the interference light emitted from the optical fiber 227 is converted into parallel light through the lens 234, is then separated by the diffraction grating 233, and is imaged on the line sensor 231 by the lens 232.
  • The OCT light source 220 is an SLD (Super Luminescent Diode), which is a typical low-coherence light source.
  • the center wavelength is 855 nm and the wavelength bandwidth is about 100 nm.
  • the bandwidth is an important parameter because it affects the resolution of the obtained tomographic image in the optical axis direction.
  • Although the type of light source used here is an SLD, any light source capable of emitting low-coherence light may be used; for example, an ASE (Amplified Spontaneous Emission) source can also be used.
  • Near-infrared light is suitable for the center wavelength in view of measuring the eye. Further, since the center wavelength affects the lateral resolution of the obtained tomographic image, it is desirable that the center wavelength be as short as possible. For both reasons, the center wavelength was set to 855 nm.
  • In the present embodiment, a Michelson interferometer is used as the interferometer, but a Mach-Zehnder interferometer may be used. Depending on the light amount difference between the measurement light and the reference light, it is desirable to use a Mach-Zehnder interferometer when the difference is large, and a Michelson interferometer when the difference is relatively small.
  • The image processing apparatus 101 is a personal computer (PC) connected to the tomographic image capturing apparatus 100, and includes an image acquisition unit 101-01, a storage unit 101-02, an imaging control unit 101-03, an image processing unit 101-04, and a display control unit 101-05.
  • The image processing apparatus 101 realizes the functions of the image acquisition unit 101-01, the imaging control unit 101-03, the image processing unit 101-04, and the display control unit 101-05 by having its arithmetic processing unit (CPU) execute the corresponding software modules.
  • the present invention is not limited to this.
  • The image processing unit 101-04 may be realized by dedicated hardware such as an ASIC, and the display control unit 101-05 may be realized by a dedicated processor such as a GPU instead of the CPU.
  • the connection between the tomographic imaging apparatus 100 and the image processing apparatus 101 may be configured via a network.
  • the image acquisition unit 101-01 acquires signal data of an SLO fundus image and a tomographic image captured by the tomographic image capturing apparatus 100.
  • the image acquisition unit 101-01 has a tomographic image generation unit 101-11.
  • the tomographic image generation unit 101-11 acquires signal data (interference signal) of the tomographic image captured by the tomographic image capturing apparatus 100, generates a tomographic image by signal processing, and stores the generated tomographic image in the storage unit 101-02. Store.
  • the imaging control unit 101-03 controls imaging of the tomographic imaging apparatus 100.
  • the imaging control includes instructing the tomographic imaging apparatus 100 regarding setting of imaging parameters, and instructing the start or end of imaging.
  • the image processing unit 101-04 includes a positioning unit 101-41, an image feature obtaining unit 101-42, a projecting unit 101-43, and a correcting unit 101-44.
  • the image acquisition unit 101-01 described above is an example of a first acquisition unit according to the present invention.
  • the image feature acquisition unit 101-42 acquires the layer boundary of the retina and the choroid, a candidate blood vessel region, the position of the fovea and the center of the optic disc from the tomographic image.
  • the projection unit 101-43 projects an image in a depth range based on the position of the layer boundary acquired by the image feature acquisition unit 101-42, and generates a front image.
  • the correction unit 101-44 includes a conversion unit 101-441, a weighting unit 101-442, and a calculation unit 101-443.
  • The correction unit 101-44 suppresses the luminance step generated in the slow axis direction of the tomographic image by using luminance correction coefficients calculated through arithmetic processing between the high-dimensionally smoothed tomographic image and a low-dimensionally smoothed image obtained by smoothing, in the fast axis direction, the tomographic image weighted by the luminance of the blood vessel candidate region.
  • the conversion unit 101-441 includes a high-dimensional conversion unit 101-4411 that generates a high-dimensional approximate luminance value distribution, and a low-dimensional conversion unit 101-4412 that generates a low-dimensional approximate luminance value distribution.
  • the weighting unit 101-442 weights the luminance value of the tomographic image based on the distribution information of the blood vessel candidate region acquired by the blood vessel acquisition unit 101-421.
  • The calculation unit 101-443 calculates the luminance correction coefficient value distribution by performing a calculation between the high-dimensionally smoothed image generated by the high-dimensional conversion unit 101-4411 and the low-dimensionally smoothed image generated by the low-dimensional conversion unit 101-4412.
  • The external storage unit 102 stores information on the subject's eye (patient name, age, gender, etc.), captured tomographic images and SLO images, imaging parameters, images obtained by processing them, distribution data of blood vessel candidate regions, luminance correction coefficient value distributions, and parameters set by the operator, in association with one another.
  • the input unit 103 is, for example, a mouse, a keyboard, a touch operation screen, or the like. An operator issues an instruction to the image processing apparatus 101 or the tomographic image capturing apparatus 100 via the input unit 103.
  • FIG. 3 is a flowchart showing a flow of operation processing of the entire system in the present embodiment.
  • Step 301> By operating the input unit 103, the operator sets the imaging conditions of the OCT image (three-dimensional tomographic image) to be instructed to the tomographic imaging apparatus 100.
  • the procedure includes 1) selection of a scan mode and 2) a procedure for setting imaging parameters corresponding to the scan mode.
  • OCT imaging is executed with the following settings.
    1) Select the Macula 3D scan mode.
    2) Set the following imaging parameters:
      2-1) Scan area size: 10 x 10 mm
      2-2) Main scanning direction: horizontal
      2-3) Scan interval: 0.01 mm
      2-4) Fixation light position: midway between the fovea and the optic disc
      2-5) Number of B-scans at the same imaging position: 1
      2-6) Coherence gate position: vitreous side
  • The operator operates the input unit 103 and presses an imaging start button (not shown) on the imaging screen to start capturing an OCT tomographic image under the set imaging conditions.
  • the imaging control unit 101-03 instructs the tomographic imaging apparatus 100 to perform OCT imaging based on the above settings, and the tomographic imaging apparatus 100 acquires a corresponding OCT tomographic image.
  • the tomographic imaging apparatus 100 also acquires an SLO image, and executes a tracking process based on the SLO moving image.
  • the number of times of capturing a tomographic image at the same scanning position is one (not repeated).
  • the number of times of imaging at the same scanning position may be set to an arbitrary number.
  • Step 302> The image acquisition unit 101-01 and the image processing unit 101-04 reconstruct the tomographic image acquired in S301.
  • the tomographic image generation unit 101-11 generates a tomographic image by performing wave number conversion, fast Fourier transform (FFT), and absolute value conversion (acquisition of amplitude) on the interference signal acquired by the image acquisition unit 101-01.
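  • A minimal sketch of this reconstruction for a single A-scan is shown below (assuming the spectrometer samples are re-gridded onto a uniform wavenumber axis; the grid names are illustrative, and k_measured must be ascending for np.interp).

```python
import numpy as np

def reconstruct_ascan(spectrum: np.ndarray, k_measured: np.ndarray,
                      k_uniform: np.ndarray) -> np.ndarray:
    # Wave number conversion: resample the interference signal onto a
    # uniformly spaced wavenumber grid.
    resampled = np.interp(k_uniform, k_measured, spectrum)
    # Fast Fourier transform, then absolute value (amplitude acquisition).
    depth_profile = np.abs(np.fft.fft(resampled))
    # Keep the positive-depth half of the symmetric FFT output.
    return depth_profile[: depth_profile.size // 2]
```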
  • the positioning unit 101-41 performs positioning between B-scan tomographic images.
  • the image feature acquisition unit 101-42 acquires the layer boundaries of the retina and the choroid and the boundaries (not shown) of the anterior and posterior surfaces of the cribriform plate from the tomographic image.
  • As the layer boundaries, as shown in FIG. 6A, the inner limiting membrane 1, the nerve fiber layer-ganglion cell layer boundary 2, the ganglion cell layer-inner plexiform layer boundary 3, the photoreceptor inner segment-outer segment junction 4, the retinal pigment epithelium 5, the Bruch's membrane 6, and the choroid-sclera boundary 7 are acquired.
  • The detected end of the Bruch's membrane 6 (the edge of the Bruch's membrane opening) is specified as the disc boundary of the optic papilla.
  • In the present embodiment, a deformable model is used as the method for acquiring the layer boundaries of the retina and the choroid and the front and rear boundaries of the cribriform plate, but any known segmentation method may be used.
  • the layer boundary to be obtained is not limited to the above.
  • The inner plexiform layer-inner nuclear layer boundary, the inner nuclear layer-outer plexiform layer boundary, the outer plexiform layer-outer nuclear layer boundary, the external limiting membrane, and the photoreceptor outer segment tips (COST) of the retina may also be acquired by any known segmentation method.
  • the present invention includes a case where the choroid capillary plate-Sattler layer boundary and the Sattler layer-Haller layer boundary of the choroid are obtained by any known segmentation method.
  • The front and rear boundaries of the cribriform plate may be set manually. For example, they can be set by moving the position of a specific layer boundary (for example, the inner limiting membrane 1) by a predetermined amount.
  • the acquisition process of the layer boundary and the front / rear surface boundary of the cribriform plate may be performed at the time of acquiring the blood vessel candidate region in S303 instead of this step.
  • the blood vessel acquisition units 101-421 generate information on the distribution of the blood vessel candidate regions based on the result of comparing the luminance statistics between different predetermined depth ranges.
  • The blood vessel region has the features that the depth range (layer type) in which it exists is roughly determined, as shown by 601 in FIG. 6A, and that a shadow 602 is likely to be generated below it.
  • In a shadow region, the luminance tends to be low over most of the depth range, as shown by 603 in FIG. 6A.
  • Therefore, the blood vessel candidate region is specified based on the difference (difference or ratio) in luminance between the “depth range in which blood vessels are likely to exist (retinal surface layer)” and the “depth range in which the luminance decrease due to shadows is most remarkable (outer retina)”, and a blood vessel candidate region map is generated.
  • the map is an example of distribution information in an in-plane direction intersecting the depth direction of the subject's eye.
  • the weighting unit 101-442 generates a weighted tomographic image in which the luminance value in the blood vessel candidate region of the tomographic image is weighted using the information on the distribution of the blood vessel candidate region generated by the blood vessel acquisition unit 101-421 in S303.
  • the high-dimensional conversion unit 101-4411 generates a high-dimensional smoothed tomographic image
  • the low-dimensional conversion unit 101-4412 performs smoothing processing on the weighted tomographic image in the fast axis direction.
  • The calculation unit 101-443 generates a luminance correction coefficient map for the tomographic image by performing arithmetic processing between the high-dimensionally smoothed tomographic image and the low-dimensionally smoothed tomographic image.
  • The correction unit 101-44 multiplies each pixel of the tomographic image by the luminance correction coefficient value calculated in S304 to generate a tomographic image in which the luminance step has been corrected.
  • the method of applying the luminance correction coefficient is not limited to multiplication, and any known calculation method may be applied. For example, any of addition, subtraction, and division may be applied. Further, at least a part of the three-dimensional tomographic image may be corrected using the luminance correction coefficient value. At this time, at least a part of the three-dimensional tomographic image also includes a C-scan image and the like.
  • The display control unit 101-05 displays the brightness-corrected tomographic image generated in S305 on the display unit 104, and the brightness-corrected tomographic image is saved in the storage unit 101-02 or the external storage unit 102.
  • It is preferable that the image processing unit 101-04, which is an example of the image generating unit, generate at least one front image (front tomographic image) based on at least a part of the corrected three-dimensional tomographic image.
  • the display control unit 101-05 preferably causes the display unit 104 to display at least one generated front image.
  • the blood vessel acquiring unit 101-421 acquires the tomographic image generated by the tomographic image generating unit 101-11 in S302.
  • the blood vessel acquisition unit 101-421 acquires the layer data of the retina and the choroid, and the boundary data of the front and back surfaces of the cribriform plate, which are specified by the image feature acquisition unit 101-42 in S302.
  • The blood vessel acquisition unit 101-421 instructs the correction unit 101-44 to perform a correction process (hereinafter referred to as roll-off correction) for compensating for the signal attenuation in the depth direction caused by the roll-off characteristic of the tomographic image capturing apparatus 100.
  • The correction unit 101-44 performs the roll-off correction processing.
  • the correction coefficient H (z) in the depth direction for performing the roll-off correction can be expressed as, for example, Expression (1).
  • the roll-off correction is performed by multiplying each pixel value of the tomographic image by the correction coefficient H (z).
  • H(z) = {(BGa + 2σ) / (BGa(z) + 2σ(z))} / (1 + RoF(z) - RoF(z0))   (1)
  • BGa and σ denote the average value and the standard deviation, respectively, of the luminance distribution BG of the entire B-scan data acquired with no object present.
  • BGa(z) and σ(z) denote the average value and the standard deviation of the luminance distribution in the direction orthogonal to the z-axis, calculated at each depth position z in the B-scan data acquired with no object present.
  • z0 denotes a reference depth position within the B-scan range. Although z0 may be set to an arbitrary constant, it is set here to 1/4 of the maximum value of z. Note that the roll-off correction formula is not limited to the above; any known correction process may be executed as long as it has the effect of compensating for the signal attenuation in the depth direction caused by the roll-off characteristic of the tomographic image capturing apparatus 100.
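  • A hedged sketch of Expression (1) as reconstructed above, assuming `bg` is B-scan data of shape (Z, X) acquired with no object present, `rof` is the device's roll-off profile RoF(z), and the input `bscan` shares the same depth sampling; all array names are illustrative.

```python
import numpy as np

def rolloff_correct(bscan: np.ndarray, bg: np.ndarray, rof: np.ndarray) -> np.ndarray:
    bga, sigma = bg.mean(), bg.std()   # BGa and sigma of the whole background
    bga_z = bg.mean(axis=1)            # BGa(z): mean at each depth position
    sigma_z = bg.std(axis=1)           # sigma(z): std at each depth position
    z0 = bscan.shape[0] // 4           # reference depth, 1/4 of the maximum z
    h = ((bga + 2 * sigma) / (bga_z + 2 * sigma_z)) / (1.0 + rof - rof[z0])
    return bscan * h[:, np.newaxis]    # multiply each pixel by H(z)
```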
  • the blood vessel acquisition unit 101-421 instructs the projection unit 101-43 to generate a front tomographic image of the retinal surface layer and a front tomographic image of the outer retinal layer in preparation for comparing the luminance statistics in different depth ranges.
  • The projection unit 101-43 generates the front tomographic images. Any known projection method may be used, but in this embodiment, average value projection is used.
  • FIG. 6B shows an example of a front tomographic image of the retina surface layer
  • FIG. 6C shows an example of a front tomographic image of the outer retina layer.
  • In the front tomographic image of the retinal surface layer, the luminance value in the blood vessel region is high (owing to the interaction between the measurement light and the red blood cells in the blood vessel region), while in the front tomographic image of the outer retinal layer, the luminance value in the blood vessel region is low because of shadows.
  • The blood vessel acquisition unit 101-421, which is an example of the information generation unit, calculates the distribution of the luminance attenuation rate Ar based on the luminance values of the two types of front tomographic images generated in S504, in order to compare luminance statistics in different depth ranges.
  • Specifically, Ar(x, y) = (luminance of the front tomographic image of the retinal surface layer) / (luminance of the front tomographic image of the outer retinal layer) is calculated for each pixel (x, y) as an index for comparing luminance statistics in different depth ranges, and a map of the attenuation rate Ar(x, y) (FIG. 6D) is generated.
  • the blood vessel acquisition unit 101-421 generates a blood vessel candidate area map V (x, y) representing the likeness of a blood vessel area by normalizing the luminance attenuation rate map Ar (x, y) generated in S505.
  • Specifically, the blood vessel candidate region map is obtained by normalizing the luminance attenuation rate map Ar(x, y) calculated in S505 using predetermined values WL and WW, as V(x, y) = (Ar(x, y) - WL) / WW, clipped so that 0 ≤ V(x, y) ≤ 1 is satisfied.
  • FIG. 6E shows an example of the blood vessel candidate region map V (x, y). It can be seen that the blood vessel candidate region is highlighted.
  • the normalization processing is not limited to the above, and any known normalization method may be used.
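  • Putting S505 and S506 together, a minimal sketch follows, assuming `surface` and `outer` are the two average-projection front images and WL/WW are empirically chosen level and width values (the ratio form of the attenuation rate is an assumption, as noted above).

```python
import numpy as np

def vessel_candidate_map(surface: np.ndarray, outer: np.ndarray,
                         wl: float, ww: float) -> np.ndarray:
    ar = surface / (outer + 1e-6)   # luminance attenuation rate Ar(x, y)
    v = (ar - wl) / ww              # normalize with window level / width
    return np.clip(v, 0.0, 1.0)     # enforce 0 <= V(x, y) <= 1
```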
  • If a low-luminance area in a projection image obtained by adding the luminance values over all depth ranges, or a low-luminance area in a front tomographic image generated for the depth range of the outer retinal layer, is simply regarded as a blood vessel area, shadows due to vitreous opacity (603 in FIG. 6A) or a white spot (605 in FIG. 6A) are also included.
  • In contrast, with the method of the present embodiment, distribution information of only the blood vessel candidate area (and bleeding areas arising from blood vessels) can be generated. Further, cluster scanning as in OCTA is unnecessary, and distribution information on blood vessel candidate regions can be generated even from a single-scan tomographic image.
  • To detect a high-brightness lesion such as a white spot (exudate), for example, a luminance attenuation rate such as (average luminance value in the deep retinal layer) / (average luminance value in the outer retinal layer) may be calculated.
  • the blood vessel region, the bleeding region, the high-brightness lesion, and the like are regions included in the eye to be inspected, and are examples of regions that cause shadows generated along the depth direction of the eye to be inspected.
  • The generation of a front tomographic image is not indispensable for calculating the luminance attenuation rate; the calculation may be performed in A-scan units on the three-dimensional tomographic image.
  • The luminance attenuation rate is not limited to the ratio between luminance statistics in different depth ranges, and may be calculated based on, for example, the difference between luminance statistics in different depth ranges.
  • the correction unit 101-44 acquires the tomographic image generated by the tomographic image generation unit 101-11 in S302. Next, the operator instructs, via a user interface displayed on the display unit 104, a desired projection depth range and generation of a front tomographic image corresponding to the projection depth range.
  • the projection unit 101-43 projects the three-dimensional tomographic image to which the roll-off correction has been applied in the instructed depth range to generate a front tomographic image (FIG. 7A).
  • the average value of the tomographic data in the depth direction corresponding to each pixel in the plane corresponding to the front of the fundus is set as the pixel value of the pixel.
  • the projection processing is not limited to such average value projection, and any known projection method may be used.
  • the median value, maximum value, mode value, or the like of the tomographic data in the depth direction corresponding to each pixel may be used as the pixel value.
  • the high-dimensional conversion unit 101-4411 calculates a high-dimensional approximate value distribution, which is an example of the first approximate value distribution, by smoothing the luminance value of the front tomographic image two-dimensionally.
  • the two-dimensional smoothing process is an example of a process (two-dimensional conversion process) of converting the front image into two dimensions when acquiring the first approximate value distribution.
  • That is, the high-dimensional conversion unit 101-4411 smoothes the luminance value of each pixel of the front tomographic image generated in S511 two-dimensionally, thereby calculating a high-dimensional approximate value distribution of the luminance values of the tomographic image.
  • the smoothing process is performed as an example of the process of calculating the approximate value distribution, but a morphological operation such as a closing process or an opening process may be performed as described later.
  • Smoothing may be performed using an arbitrary spatial filter, or by frequency-transforming the tomographic data using a fast Fourier transform (FFT) or the like and then suppressing the high-frequency components.
  • In the latter case, the convolution operation is unnecessary, so the smoothing process can be executed at high speed.
  • Alternatively, a predetermined window function (a Hamming window or a Hanning window) may be applied in the frequency domain to suppress ringing, or the high-frequency components may be suppressed by applying a Butterworth filter or the like.
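  • The following is one possible frequency-domain implementation of the two-dimensional smoothing in S512, using a Butterworth low-pass response as mentioned above; the cutoff (in cycles per pixel) and the filter order are illustrative assumptions.

```python
import numpy as np

def smooth2d_fft(img: np.ndarray, cutoff: float = 0.02, order: int = 2) -> np.ndarray:
    fy = np.fft.fftfreq(img.shape[0])[:, np.newaxis]   # vertical spatial frequencies
    fx = np.fft.fftfreq(img.shape[1])[np.newaxis, :]   # horizontal spatial frequencies
    r = np.sqrt(fy ** 2 + fx ** 2)                     # radial frequency magnitude
    lp = 1.0 / (1.0 + (r / cutoff) ** (2 * order))     # Butterworth low-pass gain
    return np.real(np.fft.ifft2(np.fft.fft2(img) * lp))
```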
  • the weighting unit 101-442 acquires the blood vessel candidate region map V (x, y) (FIG. 7D) from the blood vessel acquisition unit 101-421.
  • The weighting unit 101-442 weights the luminance value of the tomographic image in the blood vessel candidate region using the values of the blood vessel candidate region map V(x, y). Note that this weighting is an example of a calculation process that, when acquiring the low-dimensional approximate value distribution (an example of the second approximate value distribution), treats a predetermined tissue (a blood vessel or a bleeding region existing along the fast axis direction of the measurement light used when acquiring the three-dimensional tomographic image) differently from regions other than the predetermined tissue. This weighting is not essential to the present invention.
  • Specifically, in a region where the value (blood-vessel likeness) of the blood vessel candidate region map V(x, y) is high, the luminance value of the front tomographic image acquired in S511 is brought close to the high-dimensional approximate value I_2ds(x, y) calculated in S512, and in a region where the value of V(x, y) is low, the luminance value I(x, y) of the front tomographic image acquired in S511 is maintained. That is, the weighted luminance value is calculated as
  • I_w(x, y) = (1.0 - V(x, y)) * I(x, y) + V(x, y) * I_2ds(x, y)
  • FIG. 7E shows an example of the weighted front tomographic image I_w (x, y).
  • The weighting method for the luminance values of the blood vessel candidate region shown here is merely an example; any weighting may be performed as long as it is a process that increases the luminance value of a blood vessel candidate region traveling in the fast axis direction or brings it close to the luminance values near the blood vessel candidate region.
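  • The weighting formula above translates directly into code; a sketch, assuming `front` is I(x, y), `i2ds` is I_2ds(x, y) from S512, and `v` is V(x, y) in [0, 1]:

```python
import numpy as np

def weight_vessels(front: np.ndarray, i2ds: np.ndarray, v: np.ndarray) -> np.ndarray:
    # Blend toward the high-dimensional approximate value where the
    # blood-vessel likeness is high; keep the original luminance elsewhere.
    return (1.0 - v) * front + v * i2ds
```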
  • the low-dimensional conversion unit 101-4412 calculates a low-dimensional approximate value distribution related to the luminance value of the tomographic image. Specifically, a process (smoothing process or morphological operation) of calculating a rough value distribution in the fast axis direction is performed on the brightness value of each pixel of the front tomographic image weighted with the brightness value of the blood vessel candidate region.
  • FIG. 7F shows an example of the low-dimensional approximate value distribution calculated in this step.
  • the low-dimensional smoothing processing is an example of processing (one-dimensional conversion processing) for converting the front image into one dimension when obtaining the second approximate value distribution.
  • In this fast-axis smoothing, too, a predetermined window function such as a Hamming window or a Hanning window may be applied when processing in the frequency domain.
  • The calculation unit 101-443 calculates the luminance correction coefficient distribution for the tomographic image by performing a calculation between the high-dimensional approximate value distribution and the low-dimensional approximate value distribution of the tomographic image.
  • Specifically, the luminance value of the two-dimensionally smoothed tomographic image generated in S512 is divided by the luminance value of the weighted, fast-axis-direction-smoothed tomographic image generated in S515, so that the luminance correction coefficient map for the tomographic image (FIG. 7G) is generated.
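  • An end-to-end sketch of S512 through S516 plus the correction of S305, using simple box filters for the smoothing steps (window sizes are illustrative assumptions; any of the smoothing variants described above could be substituted):

```python
import numpy as np
from scipy.ndimage import uniform_filter, uniform_filter1d

def correct_luminance_step(front: np.ndarray, v: np.ndarray,
                           win2d: int = 51, win1d: int = 51) -> np.ndarray:
    i2ds = uniform_filter(front, size=win2d)               # high-dimensional (2D) smoothing (S512)
    weighted = (1.0 - v) * front + v * i2ds                # weight vessel candidates (S513-S514)
    i1ds = uniform_filter1d(weighted, size=win1d, axis=1)  # fast-axis-only smoothing (S515)
    coeff = i2ds / (i1ds + 1e-6)                           # luminance correction coefficient map (S516)
    return front * coeff                                   # multiply each pixel (S305)
```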
  • FIG. 8A is an example of a tomographic image including both a band-shaped luminance step limited to a certain range in the fast axis direction and a blood vessel region traveling in the fast axis direction. It is necessary to selectively suppress only the band-shaped luminance step without overcorrecting the luminance value of the blood vessel region traveling in the fast axis direction.
  • FIGS. 8B, 8D, and 8F show examples of processing results when the low-dimensional conversion processing in S515, the luminance correction coefficient map calculation processing in S516, and the luminance step correction processing in S305 are performed without performing luminance weighting on the blood vessel candidate region.
  • In the blood vessel region (running in the fast axis direction) indicated by the white arrow in FIG. 8B, a low-luminance region remains in a band shape and resembles a luminance step. Therefore, a high correction coefficient value is calculated in the blood vessel region indicated by the white arrow in FIG. 8D even though there is no luminance step, and in the luminance step correction processing in S305, overcorrection of the luminance values of the blood vessel traveling in the fast axis direction and its neighboring region occurs (the region indicated by the white arrow in FIG. 8F).
  • On the other hand, FIGS. 8C, 8E, and 8G show the processing results when the low-dimensional conversion processing in S515, the luminance correction coefficient map calculation processing in S516, and the luminance step correction processing in S305 are performed after performing luminance weighting on the blood vessel candidate region.
  • In FIG. 8C, no band-like low-luminance area corresponding to the blood vessel area running in the fast axis direction is generated. Therefore, in FIG. 8E, an appropriate correction coefficient value is calculated for the blood vessel region, and no overcorrection of the luminance values of the blood vessel traveling in the fast axis direction and its vicinity is found in the luminance step correction process of S305 (FIG. 8G).
  • In the present embodiment, the method of suppressing a band-shaped luminance step generated on a front tomographic image (generating a front tomographic image with the luminance step corrected) has been described, but the present invention is not limited to this.
  • the following procedure may be used to suppress a band-shaped luminance step generated on the three-dimensional tomographic image and generate a three-dimensional tomographic image with the luminance step corrected.
  • First, front tomographic images are generated for a number of different projection depth ranges, and a luminance step correction coefficient map is generated for each front tomographic image.
  • Next, each pixel of the three-dimensional tomographic image is multiplied by the value (correction coefficient value) of the luminance step correction coefficient map corresponding to the projection depth range to which the pixel belongs, whereby a three-dimensional tomographic image with the luminance step corrected can be generated.
  • As the different projection depth ranges, for example, four types may be used: the retinal surface layer, the deep retinal layer, the outer retinal layer, and the choroid. Alternatively, the type of each layer belonging to the retina and the choroid may be specified; see the sketch below.
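  • A hedged sketch of this volume-wide correction, simplified to flat depth slabs for brevity (the embodiment follows the segmented layer boundaries, so the real ranges vary per A-scan); `coeff_map_for` stands for the per-front-image coefficient computation sketched earlier and is a hypothetical helper.

```python
import numpy as np

def correct_volume(volume: np.ndarray, ranges: dict, coeff_map_for) -> np.ndarray:
    # volume: (Z, Y, X); ranges: layer name -> (z_start, z_end)
    out = volume.astype(float)
    for z0, z1 in ranges.values():
        front = volume[z0:z1].mean(axis=0)     # average-value projection of the range
        coeff = coeff_map_for(front)           # (Y, X) correction coefficient map
        out[z0:z1] *= coeff[np.newaxis, :, :]  # apply the range's coefficients
    return out
```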
  • Note that the present invention is not limited to the correction of band-shaped luminance steps generated when a tomographic image is captured by a so-called 3D scan, and it can be applied to the correction of luminance steps occurring in the slow axis direction when tomographic images are captured with various scan patterns.
  • the present invention includes a case of correcting a luminance step generated in a slow axis direction when photographing is performed by a number of circle scans having different radii or a radial scan.
  • In the case of circle scans, the circumferential direction is regarded as the fast axis direction, and the direction orthogonal to the circumferential direction is regarded as the slow axis direction.
  • In the case of a radial scan, each radial scan direction passing through a predetermined point is regarded as the fast axis direction, and the circumferential direction around the predetermined point is regarded as the slow axis direction.
  • As described above, the image processing apparatus 101 performs the following image correction processing in order to robustly suppress various luminance step artifacts generated in the slow axis direction of a tomographic image of the subject's eye captured using OCT. That is, the image processing apparatus compares a plurality of pieces of distribution information in an in-plane direction intersecting the depth direction of the subject's eye, corresponding to a plurality of depth ranges in the three-dimensional tomographic image, and thereby generates distribution information on a predetermined region in the eye to be inspected (the blood vessel candidate region) that causes shadows generated along the depth direction.
  • the image processing apparatus generates distribution information of a blood vessel candidate region based on a luminance attenuation rate between a retinal surface layer and a retinal outer layer of a tomographic image.
  • Next, a luminance correction coefficient map is generated by weighting the luminance value of the tomographic image with the values of the blood vessel candidate region and then dividing the luminance value of the high-dimensionally smoothed tomographic image by the luminance value of the weighted tomographic image smoothed in the fast axis direction.
  • By multiplying the tomographic image by the luminance correction coefficients, luminance step artifacts generated in the slow axis direction of the tomographic image are robustly suppressed. It is sufficient that the distribution information is finally generated; it is not necessary to generate, for example, images (maps) as the plurality of pieces of distribution information in the course of the generation.
  • In the second embodiment, the image processing apparatus performs the following image processing in order to robustly suppress various luminance step artifacts generated in the slow axis direction of a motion contrast image generated from tomographic images of the subject's eye obtained by cluster imaging using OCT. That is, the luminance value of the motion contrast image is weighted with the values of the blood vessel candidate region acquired in the same manner as in the first embodiment, and a luminance correction coefficient map is generated by dividing the luminance value of the high-dimensionally smoothed motion contrast image by the luminance value of the weighted motion contrast image smoothed in the fast axis direction. Further, a case will be described in which luminance step artifacts generated in the slow axis direction of the motion contrast image are robustly suppressed by multiplying the motion contrast image by the luminance correction coefficients.
  • In that case, a high motion contrast value is calculated even in areas where red blood cell displacement is not actually occurring, and a band-like high-luminance step, as indicated by the white arrow in FIG. 4D, is generated.
  • the same motion contrast image may include both a low luminance step and a high luminance step.
  • Furthermore, when eye tissue such as the retina and choroid appears in only part of the fast axis direction of the image, or when there is a region where layer boundary detection fails, the band-shaped high-luminance step may be interrupted partway, or its thickness or height may change partway.
  • FIG. 9 shows a configuration of an image processing system 10 including the image processing apparatus 101 according to the present embodiment.
  • the image acquisition unit 101-01 includes a motion contrast data generation unit 101-12
  • the image processing unit 101-04 includes a synthesis unit 101-45.
  • FIG. 10 shows an image processing flow in this embodiment. In FIG. 10, steps S1002 and S1003 are the same as those in the first embodiment, and a description thereof will not be repeated.
  • Step 1001> By operating the input unit 103, the operator sets an OCT image capturing condition to be instructed to the tomographic image capturing apparatus 100.
  • Specifically, the procedure consists of 1) selecting a scan mode and 2) setting the imaging parameters corresponding to that scan mode.
  • In the present embodiment, OCT imaging is executed with the following settings:
    1) Select the OCTA scan mode
    2) Set the following imaging parameters:
       2-1) Scan area size: 10 × 10 mm
       2-2) Main scanning direction: horizontal
       2-3) Scan interval: 0.01 mm
       2-4) Fixation light position: midway between the fovea and the optic papilla
       2-5) Number of B-scans at the same imaging position: 4
       2-6) Coherence gate position: vitreous side
  • Next, the operator operates the input unit 103 and presses an imaging start button (not shown) on the imaging screen to start repeated OCTA imaging under the set imaging conditions.
  • the imaging control unit 101-03 instructs the tomographic imaging apparatus 100 to repeatedly perform OCTA imaging based on the above setting, and the tomographic imaging apparatus 100 acquires a corresponding OCT tomographic image.
  • the number of times of repetitive imaging (the number of clusters) in this step is five.
  • the number of times of repetitive imaging (the number of clusters) may be set to an arbitrary number.
  • the tomographic imaging apparatus 100 also acquires an SLO image, and executes a tracking process based on the SLO moving image.
  • The reference SLO image used for the tracking processing during the repeated OCTA imaging is the one set in the first cluster imaging; a common reference SLO image is used in all cluster imaging. Likewise, during the second and subsequent cluster imaging, the same setting values as in the first cluster imaging are used (unchanged) for the selection of the left or right eye and for the execution of the tracking processing.
  • Step 1004> The image acquisition unit 101-01 and the image processing unit 101-04 align the tomographic images belonging to the same cluster and align the tomographic images between clusters using the positioning unit 101-41, and then generate a motion contrast image.
  • the motion contrast data generation unit 101-12 calculates a motion contrast between adjacent tomographic images in the same cluster.
  • the decorrelation value Mxy is obtained as the motion contrast based on the following equation (2).
  • Axy indicates the amplitude (of the complex number data after the FFT processing) at the position (x, y) of the tomographic image data A
  • Bxy indicates the amplitude at the same position (x, y) of tomographic data B. Mxy satisfies 0 ≤ Mxy ≤ 1 and takes a value closer to 1 as the difference between the two amplitude values increases.
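  • Equation (2) itself is not reproduced in this text. A standard amplitude decorrelation formula that satisfies the stated properties (0 ≤ Mxy ≤ 1, approaching 1 as the two amplitudes differ) would be, as an assumption consistent with the description:

$$M_{xy} = 1 - \frac{2\,A_{xy}\,B_{xy}}{A_{xy}^{2} + B_{xy}^{2}}$$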
  • The decorrelation calculation of equation (2) is performed between every pair of adjacent tomographic images belonging to the same cluster, and an image whose pixel values are the average of the resulting (number of tomographic images per cluster − 1) motion contrast values is generated as the final motion contrast image.
  • the motion contrast is calculated based on the amplitude of the complex data after the FFT processing here
  • the method of calculating the motion contrast is not limited to the above.
  • the motion contrast may be calculated based on the phase information of the complex data, or the motion contrast may be calculated based on both the amplitude and the phase information.
  • the motion contrast may be calculated based on the real part or the imaginary part of the complex data.
  • the decorrelation value is calculated as the motion contrast, but the motion contrast calculation method is not limited to this.
  • the motion contrast may be calculated based on a difference between two values, or the motion contrast may be calculated based on a ratio of the two values.
  • the final motion contrast image is obtained by obtaining the average value of the plurality of acquired decorrelation values, but the present invention is not limited to this.
  • an image having the median value or the maximum value of a plurality of acquired decorrelation values as pixel values may be generated as a final motion contrast image.
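  • A minimal sketch of this per-cluster computation, assuming the decorrelation form given above; the `reduce` argument selects how the (number of tomographic images per cluster − 1) decorrelation values are collapsed into the final pixel value:

```python
import numpy as np

def cluster_motion_contrast(cluster, reduce="mean", eps=1e-12):
    """cluster : array of shape (r, Z, X) holding the r aligned
    tomographic amplitude images of one cluster."""
    a, b = cluster[:-1], cluster[1:]              # adjacent image pairs
    # Amplitude decorrelation for each adjacent pair: (r - 1) values.
    decorr = 1.0 - (2.0 * a * b) / (a ** 2 + b ** 2 + eps)
    # Collapse the (r - 1) values into the final motion contrast image.
    if reduce == "mean":
        return decorr.mean(axis=0)
    if reduce == "median":
        return np.median(decorr, axis=0)
    return decorr.max(axis=0)                     # e.g. maximum value
```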
  • the image processing unit 101-04 three-dimensionally aligns a group of motion contrast images obtained through repeated OCTA imaging, and performs averaging to generate a high-contrast combined motion contrast image.
  • the combining process is not limited to the simple averaging.
  • the luminance value of each motion contrast image may be arbitrarily weighted and averaged, or an arbitrary statistical value such as a median value may be calculated.
  • the present invention also includes a case where the positioning process is performed two-dimensionally.
  • The synthesizing unit 101-45 may be configured to determine whether any motion contrast image is inappropriate for the synthesizing process, and then perform the synthesis excluding the images determined to be inappropriate. For example, when the evaluation value of a motion contrast image (for example, the average or median of its decorrelation values) is outside a predetermined range, that image may be determined to be unsuitable for the combination processing, as sketched below.
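  • A sketch of this screening, assuming the mean decorrelation as the evaluation value and an acceptance interval chosen by the implementer:

```python
import numpy as np

def select_for_composition(mc_images, lo=0.05, hi=0.6):
    """Keep only motion contrast images whose evaluation value (here:
    mean decorrelation) lies inside the accepted range [lo, hi]."""
    return [m for m in mc_images if lo <= float(np.mean(m)) <= hi]

# The remaining images can then be composited, e.g. by averaging:
#   composite = np.mean(np.stack(selected), axis=0)
```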
  • the correcting unit 101-44 performs a process of three-dimensionally suppressing the projection artifact occurring in the motion contrast image.
  • the projection artifact refers to a phenomenon in which the motion contrast in the superficial blood vessels of the retina is reflected on the deep side (the deep retina, the outer retina and the choroid), and a high decorrelation value is generated in the deep area where no blood vessels actually exist.
  • The correction unit 101-44 executes processing for suppressing projection artifacts generated in the three-dimensional synthesized motion contrast image. Any known projection artifact suppression method may be used; in the present embodiment, step-down exponential filtering is used. In step-down exponential filtering, projection artifacts are suppressed by executing the processing represented by Expression (3) on each A-scan of the three-dimensional motion contrast image.
  • In Expression (3), the damping coefficient is a coefficient having a negative value,
  • D(x, y, z) is the decorrelation value before the projection artifact suppression processing,
  • and D_E(x, y, z) is the decorrelation value after the suppression processing.
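  • Expression (3) is likewise not reproduced in this text. A plausible form consistent with the description (per-A-scan processing from the surface downward, with the negative damping coefficient, here written γ, so that decorrelation accumulated above attenuates deeper values) is:

$$D_{E}(x, y, z) = D(x, y, z)\,\exp\!\Bigl(\gamma \sum_{z' < z} D_{E}(x, y, z')\Bigr), \qquad \gamma < 0$$

  This reconstruction is an assumption; only the roles of γ, D, and D_E are stated in the text.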
  • The image processing apparatus 101 stores the acquired image group (SLO images and tomographic images) together with their imaging condition data, and the generated motion contrast images together with their generation condition data, in the external storage unit 102 in association with the examination date and time and the information identifying the eye to be examined.
  • the weighting unit 101-442 generates a weighted motion contrast image in which the luminance value in the blood vessel candidate region of the motion contrast image is weighted using the information on the distribution of the blood vessel candidate region generated by the blood vessel acquisition unit 101-421 in S1003.
  • the high-dimensional conversion unit 101-4411 generates a high-dimensional smoothed motion contrast image
  • the low-dimensional conversion unit 101-4412 performs smoothing processing on the weighted motion contrast image in the fast axis direction.
  • The arithmetic unit 101-443 generates a luminance correction coefficient map for the motion contrast image by performing arithmetic processing on the high-dimensionally smoothed motion contrast image and the low-dimensionally smoothed motion contrast image.
  • the correction unit 101-44 multiplies each pixel of the motion contrast image by the luminance correction coefficient value calculated in S1005, thereby generating a motion contrast image with the luminance step corrected.
  • the method of applying the luminance correction coefficient is not limited to multiplication, and any known calculation method may be applied.
  • at least a part of the three-dimensional motion contrast image may be corrected using the luminance correction coefficient value.
  • at least a part of the three-dimensional motion contrast image includes a C-scan motion contrast image and the like.
  • the display control unit 101-05 displays the luminance-corrected motion contrast image generated in step S1006 on the display unit 104.
  • The luminance-corrected motion contrast image is stored in the storage unit 101-02 or the external storage unit 102.
  • The image processing unit 101-04, which is an example of the image generation unit, may generate at least one front image (front motion contrast image) based on at least a part of the corrected three-dimensional motion contrast image.
  • the display control unit 101-05 preferably causes the display unit 104 to display at least one generated front image.
  • The report screen 1300 is displayed on the display unit 104 by pressing the Report button 1312 in FIG. 13.
  • a luminance step corrected front tomographic image 1309 is displayed at the lower left of the report screen 1300, and the projection range can be changed by the operator selecting from the list displayed in the list box 1310.
  • a frontal tomographic image or a frontal motion contrast image with the luminance step corrected is superimposed on the SLO image at the upper left of the report screen 1300.
  • Luminance-step-corrected front motion contrast images 1301 and 1305 having different projection depth ranges are displayed above and below the center of the report screen 1300.
  • the projection range of the luminance step-corrected front motion contrast image can be changed by the operator selecting from a predetermined depth range set (1302 and 1306) displayed in the list box.
  • The type and offset position of the layer boundaries used to specify the projection range can be changed from user interfaces such as 1303 and 1307, and the projection range can also be changed by operating, from the input unit 103, the layer boundary data (1304 and 1308) superimposed on the tomographic image.
  • the image projection method of the motion contrast image after the luminance step correction and the presence or absence of the projection artifact suppression process may be changed by selecting the user interface such as a context menu.
  • the operator may press a synthesis instruction button 1311 for synthesizing a plurality of motion contrast images to generate a synthesized motion contrast image with corrected luminance step.
  • The composition instruction button 1311 in FIG. 13 shows an example for a superimposition (averaging) process; however, the present invention is not limited to this, and a composition instruction button for a pasting (stitching) process is also included in the present invention.
  • On the report screen 1300, tomographic images and motion contrast images to which the luminance step correction processing can be applied are displayed, and the application state (applied / not applied) of the luminance step correction processing to those images can be switched.
  • the user interface is not limited to the check box, and any known user interface may be used.
  • a user interface may be provided that can independently indicate whether or not the luminance step correction processing can be applied to the tomographic image and the motion contrast image.
  • For example, separate buttons may be provided for the tomographic image and the motion contrast image, or a single user interface may offer four options: (1) apply to both, (2) apply only to the tomographic image, (3) apply only to the motion contrast image, (4) apply to neither.
  • a tomographic image or motion contrast image to which the luminance step correction processing has been applied and a tomographic image or motion contrast image to which the luminance step correction processing has not been applied may be displayed on the display unit 104 side by side.
  • the correction unit 101-44 acquires the motion contrast image and the synthesized motion contrast image generated by the motion contrast data generation unit 101-12 and the synthesis unit 101-45 in S1004.
  • the operator instructs, via the user interface displayed on the display unit 104, a desired projection depth range and generation of a front motion contrast image corresponding to the projection depth range.
  • The projection unit 101-43 projects the data in the instructed depth range to generate a front motion contrast image (FIG. 11A).
  • In the present embodiment, the maximum value of the motion contrast data in the depth direction corresponding to each pixel in the plane corresponding to the fundus front is set as the pixel value of that pixel.
  • the projection processing is not limited to such maximum intensity projection, and any known projection method may be used.
  • For example, the average value, the median value, the mode, or the like of the motion contrast data in the depth direction corresponding to each pixel may be set as its pixel value, as in the sketch below.
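  • As a sketch, the projection step could be written as follows; the depth-range indices z0:z1 (taken from the detected layer boundaries) and the NumPy reductions are illustrative assumptions:

```python
import numpy as np

def project_front_image(volume, z0, z1, method="max"):
    """volume : 3D motion contrast data of shape (Z, Y, X).
    Projects the depth range [z0, z1) onto the fundus-front plane."""
    slab = volume[z0:z1]                   # restrict to the depth range
    if method == "max":                    # maximum intensity projection
        return slab.max(axis=0)
    if method == "mean":
        return slab.mean(axis=0)
    return np.median(slab, axis=0)         # e.g. median projection
```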
  • the high-dimensional conversion unit 101-4411 calculates a high-dimensional approximate value distribution by smoothing the luminance value of the front motion contrast image two-dimensionally.
  • Specifically, the high-dimensional conversion unit 101-4411 two-dimensionally smoothes the luminance value of each pixel of the front motion contrast image generated in S511, thereby obtaining a high-dimensional approximate value distribution of the luminance of the motion contrast image, as shown in FIG. 11C.
  • Here, smoothing is performed as an example of the process of calculating the approximate value distribution, but a morphological operation such as closing or opening may be performed instead, as described later.
  • The smoothing may be performed with an arbitrary spatial filter, or by frequency-transforming the motion contrast data using, for example, the fast Fourier transform (FFT) and then suppressing the high-frequency components.
  • In the latter case, no convolution operation is needed, so the smoothing can be executed at high speed; a sketch follows.
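  • An FFT-based low-pass smoothing of this kind might look like the following; the Gaussian transfer function and the cutoff frequency are illustrative assumptions:

```python
import numpy as np

def fft_smooth(img, cutoff=0.05):
    """Smooth a 2D image by suppressing its high-frequency components
    in the Fourier domain instead of convolving a spatial kernel."""
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]   # cycles/pixel, slow axis
    fx = np.fft.fftfreq(img.shape[1])[None, :]   # cycles/pixel, fast axis
    # Gaussian low-pass transfer function: attenuates |f| >> cutoff.
    H = np.exp(-(fx ** 2 + fy ** 2) / (2.0 * cutoff ** 2))
    return np.real(np.fft.ifft2(F * H))
```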
  • the weighting unit 101-442 acquires the blood vessel candidate region map V (x, y) (FIG. 7D) from the blood vessel acquisition unit 101-421.
  • The weighting unit 101-442 weights the luminance values in the blood vessel candidate region of the motion contrast image using the values of the blood vessel candidate region map V(x, y). Note that this weighting is an example of applying different arithmetic processing, when calculating the low-dimensional approximate value distribution (an example of the second approximate value distribution), to a predetermined tissue (a blood vessel or bleeding region lying along the fast axis direction of the measurement light used to acquire the three-dimensional motion contrast image) and to regions other than that tissue. This weighting is not essential to the present invention.
  • Specifically, the higher the value of the blood vessel candidate region map V(x, y) (that is, the more blood-vessel-like the region), the closer the luminance value of the corresponding region of the front motion contrast image acquired in S511 is set to the high-dimensional approximate value M_2ds(x, y) calculated in S512; in regions where V(x, y) is low, the luminance value M(x, y) acquired in S511 is left closer to its original value.
  • FIG. 11E shows an example of a weighted front motion contrast image M_w (x, y).
  • The weighting method for the luminance values of the blood vessel candidate region shown here is merely an example; any weighting may be used as long as it reduces the luminance values of blood vessel candidate regions running in the fast axis direction or brings them closer to the luminance values in the vicinity of the blood vessel candidate region. One concrete reading is given below.
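  • One concrete reading of this weighting (also used in the sketch given for the first embodiment) is a linear blend controlled by V(x, y); the specific blend is an assumption, since the text only requires that vessel-like pixels be pulled toward the high-dimensional approximate value:

$$M_{w}(x, y) = V(x, y)\,M_{2ds}(x, y) + \bigl(1 - V(x, y)\bigr)\,M(x, y)$$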
  • The low-dimensional conversion unit 101-4412 calculates a low-dimensional approximate value distribution of the luminance values of the motion contrast image. Specifically, a process for calculating a rough value distribution in the fast axis direction (smoothing or a morphological operation) is applied to the luminance value of each pixel of the front motion contrast image in which the blood vessel candidate region has been weighted.
  • FIG. 11F shows an example of the low-dimensional approximate value distribution calculated in this step.
  • This avoids the problem of the “blood vessel region traveling in the fast axis direction” remaining as a band-shaped high-luminance area.
  • The calculation unit 101-443 calculates the luminance correction coefficient distribution for the motion contrast image by operating on its high-dimensional and low-dimensional approximate value distributions.
  • Specifically, a luminance correction coefficient map for the motion contrast image is obtained by dividing the luminance value of the two-dimensionally smoothed motion contrast image generated in S512 by the luminance value of the weighted, fast-axis-smoothed motion contrast image generated in S515 (FIG. 11G).
  • FIG. 12A is an example of a motion contrast image containing both a band-shaped luminance step (white line) and a blood vessel region running in the fast axis direction. Only the band-shaped luminance step must be suppressed selectively, without excessively suppressing the luminance values of the blood vessel region running in the fast axis direction.
  • FIGS. 12B, 12D, and 12F show examples of the processing results when the low-dimensional conversion (S515), the luminance correction coefficient map calculation (S516), and the luminance step correction (S1006) are performed without luminance weighting of the blood vessel candidate region.
  • In FIG. 12B, a high-luminance region remains in a band shape along the blood vessel running in the fast axis direction (white arrow) and resembles a luminance step; consequently, a low correction coefficient value is calculated in the blood vessel region indicated by the white arrow in FIG. 12D.
  • FIGS. 12C, 12E, and 12G show the processing results when the same steps are performed after applying luminance weighting to the blood vessel candidate region.
  • In FIG. 12C, no band-like high-luminance area corresponding to the blood vessel region running in the fast axis direction is generated.
  • In FIG. 12E, an appropriate correction coefficient value is calculated for the blood vessel region, and the luminance step correction of S1006 does not excessively suppress the luminance values of the vessel running in the fast axis direction or of its vicinity (FIG. 12G).
  • the method of suppressing the band-shaped luminance step generated on the front motion contrast image (generating the front motion contrast image with the luminance step corrected) has been described, but the present invention is not limited to this.
  • the following procedure may be used to suppress a band-shaped luminance step generated on the three-dimensional motion contrast image, and generate a three-dimensional motion contrast image with the luminance step corrected.
  • That is, front motion contrast images are generated for a number of different projection depth ranges, and a luminance step correction coefficient map is generated for each front motion contrast image.
  • Then, each pixel of the three-dimensional motion contrast image is multiplied by the luminance step correction coefficient value corresponding to the projection depth range to which the pixel belongs, thereby correcting the luminance step.
  • Examples of the different projection depth ranges include four types: the retinal surface layer, the deep retinal layer, the outer retinal layer, and the choroid. Alternatively, each layer type belonging to the retina and the choroid may be specified individually.
  • As for the timing of the luminance step correction, a three-dimensional tomographic image or three-dimensional motion contrast image with the luminance step corrected may be generated in advance at a predetermined timing according to the above procedure, and a front tomographic image or front motion contrast image may then be generated and displayed at the point in time when the operator gives an instruction.
  • Examples of the generation timing of the corrected three-dimensional tomographic image or three-dimensional motion contrast image include immediately after capturing the tomographic image, during reconstruction, and during storage.
  • the list box 1310 shown in FIG. 13 is an example of a user interface for generating a front tomographic image.
  • a user interface for instructing generation of a front motion contrast image for example, there are user interfaces (1302, 1303, 1304, 1306, 1307, 1308) for designating and changing the projection depth range shown in FIG.
  • luminance step correction may be performed on each tomographic image or motion contrast image in advance or based on a compositing (overlapping or pasting) instruction from the operator, and then compositing may be performed.
  • the luminance step may be corrected for the composite image (the superimposed image or the bonded image) and displayed on the display unit 104.
  • The present invention is not limited to correcting the band-shaped luminance step on a motion contrast image generated when imaging is performed by a so-called 3D scan; it can also be applied to correcting luminance steps occurring in the slow axis direction of motion contrast images captured with various other scan patterns.
  • For example, the present invention includes the case of correcting a luminance step generated in the slow axis direction when imaging is performed with multiple circle scans having different radii, or with a radial scan.
  • the image processing apparatus 101 robustly suppresses various luminance step artifacts generated in the slow axis direction of a motion contrast image generated from a tomographic image of the subject's eye obtained by OCT-based cluster imaging.
  • To that end, the following image processing is performed: the luminance values in the blood vessel candidate region of the motion contrast image, acquired in the same manner as in the first embodiment, are weighted; the weighted image is smoothed in the fast axis direction; and the high-dimensionally smoothed motion contrast image is divided by the result to generate a luminance correction coefficient map.
  • In this way, luminance step artifacts generated in the slow axis direction of the motion contrast image are robustly suppressed. This makes it possible to robustly suppress the luminance steps generated in the slow axis direction of motion contrast images of the subject's eye.
  • The image processing apparatus displays, on the shooting confirmation screen, a medical image such as a front image (front tomographic image or front motion contrast image) to which the various artifact reduction processes, such as the luminance step artifact suppression described above, have not been applied,
  • and displays, on the report screen, a medical image to which the artifact reduction processing has been applied.
  • This allows the operator to check, on the shooting confirmation screen, a medical image with as little processing applied as possible, making it easy to grasp the success or failure of imaging (or the degree of failure).
  • On the report screen, the operator can check a medical image in which the various artifacts unnecessary for analysis are reduced as much as possible, for example in order to grasp an analysis result. The operator can therefore check a medical image suited to each purpose.
  • the artifact in the present embodiment is not limited to the luminance step artifact described above, and may be any artifact within the above-described purpose.
  • the artifact in the present embodiment may be a projection artifact in OCTA (a blood vessel that originally does not exist in the lower layer is drawn by erroneously detecting the fluctuation of the shadow of the blood vessel in the upper layer as a motion contrast).
  • the execution of the various artifact reduction processing itself may be started immediately after shooting, or may be started after a transition from the shooting confirmation screen to the report screen.
  • The image processing unit 101-04 is an example of a generating unit that generates a front image in which the various artifacts are reduced.
  • The shooting confirmation screen is an example of a first display screen,
  • and the report screen is an example of a second display screen.
  • the report screen is one of the display screens after switching from the photographing confirmation screen, and is a display screen for the operator to check the image after various processes, the analysis result, and the like.
  • the image processing apparatus may include a receiving unit that receives an instruction regarding whether or not to apply the luminance step artifact suppression processing that occurs in the slow axis direction of the tomographic image or the motion contrast image.
  • The display control unit can then display the tomographic image or the motion contrast image on the display unit either with the luminance step artifact suppression processing applied or with it not applied.
  • the setting relating to the application of the luminance step artifact suppression processing when displaying the tomographic image and the motion contrast image is different between the imaging confirmation screen and the report screen.
  • the image processing apparatus is different from the image processing apparatus according to the first or second embodiment in that the image processing apparatus includes a receiving unit (not shown).
  • the image processing flow in the present embodiment is the same as that in the second embodiment (FIG. 10).
  • steps S1001 to S1004 are the same as those in the second embodiment, and a description thereof will be omitted.
  • In step S1005, the weighting unit 101-442 generates a luminance correction coefficient map for the motion contrast image by performing the same processing as in S1005 of the second embodiment. In addition, in this embodiment, the weighting unit 101-442 generates a luminance correction coefficient map for the tomographic image by performing the same processing as in S304 of the first embodiment.
  • The correction unit 101-44 multiplies each pixel of the motion contrast image by the luminance correction coefficient value for the motion contrast image calculated in S1005, thereby generating a motion contrast image with the luminance step corrected. Further, in this embodiment, the correction unit 101-44 multiplies each pixel of the tomographic image by the luminance correction coefficient value for the tomographic image calculated in S1005, thereby generating a tomographic image with the luminance step corrected.
  • Step 1007> Based on the tomographic image generated in S1002, the motion contrast image generated in S1004, and the luminance-step-corrected motion contrast image and tomographic image generated by the correction unit 101-44, the display control unit 101-05 displays a shooting confirmation screen (FIG. 14) on the display unit 104. On this shooting confirmation screen, a fundus image 1401 is displayed at the upper left, a front tomographic image 1402 at the lower left, and B-scan tomographic images (1406a, 1406b, 1406c) corresponding to the scanning positions (1402a, 1402b, 1402c) on the front tomographic image.
  • On the shooting confirmation screen, the operator issues an instruction regarding whether to store the captured tomographic image (by pressing the OK button 1407 or the NG button 1408) and an instruction regarding continuation of repeated imaging (Repeat button 1409).
  • Based on the instruction received by the receiving unit, the image processing apparatus performs the corresponding data saving or imaging continuation processing. Further, by pressing the Report button 1312 as in S1007 of the second embodiment, the display control unit 101-05 displays the report screen 1300 on the display unit 104.
  • the front tomographic image 1402 is displayed at the lower left, and the display is switched to the front motion contrast image (FIG. 15B) after the generation of the motion contrast image and the luminance step correction processing are completed.
  • a user interface 1403 for instructing whether or not to apply the luminance step correction processing to a tomographic image or a motion contrast image displayed on the imaging confirmation screen is provided at the lower left portion of the imaging confirmation screen.
  • In the present embodiment, a check box labeled Image Quality Enhancement is displayed, and the check box is initially in the non-selected (OFF) state.
  • In this case, the display control unit 101-05 displays on the display unit 104 the tomographic image or the motion contrast image with the luminance step correction processing not applied (for example, for the motion contrast image, the state shown in FIG. 15B).
  • When the check box is selected (ON), the tomographic image and the motion contrast image are displayed on the display unit 104 with the luminance step correction processing applied (FIGS. 15A and 15C).
  • a user interface capable of independently instructing whether or not to apply the luminance step correction processing on the main imaging confirmation screen may be provided for the tomographic image and the motion contrast image.
  • The instruction user interface may be provided separately for the tomographic image and the motion contrast image, or a single user interface may offer four options: (1) apply to both, (2) apply only to the tomographic image, (3) apply only to the motion contrast image, (4) apply to neither.
  • By selecting (1), the operator can grasp the image quality of the tomographic image and motion contrast image as they will be used in actual medical care, and can then instruct whether to save them or continue repeated imaging.
  • By selecting (4), the operator can grasp how much misalignment exists between the tomographic images, and where, that causes defective portions (low-luminance areas) in the tomographic image or white lines in the generated motion contrast image.
  • the user interface regarding whether or not the luminance step correction processing can be applied to the tomographic image or the motion contrast image may be configured to be independently settable between the imaging confirmation screen and the report screen.
  • For example, the default state of the user interface may be set so that the luminance step correction processing for the tomographic image or the motion contrast image is displayed in the non-applied state on the shooting confirmation screen and in the applied state on the report screen. With this configuration, a failed imaging portion can easily be grasped at the time of imaging confirmation, and a high-quality tomographic image or motion contrast image suitable for medical care can be observed on the report screen.
  • the instruction selected before the transition of the display screen may be configured to be reflected after the transition.
  • the configuration may be such that the instruction selected on the shooting confirmation screen is reflected even after the transition to the report screen.
  • Further, for example, an instruction selected on a display screen such as a report screen, on which an image obtained at a predetermined date and time is displayed, may be configured to be reflected even after transition to the display screen for follow-up observation.
  • an instruction selected on the display screen for follow-up observation may be configured to be collectively reflected on a plurality of images at different dates and times. As a result, convenience for the operator can be improved.
  • the correction processing targeted by the user interface for designating the applicability on the shooting confirmation screen or the report screen is not limited to the luminance step correction processing, and may be configured to select any known high-quality processing.
  • For example, a user interface for receiving an instruction on whether to apply image quality enhancement processing by machine learning may be provided on the shooting confirmation screen,
  • and the application or non-application of that processing to the tomographic image or motion contrast image may be switched and displayed according to the selection state of the user interface.
  • the application or non-application of the luminance step correction processing may be switched by a predetermined user interface or script to display a tomographic image or a motion contrast image.
  • the image processing apparatus 101 may be configured to display a tomographic image or motion contrast image to which the luminance step correction processing has been applied and a tomographic image or motion contrast image to which the luminance step correction processing has not been applied, side by side.
  • the image processing apparatus 101 may include a receiving unit that receives an instruction as to whether or not to apply the luminance step artifact suppression processing generated in the slow axis direction of the tomographic image or the motion contrast image.
  • The display control unit can display the tomographic image or motion contrast image on the display unit either with the luminance step artifact suppression processing applied or with it not applied. For this reason, a failed imaging portion can easily be confirmed at the time of imaging confirmation, and a high-quality tomographic image or motion contrast image suitable for medical care can be observed when the report screen is viewed.
  • The display control unit may display, together with the front image (front tomographic image or front motion contrast image), the determination result (classification result) of the state of various artifacts (for example, a luminance step artifact) in that front image.
  • the state of the artifact is, for example, the presence or absence of the artifact.
  • For example, labels indicating the state of various artifacts are attached to a plurality of front images of at least one eye to be examined, and a learned model obtained by machine learning using the plurality of labeled front images is used,
  • so that the state of various artifacts in an input front image (for example, the presence or absence of various artifacts) is displayed on the shooting confirmation screen. That is, the determination result of the artifact state obtained using the learned model can be displayed on the shooting confirmation screen.
  • By using the learned model, the processing time can be reduced while maintaining highly accurate determination. The examiner can therefore confirm an accurate determination result even immediately after imaging, which also makes it more efficient to judge, immediately after imaging, whether re-imaging is necessary. The accuracy and efficiency of diagnosis can thus be improved.
  • the label of the state of various artifacts may be manually input by an operator via a user interface, or may be an execution result by rule-based analysis for automatically or semi-automatically determining various artifacts.
  • the display screen on which the determination result of the state of the artifact is displayed is not limited to the shooting confirmation screen.
  • It may be displayed on at least one display screen such as a report screen, a display screen for follow-up observation, or a preview screen for various adjustments before imaging (on which various live moving images are displayed).
  • Here, the machine learning includes, for example, deep learning composed of a multi-layer neural network. As at least part of the multi-layer neural network, for example, a convolutional neural network (CNN) can be used, as in the illustrative sketch below.
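  • As an illustration only (no architecture is prescribed here), a minimal CNN classifier for artifact-state labels of front images could be sketched in PyTorch as follows; every layer size and the class count are assumptions:

```python
import torch
import torch.nn as nn

class ArtifactClassifier(nn.Module):
    """Toy CNN mapping a 1-channel front image to artifact-state
    logits (e.g. 0 = no artifact, 1 = luminance step present)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):                  # x: (N, 1, H, W)
        return self.head(self.features(x).flatten(1))
```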
  • the machine learning is not limited to the deep learning, but may be any model that can extract (represent) the feature amount of learning data such as an image by learning.
  • the learned model may be obtained by, for example, supervised learning using learning data in which information relating to the state of various artifacts is correct data (teacher data) and a medical image such as a front image is input data.
  • The learned model may also be obtained by unsupervised learning using a plurality of medical images, such as a plurality of front images of at least one eye to be examined, as learning data, without using the above-described correct data (teacher data). Further, the learned model may be updated by additional learning, and may be customized, for example, as a model suitable for the operator.
  • the image processing unit 101-04 is an example of a determining unit (classifying unit) that determines the state of various artifacts in a medical image such as a front image.
  • For example, the determination unit can determine (classify), as the state of the artifact, the presence or absence of the artifact in the medical image.
  • The determination unit may also determine, as the state of the artifact, a stage corresponding to the degree of the artifact in the medical image (classifying the medical image into one of a plurality of stages).
  • The plurality of stages may be, for example, the presence or absence of the artifact, or a plurality of levels according to the number of artifacts, the size of their extent, and the like.
  • The determination unit may also determine, as the state of the artifact, the type of the artifact in the medical image (classifying the medical image into one of a plurality of types).
  • In this case, the above-described learned model may be obtained by learning using learning data in which a plurality of different types of medical images corresponding to one another are set as input.
  • For example, the learned model can be obtained by learning using learning data in which a front tomographic image and a front motion contrast image of the same part of the same subject's eye (or ones in which at least part of the predetermined part is obtained from a common interference signal) are set.
  • By using learning data in which a plurality of medical images of different types are set, it becomes possible to classify not only the state of the artifact but also the type corresponding to the feature amount of the medical image, and the accuracy of this classification can be improved.
  • analysis results such as a desired layer thickness and various blood vessel densities may be displayed on the report screens in the various embodiments described above.
  • a highly accurate analysis result can be displayed.
  • the analysis result may be displayed as an analysis map, a sector indicating a statistical value corresponding to each divided region, or the like.
  • the above-described learned model may be obtained by learning using the analysis result of the medical image as learning data.
  • The learned model may be obtained by learning using learning data including a medical image and an analysis result of that medical image, or learning data including a medical image and an analysis result of a medical image of a different type.
  • The learned model may also be obtained by learning using learning data including input data in which a plurality of different types of medical images of a predetermined part, such as a front tomographic image and a front motion contrast image, are set.
  • various diagnostic results such as glaucoma and age-related macular degeneration may be displayed on the report screens in the various embodiments described above.
  • a highly accurate diagnosis result can be displayed.
  • the position of the specified abnormal part may be displayed on the image, or the state of the abnormal part may be displayed by characters or the like.
  • the above-described learned model may be obtained by learning using a diagnosis result of a medical image as learning data.
  • The learned model may be obtained by learning using learning data including a medical image and a diagnosis result of that medical image, or learning data including a medical image and a diagnosis result of a medical image of a different type.
  • the learned model described above may be a learned model obtained by learning using learning data including input data in which a plurality of different types of medical images of a predetermined part of the subject are set.
  • Conceivable input data included in the learning data are, for example, input data in which a fundus motion contrast front image and a luminance front image (or luminance tomographic image) are set, and input data in which a fundus tomographic image (B-scan image) and a color fundus image (or fluorescent fundus image) are set.
  • the plurality of different types of medical images may be any images obtained by different modalities, different optical systems, different principles, and the like.
  • the above-described learned model may be a learned model obtained by learning using learning data including input data in which a plurality of medical images of different parts of the subject are set.
  • Conceivable input data included in the learning data are, for example, input data in which a tomographic image (B-scan image) of the fundus and a tomographic image (B-scan image) of the anterior segment are set, and input data in which a three-dimensional OCT image of the macula of the fundus and a circle-scan (or raster-scan) tomographic image of the optic disc of the fundus are set.
  • The input data included in the learning data may also be a plurality of medical images of different parts of the subject and of different types.
  • the input data included in the learning data may be, for example, input data that sets a tomographic image of the anterior ocular segment and a color fundus image.
  • the above-described learned model may be a learned model obtained by learning using learning data including input data in which a plurality of medical images of a predetermined part of the subject with different imaging angles of view are set.
  • The input data included in the learning data may also be a set of a plurality of medical images obtained by imaging a predetermined part in a time-divided manner over a plurality of regions, such as a panoramic image.
  • the input data included in the learning data may be input data in which a plurality of medical images at different dates and times of a predetermined part of the subject are set.
  • The display screen on which at least one of the analysis result and the diagnosis result is displayed is not limited to the report screen; it may be, for example, at least one display screen such as the shooting confirmation screen, the display screen for follow-up observation, or the preview screen for various adjustments before imaging (on which various live moving images are displayed). For example, by displaying at least one of an analysis result and a diagnosis result obtained using the above-described learned model on the shooting confirmation screen, the examiner can confirm a highly accurate result even immediately after imaging.
  • the preview screens in the various embodiments and modifications described above may be configured such that the learned model is used for at least one frame of a live moving image.
  • the learned model corresponding to each live moving image may be used.
  • the processing time can be shortened, so that the examiner can obtain highly accurate information before the start of imaging. For this reason, for example, failure in re-imaging can be reduced, so that the accuracy and efficiency of diagnosis can be improved.
  • The plurality of live moving images include, for example, a moving image of the anterior segment for alignment in the XYZ directions, a front moving image of the fundus for focus adjustment of the fundus observation optical system and OCT focus adjustment, and a tomographic moving image of the fundus for OCT coherence gate adjustment (adjustment of the optical path length difference between the measurement optical path and the reference optical path).
  • the moving image to which the above-described learned model can be applied is not limited to a live moving image, and may be, for example, a moving image stored (saved) in a storage unit.
  • a moving image obtained by aligning at least one frame of the tomographic moving image of the fundus stored (saved) in the storage unit may be displayed on the display screen.
  • At this time, for example, a reference frame may be selected based on a condition such as the vitreous body being present on the frame as much as possible, the other frames may be aligned to the selected reference frame, and the resulting moving image may be displayed on the display screen.
  • In addition, a learned model obtained by learning for each imaging region may be selectively used. Specifically, a selecting means may be provided for selecting one of a plurality of learned models, including a first learned model obtained using learning data including images of a first imaging region (lung, eye to be examined, or the like) and a second learned model obtained using learning data including images of a second imaging region different from the first imaging region.
  • Further, there may be provided a control means that, in accordance with an instruction from the operator, retrieves data in which the imaging region (obtained from header information or input manually by the operator) corresponding to the selected learned model and an image of that imaging region form a pair (for example, from a server of an external facility such as a hospital or research institute via a network), and that executes learning using the retrieved data as additional learning on the selected learned model. In this way, additional learning can be performed efficiently for each imaging region using images of the imaging region corresponding to the learned model.
  • Further, the validity of the learning data for additional learning may be checked by confirming consistency using a digital signature or hashing, thereby protecting the learning data. At this time, if the validity of the learning data for additional learning cannot be confirmed by the digital signature or hashing check, a warning to that effect is issued and additional learning using that learning data is not performed.
  • the instruction from the examiner may be an instruction by voice or the like in addition to a manual instruction (for example, an instruction using a user interface or the like).
  • a machine learning engine including a speech recognition engine obtained by machine learning may be used.
  • the manual instruction may be an instruction by character input using a keyboard, a touch panel, or the like.
  • a machine learning engine including a character recognition engine obtained by machine learning may be used.
  • the instruction from the examiner may be a gesture instruction.
  • a machine learning engine including a gesture recognition engine obtained by machine learning may be used.
  • the machine learning includes the above-described deep learning, and a recurrent neural network (RNN) can be used for at least one layer of the multi-layer neural network, for example.
  • the object to be inspected is not limited to the eye to be inspected, and may be any site as long as it is a predetermined part of the subject.
  • the front image of the predetermined part of the subject may be any medical image.
  • the medical image to be processed is an image of a predetermined part of the subject, and the image of the predetermined part includes at least a part of the predetermined part of the subject.
  • the medical image may include other parts of the subject.
  • the medical image may be a still image or a moving image, and may be a black and white image or a color image.
  • the medical image may be an image representing the structure (form) of the predetermined part or an image representing the function thereof.
  • the images representing functions include, for example, images representing blood flow dynamics (blood flow, blood flow velocity, etc.) such as OCTA images, Doppler OCT images, fMRI images, and ultrasonic Doppler images.
  • the predetermined site of the subject may be determined according to the imaging target, and includes the human eye (examined eye), brain, lung, intestine, heart, pancreas, kidney, liver, and other organs, head, chest, Includes any parts such as legs and arms.
  • the medical image may be a tomographic image of the subject or a front image.
  • The front image is, for example, a fundus front image, a front image of the anterior segment, a fundus image obtained by fluorescence imaging, or an En-Face image generated from data obtained by OCT (three-dimensional OCT data) using data in at least a partial range of the imaging target in the depth direction.
  • The En-Face image may also be an OCTA En-Face image (motion contrast front image) generated from three-dimensional OCTA data (three-dimensional motion contrast data) using data in at least a partial range of the imaging target in the depth direction.
  • the three-dimensional OCT data and the three-dimensional motion contrast data are examples of three-dimensional medical image data.
  • the imaging device is a device for imaging an image used for diagnosis.
  • The imaging apparatus includes, for example, devices that obtain an image of a predetermined part by irradiating the predetermined part of the subject with light, radiation such as X-rays, electromagnetic waves, or ultrasonic waves, and devices that obtain an image of a predetermined part by detecting radiation emitted from the subject.
  • More specifically, the imaging apparatus includes at least an X-ray imaging apparatus, a CT apparatus, an MRI apparatus, a PET apparatus, a SPECT apparatus, an SLO apparatus, an OCT apparatus, an OCTA apparatus, a fundus camera, and an endoscope.
  • the OCT device may include a time domain OCT (TD-OCT) device and a Fourier domain OCT (FD-OCT) device. Further, the Fourier domain OCT device may include a spectral domain OCT (SD-OCT) device and a wavelength sweep type OCT (SS-OCT) device. Further, the SLO device or OCT device may include a wavefront compensation SLO (AO-SLO) device using a wavefront compensation optical system, a wavefront compensation OCT (AO-OCT) device, or the like.
  • The image processing apparatus according to the present embodiment, in order to robustly suppress the luminance step artifacts generated in the slow axis direction of a wide-angle tomographic image or motion contrast image, generates the distribution information of the blood vessel candidate region based on values obtained by normalizing the in-plane distribution of luminance statistics calculated in different depth ranges by local representative values.
  • That is, the image processing apparatus according to the present embodiment generates the distribution information on the predetermined region (blood vessel candidate region) by comparing the distribution information obtained by comparing a plurality of pieces of distribution information corresponding to a plurality of depth ranges of the three-dimensional tomographic image with the distribution information of its local representative values. It is sufficient that the distribution information is finally generated; it is not necessary to generate, for example, intermediate images (maps) as the distribution information.
  • Then, a luminance correction coefficient value distribution is generated by dividing the luminance value of the high-dimensionally smoothed tomographic image or motion contrast image by the luminance value of the low-dimensionally (fast-axis-only) smoothed tomographic image or motion contrast image in which the blood vessel candidate region has been weighted.
  • In the present embodiment, for a wide-field motion contrast image, the information on the distribution of the blood vessel candidate region is generated by the procedure described in S303 of the first embodiment, and the luminance step is corrected by the procedure described in S1004 to S1006 of the second embodiment.
  • When a wide-angle motion contrast image is generated, however, there is the problem that the luminance attenuation rate is easily affected by the nerve fiber layer thickness.
  • When the angle of view of the tomographic image or motion contrast image is small, the difference in nerve fiber layer thickness between sites is small, and the influence on the luminance attenuation rate is small.
  • In a wide-angle image, by contrast, the luminance attenuation rate tends to be large near the optic papilla (white arrow portions), where the nerve fiber layer is thick, and small in the peripheral portion (gray arrow portions), where the nerve fiber layer is thin. Therefore, when the luminance step correction is performed based on such blood vessel candidate distribution information, a high-luminance step (white line) may remain near the optic papilla (white arrow in FIG. 18F), and in the peripheral portion the motion contrast value of blood vessel regions running in the fast axis direction may be excessively suppressed (gray arrow portion in FIG. 18F).
  • Therefore, in the present embodiment, after the luminance attenuation rate is calculated as a difference value in steps S501 to S505, a representative value (for example, an average value or a median value) of the luminance attenuation rate is calculated in the neighborhood of each A-scan, and the attenuation rate is normalized by this local representative value, so that the luminance attenuation rate is less affected by the nerve fiber layer thickness.
  • The configuration of the image processing system 10 including the image processing apparatus 101 according to the present embodiment and the image processing flow of the present embodiment are the same as in the second embodiment, and their description is omitted. In FIG. 10, all steps except S1003 and S1005 are the same as in the second embodiment, and their description is likewise omitted.
  • The blood vessel acquisition unit 101-421 generates the information on the distribution of the blood vessel candidate region based on the result of comparing luminance statistics between different predetermined depth ranges. In the present embodiment, a blood vessel candidate region is specified based on the degree of difference (difference or ratio) between the luminance in the depth range where blood vessels are most likely to be present (the retinal surface layer) and the luminance in the depth range where the luminance drop due to shadows is most noticeable (the outer retina).
  • the in-plane centering on each A-scan position is calculated.
  • the representative value (local average value) of the luminance decay rate in the region near the direction is calculated.
  • the luminance decay rate calculated at each A-scan position is calculated using the nerve fiber layer thickness or the nerve fiber layer. It may be normalized by dividing by a local representative value of the thickness (local average value or the like). That is, the image processing apparatus according to the present embodiment includes distribution information obtained by comparing a plurality of pieces of distribution information corresponding to a plurality of depth ranges in a three-dimensional tomographic image, and distribution information relating to a layer thickness of a predetermined layer of the eye to be examined. May be generated to generate distribution information about a predetermined region (candidate blood vessel region).
Steps S501 to S504 are the same as those in the first and second embodiments, and their description will not be repeated.
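As a concrete illustration of the front images used here, the following sketch (a minimal NumPy version under assumed conventions: the tomogram is an array indexed as [z, y, x], the layer boundaries are per-A-scan z-coordinates from the segmentation, and the mean is used as the luminance statistic; the helper name is hypothetical) generates an en-face image for one depth range:

```python
import numpy as np

def enface_projection(volume, z_top, z_bottom):
    """Average the luminance of each A-scan between two boundary surfaces.

    volume:   3-D tomogram of shape (Z, Y, X)
    z_top:    upper boundary z-coordinate per A-scan, shape (Y, X)
    z_bottom: lower boundary z-coordinate per A-scan, shape (Y, X)
    """
    Z, Y, X = volume.shape
    out = np.zeros((Y, X), dtype=np.float32)
    for y in range(Y):
        for x in range(X):
            z0, z1 = int(z_top[y, x]), int(z_bottom[y, x])
            out[y, x] = volume[z0:max(z1, z0 + 1), y, x].mean()  # luminance statistic (mean)
    return out

# e.g. I_surface = enface_projection(vol, ilm, gcl_ipl)   # retinal surface layer
#      I_outer   = enface_projection(vol, isos, rpe)      # outer retina
```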
In S505, the blood vessel acquisition unit 101-421, which is an example of the information generating unit, compares the luminance statistics of the two types of front tomographic images (FIGS. 16A and 16B) calculated in S504. Specifically, as an index for comparing the luminance statistics of the different depth ranges, (luminance of the retinal surface front tomographic image) - (luminance of the outer retinal front tomographic image) is calculated for each pixel (x, y), and a map of the attenuation rate Ar(x, y) (FIG. 16C) is generated.
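A minimal sketch of this comparison (hypothetical function; the difference form is what this embodiment uses, and the ratio form mentioned earlier would serve equally well as the index):

```python
def attenuation_rate_map(I_surface, I_outer, use_ratio=False, eps=1e-6):
    """S505: per-pixel comparison of the luminance statistics of two depth ranges.

    I_surface, I_outer: front images (2-D arrays) of the retinal surface layer
    and the outer retina. Returns the attenuation-rate map Ar(x, y).
    """
    if use_ratio:
        return I_surface / (I_outer + eps)   # ratio form of the comparison index
    return I_surface - I_outer               # difference form used in this embodiment
```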
Next, the blood vessel acquisition unit 101-421 generates a blood vessel candidate region map V(x, y), which represents blood-vessel likeness, by normalizing the luminance attenuation rate map Ar(x, y) generated in S505.
Specifically, a representative value of the luminance attenuation rate is first calculated within a neighborhood of a predetermined size at each pixel position (x, y) of the luminance attenuation rate map Ar(x, y) calculated in S505. In the present embodiment, a local average is calculated as the representative value; FIG. 16D shows an example of the calculated local average distribution. The representative value is not limited to this, and any known representative value (for example, a median value) may be calculated. Normalization processing using the representative value is then performed on the value of the luminance attenuation rate Ar(x, y) at each pixel position (x, y); in the present embodiment, subtraction is performed as the normalization processing.
However, the present invention is not limited to this: the normalization of the luminance attenuation rate is not restricted to methods based on a local representative value of the attenuation rate itself. For example, the luminance attenuation rate calculated at each A-scan position may instead be normalized by dividing it by the nerve fiber layer thickness, or by a local representative value (a local average value or the like) of the nerve fiber layer thickness.
FIG. 16E shows an example of the blood vessel candidate region map V(x, y); it can be seen that the blood vessel candidate region is stably drawn regardless of the site. Note that the normalization processing is not limited to the above, and any known normalization method may be used.
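A minimal sketch of this normalization (assuming SciPy is available; the neighborhood size is a hypothetical choice):

```python
from scipy.ndimage import median_filter, uniform_filter

def vessel_candidate_map(Ar, size=31):
    """Normalize the attenuation-rate map Ar(x, y) by a local representative value.

    Subtracting the local mean removes the slowly varying component that tracks
    the nerve fiber layer thickness, leaving the vessel-like local variations.
    """
    local_mean = uniform_filter(Ar, size=size)  # local representative value (cf. FIG. 16D)
    V = Ar - local_mean                         # subtraction normalization (cf. FIG. 16E)
    # A median-based representative value would be: median_filter(Ar, size=size)
    return V
```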
In S1005, the weighting unit 101-442 weights the luminance values of the wide-angle motion contrast image (FIG. 17A) in the blood vessel candidate region, using the information on the distribution of blood vessel candidate regions (FIG. 17D) generated by the blood vessel acquisition unit 101-421 in S1003, and thereby generates a weighted motion contrast image (FIG. 17E).
Next, the high-dimensional conversion unit 101-4411 generates a high-dimensional smoothed motion contrast image (FIG. 17C), and the low-dimensional conversion unit 101-4412 performs smoothing processing on the weighted motion contrast image in the fast axis direction to generate a low-dimensional smoothed motion contrast image (FIG. 17F). The arithmetic unit 101-443 then generates a luminance correction coefficient map (FIG. 17G) for the motion contrast image by performing arithmetic processing on the high-dimensional smoothed motion contrast image and the low-dimensional smoothed motion contrast image.
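One plausible realization of the two smoothings and the arithmetic between them (a sketch with a hypothetical Gaussian smoothing scale; the document does not prescribe the smoothing kernel):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

def luminance_correction_coefficients(M_w, sigma=15.0):
    """Correction coefficient map from high- and low-dimensional smoothings.

    M_w: weighted front motion contrast image, shape (Y, X).
    The 2-D smoothing follows only the slowly varying anatomy, while the
    fast-axis-only (1-D) smoothing also follows the band-shaped luminance
    steps; their quotient therefore cancels the steps.
    """
    M_2ds = gaussian_filter(M_w, sigma=sigma)            # high-dimensional smoothing
    M_1ds = gaussian_filter1d(M_w, sigma=sigma, axis=1)  # fast axis (x) only
    return M_2ds / np.maximum(M_1ds, 1e-6)               # coefficient map (cf. FIG. 17G)

# Corrected image: multiply each pixel by its coefficient, M_corr = C * M.
```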
In the weighting of S1005, the weighting unit 101-442 weights the luminance values in the blood vessel candidate region of the motion contrast image using the values of the blood vessel candidate region map V(x, y). This weighting is an example of applying, when acquiring the low-dimensional approximate value distribution (an example of the second approximate value distribution), a different calculation process to a predetermined tissue (a blood vessel or bleeding region existing along the fast axis direction of the measurement light used when acquiring the three-dimensional motion contrast image) than to regions other than the predetermined tissue. This weighting is not essential to the present invention.
Specifically, in regions where the value of the blood vessel candidate region map V(x, y) (the blood-vessel likeness) is higher, the luminance value M(x, y) of the front motion contrast image acquired in S511 is set closer to the high-dimensional approximate value M_2ds(x, y) calculated in S512, and in regions where the value of V(x, y) is lower, it is kept closer to the original value. FIG. 17E shows an example of the weighted front motion contrast image M_w(x, y).
The weighting method for the luminance values of the blood vessel candidate region shown here is merely an example; any weighting may be used as long as the processing reduces the luminance values of blood vessel candidate regions running in the fast axis direction or brings them close to the luminance values near the blood vessel candidate region.
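For example, a simple linear blend satisfies this requirement (a sketch, not the patent's prescribed formula; V is rescaled to [0, 1] here):

```python
import numpy as np

def weight_vessel_candidates(M, M_2ds, V):
    """Pull vessel-candidate pixels toward the high-dimensional approximate value.

    M:     front motion contrast image M(x, y)
    M_2ds: high-dimensional approximate value distribution M_2ds(x, y)
    V:     blood vessel candidate region map
    """
    w = np.clip((V - V.min()) / (V.max() - V.min() + 1e-6), 0.0, 1.0)
    return w * M_2ds + (1.0 - w) * M   # the higher V, the closer to M_2ds
```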
Note that the luminance step occurring in a wide-field-of-view tomographic image shows greater variation in the depth (height) at which the step occurs than the luminance step occurring in a wide-field-of-view motion contrast image. Therefore, a blood vessel candidate region map Vt(x, y) for the wide-angle tomographic image and a blood vessel candidate region map Vm(x, y) for the wide-angle motion contrast image may be generated separately, so that Vt(x, y) and Vm(x, y) need not coincide.
FIG. 18A is an example of a motion contrast image containing both a number of band-shaped luminance steps (white lines) and a blood vessel region running in the fast axis direction. Both in the region where the nerve fiber layer thickness is large (near the optic papilla) and in the region where it is small (the periphery of the wide-angle image), only the luminance steps must be suppressed stably and selectively.
FIGS. 18B, 18D, and 18F show examples of the results of each processing step when the normalization of the present embodiment is not applied.
In FIG. 18B, in the region where the nerve fiber layer thickness is large (near the optic papilla, white arrow), the luminance attenuation rate is calculated to be high even in non-vascular regions, while in the region where the nerve fiber layer thickness is small (the peripheral portion of the blood vessel candidate region map, gray arrow), the luminance attenuation rate is calculated to be too low despite the region being a blood vessel region.
As a result, in FIG. 18D the difference between blood vessel and non-blood-vessel regions tends to be small near the optic disc (white arrow), while the blood vessel region remains high in the peripheral portion of the image (gray arrow). Consequently, in FIG. 18F the suppression of the luminance step is insufficient near the optic disc (white arrow), and the blood vessel region is excessively suppressed in the peripheral portion of the image (gray arrow).
In contrast, FIGS. 18C, 18E, and 18G show the processing results when the present embodiment is applied. In FIG. 18C, the blood vessel region is stably drawn both in the region where the nerve fiber layer thickness is large (near the optic nerve head) and in the region where it is small (the periphery of the image). Accordingly, the blood vessel region is also appropriately weighted in FIG. 18E, and the luminance step correction processing in S1006 shows neither insufficient luminance step suppression near the optic papilla nor excessive suppression of the luminance values of the blood vessel region near the image periphery (FIG. 18G).
In the present embodiment, a method of suppressing the band-shaped luminance step occurring on a wide-angle front motion contrast image (that is, generating a luminance-step-corrected wide-angle front motion contrast image) has been described, but the present invention is not limited to this. For example, the following procedure may be used to suppress the band-shaped luminance step occurring on a wide-angle three-dimensional motion contrast image and generate a luminance-step-corrected wide-angle three-dimensional motion contrast image.
That is, front motion contrast images are generated for a number of different projection depth ranges, and a luminance step correction coefficient map is generated for each front motion contrast image. Then, for each pixel of the three-dimensional motion contrast image, the luminance step is corrected using the correction coefficient value of the map corresponding to the projection depth range to which that pixel belongs. As the different projection depth ranges, for example, four types may be used: the retinal surface layer, the deep retinal layer, the outer retinal layer, and the choroid. Alternatively, each individual layer belonging to the retina and the choroid may be specified.
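A sketch of this three-dimensional variant (hypothetical helper; the boundary arrays per depth range come from the layer segmentation, and one coefficient map C(x, y) is assumed per range):

```python
import numpy as np

def correct_3d_by_depth_range(volume, ranges, coeff_maps):
    """Multiply each voxel by the coefficient of the depth range it belongs to.

    volume:     3-D motion contrast image, shape (Z, Y, X)
    ranges:     list of (z_top, z_bottom) boundary arrays, each of shape (Y, X),
                e.g. retinal surface / deep retina / outer retina / choroid
    coeff_maps: one luminance step correction coefficient map per range
    """
    Z, _, _ = volume.shape
    zs = np.arange(Z)[:, None, None]                           # depth index grid (Z, 1, 1)
    out = volume.astype(np.float32).copy()
    for (z_top, z_bottom), C in zip(ranges, coeff_maps):
        in_range = (zs >= z_top[None]) & (zs < z_bottom[None])  # voxel mask (Z, Y, X)
        out = np.where(in_range, out * C[None], out)
    return out
```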
Regarding the timing of the luminance step correction, a wide-field-of-view three-dimensional tomographic image or motion contrast image with the luminance step corrected may be generated in advance in accordance with the above procedure, and a front tomographic image or motion contrast image may then be generated and displayed at the point in time when the operator gives an instruction. Examples of the generation timing of the corrected wide-field-of-view three-dimensional tomographic image or motion contrast image include immediately after capturing the tomographic image, during reconstruction, and at the time of storage. Alternatively, the luminance step correction may be performed when the operator gives the instruction, and the luminance-step-corrected wide-angle front tomographic image or motion contrast image may then be displayed.
In the present embodiment, luminance step suppression processing for a wide-field-of-view motion contrast image has been described as an example of a wide-field image, but the present invention is not limited to this. That is, by performing the image processing described in S304 to S305 of the first embodiment using the blood vessel candidate region map generated by the procedure described in S1003 of the present embodiment, the present invention also includes robustly suppressing the luminance step occurring in a wide-angle tomographic image (both in regions where the nerve fiber layer thickness is large and in regions where it is small).
As described above, in the image processing apparatus according to the present embodiment, blood vessel candidate distribution information is generated based on values obtained by comparing the luminance statistics calculated in different depth ranges and normalizing the result by a local representative value of its in-plane distribution. A luminance correction coefficient value distribution is then generated by dividing the luminance values of the high-dimensional smoothed tomographic image or motion contrast image by the luminance values of the low-dimensional (fast-axis-only) smoothed tomographic image or motion contrast image weighted for the blood vessel candidate region. In this way, the luminance step artifact occurring in the slow axis direction of a wide-angle tomographic image or motion contrast image is robustly suppressed.
In the above embodiments, the present invention is realized as the image processing apparatus 101, but the embodiments of the present invention are not limited to the image processing apparatus 101 alone. The present invention can take the form of a system, an apparatus, a method, a program, a storage medium, or the like. The present invention is also realized by the following processing: software (a program) that realizes the functions of the above-described embodiments and modifications is supplied to a system or an apparatus via a network or various storage media, and a computer (a CPU, an MPU, or the like) of the system or apparatus reads and executes the program.

Abstract

This image processing apparatus comprises: an acquisition means for acquiring a distribution of values of correction coefficients by calculating a first approximate value distribution obtained by performing a two-dimensional conversion process on at least one front image on the basis of a three-dimensional tomographic image or a three-dimensional motion contrast image of an eye to be examined, and a second approximate value distribution obtained by performing a one-dimensional conversion process on the at least one front image; a correction means for correcting at least a part of the three-dimensional tomographic image or the three-dimensional motion contrast image, by using the distribution of the values of the correction coefficients; and a generation means for generating at least a part of the corrected image.

Description

Patent Literature 1 discloses a method for correcting the luminance of an OCTA image (an en-face image in which blood vessel regions are emphasized) in order to suppress the band-like white-line artifact extending in the X direction (fast axis direction) caused by fixation disparity: luminance values are integrated in the X direction to obtain a one-dimensional luminance profile along the Y direction (slow axis direction), and the luminance of the image is corrected based on the ratio or difference between this one-dimensional luminance profile and a smoothed one-dimensional luminance profile obtained by smoothing it. That is, Patent Literature 1 discloses a technique for suppressing band-like artifacts that extend across the entire area along the fast axis direction of an OCTA image.
JP 2018-15189 A
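For concreteness, the correction of Patent Literature 1 as summarized above might be sketched as follows (a hedged sketch with a hypothetical smoothing scale; note that one gain value is applied to an entire image row, which is exactly why a step confined to part of the fast axis, or a vessel running along the fast axis, is handled poorly, as discussed next):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def whole_width_band_correction(enface, sigma=25.0):
    """Row-wise band-artifact correction in the style of Patent Literature 1."""
    profile = enface.mean(axis=1)                       # integrate along X (fast axis)
    smoothed = gaussian_filter1d(profile, sigma=sigma)  # smoothed 1-D profile along Y
    gain = smoothed / np.maximum(profile, 1e-6)         # ratio-based correction
    return enface * gain[:, None]                       # same gain for every x in a row
```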
However, the conventional technique cannot reduce band-like artifacts that exist only partially along the fast axis direction. In addition, it may overcorrect or erroneously suppress the luminance values of blood vessel regions and the like that run along the fast axis direction.
The present invention has been made in view of the above problems, and one of its objects is to reduce artifacts in an image of an eye to be examined.
The objects of the present invention are not limited to the above; achieving operational effects that are derived from the configurations shown in the embodiments described later and that cannot be obtained by conventional techniques can also be positioned as another object of the present invention.
In order to achieve the object of the present invention, one example of the image processing apparatus of the present invention comprises: acquisition means for acquiring a distribution of correction coefficient values by a calculation between a first approximate value distribution, obtained by performing two-dimensional conversion processing on at least one front image based on a three-dimensional tomographic image or a three-dimensional motion contrast image of an eye to be examined, and a second approximate value distribution, obtained by performing one-dimensional conversion processing on the at least one front image; correction means for correcting at least a part of the three-dimensional tomographic image or the three-dimensional motion contrast image using the distribution of correction coefficient values; and generation means for generating the corrected at least a part of the image.
According to one aspect of the present invention, artifacts in an image of an eye to be examined can be reduced.
The drawings are briefly described as follows. FIG. 1 is a block diagram illustrating the configuration of the image processing apparatus according to the first embodiment of the present invention. FIGS. 2A and 2B are diagrams illustrating the image processing system according to an embodiment of the present invention and the measurement optical system included in the tomographic imaging apparatus constituting the system. FIG. 3 is a flowchart of the processing executable by the image processing system according to the first embodiment. FIGS. 4A to 4E are diagrams illustrating the scanning method of OCTA imaging and the luminance step artifacts occurring on OCT tomographic images and OCTA images in embodiments of the present invention. FIGS. 5A and 5B are flowcharts of the processing executed in S303 and S304 of the first embodiment. FIGS. 6A to 6E are diagrams illustrating the processing executed in S303 of the first embodiment. FIGS. 7A to 7G are diagrams illustrating the image processing executed in the first embodiment. FIGS. 8A to 8G are diagrams illustrating the effects of the image processing executed in the first embodiment. FIG. 9 is a block diagram illustrating the configuration of the image processing apparatus according to the second embodiment. FIG. 10 is a flowchart of the processing executable by the image processing system according to the second embodiment. FIGS. 11A to 11G are diagrams illustrating the image processing executed in the second embodiment. FIGS. 12A to 12G are diagrams illustrating the effects of the image processing executed in the second embodiment. FIG. 13 is a diagram illustrating the report screen displayed on the display means in S1007 of the second embodiment. FIG. 14 is a diagram illustrating the confirmation screen in the third embodiment. FIGS. 15A to 15C are diagrams illustrating the front motion contrast images displayed on the confirmation screen in the third embodiment. FIGS. 16A to 16E are diagrams illustrating the processing executed in S1003 of the fourth embodiment. FIGS. 17A to 17G are diagrams illustrating the image processing executed in the fourth embodiment. FIGS. 18A to 18G are diagrams illustrating the effects of the image processing executed in the fourth embodiment.
[First Embodiment]
The image processing apparatus according to the present embodiment performs the following image correction processing in order to robustly suppress the various luminance step artifacts that occur in the slow axis direction of a tomographic image of the eye captured using OCT. First, distribution information of blood vessel candidate regions is generated based on the luminance attenuation rate between the retinal surface layer and the outer retinal layer of the tomographic image. Next, a luminance correction coefficient value distribution is generated by dividing the luminance values of a high-dimensional smoothed tomographic image by the luminance values of a low-dimensional (fast-axis-only) smoothed tomographic image weighted for the blood vessel candidate regions. Finally, each pixel of the tomographic image is multiplied by the corresponding luminance correction coefficient value, thereby robustly suppressing the luminance step artifacts occurring in the slow axis direction of the tomographic image. Here, a luminance step artifact occurring in the slow axis direction means, for example, a band-like artifact extending in the X direction (fast axis direction) caused by fixation disparity. The fast axis direction is, for example, the main-scanning axis direction of the measurement light used when acquiring the three-dimensional tomographic image.
An example of the luminance step occurring in the slow axis direction of the tomographic image of the eye to be corrected in the present embodiment will now be described. If fixation disparity of the subject's eye occurs during imaging of an OCT tomographic image, rescanning is performed. For example, in long-duration imaging, the luminance of the re-scanned region becomes low because the positions of the eyelashes and pupil of the subject's eye differ between the first scan and the re-scan, and a band-like low-luminance step such as that indicated by the white arrow in FIG. 4B easily occurs. In FIG. 4B, the horizontal direction is the fast axis direction and the vertical direction is the slow axis direction. Moreover, a band-shaped luminance step artifact does not always extend across the entire fast-axis width of the image; in cases such as the following, a band-shaped luminance step artifact localized to a part of the fast axis direction occurs. That is, when a region including vitreous opacity is scanned at the time of rescanning and a shadow region results, a band-shaped low-luminance step localized to a part of the fast axis direction may occur within the re-scanned region on the tomographic image (the low-luminance step indicated by the white arrow in FIG. 4E).
Hereinafter, an image processing system including the image processing apparatus according to the first embodiment of the present invention will be described with reference to the drawings.
FIG. 2 is a diagram illustrating the configuration of the image processing system 10 including the image processing apparatus 101 according to the present embodiment. As shown in FIG. 2, in the image processing system 10, the image processing apparatus 101 is connected to a tomographic imaging apparatus 100 (also referred to as OCT), an external storage unit 102, an input unit 103, and a display unit 104 via interfaces.
The tomographic imaging apparatus 100 is an apparatus that captures a tomographic image of the eye to be examined. In the present embodiment, SD-OCT is used as the tomographic imaging apparatus 100. The configuration is not limited to this; for example, SS-OCT may be used instead.
In FIG. 2A, the measurement optical system 100-1 is an optical system for acquiring an anterior segment image, an SLO fundus image of the eye to be examined, and a tomographic image. The stage unit 100-2 allows the measurement optical system 100-1 to move forward, backward, left, and right. The base unit 100-3 incorporates a spectroscope described later.
The image processing apparatus 101 is a computer that controls the stage unit 100-2, controls the alignment operation, reconstructs tomographic images, and so on. The external storage unit 102 stores a program for tomographic imaging, patient information, imaging data, and image data and measurement data of past examinations.
The input unit 103 is used to give instructions to the computer and specifically consists of a keyboard and a mouse. The display unit 104 consists of, for example, a monitor.
(Configuration of the tomographic imaging apparatus)
The configuration of the measurement optical system and the spectroscope in the tomographic imaging apparatus 100 of the present embodiment will be described with reference to FIG. 2B.
First, the inside of the measurement optical system 100-1 will be described. An objective lens 201 is installed facing the eye 200 to be examined, and a first dichroic mirror 202 and a second dichroic mirror 203 are arranged on its optical axis. These dichroic mirrors split the light, by wavelength band, into the optical path 250 of the OCT optical system, the optical path 251 for the SLO optical system and the fixation lamp, and the optical path 252 for anterior segment observation.
The optical path 251 for the SLO optical system and the fixation lamp includes an SLO scanning unit 204, lenses 205 and 206, a mirror 207, a third dichroic mirror 208, an APD (avalanche photodiode) 209, an SLO light source 210, and a fixation lamp 211.
The mirror 207 is a prism on which a perforated or hollow mirror is deposited, and it separates the illumination light from the SLO light source 210 from the return light from the eye. The third dichroic mirror 208 separates, by wavelength band, the optical path of the SLO light source 210 and the optical path of the fixation lamp 211.
The SLO scanning unit 204 scans the light emitted from the SLO light source 210 over the eye 200 and consists of an X scanner that scans in the X direction and a Y scanner that scans in the Y direction. In the present embodiment, the X scanner is a polygon mirror because it must scan at high speed, and the Y scanner is a galvanometer mirror.
The lens 205 is driven by a motor (not shown) for focusing of the SLO optical system and the fixation lamp 211. The SLO light source 210 generates light with a wavelength around 780 nm. The APD 209 detects the return light from the eye. The fixation lamp 211 generates visible light and prompts the subject's fixation.
Light emitted from the SLO light source 210 is reflected by the third dichroic mirror 208, passes through the mirror 207 and the lenses 206 and 205, and is scanned over the eye 200 by the SLO scanning unit 204. The return light from the eye 200 travels back along the same path as the illumination light, is then reflected by the mirror 207 and guided to the APD 209, and an SLO fundus image is obtained.
Light emitted from the fixation lamp 211 passes through the third dichroic mirror 208 and the mirror 207, passes through the lenses 206 and 205, and forms a predetermined shape at an arbitrary position on the eye 200 by means of the SLO scanning unit 204, prompting the subject's fixation.
In the optical path 252 for anterior segment observation, lenses 212 and 213, a split prism 214, and a CCD 215 for anterior segment observation that detects infrared light are arranged. The CCD 215 is sensitive at the wavelength of the anterior-segment observation illumination light (not shown), specifically around 970 nm. The split prism 214 is arranged at a position conjugate with the pupil of the eye 200, and the distance of the measurement optical system 100-1 from the eye 200 in the Z-axis direction (optical axis direction) can be detected as a split image of the anterior segment.
The optical path 250 of the OCT optical system constitutes the OCT optical system as described above and is used to capture a tomographic image of the eye 200; more specifically, to obtain an interference signal for forming a tomographic image. The XY scanner 216 scans the light over the eye 200; although it is illustrated as a single mirror in FIG. 2B, it is actually a galvanometer mirror that scans in the two XY axis directions. In the present embodiment, the X direction (fast axis direction) is the direction in which the X scanner scans the measurement light over the fundus of the eye, and the Y direction (slow axis direction) is the direction in which the Y scanner scans the measurement light over the fundus. However, this does not apply when the scan is not a raster scan as in the present embodiment (for example, a circle scan or a radial scan).
Of the lenses 217 and 218, the lens 217 is driven by a motor (not shown) in order to focus the light from the OCT light source 220, emitted from the fiber 224 connected to the optical coupler 219, onto the eye 200. By this focusing, the return light from the eye 200 is simultaneously imaged as a spot on the tip of the fiber 224 and enters it. Next, the optical path from the OCT light source 220 and the configurations of the reference optical system and the spectroscope will be described: 220 is the OCT light source, 221 a reference mirror, 222 dispersion compensating glass, 223 a lens, 219 the optical coupler, 224 to 227 single-mode optical fibers connected to and integrated with the optical coupler, and 230 a spectroscope.
These components constitute a Michelson interferometer. The light emitted from the OCT light source 220 passes through the optical fiber 225 and is split by the optical coupler 219 into measurement light on the optical fiber 224 side and reference light on the optical fiber 226 side. The measurement light is applied to the eye 200 under observation through the OCT optical system path described above and reaches the optical coupler 219 through the same optical path by reflection and scattering at the eye 200.
Meanwhile, the reference light reaches the reference mirror 221 via the optical fiber 226, the lens 223, and the dispersion compensating glass 222 inserted to match the wavelength dispersion of the measurement light and the reference light, and is reflected. It then returns along the same optical path and reaches the optical coupler 219.
The measurement light and the reference light are combined by the optical coupler 219 to become interference light.
Here, interference occurs when the optical path length of the measurement light and that of the reference light become substantially equal. The reference mirror 221 is held so as to be adjustable in the optical axis direction by a motor and drive mechanism (not shown), so that the optical path length of the reference light can be matched to that of the measurement light. The interference light is guided to the spectroscope 230 via the optical fiber 227.
The polarization adjustment units 228 and 229 are provided in the optical fibers 224 and 226, respectively, and perform polarization adjustment. These polarization adjustment units have several portions where the optical fiber is routed in loops. By rotating these loop portions about the longitudinal direction of the fiber, the fiber is twisted, and the polarization states of the measurement light and the reference light can each be adjusted and matched.
The spectroscope 230 consists of lenses 232 and 234, a diffraction grating 233, and a line sensor 231. The interference light emitted from the optical fiber 227 becomes parallel light through the lens 234, is then dispersed by the diffraction grating 233, and is imaged on the line sensor 231 by the lens 232.
Next, the periphery of the OCT light source 220 will be described. The OCT light source 220 is an SLD (superluminescent diode), a typical low-coherence light source. Its center wavelength is 855 nm and its wavelength bandwidth is about 100 nm. The bandwidth is an important parameter because it affects the resolution of the obtained tomographic image in the optical axis direction.
Although an SLD was chosen here as the type of light source, any source that emits low-coherence light may be used, such as an ASE (amplified spontaneous emission) source. Near-infrared light is suitable for the center wavelength in view of measuring an eye. Since the center wavelength also affects the lateral resolution of the obtained tomographic image, it is desirably as short as possible. For both reasons, the center wavelength was set to 855 nm.
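The influence of the bandwidth on the depth resolution can be quantified with the standard OCT relation for a Gaussian source spectrum (background knowledge, not stated in this document); with the above values it gives an axial resolution in air of roughly 3 µm:

\[
\delta z = \frac{2\ln 2}{\pi}\,\frac{\lambda_0^{2}}{\Delta\lambda}
= \frac{2\ln 2}{\pi}\cdot\frac{(0.855\ \mu\mathrm{m})^{2}}{0.100\ \mu\mathrm{m}}
\approx 3.2\ \mu\mathrm{m}.
\]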
In the present embodiment, a Michelson interferometer is used as the interferometer, but a Mach-Zehnder interferometer may also be used. Depending on the light amount difference between the measurement light and the reference light, it is desirable to use a Mach-Zehnder interferometer when the difference is large, and a Michelson interferometer when the difference is relatively small.
(Configuration of the image processing apparatus)
The configuration of the image processing apparatus 101 of the present embodiment will be described with reference to FIG. 1.
The image processing apparatus 101 is a personal computer (PC) connected to the tomographic imaging apparatus 100 and includes an image acquisition unit 101-01, a storage unit 101-02, an imaging control unit 101-03, an image processing unit 101-04, and a display control unit 101-05. The functions of the image processing apparatus 101 are realized by its arithmetic processing unit (CPU) executing software modules that implement the image acquisition unit 101-01, the imaging control unit 101-03, the image processing unit 101-04, and the display control unit 101-05. The present invention is not limited to this; for example, the image processing unit 101-04 may be realized by dedicated hardware such as an ASIC, and the display control unit 101-05 may be realized by a dedicated processor such as a GPU distinct from the CPU. The connection between the tomographic imaging apparatus 100 and the image processing apparatus 101 may also be made via a network.
The image acquisition unit 101-01 acquires the signal data of the SLO fundus images and tomographic images captured by the tomographic imaging apparatus 100. The image acquisition unit 101-01 also has a tomographic image generation unit 101-11, which acquires the signal data (interference signal) of the tomographic image captured by the tomographic imaging apparatus 100, generates a tomographic image by signal processing, and stores the generated tomographic image in the storage unit 101-02.
The imaging control unit 101-03 controls imaging by the tomographic imaging apparatus 100. This control includes instructing the tomographic imaging apparatus 100 regarding the setting of imaging parameters and regarding the start or end of imaging.
The image processing unit 101-04 has a positioning unit 101-41, an image feature acquisition unit 101-42, a projection unit 101-43, and a correction unit 101-44. The image acquisition unit 101-01 described above is an example of the first acquisition means according to the present invention. The image feature acquisition unit 101-42 acquires, from the tomographic image, the layer boundaries of the retina and the choroid, blood vessel candidate regions, and the positions of the fovea and the center of the optic disc. The projection unit 101-43 projects the image over a depth range based on the layer boundary positions acquired by the image feature acquisition unit 101-42 to generate a front image. The correction unit 101-44 has a conversion unit 101-441, a weighting unit 101-442, and an arithmetic unit 101-443. The correction unit 101-44 suppresses the luminance step occurring in the slow axis direction of the tomographic image by using a luminance correction coefficient calculated by arithmetic processing between a high-dimensional smoothed tomographic image and a low-dimensional smoothed image obtained by smoothing, in the fast axis direction, a tomographic image in which the luminance of the blood vessel candidate regions has been weighted. The conversion unit 101-441 includes a high-dimensional conversion unit 101-4411 that generates a high-dimensional approximate luminance value distribution and a low-dimensional conversion unit 101-4412 that generates a low-dimensional approximate luminance value distribution. The weighting unit 101-442 weights the luminance values of the tomographic image based on the distribution information of the blood vessel candidate regions acquired by the blood vessel acquisition unit 101-421. The arithmetic unit 101-443 calculates the luminance correction coefficient value distribution by an operation between the high-dimensional smoothed image generated by the high-dimensional conversion unit 101-4411 and the low-dimensional smoothed image generated by the low-dimensional conversion unit 101-4412.
The external storage unit 102 holds, in association with one another, information on the eye to be examined (patient name, age, sex, and the like), the captured tomographic images and SLO images, imaging parameters, images obtained by processing those images, blood vessel candidate region distribution data, luminance correction coefficient value distributions, and parameters set by the operator. The input unit 103 is, for example, a mouse, a keyboard, or a touch screen, and the operator gives instructions to the image processing apparatus 101 and the tomographic imaging apparatus 100 via the input unit 103.
Next, the processing procedure of the image processing apparatus 101 of the present embodiment will be described with reference to FIG. 3. FIG. 3 is a flowchart showing the flow of the operation processing of the entire system in the present embodiment.
<Step 301>
By operating the input unit 103, the operator sets the imaging conditions of the OCT image (three-dimensional tomographic image) to be instructed to the tomographic imaging apparatus 100.
 具体的には
1)スキャンモードの選択
2)スキャンモードに対応する撮影パラメータ設定
の手順からなり、本実施形態では以下のように設定してOCT撮影を実行する。
1)Macula 3Dスキャンモードを選択
2)以下の撮影パラメータを設定
2-1)走査領域サイズ:10x10mm
2-2)主走査方向:水平方向
2-3)走査間隔:0.01mm
2-4)固視灯位置:中心窩と視神経乳頭との中間
2-5)同一撮影位置でのBスキャン数:1
2-6)コヒーレンスゲート位置:硝子体側
More specifically, the procedure includes 1) selection of a scan mode and 2) a procedure for setting imaging parameters corresponding to the scan mode. In the present embodiment, OCT imaging is executed with the following settings.
1) Select the Macula 3D scan mode 2) Set the following shooting parameters 2-1) Scan area size: 10 × 10 mm
2-2) Main scanning direction: horizontal direction 2-3) Scan interval: 0.01 mm
2-4) Fixation light position: midway between fovea and optic disc 2-5) Number of B scans at the same imaging position: 1
2-6) Coherence gate position: vitreous side
Next, the operator operates the input unit 103 and presses an imaging start button (not shown) on the imaging screen, thereby starting the capture of an OCT tomographic image under the imaging conditions set above.
The imaging control unit 101-03 instructs the tomographic imaging apparatus 100 to perform OCT imaging based on the above settings, and the tomographic imaging apparatus 100 acquires the corresponding OCT tomographic image.
The tomographic imaging apparatus 100 also acquires an SLO image and executes tracking processing based on the SLO moving image. In the present embodiment, the number of times a tomographic image is captured at the same scanning position is one (no repetition); however, the number of captures at the same scanning position may be set to an arbitrary number.
<Step 302>
The image acquisition unit 101-01 and the image processing unit 101-04 reconstruct the tomographic image acquired in S301.
First, the tomographic image generation unit 101-11 generates a tomographic image by applying wavenumber conversion, fast Fourier transform (FFT), and absolute value conversion (acquisition of the amplitude) to the interference signal acquired by the image acquisition unit 101-01. Next, the positioning unit 101-41 performs registration between the B-scan tomographic images.
Further, the image feature acquisition unit 101-42 acquires, from the tomographic image, the layer boundaries of the retina and the choroid and the boundaries of the anterior and posterior surfaces of the lamina cribrosa (not shown). In the present embodiment, as shown in FIG. 6A, the following layer boundaries are acquired: the inner limiting membrane 1, the nerve fiber layer / ganglion cell layer boundary 2, the ganglion cell layer / inner plexiform layer boundary 3, the photoreceptor inner segment / outer segment junction 4, the retinal pigment epithelium 5, Bruch's membrane 6, and the choroid / sclera boundary 7. The detected ends of Bruch's membrane 6 (the Bruch's membrane opening) are identified as the disc boundary of the optic papilla. In the present embodiment, a deformable model is used to acquire the layer boundaries of the retina and the choroid and the anterior and posterior boundaries of the lamina cribrosa, but any known segmentation method may be used. The layer boundaries to be acquired are not limited to the above. For example, the inner plexiform layer / inner nuclear layer boundary, the inner nuclear layer / outer plexiform layer boundary, the outer plexiform layer / outer nuclear layer boundary, the external limiting membrane, and the cone outer segment tips (COST) of the retina may be acquired by any known segmentation method. Alternatively, the case where the choriocapillaris / Sattler's layer boundary and the Sattler's layer / Haller's layer boundary of the choroid are acquired by any known segmentation method is also included in the present invention. The anterior and posterior boundaries of the lamina cribrosa may also be set manually; for example, they can be set by shifting the position of a specific layer boundary (for example, the inner limiting membrane 1) by a predetermined amount.
The acquisition of the layer boundaries and of the anterior and posterior boundaries of the lamina cribrosa may be performed not in this step but at the time of acquiring the blood vessel candidate regions in S303.
<Step 303>
The blood vessel acquisition unit 101-421 generates information on the distribution of blood vessel candidate regions based on the result of comparing luminance statistics between different predetermined depth ranges.
As indicated by 601 in FIG. 6A, a blood vessel region is characterized in that the depth range (the type of layer) in which it exists is roughly fixed, and in that a shadow 602 tends to arise beneath it. In contrast, a luminance step artifact arising over a re-scanned region of the tomographic image, or vitreous opacity, tends to produce low luminance over most of the depth range, as indicated by 603 in FIG. 6A.
Therefore, in the present embodiment, blood vessel candidate regions are identified based on the degree of difference (difference or ratio) in luminance between "the depth range in which blood vessels are likely to exist (the retinal surface layer)" and "the depth range in which the luminance decrease due to shadows appears most prominently (the outer retinal layer)".
The details of the blood vessel candidate region map generation processing will be described in S501 to S506. Note that a map is an example of distribution information in an in-plane direction intersecting the depth direction of the subject's eye.
<Step 304>
The weighting unit 101-442 generates a weighted tomographic image in which the luminance values in the blood vessel candidate regions of the tomographic image are weighted using the information on the distribution of the blood vessel candidate regions generated by the blood vessel acquisition unit 101-421 in S303. Next, the high-dimensional conversion unit 101-4411 generates a high-dimensionally smoothed tomographic image, and the low-dimensional conversion unit 101-4412 generates a low-dimensionally smoothed tomographic image by smoothing the weighted tomographic image in the fast axis direction. Further, the calculation unit 101-443 generates a luminance correction coefficient map for the tomographic image by an arithmetic operation on the high-dimensionally smoothed tomographic image and the low-dimensionally smoothed tomographic image.
The details of the luminance correction coefficient map generation processing will be described in S511 to S516.
<Step 305>
The correction unit 101-44 generates a tomographic image with the luminance steps corrected by multiplying each pixel of the tomographic image by the luminance correction coefficient value calculated in S304. The method of applying the luminance correction coefficient is not limited to multiplication, and any known arithmetic method may be applied; for example, addition, subtraction, or division may be used. It suffices that at least a part of the three-dimensional tomographic image is corrected using the luminance correction coefficient values. Here, at least a part of the three-dimensional tomographic image also includes a C-scan image and the like.
<Step 306>
The display control unit 101-05 displays the luminance-corrected tomographic image generated in S305 on the display unit 104. Further, when the operator selects a button (not shown) or a shortcut menu displayed on the display unit 104 using the input unit 103, the luminance-corrected tomographic image is saved in the storage unit 101-02 or the external storage unit 102. It is preferable that the image processing unit 101-04, which is an example of an image generation unit, generates at least one front image (front tomographic image) based on the corrected at least part of the three-dimensional tomographic image. In that case, the display control unit 101-05 preferably causes the display unit 104 to display the generated at least one front image.
Next, the details of the processing executed in S303 will be described with reference to the flowchart shown in FIG. 5A.
<Step 501>
The blood vessel acquisition unit 101-421 acquires the tomographic image generated by the tomographic image generation unit 101-11 in S302.
<Step 502>
The blood vessel acquisition unit 101-421 acquires the boundary data of the retinal and choroidal layer boundaries and of the anterior and posterior surfaces of the lamina cribrosa identified by the image feature acquisition unit 101-42 in S302.
<Step 503>
The blood vessel acquisition unit 101-421 instructs the correction unit 101-44 to perform correction processing (hereinafter referred to as roll-off correction) for compensating the signal attenuation in the depth direction caused by the roll-off characteristic of the tomographic imaging apparatus 100, and the correction unit 101-44 performs the roll-off correction processing.
When the normalized roll-off characteristic function with the depth position z as its argument is denoted RoF(z), the depth-direction correction coefficient H(z) for performing the roll-off correction can be expressed, for example, as in Expression (1), and the roll-off correction is performed by multiplying each pixel value of the tomographic image by the correction coefficient H(z).
H(z) = {(BGa + 2σ) / (BGa(z) + 2σ(z))} / (1 + RoF(z) − RoF(z0)) ... (1)
Here, BGa and σ denote the mean and the standard deviation of the luminance distribution BG of the entire B-scan data acquired with no object under inspection in place. BGa(z) and σ(z) denote the mean and the standard deviation of the luminance distribution in the direction orthogonal to the z axis, calculated at each depth position z in the B-scan data acquired with no object in place. z0 denotes a reference depth position included in the B-scan range; z0 may be set to an arbitrary constant, and is set here to 1/4 of the maximum value of z. The roll-off correction formula is not limited to the above, and any known correction processing may be executed as long as it has the effect of compensating the depth-direction signal attenuation caused by the roll-off characteristic of the tomographic imaging apparatus 100.
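As a purely illustrative sketch, the statistics and Expression (1) could be implemented as follows in Python with NumPy; the array layout (depth x fast axis), the helper name rolloff_correction, and the availability of a background B-scan bg and a sampled roll-off curve rof are assumptions of this example, not part of the disclosure.

import numpy as np

def rolloff_correction(bscan, bg, rof):
    # bscan: (Z, X) tomogram; bg: (Z, X) B-scan acquired with no object in place
    # rof: (Z,) normalized roll-off characteristic RoF(z)
    bga = bg.mean()                    # BGa: mean of the whole background B-scan
    sigma = bg.std()                   # sigma: its standard deviation
    bga_z = bg.mean(axis=1)            # BGa(z): per-depth mean across the fast axis
    sigma_z = bg.std(axis=1)           # sigma(z): per-depth standard deviation
    z0 = bscan.shape[0] // 4           # reference depth: 1/4 of the maximum z
    h = ((bga + 2 * sigma) / (bga_z + 2 * sigma_z)) / (1.0 + rof - rof[z0])
    return bscan * h[:, None]          # multiply each pixel value by H(z)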
<Step 504>
As preparation for comparing luminance statistics between different depth ranges, the blood vessel acquisition unit 101-421 instructs the projection unit 101-43 to generate a front tomographic image of the retinal surface layer and a front tomographic image of the outer retinal layer, and the projection unit 101-43 generates these front tomographic images. Any known projection method may be used; in the present embodiment, mean intensity projection is used. FIG. 6B shows an example of a front tomographic image of the retinal surface layer, and FIG. 6C shows an example of a front tomographic image of the outer retinal layer. In the front tomographic image of the retinal surface layer, the luminance values in blood vessel regions are high (owing to the interaction between the measurement light and the red blood cells within the vessels), whereas in the front tomographic image of the outer retinal layer the luminance values in blood vessel regions are low because of the shadows cast by the vessels.
<Step 505>
The blood vessel acquisition unit 101-421, which is an example of an information generation unit, calculates the distribution of the luminance attenuation rate Ar based on the luminance values of the two types of front tomographic images calculated in S504, in order to compare luminance statistics between different depth ranges. As an index for this comparison, in the present embodiment (luminance of the retinal surface layer front tomographic image) ÷ (luminance of the outer retinal layer front tomographic image) is calculated for each pixel (x, y), and a map of the luminance attenuation rate Ar(x, y) is generated (FIG. 6D).
<Step 506>
The blood vessel acquisition unit 101-421 generates a blood vessel candidate region map V(x, y), which represents the likelihood of being a blood vessel region, by normalizing the luminance attenuation rate map Ar(x, y) generated in S505.

In the present embodiment, the luminance attenuation rate map Ar(x, y) calculated in S505 is normalized using predetermined values WL and WW, so that the blood vessel candidate region map is calculated as
V(x, y) = (Ar(x, y) − WL) / WW
with 0 ≤ V(x, y) ≤ 1 satisfied. FIG. 6E shows an example of the blood vessel candidate region map V(x, y); it can be seen that the blood vessel candidate regions are highlighted. The normalization processing is not limited to the above, and any known normalization method may be used.
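A minimal sketch of S505 and S506, assuming the two en-face projections are NumPy arrays and that WL and WW are predetermined constants; the function and variable names, and the epsilon guard against division by zero, are hypothetical additions for this example.

import numpy as np

def vessel_candidate_map(surface_en_face, outer_en_face, wl, ww, eps=1e-6):
    ar = surface_en_face / (outer_en_face + eps)  # Ar(x, y): attenuation-rate map
    v = (ar - wl) / ww                            # V(x, y) = (Ar(x, y) - WL) / WW
    return np.clip(v, 0.0, 1.0)                   # enforce 0 <= V(x, y) <= 1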
If low-luminance regions in a projection image obtained by summing the luminance values over the entire depth range, or in a front tomographic image generated over the depth range of the outer retinal layer, were simply regarded as blood vessel regions, shadows due to vitreous opacity (603 in FIG. 6A) and shadows due to exudates (605 in FIG. 6A) would also be included. In contrast, the method described in S303 of the present embodiment can generate distribution information of only the blood vessel candidate regions (and hemorrhage regions originating from those vessels). Moreover, a cluster scan such as that used in OCTA is unnecessary, and distribution information on blood vessel candidate regions can be generated even from a single-scan tomographic image.
When generating distribution information of occluding objects other than blood vessels (objects that cause shadows), for example high-luminance lesions such as exudates (604 in FIG. 6A), a luminance attenuation rate such as (mean luminance in the deep retinal layer) ÷ (mean luminance in the outer retinal layer) may be calculated for each A-scan. Here, blood vessel regions, hemorrhage regions, high-luminance lesions, and the like are regions included in the subject's eye and are examples of regions that cause shadows arising along the depth direction of the subject's eye.
In calculating the luminance attenuation rate, generation of front tomographic images is not essential; the rate may be calculated per A-scan directly on the three-dimensional tomographic image. Furthermore, the luminance attenuation rate is not limited to a ratio between luminance statistics of different depth ranges; it may be calculated, for example, based on the amount of difference between luminance statistics of different depth ranges.
Further, the details of the processing executed in S304 will be described with reference to the flowchart shown in FIG. 5B.
<Step 511>
The correction unit 101-44 acquires the tomographic image generated by the tomographic image generation unit 101-11 in S302. Next, the operator specifies, via the user interface displayed on the display unit 104, a desired projection depth range and instructs generation of a front tomographic image corresponding to that projection depth range. The projection unit 101-43 projects the three-dimensional tomographic image to which the roll-off correction has been applied over the instructed depth range, and generates a front tomographic image (FIG. 7A).
In the present embodiment, as the projection processing of the three-dimensional tomographic image, the mean value of the depth-direction tomographic data corresponding to each pixel in the plane corresponding to the fundus front is used as the pixel value of that pixel. However, the projection processing is not limited to such mean intensity projection, and any known projection method may be used. For example, the median, maximum, or mode of the depth-direction tomographic data corresponding to each pixel may be used as the pixel value. Furthermore, by projecting only the luminance values of pixels exceeding a noise threshold corresponding to the background luminance value, pixel values excluding the influence of the background region (i.e., derived from the luminance of the fundus tissue) can be obtained.
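The following sketch illustrates one possible mean projection with a noise threshold over a flat depth slab; in practice the slab bounds would follow the detected layer boundaries per A-scan, which is simplified away here, and all names are hypothetical.

import numpy as np

def en_face_projection(volume, z_top, z_bottom, noise_threshold=None):
    slab = volume[z_top:z_bottom]             # (Z', Y, X) selected depth range
    if noise_threshold is None:
        return slab.mean(axis=0)              # plain mean intensity projection
    mask = slab > noise_threshold             # keep only tissue-derived samples
    counts = np.maximum(mask.sum(axis=0), 1)  # avoid division by zero
    return (slab * mask).sum(axis=0) / counts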
<Step 512>
The high-dimensional conversion unit 101-4411 calculates a high-dimensional approximate value distribution, which is an example of a first approximate value distribution, by smoothing the luminance values of the front tomographic image two-dimensionally. Here, the two-dimensional smoothing is an example of processing that converts the front image two-dimensionally (two-dimensional conversion processing) when acquiring the first approximate value distribution. In the present embodiment, the high-dimensional conversion unit 101-4411 calculates a high-dimensional approximate value distribution of the luminance values of the tomographic image, as shown in FIG. 7C, by two-dimensionally smoothing the luminance value of each pixel in the front tomographic image generated in S511.
In the present embodiment, smoothing is performed as an example of the processing for calculating the approximate value distribution, but a morphological operation such as closing or opening may be performed, as described later. The smoothing may be performed with an arbitrary spatial filter, or the tomographic data may be frequency-transformed using a fast Fourier transform (FFT) or the like and then smoothed by suppressing the high-frequency components. When the FFT is used, no convolution operation is required, so the smoothing can be executed at high speed. When smoothing by suppressing high-frequency components after frequency transformation, the high-frequency components may be suppressed by applying a predetermined window function (a Hamming window or a Hanning window) or a Butterworth filter or the like in the frequency domain, in order to suppress ringing.
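As one concrete reading of the frequency-domain variant, the sketch below smooths an en-face image two-dimensionally with a Butterworth low-pass applied after an FFT; the cutoff and order values are illustrative assumptions.

import numpy as np

def smooth2d_fft(img, cutoff=0.02, order=2):
    fy = np.fft.fftfreq(img.shape[0])[:, None]      # vertical frequencies
    fx = np.fft.fftfreq(img.shape[1])[None, :]      # horizontal frequencies
    r = np.sqrt(fy**2 + fx**2)                      # radial frequency
    lp = 1.0 / (1.0 + (r / cutoff) ** (2 * order))  # Butterworth low-pass gain
    return np.real(np.fft.ifft2(np.fft.fft2(img) * lp))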
<Step 513>
The weighting unit 101-442 acquires the blood vessel candidate region map V(x, y) (FIG. 7D) from the blood vessel acquisition unit 101-421.
<Step 514>
The weighting unit 101-442 weights the luminance values of the tomographic image in the blood vessel candidate regions using the values of the blood vessel candidate region map V(x, y). This weighting is an example of different calculation processing executed, when acquiring a low-dimensional approximate value distribution (an example of a second approximate value distribution), for a predetermined tissue, namely a blood vessel or hemorrhage region lying along the fast axis direction of the measurement light used to acquire the three-dimensional tomographic image, and for the other regions. This weighting is not essential to the present invention.
In the present embodiment, a weighted front image I_w(x, y) is generated by weighting the front tomographic image such that, the higher the value of the blood vessel candidate region map V(x, y) (the vessel likelihood) in a region, the closer the luminance value of the corresponding region of the front tomographic image acquired in S511 approaches the high-dimensional approximate value I_2ds(x, y) calculated in S512, and the lower the value of V(x, y), the more the luminance value I(x, y) of the front tomographic image acquired in S511 is preserved. Specifically, it may be calculated as
I_w(x, y) = (1.0 − V(x, y)) * I(x, y) + V(x, y) * I_2ds(x, y)
FIG. 7E shows an example of the weighted front tomographic image I_w(x, y).
Note that the weighting method for the luminance values of the blood vessel candidate regions shown here is merely an example; any weighting may be applied as long as it increases the luminance values of blood vessel candidate regions running in the fast axis direction or brings them closer to the luminance values in the vicinity of those regions.
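The blend of S514 reduces to one line; a sketch, assuming I, V, and I_2ds are same-sized arrays:

def weight_vessel_regions(i, v, i_2ds):
    # I_w = (1 - V) * I + V * I_2ds: pull vessel-like pixels toward the
    # 2-D smoothed value, leave other pixels essentially unchanged
    return (1.0 - v) * i + v * i_2ds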
<Step 515>
The low-dimensional conversion unit 101-4412 calculates a low-dimensional approximate value distribution of the luminance values of the tomographic image. Specifically, processing for calculating an approximate value distribution in the fast axis direction (smoothing or a morphological operation) is applied to the luminance value of each pixel of the front tomographic image in which the luminance values of the blood vessel candidate regions have been weighted. FIG. 7F shows an example of the low-dimensional approximate value distribution calculated in this step. Because the low-dimensional conversion (smoothing in the fast axis direction) is applied to an image in which the luminance values of vessel regions running in the fast axis direction have been raised, the problem of such a vessel region remaining as a band-shaped low-luminance region can be avoided. Here, the low-dimensional smoothing is an example of processing that converts the front image one-dimensionally (one-dimensional conversion processing) when acquiring the second approximate value distribution. As in S512, when the smoothing is performed in the frequency domain, the high-frequency components may be suppressed by applying a predetermined window function (a Hamming window, a Hanning window, etc.) or a Butterworth filter or the like in the frequency domain, in order to suppress ringing.
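A sketch of fast-axis-only frequency-domain smoothing with a raised-cosine (Hanning-like) taper to limit ringing, assuming the fast axis is the second array axis; the cutoff is an illustrative value.

import numpy as np

def smooth_fast_axis_fft(img, cutoff=0.02):
    f = np.fft.fftfreq(img.shape[1])[None, :]
    lp = np.where(np.abs(f) <= cutoff,
                  0.5 + 0.5 * np.cos(np.pi * f / cutoff),  # tapers to 0 at cutoff
                  0.0)
    return np.real(np.fft.ifft(np.fft.fft(img, axis=1) * lp, axis=1))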
<Step 516>
The calculation unit 101-443 calculates a luminance correction coefficient distribution for the tomographic image by performing an arithmetic operation on the high-dimensional approximate value distribution and the low-dimensional approximate value distribution of the tomographic image.

In the present embodiment, the luminance correction coefficient map for the tomographic image (FIG. 7G) is generated by dividing the luminance values of the two-dimensionally smoothed tomographic image generated in S512 by the luminance values of the weighted, fast-axis-direction smoothed tomographic image generated in S515.
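Putting S512 to S516 together on one en-face image, reusing the hypothetical helpers sketched above; the epsilon guard against division by zero is an implementation detail not taken from the disclosure.

def luminance_step_correction(en_face, v_map, eps=1e-6):
    hi = smooth2d_fft(en_face)                     # S512: 2-D smoothed image
    w = weight_vessel_regions(en_face, v_map, hi)  # S514: vessel-weighted image
    lo = smooth_fast_axis_fft(w)                   # S515: fast-axis smoothed image
    coeff = hi / (lo + eps)                        # S516: correction coefficient map
    return en_face * coeff, coeff                  # apply as in S305 (multiplication)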
Next, the effect of the luminance weighting of the blood vessel candidate regions (making it easier to selectively suppress only the luminance step artifacts) will be described with reference to FIGS. 8A to 8G. FIG. 8A is an example of a tomographic image containing both a band-shaped luminance step confined to a limited range in the fast axis direction and a blood vessel region running in the fast axis direction. It is necessary to selectively suppress only the band-shaped luminance step without overcorrecting the luminance values of the blood vessel region running in the fast axis direction.
FIGS. 8B, 8D, and 8F show examples of the results obtained when the low-dimensional conversion of S515, the luminance correction coefficient map calculation of S516, and the luminance step correction of S305 are each executed without the luminance weighting of the blood vessel candidate regions. In the blood vessel region indicated by the white arrow in FIG. 8B (running in the fast axis direction), a low-luminance region remains in a band shape and resembles a luminance step. Consequently, a high correction coefficient value is calculated for the blood vessel region indicated by the white arrow in FIG. 8D even though it is not a luminance step, and the luminance step correction of S305 overcorrects the luminance values of the vessel running in the fast axis direction and its neighborhood (the region indicated by the white arrow in FIG. 8F).
On the other hand, FIGS. 8C, 8E, and 8G show the results obtained when the low-dimensional conversion of S515, the luminance correction coefficient map calculation of S516, and the luminance step correction of S305 are each executed after the luminance weighting of the blood vessel candidate regions. In FIG. 8C, no band-shaped low-luminance region corresponding to the vessel region running in the fast axis direction is generated. Accordingly, appropriate correction coefficient values are calculated for the vessel region in FIG. 8E as well, and no overcorrection of the luminance values of the vessel running in the fast axis direction and its neighborhood is observed in the luminance step correction of S305 (FIG. 8G).
In the present embodiment, the method of suppressing band-shaped luminance steps arising on a front tomographic image (generating a front tomographic image with the luminance steps corrected) has been described, but the present invention is not limited to this. Band-shaped luminance steps arising on the three-dimensional tomographic image may be suppressed, and a luminance-step-corrected three-dimensional tomographic image may be generated, by the following procedure.
That is, front tomographic images are generated for many different projection depth ranges, and a luminance step correction coefficient map is generated for each front tomographic image. Then, for the luminance value of each pixel in the three-dimensional tomographic image, the value (correction coefficient value) of the luminance step correction coefficient map corresponding to the projection depth range to which that pixel belongs is applied, whereby a luminance-step-corrected three-dimensional tomographic image can be generated. Examples of the different projection depth ranges include four types: the retinal surface layer, the deep retinal layer, the outer retinal layer, and the choroid. Alternatively, the individual layer types belonging to the retina and the choroid may be specified.
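One way to realize this per-depth-range application, assuming voxel-wise layer labels are available from the segmentation; the label convention and the names are hypothetical.

import numpy as np

def correct_volume(volume, layer_labels, coeff_maps):
    # volume, layer_labels: (Z, Y, X); coeff_maps: (n_ranges, Y, X), one map
    # per projection depth range (e.g., surface, deep, outer retina, choroid)
    corrected = volume.copy()
    for k in range(coeff_maps.shape[0]):
        mask = layer_labels == k
        corrected = np.where(mask, volume * coeff_maps[k][None, :, :], corrected)
    return corrected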
When suppressing luminance step artifacts arising in a C-scan image, it may be difficult to selectively suppress only the luminance step artifacts (without adversely affecting the pixel values near layer boundaries), because the image contains plural types of layers. By first generating a luminance-step-corrected three-dimensional tomographic image and then generating a C-scan image from it, a luminance-step-corrected C-scan image can be obtained.
The present invention is not limited to the correction of band-shaped luminance steps arising when tomographic images are captured with a so-called 3-D scan; it can also be applied to the correction of luminance steps arising in the slow axis direction when tomographic images are captured with various scan patterns. For example, correcting luminance steps arising in the slow axis direction when imaging with a number of circle scans of different radii, or with a radial scan, is also included in the present invention. In the case of a circle scan, for example, the circumferential direction can be regarded as the fast axis direction, and the direction orthogonal to the circumferential direction as the slow axis direction. In the case of a radial scan, for example, each radial scan direction passing through a predetermined point can be regarded as the fast axis direction, and the circumferential direction around the predetermined point as the slow axis direction.
According to the configuration described above, the image processing apparatus 101 performs the following image correction processing in order to robustly suppress the various luminance step artifacts arising in the slow axis direction of a tomographic image of the subject's eye captured using OCT. That is, the image processing apparatus generates distribution information on a predetermined region of the subject's eye (a blood vessel candidate region) that causes shadows arising along the depth direction, by comparing plural pieces of distribution information in the in-plane direction intersecting the depth direction of the subject's eye, corresponding to plural depth ranges in the three-dimensional tomographic image. For example, the image processing apparatus generates the distribution information of blood vessel candidate regions based on the luminance attenuation rate between the retinal surface layer and the outer retinal layer of the tomographic image. Next, a luminance correction coefficient map is generated by dividing the luminance values of the high-dimensionally smoothed tomographic image by the luminance values of a tomographic image smoothed in the fast axis direction after weighting the luminance values of the blood vessel candidate regions. Further, by multiplying the tomographic image by the luminance correction coefficients, the luminance step artifacts arising in the slow axis direction of the tomographic image are robustly suppressed. Note that it suffices for the distribution information to be generated in the end; it is not necessary to generate, for example, images (maps) as the plural pieces of distribution information during the generation.
This makes it possible to robustly suppress the luminance steps arising in the slow axis direction of a tomographic image of the subject's eye.
[Second Embodiment]
The image processing apparatus according to the present embodiment performs the following image processing in order to robustly suppress the various luminance step artifacts arising in the slow axis direction of a motion contrast image generated from tomographic images of the subject's eye obtained by cluster imaging using OCT. That is, a luminance correction coefficient map is generated by dividing the luminance values of a high-dimensionally smoothed motion contrast image by the luminance values of a motion contrast image smoothed in the fast axis direction after weighting the luminance values of the blood vessel candidate regions acquired in the same manner as in the first embodiment. Further, a case will be described in which the luminance step artifacts arising in the slow axis direction of the motion contrast image are robustly suppressed by multiplying the motion contrast image by the luminance correction coefficients.
Here, examples of the luminance steps arising in the slow axis direction of the motion contrast image of the subject's eye to be corrected in the present embodiment will be described. When a motion contrast image is generated using a tomographic image in which the re-scanned region shown in FIG. 4B has low luminance, a band-shaped low-luminance step arises in the corresponding region (indicated by the white arrow) of the motion contrast image, as shown in FIG. 4C. Likewise, when a motion contrast image is generated using a tomographic image containing a short low-luminance step region in the re-scanned region, as shown in FIG. 4E, a short band-shaped low-luminance step arises in the corresponding region of the motion contrast image.
Furthermore, when there is positional misalignment between the tomographic images scanned plural times at the same position, high motion contrast values are calculated even for regions in which no displacement of red blood cells has actually occurred, so a band-shaped high-luminance step arises as indicated by the white arrow in FIG. 4D. Both low-luminance steps and high-luminance steps may be contained in the same motion contrast image. In addition, when ocular tissue such as the retina or the choroid is contained in only a part of the fast axis range of the image, or when there are regions where layer boundary detection has failed, the band-shaped high-luminance step may be interrupted partway, or its thickness or height may change partway.
FIG. 9 shows the configuration of an image processing system 10 including the image processing apparatus 101 according to the present embodiment. It differs from the first embodiment in that the image acquisition unit 101-01 includes a motion contrast data generation unit 101-12 and the image processing unit 101-04 includes a combining unit 101-45. FIG. 10 shows the image processing flow in the present embodiment. In FIG. 10, S1002 and S1003 are the same as in the first embodiment, and their description is omitted.
<Step 1001>
By operating the input unit 103, the operator sets the imaging conditions of the OCT images to be instructed to the tomographic imaging apparatus 100.
Specifically, the procedure consists of
1) selection of a scan mode, and
2) setting of the imaging parameters corresponding to the scan mode.
In the present embodiment, OCT imaging is executed with the following settings:
1) Select the OCTA scan mode.
2) Set the following imaging parameters:
2-1) Scan area size: 10 x 10 mm
2-2) Main scanning direction: horizontal
2-3) Scan interval: 0.01 mm
2-4) Fixation light position: midway between the fovea and the optic disc
2-5) Number of B-scans at the same imaging position: 4
2-6) Coherence gate position: vitreous side
Next, the operator operates the input unit 103 and presses an imaging start button (not shown) on the imaging screen, thereby starting repeated OCTA imaging under the imaging conditions set above.
The imaging control unit 101-03 instructs the tomographic imaging apparatus 100 to repeatedly perform OCTA imaging based on the above settings, and the tomographic imaging apparatus 100 acquires the corresponding OCT tomographic images.
In the present embodiment, the number of repeated acquisitions (the number of clusters) in this step is set to five. The number of repeated acquisitions (the number of clusters) is not limited to this and may be set to an arbitrary number.
The tomographic imaging apparatus 100 also acquires an SLO image and executes tracking processing based on the SLO moving image.
When the number of clusters is two or more, the reference SLO image used for the tracking processing in the repeated OCTA imaging is the reference SLO image set at the first cluster acquisition, and this common reference SLO image is used for all cluster acquisitions. During the second and subsequent cluster acquisitions, the same setting values as in the first cluster acquisition are used (not changed) for
・the selection of the left or right eye, and
・whether or not the tracking processing is executed.
<Step 1004>
In the image acquisition unit 101-01 and the image processing unit 101-04, the positioning unit 101-41 performs registration between the tomographic images belonging to the same cluster and registration of the tomographic images between clusters, and a motion contrast image is generated using the registered tomographic images.
The motion contrast data generation unit 101-12 calculates the motion contrast between adjacent tomographic images within the same cluster. In the present embodiment, the decorrelation value Mxy is obtained as the motion contrast based on the following Expression (2).
Mxy = 1 − 2 × Axy × Bxy / (Axy² + Bxy²) ... (2)
Here, Axy denotes the amplitude (of the complex data after FFT processing) at position (x, y) of tomographic image data A, and Bxy denotes the amplitude at the same position (x, y) of tomographic data B. 0 ≤ Mxy ≤ 1 holds, and the larger the difference between the two amplitude values, the closer the value is to 1. Decorrelation computation as in Expression (2) is performed between every pair of adjacent tomographic images (belonging to the same cluster), and an image whose pixel values are the averages of the resulting (number of tomographic images per cluster − 1) motion contrast values is generated as the final motion contrast image.
Although the motion contrast is calculated here based on the amplitude of the complex data after FFT processing, the calculation method of the motion contrast is not limited to the above. For example, the motion contrast may be calculated based on the phase information of the complex data, or based on both the amplitude and the phase information. Alternatively, the motion contrast may be calculated based on the real part or the imaginary part of the complex data.
In the present embodiment, the decorrelation value is calculated as the motion contrast, but the calculation method of the motion contrast is not limited to this. For example, the motion contrast may be calculated based on the difference between the two values, or based on the ratio of the two values.
Furthermore, although the final motion contrast image is obtained above by taking the average of the plural acquired decorrelation values, the present invention is not limited to this. For example, an image whose pixel values are the median or the maximum of the plural acquired decorrelation values may be generated as the final motion contrast image.
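A sketch of the per-cluster computation, assuming Expression (2) has the decorrelation form as reconstructed above (which is consistent with the stated properties: 0 ≤ Mxy ≤ 1, larger amplitude difference giving a value closer to 1); the reducer can be swapped for a median or maximum as noted, and the epsilon guard is an implementation detail of this example.

import numpy as np

def motion_contrast(cluster, reducer=np.mean, eps=1e-12):
    # cluster: (R, Z, X) amplitude tomograms acquired at the same position
    a, b = cluster[:-1], cluster[1:]             # adjacent image pairs
    m = 1.0 - 2.0 * a * b / (a**2 + b**2 + eps)  # decorrelation per pair
    return reducer(m, axis=0)                    # combine the R-1 values per pixel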
The image processing unit 101-04 three-dimensionally registers the group of motion contrast images obtained through the repeated OCTA imaging and averages them, thereby generating a high-contrast composite motion contrast image. The combining processing is not limited to simple averaging. For example, the luminance values of each motion contrast image may be averaged after arbitrary weighting, or an arbitrary statistic such as the median may be calculated. The case where the registration processing is performed two-dimensionally is also included in the present invention.
The combining unit 101-45 may be configured to determine whether any motion contrast image unsuitable for the combining processing is included, and then perform the combining processing excluding the motion contrast images determined to be unsuitable. For example, a motion contrast image may be determined to be unsuitable for the combining processing when its evaluation value (for example, the mean or median of the decorrelation values) is outside a predetermined range.
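A sketch of such screening before averaging; the acceptance interval is a placeholder, not a value from the disclosure.

import numpy as np

def composite_motion_contrast(mc_volumes, lo=0.05, hi=0.5):
    kept = [v for v in mc_volumes if lo <= v.mean() <= hi]  # drop unsuitable volumes
    return np.mean(kept, axis=0) if kept else None          # average the remainder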
In the present embodiment, after the combining unit 101-45 three-dimensionally combines the motion contrast images, the correction unit 101-44 performs processing to three-dimensionally suppress the projection artifacts arising in the motion contrast image.
Here, a projection artifact refers to the phenomenon in which the motion contrast within the superficial retinal vessels is projected onto the deeper side (the deep retinal layer, the outer retinal layer, and the choroid), producing high decorrelation values in deep regions where no vessels actually exist. The correction unit 101-44 executes processing to suppress the projection artifacts arising in the three-dimensional composite motion contrast image. Any known projection artifact suppression method may be used; in the present embodiment, step-down exponential filtering is used. In step-down exponential filtering, the projection artifacts are suppressed by executing the processing expressed by Expression (3) on each A-scan of the three-dimensional motion contrast image.
(Expression (3), rendered as an image in the original, defines the post-suppression decorrelation D_E(x, y, z) in terms of the pre-suppression decorrelation D(x, y, z) and the attenuation coefficient γ.)
Here, γ is an attenuation coefficient having a negative value, D(x, y, z) denotes the decorrelation value before the projection artifact suppression processing, and D_E(x, y, z) denotes the decorrelation value after the suppression processing.
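Since Expression (3) itself is not reproduced in this text, the following is only one plausible reading of a step-down exponential filter: each voxel's decorrelation is attenuated by an exponential of the decorrelation accumulated above it in the same A-scan, so values projected below superficial vessels are suppressed. Both this form and the value of gamma are assumptions of this sketch.

import numpy as np

def stepdown_exponential_filter(mc_volume, gamma=-2.0):
    # mc_volume: (Z, Y, X) decorrelation D; gamma < 0 per the description
    cum = np.cumsum(mc_volume, axis=0) - mc_volume   # sum of D over z' < z
    return mc_volume * np.exp(gamma * cum)           # assumed D_E = D * exp(gamma * sum)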
Finally, the image processing apparatus 101 saves the acquired image group (SLO images and tomographic images) and their imaging condition data, as well as the generated motion contrast images and the accompanying generation condition data, in the external storage unit 102 in association with the examination date and time and the information identifying the examined eye.
<Step 1005>
The weighting unit 101-442 generates a weighted motion contrast image in which the luminance values in the blood vessel candidate regions of the motion contrast image are weighted using the information on the distribution of the blood vessel candidate regions generated by the blood vessel acquisition unit 101-421 in S1003. Next, the high-dimensional conversion unit 101-4411 generates a high-dimensionally smoothed motion contrast image, and the low-dimensional conversion unit 101-4412 generates a low-dimensionally smoothed motion contrast image by smoothing the weighted motion contrast image in the fast axis direction. Further, the calculation unit 101-443 generates a luminance correction coefficient map for the motion contrast image by an arithmetic operation on the high-dimensionally smoothed motion contrast image and the low-dimensionally smoothed motion contrast image.
The details of the luminance correction coefficient map generation processing will be described in S511 to S516.
<Step 1006>
The correction unit 101-44 generates a motion contrast image with the luminance steps corrected by multiplying each pixel of the motion contrast image by the luminance correction coefficient value calculated in S1005. The method of applying the luminance correction coefficient is not limited to multiplication, and any known arithmetic method may be applied. It suffices that at least a part of the three-dimensional motion contrast image is corrected using the luminance correction coefficient values. Here, at least a part of the three-dimensional motion contrast image also includes a C-scan motion contrast image and the like.
<Step 1007>
The display control unit 101-05 displays the luminance-corrected motion contrast image generated in S1006 on the display unit 104. Further, when the operator selects a button (not shown) or a shortcut menu displayed on the display unit 104 using the input unit 103, the luminance-corrected motion contrast image is saved in the storage unit 101-02 or the external storage unit 102. It is preferable that the image processing unit 101-04, which is an example of an image generation unit, generates at least one front image (front motion contrast image) based on the corrected at least part of the three-dimensional motion contrast image. In that case, the display control unit 101-05 preferably causes the display unit 104 to display the generated at least one front image.
In the present embodiment, a report screen 1300 is displayed on the display unit 104 by pressing the Report button 1312 in FIG. 13. A luminance-step-corrected front tomographic image 1309 is displayed at the lower left of the report screen 1300, and its projection range can be changed by the operator selecting from the list displayed in a list box 1310. A luminance-step-corrected front tomographic image or front motion contrast image is superimposed on the SLO image at the upper left of the report screen 1300. At the upper and lower center of the report screen 1300, luminance-step-corrected motion contrast images 1301 and 1305 with different projection depth ranges are displayed. The projection range of a luminance-step-corrected front motion contrast image can be changed by the operator selecting from the preset depth range sets (1302 and 1306) displayed in the list boxes. The projection range can also be changed by changing the type and offset position of the layer boundaries used to specify it from user interfaces such as 1303 and 1307, or by operating the layer boundary data (1304 and 1308) superimposed on the tomographic image via the input unit 103 to move them.
Furthermore, the image projection method for the luminance-step-corrected motion contrast image and the presence or absence of projection artifact suppression processing may be changed by selection from a user interface such as a context menu.
Furthermore, the operator may press the synthesis instruction button 1311 for synthesizing a plurality of motion contrast images, thereby generating a luminance-step-corrected synthesized motion contrast image. The synthesis instruction button in FIG. 13 shows an example for superimposition processing; however, the present invention is not limited to this, and a synthesis instruction relating to stitching processing is also included in the present invention.
By selecting the user interface (1313) for specifying whether the luminance step correction processing is applied to the tomographic image or the motion contrast image, a tomographic image or motion contrast image with the application state changed may be displayed. For example, according to the selection or non-selection of the check box 1313 in FIG. 13, the application state (applied/not applied) of the luminance step correction processing for the tomographic images and motion contrast images displayed on the report screen 1300 may be switched. The user interface is not limited to a check box; any known user interface may be used. A user interface that can indicate the applicability of the luminance step correction processing independently for tomographic images and motion contrast images may also be provided. For example, separate instruction buttons may be provided for tomographic images and motion contrast images, or a single user interface may be configured to select from four options: (1) apply to both, (2) apply only to tomographic images, (3) apply only to motion contrast images, (4) apply to neither. Alternatively, a tomographic image or motion contrast image to which the luminance step correction processing has been applied and one to which it has not been applied may be displayed side by side on the display unit 104.
Next, details of the processing executed in S1005 will be described with reference to the flowchart shown in FIG. 5B.
<Step 511>
The correction unit 101-44 acquires the motion contrast images and the synthesized motion contrast image generated by the motion contrast data generation unit 101-12 and the synthesis unit 101-45 in S1004. Next, the operator specifies, via the user interface displayed on the display unit 104, a desired projection depth range and instructs generation of a front motion contrast image corresponding to that range. The projection unit 101-43 projects within the instructed depth range to generate a front motion contrast image (FIG. 11A). In the present embodiment, as the projection processing of the three-dimensional motion contrast image, the maximum value of the motion contrast data in the depth direction corresponding to each pixel in the plane corresponding to the fundus front is used as the pixel value of that pixel. However, the projection processing is not limited to such maximum intensity projection, and any known projection method may be used. For example, the median, maximum, mode, or the like of the motion contrast data in the depth direction corresponding to each pixel may be used as the pixel value.
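As an illustrative sketch only (function and parameter names are assumptions, not from the patent), a depth-range projection of a 3D motion contrast volume could look like:

```python
import numpy as np

def project_front_image(volume: np.ndarray,
                        z_top: int, z_bottom: int,
                        method: str = "max") -> np.ndarray:
    """Project a (z, y, x) motion contrast volume onto a front
    (en-face) image over the depth range [z_top, z_bottom) (S511)."""
    slab = volume[z_top:z_bottom]
    if method == "max":      # maximum intensity projection (the default here)
        return slab.max(axis=0)
    if method == "median":   # one of the alternative projections noted above
        return np.median(slab, axis=0)
    raise ValueError(f"unknown projection method: {method}")
```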
<Step 512>
The high-dimensional conversion unit 101-4411 calculates a high-dimensional approximate value distribution by two-dimensionally smoothing the luminance values of the front motion contrast image. In the present embodiment, the high-dimensional conversion unit 101-4411 two-dimensionally smooths the luminance value of each pixel in the front motion contrast image generated in S511, thereby calculating a high-dimensional approximate value distribution of the luminance values of the motion contrast image as shown in FIG. 11C.
In the present embodiment, smoothing is performed as an example of the processing for calculating the approximate value distribution; however, a morphological operation such as closing or opening may be performed instead, as described later. The smoothing may be performed with an arbitrary spatial filter, or by frequency-transforming the motion contrast data with a fast Fourier transform (FFT) or the like and then suppressing the high-frequency components. When the FFT is used, no convolution operation is required, so the smoothing can be executed at high speed.
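For illustration only (the filter type, sigma, and cutoff below are assumptions; the patent allows any spatial filter or FFT-based high-frequency suppression), the 2D smoothing of S512 could be sketched as:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_dim_approx(front: np.ndarray, sigma: float = 15.0) -> np.ndarray:
    """2D smoothing of a front motion contrast image (S512)."""
    return gaussian_filter(front, sigma=sigma)

def high_dim_approx_fft(front: np.ndarray, cutoff: float = 0.05) -> np.ndarray:
    """FFT variant: suppress high spatial frequencies instead of convolving."""
    spectrum = np.fft.fftshift(np.fft.fft2(front))
    ny, nx = front.shape
    y, x = np.ogrid[-ny // 2:ny - ny // 2, -nx // 2:nx - nx // 2]
    lowpass = (y / ny) ** 2 + (x / nx) ** 2 <= cutoff ** 2
    return np.fft.ifft2(np.fft.ifftshift(spectrum * lowpass)).real
```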
<Step 513>
The weighting unit 101-442 acquires the blood vessel candidate region map V(x, y) (FIG. 7D) from the blood vessel acquisition unit 101-421.
<Step 514>
The weighting unit 101-442 weights the luminance values of the motion contrast image in the blood vessel candidate regions using the values of the blood vessel candidate region map V(x, y). Note that this weighting is an example of the different calculation processing that, when acquiring the low-dimensional approximate value distribution (an example of the second approximate value distribution), is applied to a predetermined tissue, namely a blood vessel or bleeding region lying along the fast axis direction of the measurement light used to acquire the three-dimensional motion contrast image, as opposed to the other regions. This weighting is not essential to the present invention.
In the present embodiment, a weighted front motion contrast image M_w(x, y) is generated by weighting the front motion contrast image so that, in regions where the value (vessel likelihood) of the blood vessel candidate region map V(x, y) is high, the luminance value of the corresponding region in the front motion contrast image acquired in S511 approaches the high-dimensional approximate value M_2ds(x, y) calculated in S512, and so that, in regions where V(x, y) is low, the luminance value M(x, y) of the front motion contrast image acquired in S511 is preserved as far as possible. Specifically, it may be calculated as follows:
M_w(x, y) = (1.0 - V(x, y)) * M(x, y) + V(x, y) * M_2ds(x, y)
FIG. 11E shows an example of the weighted front motion contrast image M_w(x, y).
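A direct transcription of this blending formula into NumPy (array names assumed; V is taken to lie in [0, 1]) might look like:

```python
import numpy as np

def weight_vessel_regions(m: np.ndarray,
                          m_2ds: np.ndarray,
                          v: np.ndarray) -> np.ndarray:
    """Blend the front image M toward its 2D-smoothed version M_2ds
    according to the vessel-likelihood map V (S514)."""
    return (1.0 - v) * m + v * m_2ds
```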
Note that the weighting method for the luminance values of the blood vessel candidate regions shown here is merely an example; any weighting may be used as long as it is processing that reduces the luminance values of blood vessel candidate regions running in the fast axis direction, or brings them closer to the luminance values in the vicinity of those regions.
<Step 515>
The low-dimensional conversion unit 101-4412 calculates a low-dimensional approximate value distribution of the luminance values of the motion contrast image. Specifically, processing for calculating an approximate value distribution along the fast axis direction (smoothing or a morphological operation) is applied to the luminance value of each pixel of the front motion contrast image in which the luminance values of the blood vessel candidate regions have been weighted. FIG. 11F shows an example of the low-dimensional approximate value distribution calculated in this step. Because the low-dimensional conversion (smoothing in the fast axis direction) is performed on an image in which the luminance values of blood vessel regions running in the fast axis direction have been suppressed, the problem of such vessel regions remaining as band-shaped high-luminance regions can be avoided.
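A sketch of the fast-axis smoothing, again with assumed names, filter type, and width (the fast axis is taken here to be the array's last axis):

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def low_dim_approx(m_weighted: np.ndarray, width: int = 64) -> np.ndarray:
    """1D smoothing along the fast axis only (S515); the slow axis is
    left untouched so that band-shaped steps between clusters remain
    visible to the subsequent correction step."""
    return uniform_filter1d(m_weighted, size=width, axis=-1)
```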
<Step 516>
The arithmetic unit 101-443 calculates a luminance correction coefficient distribution for the motion contrast image by performing an arithmetic operation on the high-dimensional approximate value distribution and the low-dimensional approximate value distribution of the motion contrast image.
In the present embodiment, a luminance correction coefficient map for the motion contrast image (FIG. 11G) is generated by dividing the luminance values of the two-dimensionally smoothed motion contrast image generated in S512 by the luminance values of the weighted, fast-axis-smoothed motion contrast image generated in S515.
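The division can be sketched as follows; the epsilon guard against division by zero is an assumption added for numerical safety, not part of the patent text:

```python
import numpy as np

def correction_coeff_map(high_dim: np.ndarray,
                         low_dim: np.ndarray,
                         eps: float = 1e-6) -> np.ndarray:
    """Luminance correction coefficient map (S516): ratio of the
    2D-smoothed image to the fast-axis-smoothed weighted image."""
    return high_dim / np.maximum(low_dim, eps)
```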
Next, the effect of the luminance weighting of the blood vessel candidate regions (making it easier to selectively suppress only the luminance step artifacts) will be described with reference to FIGS. 12A to 12G. FIG. 12A is an example of a motion contrast image including both a band-shaped luminance step (white line) and blood vessel regions running in the fast axis direction. It is necessary to selectively suppress only the band-shaped luminance step without over-suppressing the luminance values of the blood vessel regions running in the fast axis direction.
FIGS. 12B, 12D, and 12F show examples of the processing results when the low-dimensional conversion of S515, the luminance correction coefficient map calculation of S516, and the luminance step correction of S1006 are executed without performing the luminance weighting on the blood vessel candidate regions. In the blood vessel region indicated by the white arrow in FIG. 12B (running in the fast axis direction), a high-luminance region remains as a band, resembling a luminance step. Consequently, a low correction coefficient value is calculated in the blood vessel region indicated by the white arrow in FIG. 12D even though it is not a luminance step, and the luminance step correction in S1006 over-suppresses the luminance values of the vessel running in the fast axis direction and its neighboring region (the region indicated by the white arrow in FIG. 12F).
On the other hand, FIGS. 12C, 12E, and 12G show the processing results when the luminance weighting of the blood vessel candidate regions is performed before executing the low-dimensional conversion of S515, the luminance correction coefficient map calculation of S516, and the luminance step correction of S1006. In FIG. 12C, no band-shaped high-luminance region corresponding to the blood vessel region running in the fast axis direction occurs. Accordingly, appropriate correction coefficient values are calculated for the blood vessel region in FIG. 12E as well, and no over-suppression of the luminance values of the vessel running in the fast axis direction and its neighboring region is observed in the luminance step correction of S1006 (FIG. 12G).
In the present embodiment, the method of suppressing a band-shaped luminance step occurring on a front motion contrast image (generating a luminance-step-corrected front motion contrast image) has been described; however, the present invention is not limited to this. A band-shaped luminance step occurring on a three-dimensional motion contrast image may be suppressed by the following procedure to generate a luminance-step-corrected three-dimensional motion contrast image.
That is, front motion contrast images are generated in advance for a number of different projection depth ranges, and a luminance step correction coefficient map is generated for each front motion contrast image. Next, for the luminance value of each pixel in the three-dimensional motion contrast image, the value (correction coefficient value) of the luminance step correction coefficient map corresponding to the projection depth range to which that pixel belongs is applied, whereby a luminance-step-corrected three-dimensional motion contrast image can be generated. Examples of the different projection depth ranges include four types: the superficial retina, the deep retina, the outer retina, and the choroid. Alternatively, the individual layers belonging to the retina and choroid may be specified.
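A sketch of this per-depth-range correction of the full volume; the slab representation and names are illustrative assumptions (real slab boundaries would come from layer segmentation):

```python
import numpy as np

def correct_volume(volume: np.ndarray,
                   slabs: list[tuple[int, int]],
                   coeff_maps: list[np.ndarray]) -> np.ndarray:
    """Apply, to each pixel of a (z, y, x) motion contrast volume, the
    correction coefficient map of the depth range (slab) it belongs to."""
    out = volume.astype(np.float32).copy()
    for (z_top, z_bottom), coeff in zip(slabs, coeff_maps):
        out[z_top:z_bottom] *= coeff  # coeff has shape (y, x)
    return out
```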
As for the timing of the luminance step correction, a luminance-step-corrected three-dimensional tomographic image or three-dimensional motion contrast image may be generated in advance by the procedure described above, and the corresponding front image may be generated and displayed at the point when the operator instructs generation of a front tomographic image or front motion contrast image. Examples of the generation timing of the luminance-step-corrected three-dimensional tomographic image or three-dimensional motion contrast image include immediately after the tomographic imaging, at reconstruction, and at saving. Alternatively, at the time the operator instructs generation of a front tomographic image or front motion contrast image, the luminance step correction may be performed (for the projection depth range specified by the instruction), and the luminance-step-corrected front tomographic image or front motion contrast image then displayed. An example of the user interface for instructing generation of a front tomographic image is the list box 1310 shown in FIG. 13. Examples of the user interface for instructing generation of a front motion contrast image are the user interfaces for specifying and changing the projection depth range (1302, 1303, 1304, 1306, 1307, 1308) shown in FIG. 13. The luminance step correction may also be executed on each tomographic image or motion contrast image in advance, or based on a synthesis (superimposition or stitching) instruction from the operator, with the synthesis performed afterwards. Alternatively, the luminance step correction may be performed on the synthesized image (superimposed or stitched image) and the result displayed on the display unit 104.
Furthermore, the present invention is not limited to the correction of band-shaped luminance steps on motion contrast images generated by so-called 3-D scans, and can also be applied to the correction of luminance steps occurring in the slow axis direction of motion contrast images captured with various scan patterns. For example, correcting luminance steps occurring in the slow axis direction when imaging with a number of circle scans of different radii, or with radial scans, is also included in the present invention.
According to the configuration described above, the image processing apparatus 101 performs the following image processing in order to robustly suppress the various luminance step artifacts occurring in the slow axis direction of a motion contrast image generated from tomographic images of the subject's eye obtained by cluster imaging using OCT. That is, a luminance correction coefficient map is generated by dividing the luminance values of the high-dimensionally (two-dimensionally) smoothed motion contrast image by the luminance values of a motion contrast image that has been smoothed in the fast axis direction after weighting with the blood vessel candidate regions acquired in the same manner as in the first embodiment. Further, by multiplying the motion contrast image by the luminance correction coefficients, the luminance step artifacts occurring in the slow axis direction of the motion contrast image are robustly suppressed. This makes it possible to robustly suppress the luminance steps occurring in the slow axis direction of the motion contrast image of the subject's eye.
[Third embodiment]
The image processing apparatus according to the present embodiment displays, on the imaging confirmation screen, medical images such as front images (front tomographic images or front motion contrast images) to which the various artifact reduction processes, such as the luminance step artifact suppression described above, have not been applied; on the report screen, on the other hand, it displays medical images to which the artifact reduction processing has been applied. Thus, for example, on the display screen after imaging (the imaging confirmation screen), the operator can check medical images in a state with as little processing applied as possible, in order to easily grasp whether the imaging succeeded (or the degree of failure). On another display screen (the report screen), for example, the operator can check medical images in which the various artifacts unnecessary for analysis have been reduced as much as possible, in order to grasp the analysis results and the like. This enables the operator to check medical images suited to the purpose at hand.
Note that the artifacts in the present embodiment are not limited to the luminance step artifacts described above, and may be anything within the scope of the purpose described above. For example, an artifact in the present embodiment may be a projection artifact in OCTA (a blood vessel that does not actually exist in a lower layer being rendered because the fluctuation of the shadow of an upper-layer blood vessel is falsely detected as motion contrast). The various artifact reduction processes themselves may start executing immediately after imaging, or after the transition from the imaging confirmation screen to the report screen. The image processing unit 101-04 is an example of a generating unit that generates front images with the various artifacts reduced. The imaging confirmation screen is an example of a first display screen, and the report screen is an example of a second display screen. The report screen is one of the display screens shown after switching from the imaging confirmation screen, and is a display screen on which the operator checks the images after the various processes, the analysis results, and the like.
The image processing apparatus according to the present embodiment may also include a reception unit that receives instructions regarding whether the processing for suppressing luminance step artifacts occurring in the slow axis direction of tomographic images or motion contrast images should be applied. In accordance with the instructions regarding the applicability of the suppression processing that the reception unit receives on each of the imaging confirmation screen and the report screen, the display control unit can then cause the display unit to display tomographic images or motion contrast images with the luminance step artifact suppression processing applied or not applied. Specifically, a case will be described in which the settings regarding the application of the luminance step artifact suppression processing when displaying tomographic images and motion contrast images differ between the imaging confirmation screen and the report screen.
The image processing apparatus according to the present embodiment differs from those of the first and second embodiments in that it includes a reception unit (not shown). The image processing flow in the present embodiment is the same as in the second embodiment (FIG. 10). In FIG. 10, S1001 to S1004 are the same as in the second embodiment, so their description is omitted.
<Step 1005>
The weighting unit 101-442 generates a luminance correction coefficient map for motion contrast images by performing the same processing as in S1005 of the second embodiment. In the present embodiment, the weighting unit 101-442 also generates a luminance correction coefficient map for tomographic images by performing the same processing as in S304 of the first embodiment.
<Step 1006>
The correction unit 101-44 multiplies each pixel of the motion contrast image by the luminance correction coefficient value for motion contrast images calculated in S1005, thereby generating a luminance-step-corrected motion contrast image. In the present embodiment, the correction unit 101-44 also generates a luminance-step-corrected tomographic image by multiplying each pixel of the tomographic image by the luminance correction coefficient value for tomographic images calculated in S1005.
<Step 1007>
Based on the tomographic images generated in S1002, the motion contrast image generated in S1004, and the luminance-step-corrected motion contrast image and tomographic image generated by the correction unit 101-44, the display control unit 101-05 displays the imaging confirmation screen (FIG. 14) on the display unit 104. On this imaging confirmation screen, a fundus image 1401 is displayed at the upper left, a front tomographic image 1402 at the lower left, and B-scan tomographic images (1406a, 1406b, 1406c) corresponding to the scanning positions (1402a, 1402b, 1402c) of the front tomographic image. The screen also includes an image quality index value 1405 of the acquired tomographic images and an instruction button 1404 for automatically and continuously displaying the slices of the acquired three-dimensional tomographic image. Based on the tomographic images and motion contrast image displayed on the imaging confirmation screen, the operator issues an instruction regarding whether to save the captured tomographic images (pressing the OK button 1407 or the NG button 1408) or an instruction regarding continuing repeated imaging (pressing the Repeat button 1409). Based on the instruction received by the reception unit, the image processing apparatus performs the corresponding data saving or imaging continuation processing. Furthermore, pressing the Report button 1312, as in S1007 of the second embodiment, causes the display control unit 101-05 to display the report screen 1300 on the display unit 104.
On the imaging confirmation screen (FIG. 14), the front tomographic image 1402 is displayed at the lower left, and the display is switched to the front motion contrast image (FIG. 15B) once the generation of the motion contrast image and the luminance step correction processing have finished. In the present embodiment, a user interface 1403 for indicating whether the luminance step correction processing is applied to the tomographic images or motion contrast image displayed on the imaging confirmation screen is provided at the bottom left of that screen. In FIG. 14, a check box labeled "Image Quality Enhancement" is displayed as an example of the user interface 1403, shown in the unselected (OFF) state. In this selection state, the display control unit 101-05 displays the tomographic images and motion contrast image on the display unit 104 with the luminance step correction processing not applied (for the motion contrast image, for example, a state such as FIG. 15B). On the other hand, when the check box is selected (ON), the tomographic images and motion contrast image are displayed on the display unit 104 with the luminance step correction processing applied (FIGS. 15A and 15C). As in the second embodiment, a user interface may be provided that can indicate the applicability of the luminance step correction processing on this imaging confirmation screen independently for tomographic images and motion contrast images. For example, separate instruction user interfaces may be provided for tomographic images and motion contrast images, or a single user interface may be configured to select from four options: (1) apply to both, (2) apply only to tomographic images, (3) apply only to motion contrast images, (4) apply to neither. For example, by selecting (1), the operator can grasp the image quality of the tomographic images and motion contrast image used in actual medical care before deciding whether to save the tomographic images or continue repeated imaging. By selecting (4), the operator can grasp to what extent, and where, there are poorly imaged portions of the tomographic images (low-luminance regions) and positional shifts between tomographic images that cause white lines when generating the motion contrast image, before deciding whether to save the tomographic images or continue repeated imaging. Alternatively, by selecting (3), the operator can decide whether to save the tomographic images or continue repeated imaging while grasping the poorly imaged portions of the tomographic images and understanding the image quality of the final motion contrast image. If only fine positional shifts between tomographic images need to be grasped, (2) may be selected.
The user interface regarding whether the luminance step correction processing is applied to tomographic images and motion contrast images may be configured so that it can be set independently for the imaging confirmation screen and the report screen. For example, the default setting of the user interface may be a state in which an instruction is selected to display tomographic images and motion contrast images with the luminance step correction processing not applied on the imaging confirmation screen and applied on the report screen. With this configuration, failed imaging portions can be easily grasped when confirming the imaging, and high-quality tomographic images and motion contrast images suited to medical care can be observed on the report screen.
At this time, the configuration may be such that, when the display screen transitions, the instruction selected before the transition is also reflected after the transition. For example, an instruction selected on the imaging confirmation screen may be reflected after the transition to the report screen. Also, for example, an instruction selected on a report screen displaying images obtained at a certain date and time may be reflected after the transition to a display screen for follow-up observation. Furthermore, an instruction selected on the display screen for follow-up observation may be collectively reflected on a plurality of images of different dates and times. These configurations can improve convenience for the operator.
Furthermore, the correction processing targeted by the user interface for specifying applicability on the imaging confirmation screen or report screen is not limited to the luminance step correction processing; any known image quality enhancement processing may be made selectable. For example, the imaging confirmation screen may include a user interface for receiving an instruction regarding the applicability of image quality enhancement processing based on machine learning, and the application or non-application of that processing to the tomographic images or motion contrast image may be switched for display according to the selection state of that user interface.
On the imaging confirmation screen or report screen, a tomographic image or motion contrast image may be displayed with the application or non-application of the luminance step correction processing switched by a predetermined user interface or script. Alternatively, the image processing apparatus 101 may be configured to display, side by side, a tomographic image or motion contrast image to which the luminance step correction processing has been applied and one to which it has not been applied.
According to the configuration described above, the image processing apparatus 101 may include a reception unit that receives instructions regarding whether the processing for suppressing luminance step artifacts occurring in the slow axis direction of tomographic images or motion contrast images should be applied. In accordance with the instructions regarding the applicability of the suppression processing that the reception unit receives on each of the imaging confirmation screen and the report screen, the display control unit can then cause the display unit to display tomographic images or motion contrast images with the luminance step artifact suppression processing applied or not applied. Consequently, failed imaging portions can be easily checked when confirming the imaging, and high-quality tomographic images and motion contrast images suited to medical care can be observed when viewing the report screen.
(Modification 1)
On the imaging confirmation screens in the various embodiments described above, the display control unit may display the determination result (classification result) of the state of various artifacts (for example, luminance step artifacts) in a front image (front tomographic image or front motion contrast image) together with the front image. Here, the state of an artifact is, for example, the presence or absence of the artifact. In this case, for example, a plurality of front images of at least one subject eye are labeled with the state of the various artifacts (for example, their presence or absence), and a learned model obtained by machine learning using the plurality of labeled front images is used to display, on the imaging confirmation screen, the state of the various artifacts (for example, that the artifacts are present) in an input front image. That is, the determination result of the artifact state obtained using the learned model described above can be displayed on the imaging confirmation screen. By using a learned model, for example, the processing time can be shortened while the determination is made with high accuracy, so the examiner can confirm an accurate determination result even immediately after imaging. Moreover, the efficiency of the examiner's judgments, such as whether re-imaging is necessary, can be improved even immediately after imaging, which improves the accuracy and efficiency of diagnosis. The labels for the states of the various artifacts may be input manually by the operator via a user interface, or may be the execution results of a rule-based analysis that determines the various artifacts automatically or semi-automatically. The display screen on which the artifact state determination result is displayed is not limited to the imaging confirmation screen, and may be at least one display screen such as the report screen, a display screen for follow-up observation, or a preview screen for various adjustments before imaging (a display screen on which various live moving images are displayed).
Here, machine learning includes, for example, deep learning composed of a multi-layer neural network. For at least one layer of the multi-layer neural network, for example, a convolutional neural network (CNN) can be used. However, machine learning is not limited to deep learning; any model capable of extracting (representing) by itself, through learning, the feature quantities of learning data such as images may be used. The learned model may be obtained, for example, by supervised learning using learning data in which information on the state of the various artifacts serves as ground truth (teacher data) and medical images such as front images serve as input data. The learned model may also be obtained, without using the above-mentioned ground truth (teacher data), by unsupervised learning using a plurality of medical images, such as a plurality of front images of at least one subject eye, as learning data. Furthermore, the learned model may be updated by additional learning and thereby customized into, for example, a model suited to the operator.
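Purely as an illustrative sketch (the patent does not specify an architecture or framework; the Keras framework, layer sizes, and names below are all assumptions), a small CNN classifier for artifact presence could be expressed as:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_artifact_classifier(input_shape=(256, 256, 1)) -> tf.keras.Model:
    """Toy binary classifier: front image in, P(artifact present) out."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),  # artifact present / absent
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```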
The image processing unit 101-04 is also an example of a determination unit (classification unit) that determines the state of various artifacts in a medical image such as a front image. The determination unit can, for example, determine the presence or absence of an artifact in a medical image as the artifact state (classify into present/absent). The determination unit may also, for example, determine a grade according to the degree of the artifact in the medical image as the artifact state (classify the medical image into one of a plurality of grades). The plurality of grades may be, for example, the presence or absence of artifacts, or a plurality of levels according to the number of artifacts, the size of their extent, and the like. The determination unit may also, for example, evaluate the type of artifact in the medical image as the artifact state (classify the medical image into one of a plurality of types).
The learned model described above may also be obtained by learning using learning data consisting of sets of a plurality of types of medical images corresponding to one another. In this case, the learned model can be obtained, for example, by learning using learning data consisting of sets of a front tomographic image and a front motion contrast image of the same site of the same subject eye (or obtained from interference signals in which at least a part of the predetermined site is shared). In this way, by learning using learning data consisting of sets of a plurality of medical images of mutually different types, not only can the artifact state be determined, but the images can also be classified into types corresponding to the feature quantities of the medical images, and the accuracy of this classification can be improved.
In addition, analysis results such as the thickness of a desired layer or various blood vessel densities may be displayed on the report screens in the various embodiments described above. In this case, for example, accurate analysis results can be displayed by analyzing medical images to which the various artifact reduction processes have been applied. The analysis results may be displayed as an analysis map, as sectors indicating statistical values corresponding to each divided region, or the like. The learned model described above may be one obtained by learning using the analysis results of medical images as learning data. The learned model may also be obtained by learning using learning data including a medical image and the analysis result of that medical image, learning data including a medical image and the analysis result of a medical image of a different type from that medical image, or the like. Furthermore, the learned model may be one obtained by learning using learning data including input data consisting of sets of a plurality of medical images of different types of a predetermined site, such as a front tomographic image and a front motion contrast image.
Various diagnosis results, such as for glaucoma or age-related macular degeneration, may also be displayed on the report screens in the various embodiments described above. In this case, for example, accurate diagnosis results can be displayed by analyzing medical images to which the various artifact reduction processes have been applied. As the diagnosis result, the position of an identified abnormal site may be displayed on the image, and the state of the abnormal site and the like may be displayed as text. The learned model described above may be one obtained by learning using the diagnosis results of medical images as learning data. The learned model may also be obtained by learning using learning data including a medical image and the diagnosis result of that medical image, learning data including a medical image and the diagnosis result of a medical image of a different type from that medical image, or the like.
The learned model described above may also be a learned model obtained by learning with learning data including input data consisting of sets of a plurality of medical images of different types of a predetermined site of the subject. In this case, conceivable input data included in the learning data are, for example, input data consisting of a set of a motion contrast front image and a luminance front image (or luminance tomographic image) of the fundus, or input data consisting of a set of a tomographic image (B-scan image) of the fundus and a color fundus image (or fluorescence fundus image). The plurality of medical images of different types may be anything acquired by different modalities, different optical systems, different principles, and the like. The learned model described above may also be a learned model obtained by learning with learning data including input data consisting of sets of a plurality of medical images of different sites of the subject. In this case, conceivable input data included in the learning data are, for example, input data consisting of a set of a tomographic image (B-scan image) of the fundus and a tomographic image (B-scan image) of the anterior segment, or input data consisting of a set of a three-dimensional OCT image of the macula of the fundus and a circle scan (or raster scan) tomographic image of the optic disc of the fundus. The input data included in the learning data may also be a plurality of medical images of different sites and of different types of the subject; in this case, conceivable input data are, for example, a set of a tomographic image of the anterior segment and a color fundus image. The learned model described above may also be a learned model obtained by learning with learning data including input data consisting of sets of a plurality of medical images of a predetermined site of the subject with different imaging angles of view. The input data included in the learning data may also be a combination of a plurality of medical images obtained by time-dividing a predetermined site into a plurality of regions, as in a panoramic image. Furthermore, the input data included in the learning data may be input data consisting of sets of a plurality of medical images of a predetermined site of the subject at different dates and times.
The display screen on which at least one of the analysis results and diagnosis results described above is displayed is not limited to the report screen, and may be at least one display screen such as the imaging confirmation screen, a display screen for follow-up observation, or a preview screen for various adjustments before imaging (a display screen on which various live moving images are displayed). For example, by displaying at least one of the analysis results and diagnosis results obtained using the learned model described above on the imaging confirmation screen, the examiner can confirm accurate results even immediately after imaging.
(Modification 2)
The preview screens in the various embodiments and modifications described above may be configured so that the learned model described above is used for at least each frame of a live moving image. In this case, when a plurality of live moving images of different sites or different types are displayed on the preview screen, the configuration may be such that the learned model corresponding to each live moving image is used. Thus, for example, since the processing time can be shortened even for live moving images, the examiner can obtain highly accurate information before the start of imaging. This can reduce, for example, failures requiring re-imaging, improving the accuracy and efficiency of diagnosis. The plurality of live moving images are, for example, a moving image of the anterior segment for alignment in the XYZ directions, a front moving image of the fundus for focus adjustment of the fundus observation optical system and OCT focus adjustment, and a tomographic moving image of the fundus for OCT coherence gate adjustment (adjustment of the optical path length difference between the measurement optical path length and the reference optical path length).
The moving images to which the learned model described above can be applied are not limited to live moving images, and may be, for example, moving images stored (saved) in the storage unit. In this case, for example, a moving image obtained by aligning at least each frame of a tomographic moving image of the fundus stored (saved) in the storage unit may be displayed on the display screen. For example, when it is desired to observe the vitreous body suitably, a reference frame may be selected based on conditions such as the vitreous body being present in the frame as much as possible, and a moving image in which the other frames have been aligned with the selected reference frame may be displayed on the display screen.
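As one hedged illustration of such frame alignment (the patent does not prescribe a registration method; phase correlation and rigid translations are assumptions made here for the sketch):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_to_reference(frames: np.ndarray, ref_index: int) -> np.ndarray:
    """Rigidly align each frame of a (t, y, x) tomographic movie to a
    chosen reference frame using phase correlation."""
    ref = frames[ref_index]
    aligned = np.empty_like(frames)
    for t, frame in enumerate(frames):
        offset, _, _ = phase_cross_correlation(ref, frame)
        aligned[t] = nd_shift(frame, offset)  # apply the estimated (dy, dx)
    return aligned
```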
(Modification 3)
In the various embodiments and modifications described above, when a learned model is undergoing additional learning, it may be difficult to produce output (inference/prediction) using the learned model itself that is undergoing additional learning. It is therefore advisable to prohibit the input of medical images to a learned model during additional learning. Also, another learned model identical to the one undergoing additional learning may be prepared as a spare learned model. In that case, it is advisable that medical images can be input to the spare learned model during the additional learning. Then, after the additional learning is completed, the learned model resulting from the additional learning is evaluated, and if there is no problem, the spare learned model is replaced by the learned model resulting from the additional learning. If there is a problem, the spare learned model may be used instead.
Learned models obtained by learning for each imaged site may also be made selectively usable. Specifically, the apparatus may have a selection unit that selects one of a plurality of learned models including a first learned model obtained using learning data including a first imaged site (lung, subject eye, etc.) and a second learned model obtained using learning data including a second imaged site different from the first. The apparatus may also have a control unit that, in response to an instruction from the operator, searches for data in which the imaged site corresponding to the selected learned model (from header information or manually input by the operator) is paired with a captured image of that site (for example, via a network from a server of an external facility such as a hospital or research institute), and executes learning using the retrieved data as learning data as additional learning on the selected learned model. This allows efficient additional learning for each imaged site, using captured images of the site corresponding to the learned model.
When learning data for additional learning is acquired via a network from a server of an external facility such as a hospital or research institute, it is desirable to reduce the loss of reliability caused by tampering, system trouble during additional learning, and the like. The validity of the learning data for additional learning may therefore be checked by confirming consistency through a digital signature or hashing, which protects the learning data. If the validity of the learning data cannot be confirmed by the digital signature or hash check, a warning to that effect is issued and additional learning with that data is not performed.
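The consistency check by hashing may be sketched as follows (a minimal example assuming a SHA-256 digest published alongside the data; the actual signature scheme and transport are not specified in this disclosure):

```python
import hashlib

def verify_training_data(payload: bytes, expected_digest: str) -> bool:
    """Return True if the downloaded learning data matches the published digest."""
    return hashlib.sha256(payload).hexdigest() == expected_digest

# Usage sketch: warn and skip additional learning when validity cannot be confirmed.
# if not verify_training_data(data, digest):
#     print("warning: learning data failed the integrity check; additional learning skipped")
```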
(Modification 4)
In the various embodiments and modifications described above, an instruction from the examiner may be given by voice or the like, in addition to a manual instruction (for example, an instruction via a user interface). In that case, for example, a machine learning engine including a speech recognition engine obtained by machine learning may be used. A manual instruction may also be given by character input using a keyboard, touch panel, or the like, in which case a machine learning engine including a character recognition engine obtained by machine learning may be used. The instruction from the examiner may also be a gesture, in which case a machine learning engine including a gesture recognition engine obtained by machine learning may be used. Here, machine learning includes deep learning as described above, and a recurrent neural network (RNN) can be used for at least one layer of the multi-layer neural network, for example.
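As a minimal sketch of such a recognition engine (using PyTorch as one possible framework; the feature and class sizes are placeholder assumptions), a recurrent layer can form at least one layer of the multi-layer network, for example over a sequence of speech features:

```python
import torch
import torch.nn as nn

class RecognitionNet(nn.Module):
    """Toy recognizer: one recurrent layer over a feature sequence plus a classifier."""

    def __init__(self, n_features=40, hidden=128, n_classes=10):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1, :])   # classify from the last time step
```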
(Modification 5)
In the various embodiments and modifications described above, the object under examination is not limited to an eye; it may be any predetermined part of the subject. The front image of the predetermined part of the subject may be any medical image. The medical image to be processed is an image of a predetermined part of the subject, and the image contains at least a portion of that part; the medical image may also include other parts of the subject. The medical image may be a still image or a moving image, and may be a monochrome or color image. Furthermore, the medical image may represent the structure (morphology) of the predetermined part or its function. Images representing function include, for example, images representing hemodynamics (blood flow volume, blood flow velocity, and so on) such as OCTA images, Doppler OCT images, fMRI images, and ultrasonic Doppler images. The predetermined part of the subject may be determined according to the imaging target, and includes the human eye (eye to be examined), organs such as the brain, lungs, intestines, heart, pancreas, kidneys, and liver, and any other part such as the head, chest, legs, and arms.
The medical image may be a tomographic image of the subject or a front image. Front images include, for example, a front image of the fundus, a front image of the anterior segment, a fluorescence fundus image, and an En-Face image generated from data acquired by OCT (three-dimensional OCT data) using data in at least a partial range of the imaging target in the depth direction. The En-Face image may also be an OCTA En-Face image (motion contrast front image) generated from three-dimensional OCTA data (three-dimensional motion contrast data) using data in at least a partial range in the depth direction of the imaging target. Three-dimensional OCT data and three-dimensional motion contrast data are examples of three-dimensional medical image data.
An imaging apparatus is an apparatus for capturing images used for diagnosis. Imaging apparatuses include, for example, an apparatus that obtains an image of a predetermined part of the subject by irradiating it with light, radiation such as X-rays, electromagnetic waves, or ultrasonic waves, and an apparatus that obtains an image of a predetermined part by detecting radiation emitted from the subject. More specifically, the imaging apparatuses according to the following embodiments include at least an X-ray imaging apparatus, a CT apparatus, an MRI apparatus, a PET apparatus, a SPECT apparatus, an SLO apparatus, an OCT apparatus, an OCTA apparatus, a fundus camera, and an endoscope.
The OCT apparatus may include a time-domain OCT (TD-OCT) apparatus and a Fourier-domain OCT (FD-OCT) apparatus. The Fourier-domain OCT apparatus may include a spectral-domain OCT (SD-OCT) apparatus and a swept-source OCT (SS-OCT) apparatus. The SLO apparatus and the OCT apparatus may also include an adaptive-optics SLO (AO-SLO) apparatus, an adaptive-optics OCT (AO-OCT) apparatus, or the like using a wavefront-compensating optical system.
[Fourth embodiment]
In order to robustly suppress luminance step artifacts occurring in the slow axis direction of a wide-field tomographic image or motion contrast image, the image processing apparatus according to the present embodiment generates distribution information of blood vessel candidate regions based on values obtained by normalizing luminance statistics, calculated for different depth ranges, with a local representative value of the in-plane distribution of those statistics. That is, the image processing apparatus according to the present embodiment can generate distribution information on a predetermined region (blood vessel candidate region) by comparing the distribution information obtained by comparing a plurality of pieces of distribution information corresponding to a plurality of depth ranges in a three-dimensional tomographic image with the distribution information obtained by normalizing (for example, smoothing) that same information with a local representative value. It suffices that the distribution information is obtained in the end; an image (map) need not be generated as intermediate distribution information. Next, a luminance correction coefficient value distribution is generated by dividing the luminance values of a high-dimensionally smoothed tomographic or motion contrast image by the luminance values of a low-dimensionally (fast-axis-direction-only) smoothed tomographic or motion contrast image whose blood vessel candidate regions have been weighted. Finally, a case will be described in which each pixel of the tomographic or motion contrast image is multiplied by the corresponding luminance correction coefficient value, thereby robustly suppressing luminance step artifacts occurring in the slow axis direction of the wide-field tomographic or motion contrast image.
Here, among the luminance steps to be corrected in the present embodiment, an example of luminance steps occurring in the slow axis direction of a wide-field motion contrast image will be described. When there is positional misalignment between tomographic images scanned multiple times at the same position, high motion contrast values are computed even in regions where no red blood cell displacement actually occurs, producing a band-shaped high-luminance step (white line). When a wide-field motion contrast image is acquired as in the present embodiment, fixation tends to deteriorate as the imaging time elapses, so many luminance steps (white lines) can appear in the same motion contrast image (FIGS. 17A and 18A). Poor fixation also causes not only misalignment between tomographic images but also blinking of the eye to be examined and drooping eyelashes, so a wide-field motion contrast image may contain many low-luminance steps (black bands) in addition to high-luminance steps (white lines).
On the other hand, when a luminance-step-corrected motion contrast image is generated from a wide-field motion contrast image by generating information on the distribution of blood vessel candidate regions with the procedure described in S303 of the first embodiment and then applying the procedure described in S1004 to S1006 of the second embodiment, there is a problem that the luminance attenuation rate is easily affected by the nerve fiber layer thickness. When the angle of view of the tomographic or motion contrast image is small, the difference in nerve fiber layer thickness between sites is small and its influence on the luminance attenuation rate is small. With a wide field of view, however, the difference in nerve fiber layer thickness between sites is large and its influence on the luminance attenuation rate can no longer be ignored. For example, as shown in FIG. 18B, regardless of the presence or absence of blood vessels, the luminance attenuation rate tends to be large near the optic nerve head (white arrow), where the nerve fiber layer is thick, and tends to be small in the periphery (gray arrow), where the nerve fiber layer is thin. Consequently, if luminance step correction is performed based on such blood vessel candidate distribution information, a high-luminance step (white line) may remain near the optic nerve head (white arrow in FIG. 18F), and the motion contrast values of blood vessel regions running in the fast axis direction may be over-suppressed in the periphery (gray arrow in FIG. 18F).
Therefore, in the present embodiment, after the luminance attenuation rate (any value indicating a degree of difference may be used; in this embodiment a difference value is used) is calculated at each A-scan position by the procedure of S501 to S505, a representative value (for example, a mean or a median) of the luminance attenuation rate is calculated in an in-plane neighborhood region centered on each A-scan position. Normalizing with this representative value at each A-scan position prevents the luminance attenuation rate from being affected by the nerve fiber layer thickness.
The configuration of the image processing system 10 including the image processing apparatus 101 according to the present embodiment and the image processing flow of the present embodiment are the same as in the second embodiment and are therefore omitted. In FIG. 10, the steps other than S1003 and S1005 are also the same as in the second embodiment, and their description is omitted.
<Step 1003>
The blood vessel acquisition unit 101-421 generates information on the distribution of blood vessel candidate regions based on the result of comparing luminance statistics between different predetermined depth ranges. In the present embodiment, blood vessel candidate regions are identified based on the degree of difference (difference or ratio) in luminance between the depth range in which blood vessels are most likely to exist (retinal surface layer) and the depth range in which the luminance decrease due to shadows appears most prominently (outer retina).
To keep the luminance attenuation rate from being affected by the nerve fiber layer thickness, in the present embodiment the luminance attenuation rate is calculated at each A-scan position by the procedure of S501 to S505, after which a representative value (local average) of the luminance attenuation rate is calculated in an in-plane neighborhood region centered on each A-scan position. By dividing the luminance attenuation rate calculated at each A-scan position by this local average for normalization, information on the distribution of blood vessel candidate regions can be generated robustly even for wide-field tomographic images. The normalization of the luminance attenuation rate in the present invention is not limited to methods based on a local representative value of the attenuation rate; for example, the luminance attenuation rate calculated at each A-scan position may be normalized by dividing it by the nerve fiber layer thickness or by a local representative value (such as a local average) of the nerve fiber layer thickness. That is, the image processing apparatus according to the present embodiment may generate distribution information on the predetermined region (blood vessel candidate region) by comparing the distribution information obtained by comparing a plurality of pieces of distribution information corresponding to a plurality of depth ranges in the three-dimensional tomographic image with distribution information on the thickness of a predetermined layer of the eye to be examined.
The details of the processing executed in S1003 will now be described with reference to the flowchart shown in FIG. 5A. Steps S501 to S504 are the same as in the first and second embodiments and are omitted.
<Step 505>
The blood vessel acquisition unit 101-421, which is an example of the information generating means, calculates the distribution of the luminance attenuation rate Ar based on the luminance values of the two types of front tomographic images (FIGS. 16A and 16B) computed in S504, in order to compare luminance statistics between different depth ranges. As the index for this comparison, the present embodiment computes (luminance of the retinal surface layer front tomographic image) − (luminance of the outer retina front tomographic image) at each pixel (x, y) to generate a map of the luminance attenuation rate Ar(x, y) (FIG. 16C).
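A minimal numpy sketch of this step follows (array layout and depth-range indices are illustrative assumptions; in practice the ranges follow the segmented layer boundaries of each A-scan):

```python
import numpy as np

def attenuation_map(volume, surface_range, outer_range):
    """volume: 3D tomogram shaped (z, y, x), with z the depth direction.
    surface_range / outer_range: (start, stop) z-index pairs for the retinal
    surface layer and the outer retina."""
    i_surf = volume[slice(*surface_range)].mean(axis=0)   # front-image mean, surface layer
    i_outer = volume[slice(*outer_range)].mean(axis=0)    # front-image mean, outer retina
    return i_surf - i_outer                               # Ar(x, y), difference index
```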
<Step 506>
The blood vessel acquisition unit 101-421 generates a blood vessel candidate region map V(x, y), representing blood-vessel-likeness, by normalizing the luminance attenuation rate map Ar(x, y) generated in S505.
In the present embodiment, a representative value of the luminance attenuation rate is calculated in a neighborhood of predetermined size at each pixel position (x, y) of the luminance attenuation rate map Ar(x, y) computed in S505. Here a local average is used as the representative value; FIG. 16D shows an example of the resulting local average distribution. The representative value is not limited to this, and any known representative value (for example, a median) may be used. Next, normalization using the representative value is applied to the value of the luminance attenuation rate Ar(x, y) at each pixel position (x, y); in the present embodiment, subtraction is used as the normalization. The present invention is not limited to this: for example, when (representative luminance of the retinal surface layer ÷ representative luminance of the outer retina) is used as the luminance attenuation rate, division may be used as the normalization in this step. As noted above, the normalization of the luminance attenuation rate in the present invention is also not limited to methods based on a local representative value of the attenuation rate; for example, the luminance attenuation rate calculated at each A-scan position may be normalized by dividing it by the nerve fiber layer thickness or by a local representative value (such as a local average) of the nerve fiber layer thickness.
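The local-representative normalization may be sketched as below (a box-filter local mean via scipy; the window size is an assumed parameter):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def normalize_attenuation(ar, window=64):
    """Subtract the local mean of Ar computed in a window-by-window neighborhood
    of each pixel, removing thickness-driven slow trends (e.g., near the optic
    nerve head) before the map is converted to blood-vessel-likeness."""
    local_mean = uniform_filter(ar, size=window, mode="nearest")
    return ar - local_mean   # subtraction, matching the difference-based Ar
```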
Further, the blood vessel candidate region map V(x, y) is computed by normalizing the normalized luminance attenuation rate map Ar(x, y) with predetermined values WL and WW as
V(x, y) = (Ar(x, y) − WL) / WW
so that 0 ≤ V(x, y) ≤ 1 is satisfied. FIG. 16E shows an example of the blood vessel candidate region map V(x, y); the blood vessel candidate regions are depicted stably regardless of the site. This normalization is not limited to the above, and any known normalization method may be used.
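The windowing into [0, 1] is then a direct transcription of the formula above (WL and WW are the predetermined values; the defaults below are placeholders):

```python
import numpy as np

def vessel_candidate_map(ar_norm, wl=0.0, ww=1.0):
    """V(x, y) = (Ar(x, y) - WL) / WW, clipped so that 0 <= V <= 1."""
    return np.clip((ar_norm - wl) / ww, 0.0, 1.0)
```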
<Step 1005>
The weighting unit 101-442 uses the information on the distribution of blood vessel candidate regions generated by the blood vessel acquisition unit 101-421 in S1003 (FIG. 17D) to generate a weighted motion contrast image (FIG. 17E) in which the luminance values in the blood vessel candidate regions of the wide-field motion contrast image (FIG. 17A) are weighted. Next, the high-dimensional conversion unit 101-4411 generates a high-dimensionally smoothed motion contrast image (FIG. 17C), and the low-dimensional conversion unit 101-4412 generates a low-dimensionally smoothed motion contrast image (FIG. 17F) by smoothing the weighted motion contrast image in the fast axis direction. The arithmetic unit 101-443 then generates a luminance correction coefficient map for the motion contrast image (FIG. 17G) through an arithmetic operation between the high-dimensionally smoothed and the low-dimensionally smoothed motion contrast images.
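Under these definitions the coefficient map may be sketched as follows (box filters and the window size are assumptions; eps guards against division by zero; m_w is the weighted image generated in S514 described below):

```python
import numpy as np
from scipy.ndimage import uniform_filter, uniform_filter1d

def correction_coefficient_map(m, m_w, size=64, eps=1e-6):
    """m:   front motion contrast image M(x, y), shaped (y, x)
    m_w: weighted image M_w(x, y) from the blood vessel candidate map.
    Returns the 2D (high-dimensional) smoothing of M divided by the
    fast-axis-only (low-dimensional) smoothing of M_w; multiplying M by
    this map flattens the slow-axis luminance steps."""
    high = uniform_filter(m, size=size, mode="nearest")              # 2D smoothing
    low = uniform_filter1d(m_w, size=size, axis=1, mode="nearest")   # fast axis = x
    return high / (low + eps)
```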
The details of the processing executed in S1005 will now be described with reference to the flowchart shown in FIG. 5B. Steps other than S514 are the same as in the second embodiment and are omitted.
<Step 514>
The weighting unit 101-442 weights the luminance values in the blood vessel candidate regions of the motion contrast image using the values of the blood vessel candidate region map V(x, y). This weighting is an example of calculation processing that, when acquiring the low-dimensional approximate value distribution (an example of the second approximate value distribution), differs between a predetermined tissue, namely a blood vessel or bleeding region lying along the fast axis direction of the measurement light used to acquire the three-dimensional motion contrast image, and the other regions. This weighting is not essential to the present invention.
In the present embodiment, a weighted front motion contrast image M_w(x, y) is generated by weighting the front motion contrast image such that, the higher the value (blood-vessel-likeness) of the blood vessel candidate region map V(x, y), the closer the luminance value of the corresponding region of the front motion contrast image acquired in S511 is brought to the high-dimensional approximate value M_2ds(x, y) computed in S512, while in regions where V(x, y) is low, the luminance value M(x, y) of the front motion contrast image acquired in S511 is preserved as much as possible. Specifically, it may be computed as
M_w(x, y) = (1.0 − V(x, y)) * M(x, y) + V(x, y) * M_2ds(x, y)
FIG. 17E shows an example of the weighted front motion contrast image M_w(x, y).
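The weighting itself is a per-pixel linear blend (a direct transcription of the formula above; M_2ds denotes the high-dimensional approximate value from S512):

```python
def weighted_motion_contrast(m, m_2ds, v):
    """M_w = (1 - V) * M + V * M_2ds: vessel-like pixels (V near 1) are pulled
    toward the 2D-smoothed approximate value; other pixels keep their value."""
    return (1.0 - v) * m + v * m_2ds
```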
The weighting method for the luminance values of blood vessel candidate regions shown here is merely an example; any weighting may be used as long as it reduces the luminance values of blood vessel candidate regions running in the fast axis direction or brings them closer to the luminance values in the neighborhood of those regions.
Because the luminance steps occurring in a wide-field tomographic image vary more in depth (height) than those occurring in a wide-field motion contrast image, when suppressing luminance steps in a wide-field tomographic image it is desirable to make the weight on M_2ds smaller (and the weight on M(x, y) larger) than when suppressing luminance steps in a wide-field motion contrast image. Alternatively, a blood vessel candidate region map Vt(x, y) for the wide-field tomographic image and a map Vm(x, y) for the wide-field motion contrast image may be generated separately such that Vt(x, y) < Vm(x, y).
Next, the effect of luminance weighting of blood vessel candidate regions computed from a blood vessel candidate region map with the luminance attenuation rate normalization applied, namely that only luminance step artifacts in a wide-field image are easily and selectively suppressed without influence from the nerve fiber layer thickness, will be described with reference to FIGS. 18A to 18G. FIG. 18A is an example of a motion contrast image containing both many band-shaped luminance steps (white lines) and blood vessel regions running in the fast axis direction. Only the luminance steps must be suppressed stably and selectively, both where the nerve fiber layer is thick (near the optic nerve head) and where it is thin (the periphery of the wide-field image).
FIGS. 18B, 18D, and 18F show examples of the results of executing, respectively, the blood vessel candidate region map generation of S1003, the weighted image generation for blood vessel regions of S514, and the luminance step correction of S1006, based on a blood vessel candidate region map generated without normalization by the local average of the luminance attenuation rate. In FIG. 18B, in the region where the nerve fiber layer is thick (near the optic nerve head, white arrow) the luminance attenuation rate is computed high even in non-vessel regions, while in the region where it is thin (the periphery of the map, gray arrow) the attenuation rate is computed too low despite these being vessel regions. Consequently, in FIG. 18D the difference between vessel and non-vessel regions tends to be small near the optic nerve head (white arrow), and vessel regions remain highly luminous in the image periphery (gray arrow). As a result, in FIG. 18F the luminance steps are insufficiently suppressed near the optic nerve head (white arrow), and vessel regions are over-suppressed in the image periphery (gray arrow).
In contrast, FIGS. 18C, 18E, and 18G show the results when the blood vessel candidate region map generation of S1003, the weighted image generation of S514, and the luminance step correction of S1006 are executed after normalization by the local average of the luminance attenuation rate. In FIG. 18C the blood vessel regions are depicted stably both where the nerve fiber layer is thick (near the optic nerve head) and where it is thin (image periphery). Accordingly the vessel regions are weighted appropriately in FIG. 18E as well, and the luminance step correction of S1006 shows neither insufficient suppression of luminance steps near the optic nerve head nor over-suppression of the luminance values of vessel regions in the image periphery (FIG. 18G).
In the present embodiment, a method has been described for suppressing band-shaped luminance steps occurring in a wide-field front motion contrast image (that is, for generating a luminance-step-corrected wide-field front motion contrast image), but the present invention is not limited to this. Band-shaped luminance steps occurring in a wide-field three-dimensional motion contrast image may be suppressed, and a luminance-step-corrected wide-field three-dimensional motion contrast image generated, by the following procedure.
That is, front motion contrast images are generated for a number of different projection depth ranges, and a luminance step correction coefficient map is generated for each front motion contrast image. A luminance-step-corrected three-dimensional motion contrast image can then be generated by applying, to the luminance value of each pixel of the three-dimensional motion contrast image, the value (correction coefficient value) of the coefficient map corresponding to the projection depth range to which that pixel belongs. Examples of different projection depth ranges include four types: retinal surface layer, deep retinal layer, outer retina, and choroid. Alternatively, the type of each layer belonging to the retina and choroid may be specified.
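A sketch of applying depth-range-specific coefficient maps to the volume follows (the per-voxel layer labels are assumed to come from segmentation; names and the label scheme are illustrative):

```python
import numpy as np

def correct_volume(volume, coeff_maps, depth_labels):
    """volume: 3D motion contrast data shaped (z, y, x).
    depth_labels: (z, y, x) integers assigning each voxel to a projection depth
    range (e.g., 0 surface, 1 deep, 2 outer retina, 3 choroid).
    coeff_maps: dict mapping each label to its (y, x) correction coefficient map."""
    corrected = volume.copy()
    for label, coeff in coeff_maps.items():
        mask = depth_labels == label
        # broadcast the 2D map over depth, then apply it only inside this range
        corrected[mask] = (volume * coeff[np.newaxis, :, :])[mask]
    return corrected
```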
As for the timing of the luminance step correction, a luminance-step-corrected wide-field three-dimensional tomographic image or motion contrast image may be generated in advance by the above procedure, and the corresponding front image may be generated and displayed when the operator gives an instruction to generate a front tomographic or motion contrast image. Examples of the timing for generating the corrected wide-field three-dimensional tomographic or motion contrast image include immediately after the tomographic image is captured, at reconstruction, and at saving. Alternatively, the luminance step correction may be performed (for the projection depth range specified by the instruction) at the moment the operator gives an instruction to generate a wide-field front tomographic or motion contrast image, and the corrected wide-field front tomographic or motion contrast image may then be displayed.
Although luminance step suppression for a wide-field motion contrast image has been described in this embodiment as an example of a wide-field image, the present invention is not limited to this. The present invention also encompasses robustly suppressing luminance steps occurring in a wide-field tomographic image (both where the nerve fiber layer is thick and where it is thin) by performing the image processing described in S304 to S305 of the first embodiment using the blood vessel candidate region map generated by the procedure described in S1003 of the present embodiment.
According to the configuration described above, in order to robustly suppress luminance step artifacts occurring in the slow axis direction of a wide-field tomographic or motion contrast image, blood vessel candidate distribution information is generated based on values obtained by normalizing luminance statistics, calculated for different depth ranges, with a local representative value of their in-plane distribution. A luminance correction coefficient value distribution is then generated by dividing the luminance values of the high-dimensionally smoothed tomographic or motion contrast image by the luminance values of the low-dimensionally (fast-axis-only) smoothed tomographic or motion contrast image weighted in the blood vessel candidate regions. Finally, luminance step artifacts occurring in the slow axis direction are robustly suppressed by multiplying each pixel of the tomographic or motion contrast image by the corresponding luminance correction coefficient value.
This makes it possible to robustly suppress luminance steps occurring in the slow axis direction of a wide-field tomographic or motion contrast image of the eye to be examined.
[Other Embodiments]
In the embodiments above, the present invention is realized as the image processing apparatus 101, but embodiments of the present invention are not limited to the image processing apparatus 101 alone. For example, the present invention can take the form of a system, an apparatus, a method, a program, a storage medium, or the like. The present invention is also realized by the following processing: software (a program) that realizes the functions of the various embodiments and modifications described above is supplied to a system or apparatus via a network or various storage media, and a computer (or a CPU, MPU, or the like) of that system or apparatus reads out and executes the program.
The present invention is not limited to the embodiments above, and various changes and modifications can be made without departing from the spirit and scope of the present invention. Accordingly, the following claims are appended to make the scope of the present invention public.
This application claims priority based on Japanese Patent Application No. 2018-171736 filed on September 13, 2018, Japanese Patent Application No. 2019-044264 filed on March 11, 2019, and Japanese Patent Application No. 2019-133788 filed on July 19, 2019, the entire contents of which are incorporated herein by reference.

Claims (31)

1. An image processing apparatus comprising:
acquisition means for acquiring a distribution of correction coefficient values by an operation between a first approximate value distribution obtained by performing two-dimensional conversion processing on at least one front image based on a three-dimensional tomographic image or a three-dimensional motion contrast image of an eye to be examined, and a second approximate value distribution obtained by performing one-dimensional conversion processing on the at least one front image;
correction means for correcting at least a part of the three-dimensional tomographic image or the three-dimensional motion contrast image using the distribution of correction coefficient values; and
generation means for generating at least a part of the corrected image.
2. The image processing apparatus according to claim 1, wherein the one-dimensional conversion processing is conversion processing in the fast axis direction of measurement light used when acquiring the three-dimensional tomographic image or the three-dimensional motion contrast image.
3. The image processing apparatus according to claim 1 or 2, wherein the operation is an operation of dividing the first approximate value distribution by the second approximate value distribution or subtracting the second approximate value distribution from the first approximate value distribution.
4. The image processing apparatus according to any one of claims 1 to 3, wherein the generation means generates at least one front image based on the corrected at least part of the three-dimensional tomographic image or the three-dimensional motion contrast image, and
the front image is either a front tomographic image generated from the three-dimensional tomographic image or a front motion contrast image generated from the three-dimensional motion contrast image.
5. The image processing apparatus according to any one of claims 1 to 4, wherein the acquisition means acquires a distribution of correction coefficient values for each of a plurality of depth ranges specified based on predetermined layer boundaries of the eye to be examined, and
the correction means corrects at least a part of the three-dimensional tomographic image or the three-dimensional motion contrast image using the distribution of correction coefficient values acquired for each of the plurality of depth ranges.
6. The image processing apparatus according to any one of claims 1 to 5, wherein the acquisition means acquires the second approximate value distribution by calculation processing that differs between a predetermined region, which is a region related to a blood vessel or a bleeding region existing along the fast axis direction of the measurement light used when acquiring the three-dimensional tomographic image or the three-dimensional motion contrast image, and the other regions.
7. The image processing apparatus according to claim 6, wherein the acquisition means acquires the second approximate value distribution by performing calculation processing that brings the luminance values of the predetermined region in the at least one front image closer to the luminance values of a neighboring region on the slow axis direction side of the predetermined region.
8. The image processing apparatus according to claim 7, wherein, when performing the calculation processing that brings the luminance values of the predetermined region in the at least one front image closer to the luminance values of the neighboring region on the slow axis direction side in order to acquire the second approximate value distribution, the acquisition means performs the calculation processing such that the weight with which the luminance values of the predetermined region are brought closer to the luminance values of the neighboring region is smaller in a case where the at least one front image is a front tomographic image than in a case where the at least one front image is a motion contrast front image.
9. The image processing apparatus according to any one of claims 6 to 8, wherein the generation means generates distribution information on the predetermined region by comparing a plurality of pieces of distribution information that are distribution information in an in-plane direction intersecting the depth direction of the eye to be examined and that correspond to a plurality of depth ranges in the three-dimensional tomographic image, and
the acquisition means acquires the second approximate value distribution, based on the generated distribution information, by calculation processing that differs between the predetermined region and the other regions.
10. An image processing apparatus comprising:
acquisition means for acquiring a three-dimensional tomographic image of an eye to be examined; and
generation means for generating distribution information on a predetermined region in the eye to be examined that causes shadows occurring along the depth direction, by comparing a plurality of pieces of distribution information that are distribution information in an in-plane direction intersecting the depth direction of the eye to be examined and that correspond to a plurality of depth ranges in the three-dimensional tomographic image.
11. The image processing apparatus according to claim 9 or 10, wherein the generation means generates the distribution information on the predetermined region by comparing distribution information obtained by comparing the plurality of pieces of distribution information with distribution information obtained by normalizing, with a local representative value, the distribution information obtained by comparing the plurality of pieces of distribution information.
12. The image processing apparatus according to claim 9 or 10, wherein the generation means generates the distribution information on the predetermined region by comparing distribution information obtained by comparing the plurality of pieces of distribution information with distribution information on the layer thickness of a predetermined layer of the eye to be examined.
13. The image processing apparatus according to any one of claims 9 to 12, wherein the plurality of pieces of distribution information are two average values obtained by averaging, in the depth direction, luminance values calculated in each of two different depth ranges in the three-dimensional tomographic image, and
the generation means generates the distribution information on the predetermined region by comparing the two average values.
14. The image processing apparatus according to any one of claims 9 to 13, wherein the distribution information on the predetermined region is distribution information of a region related to blood vessels of the eye to be examined.
15. The image processing apparatus according to any one of claims 9 to 14, further comprising display control means for causing display means to display the generated distribution information.
16. An image processing apparatus comprising:
acquisition means for acquiring a medical image of a predetermined part of a subject;
generation means for generating a medical image in which artifacts in the acquired medical image are reduced; and
display control means for causing the acquired medical image to be displayed on an imaging confirmation screen displayed on display means, and causing the generated medical image to be displayed on a report screen after the display screen displayed on the display means is switched from the imaging confirmation screen to the report screen.
17. An image processing apparatus comprising:
acquisition means for acquiring a medical image of a predetermined part of a subject;
generation means for generating a medical image in which artifacts in the acquired medical image are reduced; and
display control means for causing the acquired medical image to be displayed on a first display screen displayed on display means, and causing the generated medical image to be displayed on a second display screen after the display screen displayed on the display means is switched from the first display screen to the second display screen.
18. The image processing apparatus according to claim 16 or 17, further comprising reception means for receiving a designation as to whether reduction of artifacts in the acquired medical image is required, wherein
the display control means executes display control for switching, as the medical image displayed on the display means, between the acquired medical image and the generated medical image in accordance with the designation.
19. The image processing apparatus according to any one of claims 16 to 18, further comprising:
second acquisition means for acquiring a distribution of correction coefficient values by an operation between a first approximate value distribution obtained by performing two-dimensional conversion processing on at least one front image based on a three-dimensional tomographic image or a three-dimensional motion contrast image of the predetermined part, and a second approximate value distribution obtained by performing one-dimensional conversion processing on the at least one front image; and
correction means for correcting at least a part of the three-dimensional tomographic image or the three-dimensional motion contrast image using the distribution of correction coefficient values,
wherein the generation means generates at least one front image based on the corrected at least part of the three-dimensional tomographic image or the three-dimensional motion contrast image.
20. The image processing apparatus according to any one of claims 16 to 19, further comprising determination means for determining a state of artifacts in the acquired medical image using a learned model obtained by learning a plurality of medical images, wherein
the display control means causes the display means to display the acquired medical image and a determination result by the determination means.
21. An image processing apparatus comprising:
acquisition means for acquiring a medical image of a predetermined part of a subject;
determination means for determining a state of artifacts in the acquired medical image using a learned model obtained by learning a plurality of medical images; and
display control means for causing display means to display the acquired medical image and a determination result by the determination means.
22. The image processing apparatus according to claim 20 or 21, wherein the determination means determines the state of the artifacts by classifying the acquired medical image into one of a plurality of stages according to the degree of the artifacts.
23. The image processing apparatus according to any one of claims 20 to 22, wherein the determination means determines the state of the artifacts by classifying the acquired medical image into one of a plurality of types of artifacts.
24. The image processing apparatus according to any one of claims 20 to 23, wherein the learned model is obtained by learning with learning data comprising sets of a plurality of medical images of mutually different types.
25. The image processing apparatus according to any one of claims 20 to 24, wherein the learned model is obtained by learning with learning data comprising sets of a medical image and an analysis result.
26. An image processing method comprising:
a step of acquiring a distribution of correction coefficient values by an operation between a first approximate value distribution obtained by performing two-dimensional conversion processing on at least one front image based on a three-dimensional tomographic image or a three-dimensional motion contrast image of an eye to be examined, and a second approximate value distribution obtained by performing one-dimensional conversion processing on the at least one front image;
a step of correcting at least a part of the three-dimensional tomographic image or the three-dimensional motion contrast image using the distribution of correction coefficient values; and
a step of generating at least a part of the corrected image.
27. An image processing method comprising:
a step of acquiring a three-dimensional tomographic image of an eye to be examined; and
a step of generating distribution information on a predetermined region in the eye to be examined that causes shadows occurring along the depth direction, by comparing a plurality of pieces of distribution information that are distribution information in an in-plane direction intersecting the depth direction of the eye to be examined and that correspond to a plurality of depth ranges in the three-dimensional tomographic image.
28. An image processing method comprising:
a step of acquiring a medical image of a predetermined part of a subject;
a step of generating a medical image in which artifacts in the acquired medical image are reduced; and
a step of causing the acquired medical image to be displayed on an imaging confirmation screen displayed on display means, and causing the generated medical image to be displayed on a report screen after the display screen displayed on the display means is switched from the imaging confirmation screen to the report screen.
  29.  An image processing method comprising:
     acquiring a medical image of a predetermined part of a subject;
     generating a medical image in which an artifact in the acquired medical image is reduced; and
     displaying the acquired medical image on a first display screen displayed on a display means and, after the display screen displayed on the display means is switched from the first display screen to a second display screen, displaying the generated medical image on the second display screen displayed on the display means.
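Claims 28 and 29 describe a display sequence rather than an algorithm; the short sketch below shows one possible flow, assuming a display object exposing a show(screen_name, image) method. The class and method names are invented for illustration.

```python
class ScreenFlow:
    """One possible realization of claims 28-29: the acquired image is shown
    on the first screen (e.g., an imaging confirmation screen); only after
    the display switches to the second screen (e.g., a report screen) is the
    artifact-reduced image shown there."""
    def __init__(self, display):
        self.display = display  # assumed to expose show(screen_name, image)

    def on_image_acquired(self, acquired_image):
        self.display.show("confirmation", acquired_image)

    def on_switch_to_report(self, reduced_image):
        self.display.show("report", reduced_image)
```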
  30.  An image processing method comprising:
     acquiring a medical image of a predetermined part of a subject;
     determining a state of an artifact in the acquired medical image using a learned model obtained by learning a plurality of medical images; and
     displaying, on a display means, the acquired medical image and a result of the determination in the determining step.
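For claim 30 (and the grading of claims 22-23), a minimal sketch, assuming a Keras-style classifier object exposing predict() and a fixed list of artifact grades; the labels, preprocessing, and model interface are illustrative assumptions only.

```python
import numpy as np

ARTIFACT_GRADES = ["none", "mild", "severe"]  # illustrative stages (cf. claim 22)

def judge_artifact_state(model, medical_image):
    """Classify the acquired medical image into one of several artifact
    grades with a learned model, returning the label to be displayed
    alongside the image (claim 30, final step)."""
    x = medical_image.astype(np.float32)[np.newaxis, ..., np.newaxis]  # add batch/channel dims
    probs = model.predict(x)[0]
    return ARTIFACT_GRADES[int(np.argmax(probs))]
```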
  31.  A program for causing a computer to execute each step of the image processing method according to any one of claims 26 to 30.
PCT/JP2019/034685 2018-09-13 2019-09-04 Image processing apparatus, image processing method, and program WO2020054524A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2018-171736 2018-09-13
JP2018171736 2018-09-13
JP2019-044264 2019-03-11
JP2019044264 2019-03-11
JP2019-133788 2019-07-19
JP2019133788A JP7446730B2 (en) 2018-09-13 2019-07-19 Image processing device, image processing method and program

Publications (1)

Publication Number Publication Date
WO2020054524A1

Family

ID=69777623

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/034685 WO2020054524A1 (en) 2018-09-13 2019-09-04 Image processing apparatus, image processing method, and program

Country Status (1)

Country Link
WO (1) WO2020054524A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170119242A1 (en) * 2015-10-28 2017-05-04 Oregon Health & Science University Systems and methods for retinal layer segmentation in oct imaging and oct angiography
WO2017143300A1 (en) * 2016-02-19 2017-08-24 Optovue, Inc. Methods and apparatus for reducing artifacts in oct angiography using machine learning techniques
JP2018015189A (en) * 2016-07-27 2018-02-01 株式会社トプコン Ophthalmic image processing apparatus and ophthalmic imaging apparatus
US20180182082A1 (en) * 2016-12-23 2018-06-28 Oregon Health & Science University Systems and methods for reflectance-based projection-resolved optical coherence tomography angiography
JP2018153611A (en) * 2017-03-17 2018-10-04 キヤノン株式会社 Information processor, image generation method and program

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022071264A1 (en) * 2020-09-29 2022-04-07 テルモ株式会社 Program, model generation method, information processing device, and information processing method
WO2022196583A1 (en) * 2021-03-19 2022-09-22 株式会社トプコン Grade assessment device, opthalmic imaging device, program, recording medium and grade assessment method

Similar Documents

Publication Publication Date Title
JP7250653B2 (en) Image processing device, image processing method and program
KR20210041046A (en) Medical image processing device, medical image processing method, computer-readable medium, and learning completed model
WO2020036182A1 (en) Medical image processing device, medical image processing method, and program
US9615734B2 (en) Ophthalmologic apparatus
JP6322042B2 (en) Ophthalmic photographing apparatus, control method thereof, and program
US10165939B2 (en) Ophthalmologic apparatus and ophthalmologic apparatus control method
JP7305401B2 (en) Image processing device, method of operating image processing device, and program
WO2020183791A1 (en) Image processing device and image processing method
WO2021029231A1 (en) Ophthalmic device, method for controlling ophthalmic device, and program
JP7374615B2 (en) Information processing device, information processing method and program
JP7009265B2 (en) Image processing equipment, image processing methods and programs
JP2021037239A (en) Area classification method
JP7362403B2 (en) Image processing device and image processing method
JP2018153611A (en) Information processor, image generation method and program
JP2022155690A (en) Image processing device, image processing method, and program
WO2020050308A1 (en) Image processing device, image processing method and program
WO2020054524A1 (en) Image processing apparatus, image processing method, and program
JP2014048126A (en) Imaging device and imaging method
WO2020075719A1 (en) Image processing device, image processing method, and program
JP2021164535A (en) Image processing device, image processing method and program
WO2019230643A1 (en) Information processing device, information processing method, and program
US20190073776A1 (en) Image processing apparatus, optical coherence tomography apparatus, image processing method, and computer-readable medium
JP7344847B2 (en) Image processing device, image processing method, and program
JP7446730B2 (en) Image processing device, image processing method and program
JP2018114121A (en) Information processing device, information processing method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 19860647; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 19860647; Country of ref document: EP; Kind code of ref document: A1