WO2020050308A1 - Image processing device, image processing method and program - Google Patents


Info

Publication number
WO2020050308A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
tomographic
image processing
region
unit
Prior art date
Application number
PCT/JP2019/034752
Other languages
French (fr)
Japanese (ja)
Inventor
裕之 今村
秀謙 溝部
Original Assignee
Canon Inc. (キヤノン株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc. (キヤノン株式会社)
Publication of WO2020050308A1 publication Critical patent/WO2020050308A1/en

Classifications

    • G06T5/94
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • G06T5/60
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10101Optical tomography; Optical coherence tomography [OCT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Definitions

  • the present invention relates to an image processing device, an image processing method, and a program.
  • a tomographic imaging apparatus for the eye, such as an optical coherence tomography (OCT) apparatus, allows the state inside the retinal layers to be observed three-dimensionally.
  • because it is useful for diagnosing diseases more accurately, the tomographic imaging apparatus is widely used in ophthalmic medical care.
  • time-domain OCT (TD-OCT) combines a broadband light source with a Michelson interferometer. It is configured to move the position of the reference mirror at a constant speed, measure the interference between the reference light and the backscattered light acquired by the signal arm, and thereby obtain the reflected-light intensity distribution in the depth direction.
  • spectral-domain OCT (SD-OCT) and swept-source OCT (SS-OCT) are also known as types of OCT that acquire the interference signal at higher speed.
  • FIG. 4A is a diagram exemplifying a tomographic image in which a shadow region has occurred.
  • the object is, for example, a tissue such as a blood vessel 401, or a lesion such as an exudate (white spot) or a hemorrhage.
  • the brightness becomes maximum near the photoreceptor inner/outer segment junction (IS/OS) 4 or the retinal pigment epithelium 5 in the depth direction of the retina.
  • when the shadow region 402 occurs below the blood vessel 401, the brightness near the IS/OS 4 or the retinal pigment epithelium 5 within the shadow region 402 is reduced or lost.
  • in recent years, an OCT angiography (hereinafter referred to as OCTA) technique that non-invasively renders the fundus blood vessels three-dimensionally using OCT has come into use.
  • the same position is scanned a plurality of times with the measurement light, and the motion contrast obtained by the interaction between the displacement of the red blood cells and the measurement light is imaged.
  • the main scanning direction is the horizontal (x-axis) direction, and an interference signal is obtained at each position x1, x2, …, xm.
  • the scanning of the measurement light in the horizontal direction is referred to as a B scan here.
  • the figure shows an example of OCTA imaging in which the B scan is performed r times continuously at each position yi (1 ≤ i ≤ n) in the sub-scanning direction (y-axis direction).
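As context for how such repeated B scans become a motion-contrast image, here is a minimal sketch of a decorrelation-style computation. The patent does not specify a formula; the particular decorrelation variant and the `eps` guard below are illustrative assumptions.

```python
import numpy as np

def motion_contrast(bscans):
    """Compute a simple motion-contrast B-scan from r repeated OCT B-scans.

    bscans: array of shape (r, z, x) holding r intensity B-scans acquired
    at the same slow-axis position yi. Uses the mean decorrelation between
    consecutive repeats (one of several published variants, assumed here).
    """
    b = np.asarray(bscans, dtype=float)
    eps = 1e-12  # guard against division by zero in empty regions
    # Decorrelation between consecutive pairs, averaged over the r-1 pairs.
    pairs = [1.0 - (2.0 * b[i] * b[i + 1]) / (b[i] ** 2 + b[i + 1] ** 2 + eps)
             for i in range(b.shape[0] - 1)]
    return np.mean(pairs, axis=0)

# Static tissue (identical repeats) yields ~0 contrast, while flowing blood
# (a fluctuating signal) yields higher values.
static = np.ones((4, 8, 8))
flow = np.random.default_rng(0).uniform(0.5, 1.5, size=(4, 8, 8))
assert motion_contrast(static).max() < 1e-6
assert motion_contrast(flow).mean() > motion_contrast(static).mean()
```

The decorrelation is near zero where consecutive repeats agree (static tissue) and rises where red-blood-cell displacement makes the signal fluctuate, which is what the region 404 of high motion contrast represents.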
  • FIG. 4C shows an example in which a three-dimensional motion contrast image is superimposed and displayed on a three-dimensional OCT tomographic image.
  • a high motion contrast, which is a projection artifact (hereinafter referred to as PA), occurs in a region 405 below the superficial blood vessels of the retina.
  • PA refers to a phenomenon in which, as the OCT signal in the blood vessel 401 (retinal superficial vessel) repeatedly increases and decreases, the OCT signal in the shadow region 402 increases and decreases correspondingly, so that a high motion contrast appears in the outer retinal layers where no blood vessels originally exist.
  • as a result, a region 405 with a high motion contrast value is formed on the deep side of a region 404 with a high motion contrast value corresponding to a retinal surface blood vessel.
  • in Patent Document 1, a layer boundary is obtained from a tomographic image of the eye acquired by OCT, and a shadow region is detected based on image features (brightness values and layer shapes) at the layer boundary. Then, brightness correction in the shadow region, or a change of the layer-detection parameters, is performed by image processing.
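A rough sketch of this layer-boundary-based detection idea follows. It is a simplified illustration, not the actual algorithm of Patent Document 1; the window size `win` and the `drop_ratio` threshold are hypothetical parameters chosen only for demonstration.

```python
import numpy as np

def detect_shadow_columns(tomogram, isos_row, win=3, drop_ratio=0.5):
    """Flag shadow-affected A-scans in a B-scan by comparing the brightness
    around a layer boundary (here the IS/OS row) with its lateral median.

    tomogram: (z, x) B-scan; isos_row: (x,) z-index of the IS/OS boundary.
    Returns a boolean mask over the x axis (True = likely shadow).
    """
    z, nx = tomogram.shape
    # Mean brightness in a small band centred on the boundary, per A-scan.
    band = np.array([
        tomogram[max(0, isos_row[ix] - win): isos_row[ix] + win + 1, ix].mean()
        for ix in range(nx)
    ])
    # An A-scan whose band brightness falls well below the lateral median
    # is treated as attenuated by an overlying object such as a vessel.
    return band < drop_ratio * np.median(band)

img = np.zeros((10, 6))
isos = np.full(6, 5)
img[4:7, :] = 1.0      # bright IS/OS band
img[:, 2] *= 0.1       # one attenuated (shadowed) column
mask = detect_shadow_columns(img, isos)
assert mask.tolist() == [False, False, True, False, False, False]
```

This captures the essence of the prior-art approach the embodiment improves on: detection relies on an explicit layer boundary and hand-tuned thresholds, which is what makes it fragile across diseases and sites.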
  • the present invention has been made in view of the above problems, and has as its object to correct a shadow region in a tomographic image regardless of a disease or a site.
  • the present invention is not limited to the above object; providing operational effects that are derived from each configuration shown in the embodiments for carrying out the invention described later, and that cannot be obtained by the conventional technology, can also be positioned as another object of the present invention.
  • an image processing apparatus according to one embodiment of the present invention includes: image acquisition means for acquiring a tomographic image of an eye to be examined; and correction means for correcting, using a learned model, pixel values of a shadow region in the tomographic image, the shadow region being generated by an object included in the eye to be examined.
  • a shadow area in a tomographic image can be corrected regardless of a disease or a site.
  • FIG. 1 is a block diagram illustrating a configuration of an image processing device according to a first embodiment of the present invention.
  • FIG. 1 is a schematic configuration diagram of an image processing system according to an embodiment of the present invention.
  • FIG. 2B is a diagram illustrating a measurement optical system included in the tomographic image photographing apparatus included in the image processing system illustrated in FIG. 2A.
  • 5 is a flowchart of a process that can be executed by the image processing system according to the first embodiment of the present invention.
  • FIG. 3 is a diagram illustrating an OCT tomographic image including a shadow area.
  • FIG. 4 is a diagram illustrating a method of scanning measurement light during OCTA imaging.
  • FIG. 4 is a diagram for explaining PA occurring below a blood vessel in a motion contrast image.
  • FIG. 9 is a diagram illustrating an example of a shadow area corrected OCT tomographic image.
  • FIG. 4 is a diagram illustrating an example of a size of a tomographic image used when obtaining a learned model according to the embodiment of the present invention.
  • FIG. 4 is a diagram illustrating an example of a size of a tomographic image used when obtaining a learned model according to the embodiment of the present invention.
  • FIG. 4 is a diagram illustrating an example of a size of a tomographic image used when obtaining a learned model according to the embodiment of the present invention.
  • FIG. 4 is a diagram illustrating an example of a size of a tomographic image used when obtaining a learned model according to the embodiment of the present invention.
  • FIG. 4 is a diagram illustrating an example of a size of a tomographic image used when obtaining a learned model according to the embodiment of the present invention.
  • FIG. 4 is a diagram illustrating an example of a size of a tomographic image used when obtaining a learned model according to the embodiment of the present invention.
  • FIG. 4 is a diagram illustrating an example of a size of a tomographic image used when obtaining a learned model according to the embodiment of the present invention.
  • FIG. 4 is a diagram illustrating an example of a size of a tomographic image used when obtaining a learned model according to the embodiment of the present invention.
  • FIG. 9 is a diagram illustrating an image in which a motion contrast image generated in S905 is superimposed on a tomographic image.
  • FIG. 9 is a diagram illustrating an example of a report screen displayed on a display unit in S906.
  • It is a block diagram showing the configuration of the image processing device according to a third embodiment of the present invention.
  • It is a flowchart of a process that can be executed by the image processing system according to the third embodiment of the present invention.
  • It is a diagram explaining the report screen displayed on the display means in S1206 in the processing described as the third embodiment of the present invention.
  • FIG. 14 is a flowchart of a process that can be executed by the image processing system according to the fourth embodiment of the present invention.
  • It is a diagram explaining the processing performed in S1508 in the processing described as the fourth embodiment of the present invention.
  • FIG. 9 is a diagram for describing processing executed in S1509.
  • the image processing apparatus causes a machine learning model based on deep learning to learn pairs of tomographic images of various sites and diseases and tomographic images obtained by applying shadow-region correction processing to those images.
  • a shadow area on the tomographic image is robustly reduced.
  • a machine learning model refers to a learning model based on a machine learning algorithm such as deep learning.
  • the learned model is a model obtained (trained) in advance by training an appropriate machine learning model with appropriate teacher data, using an arbitrary machine learning algorithm. The learned model does not perform further learning on its own, although additional learning can be performed on it.
  • FIG. 2A is a diagram illustrating a configuration of an image processing system 10 including the image processing apparatus 101 according to the present embodiment.
  • the image processing apparatus 101 is connected via interfaces to a tomographic image capturing apparatus 100 (also referred to as OCT), an external storage unit 102, an input unit 103, and a display unit 104.
  • the tomographic image capturing apparatus 100 is an apparatus that captures a tomographic image of the eye.
  • SD-OCT is used as the tomographic imaging apparatus 100.
  • the mode of the tomographic imaging apparatus is not limited to this, and may be configured using, for example, SS-OCT.
  • the tomographic imaging apparatus 100 includes a measurement optical system 100-1, a stage unit 100-2, and a base unit 100-3.
  • a measurement optical system 100-1 is an optical system for acquiring an anterior eye image, an SLO fundus image of a subject's eye, and a tomographic image.
  • the stage unit 100-2 supports the measurement optical system 100-1 so as to be movable in the front, rear, left, and right directions.
  • the base unit 100-3 incorporates a spectroscope described later.
  • the image processing apparatus 101 is a computer that controls the stage unit 100-2, controls the alignment operation of the measurement optical system 100-1, and reconstructs a tomographic image.
  • the external storage unit 102 stores a program for tomography, patient information, imaging data, measurement data, and the like.
  • the input unit 103 issues an instruction to the computer, and specifically includes a keyboard and a mouse.
  • the display unit 104 includes, for example, a monitor.
  • An objective lens 201 is installed to face the subject's eye 200, and a first dichroic mirror 202 and a second dichroic mirror 203 are arranged on the optical axis.
  • the dichroic mirror divides the optical path to the eye 200 into the optical path 250 of the OCT optical system, the optical path 251 for the SLO optical system and the fixation lamp, and the optical path 252 for anterior eye observation for each wavelength band.
  • the SLO optical system and the optical path 251 for the fixation lamp include an SLO scanning unit 204 and lenses 205 and 206.
  • a mirror 207, a third dichroic mirror 208, an APD (Avalanche Photodiode) 209, an SLO light source 210, and a fixation lamp 211 are provided downstream of the lens 206.
  • the mirror 207 is a prism on which a perforated mirror is vapor-deposited, and it separates the illumination light from the SLO light source 210 from the light returning from the subject's eye.
  • the third dichroic mirror 208 separates the optical path 251 into an optical path of the SLO light source 210 and an optical path of the fixation lamp 211 for each wavelength band.
  • the SLO scanning unit 204 scans the illumination light emitted from the SLO light source 210 on the subject's eye 200, and includes an X scanner that scans in the X direction and a Y scanner that scans in the Y direction.
  • since the X scanner needs to perform high-speed scanning, it is constituted by a polygon mirror, while the Y scanner is constituted by a galvanometer mirror.
  • these mirrors can be appropriately replaced by various known deflection mirrors according to the required specifications.
  • the lens 205 is driven by a motor (not shown) for focusing the SLO optical system and the fixation lamp 211.
  • the SLO light source 210 generates light having a wavelength near 780 nm as illumination light.
  • the APD 209 detects the return light from the subject's eye.
  • the fixation lamp 211 generates visible light to urge the subject to fixate.
  • the illumination light emitted from the SLO light source 210 is reflected by the third dichroic mirror 208 and passes through the mirror 207. Thereafter, the light passes through the lenses 206 and 205 and is scanned on the eye 200 by the SLO scanning means 204.
  • the return light from the subject's eye 200 returns along the same path as the illumination light, is reflected by the mirror 207, and is guided to the APD 209, whereby an SLO fundus image is obtained.
  • the light emitted from the fixation lamp 211 passes through the third dichroic mirror 208 and the mirror 207. Thereafter, a predetermined shape is formed at an arbitrary position on the subject's eye 200 by the SLO scanning means 204 through the lenses 206 and 205, and the subject's fixation is promoted.
  • lenses 212 and 213, a split prism 214, and an anterior eye part observation CCD 215 for detecting infrared light are arranged.
  • the CCD 215 has sensitivity at the wavelength of irradiation light (not shown) for anterior ocular segment observation, specifically around 970 nm.
  • the split prism 214 is arranged at a position conjugate with the pupil of the eye 200 to be inspected. The distance in the Z-axis direction (optical axis direction) of the measurement optical system 100-1 from the subject's eye 200 can be detected from the split image of the anterior segment obtained through the split prism 214.
  • the optical path 250 of the OCT optical system constitutes the OCT optical system as described above, and captures a tomographic image of the eye 200 to be inspected. More specifically, an interference signal for forming a tomographic image is obtained.
  • the XY scanner 216 for OCT scans the measurement light over the eye 200 to be inspected. Although it is shown as a single mirror in FIG. 2B, it is actually composed of two galvanometer mirrors that perform scanning in the X and Y axial directions. Note that the configuration of the XY scanner 216 is not limited to this, and any other deflecting mirror may be used.
  • the lens 217 focuses the light from the OCT light source 220, emitted from the fiber 224 connected to the optical coupler 219, onto the eye 200. Specifically, it is driven in the optical axis direction indicated by the arrow in the figure by a motor (not shown). By this focusing, the return light from the eye 200 is simultaneously focused into a spot on the tip of the fiber 224 and made incident on it.
  • the optical fibers 224 to 227 are single-mode optical fibers connected to and integrated with the optical coupler 219.
  • these components constitute a Michelson interferometer.
  • the light emitted from the OCT light source 220 is split through the optical fiber 225 into measurement light guided to the optical fiber 224 via the optical coupler 219 and reference light guided to the optical fiber 226.
  • the measurement light is applied to the subject's eye 200 to be observed through the above-described OCT optical system optical path, and reaches the optical coupler 219 via the same optical path due to reflection and scattering by the subject's eye 200.
  • the reference light reaches the reference mirror 221 via the optical fiber 226, the lens 223, and the dispersion compensation glass 222 inserted for adjusting the wavelength dispersion of the measurement light and the reference light, and is reflected. Then, the light returns to the same optical path and reaches the optical coupler 219.
  • the measuring light and the reference light are multiplexed by the optical coupler 219 to become interference light.
  • interference occurs when the optical path length of the measurement light and the optical path length of the reference light become substantially the same.
  • the reference mirror 221 is held by a motor and a driving mechanism (not shown) so as to be adjustable in the optical axis direction indicated by the arrow in the figure, and can adjust the optical path length of the reference light to the optical path length of the measurement light.
  • the obtained interference light is guided to the spectroscope 230 via the optical fiber 227.
  • the polarization adjusting units 228 and 229 are provided in the optical fibers 224 and 226, respectively, and perform polarization adjustment. These polarization adjusting sections 228 and 229 have several portions where optical fibers are looped. By rotating the loop portion about the longitudinal direction of the optical fiber, the optical fiber is twisted, and the polarization states of the measurement light and the reference light can be adjusted and matched.
  • the spectroscope 230 includes lenses 232 and 234, a diffraction grating 233, and a line sensor 231.
  • the interference light emitted from the optical fiber 227 is converted into parallel light through the lens 234, is then separated by the diffraction grating 233, and is imaged on the line sensor 231 by the lens 232.
  • the OCT light source 220 is an SLD (Super Luminescent Diode) that is a typical low coherent light source.
  • the center wavelength is 855 nm and the wavelength bandwidth is about 100 nm.
  • the bandwidth is an important parameter because it affects the resolution of the obtained tomographic image in the optical axis direction.
  • the SLD is selected as the light source, but it is sufficient that low coherent light can be emitted, and ASE (Amplified Spontaneous Emission) or the like can be used.
  • as for the center wavelength of the light used, near-infrared light is suitable in view of measuring the eye. Further, since the center wavelength affects the lateral resolution of the obtained tomographic image, it is desirable that the center wavelength be as short as possible. For both reasons, the center wavelength is set to 855 nm in this embodiment.
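The bandwidth dependence of the axial resolution mentioned above follows the standard Gaussian-spectrum relation δz = (2 ln 2 / π) · λ₀² / Δλ. A quick check with the embodiment's source parameters, under the assumption that the spectrum is ideally Gaussian (the patent states only the wavelengths, not this formula):

```python
import math

def axial_resolution_um(center_wavelength_nm, bandwidth_nm):
    """Theoretical axial resolution (in air) of an OCT source with a
    Gaussian spectrum: dz = (2 ln 2 / pi) * lambda0^2 / dlambda."""
    dz_nm = (2 * math.log(2) / math.pi) * center_wavelength_nm ** 2 / bandwidth_nm
    return dz_nm / 1000.0  # convert nm -> micrometres

# Source in the embodiment: 855 nm center wavelength, ~100 nm bandwidth.
print(round(axial_resolution_um(855, 100), 2))  # prints 3.23 (micrometres, in air)
```

This is why the bandwidth is called an important parameter: halving it would roughly double δz and visibly blur the retinal layers in depth.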
  • a Michelson interferometer is used as an interferometer, but a Mach-Zehnder interferometer may be used. According to the light amount difference between the measurement light and the reference light, it is desirable to use a Mach-Zehnder interferometer when the light amount difference is large and to use a Michelson interferometer when the light amount difference is relatively small.
  • the image processing apparatus 101 is configured as a personal computer (PC) connected to the tomographic image capturing apparatus 100.
  • the image processing apparatus 101 includes an image acquisition unit 101-01, a storage unit 101-02, a shooting control unit 101-03, an image processing unit 101-04, and a display control unit 101-05.
  • a CPU serving as an arithmetic processing unit executes software modules that implement the functions of the image acquisition unit 101-01, the imaging control unit 101-03, the image processing unit 101-04, and the display control unit 101-05.
  • note that the image processing unit 101-04 may be realized by dedicated hardware such as an ASIC, and the display control unit 101-05 may be realized by a dedicated processor, such as a GPU, separate from the CPU.
  • the storage unit 101-02 may be configured by an arbitrary storage medium such as an arbitrary memory or an optical disk.
  • the connection between the tomographic imaging apparatus 100 and the image processing apparatus 101 may be configured via a network.
  • the image acquisition unit 101-01 acquires signal data of an SLO fundus image or a tomographic image captured by the tomographic image capturing apparatus 100.
  • the image acquisition unit 101-01 has a tomographic image generation unit 101-11.
  • the tomographic image generation unit 101-11 acquires the signal data (interference signal) of the tomographic image captured by the tomographic image capturing apparatus 100, generates a tomographic image by signal processing, and stores the generated tomographic image in the storage unit 101-02.
  • the imaging control unit 101-03 controls imaging of the tomographic imaging apparatus 100.
  • the imaging control includes instructing the tomographic imaging apparatus 100 regarding setting of imaging parameters, and instructing the start or end of imaging.
  • the image processing unit 101-04 includes a photographing condition acquisition unit 101-41, a positioning unit 101-42, a correction unit 101-43, an image feature acquisition unit 101-44, and a projection unit 101-45.
  • the above-described image acquisition unit 101-01 is an example of an acquisition unit according to the present invention.
  • the photographing condition acquisition unit 101-41 acquires photographing condition data of an input image required when the image processing unit 101-04 performs image processing.
  • the photographing condition data includes, for example, photographing date and time, part name, angle of view, scan mode, image resolution and number of gradations, pixel size, information on image data format, and the like.
  • the correction unit 101-43 performs a process of two-dimensionally or three-dimensionally suppressing a shadow region generated below an object such as a blood vessel in a tomographic image using the learned model.
  • the image feature acquisition unit 101-44 acquires a layer boundary of the retina or choroid, a boundary of the lamina cribrosa region, the position of the fovea or the center of the optic disc, and the like from the tomographic image.
  • the projection unit 101-45 projects a tomographic image in a depth range based on the boundary position acquired by the image feature acquiring unit 101-44, and generates a front tomographic image such as an Enface image.
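The projection just described can be sketched as follows. This is a simplified illustration; the representation of the boundary surfaces as per-pixel z-indices and the `mode` options are assumptions, since the patent does not fix an interface for the projection unit.

```python
import numpy as np

def enface_projection(volume, upper, lower, mode="mean"):
    """Project an OCT volume within a depth range bounded by two surfaces.

    volume: (z, y, x) tomographic volume.
    upper, lower: (y, x) integer z-indices of the bounding layer boundaries
    (e.g., boundaries found by the image-feature acquisition step).
    Returns a (y, x) front (Enface) image.
    """
    z, ny, nx = volume.shape
    out = np.zeros((ny, nx), dtype=float)
    for iy in range(ny):
        for ix in range(nx):
            a, b = int(upper[iy, ix]), int(lower[iy, ix])
            col = volume[a:b, iy, ix]          # the A-scan segment in range
            out[iy, ix] = col.mean() if mode == "mean" else col.max()
    return out

vol = np.arange(4 * 2 * 2, dtype=float).reshape(4, 2, 2)
top = np.zeros((2, 2), dtype=int)
bottom = np.full((2, 2), 2, dtype=int)
img = enface_projection(vol, top, bottom)
```

Restricting the projection to a boundary-defined depth range is what lets an Enface image show, for example, only the superficial or only the deep vascular plexus.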
  • the external storage unit 102 stores information on the subject's eye (e.g., the patient's name, age, and gender), captured images (tomographic images and SLO images), images obtained by processing them, and imaging parameters, in association with the parameters set by the operator.
  • the input unit 103 is, for example, a mouse, a keyboard, a touch operation screen, or the like. An operator issues an instruction to the image processing apparatus 101 or the tomographic image capturing apparatus 100 via the input unit 103.
  • FIG. 3 is a flowchart showing a flow of operation processing of the entire system in the present embodiment.
  • Step 301 (S301)
  • the procedure includes 1) selection of a scan mode, and 2) setting of shooting parameters corresponding to the scan mode.
  • the shooting conditions are set as follows: 1) select the Macula 3D scan mode; 2) set the following shooting parameters: 2-1) scanning area size: 12 × 12 mm; 2-2) main scanning direction: horizontal; 2-3) fixation lamp position: the lighting position for macular imaging.
  • Step 302 (S302): after setting the shooting conditions, the operator operates the input unit 103 and presses a shooting start button (not shown) on the shooting screen. Accordingly, the tomographic image capturing apparatus 100 starts capturing an OCT tomographic image under the capturing conditions specified in S301. Specifically, the imaging control unit 101-03 instructs the tomographic imaging apparatus 100 to perform OCT imaging based on the settings instructed by the operator in S301, and the tomographic imaging apparatus 100 thereby acquires an interference signal for generating the corresponding OCT tomographic image.
  • the tomographic imaging apparatus 100 also acquires an SLO image, and executes a tracking process based on the SLO moving image.
  • the number of times of repetitive imaging at the same scanning position is one (not repeated).
  • the number of repetitive imagings at the same scanning position is not limited to this, and may be set to an arbitrary number.
  • the reference SLO image used for the tracking processing during imaging is the reference SLO image set at the time of the first OCT imaging, and a common reference SLO image is used for all repeated OCT imaging operations.
  • the same set values are used (not changed) for 1) selection of left and right eyes and 2) execution of tracking processing in addition to the shooting conditions set in S301.
  • the conditions for the tracking processing are not limited to the above, and can be changed as appropriate according to the conditions for capturing the OCT tomographic image.
  • Step 303 (S303): the image acquisition unit 101-01 and the image processing unit 101-04 reconstruct a tomographic image using the interference signal acquired in S302.
  • the tomographic image generation unit 101-11 generates a tomographic image by performing wave-number conversion, a fast Fourier transform (hereinafter referred to as FFT), and absolute-value conversion (acquisition of the amplitude) on the interference signal acquired by the image acquisition unit 101-01.
  • the tomographic image described here can also be handled as the various data obtained through the above-described conversions at the time of image generation. Therefore, the tomographic image described here can also be understood as tomographic data including such data.
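The reconstruction steps listed above (wave-number conversion, FFT, absolute value) can be sketched for a single A-scan as follows. The uniform-k resampling via linear interpolation is an illustrative assumption; real SD-OCT systems use a calibrated spectrometer-to-wavenumber mapping.

```python
import numpy as np

def reconstruct_ascan(interference, k_uniform=None):
    """Reconstruct one A-scan from an SD-OCT spectral interference signal.

    Steps, following the tomogram-generation description: (1) optional
    wave-number (k-) linearisation by interpolation, (2) fast Fourier
    transform, (3) absolute-value (amplitude) extraction.
    """
    sig = np.asarray(interference, dtype=float)
    sig = sig - sig.mean()                      # suppress the DC term
    if k_uniform is not None:
        # Resample from the detector's pixel grid onto a uniform k grid.
        pix = np.linspace(0.0, 1.0, sig.size)
        sig = np.interp(k_uniform, pix, sig)
    depth = np.abs(np.fft.fft(sig))
    return depth[: sig.size // 2]               # keep the one-sided profile

# A single reflector at depth bin d produces a cosine fringe in k.
n, d = 512, 40
k = np.arange(n)
fringe = np.cos(2 * np.pi * d * k / n)
ascan = reconstruct_ascan(fringe)
assert int(np.argmax(ascan)) == d
```

Stacking such A-scans for each x position of a B scan yields the two-dimensional tomographic image that the later shadow-correction steps operate on.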
  • the positioning unit 101-42 performs positioning between tomographic images obtained from each scanning position.
  • the positioning unit 101-42 also performs alignment among tomographic images captured at the same scanning position, and superimposes the aligned tomographic images to generate a superimposed tomographic image. The positioning unit 101-42 performs this operation in addition to the above-described positioning between tomographic images from different scanning positions.
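A minimal sketch of aligning and superimposing repeated tomographic images follows. Integer-pixel frequency-domain cross-correlation is used here purely for illustration; the patent does not specify the registration method used by the positioning unit.

```python
import numpy as np

def align_and_average(frames):
    """Align repeated B-scans to the first frame by integer-pixel
    cross-correlation, then average them into one superimposed tomogram."""
    ref = np.asarray(frames[0], dtype=float)
    acc = ref.copy()
    F_ref = np.fft.fft2(ref)
    for frame in frames[1:]:
        f = np.asarray(frame, dtype=float)
        # Cross-correlation computed in the frequency domain.
        xcorr = np.fft.ifft2(F_ref * np.conj(np.fft.fft2(f))).real
        dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
        # Wrap the shifts into a signed range before rolling.
        dy = dy - ref.shape[0] if dy > ref.shape[0] // 2 else dy
        dx = dx - ref.shape[1] if dx > ref.shape[1] // 2 else dx
        acc += np.roll(f, (dy, dx), axis=(0, 1))
    return acc / len(frames)

rng = np.random.default_rng(1)
base = rng.uniform(size=(32, 32))
shifted = np.roll(base, (3, -2), axis=(0, 1))
avg = align_and_average([base, shifted])
assert np.allclose(avg, base)
```

Averaging aligned repeats raises the signal-to-noise ratio of the superimposed tomogram, which also benefits the subsequent layer-boundary detection.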
  • the image feature acquiring unit 101-44 acquires a layer boundary between the retina and the choroid and a boundary (not shown) between the anterior surface and the posterior surface of the cribriform plate from the single or superimposed tomographic images.
  • the Bruch's membrane 6 and the choroid-sclera boundary 7 are acquired (see FIG. 4A).
  • the detected end of the Bruch's membrane 6 (the end of the Bruch's membrane opening) is specified as a Disc boundary of the optic papilla.
  • a deformable shape model is used here as the method for acquiring the layer boundaries of the retina and choroid and the anterior and posterior boundaries of the cribriform plate, but any known segmentation method may be used.
  • the layer boundary to be obtained is not limited to the above example.
  • any known segmentation method can be used to determine boundaries such as the ganglion cell layer–inner plexiform layer boundary, the inner plexiform layer–inner nuclear layer boundary, the outer plexiform layer–outer nuclear layer boundary, the external limiting membrane, and the photoreceptor (cone) outer segment tips (COST).
  • the present invention also includes the case where the choriocapillaris–Sattler layer boundary and the Sattler layer–Haller layer boundary of the choroid are obtained by any known segmentation method.
  • the anterior and posterior boundaries of the cribriform plate may also be set manually.
  • a layer boundary position can be set by manually moving a position of a specific layer boundary (for example, the inner limiting membrane 1) by a predetermined amount.
  • the process of acquiring the layer boundary and the front / rear surface boundary of the cribriform plate may be performed after the image correction process of S304 described later instead of this step.
  • the correction unit 101-43 corrects a shadow region generated below an object in the eye such as a blood vessel using the learned model.
  • the learned model in the description of the present invention is a model obtained by training in advance using an appropriate teacher data for an arbitrary known machine learning algorithm.
  • using the learned model, a tomographic image that is highly likely to have been obtained had no shadow region appeared is generated from an image having a shadow region, or from data for generating such an image.
  • the machine learning algorithm will be described with reference to FIGS. 4A to 4D, FIGS. 5A to 5F and FIG.
  • the teacher data is composed of one or more pairs of input data and output data. Specifically, teacher data (hereinafter, first teacher data) is composed of pairs of tomographic images of various sites and diseases containing a shadow region obtained by OCT (FIG. 4A) and the corresponding shadow-region-corrected tomographic images (FIG. 4D).
  • the first teacher data is composed of a group of pairs of a tomographic image in which a shadow region has occurred and a tomographic image in which the shadow region has been corrected using, for example, the image processing technique disclosed in Patent Document 1.
  • alternatively, one image of each pair may be obtained by taking the tomographic image corrected with that image processing technique and further correcting, to desired values, the brightness of pixels that the operator judges to have been insufficiently corrected in the shadow region.
  • although FIGS. 4A to 4D, 5A to 5F, and 6 illustrate only tomographic images including the macular region,
  • an actual tomographic image may also include the region of the optic papilla, and shadow regions likewise occur below the blood vessels in that region. Because large retinal vessels converge at the optic disc, a large number of thick shadow regions are formed there, which tends to hinder the depiction, identification, and measurement of the lamina cribrosa located particularly in the deep part of the optic disc.
  • training may be performed on a tomographic image obtained by photographing only the macula or the optic papilla.
• Other examples of teacher data include teacher data (hereinafter, second teacher data) configured as a group of pairs of the tomographic image 403 before shadow area correction illustrated in FIG. 4A and correction value data.
• The correction value data includes data used to calculate each pixel value of the tomographic image after the shadow region correction (the shadow-area-corrected tomographic image 407) from the corresponding pixel value in the tomographic image 403.
• The image groups forming the pair group of the tomographic image 403 before shadow area correction and the shadow-area-corrected tomographic image 407, which constitute the first teacher data, are created as rectangular area images having a fixed pixel size and a corresponding positional relationship. This will be described below with reference to FIGS. 5A to 5F.
• FIGS. 5A to 5F are diagrams showing tomographic images before shadow area correction as input data and shadow-area-corrected tomographic images as output data, respectively.
  • one of the group of pairs forming the first teacher data includes a tomographic image 403 including a shadow area and a tomographic image 407 having a shadow area corrected.
  • the input data forming the pair is a tomographic image 501 and the output data is a tomographic image 501 ′.
  • the entire image is a pair image, but the configuration of the pair image is not limited to this.
• As shown in FIG. 5C, rectangular area images 5021 and 5022 of the tomographic image 403 including the shadow area may be used as input data.
• As shown in FIG. 5D, the output data is rectangular area images 5021' and 5022', which correspond to the same imaging area in the shadow-area-corrected tomographic image 407. That is, a pair of input data and output data may be constituted by these rectangular area images. Note that these rectangular areas are defined in A-scan units.
• An A-scan unit here may be one A-scan or several A-scans.
  • rectangular area images 5031 and 5032 of the tomographic image 403 including the shadow area may be used as input data.
  • rectangular area images 5031 'and 5032' which are the same imaging area in the shadow area corrected tomographic image 407, are set as output data and form a pair with the input data.
  • the rectangular area images shown in FIGS. 5A to 5F are examples of the rectangular area sizes when training is performed separately.
  • the number of rectangular areas can be set to one in FIGS. 5A and 5B, and a plurality can be set as described above in FIGS. 5C to 5F.
  • the rectangular area image 5022 on the tomographic image 403 including the shadow area in FIG. 5C is input data, and the rectangular area image 5022 'at the same position on the shadow area corrected tomographic image 407 in FIG. 5D is output data.
• This enhances the group of pairs constituting the first teacher data. Although the rectangular regions are shown discretely in FIGS. 5C to 5F, in practice it is preferable to divide the image into rectangular region images having a fixed pixel size, without gaps, and to generate the pair group from them.
• By creating many pairs of rectangular area images from the original tomographic image and the shadow-corrected tomographic image while changing the position of the area to different coordinates, the group of pairs forming the teacher data can be enriched.
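The patch-pair creation described above can be sketched in Python as follows. The image dimensions, patch size, and the "+100" stand-in for the shadow correction are illustrative assumptions, not values from the embodiment.

```python
# Illustrative sketch (not the patented implementation): dividing a pair of
# tomographic images -- one before and one after shadow-area correction --
# into gapless, fixed-size rectangular patches that share the same
# coordinates, so each patch pair can serve as one training sample.

def to_patches(image, patch_h, patch_w):
    """Split a 2D image (list of rows) into a dict mapping
    (top, left) -> patch. Assumes dimensions divide evenly."""
    h, w = len(image), len(image[0])
    patches = {}
    for top in range(0, h, patch_h):
        for left in range(0, w, patch_w):
            patches[(top, left)] = [row[left:left + patch_w]
                                    for row in image[top:top + patch_h]]
    return patches

def make_pairs(before, after, patch_h, patch_w):
    """Pair patches from the pre-correction image (input data) with
    patches at the same position in the corrected image (output data)."""
    inp = to_patches(before, patch_h, patch_w)
    out = to_patches(after, patch_h, patch_w)
    return [(inp[k], out[k]) for k in sorted(inp)]

# Example: a 4x6 "image" split into 2x3 patches yields 4 patch pairs.
before = [[c + 10 * r for c in range(6)] for r in range(4)]
after = [[v + 100 for v in row] for row in before]   # stand-in "corrected" image
pairs = make_pairs(before, after, 2, 3)
print(len(pairs))  # 4
```

Because each pair keeps its (top, left) key, the same positional relationship can later be used to reassemble corrected patches into a full image.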
  • FIG. 6 shows an example of the configuration of the learned model in the correction units 101-43.
  • the configuration illustrated in FIG. 6 is configured by a plurality of layer groups that are responsible for processing the input value group and outputting the processed value group.
  • the types of layers included in this configuration include a convolution layer, a downsampling (Downsampling) layer, an upsampling (Upsampling) layer, and a synthesis (Merger) layer.
  • the convolution layer is a layer that performs a convolution process on an input value group according to parameters such as a set filter kernel size, the number of filters, a stride value, and a dilation value.
  • the number of dimensions of the kernel size of the above-described filter may be changed according to the number of dimensions of the input image.
• The downsampling layer performs processing in which the number of output values is made smaller than the number of input values by thinning out or combining the input value group; a specific example is Max Pooling.
• The upsampling layer performs processing that increases the number of output values beyond the number of input values by duplicating the input value group or adding values interpolated from it.
  • the synthesis layer is a layer that performs processing of inputting a value group such as an output value group of a certain layer or a pixel value group forming an image from a plurality of sources, and connecting or adding them to synthesize.
• Note that if the parameter settings for the layer groups or node groups forming the neural network differ, the degree to which the tendency trained from the teacher data can be reproduced in the output data may also differ. That is, in many cases the appropriate parameters differ depending on the mode of implementation, and it is preferable to change them as necessary.
• In some cases, the CNN can obtain better characteristics by changing its configuration in this way.
  • the better characteristics include, for example, a high ability to correct the shadow area, a short time for the shadow area correction processing, and a short time for training when obtaining a learned model.
  • a batch normalization (Batch Normalization) layer may be incorporated after the convolutional layer.
• An activation layer using a rectified linear function (Rectified Linear Unit, ReLU) may also be incorporated.
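The behavior of the downsampling, upsampling, and synthesis layers described above can be illustrated on plain value groups. This is only a sketch: an actual learned model would implement these as layers of a deep-learning framework, and the 4 × 4 value group here is an arbitrary assumption.

```python
# Illustrative sketch of the layer operations described above, on plain
# Python lists. Max pooling (downsampling) halves the number of values per
# axis; duplication-based upsampling doubles it; the synthesis (merger)
# layer concatenates two value groups.

def max_pool_2x2(img):
    """Downsampling: keep the maximum of each non-overlapping 2x2 block."""
    return [[max(img[r][c], img[r][c + 1], img[r + 1][c], img[r + 1][c + 1])
             for c in range(0, len(img[0]), 2)]
            for r in range(0, len(img), 2)]

def upsample_2x2(img):
    """Upsampling: duplicate each value into a 2x2 block."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def merge(a, b):
    """Synthesis (merger): concatenate two value groups row by row."""
    return [ra + rb for ra, rb in zip(a, b)]

x = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]
down = max_pool_2x2(x)    # 2x2 value group: [[6, 8], [14, 16]]
up = upsample_2x2(down)   # back to 4x4, each value duplicated
skip = merge(x, up)       # 4x8: a skip-connection-style concatenation
print(down)
```

The merge step mirrors how an encoder-decoder (U-Net-like) configuration joins an upsampled value group with the value group of an earlier layer.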
• When data is input to such a trained model, data according to the design of the trained model is output. For example, output data having a high possibility of corresponding to the input data is output according to the tendency trained from the teacher data. Further, for example, the possibility is output as a numerical value for each of the types of output data trained from the teacher data. Specifically, for example, when a tomographic image 601 obtained by OCT is input to a trained model trained with the first teacher data, a shadow-area-corrected tomographic image 602 is output.
  • a group of correction values to be applied when correcting the shadow area is output.
• The format and combination of the input data and output data of the pair group forming the teacher data may be such that one is an image and the other a numerical value, one is a plurality of images and the other a numerical value, or both are images; a combination suitable for the embodiment is used.
• When rectangular area images are used, the learned model outputs, for each rectangular area, the brightness values of the shadow-area-corrected tomographic image or the correction values for the shadow area correction.
• In the latter case, the correction unit 101-43 applies the correction values to the input luminance values of the tomographic image 601 to obtain shadow-area-corrected luminance values.
• The correction unit 101-43 then arranges and combines the shadow-area-corrected rectangular area image groups in the same positional relationship as the input rectangular area image group, thereby obtaining the shadow-area-corrected tomographic image 602.
• Note that segmentation of the layer boundaries and the front/rear boundaries of the lamina cribrosa is performed again on the shadow-region-corrected tomographic image, using the previously acquired layer boundary and lamina cribrosa boundary positions as initial positions. This makes it possible to obtain accurate layer boundaries and lamina cribrosa front/rear boundaries that are less affected by the shadow area.
• The projection unit 101-45 projects the shadow-area-corrected superimposed tomographic image using the layer boundary positions and the lamina cribrosa front/rear boundaries acquired by the image feature acquisition unit 101-44, and generates a superimposed front tomographic image.
  • any known projection is possible, and in this embodiment, average projection is performed.
  • the projection method is not limited to this, and various known projection methods can be applied.
• The image processing apparatus 101 stores, in the external storage unit 102, the acquired image group (SLO images and tomographic images), the imaging condition data of the image group, and the data obtained in S304, in association with the examination date and time and information identifying the eye to be examined.
  • the data obtained in S304 includes a tomographic image or a correction value for which the shadow region has been corrected, a layer boundary, and a front / rear boundary data of the lamina cribrosa.
• The display control unit 101-05 causes the display unit 104 to display the superimposed tomographic images (3D image / B-scan image / front image) after the shadow area correction generated in S304, together with information regarding the imaging conditions.
  • FIG. 7 shows an example of a report screen 700 displayed on the display unit 104.
  • the display control unit 101-05 displays the SLO image 702 on the upper left of the report screen 700 and the superimposed tomographic image (B-scan image 704) with the shadow area corrected on the upper right.
  • the display control unit 101-05 superimposes and displays an arrow (gray) indicating the scanning position of the measurement light at the time of obtaining the B-scan image 704 (tomographic image) with the shadow area corrected on the SLO image 702.
  • the layer boundaries (the inner limiting membrane 1 and the retinal pigment epithelium 5 and the like) and the boundaries between the lamina cribrosa regions acquired in S303 and S304 are superimposed on the B scan image 704.
  • a superimposed front tomographic image 703 with the shadow area corrected is displayed. Since the B-scan and the front tomographic image in which the shadow area is suppressed are displayed, the front tomographic image can be observed with the original luminance of the subject's eye.
• Step 306 (S306)> After observing the above-described front tomographic image on the report screen 700, the operator instructs, via the input unit 103, whether to perform image observation again or to end the observation. When the end instruction is given, the operation processing ends. When an instruction to continue image observation is input, the flow returns to S302, and imaging of a new OCT tomographic image is started.
  • a user interface 705 for switching whether to apply the shadow area correction process may be displayed. Furthermore, a character string, a mark, or a correction amount value indicating the application state of the shadow area correction processing on the tomographic image (B scan / front / 3D image) displayed on the display unit 104 may be displayed. These may be displayed together with the user interface 705.
• The user interface 705 receives an instruction as to whether or not to apply the shadow area correction processing. Based on this instruction, whether to apply the shadow area correction processing to the tomographic images (B-scan image 704, front tomographic image 703, and a 3D image (not shown)) displayed on the display unit 104 is switched. Further, a user interface (such as a slider) for adjusting the luminance correction amount of the shadow region may be displayed on the display unit 104 so that the user can manually adjust the correction amount of the shadow region.
  • the correction unit 101-43 two-dimensionally or three-dimensionally reduces a shadow region generated under an object such as a blood vessel in a tomographic image using a learned model.
  • the present invention is not limited to correction of blood vessel shadows.
• A shadow region generated under vitiligo, opacity of the intermediate translucent body, bleeding, or the like may also be reduced using a learned model. That is, the learned model is trained using, as teacher data, pairs of pre-correction tomographic images that include shadow regions due to vitiligo, vitreous opacity, or bleeding and the corresponding shadow-region-corrected tomographic images.
• The present invention also includes a case where a tomographic image including a shadow region caused by vitiligo, vitreous opacity, or bleeding is input to the learned model obtained by this training, and the shadow region on the tomographic image is thereby robustly reduced.
• In the present embodiment, a shadow area generated below an object in the eye, such as a blood vessel, in a tomographic image is corrected using a single learned model trained using teacher data of various parts and diseases.
  • the present invention is not limited to this.
• The correction unit 101-43 may instead be provided with a plurality of learned models, each trained using teacher data composed of pairs of pre-correction and corrected tomographic images of only a specific part.
• Similarly, training may be performed using teacher data composed of pairs of pre-correction and corrected tomographic images of only a specific disease.
  • the present invention includes a case in which a shadow region generated below an object in the eye such as a blood vessel in a tomographic image is corrected based on outputs from a plurality of learned models.
  • the image processing apparatus 101 includes the image acquisition unit and the correction unit.
  • the image acquisition unit acquires a tomographic image of the eye 200 to be inspected.
• The correction unit (correction unit 101-43) corrects, using a learned model obtained by training, the pixel values of a shadow region (402) that is generated by an object included in the subject's eye 200 and that appears in a tomographic image of the subject's eye 200.
• The trained model described here is a model trained using pairs of a tomographic image including a shadow region acquired from the eye 200 to be examined and a tomographic image obtained by performing image processing for correcting the pixel values of the shadow region of that tomographic image.
  • the present invention can be understood as an image processing method including an image acquisition step and a correction step performed by the above-described units or the image processing apparatus.
  • the image processing apparatus according to the present embodiment may be configured to include the above-described image acquisition unit and the generation unit.
• The generation unit uses the learned model described above to generate a tomographic image in which the pixel values of the shadow area (402), generated by an object included in the eye 200 and appearing in a tomographic image of the eye 200, have been corrected.
• The configuration of the structure that generates the shadow region differs depending on which of the retina, choroid, or cribriform plate is focused on, and the form of the projection artifact (PA) differs accordingly. Therefore, it is preferable to perform image correction of the shadow area according to these parts.
• For this purpose, it is preferable that the correction unit 101-43 selectively uses, according to the region, a plurality of learned models obtained by training with pairs prepared for each of the retina, choroid, and lamina cribrosa in the tomographic image.
• Alternatively, the pixel values of the shadow area may be corrected by a known image processing method, using a manner (processing method) corresponding to the part.
• As described above, the image processing apparatus 101 trains a model by deep learning using pairs of tomographic images of various parts and diseases and tomographic images obtained by applying the shadow area correction processing to them. By inputting a tomographic image to the resulting learned model, the shadow area on the tomographic image is robustly reduced. This makes it possible to correct a shadow region in a tomographic image regardless of disease or part.
• In the present embodiment, the image processing apparatus generates a motion contrast image using tomographic images obtained by applying the shadow area correction processing described in the first embodiment to tomographic images scanned a plurality of times at the same position. The manner in which the PA is reduced in the motion contrast image will be described.
  • FIG. 8 shows the configuration of the image processing apparatus 801 according to the present embodiment. Note that components having substantially the same functions as those of the components of the image processing apparatus 101 according to the first embodiment will be denoted by the same reference numerals, and description thereof will be omitted.
• The image processing apparatus 801 according to the present embodiment differs from the first embodiment in that the image acquisition unit 101-01 includes the motion contrast data generation unit 101-12 and the image processing unit includes the synthesis unit 101-46.
• FIG. 9 is a flowchart showing the flow of operation processing of the entire system in the present embodiment. Note that, among the operation processing flows in the present embodiment, the steps other than S901, S902, S905, and S906 in FIG. 9 are the same as the processing performed in the corresponding steps of the first embodiment, and description thereof will therefore be omitted.
• Step 901 (S901)> By operating the input unit 103, the operator sets the imaging conditions used when an OCTA image is captured by the tomographic image imaging apparatus 100.
• Specifically, the imaging conditions are set as follows, and in S902 OCTA imaging (under the same imaging conditions) is repeatedly executed a predetermined number of times, with breaks taken as appropriate.
• 1) Select the OCTA scan mode
  2) Set the following imaging parameters:
  2-1) Scan pattern: Small Square
  2-2) Scanning area size: 3 × 3 mm
  2-3) Main scanning direction: horizontal direction
  2-4) Scan interval: 0.01 mm
  2-5) Fixation light position: the position for imaging the macula or the optic disc
  2-6) Number of B-scans per cluster: 4
• Step 902 (S902)> After setting the imaging conditions, the operator operates the input unit 103 and presses an imaging start button (not shown) on the imaging screen. Thus, repeated OCTA imaging by the tomographic imaging apparatus 100 under the imaging conditions specified in S901 is started. More specifically, the imaging control unit 101-03 instructs the tomographic imaging apparatus 100 to repeatedly perform OCTA imaging based on the settings instructed by the operator in S901. Thereby, the tomographic imaging apparatus 100 acquires the corresponding OCT tomographic images.
  • the tomographic imaging apparatus 100 also acquires an SLO image, and executes a tracking process based on the SLO moving image.
  • the reference SLO image used in the tracking processing in the repeated OCTA imaging is the reference SLO image set in the first cluster imaging, and a common reference SLO image is used in all cluster imaging.
• In addition to the imaging conditions set in S901, the same setting values are used (without change) for 1) selection of the left or right eye and 2) whether or not to perform the tracking processing.
  • the conditions for the tracking processing are not limited to the above, and can be changed as appropriate according to the conditions for capturing the OCT tomographic image.
  • Step 905 (S905)> Next, the image acquisition unit 101-01 and the image processing unit 101-04 generate a motion contrast image using the OCT tomographic image to which the shadow area correction and alignment processing generated in S904 has been applied. In S905, the motion contrast data generation unit 101-12 calculates a motion contrast between adjacent tomographic images in the same cluster.
• The decorrelation value Mxy is obtained as the motion contrast based on the following equation (1):
  Mxy = 1 − (2 · Axy · Bxy) / (Axy² + Bxy²) … (1)
  • Axy indicates the amplitude (of the complex number data after the FFT processing) at the position (x, y) of the tomographic image data A
  • Bxy indicates the amplitude at the same position (x, y) of the tomographic data B.
• The tomographic image data A and B are tomographic image data in the same cluster, obtained, for example, sequentially in time. Mxy satisfies 0 ≤ Mxy ≤ 1 and takes a value closer to 1 as the difference between the two amplitude values becomes larger.
• Decorrelation calculation as in Expression (1) is performed between every pair of adjacent tomographic images belonging to the same cluster. An image whose pixel values are the average of the obtained (number of tomographic images per cluster − 1) motion contrast values is then generated as the final motion contrast image.
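As an illustration of the per-cluster computation described above, the following sketch assumes the commonly used amplitude-decorrelation form M = 1 − 2AB / (A² + B²), which lies in [0, 1] and approaches 1 as the two amplitudes differ more, consistent with the properties stated for Mxy. The amplitude values are invented for illustration.

```python
# Sketch of the per-cluster motion contrast computation (assumed
# decorrelation form, not necessarily the exact equation (1) of the
# original document).

def decorrelation(a, b, eps=1e-12):
    """Pixelwise decorrelation between two amplitude images (2D lists)."""
    return [[1.0 - (2.0 * av * bv) / (av * av + bv * bv + eps)
             for av, bv in zip(ra, rb)]
            for ra, rb in zip(a, b)]

def cluster_motion_contrast(cluster):
    """Average the decorrelations of adjacent tomographic images in a
    cluster (N images -> N-1 decorrelation maps -> their mean)."""
    maps = [decorrelation(cluster[i], cluster[i + 1])
            for i in range(len(cluster) - 1)]
    h, w = len(cluster[0]), len(cluster[0][0])
    return [[sum(m[r][c] for m in maps) / len(maps) for c in range(w)]
            for r in range(h)]

# A cluster of 4 B-scans (per the imaging parameters above), 1x2 pixels
# each: the left pixel is static tissue, the right pixel fluctuates (flow).
cluster = [[[100.0, 50.0]], [[100.0, 150.0]], [[100.0, 40.0]], [[100.0, 160.0]]]
mc = cluster_motion_contrast(cluster)
print(round(mc[0][0], 3))  # ~0.0 for the static pixel
```

The median or maximum of the decorrelation maps, mentioned later as alternatives, could replace the mean in `cluster_motion_contrast` without changing the rest of the sketch.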
  • the motion contrast is calculated based on the amplitude of the complex data after the FFT processing.
  • the method of calculating the motion contrast is not limited to this.
  • the motion contrast may be calculated based on the phase information of the complex data, or the motion contrast may be calculated based on both the amplitude and the phase information.
  • the motion contrast may be calculated based on the real part or the imaginary part of the complex data.
  • the decorrelation value is calculated as the motion contrast, but the motion contrast calculation method is not limited to this.
  • the motion contrast may be calculated based on the difference between the two values, or the motion contrast may be calculated based on the ratio of the two values.
  • a final motion contrast image is obtained by calculating an average value of a plurality of acquired decorrelation values, but the present invention is not limited to this.
  • an image having the pixel value of the median value or the maximum value of the acquired plurality of decorrelation values may be generated as the final motion contrast image.
  • the image processing unit 101-04 three-dimensionally aligns a group of motion contrast images obtained through repeated OCTA imaging, and performs averaging to generate a high-contrast combined motion contrast image.
  • the combining process is not limited to the simple averaging.
  • an average value may be used after arbitrarily weighting the luminance value of each motion contrast image.
  • an arbitrary statistical value such as a median value may be calculated.
  • the present invention also includes a case where the positioning process is performed two-dimensionally.
  • the synthesizing unit 101-46 may be configured to determine whether a motion contrast image inappropriate for the synthesizing process is included, and then perform the synthesizing process excluding the motion contrast image determined to be inappropriate. For example, when the evaluation value (for example, the average value or median of decorrelation values) of each motion contrast image is out of a predetermined range, it may be determined that the motion contrast image is not suitable for the combination processing.
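The exclusion of unsuitable motion contrast images described above can be sketched as follows; the evaluation-range limits are illustrative assumptions, not values from the embodiment.

```python
# Sketch of the combining step: exclude motion contrast images whose
# evaluation value (here, the mean decorrelation) falls outside a
# predetermined range, then average the rest.

def mean_value(img):
    """Evaluation value of one motion contrast image: its mean pixel value."""
    vals = [v for row in img for v in row]
    return sum(vals) / len(vals)

def combine(images, lo=0.05, hi=0.9):
    """Average only the images judged suitable for synthesis."""
    keep = [img for img in images if lo <= mean_value(img) <= hi]
    h, w = len(keep[0]), len(keep[0][0])
    return [[sum(img[r][c] for img in keep) / len(keep) for c in range(w)]
            for r in range(h)]

good1 = [[0.2, 0.4]]
good2 = [[0.4, 0.6]]
bad = [[0.98, 0.99]]   # e.g. a frame dominated by motion artifacts
combined = combine([good1, bad, good2])
print(combined)  # approximately [[0.3, 0.5]] -- the outlier is excluded
```

A weighted average or a median, mentioned in the text as alternatives, would slot into `combine` in place of the simple mean.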
  • FIG. 10A is a schematic diagram of a normal motion contrast image
  • FIG. 10B is a schematic diagram of a motion contrast image generated using the shadow region corrected tomographic image. Normally, as shown in FIG. 10A, an area 405 corresponding to PA is generated below an area 404 corresponding to a blood vessel in the motion contrast image. On the other hand, as shown in FIG. 10B, in the present embodiment, the PA under the region 404 in the finally generated motion contrast image is suppressed.
• The projection unit 101-45 projects the motion contrast images based on the layer boundaries and the front and rear surface positions of the lamina cribrosa acquired by the image feature acquisition unit 101-44, and superimposes them to generate a front motion contrast image.
• As the projection method at this time, either maximum intensity projection (MIP) or average intensity projection (AIP) can be selected.
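The two selectable projection methods can be sketched as follows; the `volume[z][y][x]` indexing and the depth range bounded by two layer-boundary positions are illustrative assumptions.

```python
# Sketch of projecting a 3D motion contrast volume into a front (en-face)
# image over a depth range bounded by two layer boundaries, using either
# maximum intensity projection (MIP) or average intensity projection (AIP).

def project(volume, z_top, z_bottom, mode="MIP"):
    """Project depth slices z_top..z_bottom-1 into a 2D front image."""
    h, w = len(volume[0]), len(volume[0][0])
    front = []
    for y in range(h):
        row = []
        for x in range(w):
            column = [volume[z][y][x] for z in range(z_top, z_bottom)]
            row.append(max(column) if mode == "MIP"
                       else sum(column) / len(column))
        front.append(row)
    return front

# 4 depth slices of a 1x2 image; project only slices 1 and 2, as if the
# depth range were bounded by two detected layer boundaries.
volume = [[[1.0, 0.0]], [[3.0, 2.0]], [[5.0, 4.0]], [[9.0, 8.0]]]
print(project(volume, 1, 3, "MIP"))  # [[5.0, 4.0]]
print(project(volume, 1, 3, "AIP"))  # [[4.0, 3.0]]
```

Changing `z_top`/`z_bottom` corresponds to changing the projection depth range selected on the report screen.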
• The image processing apparatus 101 stores, in the external storage unit 102, the acquired image group (SLO images and tomographic images), the imaging condition data of the image group, and the data obtained in S905, in association with the examination date and time and information identifying the subject's eye.
  • the data obtained in S905 includes the generated three-dimensional and frontal motion contrast images, and their accompanying generation condition data.
  • Step 906 (S906)>
  • the display control unit 101-05 causes the display unit 104 to display the tomographic images generated and corrected in S903 and S904, the three-dimensional and frontal motion contrast images synthesized in S905, and information on imaging conditions and synthesis conditions.
  • FIG. 10C shows an example of the report screen 1000 displayed on the display unit 104.
  • the SLO image, the tomographic image with the shadow area corrected, the frontal motion contrast images in different depth ranges generated by combining and projecting in S905, and the corresponding frontal OCT image are displayed.
• Front motion contrast images 1001 and 1005, generated with the retinal surface layer (upper row) and the retinal deep layer (lower row) as the projection depth ranges, are displayed.
  • PA is suppressed in the front motion contrast image 1005 of the deep retina shown in the lower part.
  • the motion contrast image displayed on the display unit 104 is not limited to the front motion contrast image. For example, a three-dimensional (PA suppressed) motion contrast image may be displayed.
  • the projection range of the front motion contrast image can be changed by the operator selecting from a predetermined depth range set 1002, 1006 displayed in the list box. Further, the type and offset position of the layer boundary used to specify the projection range can be changed using the user interfaces 1003 and 1007. Further, by moving the layer boundary data 1004 and 1008 superimposed on the tomographic image by operating the input unit 103, the projection range of the motion contrast image can be changed. Further, the image processing apparatus 801 may be configured so that the user presses the button 1009 shown in FIG. 10C to perform the motion contrast combining processing performed in S905.
  • a B-scan tomographic image or a frontal tomographic image in which the applicability of the shadow area correction processing is changed may be displayed by using the user interface 705 for designating the applicability of the shadow area correction processing in the tomographic image.
  • the presence or absence of the PA suppression processing in the motion contrast image or the motion contrast front image superimposed on the B-scan tomographic image is also changed.
  • a user interface 705 for specifying whether or not to apply the shadow area correction processing on the tomographic image is displayed.
  • the display mode of the user interface for the instruction is not limited to this.
• The present invention also includes a case where a user interface for designating whether or not to apply the PA suppression processing is displayed on the display unit 104 and the image processing content of the image processing unit 101-04 is changed according to input to this user interface.
  • the PA suppression processing application state on the motion contrast image displayed on the display unit 104 is changed according to an input to the user interface for designating whether or not the PA suppression processing can be applied.
• The shadow area on the tomographic image (B-scan or front image) displayed on the display unit 104 may be corrected in conjunction with the input to the user interface for designating whether or not to apply the PA suppression processing; the case where it is not corrected is also included in the present invention.
  • the shadow region generated under the object in the eye such as the blood vessel in the tomographic image is corrected using the learned model.
  • the image correction of the shadow region is not limited to the image correction using the learned model described above.
• The present invention also includes a case in which PA is suppressed by generating a motion contrast image using a tomographic image whose shadow region has been corrected based on any known image processing method, such as that disclosed in Patent Document 1. An example of such image processing will be described below.
• From the acquired tomographic image, for example, points on the inner limiting membrane 1 and the retinal pigment epithelium 5 shown in FIG. 4A are detected as candidate points.
• Next, it is determined whether the shadow area 402 has occurred near each candidate point (that is, whether the image there indicates a shadow area). If it is determined to be a shadow area, a statistic related to the luminance values in the shadow area is obtained, and the luminance values (pixel values) in the shadow area are then corrected based on the obtained statistic. Even with such image processing, a tomographic image in which the pixel values of the shadow region 402 have been corrected can be obtained.
  • the shadow region 402 usually occurs behind a specific object included in the eye to be examined, such as a blood vessel, in the optical axis direction of the measurement light. For this reason, as a pre-processing, a main blood vessel which is the specific object may be identified, and an area where the image processing is performed may be limited to the surrounding area. By adopting such a method, it is expected that the processing speed of image processing of one tomographic image will be increased.
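A minimal, hypothetical sketch of such statistic-based correction follows. It is not the algorithm of Patent Document 1; the neighbor-ratio threshold and the image values are invented for illustration.

```python
# Hypothetical sketch of statistic-based shadow correction: an A-scan
# column whose mean brightness is far below that of its neighbors is
# treated as a shadow, and its pixel values are rescaled so its mean
# matches the neighborhood statistic.

def col_mean(img, x):
    """Mean brightness of one A-scan column."""
    return sum(row[x] for row in img) / len(img)

def correct_shadows(img, ratio=0.5):
    """Rescale columns whose mean is below `ratio` times the mean of the
    two adjacent columns."""
    w = len(img[0])
    out = [row[:] for row in img]
    for x in range(1, w - 1):
        neighbor = (col_mean(img, x - 1) + col_mean(img, x + 1)) / 2
        mine = col_mean(img, x)
        if 0 < mine < ratio * neighbor:
            gain = neighbor / mine
            for row in out:
                row[x] *= gain
    return out

# Column 1 is a "shadow": its values are depressed relative to columns 0 and 2.
img = [[100.0, 20.0, 100.0],
       [80.0, 16.0, 80.0]]
fixed = correct_shadows(img)
print([round(v, 1) for v in fixed[0]])  # [100.0, 100.0, 100.0]
```

Restricting the loop over `x` to columns near an identified blood vessel, as suggested above, would reduce the processing time in the same way.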
  • the above-described configuration for identifying a shadow region may be used as an identification unit included in the correction units 101-43. Such identification of a blood vessel or a shadow region is also effective in correcting a pixel value using the learned model described above.
• In the above, teacher data consisting of pairs of whole tomographic images was described, but a case where a tomographic image is divided into a plurality of smaller rectangular areas and pairs with the corresponding image-processed rectangular areas are used as teacher data is also assumed.
• Using the above-described identification unit, it is possible to specify rectangular areas that include a blood vessel identified as possibly generating a shadow area, or a structure identified as a shadow area. Accordingly, by performing image processing only on the identified rectangular regions including the blood vessel or shadow region, as partial regions of the tomographic image, a reduction in the processing time required for one tomographic image can be expected.
• As described above, the image processing device 801 includes a motion contrast image generation unit in addition to the above-described image acquisition unit and correction unit. The motion contrast image generation unit generates a motion contrast image using the plurality of tomographic images whose pixel values have been corrected by the correction unit.
  • the image acquiring unit acquires a cluster including a plurality of tomographic images acquired by scanning the same position of the eye 200 a plurality of times. Ideally, the tomographic images included in the cluster would all be acquired from the same position (on the same scanning line) on the subject's eye, but in practice they are not acquired from exactly the same position because of involuntary fixational eye movements of the subject's eye.
  • a plurality of tomographic images in a cluster are defined as a plurality of tomographic images acquired with the intention of scanning the same position a plurality of times.
  • the plurality of tomographic images in the cluster can therefore be said to be tomographic images obtained by photographing the same portion of the eye to be examined, acquired by controlling the scan so that the measurement light repeatedly scans the same portion of the eye.
  • the motion contrast image generation unit (motion contrast data generation unit 101-12, image processing unit 101-04) generates a motion contrast image using a cluster or a plurality of tomographic images.
  • the correction unit corrects the pixel value of the shadow area generated by the object included in the subject's eye 200 in the plurality of tomographic images.
  • the motion contrast image generation unit generates a motion contrast image using the tomographic image after the pixel value correction.
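One common way to realize "generate a motion contrast image from a cluster" is inter-frame decorrelation of the registered B-scans. The formula below is a widely used choice shown purely as a sketch; the document does not specify this particular formulation.

```python
import numpy as np

def motion_contrast(cluster, eps=1e-9):
    """Simple decorrelation-based motion contrast from a cluster of registered
    B-scans taken at (nominally) the same position:
        D = 1 - 2*A*B / (A^2 + B^2)   per adjacent frame pair,
    averaged over all pairs. Static tissue gives D near 0; flowing blood,
    whose speckle decorrelates between frames, gives larger D."""
    cluster = np.asarray(cluster, dtype=float)
    pairs = []
    for a, b in zip(cluster[:-1], cluster[1:]):
        pairs.append(1.0 - 2.0 * a * b / (a * a + b * b + eps))
    return np.mean(pairs, axis=0)
```

Because D responds to any inter-frame intensity change, uncorrected shadow flicker also raises D, which is exactly why correcting shadow pixel values before this step suppresses projection artifacts (PA).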
  • a blood vessel is taken as an example of an object included in the eye to be examined, but objects such as white spots (exudates), opacities of the intermediate optic media, and lesions such as hemorrhages are also included.
  • the image processing device 801 can also be constructed as a mode including an image acquisition unit, a motion contrast generation unit, and a correction unit.
  • the image acquiring unit acquires a cluster including a plurality of tomographic images acquired with the intention of scanning the same position of the eye 200 a plurality of times.
  • the motion contrast generation means (motion contrast data generation unit 101-12, image processing unit 101-04) can generate a motion contrast image using this cluster.
  • the correction unit (correction unit 101-43) corrects the pixel value of the shadow area in the tomographic image, which is a shadow area generated by an object included in the eye 200 to be examined.
  • the motion contrast image generation means generates a motion contrast image using a cluster including the tomographic image in which the pixel value of the shadow area has been corrected.
  • the correction of the pixel value at that time can also be performed by various known image processing.
  • the image correction unit has a trained model obtained by learning based on pairs of a tomographic image including a shadow region acquired from the subject's eye 200 and a tomographic image obtained by applying image processing to the shadow region of that tomographic image. By using this learned model, a tomographic image after pixel value correction can be easily obtained. However, as described above, the image correction unit can also correct the pixel values of the pixels corresponding to the shadow area by a known image processing method without using the learned model.
  • the image processing apparatus 801 may further include a display control unit 101-05 (display control means) that causes the display unit 104 (display means) to display a display target image.
  • the display control unit causes the display unit 104 to display at least one of a tomographic image and a motion contrast image whose pixel values have been corrected.
  • the operator determines whether or not the pixel values are appropriately corrected in the tomographic image. For example, when the correction is incomplete or excessive, the motion contrast image generated using that tomographic image is likely to include PA and the like.
  • the apparatus may further include input means (the input unit 103 and the user interface 705) that receives a determination of whether the tomographic image after pixel value correction is acceptable. If a motion contrast image were generated from a tomographic image with improper pixel value correction, it would not be suitable for diagnosis or the like. Therefore, when the input means receives a determination that the displayed tomographic image is unacceptable, the display control unit 101-05 causes the display unit 104 to display a motion contrast image generated using the plurality of tomographic images before pixel value correction. In this way, an image more suitable for diagnosis or the like is always displayed on the display unit 104.
  • it is desirable that the display control unit 101-05 be able to display either the motion contrast image generated using the tomographic images before pixel value correction or the motion contrast image generated using the tomographic images after pixel value correction.
  • the display control unit 101-05 may cause the display unit 104 to display a user interface for switching whether to apply the correction of the pixel values of the shadow area to the acquired plurality of tomographic images.
  • the display unit 104 may further display at least one of a character string or a mark indicating the application state of the pixel value correction of the shadow area, and the value of the correction amount of the pixel values.
  • the display unit 104 may display the tomographic image with the corrected pixel value.
  • an image desired by the operator can be provided as appropriate.
  • the image processing device 801 applies the shadow area correction processing to the tomographic image scanned a plurality of times at the same position, and generates a motion contrast image using the obtained tomographic image.
  • a motion contrast image with reduced PA can be obtained.
  • in the present embodiment, a case is described in which the image processing apparatus measures a layer shape or a lamina cribrosa shape using a tomographic image to which the shadow region correction processing using the learned model described in the first embodiment has been applied.
  • FIG. 11 shows the configuration of the image processing apparatus 1101 according to the present embodiment. Note that components having substantially the same functions as those of the components of the image processing apparatus 101 according to the first embodiment will be denoted by the same reference numerals, and description thereof will be omitted.
  • the image processing apparatus 1101 according to the present embodiment is different from the first embodiment in that an image processing unit 101-04 includes an analysis unit 101-47.
  • the analyzing unit 101-47 includes an extracting unit 101-471 and a measuring unit 101-472.
  • FIG. 12 is a flowchart showing a flow of operation processing of the entire system in the present embodiment.
  • the processes other than S1205 and S1206 in FIG. 12 are the same as the processes performed in the corresponding steps in the first embodiment, and thus description thereof will be omitted.
  • Step 1205: the extraction unit 101-471 extracts a predetermined layer region, the lamina cribrosa region, and the lamina cribrosa pore region based on the retinal and choroidal layer boundaries and the anterior/posterior boundaries of the lamina cribrosa acquired from the shadow-region-corrected tomographic image.
  • a high-luminance region within the depth range surrounded by the end of the Bruch's membrane boundary and the front and back surfaces of the cribriform plate detected in S1203 is specified as the cribriform plate region.
  • the low-luminance mass region acquired from the superimposed tomographic front image is specified as the cribriform plate hole region.
  • the lamina cribrosa pore region is not limited to being specified as a two-dimensional region.
  • a three-dimensional Hessian filter may be applied to the three-dimensional tomographic image in which the shadow region has been corrected to emphasize the lamina cribrosa pore region, and then the low-luminance tubular regions existing within the depth range surrounded by the end of the Bruch's membrane boundary and the anterior and posterior surfaces of the lamina cribrosa may be specified as a three-dimensional lamina cribrosa pore region.
  • the shadow region correction processing applied to the tomographic image in S1204 reduces the shadows particularly in the deep and outer layers of the retina, the layer regions belonging to the choroid, and the lamina cribrosa. Therefore, the deep and outer retinal layers, the choroid, the lamina cribrosa, and the lamina cribrosa pore region can be specified more accurately.
  • the measuring unit 101-472 calculates a measured value related to the layer region belonging to the retina and the choroid and the shape of the lamina cribrosa.
  • the retinal thickness and the choroid thickness are measured as the measurement values relating to the layer shape
  • the thickness in the depth direction of the lamina cribrosa and the diameter of the lamina cribrosa pores are measured as the measurement values relating to the lamina cribrosa shape.
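The thickness and diameter measurements above reduce to simple geometry once the boundaries and regions are segmented. A minimal sketch, with all names and units assumed for illustration:

```python
import numpy as np

def layer_thickness_map(upper_boundary, lower_boundary, axial_um_per_pixel):
    """Per-A-scan layer thickness (e.g. retinal thickness between the ILM and
    the RPE/Bruch's boundary) in micrometres. Each boundary array holds the
    depth index of that boundary for every A-scan."""
    upper = np.asarray(upper_boundary, dtype=float)
    lower = np.asarray(lower_boundary, dtype=float)
    return (lower - upper) * axial_um_per_pixel

def equivalent_diameter(region_mask, um_per_pixel):
    """Equivalent-circle diameter of a segmented region (e.g. a lamina
    cribrosa pore) derived from its pixel area."""
    area = float(np.count_nonzero(region_mask)) * um_per_pixel ** 2
    return 2.0 * np.sqrt(area / np.pi)
```

The same thickness helper applies to choroidal thickness or lamina cribrosa depth-direction thickness by swapping in the corresponding boundary pair.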
  • the segmentation process and the measurement process performed in S1205 are not limited to being applied to the entire image.
  • the operator may use the input unit 103 to perform a segmentation process or a measurement process only on an area of an arbitrary shape set on a tomographic image or an enhanced image of the tomographic image.
  • the segmentation or measurement processing may be performed only in the ETDRS chart for the macula portion, and only in the pie-chart-shaped sector region for the optic disc.
  • the segmentation process is directly performed on the tomographic image.
  • the present invention is not limited to the exemplified processing, and the segmentation processing may be performed after applying any known enhancement processing to the tomographic image.
  • the display control unit 101-05 causes the display unit 104 to display the information related to the measurement acquired in S1205.
  • FIG. 13 shows an example of a report screen 1300 displayed on the display unit 104.
  • the display control unit 101-05 superimposes a translucent color map 1301 indicating the retinal thickness measured by the measurement unit 101-472 in S1205 on the SLO image 702 at the upper left of the report screen 1300.
  • the thickness map to be displayed is not limited to the retinal thickness, and an arbitrary layer thickness map may be displayed as long as the layer thickness can be measured in S1205.
  • a choroid thickness map may be displayed.
  • a superimposed tomographic image (B-scan image) 704 with the shadow area corrected is displayed at the upper right of the report screen 1300, and a superimposed front tomographic image 703 with the shadow area corrected is displayed at the lower left.
  • a layer thickness (retinal thickness in the present embodiment) graph 1302 measured on the currently displayed B-scan image is displayed.
  • the measurement data calculated based on the layer boundaries and the lamina cribrosa boundaries acquired from the B-scan tomographic image in which the shadow area is suppressed is displayed. For this reason, the layer or lamina cribrosa shape can be measured robustly, with little influence from the shadow region.
  • a user interface 705 for switching whether to apply the shadow area correction processing may be displayed. Furthermore, a character string, a mark, or a correction amount value indicating the application state of the shadow area correction processing for the tomographic image (B-scan/front/3D image) displayed on the display unit 104 may be displayed. These may be displayed together with the user interface 705. The user interface 705 receives an instruction regarding whether to apply the shadow area correction processing. Based on this instruction, whether to apply the shadow area correction processing to the tomographic image (B-scan image 704, front tomographic image 703, 3D image (not shown)) displayed on the display unit 104 is switched.
  • the measurement data may be switched and displayed on the display unit 104 in conjunction with the switching of the application of the shadow area correction process to the tomographic image (the B-scan image 704 or the front tomographic image 703, the 3D image).
  • the measurement data includes measurement data relating to the layer shape and the cribriform plate shape for the tomographic image (with / without shadow area correction).
  • a user interface for adjusting the luminance correction amount of the shadow area may be provided so that the user can manually adjust the correction amount in the shadow area.
  • the extraction unit 101-471 and the measurement unit 101-472 calculate measurement data based on the adjusted tomographic image of the shadow correction amount, and the display control unit 101-05 causes the display unit 104 to display this.
  • the correction unit (correction unit 101-43) corrects the pixel value of the shadow region generated in at least one of the retina, the choroid, and the lamina cribrosa in the tomographic image.
  • the specifying unit (extraction unit 101-471) specifies at least one boundary among the layer region, the lamina cribrosa region, and the lamina cribrosa pore region of the eye 200 from the tomographic image in which the pixel values have been corrected.
  • the measurement means (measurement unit 101-472) calculates a measurement value for at least one of the layer region, the lamina cribrosa region, the lamina cribrosa pore region, the vascular region, and the avascular region specified by the specifying unit.
  • a detailed description of the display control unit 101-05 in the image processing apparatus 1101 according to the present embodiment is omitted here. However, the display control unit 101-05 can control the display unit 104 to perform a display similar to, for example, that described in the second embodiment.
  • the image processing apparatus 1101 may be configured to include an image acquisition unit, a correction unit using the learned model, a specifying unit, and a measurement unit.
  • the specifying unit and the measuring unit perform the specifying and measuring process using the tomographic image whose pixel value has been corrected by the correcting unit.
  • the image acquisition unit acquires a tomographic image (intensity image) of the eye 200 to be inspected.
  • the correction unit (correction unit 101-43) corrects the pixel values of a shadow region generated in the tomographic image by a structure included in the eye to be examined.
  • the specifying means specifies at least one boundary among the layer region, the lamina cribrosa region, the lamina cribrosa pore region, the blood vessel region, and the avascular region of the eye 200 to be examined in the tomographic image.
  • the measurement unit calculates a measurement value for at least one of the layer region, the lamina cribrosa region, the lamina cribrosa pore region, the blood vessel region, and the avascular region specified by the specifying means.
  • the image processing apparatus 1101 measures the layer shape and the lamina cribrosa shape on a tomographic image to which the shadow region correction processing using the learned model described in the first embodiment has been applied. This makes it possible to measure the layer or lamina cribrosa shape robustly, with little influence from the shadow area.
  • the image processing apparatus uses a PA-suppressed motion contrast image generated using a tomographic image to which the shadow area correction processing described in the second embodiment is applied.
  • the blood vessel region is specified from the motion contrast image, and the shape and distribution of the blood vessel are measured.
  • FIG. 14 shows the configuration of the image processing apparatus 1401 according to the present embodiment. Note that components having substantially the same functions as those of the image processing apparatuses 101 and 801 in the first and second embodiments are denoted by the same reference numerals, and description thereof is omitted.
  • An image processing apparatus 1401 according to the present embodiment is different from the second embodiment in that an image processing unit 101-04 includes an analysis unit 101-47.
  • the analyzing unit 101-47 includes an emphasizing unit 101-473 for emphasizing a blood vessel region in addition to the extracting unit 101-471 and the measuring unit 101-472 described in the third embodiment.
  • FIG. 15 is a flowchart showing the flow of the operation processing of the entire system in the present embodiment.
  • the processing is the same as the processing performed in each corresponding step in the second embodiment, and the description thereof will be omitted.
  • Step 1507: the operator operates the input unit 103 to instruct the start of the measurement process.
  • the measurement screen is displayed by double-clicking the image on the report screen in FIG. 10C, and the analysis unit 101-47 starts the measurement processing.
  • the operator uses the input unit 103 to select the type of measurement processing on a measurement start screen (not shown).
  • the types of measurement for a motion contrast image include 1) vessel area density (VAD) and 2) vessel length density (VLD). The operator selects a desired measurement type from these.
  • the present invention also includes a case where the area or shape of a non-perfusion (avascular) area (NPA) is calculated for a motion contrast image.
  • VAD is an abbreviation for Vessel Area Density, and is a blood vessel density (unit:%) defined by the ratio of the blood vessel region included in the measurement target.
  • VLD is an abbreviation for Vessel Length Density, and is a blood vessel density defined by the total length of the blood vessels included in a unit area (unit: mm⁻¹).
  • Vessel density is an index for quantifying the occlusion range of blood vessels and the degree of density of the vascular network, and VAD is most often used.
  • in VAD, the contribution of the large blood vessel region to the measured value is large. Therefore, when it is desired to perform measurement focusing on capillary pathology, VLD is used as an index more sensitive to capillary occlusion.
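Under the definitions above, VAD and VLD can be computed directly from a binary vessel mask and its skeleton. The pixel-count length approximation (ignoring diagonal steps) is an assumption of this sketch.

```python
import numpy as np

def vessel_area_density(vessel_mask):
    """VAD (%): ratio of vessel pixels within the measurement region."""
    return 100.0 * np.count_nonzero(vessel_mask) / vessel_mask.size

def vessel_length_density(skeleton_mask, mm_per_pixel):
    """VLD (mm^-1): total centre-line length per unit area. Length is
    approximated as (skeleton pixel count) * pixel pitch."""
    length_mm = np.count_nonzero(skeleton_mask) * mm_per_pixel
    area_mm2 = skeleton_mask.size * mm_per_pixel ** 2
    return length_mm / area_mm2
```

Because VLD counts a wide vessel and a capillary as the same one-pixel centre line, large vessels no longer dominate the index, which is exactly the sensitivity property noted above.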
  • the types of blood vessel measurement to which the present invention can be applied are not limited to these. For example, the Fractal Dimension, which quantifies the complexity of the blood vessel structure, or the Vessel Diameter Index, which represents the distribution of blood vessel diameters (the distribution of aneurysms or stenoses of blood vessels), may be measured.
  • the analysis unit 101-47 performs pre-processing for the measurement processing. Any known image processing can be applied as the pre-processing.
  • a top-hat filter process which is a type of morphological operation, is performed on a motion contrast image. By applying the top hat filter, the luminance unevenness of the background component can be reduced.
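A loop-based sketch of the white top-hat (image minus its grayscale opening), which removes slowly varying background while keeping narrow bright structures such as vessels. The flat square structuring element and its size are illustrative assumptions.

```python
import numpy as np

def _grey_filter(img, size, func):
    """Apply a windowed min/max (flat-element erosion/dilation) per pixel."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = func(padded[y:y+size, x:x+size])
    return out

def top_hat(img, size=5):
    """White top-hat: image minus its morphological opening (erosion then
    dilation). Structures narrower than the element survive; the smooth
    background is subtracted away. Loop-based for clarity only."""
    img = np.asarray(img, dtype=float)
    eroded = _grey_filter(img, size, np.min)
    opened = _grey_filter(eroded, size, np.max)
    return img - opened
```

Production code would use an optimized library morphology routine; the per-pixel loops here just make the erosion/dilation steps explicit.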
  • the analysis unit 101-47 performs a process of specifying a blood vessel region on the motion contrast image.
  • the enhancement unit 101-473 performs blood vessel enhancement processing on the motion contrast image based on a Hessian filter.
  • the extraction unit 101-471 performs a segmentation process on the blood vessel enhanced image, and specifies a blood vessel region by performing a shaping process. Details of the blood vessel region identification processing will be described later with reference to S1610 to S1650 shown in the flowchart of FIG. 16A.
  • the measurement unit 101-472 measures the blood vessel distribution in the image of a single examination based on the information on the measurement target area specified by the operator. Subsequently, the display control unit 101-05 displays the measurement result on the display unit 104.
  • the blood vessel density which is an index of the blood vessel distribution there are two kinds of indices of VAD and VLD described above.
  • in the present embodiment, an example of a procedure for calculating VLD, which is an index more sensitive to capillary occlusion, will be described.
  • the measurement of the VLD for the motion contrast image will be described later with reference to S1660 to S1670 shown in the flowchart of FIG. 16B.
  • the display control unit 101-05 causes the display unit 104 to display a report on the measurement performed in S1509. At this time, for each measurement target image, the display unit 104 may also display the left/right eye, the shooting date and time, the angle of view and the number of pixels, the number of tomographic images at substantially the same position, and the OCTA superimposition processing execution conditions. Further, information on the evaluation value of the motion contrast image, the projection method, and whether PA removal has been performed may also be displayed on the display unit 104.
  • a motion contrast image or a binary image of a blood vessel region or a blood vessel center line may be superimposed on the tomographic front image on the display unit 104 by appropriately changing the color and transparency for each predetermined depth range.
  • the motion contrast image or the binary image of the blood vessel region or the blood vessel center line is not limited to the projection display as the front image, and may be rendered three-dimensionally and displayed as a three-dimensional image.
  • the projection method (MIP / AIP) and the projection artifact suppression processing may be changed by a method such as selection from a context menu.
  • the binary image relating to the blood vessel region specified in S1508, the measured value or the measured value map calculated in S1509 may be output to the external storage unit 102 as a file and stored.
  • the emphasis scale adjustment processing is performed by setting an appropriate emphasis scale (parameter setting) according to the depth range and the region.
  • by using a large emphasis scale, blood vessels with large diameters, such as retinal arteries and veins or deep choroidal vessels, are appropriately emphasized (without being broken up). Therefore, the blood vessel region can be specified accurately.
  • by performing the enhancement process on a small scale, the edges of thin blood vessels are emphasized. Therefore, the blood vessel region can be specified more accurately when binarized (over-detection of the blood vessel region can be prevented).
  • in the present embodiment, the emphasis scale adjustment processing for blood vessels of different thicknesses sets the emphasis scale on the front motion contrast image.
  • an emphasis scale may be adaptively set for a three-dimensional motion contrast image.
  • the enhancement unit 101-473 performs a blood vessel enhancement filter process (tubular structure enhancement process) based on the eigenvalue of the Hessian matrix on the motion contrast image that has been subjected to the preprocessing of S1507.
  • Such an emphasis filter is generically referred to as a Hessian filter, and includes, for example, a Vesselness filter and a Multi-scale line filter.
  • a multi-scale line filter is used, but any known blood vessel enhancement filter may be used.
  • the Hessian filter smoothes the image with a size suited to the diameter of the blood vessel to be emphasized. Then, a Hessian matrix whose elements are the second derivatives of the luminance values is calculated at each pixel of the smoothed image, and the local structure is emphasized based on the magnitude relationship of the eigenvalues of the matrix.
  • the Hessian matrix is a square matrix given by Expression (2), and each element of the matrix is expressed, for example, by a second derivative of the luminance value Is of the image obtained by smoothing the luminance value I of the image, as shown in Expression (3).
  • the Hessian filter emphasizes, as a linear structure, locations where one of the eigenvalues (λ1, λ2) of the Hessian matrix is close to 0 and the other is negative with a large absolute value. This is equivalent to emphasizing, as linear structures, pixels that satisfy the characteristics of a blood vessel region on the motion contrast image, namely, that the luminance change is small in the running direction of the vessel and the luminance value decreases greatly in the direction orthogonal to the running direction.
  • the motion contrast image includes blood vessels of various diameters, from capillaries to arterioles and venules. Therefore, line-enhanced images are generated using the Hessian matrix for images smoothed by a Gaussian filter at a plurality of scales.
  • as shown in Expression (4), after each line-enhanced image is multiplied by the square of the smoothing parameter σ of the Gaussian filter as a correction coefficient, the images are combined by a maximum value operation, and the combined image Ihessian is set as the output of the Hessian filter.
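The multi-scale Hessian line enhancement described around Expressions (2) to (4) can be sketched as follows. The exact response function varies between Vesselness/line-filter variants, so the eigenvalue-based response here is one simple choice rather than the patent's exact formula.

```python
import numpy as np

def _gaussian_smooth(img, sigma):
    """Separable Gaussian smoothing with a truncated kernel (radius 3*sigma)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2)); k /= k.sum()
    sm = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, sm)

def multiscale_line_filter(img, sigmas=(1.0, 2.0, 4.0)):
    """At each scale: smooth, form the Hessian from second derivatives, take
    the more negative eigenvalue as the bright-line response, weight by
    sigma^2 (the correction coefficient of Expression (4)), then combine the
    scales with a pixel-wise maximum."""
    img = np.asarray(img, dtype=float)
    out = np.zeros_like(img)
    for s in sigmas:
        sm = _gaussian_smooth(img, s)
        gy, gx = np.gradient(sm)          # first derivatives (rows, cols)
        gyy, _ = np.gradient(gy)          # second derivatives
        gxy, gxx = np.gradient(gx)
        # eigenvalues of [[gxx, gxy], [gxy, gyy]] in closed form
        half_tr = (gxx + gyy) / 2.0
        det_term = np.sqrt(((gxx - gyy) / 2.0) ** 2 + gxy ** 2)
        lam_low = half_tr - det_term      # the more negative eigenvalue
        out = np.maximum(out, s * s * np.maximum(-lam_low, 0.0))
    return out
```

The σ² weight compensates for the amplitude loss of second derivatives at coarser scales, so responses from different vessel diameters become comparable before the maximum operation.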
  • the present invention is not limited to applying a two-dimensional Hessian filter to a front motion contrast image.
  • a three-dimensional Hessian filter may be applied to a three-dimensional motion contrast image to generate a three-dimensional enhanced image.
  • the Hessian filter has the advantage of being resistant to noise and improving the continuity of blood vessels.
  • the maximum diameter of the blood vessels included in an image is often unknown in advance, and in particular, when the smoothing parameter is too large relative to the maximum blood vessel diameter in the image, there is a disadvantage that the emphasized blood vessel region tends to become thicker than the actual vessel.
  • this disadvantage is suppressed by performing the emphasis scale adjustment processing as described in S1610.
  • the method for appropriately enhancing and binarizing the motion contrast image regardless of the blood vessel diameter is not limited to the method described in the present embodiment.
  • for example, the common region of the binarized Hessian-enhanced image and a binarized blood-vessel-enhanced image based on edge-selective sharpening may be specified as the blood vessel region.
  • the extraction unit 101-471 binarizes the blood-vessel-enhanced image generated using the Hessian filter in S1620 (hereinafter referred to as the Hessian-enhanced image).
  • binarization is performed using the average value of the Hessian emphasized image as a threshold.
  • by setting a predetermined lower limit on the threshold value for binarization, it is possible to prevent regions other than blood vessels from being erroneously detected due to artifacts.
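The mean threshold with a lower limit can be sketched in a few lines; the clamp value is an assumed parameter, not one stated in the document.

```python
import numpy as np

def binarize_enhanced(enhanced, lower_limit):
    """Binarize a Hessian-enhanced image using its mean value as the
    threshold, clamped below by `lower_limit` so that weak artifact
    responses in vessel-free areas are not picked up."""
    thresh = max(float(enhanced.mean()), lower_limit)
    return enhanced > thresh
```

Without the clamp, an image containing almost no vessels would have a near-zero mean, and noise alone would cross the threshold.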
  • the binarization processing described here is not limited to threshold processing, and may be binarized by any known segmentation method.
  • the segmentation processing by binarization is not limited to being applied to the entire image.
  • the operator may use the input unit 103 to perform the segmentation processing only on a motion contrast image or an arbitrary shape region set on the enhanced image of the motion contrast image.
  • the extraction unit 101-471 performs thinning processing on the binary image of the blood vessel region generated in S1630 to generate a binary image with a line width of one pixel corresponding to the center line of the blood vessel (hereinafter referred to as a skeleton image).
  • any thinning method or skeleton processing may be used, but in the present embodiment, Hilditch's thinning method is used as the thinning method.
  • the extraction unit 101-471 performs morphological operation processing (opening processing (dilation after erosion) and closing processing (erosion after dilation)) as shaping processing of the blood vessel region.
  • the shaping process is not limited to this, and for example, small regions may be removed based on the area of each label when a binary image is labeled.
  • the analysis unit 101-47 sets a region of interest (measurement target image and measurement region) based on the content specified by the operator using the input unit 103.
  • the ETDRS sector region is set as the region of interest.
  • the OCTA map refers to a color (or gray scale) map related to the blood vessel density measured in pixel units for the entire OCTA image.
  • the OCTA sector map refers to a statistical value (for example, an average) map of a blood vessel density distribution in the ETDRS sector set in the OCTA image.
  • the mode of the region of interest is not limited to this, and the region of interest may be divided into a pie chart-shaped segmented region, or a region of interest of any shape may be set.
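Computing a sector statistic for such a region of interest amounts to masking an annular wedge and averaging the density map inside it. The geometry conventions below (angles measured from the +x axis, radii in pixels) are illustrative assumptions, not the ETDRS definition itself.

```python
import numpy as np

def sector_mean(density_map, center, r_inner_px, r_outer_px,
                ang_lo_deg, ang_hi_deg):
    """Mean of a per-pixel density map inside one annular sector (e.g. one
    cell of an ETDRS-style grid): a ring bounded by two radii intersected
    with a wedge bounded by two angles."""
    h, w = density_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dy, dx = yy - center[0], xx - center[1]
    r = np.hypot(dy, dx)
    ang = np.degrees(np.arctan2(dy, dx)) % 360.0
    mask = ((r >= r_inner_px) & (r < r_outer_px)
            & (ang >= ang_lo_deg) & (ang < ang_hi_deg))
    return float(density_map[mask].mean())
```

A full sector map is then just this function evaluated over every (ring, wedge) combination of the chosen grid.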
  • the measurement unit 101-472 performs a measurement process based on the skeleton image obtained in S1640.
  • for each pixel, the total length [mm⁻¹] of the non-zero (white) skeleton pixels per unit area in the neighborhood area around that pixel is determined as the blood vessel density (VLD).
  • a VLD map, that is, an image whose pixel values are the blood vessel density (VLD) calculated for each pixel, is generated.
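The per-pixel VLD map described above can be sketched as a sliding-window length-per-area computation; the window size and the pixel-count length approximation are assumptions of this sketch.

```python
import numpy as np

def vld_map(skeleton, mm_per_pixel, window=5):
    """Per-pixel VLD map: for each pixel, the skeleton length within a
    (window x window) neighbourhood divided by that neighbourhood's area
    [mm^-1]. Length is approximated as pixel count times pixel pitch."""
    pad = window // 2
    padded = np.pad(skeleton.astype(float), pad)   # zero-padded borders
    out = np.empty(skeleton.shape, dtype=float)
    area_mm2 = (window * mm_per_pixel) ** 2
    for y in range(skeleton.shape[0]):
        for x in range(skeleton.shape[1]):
            count = padded[y:y+window, x:x+window].sum()
            out[y, x] = count * mm_per_pixel / area_mm2
    return out
```

The nested loops are equivalent to a box filter over the skeleton image followed by a constant scaling, which is how an optimized implementation would compute it.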
  • the shadow region generated under the object in the eye such as the blood vessel in the tomographic image is corrected using the learned model.
  • the method of correcting the shadow area is not limited to the image correction using the learned model.
  • the present invention also includes a case in which PA is suppressed by generating a motion contrast image using tomographic images whose shadow regions have been corrected by any known image processing method, such as that disclosed in Patent Document 1.
  • the image processing apparatus 1401 further includes an area specifying unit in addition to the configuration described in the first embodiment and the second embodiment.
  • the region specifying means (extraction unit 101-471) specifies a blood vessel region or an avascular region, for example by the above-described method, from a motion contrast image generated using the plurality of tomographic images with corrected pixel values.
  • the image processing apparatus 1401 further includes a calculation unit (measurement unit 101-472) that calculates a measurement value for at least one of the blood vessel region and the avascular region specified by the region specification unit.
  • the image processing apparatus 1401 specifies a blood vessel region from a PA-suppressed motion contrast image generated using tomographic images to which the shadow region correction processing described in the second embodiment has been applied, and measures the blood vessel shape and density. This makes it possible to accurately specify the blood vessel region and measure the blood vessel shape and distribution from a PA-suppressed motion contrast image, based on an efficiently trained model that does not require learning of image features on an OCTA image.
  • the present invention is implemented as the image processing apparatuses 101, 801, 1101, and 1401, but the embodiments of the present invention are not limited to the illustrated image processing apparatuses.
  • the present invention can take the form of a system, an apparatus, a method, a program, a storage medium, or the like. That is, the present invention can be realized by supplying a program that implements one or more functions of the above-described embodiments to a system or an apparatus via a network or a storage medium, and by having one or more processors in a computer of that system or apparatus read and execute the program. It can also be realized by a circuit (for example, an ASIC) that implements one or more functions.
  • although the present invention has been described with reference to exemplary embodiments, it is not limited to the above-described exemplary embodiments.
  • the present invention includes modifications made within a scope not departing from its gist, as well as inventions equivalent to the present invention.
  • the above embodiments can be combined as appropriate without departing from the spirit of the present invention.
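As a concrete illustration of the density measurement described for the measurement unit (101-472), the sketch below computes a vessel density as the area fraction of a binarized en-face motion contrast image. The threshold-based binarization and the fixed threshold value are assumptions for illustration; the embodiments do not prescribe a specific binarization method.

```python
import numpy as np

def vessel_density(enface_mc, threshold):
    """Vessel density as the area fraction of vessel pixels.

    enface_mc : 2-D array of en-face motion-contrast values.
    threshold : binarization threshold (hypothetical; the embodiments do
                not prescribe a specific binarization method).
    """
    vessel_mask = enface_mc >= threshold
    return float(vessel_mask.mean())

# Toy en-face image: 2 of 4 pixels exceed the threshold -> density 0.5.
enface = np.array([[0.1, 0.9],
                   [0.8, 0.2]])
density = vessel_density(enface, threshold=0.5)
```

The same mask could equally be restricted to the avascular region (e.g. the foveal avascular zone) before averaging, which is one way the specified avascular region could be measured.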

Abstract

To correct a shadow region in a tomographic image regardless of disease or site, this image processing device comprises: an image acquisition means for acquiring a tomographic image of an eye under examination; and a correction means for correcting, using a trained model, pixel values of a shadow region in the tomographic image, the shadow region arising due to an object contained in the eye under examination.

Description

Image processing apparatus, image processing method, and program
 The present invention relates to an image processing apparatus, an image processing method, and a program.
 With a tomographic imaging apparatus for the eye such as an optical coherence tomograph (Optical Coherence Tomography; hereinafter OCT), the state inside the retinal layers can be observed three-dimensionally. Such tomographic imaging apparatuses are widely used in ophthalmic care because they are useful for diagnosing diseases more accurately.
 One form of OCT is TD-OCT (Time Domain OCT), which combines a broadband light source with a Michelson interferometer. It moves the reference mirror at a constant speed, measures the interference with the backscattered light collected by the signal arm, and obtains the reflected-light intensity distribution in the depth direction. However, because such TD-OCT requires mechanical scanning, high-speed image acquisition is difficult. SD-OCT (Spectral Domain OCT) and SS-OCT (Swept Source OCT) have therefore been developed as faster image acquisition methods. SD-OCT uses a broadband light source and acquires the interference signal with a spectroscope, while SS-OCT separates the spectrum in time by using a rapidly swept-wavelength light source. These forms of OCT can acquire tomographic images with a wider angle of view and greater depth of penetration.
 On the other hand, as shown in FIG. 4A, when the measurement light is strongly reflected or absorbed by an object such as the blood vessel 401, a shadow region 402 in which the signal is attenuated or lost can occur behind the object in the tomographic image 403. FIG. 4A illustrates a tomographic image in which a shadow region has occurred. Here, the object is, for example, tissue such as the blood vessel 401, or a lesion such as an exudate (white spot) or hemorrhage. Normally, in the depth direction of the retina, the brightness is at a maximum near the photoreceptor inner/outer segment junction (IS/OS) 4 or the retinal pigment epithelium 5. However, when the shadow region 402 occurs below the blood vessel 401, the brightness near the IS/OS 4 or the retinal pigment epithelium 5 within the shadow region 402 is attenuated or lost.
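How a shadow region manifests numerically can be illustrated with a simple heuristic: because the measurement light is attenuated by the overlying object, the depth-integrated intensity of an affected A-scan drops. The sketch below flags such columns; this is only an illustrative heuristic (with a hypothetical threshold of 0.7), not the learned-model correction of the embodiments nor the layer-boundary method of Patent Document 1.

```python
import numpy as np

def shadow_columns(bscan, rel_threshold=0.7):
    """Flag A-scans (columns) whose integrated intensity is abnormally low.

    bscan         : 2-D array, shape (depth, n_ascans), linear intensity.
    rel_threshold : columns whose depth-integrated intensity falls below
                    rel_threshold * (median column sum) are flagged.
                    Both the heuristic and the value 0.7 are illustrative.
    """
    col_sum = bscan.sum(axis=0)
    return col_sum < rel_threshold * np.median(col_sum)

# Toy B-scan: the third A-scan is attenuated as if shadowed by a vessel.
bscan = np.ones((4, 5))
bscan[:, 2] *= 0.3
flags = shadow_columns(bscan)
```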
 In addition, in order to grasp the pathology of the fundus blood vessels, OCT Angiography (hereinafter OCTA), a technique that non-invasively renders the fundus vasculature in three dimensions using OCT, is used. In OCTA, the same position is scanned multiple times with the measurement light, and the motion contrast arising from the interaction between the displacement of red blood cells and the measurement light is imaged.
 The scanning pattern of the measurement light in this case will be described with reference to FIG. 4B. In the figure, the main scanning direction is the horizontal (x-axis) direction, and interference signals are acquired at the positions x1, x2, ..., xm. This horizontal scan of the measurement light is referred to here as a B-scan. The figure shows an example of OCTA imaging in which the B-scan is repeated r consecutive times at each position (yi; 1 ≤ i ≤ n) in the sub-scanning (y-axis) direction.
 In OCTA imaging, scanning the measurement light multiple times at the same position is called cluster scanning, and the set of tomographic images obtained at the same position is called a cluster. A motion contrast image is generated per cluster. FIG. 4C shows an example in which a three-dimensional motion contrast image is superimposed on a three-dimensional OCT tomographic image. High motion contrast that is a projection artifact (hereinafter PA) occurs in the region 405 below the superficial retinal vessels. PA is a phenomenon in which the repeated increase and decrease of the OCT signal inside the blood vessel 401 (a superficial retinal vessel) causes the OCT signal in the corresponding shadow region 402 to increase and decrease as well, producing high motion contrast in places such as the outer retina where no blood vessels actually exist. In FIG. 4C, a region 405 with high motion contrast values appears on the deep side of the region 404 with high motion contrast values corresponding to the superficial retinal vessels.
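The per-cluster motion contrast computation described above can be sketched as follows, using the inter-frame decorrelation 1 − 2AB/(A² + B²) as the motion contrast measure. That particular formula is one common OCTA choice and is an assumption here; the description does not fix a specific formula.

```python
import numpy as np

def motion_contrast(cluster, eps=1e-12):
    """Mean inter-frame decorrelation over a cluster of repeated B-scans.

    cluster : array of shape (r, depth, width) -- the r tomographic images
              of one cluster. For consecutive frames A, B the decorrelation
              1 - 2AB / (A^2 + B^2) is averaged pixelwise.
    """
    a, b = cluster[:-1], cluster[1:]
    dec = 1.0 - (2.0 * a * b) / (a ** 2 + b ** 2 + eps)
    return dec.mean(axis=0)

# Static tissue (identical frames) gives ~0; a fluctuating signal, as over
# flowing red blood cells, gives a high motion contrast value.
mc_static = motion_contrast(np.full((3, 4, 4), 5.0))
mc_flow = motion_contrast(np.array([[[1.0]], [[9.0]], [[1.0]]]))
```

This also makes the PA mechanism visible: if a shadow region's OCT signal fluctuates in step with the overlying vessel, the same formula yields high motion contrast there despite the absence of blood flow.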
 In contrast, the technique disclosed in Patent Document 1 obtains layer boundaries from an OCT tomographic image of the eye and detects shadow regions based on image features (brightness values and layer shapes) at those boundaries. Image processing is then used to correct the brightness in the shadow region or to change the layer detection parameters.
JP 2010-279440 A
 However, with the technique disclosed in Patent Document 1, which corrects shadow regions on an OCT tomographic image by image processing based on image features at layer boundaries, whether shadow regions can be corrected depends on whether the layers can be detected. Because layer detection is difficult for some sites and diseases, shadow region correction can consequently be difficult.
 The present invention has been made in view of the above problem, and one of its objects is to correct shadow regions in tomographic images regardless of disease or site.
 The present invention is not limited to the above object; achieving effects that are derived from the configurations shown in the embodiments described below and that cannot be obtained with conventional techniques can also be positioned as another object of the present invention.
 In order to solve the above problem, an image processing apparatus according to one aspect of the present invention comprises:
image acquisition means for acquiring a tomographic image of an eye to be examined; and
correction means for correcting, using a learned model, pixel values of a shadow region in the tomographic image, the shadow region being caused by an object contained in the eye to be examined.
 According to one aspect of the present invention, a shadow region in a tomographic image can be corrected regardless of disease or site.
A block diagram showing the configuration of an image processing apparatus according to the first embodiment of the present invention.
A schematic configuration diagram of an image processing system according to an embodiment of the present invention.
A diagram illustrating the measurement optical system included in the tomographic image capturing apparatus of the image processing system shown in FIG. 2A.
A flowchart of processing executable by the image processing system according to the first embodiment of the present invention.
A diagram illustrating an OCT tomographic image including a shadow region.
A diagram illustrating the scanning method of the measurement light during OCTA imaging.
A diagram illustrating PA occurring below a blood vessel in a motion contrast image.
A diagram illustrating an example of a shadow-corrected OCT tomographic image.
A diagram illustrating an example of the size of the tomographic images used when obtaining the learned model in an embodiment of the present invention.
A diagram illustrating an example of the size of the tomographic images used when obtaining the learned model in an embodiment of the present invention.
A diagram illustrating an example of the size of the tomographic images used when obtaining the learned model in an embodiment of the present invention.
A diagram illustrating an example of the size of the tomographic images used when obtaining the learned model in an embodiment of the present invention.
A diagram illustrating an example of the size of the tomographic images used when obtaining the learned model in an embodiment of the present invention.
A diagram illustrating an example of the size of the tomographic images used when obtaining the learned model in an embodiment of the present invention.
A diagram illustrating an example of the learning model in an embodiment of the present invention.
A diagram illustrating the report screen displayed on the display means in S305 of the processing described as the first embodiment of the present invention.
A block diagram showing the configuration of an image processing apparatus according to the second embodiment of the present invention.
A flowchart of processing executable by the image processing system according to the second embodiment of the present invention.
A diagram illustrating an OCT tomographic image before the processing of S905 in the processing described as the second embodiment of the present invention.
A diagram illustrating an image in which the motion contrast image generated in S905 is superimposed on a tomographic image.
A diagram illustrating an example of the report screen displayed on the display means in S906.
A block diagram showing the configuration of an image processing apparatus according to the third embodiment of the present invention.
A flowchart of processing executable by the image processing system according to the third embodiment of the present invention.
A diagram illustrating the report screen displayed on the display means in S1206 of the processing described as the third embodiment of the present invention.
A block diagram showing the configuration of an image processing apparatus according to the fourth embodiment of the present invention.
A flowchart of processing executable by the image processing system according to the fourth embodiment of the present invention.
A diagram illustrating the processing executed in S1508 of the processing described as the fourth embodiment of the present invention.
A diagram illustrating the processing executed in S1509.
 Hereinafter, exemplary embodiments for carrying out the present invention will be described in detail with reference to the drawings. However, the dimensions, materials, shapes, relative positions of components, and the like described in the following embodiments are arbitrary and can be changed according to the configuration of the apparatus to which the present invention is applied or to various conditions. In the drawings, the same reference numerals are used across figures to denote identical or functionally similar elements.
[First Embodiment]
 The image processing apparatus according to the present embodiment trains a deep-learning machine learning model on pairs of tomographic images of various sites and diseases and the corresponding tomographic images to which shadow region correction processing has been applied. By inputting a tomographic image into the trained machine learning model, shadow regions in the tomographic image are robustly reduced.
 In the following, a machine learning model refers to a learning model based on a machine learning algorithm such as deep learning. A learned model is a model obtained by training a machine learning model based on an arbitrary machine learning algorithm in advance with appropriate teacher data. A learned model is not precluded from further learning; it can also undergo additional learning.
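The training-data preparation implied by this embodiment, pairs of an input tomographic image (containing shadows) and the corresponding shadow-corrected teacher image, can be sketched as follows. The patch size and stride are hypothetical values chosen for illustration; the embodiment itself discusses several possible sizes of the tomographic images used for training.

```python
import numpy as np

def make_training_pairs(tomograms, corrected, patch=(64, 64), stride=32):
    """Cut aligned patch pairs (input, teacher) for supervised training.

    tomograms : list of 2-D B-scans containing shadow regions (model input).
    corrected : the same B-scans after shadow region correction (teacher
                signal). Patch size and stride are hypothetical values.
    """
    pairs = []
    ph, pw = patch
    for x, y in zip(tomograms, corrected):
        h, w = x.shape
        for i in range(0, h - ph + 1, stride):
            for j in range(0, w - pw + 1, stride):
                pairs.append((x[i:i + ph, j:j + pw],
                              y[i:i + ph, j:j + pw]))
    return pairs

# Two 128x128 B-scans -> 3x3 = 9 patch pairs per image, 18 in total.
inputs = [np.zeros((128, 128))] * 2
teachers = [np.ones((128, 128))] * 2
pairs = make_training_pairs(inputs, teachers)
```

Such pairs would then be fed to the machine learning model so that, at inference time, a tomographic image with shadows maps to a shadow-corrected one.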
 Hereinafter, an image processing system including the image processing apparatus according to the first embodiment of the present invention will be described with reference to the drawings.
 FIG. 2A is a diagram showing the configuration of an image processing system 10 including the image processing apparatus 101 according to the present embodiment. As shown in FIG. 2A, the image processing system 10 is configured by connecting the image processing apparatus 101 to a tomographic image capturing apparatus 100 (also referred to as an OCT apparatus), an external storage unit 102, an input unit 103, and a display unit 104 via interfaces.
 The tomographic image capturing apparatus 100 is an apparatus that captures tomographic images of the eye. In the present embodiment, SD-OCT is used as the tomographic image capturing apparatus 100. However, the apparatus is not limited to this form and may be configured using, for example, SS-OCT.
 The tomographic image capturing apparatus 100 includes a measurement optical system 100-1, a stage unit 100-2, and a base unit 100-3. In FIG. 2A, the measurement optical system 100-1 is an optical system for acquiring an anterior eye image, an SLO fundus image of the eye to be examined, and a tomographic image. The stage unit 100-2 supports the measurement optical system 100-1 so that it can move forward, backward, left, and right. The base unit 100-3 incorporates the spectroscope described later.
 In the present embodiment, the image processing apparatus 101 is a computer that controls the stage unit 100-2, controls the alignment operation of the measurement optical system 100-1, reconstructs tomographic images, and so on. The external storage unit 102 stores programs for tomographic imaging, patient information, imaging data, measurement data, and the like. The input unit 103 is used to give instructions to the computer and specifically consists of a keyboard and a mouse. The display unit 104 consists of, for example, a monitor.
(Configuration of the tomographic image capturing apparatus)
 The configurations of the measurement optical system and the spectroscope in the tomographic image capturing apparatus 100 of the present embodiment will be described with reference to FIG. 2B.
 First, the inside of the measurement optical system 100-1 will be described. An objective lens 201 is installed facing the eye to be examined 200, and a first dichroic mirror 202 and a second dichroic mirror 203 are arranged on its optical axis. These dichroic mirrors split the optical path to the eye 200, by wavelength band, into an optical path 250 for the OCT optical system, an optical path 251 for the SLO optical system and a fixation lamp, and an optical path 252 for anterior eye observation.
 The optical path 251 for the SLO optical system and the fixation lamp includes an SLO scanning means 204 and lenses 205 and 206. Downstream of the lens 206 are a mirror 207, a third dichroic mirror 208, an APD (avalanche photodiode) 209, an SLO light source 210, and a fixation lamp 211. The mirror 207 is a prism on which a perforated or hollow mirror is deposited, and it separates the illumination light from the SLO light source 210 from the light returning from the eye. The third dichroic mirror 208 separates the optical path 251, by wavelength band, into the optical path of the SLO light source 210 and the optical path of the fixation lamp 211.
 The SLO scanning means 204 scans the illumination light emitted from the SLO light source 210 over the eye 200, and consists of an X scanner that scans in the X direction and a Y scanner that scans in the Y direction. In the present embodiment, the X scanner is a polygon mirror because it must scan at high speed, and the Y scanner is a galvanometer mirror. However, these mirrors can be replaced as appropriate with various known deflection mirrors according to the required specifications.
 The lens 205 is driven by a motor (not shown) to focus the SLO optical system and the fixation lamp 211. The SLO light source 210 generates light with a wavelength around 780 nm as illumination light. The APD 209 detects the light returning from the eye. The fixation lamp 211 generates visible light to prompt the subject's fixation.
 The illumination light emitted from the SLO light source 210 is reflected by the third dichroic mirror 208 and passes through the mirror 207. It then passes through the lenses 206 and 205 and is scanned over the eye 200 by the SLO scanning means 204. The light returning from the eye 200 travels back along the same path as the illumination light, is reflected by the mirror 207, and is guided to the APD 209, whereby an SLO fundus image is obtained.
 The light emitted from the fixation lamp 211 passes through the third dichroic mirror 208 and the mirror 207. It then passes through the lenses 206 and 205, and the SLO scanning means 204 forms a predetermined shape at an arbitrary position on the eye 200, prompting the subject's fixation.
 In the optical path 252 for anterior eye observation, lenses 212 and 213, a split prism 214, and a CCD 215 for anterior eye observation that detects infrared light are arranged. The CCD 215 is sensitive at the wavelength of the anterior eye observation illumination light (not shown), specifically around 970 nm. The split prism 214 is arranged at a position conjugate with the pupil of the eye 200. From the split image of the anterior eye segment obtained through the split prism 214, the distance of the measurement optical system 100-1 from the eye 200 in the Z-axis (optical axis) direction can be detected.
 The optical path 250 constitutes the OCT optical system as described above and is used to capture tomographic images of the eye 200; more specifically, to obtain the interference signals for forming tomographic images. An OCT XY scanner 216 scans the measurement light over the eye 200. Although shown as a single mirror in FIG. 2B, it actually consists of two galvanometer mirrors that scan in the two X and Y axis directions. The configuration of the OCT XY scanner 216 is not limited to this and may use any other deflection mirror.
 Of the lenses 217 and 218, the lens 217 is used to focus the light from the OCT light source 220, which exits the fiber 224 connected to the optical coupler 219, onto the eye 200. Specifically, it is driven by a motor (not shown) in the optical axis direction indicated by the arrow in the figure. Through this focusing, the light returning from the eye 200 is simultaneously focused into a spot on the tip of the fiber 224 and enters it.
 Next, the optical path from the OCT light source 220, the reference optical system, and the configuration of the spectroscope will be described. These include the OCT light source 220, a reference mirror 221, dispersion compensation glass 222, a lens 223, the optical coupler 219, optical fibers 224 to 227, and a spectroscope 230. The optical fibers 224 to 227 are single-mode optical fibers connected to and integrated with the optical coupler.
 In the present embodiment, these components form a Michelson interferometer. The light emitted from the OCT light source 220 passes through the optical fiber 225 and is split by the optical coupler 219 into measurement light guided to the optical fiber 224 and reference light guided to the optical fiber 226. The measurement light is irradiated onto the eye 200 under observation through the OCT optical path described above and, after reflection and scattering by the eye 200, reaches the optical coupler 219 through the same optical path.
 Meanwhile, the reference light reaches the reference mirror 221 via the optical fiber 226, the lens 223, and the dispersion compensation glass 222, which is inserted to match the chromatic dispersion of the measurement light and the reference light, and is reflected there. It then returns along the same optical path and reaches the optical coupler 219.
 The optical coupler 219 combines the measurement light and the reference light into interference light. Interference occurs when the optical path length of the measurement light and that of the reference light become nearly equal. The reference mirror 221 is held by a motor and drive mechanism (not shown) so as to be adjustable in the optical axis direction indicated by the arrow in the figure, allowing the reference optical path length to be matched to the measurement optical path length. The resulting interference light is guided to the spectroscope 230 via the optical fiber 227.
 In the present embodiment, polarization adjusting units 228 and 229 are provided in the optical fibers 224 and 226, respectively, and perform polarization adjustment. These polarization adjusting units 228 and 229 have several portions in which the optical fiber is wound into loops. By rotating these loop portions around the longitudinal direction of the fiber, the fiber is twisted, and the polarization states of the measurement light and the reference light can each be adjusted and matched.
 The spectroscope 230 consists of lenses 232 and 234, a diffraction grating 233, and a line sensor 231. The interference light emitted from the optical fiber 227 is collimated by the lens 234, dispersed by the diffraction grating 233, and imaged onto the line sensor 231 by the lens 232.
 Next, the OCT light source 220 and its surroundings will be described. The OCT light source 220 is an SLD (Super Luminescent Diode), a typical low-coherence light source. Its center wavelength is 855 nm and its wavelength bandwidth is about 100 nm. The bandwidth is an important parameter because it affects the resolution of the obtained tomographic image in the optical axis direction.
 Although an SLD is selected as the light source in this embodiment, any source that emits low-coherence light may be used, such as an ASE (Amplified Spontaneous Emission) source. Considering that the eye is being measured, near-infrared light is suitable as the center wavelength. Because the center wavelength also affects the lateral resolution of the obtained tomographic image, it is desirable that it be as short as possible. For both reasons, the center wavelength is set to 855 nm in this embodiment.
 Although a Michelson interferometer is used as the interferometer in this embodiment, a Mach-Zehnder interferometer may also be used. Depending on the light-intensity difference between the measurement light and the reference light, it is desirable to use a Mach-Zehnder interferometer when the difference is large and a Michelson interferometer when the difference is relatively small.
(Configuration of the image processing apparatus)
 Next, the configuration of the image processing apparatus 101 of the present embodiment will be described with reference to FIG. 1. In the present embodiment, the image processing apparatus 101 is configured as a personal computer (PC) connected to the tomographic image capturing apparatus 100. The image processing apparatus 101 includes an image acquisition unit 101-01, a storage unit 101-02, an imaging control unit 101-03, an image processing unit 101-04, and a display control unit 101-05.
 また、画像処理装置101は、演算処理装置CPUが画像取得部101-01、撮影制御部101-03、画像処理部101-04及び表示制御部101-05を実現するソフトウェアモジュールを実行することで機能を実現する。なお、本発明はこれに限定されず、例えば画像処理部101-04をASIC等の専用のハードウェアで実現してもよいし、表示制御部101-05をCPUとは異なるGPU等の専用プロセッサを用いて実現してもよい。また、記憶部101-02は、任意のメモリや光学ディスクなどの任意の記憶媒体によって構成されてもよい。さらに、断層画像撮影装置100と画像処理装置101との接続はネットワークを介した構成であってもよい。 In the image processing apparatus 101, a CPU (arithmetic processing unit) executes software modules implementing the image acquisition unit 101-01, the imaging control unit 101-03, the image processing unit 101-04, and the display control unit 101-05, thereby realizing their functions. Note that the present invention is not limited to this; for example, the image processing unit 101-04 may be realized by dedicated hardware such as an ASIC, and the display control unit 101-05 may be realized by a dedicated processor such as a GPU different from the CPU. Further, the storage unit 101-02 may be configured by an arbitrary storage medium such as a memory or an optical disk. Furthermore, the tomographic imaging apparatus 100 and the image processing apparatus 101 may be connected via a network.
 画像取得部101-01は、断層画像撮影装置100により撮影されたSLO眼底像や断層画像の信号データを取得する。また画像取得部101-01は断層画像生成部101-11を有する。断層画像生成部101-11は断層画像撮影装置100により撮影された断層画像の信号データ(干渉信号)を取得して、信号処理により断層画像を生成し、生成した断層画像を記憶部101-02に格納する。 The image acquisition unit 101-01 acquires signal data of the SLO fundus image and the tomographic image captured by the tomographic image capturing apparatus 100. The image acquisition unit 101-01 also has a tomographic image generation unit 101-11. The tomographic image generation unit 101-11 acquires the signal data (interference signal) of the tomographic image captured by the tomographic image capturing apparatus 100, generates a tomographic image by signal processing, and stores the generated tomographic image in the storage unit 101-02.
 撮影制御部101-03は、断層画像撮影装置100に対する撮影制御を行う。撮影制御には、断層画像撮影装置100に対して撮影パラメータの設定に関して指示することや、撮影の開始もしくは終了に関して指示することも含まれる。 The imaging control unit 101-03 controls imaging performed by the tomographic imaging apparatus 100. The imaging control includes instructing the tomographic imaging apparatus 100 regarding the setting of imaging parameters and regarding the start or end of imaging.
 画像処理部101-04は、撮影条件取得部101-41、位置合わせ部101-42、補正部101-43、画像特徴取得部101-44、及び投影部101-45を有する。前述の画像取得部101-01は、本発明に係る取得手段の一例である。撮影条件取得部101-41は、画像処理部101-04が画像処理を行う際に必要となる入力画像の撮影条件データを取得する。撮影条件データには、例えば、撮影日時、部位名、画角、スキャンモード、画像の解像度や階調数、画素サイズ、画像のデータ形式に関する情報などが含まれる。 The image processing unit 101-04 includes a photographing condition acquisition unit 101-41, a positioning unit 101-42, a correction unit 101-43, an image feature acquisition unit 101-44, and a projection unit 101-45. The above-described image acquisition unit 101-01 is an example of an acquisition unit according to the present invention. The photographing condition acquisition unit 101-41 acquires photographing condition data of an input image required when the image processing unit 101-04 performs image processing. The photographing condition data includes, for example, photographing date and time, part name, angle of view, scan mode, image resolution and number of gradations, pixel size, information on image data format, and the like.
 補正部101-43は、学習済モデルを用いて断層画像における血管等の物体下に生じる影領域を2次元もしくは3次元的に抑制する処理を行う。画像特徴取得部101-44は断層画像から網膜や脈絡膜の層境界、篩状板領域の境界、中心窩や視神経乳頭中心の位置などを取得する。投影部101-45は画像特徴取得部101-44が取得した境界位置に基づく深度範囲で断層画像を投影し、Enface画像などの正面断層画像を生成する。 The correction unit 101-43 performs a process of two-dimensionally or three-dimensionally suppressing a shadow region generated below an object such as a blood vessel in a tomographic image using the learned model. The image feature acquisition unit 101-44 acquires a layer boundary of the retina or choroid, a boundary of the lamina cribrosa region, the position of the fovea or the center of the optic disc, and the like from the tomographic image. The projection unit 101-45 projects a tomographic image in a depth range based on the boundary position acquired by the image feature acquiring unit 101-44, and generates a front tomographic image such as an Enface image.
 外部記憶部102は、被検眼の情報(患者の氏名、年齢、性別など)と、撮影した画像(断層画像及びSLO画像)や該画像を処理して得られた画像、撮影パラメータ、及び操作者が設定したパラメータとを関連付けて保持する。入力部103は、例えば、マウス、キーボード、タッチ操作画面などであり、操作者は、入力部103を介して、画像処理装置101や断層画像撮影装置100へ指示を行う。 The external storage unit 102 holds information on the eye to be examined (the patient's name, age, gender, and the like), the captured images (tomographic images and SLO images), images obtained by processing those images, the imaging parameters, and parameters set by the operator, in association with one another. The input unit 103 is, for example, a mouse, a keyboard, or a touch operation screen, and the operator issues instructions to the image processing apparatus 101 and the tomographic image capturing apparatus 100 via the input unit 103.
 次に、図3を参照して本実施形態の画像処理装置101の処理手順を説明する。図3は、本実施形態における本システム全体の動作処理の流れを示すフローチャートである。 Next, the processing procedure of the image processing apparatus 101 according to the present embodiment will be described with reference to FIG. FIG. 3 is a flowchart showing a flow of operation processing of the entire system in the present embodiment.
<ステップ301(S301)>
 操作者は、入力部103を操作することにより、断層画像撮影装置100によりOCT画像を撮影する際の撮影条件を設定する。
具体的には、
1) スキャンモードの選択、及び
2) スキャンモードに対応する撮影パラメータ設定
の手順からなる。本実施形態では、以下のように撮影条件を設定する。
1) Macula 3Dスキャンモードを選択
2) 以下の撮影パラメータを設定
2-1) 走査領域サイズ:12x12mm
2-2) 主走査方向:水平方向
2-3) 固視灯位置:黄斑撮影時の点灯位置
<Step 301 (S301)>
By operating the input unit 103, the operator sets imaging conditions when the OCT image is captured by the tomographic image capturing apparatus 100.
In particular,
The procedure consists of
1) selection of a scan mode, and
2) setting of imaging parameters corresponding to the scan mode.
In the present embodiment, the imaging conditions are set as follows.
1) Select the Macula 3D scan mode
2) Set the following imaging parameters
2-1) Scanning area size: 12x12mm
2-2) Main scanning direction: horizontal
2-3) Fixation lamp position: lighting position for macular imaging
<ステップ302(S302)>
 撮影条件の設定終了後、操作者は入力部103を操作して撮影画面中の撮影開始ボタン(非表示)を押下する。これにより、断層画像撮影装置100による、S301で指定した撮影条件でのOCT断層画像の撮影が開始される。具体的には、撮影制御部101-03は、断層画像撮影装置100に対して、S301で操作者が指示した設定に基づいてOCT撮影を実施することを指示する。これにより、断層画像撮影装置100は、対応するOCT断層画像を生成するための干渉信号を取得する。
<Step 302 (S302)>
After setting the imaging conditions, the operator operates the input unit 103 and presses the imaging start button (not shown) on the imaging screen. Accordingly, the tomographic image capturing apparatus 100 starts capturing an OCT tomographic image under the imaging conditions specified in S301. Specifically, the imaging control unit 101-03 instructs the tomographic imaging apparatus 100 to perform OCT imaging based on the settings instructed by the operator in S301. The tomographic imaging apparatus 100 thereby acquires an interference signal for generating the corresponding OCT tomographic image.
 また断層画像撮影装置100はSLO画像の取得も行い、SLO動画像に基づく追尾処理を実行する。なお、本実施形態では同一走査位置における繰り返し撮影回数を1回(繰り返さない)とする。しかし、同一走査位置における繰り返し撮影回数はこれに限られず、任意の回数に設定してよい。なお繰り返し撮影回数が2以上の場合、撮影中の追尾処理に用いる基準SLO画像は1回目のOCT撮影時に設定した基準SLO画像とし、全ての繰り返しOCT撮影において共通の基準SLO画像を用いる。また繰り返し撮影中は、S301で設定した撮影条件に加えて
1) 左右眼の選択、及び
2) 追尾処理の実行の有無
についても同じ設定値を用いる(変更しない)ものとする。しかし、追尾処理の条件はこれに限られず、OCT断層画像の撮影条件などに応じて適宜変更できる。
Further, the tomographic imaging apparatus 100 also acquires an SLO image and executes tracking processing based on the SLO moving image. In the present embodiment, the number of repeated acquisitions at the same scanning position is one (no repetition). However, the number of repeated acquisitions at the same scanning position is not limited to this and may be set to an arbitrary number. When the number of repeated acquisitions is two or more, the reference SLO image used for the tracking processing during imaging is the reference SLO image set at the time of the first OCT acquisition, and this common reference SLO image is used for all repeated OCT acquisitions. During repeated imaging, in addition to the imaging conditions set in S301, the same setting values are used (not changed) for 1) the selection of the left or right eye and 2) whether or not to execute the tracking processing. However, the conditions for the tracking processing are not limited to the above and can be changed as appropriate according to the imaging conditions of the OCT tomographic image and the like.
<ステップ303(S303)>
 次に、画像取得部101-01及び画像処理部101-04は、S302で取得された干渉信号を用いて断層画像を再構成する。まず断層画像生成部101-11は画像取得部101-01が取得した干渉信号に対して波数変換及び高速フーリエ変換(以下FFTと表記)、絶対値変換(振幅の取得)を行うことで断層画像を生成する。なお、本実施例では干渉信号から断層画像を生成することとしている。しかし、ここで述べる断層画像は、画像生成に至る際に得られる上述した各種変換により得られた種々のデータとして扱うことも可能である。よって、ここで述べる断層画像は、これら種々のデータを含む断層データとも把握できる。次に位置合わせ部101-42は各走査位置より得られた断層画像間での位置合わせを行う。
<Step 303 (S303)>
Next, the image acquisition unit 101-01 and the image processing unit 101-04 reconstruct a tomographic image using the interference signal acquired in S302. First, the tomographic image generation unit 101-11 generates a tomographic image by performing wavenumber conversion, fast Fourier transform (hereinafter referred to as FFT), and absolute value conversion (acquisition of the amplitude) on the interference signal acquired by the image acquisition unit 101-01. In the present embodiment, a tomographic image is generated from the interference signal. However, the tomographic image described here can also be handled as the various kinds of data obtained through the above-described conversions performed in the course of image generation. Therefore, the tomographic image described here can also be regarded as tomographic data including these various kinds of data. Next, the positioning unit 101-42 performs alignment between the tomographic images obtained at the respective scanning positions.
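The reconstruction chain described above (wavenumber conversion, FFT, absolute value conversion) can be illustrated with a minimal single-A-scan sketch. This is not the implementation of the tomographic image generation unit 101-11; the sampling grid, the linear-interpolation resampling, and all function names are assumptions for illustration, and a plain DFT stands in for the FFT.

```python
import cmath
import math

def reconstruct_ascan(wavelengths, spectrum):
    """Sketch of one A-scan: resample the interference spectrum from
    wavelength onto a uniform wavenumber grid (wavenumber conversion),
    then take the magnitude of a discrete Fourier transform (a real
    system would use an FFT for speed)."""
    n = len(spectrum)
    k = [2 * math.pi / w for w in wavelengths]          # wavenumber k = 2*pi/lambda
    # Sort samples by wavenumber, then linearly interpolate onto a uniform k grid.
    pairs = sorted(zip(k, spectrum))
    ks = [p[0] for p in pairs]
    vs = [p[1] for p in pairs]
    k_uniform = [ks[0] + i * (ks[-1] - ks[0]) / (n - 1) for i in range(n)]
    resampled = []
    j = 0
    for ku in k_uniform:
        while j < n - 2 and ks[j + 1] < ku:
            j += 1
        t = (ku - ks[j]) / (ks[j + 1] - ks[j])
        resampled.append(vs[j] * (1 - t) + vs[j + 1] * t)
    # Depth profile = |DFT| of the resampled spectrum (first half: positive depths).
    profile = []
    for m in range(n // 2):
        acc = sum(resampled[i] * cmath.exp(-2j * math.pi * m * i / n)
                  for i in range(n))
        profile.append(abs(acc))
    return profile
```

A reflector at a single depth then appears as a peak in the returned depth profile; a real spectrometer-based system would additionally apply background subtraction and spectral windowing before the transform.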
 なお、S301において同一走査位置における繰り返し撮影回数が2以上の場合は、位置合わせ部101-42が同一走査位置内で撮影した断層画像内の位置合わせも実施する。そして、位置合せ後のこれら断層画像を重ね合わせ、重ね合わせ断層画像を生成する。位置合わせ部101-42は、前述の走査位置間での断層画像間の位置合わせに加え、この操作も併せて実施する。 In the case where the number of times of repetitive imaging at the same scanning position is 2 or more in S301, the positioning unit 101-42 also performs alignment in a tomographic image captured within the same scanning position. Then, the tomographic images after the alignment are superimposed to generate a superimposed tomographic image. The positioning unit 101-42 performs this operation in addition to the above-described positioning between tomographic images between scanning positions.
 さらに、画像特徴取得部101-44は、単独もしくは重ね合わせ済の断層画像から網膜及び脈絡膜の層境界、及び篩状板部の前面・後面の境界(非図示)を取得する。本実施形態では、層境界として内境界膜1、神経線維層-神経節細胞層境界2、神経節細胞層-内網状層境界3、視細胞内節外節接合部4、網膜色素上皮5、ブルッフ膜6、及び脈絡膜-強膜境界7を取得する(図4A参照)。また検出したブルッフ膜6の端部(ブルッフ膜開口端部)を視神経乳頭部のDisc境界として特定する。本実施形態では、網膜及び脈絡膜の層境界、並びに篩状板部の前面・後面境界の取得法として可変形状モデルを用いるが、任意の公知のセグメンテーション手法を用いてよい。また取得する層境界は前述の例に限らない。例えば網膜の内網状層-内顆粒層境界、内顆粒層-外網状層境界、外網状層-外顆粒層境界、外境界膜、視細胞外節先端(COST)などを任意の公知のセグメンテーション法により取得してもよい。あるいは、脈絡膜の脈絡膜毛細血管板-Sattler層境界、Sattler層-Haller層境界を任意の公知のセグメンテーション法により取得する場合も本発明に含まれる。また、篩状板部の前面・後面境界は手動で設定してもよい。例えば、手動により、特定の層境界(例えば内境界膜1)の位置を所定量だけ動かすことにより層境界位置を設定できることとするとよい。なお、層境界及び篩状板の前面・後面境界の取得処理は本ステップでなく後述するS304の画像補正処理後に実施してもよい。 Further, the image feature acquisition unit 101-44 acquires, from the single or superimposed tomographic image, the layer boundaries of the retina and the choroid and the boundaries of the anterior and posterior surfaces of the lamina cribrosa (not shown). In the present embodiment, the inner limiting membrane 1, the nerve fiber layer-ganglion cell layer boundary 2, the ganglion cell layer-inner plexiform layer boundary 3, the photoreceptor inner/outer segment junction 4, the retinal pigment epithelium 5, the Bruch's membrane 6, and the choroid-sclera boundary 7 are acquired as layer boundaries (see FIG. 4A). In addition, the detected end of the Bruch's membrane 6 (the Bruch's membrane opening end) is specified as the Disc boundary of the optic papilla. In the present embodiment, a deformable model is used as the method for acquiring the layer boundaries of the retina and the choroid and the anterior/posterior boundaries of the lamina cribrosa, but any known segmentation method may be used. The layer boundaries to be acquired are not limited to the above examples. For example, the inner plexiform layer-inner nuclear layer boundary, the inner nuclear layer-outer plexiform layer boundary, the outer plexiform layer-outer nuclear layer boundary, the external limiting membrane, the cone outer segment tip (COST), and the like of the retina may be acquired by any known segmentation method. Alternatively, the present invention also includes the case where the choriocapillaris-Sattler layer boundary and the Sattler layer-Haller layer boundary of the choroid are acquired by any known segmentation method. The anterior/posterior boundaries of the lamina cribrosa may also be set manually. For example, the layer boundary position may be set by manually moving the position of a specific layer boundary (for example, the inner limiting membrane 1) by a predetermined amount. Note that the process of acquiring the layer boundaries and the anterior/posterior boundaries of the lamina cribrosa may be performed after the image correction processing of S304 described later, instead of in this step.
<ステップ304>
 次に、補正部101-43は、学習済モデルを用いて血管をはじめとする眼部内の物体下に生じる影領域の補正を行う。上述したように、本発明の説明における学習済モデルとは、任意の公知の機械学習アルゴリズムに対して事前に適切な教師データを用いてトレーニングすることで得られたモデルである。本実施形態では、この学習済モデルを用いて、影領域を有する画像あるいはこれを生成するデータから、影領域が現れなかった場合に得られる可能性の高い断層画像を生成する。ここで、機械学習アルゴリズムについて図4A~図4D、図5A~図5F及び図6を用いて説明をする。
<Step 304>
Next, the correction unit 101-43 corrects, using the learned model, shadow regions that occur below objects in the eye such as blood vessels. As described above, the learned model in the description of the present invention is a model obtained by training an arbitrary known machine learning algorithm in advance with appropriate teacher data. In the present embodiment, this learned model is used to generate, from an image having a shadow region or from the data used to generate such an image, a tomographic image that is highly likely to be obtained if the shadow region had not appeared. Here, the machine learning algorithm will be described with reference to FIGS. 4A to 4D, FIGS. 5A to 5F, and FIG. 6.
 教師データは、1つ以上の入力データと出力データとのペア群で構成される。具体的には、OCTによって取得された影領域を含む様々な部位・疾患の断層画像(図4A)と、対応する影領域補正済断層画像(図4D)とのペア群によって構成された教師データ(以下、第一の教師データ)が挙げられる。 The teacher data is composed of one or more pairs of input data and output data. Specifically, teacher data composed of pairs of tomographic images of various parts and diseases including a shadow area obtained by OCT (FIG. 4A) and corresponding shadow area corrected tomographic images (FIG. 4D) (Hereinafter, first teacher data).
 第一の教師データは、例えば影領域が生じている断層画像と例えば特許文献1に開示される画像処理技術を用いて影領域を補正した断層画像とのペア群により構成される。なお、この場合、ペアの一方は、画像処理技術を用いて補正した断層画像に対して、影領域補正が不十分と操作者が判断した画素の輝度値を所望の値に修正して取得してもよい。あるいは、血管のように後方に影を引き起こす原因となる物体を十分含むサイズの領域内で、かつ同一層領域内で計算した近傍領域の輝度統計値(例えば平均値や中央値)で置き換えることにより取得してもよい。なお図4A~図4D、図5A~図5F及び図6では黄斑部領域を含んだ断層画像のみを例示しているが、実際には断層画像内に視神経乳頭部の領域も含まれ、視神経乳頭部の血管下にも影領域が生じる。視神経乳頭部には網膜の大血管が集まる関係で太い影領域が多数生じ、特に視神経乳頭部の深部に存在する篩状板部の描出や特定、計測の妨げになりやすい。もちろん、黄斑部もしくは視神経乳頭部のみを撮影した断層画像を対象としてトレーニングさせてもよい。また、その他の教師データの例として図4Aに示す影領域補正前の断層画像403と、補正値データとのペア群によって構成されている教師データ(以下、第二の教師データ)等が挙げられる。この場合、補正値データとは、影領域補正のために断層画像403における各画素値から、影領域補正後の断層画像(影領域補正済断層画像407)の各画素値を得る際の演算に用いるデータを含む。 The first teacher data is composed of a group of pairs each consisting of a tomographic image in which a shadow region has occurred and a tomographic image in which the shadow region has been corrected using, for example, the image processing technique disclosed in Patent Literature 1. In this case, one of the pair may be obtained by taking the tomographic image corrected using the image processing technique and modifying, to desired values, the luminance values of pixels for which the operator has judged the shadow region correction to be insufficient. Alternatively, it may be obtained by replacing those luminance values with luminance statistics (for example, the average or median) of a neighboring region computed within a region large enough to contain an object, such as a blood vessel, that casts a shadow behind it, and within the same layer region. Although FIGS. 4A to 4D, FIGS. 5A to 5F, and FIG. 6 illustrate only tomographic images including the macular region, an actual tomographic image also includes the region of the optic papilla, and shadow regions also occur below the blood vessels in the optic papilla. Because the large retinal vessels converge at the optic papilla, many thick shadow regions occur there, and they tend to hinder the depiction, identification, and measurement of the lamina cribrosa, which lies particularly deep in the optic papilla. Of course, training may be performed on tomographic images obtained by imaging only the macula or only the optic papilla. As another example of teacher data, there is teacher data (hereinafter, second teacher data) composed of a group of pairs of the tomographic image 403 before shadow region correction shown in FIG. 4A and correction value data. In this case, the correction value data includes data used in the computation for obtaining each pixel value of the tomographic image after shadow region correction (the shadow-region-corrected tomographic image 407) from each pixel value of the tomographic image 403.
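As a deliberately simple illustration of the neighborhood-statistics replacement mentioned above, a shadowed A-scan can be detected from its attenuated integrated intensity and its pixels replaced with the median of non-shadowed pixels at the same depth within a horizontal window. This is only an assumed baseline for building teacher data, not the technique of Patent Literature 1; the thresholds, window size, and all names are hypothetical.

```python
def correct_shadows(image, depth_start=0, window=8, ratio=0.7):
    """image: 2D list [depth][a-scan] of luminance values.
    A column whose summed intensity from depth_start downward falls below
    `ratio` times the median column sum is treated as a shadowed A-scan;
    each of its pixels is replaced by the median of non-shadow pixels at
    the same depth within +/- `window` neighboring columns."""
    h, w = len(image), len(image[0])
    sums = [sum(image[z][x] for z in range(depth_start, h)) for x in range(w)]
    med = sorted(sums)[w // 2]
    shadow = [s < ratio * med for s in sums]
    out = [row[:] for row in image]          # leave the input untouched
    for x in range(w):
        if not shadow[x]:
            continue
        for z in range(depth_start, h):
            neigh = [image[z][u]
                     for u in range(max(0, x - window), min(w, x + window + 1))
                     if not shadow[u]]
            if neigh:                        # keep original if no clean neighbor
                out[z][x] = sorted(neigh)[len(neigh) // 2]
    return out
```

A real implementation would restrict the neighborhood to the same layer region, as the text specifies, using the layer boundaries acquired in S303.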
 次に、トレーニング時に用いる画像について説明する。第一の教師データを構成する、影領域補正前の断層画像403と影領域補正済断層画像407とのペア群を構成する画像群を、位置関係が対応する一定の画素サイズの矩形領域画像によって作成する。これに関して図5A~図5Fを用いて以下に説明する。図5A~図5Fは、入力データとしての影領域補正前の断層画像と、出力データとしての影領域補正済断層画像とを各々示す図である。 Next, the images used during training will be described. The image groups forming the pairs of the tomographic image 403 before shadow region correction and the shadow-region-corrected tomographic image 407, which constitute the first teacher data, are created as rectangular region images of a fixed pixel size whose positions correspond to each other. This will be described below with reference to FIGS. 5A to 5F. FIGS. 5A to 5F are diagrams respectively showing a tomographic image before shadow region correction as input data and a shadow-region-corrected tomographic image as output data.
 第一の教師データを構成するペア群の1つに、影領域を含む断層画像403と、影領域補正済断層画像407があるとした場合を考える。この場合、図5A及び図5Bに示すように、ペアを構成する入力データを断層画像501、出力データを断層画像501’とする。なお、図5A及び図5Bでは画像全体をペア画像としているが、ペア画像の構成はこれに限らない。例えば、図5Cに示すように、影領域を含む断層画像403のうちの矩形領域画像5021、5022を入力データとしてもよい。この場合、出力データは、図5Dに示すように、影領域補正済断層画像407における同じ撮影領域である矩形領域画像5021’、5022’となる。すなわち、入力データと出力データのペアを、これら矩形領域画像より構成してもよい。なお、この矩形領域は、Aスキャン単位を基本としている。Aスキャン単位とは、1本のAスキャンでもよいし、数本のAスキャン単位でもよい。あるいは図5Eに示すように、影領域を含む断層画像403のうちの矩形領域画像5031、5032を入力データとしてもよい。この場合、図5Fに示すように、影領域補正済断層画像407における同じ撮影領域である矩形領域画像5031’、5032’が出力データとされ、入力データとのペアを構成する。 Suppose that one of the pairs constituting the first teacher data consists of a tomographic image 403 including a shadow region and a shadow-region-corrected tomographic image 407. In this case, as shown in FIGS. 5A and 5B, the input data forming the pair is the tomographic image 501 and the output data is the tomographic image 501'. In FIGS. 5A and 5B the entire image forms the pair, but the configuration of the pair images is not limited to this. For example, as shown in FIG. 5C, rectangular region images 5021 and 5022 of the tomographic image 403 including the shadow region may be used as input data. In this case, as shown in FIG. 5D, the output data are the rectangular region images 5021' and 5022' in the same imaging regions of the shadow-region-corrected tomographic image 407. That is, pairs of input data and output data may be constituted by these rectangular region images. Note that these rectangular regions are based on A-scan units; an A-scan unit may be a single A-scan or several A-scans. Alternatively, as shown in FIG. 5E, rectangular region images 5031 and 5032 of the tomographic image 403 including the shadow region may be used as input data. In this case, as shown in FIG. 5F, the rectangular region images 5031' and 5032' in the same imaging regions of the shadow-region-corrected tomographic image 407 are used as output data and form pairs with the input data.
 なお、トレーニング時には、スキャン範囲(撮影画角)、スキャン密度(Aスキャン数)を正規化して画像サイズを揃えて、学習時の矩形領域サイズを一定に揃えることが望ましい。図5A~図5Fに示した矩形領域画像は各々別々にトレーニングをする際の矩形領域サイズの一例である。ここで、矩形領域の数は、図5A及び図5Bの場合では1つ、図5C~図5Fの場合では前述のように複数設定可能である。例えば図5Cにおける影領域を含む断層画像403上の矩形領域画像5022を入力データとし、図5Dにおける影領域補正済断層画像407上の同一位置における矩形領域画像5022’を出力データとする。このように、第一の矩形領域画像ペア5021、5021’とは別の矩形領域画像ペアを作成できる。なお、異なる座標に変えながら多数の矩形領域画像のペアを作成することで第一の教師データを構成するペア群が充実する。また、図5C~図5Fの画像の例では離散的に矩形領域を示しているが、実際には、画像を隙間なく連続する一定の画素サイズの矩形領域画像群に分割してペア群を生成するとよい。元となる断層画像及び影補正後の断層画像において、領域の位置を異なる座標に変えながら多数の矩形領域画像のペアを作成することで、教師データを構成するペア群を充実させることができる。また、矩形領域として、より小さな領域の画像を入力データ及び出力データのペアとして選択することで、もともとのペアを構成する断層画像及び影補正後の断層画像から多くのペアデータを生成できる。そのため、学習済モデルのトレーニングにかかる時間を短縮することができる。 At the time of training, it is desirable to normalize the scan range (imaging angle of view) and the scan density (number of A-scans) so that the image sizes are uniform and the rectangular region size used for learning is kept constant. The rectangular region images shown in FIGS. 5A to 5F are examples of rectangular region sizes used when training is performed separately on each. Here, the number of rectangular regions can be set to one in the cases of FIGS. 5A and 5B, and to a plurality, as described above, in the cases of FIGS. 5C to 5F. For example, the rectangular region image 5022 on the tomographic image 403 including the shadow region in FIG. 5C is used as input data, and the rectangular region image 5022' at the same position on the shadow-region-corrected tomographic image 407 in FIG. 5D is used as output data. In this way, a rectangular region image pair different from the first rectangular region image pair 5021, 5021' can be created. By creating many pairs of rectangular region images while shifting to different coordinates, the group of pairs constituting the first teacher data is enriched. Although the examples of FIGS. 5C to 5F show the rectangular regions discretely, in practice it is preferable to divide the image, without gaps, into a group of contiguous rectangular region images of a fixed pixel size to generate the pair group. In the original tomographic image and the tomographic image after shadow correction, by creating many pairs of rectangular region images while changing the position of the region to different coordinates, the group of pairs constituting the teacher data can be enriched. Further, by selecting images of smaller regions as the rectangular regions forming the input/output data pairs, many pairs of data can be generated from the tomographic image and the shadow-corrected tomographic image constituting the original pair. Therefore, the time required for training the learned model can be shortened.
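The gapless division of the before/after images into position-matched rectangular regions can be sketched as follows; the patch size and function name are illustrative assumptions, and a real pipeline would first normalize the scan range and scan density as described above.

```python
def make_patch_pairs(before, after, ph, pw):
    """Split two same-size 2D images (lists of rows) into a gapless grid
    of ph x pw rectangles and return (input_patch, output_patch) pairs
    taken at identical positions, as used to build the teacher data."""
    assert len(before) == len(after) and len(before[0]) == len(after[0])
    pairs = []
    for z in range(0, len(before) - ph + 1, ph):        # top edge of each patch
        for x in range(0, len(before[0]) - pw + 1, pw): # left edge of each patch
            crop = lambda img: [row[x:x + pw] for row in img[z:z + ph]]
            pairs.append((crop(before), crop(after)))
    return pairs
```

Because the positions are identical in both images, each pair relates a shadowed patch to its corrected counterpart, which is exactly the correspondence the training described above relies on.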
 次に、本実施形態に係る学習済モデルの一例として、入力された断層画像に対して、影領域補正処理を行う畳み込みニューラルネットワーク(以下CNNと表記)に関して図6を用いて説明する。図6は、補正部101-43における学習済モデルの構成の一例を示している。図6で示す構成は、入力値群を加工して出力する処理を担う、複数の層群によって構成される。なお、この構成に含まれる層の種類としては、図6に示すように、畳み込み(Convolution)層、ダウンサンプリング(Downsampling)層、アップサンプリング(Upsampling)層、合成(Merger)層がある。畳み込み層は、設定されたフィルタのカーネルサイズ、フィルタの数、ストライドの値、ダイレーションの値などのパラメータに従い、入力値群に対して畳み込み処理を行う層である。なお、入力される画像の次元数に応じて、前述のフィルタのカーネルサイズの次元数も変更してもよい。ダウンサンプリング層は、入力値群を間引いたり、合成したりすることによって、出力値群の数を入力値群の数よりも少なくする処理である。具体的には、例えば、Max Pooling処理がある。アップサンプリング層は、入力値群を複製したり、入力値群から補間した値を追加したりすることによって、出力値群の数を入力値群の数よりも多くする処理である。具体的には、例えば、線形補間処理がある。合成層は、ある層の出力値群や画像を構成する画素値群といった値群を、複数のソースから入力し、それらを連結したり、加算したりして合成する処理を行う層である。 Next, as an example of the learned model according to the present embodiment, a convolutional neural network (hereinafter referred to as CNN) that performs shadow region correction processing on an input tomographic image will be described with reference to FIG. 6. FIG. 6 shows an example of the configuration of the learned model in the correction unit 101-43. The configuration shown in FIG. 6 is composed of a plurality of layer groups responsible for processing an input value group and outputting the result. As shown in FIG. 6, the types of layers included in this configuration are a convolution layer, a downsampling layer, an upsampling layer, and a merger layer. The convolution layer is a layer that performs convolution processing on the input value group according to parameters such as the set kernel size of the filters, the number of filters, the stride value, and the dilation value. The number of dimensions of the filter kernel size may be changed according to the number of dimensions of the input image. The downsampling layer is a layer that makes the number of output values smaller than the number of input values by thinning out or combining the input values; a specific example is Max Pooling. The upsampling layer is a layer that makes the number of output values larger than the number of input values by duplicating the input values or adding values interpolated from them; a specific example is linear interpolation. The merger layer is a layer that receives value groups, such as the output value group of a certain layer or the pixel value group constituting an image, from a plurality of sources and combines them by concatenation or addition.
 なお、図6の構成に含まれる畳み込み層群に設定されるパラメータとして、例えば、フィルタのカーネルサイズを幅3画素、高さ3画素、フィルタの数を64とすることで、一定の精度の影領域補正処理が可能である。ただし、ニューラルネットワークを構成する層群やノード群に対するパラメータの設定が異なると、教師データからトレーニングされた傾向を出力データに再現可能な程度が異なる場合があるので注意が必要である。つまり、多くの場合、実施する際の形態に応じて適切なパラメータは異なるので、必要に応じて変更することが好ましい。また、上述したようなパラメータを変更するという方法だけでなく、CNNの構成を変更することによって、CNNがよりよい特性を得られる場合がある。よりよい特性とは、例えば、影領域の補正能が高かったり、影領域補正処理の時間が短かったり、学習済モデルを得る際のトレーニングにかかる時間が短かったりするなどである。 As parameters set for the convolution layer group included in the configuration of FIG. 6, for example, setting the filter kernel size to a width of 3 pixels and a height of 3 pixels and the number of filters to 64 enables shadow region correction processing with a certain level of accuracy. However, note that if the parameter settings for the layer groups and node groups constituting the neural network differ, the degree to which the tendency trained from the teacher data can be reproduced in the output data may also differ. In other words, in many cases the appropriate parameters differ depending on the mode of implementation, so it is preferable to change them as necessary. In addition to changing the parameters as described above, changing the configuration of the CNN may in some cases give the CNN better characteristics. Better characteristics include, for example, a higher shadow region correction capability, a shorter shadow region correction processing time, and a shorter training time for obtaining the learned model.
 なお、図示はしないが、CNNの構成の変更例として、例えば、畳み込み層の後にバッチ正規化(Batch Normalization)層を組み込むなどしてもよい。あるいは、正規化線形関数(Rectifier Linear Unit)を用いた活性化層を組み込むなどをしてもよい。 Although not shown, as a modification of the CNN configuration, for example, a batch normalization layer may be incorporated after the convolution layer, or an activation layer using a rectified linear unit (ReLU) may be incorporated.
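The four layer types named above can be illustrated with toy single-channel forward passes. This is a shape-level sketch of the building blocks only, not the trained model of FIG. 6; the kernel contents, the 2x2 pooling size, nearest-neighbour upsampling, and the choice of addition for the merger layer are all assumptions for illustration.

```python
def conv2d(img, kernel):
    """'Same'-padded 2D convolution (cross-correlation) with one kernel."""
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for z in range(h):
        for x in range(w):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    zz, xx = z + i - ph, x + j - pw
                    if 0 <= zz < h and 0 <= xx < w:   # zero padding at borders
                        acc += img[zz][xx] * kernel[i][j]
            out[z][x] = acc
    return out

def max_pool(img):
    """Downsampling layer: 2x2 max pooling."""
    return [[max(img[z][x], img[z][x + 1], img[z + 1][x], img[z + 1][x + 1])
             for x in range(0, len(img[0]) - 1, 2)]
            for z in range(0, len(img) - 1, 2)]

def upsample(img):
    """Upsampling layer: nearest-neighbour 2x (the text's example
    would use linear interpolation instead)."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(wide[:])
    return out

def merge(a, b):
    """Merger layer: element-wise addition of two same-size maps."""
    return [[va + vb for va, vb in zip(ra, rb)] for ra, rb in zip(a, b)]
```

Stacking such blocks with learned multi-channel filters, batch normalization, and ReLU activations yields a CNN of the kind described.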
 このような学習済モデルにデータを入力すると、学習済モデルの設計に従ったデータが出力される。例えば、教師データからトレーニングされた傾向に従って入力データに対応する可能性の高い出力データが出力される。また、例えば、教師データからトレーニングされた出力データの種類のそれぞれについて可能性が数値として出力される、などである。具体的には、例えば、第一の教師データによってトレーニングされた学習済モデルにOCTによって取得された断層画像601を入力すると、影領域補正済断層画像602が出力される。 When data is input to such a trained model, data according to the design of the trained model is output. For example, output data having a high possibility of corresponding to the input data is output according to the tendency trained from the teacher data. Further, for example, the possibility is output as a numerical value for each of the types of output data trained from the teacher data. Specifically, for example, when a tomographic image 601 obtained by OCT is input to a trained model trained by the first teacher data, a shadow region corrected tomographic image 602 is output.
 また、例えば第二の教師データによってトレーニングされた学習済モデルは、OCTの断層画像601を入力すると、影領域を補正する際に適用する補正値群が出力される。なお、教師データを構成するペア群の入力データと出力データの形式や組み合わせは、一方が画像で他方が数値であったり、一方が複数の画像群で構成され他方が数値であったり、双方が画像であったりするなど、実施形態に適した組み合わせで実施される。なお、図5C~図5Fのように領域を分割して学習をしている場合、学習済モデルは、各矩形領域において影領域補正済の断層画像の輝度値もしくは影領域補正用の補正値を出力する。学習済モデルが補正値を出力する場合、補正部101-43は入力された断層画像601の輝度値に対して補正値を演算することで影領域補正済の輝度値を出力する。補正部101-43は、影領域補正済の矩形領域画像群の各々を、入力された矩形領域画像群と同様の位置関係に配置して結合し、影領域を補正した影領域補正済断層画像602を得る。 Further, for example, when the OCT tomographic image 601 is input to the learned model trained with the second teacher data, a group of correction values to be applied when correcting the shadow region is output. The formats and combinations of the input data and output data of the pairs constituting the teacher data may be any combination suitable for the embodiment: one may be an image and the other a numerical value, one may consist of a plurality of images and the other a numerical value, or both may be images. When learning is performed with the regions divided as in FIGS. 5C to 5F, the learned model outputs, for each rectangular region, the luminance values of the shadow-region-corrected tomographic image or the correction values for shadow region correction. When the learned model outputs correction values, the correction unit 101-43 outputs shadow-region-corrected luminance values by applying the correction values to the luminance values of the input tomographic image 601. The correction unit 101-43 arranges and combines the shadow-region-corrected rectangular region images in the same positional relationship as the input rectangular region images, thereby obtaining the shadow-region-corrected tomographic image 602 in which the shadow regions have been corrected.
 なお、本実施形態では影領域を補正した断層画像に対し、S303で取得した層境界及び篩状板の前面・後面境界位置を初期位置として、層境界及び篩状板の前面・後面境界のセグメンテーションを再度実施するものとする。これにより、影領域の影響を受けにくい、正確な層境界及び篩状板の前面・後面境界を取得できる。さらに、投影部101-45は画像特徴取得部101-44が取得した層境界及び篩状板前面・後面境界の位置を用いて影領域補正済重ね合わせ断層画像を投影し、影領域補正済の重ね合わせ正面断層画像を生成する。なお投影法としては任意の公知の投影が可能であり、本実施形態では平均値投影を行うものとする。しかし、投影法はこれに限られず、公知の種々の投影法が適用可能である。 In the present embodiment, for the tomographic image in which the shadow regions have been corrected, the segmentation of the layer boundaries and the anterior/posterior boundaries of the lamina cribrosa is performed again, using the layer boundaries and the anterior/posterior boundary positions of the lamina cribrosa acquired in S303 as initial positions. This makes it possible to acquire accurate layer boundaries and anterior/posterior boundaries of the lamina cribrosa that are less affected by the shadow regions. Further, the projection unit 101-45 projects the shadow-region-corrected superimposed tomographic image using the positions of the layer boundaries and the anterior/posterior boundaries of the lamina cribrosa acquired by the image feature acquisition unit 101-44, and generates a shadow-region-corrected superimposed front tomographic image. Any known projection method can be used; in the present embodiment, average intensity projection is performed. However, the projection method is not limited to this, and various other known projection methods are applicable.
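The average intensity projection that the projection unit 101-45 applies between two acquired boundary surfaces can be sketched as follows; the volume layout and all names are illustrative assumptions.

```python
def enface_average(volume, top, bottom):
    """volume: 3D list [y][x][z] of luminance values.
    top/bottom: per-(y, x) boundary depths (e.g. two layer boundaries
    from segmentation), with top[y][x] < bottom[y][x].
    Returns the front (en-face) image obtained by averaging each
    A-scan over the depth range [top, bottom)."""
    out = []
    for y in range(len(volume)):
        row = []
        for x in range(len(volume[0])):
            z0, z1 = top[y][x], bottom[y][x]
            vals = volume[y][x][z0:z1]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out
```

Replacing the average with `max(vals)` would give a maximum intensity projection, one of the other known projection methods mentioned above.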
 最後に、画像処理装置101は、取得した画像群(SLO画像や断層画像)、該画像群の撮影条件データ、及びS304で得られたデータを、検査日時及び被検眼を同定する情報を関連付けて、外部記憶部102に保存する。S304で得られたデータには、影領域補正済の断層画像もしくは補正値、層境界及び篩状板前面・後面境界データなどが含まれる。 Finally, the image processing apparatus 101 stores the acquired image group (SLO images and tomographic images), the imaging condition data of the image group, and the data obtained in S304 in the external storage unit 102, in association with the examination date and time and information identifying the eye to be examined. The data obtained in S304 includes the shadow-region-corrected tomographic image or the correction values, the layer boundaries, and the anterior/posterior boundary data of the lamina cribrosa.
<ステップ305(S305)>
 表示制御部101-05は、S304で生成した影領域補正済の重ね合わせ断層画像(3D画像/Bスキャン画像/正面画像)や撮像条件に関する情報を表示部104に表示させる。図7に、表示部104に表示させるレポート画面700の例を示す。
<Step 305 (S305)>
The display control unit 101-05 causes the display unit 104 to display the shadow-region-corrected superimposed tomographic images (3D image / B-scan image / front image) generated in S304 and information on the imaging conditions. FIG. 7 shows an example of a report screen 700 displayed on the display unit 104.
The display control unit 101-05 displays the SLO image 702 at the upper left of the report screen 700 and the shadow-corrected superimposed tomographic image (B-scan image 704) at the upper right. On the SLO image 702, the display control unit 101-05 superimposes an arrow (gray) indicating the scanning position of the measurement light at which the shadow-corrected B-scan image 704 (tomographic image) was obtained. On the B-scan image 704, the layer boundaries acquired in S303 and S304 (the inner limiting membrane 1, the retinal pigment epithelium 5, and the like) and the boundaries of the lamina cribrosa region are superimposed. At the lower left of the report screen 700, a shadow-corrected superimposed front tomographic image 703 is displayed. Since B-scan and front tomographic images with suppressed shadow regions are displayed, the front tomographic image can be observed at the intrinsic luminance of the eye being examined.
<Step 306 (S306)>
After observing the front tomographic image and other images on the report screen 700, the operator instructs, via the input unit 103, whether to perform a new image observation or to end. When an end instruction is given, the operation processing ends. When an instruction to continue image observation is input, the flow returns to S302, and imaging of a new OCT tomographic image is started.
As shown in FIG. 7, a user interface 705 for switching whether the shadow region correction processing is applied may be displayed. Furthermore, a character string or mark indicating the application state of the shadow region correction processing for the tomographic images displayed on the display unit 104 (B-scan / front / 3D images), or the value of the correction amount, may be displayed; these may be displayed together with the user interface 705. An instruction on whether to apply the shadow region correction processing is input via the user interface 705, and based on this instruction, the application of the shadow region correction processing to the tomographic images displayed on the display unit 104 (the B-scan image 704, the front tomographic image 703, and a 3D image (not shown)) is switched. In addition, a user interface (such as a slider) for adjusting the luminance correction amount in the shadow region may be displayed on the display unit 104 so that the user can manually adjust the correction amount in the shadow region.
In the present embodiment, the correction unit 101-43 uses the learned model to reduce, two-dimensionally or three-dimensionally, the shadow regions generated below objects such as blood vessels in the tomographic image. However, the present invention is not limited to the correction of blood vessel shadows. For example, the learned model may be used to reduce shadow regions generated below exudates (white spots), opacities of the intermediate translucent media, hemorrhages, and the like. That is, the learned model is trained using, as teacher data, pairs of an uncorrected tomographic image containing shadow regions due to exudates, vitreous opacity, or hemorrhage and the corresponding tomographic image in which those shadow regions have been corrected. The present invention also includes the case where a tomographic image containing such shadow regions is input to the learned model obtained by this training, thereby robustly reducing the shadow regions in the tomographic image.
In the present embodiment, in S304, the shadow regions generated below objects in the eye, such as blood vessels, in the tomographic image are corrected using a single learned model trained with teacher data covering various sites and diseases. However, the present invention is not limited to this. For example, since the density and thickness of the blood vessels present differ depending on the site in the fundus, shadow reduction may be insufficient if a single learned model is made to handle all sites. In this case, the correction unit 101-43 may be provided with a plurality of learned models, each trained with teacher data consisting of pairs of uncorrected and shadow-corrected tomographic images of a specific site only. Similarly, training may be performed with teacher data consisting of pairs of uncorrected and shadow-corrected tomographic images of a specific disease only. The present invention also includes the case where shadow regions generated below objects in the eye, such as blood vessels, in the tomographic image are corrected based on the outputs of such a plurality of learned models.
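The per-site dispatch described above can be sketched as follows. This is a minimal illustration, not the apparatus's actual implementation: the registry keys, the function names, and the stand-in identity "models" are all assumptions; in practice each entry would be a separately trained network.

```python
import numpy as np

# Hypothetical stand-in for a learned model: maps an uncorrected B-scan
# to a shadow-corrected one. Real entries would be trained networks.
def _identity_model(bscan: np.ndarray) -> np.ndarray:
    return bscan.copy()

# Illustrative registry of per-site learned models (keys are assumptions).
MODELS_BY_SITE = {
    "macula": _identity_model,
    "optic_disc": _identity_model,
}

def correct_shadow(bscan: np.ndarray, site: str) -> np.ndarray:
    """Dispatch to the model trained for the imaged site, falling back
    to a default model for sites without a dedicated entry."""
    model = MODELS_BY_SITE.get(site, _identity_model)
    return model(bscan)
```

A disease-specific registry would follow the same shape, keyed by diagnosis instead of site.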
As described above, the image processing apparatus 101 according to the present embodiment includes an image acquisition unit and a correction unit. The image acquisition unit (image acquisition unit 101-1) acquires a tomographic image of the eye 200 to be examined. The correction unit (correction unit 101-43) uses a learned model obtained by training to correct the pixel values of a shadow region (402) that is generated in the tomographic image of the eye 200 by an object contained in the eye 200. The learned model described here is obtained by learning using pairs of a tomographic image containing a shadow region, acquired from the eye 200, and a tomographic image obtained by applying image processing that corrects the pixel values of that shadow region. The present invention can also be understood as an image processing method including an image acquisition step and a correction step executed by the above-described units or by the image processing apparatus. The image processing apparatus according to the present embodiment may also take the form of an apparatus including the above-described image acquisition unit and a generation unit. In this case, the generation unit uses the above-described learned model to generate a tomographic image in which the pixel values of the shadow region (402) generated in the tomographic image of the eye 200 by an object contained in the eye 200 have been corrected.
The structures that give rise to shadow regions differ along the depth direction of the eye 200. For example, within a tomographic image, the form of the PA differs depending on whether attention is paid to the retina, the choroid, or the lamina cribrosa. It is therefore preferable to perform image correction of the shadow regions in accordance with these sites. In this case, the correction unit 101-43 preferably uses, according to the site, a plurality of learned models obtained by training based on pairs prepared for each of the retina, the choroid, and the lamina cribrosa in tomographic images. Furthermore, when such learned models are not used, the pixel values of the shadow regions may be corrected by a known image processing method, in a manner (processing method) appropriate to the site.
According to the configuration described above, the image processing apparatus 101 trains a learned model, by deep learning, on pairs of tomographic images of various sites and diseases and the tomographic images obtained by applying the shadow region correction processing to them. By inputting a tomographic image to the trained learned model, the shadow regions in the tomographic image are robustly reduced. This makes it possible to correct shadow regions in tomographic images regardless of disease or site.
[Second Embodiment]
The image processing apparatus according to the present embodiment generates a motion contrast image using tomographic images obtained by applying the shadow region correction processing described in the first embodiment to tomographic images scanned a plurality of times at the same position. This embodiment thus describes a mode in which the PA is reduced in the motion contrast image.
FIG. 8 shows the configuration of the image processing apparatus 801 according to the present embodiment. Components having substantially the same functions as those of the image processing apparatus 101 of the first embodiment are denoted by the same reference numerals, and their description is omitted here. The image processing apparatus 801 according to the present embodiment differs from the first embodiment in that the image acquisition unit 101-01 includes a motion contrast data generation unit 101-12 and the image processing unit includes a combining unit 101-46.
Next, the processing procedure of the image processing apparatus 801 of the present embodiment will be described with reference to FIG. 9. FIG. 9 is a flowchart showing the flow of the operation processing of the entire system in the present embodiment. In the operation processing flow of the present embodiment, the steps other than S901, S902, S905, and S906 in FIG. 9 are the same as the processing performed in the corresponding steps of the first embodiment, and their description is therefore omitted.
<Step 901 (S901)>
By operating the input unit 103, the operator sets the imaging conditions under which the tomographic imaging apparatus 100 captures OCTA images. In the present embodiment, the imaging conditions are set as follows, and in S902 OCTA imaging (under the same imaging conditions) is repeated a predetermined number of times, with breaks inserted as appropriate.
1) Select the OCTA scan mode
2) Set the following imaging parameters
2-1) Scan pattern: Small Square
2-2) Scan area size: 3 x 3 mm
2-3) Main scanning direction: horizontal
2-4) Scan interval: 0.01 mm
2-5) Fixation light position: macula, or the lighting position for optic disc imaging
2-6) Number of B-scans per cluster: 4
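The parameter list above can be summarized as a configuration sketch. The key names below are illustrative only, not an actual API of the tomographic imaging apparatus; the derived B-scan position count simply follows from the stated area and interval.

```python
# Illustrative configuration for the S901 OCTA imaging parameters
# (key names are assumptions, values are taken from the text).
octa_scan_config = {
    "scan_mode": "OCTA",
    "scan_pattern": "Small Square",
    "scan_area_mm": (3.0, 3.0),           # 3 x 3 mm
    "main_scan_direction": "horizontal",
    "scan_interval_mm": 0.01,
    "fixation_target": "macula",          # or the optic disc position
    "bscans_per_cluster": 4,
    "cluster_repeats": 5,                 # repeated acquisitions in S902
}

# B-scan positions along the slow axis implied by area and interval.
n_positions = int(round(octa_scan_config["scan_area_mm"][1]
                        / octa_scan_config["scan_interval_mm"]))
```

With a 3 mm slow axis and a 0.01 mm interval this gives 300 B-scan positions, each scanned 4 times per cluster.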
<Step 902 (S902)>
After setting the imaging conditions, the operator operates the input unit 103 and presses an imaging start button (not shown) on the imaging screen. This starts repeated OCTA imaging by the tomographic imaging apparatus 100 under the imaging conditions specified in S901. Specifically, the imaging control unit 101-03 instructs the tomographic imaging apparatus 100 to repeatedly perform OCTA imaging based on the settings specified by the operator in S901, and the tomographic imaging apparatus 100 thereby acquires the corresponding OCT tomographic images.
In the present embodiment, the number of repeated acquisitions (the number of clusters) in this step is five. However, the number of repeated acquisitions (the number of clusters) is not limited to this, and may be set to an arbitrary number.
The tomographic imaging apparatus 100 also acquires SLO images and executes tracking processing based on the SLO moving image. When the number of clusters is two or more, the reference SLO image used for the tracking processing in the repeated OCTA imaging is the reference SLO image set at the time of the first cluster acquisition, and this common reference SLO image is used for all cluster acquisitions. During the second and subsequent cluster acquisitions, in addition to the imaging conditions set in S901, the same setting values are used (unchanged) for:
1) selection of the left or right eye
2) whether the tracking processing is executed
However, the conditions for the tracking processing are not limited to these, and can be changed as appropriate according to the imaging conditions of the OCT tomographic images.
<Step 905 (S905)>
Next, the image acquisition unit 101-01 and the image processing unit 101-04 generate a motion contrast image using the OCT tomographic images to which the shadow region correction and alignment processing of S904 have been applied. In S905, the motion contrast data generation unit 101-12 calculates the motion contrast between adjacent tomographic images within the same cluster.
In the present embodiment, a decorrelation value Mxy is obtained as the motion contrast based on the following equation (1):

  Mxy = 1 - (2 x Axy x Bxy) / (Axy^2 + Bxy^2)   (1)

Here, Axy denotes the amplitude (of the complex data after FFT processing) at position (x, y) of tomographic image data A, and Bxy denotes the amplitude at the same position (x, y) of tomographic image data B. The tomographic image data A and B are tomographic image data within the same cluster, obtained, for example, consecutively in time. 0 <= Mxy <= 1, and the larger the difference between the two amplitude values, the closer Mxy is to 1. The decorrelation calculation of equation (1) is performed between every pair of adjacent tomographic images (belonging to the same cluster). Then, an image having as its pixel values the average of the (number of tomographic images per cluster - 1) motion contrast values obtained in this way is generated as the final motion contrast image.
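The per-cluster calculation above can be sketched directly. This is a minimal illustration: the function names are assumptions, and the small epsilon guarding against division by zero in empty image regions is an addition not present in equation (1).

```python
import numpy as np

def decorrelation(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pixel-wise decorrelation of equation (1) between two amplitude
    images A and B from the same cluster. Values lie in [0, 1]."""
    eps = 1e-12  # guard against division by zero (assumption, not in eq. (1))
    return 1.0 - (2.0 * a * b) / (a * a + b * b + eps)

def cluster_motion_contrast(cluster: np.ndarray) -> np.ndarray:
    """Average the decorrelation over all adjacent B-scan pairs of a
    cluster of shape (n_bscans, depth, width). With n B-scans this
    averages n - 1 decorrelation maps, as described for S905."""
    pairs = [decorrelation(cluster[i], cluster[i + 1])
             for i in range(cluster.shape[0] - 1)]
    return np.mean(pairs, axis=0)
```

Identical amplitudes give a decorrelation near 0 (static tissue), while strongly differing amplitudes approach 1 (flow).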
In the present embodiment, the motion contrast is calculated based on the amplitude of the complex data after FFT processing, but the method of calculating the motion contrast is not limited to this. For example, the motion contrast may be calculated based on the phase information of the complex data, or based on both the amplitude and the phase information. Alternatively, the motion contrast may be calculated based on the real or imaginary part of the complex data.
In the present embodiment, a decorrelation value is calculated as the motion contrast, but the calculation method is not limited to this. For example, the motion contrast may be calculated based on the difference between the two values, or on the ratio of the two values. Furthermore, in the present embodiment the final motion contrast image is obtained by taking the average of the acquired decorrelation values, but the present invention is not limited to this. For example, an image having as its pixel values the median, or the maximum, of the acquired decorrelation values may be generated as the final motion contrast image.
The image processing unit 101-04 aligns the group of motion contrast images obtained through the repeated OCTA imaging three-dimensionally and averages them, thereby generating a high-contrast combined motion contrast image. The combining processing is not limited to simple averaging. For example, the luminance values of each motion contrast image may be given arbitrary weights before averaging, or an arbitrary statistic such as the median may be calculated. The case where the alignment processing is performed two-dimensionally is also included in the present invention.
The combining unit 101-46 may also be configured to determine whether any motion contrast image is unsuitable for the combining processing, and to perform the combining processing with such images excluded. For example, a motion contrast image may be judged unsuitable for the combining processing when its evaluation value (for example, the mean or median of its decorrelation values) is outside a predetermined range.
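The exclusion-then-average behavior described for the combining unit can be sketched as follows. The evaluation statistic (mean decorrelation) and the range bounds are illustrative assumptions; the text only says the evaluation value must fall within a predetermined range.

```python
import numpy as np

def combine_motion_contrast(volumes, eval_range=(0.01, 0.6)):
    """Average pre-aligned motion contrast volumes, excluding any whose
    mean decorrelation lies outside a predetermined range (bounds here
    are an assumed example, not values from the original text)."""
    kept = [v for v in volumes
            if eval_range[0] <= float(np.mean(v)) <= eval_range[1]]
    if not kept:
        raise ValueError("no motion contrast volume passed the check")
    return np.mean(kept, axis=0)
```

Weighted averaging or a median across the kept volumes would be drop-in alternatives to the plain mean.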
In the present embodiment, the motion contrast is calculated using the shadow-corrected tomographic images. As a result, the amplitudes (of the complex data after FFT processing) at positions (x, y) below a blood vessel region become similar between adjacent tomographic images within the same cluster, and the motion contrast there is consequently suppressed. FIG. 10A shows a schematic diagram of an ordinary motion contrast image, and FIG. 10B shows a schematic diagram of a motion contrast image generated using shadow-corrected tomographic images. Ordinarily, as shown in FIG. 10A, a region 405 corresponding to the PA appears below the region 404 corresponding to a blood vessel in the motion contrast image. In contrast, as shown in FIG. 10B, in the present embodiment the PA below the region 404 is suppressed in the finally generated motion contrast image.
The projection unit 101-45 projects the motion contrast image based on the layer boundaries and the positions of the anterior and posterior surfaces of the lamina cribrosa acquired by the image feature acquisition unit 101-44, and superimposes the results to generate a front motion contrast image. As the projection method, either maximum intensity projection (MIP) or average intensity projection (AIP) can be selected. In the present embodiment, the motion contrast image is projected by maximum intensity projection.
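The depth-range projection between two boundary surfaces can be sketched as below. The function signature is an illustrative assumption; the boundary arrays stand in for the per-A-scan layer positions provided by the image feature acquisition unit.

```python
import numpy as np

def project_en_face(volume, top, bottom, method="MIP"):
    """Project a volume of shape (depth, height, width) onto an en-face
    image between boundary surfaces `top` and `bottom`, each of shape
    (height, width) holding per-A-scan depth indices. Supports the MIP
    and AIP methods named in the text."""
    depth, h, w = volume.shape
    out = np.zeros((h, w), dtype=volume.dtype)
    for y in range(h):
        for x in range(w):
            z0, z1 = int(top[y, x]), int(bottom[y, x])
            column = volume[z0:z1, y, x]
            if column.size == 0:
                continue  # degenerate range between the two boundaries
            out[y, x] = column.max() if method == "MIP" else column.mean()
    return out
```

The same routine applies to both the intensity volume (where AIP is used in this embodiment) and the motion contrast volume (where MIP is used).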
Finally, the image processing apparatus 101 stores the acquired image group (SLO images and tomographic images), the imaging condition data of the image group, and the data obtained in S905 in the external storage unit 102, in association with the examination date and time and with information identifying the eye to be examined. The data obtained in S905 includes the generated three-dimensional and front motion contrast images, together with their associated generation condition data.
<Step 906 (S906)>
The display control unit 101-05 causes the display unit 104 to display the tomographic images generated and corrected in S903 and S904, the three-dimensional and front motion contrast images combined in S905, and information on the imaging and combining conditions. FIG. 10C shows an example of the report screen 1000 displayed on the display unit 104.
In the present embodiment, the SLO image, the shadow-corrected tomographic image, the front motion contrast images of different depth ranges generated by the combining and projection in S905, and the corresponding front OCT images are displayed. On the report screen 1000 shown in FIG. 10C, front motion contrast images 1001 and 1005 are displayed, generated with the superficial retina as the projection depth range in the upper row and the deep retina in the lower row. In the front motion contrast image 1005 of the deep retina shown in the lower row, the PA is suppressed. The motion contrast images displayed on the display unit 104 are not limited to front motion contrast images; for example, a three-dimensional (PA-suppressed) motion contrast image may be displayed.
On the report screen 1000, the projection range of a front motion contrast image can be changed by the operator selecting from the predefined depth range sets 1002 and 1006 displayed in list boxes. The type of layer boundary and the offset position used to specify the projection range can be changed using the user interfaces 1003 and 1007. Furthermore, the projection range of the motion contrast image can be changed by moving the layer boundary data 1004 and 1008 superimposed on the tomographic image through operation of the input unit 103. The image processing apparatus 801 may also be configured so that the motion contrast combining processing performed in S905 is executed when the user presses the button 1009 shown in FIG. 10C.
Furthermore, by using the user interface 705 that specifies whether the shadow region correction processing is applied to the tomographic images, B-scan tomographic images and front tomographic images with the application of the shadow region correction processing changed may be displayed. In this case, in conjunction with the operation of the user interface 705, the presence or absence of the PA suppression processing in the motion contrast image superimposed on the B-scan tomographic image and in the front motion contrast image is also changed. In FIG. 10C, the user interface 705 for specifying whether the shadow region correction processing is applied to the tomographic images is displayed; however, the display mode of the user interface for this instruction is not limited to this. For example, the present invention also includes the case where a user interface for specifying whether the PA suppression processing is applied is displayed on the display unit 104, and the image processing performed by the image processing unit 101-04 follows the input to this display. In response to an input to the user interface specifying whether the PA suppression processing is applied, the PA suppression state of the motion contrast images displayed on the display unit 104 is changed. The shadow regions in the (B-scan or front) tomographic images displayed on the display unit 104 may be corrected in conjunction with this input, and the case where they are not corrected in conjunction with it is also included in the present invention.
[Modification of the Second Embodiment]
In the first embodiment, in S904, the shadow regions generated below objects in the eye, such as blood vessels, in the tomographic image were corrected using the learned model. However, the image correction of the shadow regions is not limited to image correction using the learned model described there. For example, the present invention also includes the case where the PA is suppressed by generating a motion contrast image from tomographic images in which the shadow regions have been corrected based on any known image processing method, such as that disclosed in Patent Document 1. An example of such image processing is described below.
In this case, in the acquired tomographic image, points on, for example, the inner limiting membrane 1 and the retinal pigment epithelium layer (5) shown in FIG. 4A are detected as candidate points. Next, based on the connectivity of the detected candidate point sequence, it is determined for the vicinity of each candidate point whether a shadow region 402 has occurred (whether the image there represents a shadow region). When a shadow region is determined to be present, statistics of the luminance values within that shadow region are calculated, and the luminance values (pixel values) in the shadow region are then corrected based on the calculated statistics. Such image processing also yields a tomographic image in which the pixel values of the shadow region 402 have been corrected.
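The statistics-based correction step above can be sketched as follows. This is a minimal illustration under stated assumptions: the original text only says that a statistic of the shadow region's luminance is used, so the choice of the mean and the gain-style rescaling here are assumptions, and the masks are taken as given by the preceding detection step.

```python
import numpy as np

def correct_shadow_columns(bscan, shadow_mask, reference_mask):
    """Rescale the luminance inside a detected shadow region so that its
    mean matches that of the surrounding non-shadowed tissue. Both masks
    are boolean arrays of the B-scan's shape, assumed to come from the
    candidate-point / connectivity detection described in the text."""
    out = bscan.astype(float).copy()
    shadow_mean = out[shadow_mask].mean()
    reference_mean = out[reference_mask].mean()
    if shadow_mean > 0:
        out[shadow_mask] *= reference_mean / shadow_mean
    return out
```

Other statistics (median, percentiles) or an additive offset would fit the same framework.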
The shadow region 402 usually occurs behind a specific object contained in the eye to be examined, such as a blood vessel, in the optical axis direction of the measurement light. Therefore, as preprocessing, the main blood vessels constituting this specific object may be identified, and the region subjected to the image processing may be limited to their surroundings. Adopting such an approach can be expected to increase the processing speed of the image processing for one tomographic image. In this case, the above-described configuration for identifying the shadow regions may serve as an identification unit included in the correction unit 101-43. Such identification of blood vessels or shadow regions is also effective for the pixel value correction using the learned model described above. For example, instead of teacher data consisting of pairs of whole tomographic images, a case is also conceivable in which a tomographic image is divided into a plurality of smaller rectangular regions and pairs with the corresponding image-processed rectangular regions are used as teacher data. When the above-described identification unit is provided, blood vessels identified as possibly generating shadow regions, or rectangular regions containing structures identified as shadow regions, can be specified. Accordingly, by performing the image processing on the rectangular regions containing these identified blood vessels or shadow regions as partial regions within the tomographic image, the processing time required for the image processing of one tomographic image can also be expected to be shortened.
As described above, the image processing apparatus 801 according to the present embodiment includes a motion contrast image generation unit in addition to the image acquisition unit and correction unit described earlier. The motion contrast image generation unit generates a motion contrast image using a plurality of tomographic images whose pixel values have been corrected by the correction unit. In the present embodiment, the image acquisition unit (image acquisition unit 101-1) acquires a cluster including a plurality of tomographic images obtained by scanning the same position of the subject's eye 200 a plurality of times. Although the tomographic images included in a cluster should ideally be acquired from the same position (on the scanning line) of the subject's eye, in practice they are not acquired from exactly the same position because of involuntary eye movements such as fixational tremor. For this reason, in this specification, the plurality of tomographic images in a cluster are defined as a plurality of tomographic images acquired with the intention of scanning the same position a plurality of times. The plurality of tomographic images in a cluster can also be described as a plurality of tomographic images obtained by imaging the same portion of the subject's eye; these may be tomographic images obtained by controlling the measurement light so that it scans the same portion of the subject's eye. The motion contrast image generation unit (motion contrast data generation unit 101-12, image processing unit 101-04) generates a motion contrast image from the cluster or the plurality of tomographic images. In doing so, the correction unit (correction unit 101-43) corrects, in the plurality of tomographic images, the pixel values of shadow regions caused by objects contained in the subject's eye 200. The motion contrast image generation unit then generates the motion contrast image using the tomographic images after this pixel value correction. Although a blood vessel is used as an example of an object contained in the subject's eye in the present embodiment, lesions such as exudates, opacities of the ocular media, and hemorrhages are also included.
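As a hedged sketch of how a motion contrast image can be computed from such a cluster: a decorrelation measure between registered frame pairs, D = 1 − 2ab/(a² + b²), averaged over adjacent pairs, is one commonly used formulation. The patent does not specify this exact formula, so treat the choice of measure and all names below as assumptions:

```python
# Illustrative sketch: motion contrast from a cluster of registered B-scans
# acquired at the (intended) same position. Uses the decorrelation measure
# D = 1 - 2*a*b / (a^2 + b^2), averaged over adjacent frame pairs.

def motion_contrast(cluster):
    """cluster: list of registered B-scans, each a 2-D list of intensities."""
    pairs = list(zip(cluster, cluster[1:]))
    rows, cols = len(cluster[0]), len(cluster[0][0])
    mc = [[0.0] * cols for _ in range(rows)]
    for a, b in pairs:
        for y in range(rows):
            for x in range(cols):
                pa, pb = a[y][x], b[y][x]
                denom = pa * pa + pb * pb
                # static tissue -> pa == pb -> D == 0; flow -> D approaches 1
                d = 1.0 - (2.0 * pa * pb / denom) if denom > 0 else 0.0
                mc[y][x] += d / len(pairs)
    return mc
```

A pixel whose intensity is identical across frames yields zero motion contrast, which is why residual shadow regions in the input tomograms (corrected by the preceding step) would otherwise suppress or distort the vascular signal.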
Note that the image processing apparatus 801 according to the present embodiment can also be constructed as an embodiment including an image acquisition unit, a motion contrast generation unit, and a correction unit. In this case, the image acquisition unit (image acquisition unit 101-1) acquires a cluster including a plurality of tomographic images acquired with the intention of scanning the same position of the subject's eye 200 a plurality of times. The motion contrast generation unit (motion contrast data generation unit 101-12, image processing unit 101-04) can generate a motion contrast image from this cluster. As described above, the correction unit (correction unit 101-43) corrects the pixel values of shadow regions in the tomographic images, i.e., shadow regions caused by objects contained in the subject's eye 200. The motion contrast image generation unit then generates a motion contrast image from the cluster including the tomographic images whose shadow region pixel values have been corrected. The pixel value correction in this case may also be performed by various known image processing techniques.
The correction unit includes a learned model obtained through training with pairs of a tomographic image containing a shadow region acquired from the subject's eye 200 and a tomographic image obtained by applying image processing to the shadow region of that tomographic image. Using this learned model, a tomographic image with corrected pixel values can be obtained easily. However, as described above, the correction unit can also correct the pixel values of the pixels corresponding to the shadow region by a known image processing method without using this learned model.
To allow the operator to observe images of the subject's eye, the image processing apparatus 801 according to the present embodiment may further include a display control unit 101-05 (display control unit) that causes the display unit 104 (display unit) to display the images to be displayed. In the present embodiment, the display control unit causes the display unit 104 to display at least one of the tomographic image with corrected pixel values and the motion contrast image. In such an embodiment, when a tomographic image with corrected pixel values is displayed on the display unit 104, it is preferable that the operator judge whether the pixel values in the tomographic image have been corrected appropriately. For example, if the correction is incomplete or excessive, a motion contrast image generated from it is likely to contain projection artifacts (PA) and the like.
Accordingly, it is preferable to further include an input unit (input unit 103, user interface 705) that accepts a judgment as to whether the tomographic image after pixel value correction is acceptable. If a tomographic image with inappropriate pixel value correction has been obtained, a motion contrast image unsuitable for diagnosis or the like may be generated from it. Therefore, when the input unit accepts a judgment that the displayed tomographic image is unacceptable, the display control unit 101-05 causes the display unit 104 to display a motion contrast image generated from the plurality of tomographic images before pixel value correction. In this way, an image better suited to diagnosis or the like is always displayed on the display unit 104. Also, as described above, the motion contrast image generated from the tomographic images after pixel value correction may itself be unsuitable, for example, for diagnosis because of PA or the like. For this reason, the display control unit 101-05 should preferably be able to display either a motion contrast image generated from the tomographic images before pixel value correction or one generated from the tomographic images after pixel value correction.
As described above, the display control unit 101-05 may cause the display unit 104 to display a user interface for switching whether to apply the shadow region pixel value correction to the acquired plurality of tomographic images. In this case, at least one of a character string or mark indicating the application state of the shadow region correction, and the value of the pixel value correction amount, may also be displayed on the display unit 104. By displaying these, the operator can instantly judge whether pixel value correction has been applied and whether the degree of correction is appropriate, and can switch the motion contrast image as necessary. Furthermore, when an instruction to apply the shadow region correction is input via this user interface, the tomographic image with corrected pixel values should be displayed on the display unit 104. With such a configuration, the image the operator wishes to view can be provided as appropriate.
According to the configuration described above, the image processing apparatus 801 applies the shadow region correction processing to tomographic images obtained by scanning the same position a plurality of times, and generates a motion contrast image from the resulting tomographic images. By using tomographic images in which shadow regions have been corrected, a motion contrast image with reduced PA can be obtained.
[Third embodiment]
The image processing apparatus according to the present embodiment is described for a case in which layer shapes and the shape of the lamina cribrosa are measured using a tomographic image to which the shadow region correction processing with the learned model described in the first embodiment has been applied.
FIG. 11 shows the configuration of the image processing apparatus 1101 according to the present embodiment. Components having substantially the same functions as those of the image processing apparatus 101 according to the first embodiment are denoted by the same reference numerals, and their description is omitted here. The image processing apparatus 1101 according to the present embodiment differs from the first embodiment in that the image processing unit 101-04 includes an analysis unit 101-47. The analysis unit 101-47 includes an extraction unit 101-471 and a measurement unit 101-472.
Next, the processing procedure of the image processing apparatus 1101 of the present embodiment is described with reference to FIG. 12. FIG. 12 is a flowchart showing the flow of the operation processing of the entire system in the present embodiment. In the operation processing flow of the present embodiment, the steps other than S1205 and S1206 in FIG. 12 are the same as the processing performed in the corresponding steps of the first embodiment, and their description is omitted here.
<Step 1205 (S1205)>
The extraction unit 101-471 specifies a predetermined layer region, a lamina cribrosa region, and lamina cribrosa pore regions based on the retinal and choroidal layer boundaries and the anterior/posterior boundaries of the lamina cribrosa acquired from the shadow-region-corrected tomographic image. In the present embodiment, a high-luminance region within the depth range bounded by the ends of the Bruch's membrane boundary and the anterior and posterior surfaces of the lamina cribrosa detected in S1203 is specified as the lamina cribrosa region. Then, the low-luminance blob regions acquired in S1203 from the superimposed en-face tomographic image are specified as lamina cribrosa pore regions. The lamina cribrosa pore regions are not limited to being specified as two-dimensional regions. For example, a three-dimensional Hessian filter may be applied to the shadow-region-corrected three-dimensional tomographic image to enhance the lamina cribrosa pore regions, after which low-luminance tubular regions existing within the depth range bounded by the ends of the Bruch's membrane boundary and the anterior and posterior surfaces of the lamina cribrosa may be specified as three-dimensional lamina cribrosa pore regions.
In the present embodiment, the shadow region correction processing applied to the tomographic image in S1204 reduces shadow regions particularly in the deep and outer layers of the retina, in the layer regions belonging to the choroid, and in the lamina cribrosa. The deep and outer retinal layers, the choroid, the lamina cribrosa, and the lamina cribrosa pore regions can therefore be specified more accurately.
The measurement unit 101-472 calculates measurement values relating to the shapes of the layer regions belonging to the retina and choroid and of the lamina cribrosa. In the present embodiment, the retinal thickness and the choroidal thickness are measured as values relating to layer shape, and the thickness of the lamina cribrosa in the depth direction and the diameters of the lamina cribrosa pores are measured as values relating to the lamina cribrosa shape.
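A layer thickness measurement of the kind described here reduces, per A-scan, to the depth difference between two segmented boundaries scaled by the axial pixel pitch. The following sketch assumes boundary positions are given in pixels and the pitch in micrometres; the function name and the calibration value are hypothetical:

```python
# Illustrative sketch: layer thickness per A-scan from two segmented boundary
# depth profiles (in pixels), converted to micrometres with the axial pitch.

def layer_thickness_um(upper_depth_px, lower_depth_px, axial_pitch_um):
    """Thickness between two boundaries at each A-scan position, in micrometres.

    upper_depth_px / lower_depth_px: per-A-scan boundary depths, e.g. the
    inner limiting membrane and the retinal pigment epithelium for retinal
    thickness. Depth increases toward the posterior of the eye.
    """
    return [(lower - upper) * axial_pitch_um
            for upper, lower in zip(upper_depth_px, lower_depth_px)]
```

Choroidal thickness would use the Bruch's membrane and chorioscleral boundaries in the same way, and the lamina cribrosa thickness its anterior and posterior surfaces.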
The segmentation and measurement processing performed in S1205 is not limited to being applied to the entire image. For example, the segmentation or measurement processing may be performed only within a region of arbitrary shape set by the operator with the input unit 103 on the tomographic image or on an enhanced image of the tomographic image. For instance, for the macula, segmentation or measurement may be performed only within the ETDRS chart, and for the optic disc, only within the (pie-chart-like) sector regions. In the present embodiment, the segmentation processing is performed directly on the tomographic image. However, the present invention is not limited to the processing exemplified here, and the segmentation processing may be performed after applying any known enhancement processing to the tomographic image.
<Step 1206>
The display control unit 101-05 causes the display unit 104 to display the information relating to the measurements acquired in S1205. FIG. 13 shows an example of a report screen 1300 displayed on the display unit 104.
The display control unit 101-05 superimposes, on the SLO image 702 at the upper left of the report screen 1300, a semi-transparent color map 1301 indicating the retinal thickness measured by the measurement unit 101-472 in S1205. The thickness map to be displayed is not limited to the retinal thickness, and any layer thickness map may be displayed as long as the layer thickness is of a type measurable in S1205. For example, a choroidal thickness map may be displayed. As in the first embodiment, the shadow-region-corrected superimposed tomographic image (B-scan image) 704 is displayed at the upper right of the report screen 1300, and the shadow-region-corrected superimposed en-face tomographic image 703 is displayed at the lower left. At the lower right of the report screen 1300, a graph 1302 of the layer thickness (retinal thickness in the present embodiment) measured on the currently displayed B-scan image is displayed.
According to the present embodiment, measurement data calculated based on the layer boundaries and lamina cribrosa boundaries acquired from the B-scan tomographic image in which shadow regions have been suppressed are displayed. This enables robust measurement of the shapes of the layers or lamina cribrosa that is less affected by shadow regions.
As in the first embodiment, a user interface 705 for switching whether to apply the shadow region correction processing may be displayed. Furthermore, a character string or mark indicating the application state of the shadow region correction processing for the tomographic image (B-scan/en-face/3D image) displayed on the display unit 104, or the value of the correction amount, may be displayed. These may be displayed together with the user interface 705. An instruction regarding whether to apply the shadow region correction processing is input via the user interface 705. Based on this instruction, the application of the shadow region correction processing to the tomographic images displayed on the display unit 104 (B-scan image 704, en-face tomographic image 703, 3D image (not shown)) is switched.
In conjunction with switching the application of the shadow region correction processing to the tomographic images (B-scan image 704, en-face tomographic image 703, 3D image), the measurement data may also be switched and displayed on the display unit 104. In this case, the measurement data include the measurement data relating to layer shape and lamina cribrosa shape for the tomographic image with or without shadow region correction. In addition, a user interface (such as a slider) for adjusting the luminance correction amount of the shadow region may be provided so that the user can manually adjust the correction amount in the shadow region. The extraction unit 101-471 and the measurement unit 101-472 calculate measurement data based on the tomographic image with the adjusted shadow correction amount, and the display control unit 101-05 causes the display unit 104 to display them.
As described above, in the image processing apparatus 1101 according to the present embodiment, the correction unit (correction unit 101-43) corrects the pixel values of shadow regions occurring in at least one of the retina, choroid, and lamina cribrosa in the tomographic image. The specification unit (extraction unit 101-471) specifies at least one boundary among the layer regions, lamina cribrosa region, and lamina cribrosa pore regions of the subject's eye 200 from the tomographic image with corrected pixel values. The measurement unit (measurement unit 101-472) then calculates measurement values for at least one of the layer regions, lamina cribrosa region, lamina cribrosa pore regions, vascular regions, and avascular regions specified by the specification unit. Although the display control unit 101-05 of the image processing apparatus 1101 according to the present embodiment is not described here, it can control the display unit 104 to perform a display similar to that described in the second embodiment, for example.
Alternatively, the image processing apparatus 1101 according to the present embodiment may be an embodiment including an image acquisition unit, a correction unit using the above-described learned model, a specification unit, and a measurement unit. In this case, the specification unit and the measurement unit perform their specification and measurement processing using the tomographic image whose pixel values have been corrected by the correction unit. As described above, the image acquisition unit (image acquisition unit 101-01) acquires a tomographic image (intensity image) of the subject's eye 200. The correction unit (correction unit 101-43) corrects the pixel values of shadow regions occurring in the tomographic image due to structures contained in the subject's eye. The specification unit (extraction unit 101-471) specifies, in the tomographic image, at least one boundary among the layer regions, lamina cribrosa region, lamina cribrosa pore regions, vascular regions, and avascular regions of the subject's eye 200. The measurement unit (measurement unit 101-472) calculates measurement values for at least one of the layer regions, lamina cribrosa region, lamina cribrosa pore regions, vascular regions, and avascular regions specified by the specification unit.
According to the configuration described above, the image processing apparatus 1101 measures layer shapes and the lamina cribrosa shape on a tomographic image to which the shadow region correction processing with the learned model described in the first embodiment has been applied. This enables robust shape measurement of the layers or lamina cribrosa that is less affected by shadow regions.
[Fourth embodiment]
The image processing apparatus according to the present embodiment uses a PA-suppressed motion contrast image generated from tomographic images to which the shadow region correction processing described in the second embodiment has been applied. Vascular regions are specified from this motion contrast image, and the shape and distribution of the blood vessels are measured.
FIG. 14 shows the configuration of the image processing apparatus 1401 according to the present embodiment. Components having substantially the same functions as those of the image processing apparatuses 101 and 801 in the first and second embodiments are denoted by the same reference numerals, and their description is omitted here. The image processing apparatus 1401 according to the present embodiment differs from the second embodiment in that the image processing unit 101-04 includes an analysis unit 101-47. In addition to the extraction unit 101-471 and the measurement unit 101-472 described in the third embodiment, the analysis unit 101-47 includes an enhancement unit 101-473 that performs enhancement processing of vascular regions.
Next, the processing procedure of the image processing apparatus 1401 of the present embodiment is described with reference to FIG. 15. FIG. 15 is a flowchart showing the flow of the operation processing of the entire system in the present embodiment. In the operation processing flow of the present embodiment, the steps other than S1507 to S1510 in FIG. 15 are the same as the processing performed in the corresponding steps of the second embodiment, and their description is omitted here.
<Step 1507>
The operator instructs the start of the measurement processing by operating the input unit 103. In the present embodiment, double-clicking the image on the report screen of FIG. 10C switches to the measurement screen, and the analysis unit 101-47 starts the measurement processing. On a measurement start screen (not shown), the operator then uses the input unit 103 to select the type of measurement processing. In the present embodiment, the types of measurement for the motion contrast image are:
1) vessel density (VAD), and
2) vessel density (VLD),
and the operator selects the desired measurement type from these.
The measurements performed using the motion contrast image are not limited to the types described above. For example, calculating the area or shape of a non-perfusion area (NPA) on a motion contrast image is also included in the present invention. Here, VAD is an abbreviation of Vessel Area Density, a vessel density (unit: %) defined as the proportion of vascular regions contained in the measurement target. VLD is an abbreviation of Vessel Length Density, a vessel density defined as the total length of blood vessels contained per unit area (unit: mm⁻¹).
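The two definitions above can be sketched directly from binary masks. In this hedged sketch, `vessel_mask` marks vessel pixels and `centerline_mask` marks skeletonized centerline pixels; the skeletonization step itself is omitted, the pixel-count approximation of centerline length is a simplification, and the function names and pixel size are hypothetical:

```python
# Illustrative sketch of the VAD and VLD definitions, computed from binary
# masks (2-D lists of 0/1 values).

def vad_percent(vessel_mask):
    """Vessel Area Density: percentage of vessel pixels in the region."""
    total = sum(len(row) for row in vessel_mask)
    vessels = sum(sum(row) for row in vessel_mask)
    return 100.0 * vessels / total

def vld_per_mm(centerline_mask, pixel_size_mm):
    """Vessel Length Density: centerline length per unit area (mm^-1)."""
    total_px = sum(len(row) for row in centerline_mask)
    line_px = sum(sum(row) for row in centerline_mask)
    area_mm2 = total_px * pixel_size_mm * pixel_size_mm
    length_mm = line_px * pixel_size_mm  # approximate length from pixel count
    return length_mm / area_mm2
```

Because VAD counts all vessel pixels, wide vessels dominate it, whereas VLD counts each vessel only along its centerline, which is why VLD is the more capillary-sensitive index described below.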
Vessel density is an index for quantifying the extent of vessel occlusion and the sparseness or density of the vascular network, and VAD is the most commonly used. However, in VAD the contribution of large-vessel regions to the measured value is large, so when measurement focusing on capillary pathology is desired, VLD is used as an index more sensitive to capillary occlusion. The types of vessel measurement to which the present invention is applicable are not limited to these, however. For example, the Fractal Dimension, which quantifies the complexity of the vascular structure, or the Vessel Diameter Index, which represents the distribution of vessel diameters (the distribution of aneurysms and stenoses), may be measured.
Next, the analysis unit 101-47 performs preprocessing for the measurement processing. Any known image processing can be applied as the preprocessing; in the present embodiment, top-hat filtering, a type of morphological operation, is applied to the motion contrast image. Applying the top-hat filter reduces luminance unevenness of the background component.
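A white top-hat is the signal minus its morphological opening (erosion followed by dilation with a flat structuring element), which removes the slowly varying background while preserving narrow bright structures such as vessels. The one-dimensional sketch below, with a hypothetical window radius, shows the principle on a single image row:

```python
# Illustrative sketch: grayscale white top-hat on one image row. The opening
# (erosion then dilation with a flat window of radius r) estimates the local
# background; subtracting it keeps only narrow bright peaks.

def _erode(sig, r):
    n = len(sig)
    return [min(sig[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def _dilate(sig, r):
    n = len(sig)
    return [max(sig[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def top_hat(sig, r=2):
    opened = _dilate(_erode(sig, r), r)  # morphological opening
    return [v - o for v, o in zip(sig, opened)]
```

A two-dimensional version applies the same opening with a disk- or square-shaped window over the whole motion contrast image; the structuring element must be larger than the vessel widths to be preserved.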
<Step 1508>
The analysis unit 101-47 performs vascular region specification processing on the motion contrast image. In the present embodiment, the enhancement unit 101-473 performs vessel enhancement processing based on a Hessian filter on the motion contrast image. Next, the extraction unit 101-471 performs segmentation processing on the vessel-enhanced image and specifies the vascular regions by performing shaping processing. Details of the vascular region specification processing are described later with reference to S1610 to S1650 shown in the flowchart of FIG. 16A.
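Hessian-based vessel enhancement exploits the fact that, on a bright line-like structure, the Hessian of the image has one eigenvalue that is strongly negative (across the line) and one near zero (along it). The toy sketch below uses finite-difference second derivatives at a single scale; a practical filter (e.g. Frangi-type) combines several scales and eigenvalue ratios, so treat this as a simplified assumption, not the patent's filter:

```python
import math

# Illustrative sketch: single-scale 2-D Hessian line filter. Responds where
# the most negative Hessian eigenvalue is large in magnitude, i.e. on bright
# line-like (vessel-like) structures.

def hessian_vesselness(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # second derivatives by central finite differences
            hxx = img[y][x + 1] - 2 * img[y][x] + img[y][x - 1]
            hyy = img[y + 1][x] - 2 * img[y][x] + img[y - 1][x]
            hxy = (img[y + 1][x + 1] - img[y + 1][x - 1]
                   - img[y - 1][x + 1] + img[y - 1][x - 1]) / 4.0
            # eigenvalues of the symmetric 2x2 Hessian matrix
            mean = (hxx + hyy) / 2.0
            spread = math.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
            lam_min = mean - spread
            out[y][x] = max(0.0, -lam_min)  # bright ridge -> negative eigenvalue
    return out
```

The three-dimensional Hessian filter mentioned in Step 1205 for dark tubular lamina cribrosa pores follows the same idea with a 3×3 Hessian and the opposite eigenvalue sign.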
<Step 1509>
The measurement unit 101-472 measures the vascular distribution on the single-examination image based on the information on the measurement target region specified by the operator. Subsequently, the display control unit 101-05 displays the measurement results on the display unit 104. As vessel density, an index of vascular distribution, there are the two indices VAD and VLD described above; in the present embodiment, the procedure for calculating VLD, the index more sensitive to capillary impairment, is described as an example. The measurement of VLD on the motion contrast image is described later with reference to S1660 to S1670 shown in the flowchart of FIG. 16B.
<Step 1510>
 The display control unit 101-05 causes the display unit 104 to display a report on the measurement performed in S1509. At this time, the display unit 104 may also display, for each measurement target image, whether it is of the left or right eye, the imaging date and time, the angle of view and number of pixels, the number of tomographic images at substantially the same position, and the conditions under which the OCTA superimposition processing was performed. Furthermore, information on the evaluation value of the motion contrast image, the projection method, and whether PA (projection artifact) removal has been performed may also be displayed on the display unit 104.
 Alternatively, the motion contrast image, or a binary image of the blood vessel region or blood vessel centerline, may be superimposed on the tomographic front image on the display unit 104, with the color and transparency varied appropriately for each predetermined depth range. In this case, display of the motion contrast image or the binary image of the blood vessel region or centerline is not limited to projection as a front image; it may be rendered and displayed as a three-dimensional image. The projection method (MIP/AIP) and the projection artifact suppression processing may also be changed, for example by selection from a context menu. In addition, the binary image of the blood vessel region identified in S1508, and the measured values or measured-value map calculated in S1509, may be output to the external storage unit 102 as files and saved.
 Next, details of the blood vessel region identification processing executed in S1508 will be described with reference to the flowchart shown in FIG. 16A.
<Step 1610>
 In the present embodiment, the enhancement-scale adjustment processing is performed by setting an enhancement scale (parameter setting) appropriate to the depth range and the site. In regions such as the retinal surface layer of the macula or optic nerve head, or the deep choroid, setting a large enhancement scale ensures that large-diameter vessels such as retinal arteries and veins or deep choroidal vessels are enhanced appropriately (without disappearing or being split into multiple regions). The blood vessel region can therefore be identified accurately. Conversely, in regions where only capillaries and new vessels are present, such as the deep and outer layers of the retina or the choriocapillaris, performing the enhancement at a small scale emphasizes the edges of thin vessels. The blood vessel region can therefore be identified more accurately on binarization (over-detection of the blood vessel region is prevented).
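One possible shape of this depth-dependent scale selection is a simple lookup. The region names and numeric scales below are assumptions for illustration only; the embodiment gives no concrete values:

```python
# Assumed mapping from analysed depth range to Gaussian smoothing scales
# (in pixels) for the Hessian filter; the values are illustrative.
ENHANCEMENT_SCALES_PX = {
    "retina_surface":  (2.0, 4.0, 8.0),  # large retinal arteries/veins present
    "choroid_deep":    (2.0, 4.0, 8.0),  # large deep choroidal vessels
    "retina_deep":     (1.0, 2.0),       # capillaries only
    "retina_outer":    (1.0, 2.0),
    "choriocapillaris": (1.0, 2.0),
}

def scales_for(depth_range: str):
    """Pick enhancement scales according to the depth range being analysed."""
    return ENHANCEMENT_SCALES_PX[depth_range]
```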
 In the present embodiment, for the adjustment of the enhancement scale for vessels of different calibers, the enhancement scale is set on the front motion contrast image. However, the present invention is not limited to this. For example, the enhancement scale may be set adaptively on a three-dimensional motion contrast image.
<Step 1620>
 The enhancement unit 101-473 applies a blood vessel enhancement filter (tubular structure enhancement) based on the eigenvalues of the Hessian matrix to the motion contrast image pre-processed in S1507. Such enhancement filters are collectively called Hessian filters; examples include the Vesselness filter and the multi-scale line filter. The present embodiment uses a multi-scale line filter, but any known blood vessel enhancement filter may be used.
 The Hessian filter smooths the image at a size suited to the caliber of the vessels to be enhanced. A Hessian matrix whose elements are the second derivatives of the luminance values is then computed at each pixel of the smoothed image, and the local structure is enhanced based on the magnitude relation of the matrix's eigenvalues. The Hessian matrix is a square matrix given by Expression (2), and each of its elements is, for example, a second derivative of the luminance value Is of the image obtained by smoothing the image luminance value I, as shown in Expression (3). The Hessian filter regards a pixel as part of a linear structure, and enhances it, when one of the eigenvalues (λ1, λ2) of the Hessian matrix is close to 0 and the other is negative with a large absolute value. This corresponds to treating as a linear structure, and enhancing, pixels that satisfy the characteristic of blood vessel regions in a motion contrast image: the luminance changes little along the running direction of the vessel and drops sharply in the direction orthogonal to it.
 A motion contrast image also contains vessels of various calibers, from capillaries to arterioles and venules. For this reason, line-enhanced images are generated via the Hessian matrix from images smoothed with Gaussian filters at multiple scales. Next, as shown in Expression (4), each is multiplied by the square of the Gaussian smoothing parameter σ as a correction coefficient, the results are combined by a maximum operation, and the combined image Ihessian is taken as the output of the Hessian filter.
 Note that the present invention is not limited to applying a two-dimensional Hessian filter to a front motion contrast image. For example, a three-dimensional Hessian filter may be applied to a three-dimensional motion contrast image to generate a three-dimensional enhanced image.
 The Hessian filter has the advantages of being robust to noise and improving vessel continuity. On the other hand, the maximum vessel caliber contained in an image is often unknown in advance, so it has the drawback that the enhanced vessel regions tend to become thick, particularly when the smoothing parameter is too large relative to the maximum vessel caliber in the image. In the present embodiment, this drawback is suppressed by performing the enhancement-scale adjustment processing described in S1610. The method of appropriately enhancing and binarizing a motion contrast image regardless of vessel caliber is not limited to the method described in this embodiment. For example, the common region of the binary image of the Hessian-enhanced image and the binary image of a vessel-enhanced image obtained by edge-selective sharpening may be identified as the blood vessel region.
Figure JPOXMLDOC01-appb-M000002 (Expression (2))
Figure JPOXMLDOC01-appb-M000003 (Expression (3))
Figure JPOXMLDOC01-appb-M000004 (Expression (4))
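Expressions (2) to (4) are rendered as images in the publication; a minimal sketch of the multi-scale Hessian line enhancement they describe (Gaussian-smoothed second derivatives, the eigenvalue test "one eigenvalue near zero, the other strongly negative", the σ² correction, and a pixel-wise maximum over scales) might look as follows. The scale set and the simplified response function are assumptions for illustration, not the exact filter of the embodiment:

```python
import numpy as np
from scipy import ndimage

def multiscale_line_filter(img: np.ndarray, sigmas=(1.0, 2.0, 4.0)) -> np.ndarray:
    """Enhance bright line-like (vessel) structures at multiple scales."""
    img = np.asarray(img, dtype=float)
    out = np.zeros_like(img)
    for s in sigmas:
        # Second derivatives of the Gaussian-smoothed image (cf. Expression (3)).
        ixx = ndimage.gaussian_filter(img, s, order=(0, 2))
        iyy = ndimage.gaussian_filter(img, s, order=(2, 0))
        ixy = ndimage.gaussian_filter(img, s, order=(1, 1))
        # Eigenvalues of the 2x2 Hessian [[ixx, ixy], [ixy, iyy]]
        # (cf. Expression (2)) in closed form.
        half_trace = (ixx + iyy) / 2.0
        root = np.sqrt(((ixx - iyy) / 2.0) ** 2 + ixy ** 2)
        lam_small = half_trace - root  # the more negative eigenvalue
        # On a bright line one eigenvalue is near 0 and the other strongly
        # negative, so -lam_small is large on vessels. sigma**2 compensates
        # the amplitude loss of derivatives at coarser scales, and scales
        # are combined by a maximum (cf. Expression (4)).
        resp = np.maximum(-lam_small, 0.0) * s ** 2
        out = np.maximum(out, resp)
    return out
```

The response peaks on the vessel centerline and decays away from it, which is what makes the subsequent mean-threshold binarization workable.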
<Step 1630>
 The extraction unit 101-471 binarizes the vessel-enhanced image produced by the Hessian filter in S1620 (hereinafter, the Hessian-enhanced image). In the present embodiment, the mean value of the Hessian-enhanced image is used as the binarization threshold. By setting a predetermined lower limit on the binarization threshold, regions other than blood vessels can be prevented from being erroneously detected as artifacts. The binarization described here is not limited to thresholding; any known segmentation method may be used. Nor is the segmentation by binarization limited to being applied to the entire image. For example, the segmentation may be performed only within a region of arbitrary shape that the operator sets, using the input unit 103, on the motion contrast image or on its enhanced image.
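The mean-threshold binarization with a lower limit can be sketched as follows; the lower-limit value is illustrative, as the embodiment does not state one:

```python
import numpy as np

def binarize_vessels(enhanced: np.ndarray, lower_limit: float = 0.05) -> np.ndarray:
    """Binarize a Hessian-enhanced image.

    The image mean serves as the threshold, but never below lower_limit:
    on a nearly vessel-free image the mean is close to zero, and without
    the floor, faint artifacts would be over-detected as vessels.
    """
    threshold = max(float(enhanced.mean()), lower_limit)
    return enhanced > threshold
```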
<Step 1640>
 The extraction unit 101-471 applies thinning to the binary image of the blood vessel region generated in S1630, producing a binary image one pixel wide corresponding to the vessel centerlines (hereinafter, the skeleton image). Any thinning or skeletonization method may be used for this processing; the present embodiment uses Hilditch's thinning algorithm.
<Step 1650>
 The extraction unit 101-471 performs morphological operations as shaping processing of the blood vessel region: opening (erosion followed by dilation) and closing (dilation followed by erosion). The shaping processing is not limited to this; for example, small regions may be removed based on the area of each label obtained by labeling the binary image.
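Both shaping variants mentioned above (opening/closing and label-area filtering) can be combined in one sketch; the 3x3 structuring element and the area threshold are assumed values:

```python
import numpy as np
from scipy import ndimage

def shape_vessel_mask(mask: np.ndarray, min_area: int = 10) -> np.ndarray:
    """Clean up a binary vessel mask.

    Opening (erosion then dilation) removes isolated speckle, closing
    (dilation then erosion) fills small gaps inside vessels, and any
    remaining connected component smaller than min_area pixels is dropped.
    """
    se = np.ones((3, 3), dtype=bool)
    out = ndimage.binary_opening(mask, structure=se)
    out = ndimage.binary_closing(out, structure=se)
    labels, n_labels = ndimage.label(out)
    for label in range(1, n_labels + 1):
        component = labels == label
        if component.sum() < min_area:
            out[component] = False
    return out
```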
 Next, details of the measurement processing executed in S1509 will be described with reference to the flowchart shown in FIG. 16B.
<Step 1660>
 The analysis unit 101-47 sets the region of interest (the measurement target image and the measurement region) based on the instructions the operator gives via the input unit 103. In the present embodiment, "None" is selected as the OCTA map and "VLD" as the OCTA sector map, so the ETDRS sector region is set as the region of interest. The OCTA map is a color (or grayscale) map of blood vessel density measured per pixel over the entire OCTA image. The OCTA sector map is a map of statistics (for example, means) of the blood vessel density distribution over the ETDRS sectors set on the OCTA image. The form of the region of interest is not limited to this; it may be divided into pie-chart-like sector regions, or a region of interest of arbitrary shape may be set.
<Step 1670>
 The measurement unit 101-472 performs measurement processing based on the skeleton image obtained in S1640. In the present embodiment, at each pixel position of the skeleton image, the total length of non-zero (white) pixels per unit area [mm⁻¹] in a neighborhood centered on that pixel is computed as the blood vessel density (VLD) at that pixel. An image holding the blood vessel density (VLD) value computed at each pixel (a VLD map) is then generated.
 When a sector region is designated as the region of interest, the total length of non-zero (white) pixels per unit area [mm⁻¹] in each sector region on the skeleton image is computed as the blood vessel density (VLD) of that sector. When a VAD map is designated as the measurement on the motion contrast image, the procedure is as follows: at each pixel position of the binary image of the blood vessel region obtained in S1630, the proportion of non-zero (white) pixels in a neighborhood centered on that pixel is computed as the blood vessel density (VAD) at that pixel, and an image holding the VAD value computed at each pixel (a VAD map) is generated. A VAD sector map can likewise be generated by computing the proportion of non-zero (white) pixels in each sector region on the binary image of the blood vessel region as the blood vessel density (VAD) of that sector.
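The two density indices can be sketched as follows for a whole region of interest; the per-pixel map variants would apply the same formulas over sliding neighborhoods. The pixel pitch is a parameter, and counting each skeleton pixel as one pixel pitch of length is a simplification (diagonal steps are not weighted by the square root of 2):

```python
import numpy as np

def vessel_area_density(vessel_mask: np.ndarray) -> float:
    """VAD: proportion of vessel (non-zero) pixels in the region (unitless)."""
    return float(np.count_nonzero(vessel_mask)) / vessel_mask.size

def vessel_length_density(skeleton_mask: np.ndarray, pixel_mm: float) -> float:
    """VLD: total centerline length per unit area [1/mm].

    Each skeleton pixel contributes pixel_mm of centerline length, and
    the region area is the pixel count times pixel_mm squared.
    """
    length_mm = np.count_nonzero(skeleton_mask) * pixel_mm
    area_mm2 = skeleton_mask.size * pixel_mm ** 2
    return length_mm / area_mm2
```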
 In the present embodiment, the shadow regions arising in the tomographic images under objects in the eye, such as blood vessels, were corrected in S1504 using the learned model. However, the method of correcting the shadow regions is not limited to image correction using a learned model. For example, the present invention also covers the case where PA is suppressed by generating a motion contrast image from tomographic images in which the shadow regions have been corrected based on any known image processing technique, such as that disclosed in Patent Document 1.
 The image processing apparatus 1401 according to the present embodiment further includes region identification means in addition to the configurations described in the first and second embodiments. The region identification means (extraction unit 101-471) identifies a blood vessel region or an avascular region, for example by the technique described above, from a motion contrast image generated using a plurality of tomographic images whose pixel values have been corrected. The image processing apparatus 1401 further includes calculation means (measurement unit 101-472) that calculates measured values for at least one of the blood vessel region and the avascular region identified by the region identification means.
 With the configuration described above, the image processing apparatus 1401 identifies blood vessel regions from a PA-suppressed motion contrast image generated from tomographic images to which the shadow region correction processing described in the second embodiment has been applied, and measures and calculates the blood vessel density. This enables accurate identification of blood vessel regions and measurement of vessel shape and distribution from a motion contrast image in which PA has been suppressed based on an efficiently trained learned model, without requiring learning of image features on OCTA images.
[Other Embodiments]
 In the embodiments described above, the present invention was implemented as the image processing apparatuses 101, 801, 1101, and 1401, but embodiments of the present invention are not limited to the illustrated image processing apparatuses. For example, the present invention may be embodied as a system, an apparatus, a method, a program, a storage medium, or the like. That is, the present invention can be realized by supplying a program that implements one or more functions of the above embodiments to a system or apparatus via a network or a storage medium, and having one or more processors in a computer of that system or apparatus read and execute the program. It can also be realized by a circuit (for example, an ASIC) that implements one or more of the functions.
 Although the present invention has been described with reference to the embodiments, the present invention is not limited to the embodiments described above. Inventions modified within a scope not contrary to the gist of the present invention, and inventions equivalent to the present invention, are also included in the present invention. The embodiments described above may also be combined as appropriate within a scope not contrary to the gist of the present invention.
 The present invention is not limited to the above embodiments, and various changes and modifications are possible without departing from the spirit and scope of the present invention. Accordingly, the following claims are appended to make the scope of the present invention public.
 This application claims priority based on Japanese Patent Application No. 2018-167195 filed on September 6, 2018, the entire contents of which are incorporated herein by reference.
101-01: Image acquisition unit
101-04: Image processing unit
101-43: Correction unit

Claims (22)

  1.  An image processing apparatus comprising:
      image acquisition means for acquiring a tomographic image of an eye to be examined; and
      correction means for correcting, using a learned model, pixel values of a shadow region in the tomographic image, the shadow region being caused by an object included in the eye to be examined.
  2.  An image processing apparatus comprising:
      image acquisition means for acquiring a tomographic image of an eye to be examined; and
      generation means for generating, using a learned model, a tomographic image in which pixel values of a shadow region in the tomographic image have been corrected, the shadow region being caused by an object included in the eye to be examined.
  3.  The image processing apparatus according to claim 1 or 2, further comprising identification means for identifying at least one of a blood vessel in the object and the shadow region caused by the blood vessel,
      wherein the pixel values of the shadow region are corrected using a learned model corresponding to a partial region including at least one of the identified blood vessel and the identified shadow region.
  4.  The image processing apparatus according to any one of claims 1 to 3, wherein the pixel value correction is performed on pixel values of a shadow region occurring in at least one of the retina, the choroid, and the lamina cribrosa in the tomographic image.
  5.  The image processing apparatus according to claim 4, wherein the image values of the shadow region are corrected using learned models corresponding respectively to the retina, the choroid, and the lamina cribrosa in the tomographic image.
  6.  The image processing apparatus according to any one of claims 1 to 5, further comprising specifying means for specifying, from the tomographic image whose pixel values have been corrected, at least one boundary among a layer region, a lamina cribrosa region, and a lamina cribrosa pore region of the eye to be examined.
  7.  The image processing apparatus according to claim 6, further comprising measurement means for calculating a measured value for at least one of the layer region, the lamina cribrosa region, and the lamina cribrosa pore region of the eye to be examined specified by the specifying means.
  8.  The image processing apparatus according to any one of claims 1 to 7, further comprising motion contrast image generation means for generating a motion contrast image using a plurality of tomographic images acquired with the intention of scanning the same position of the eye to be examined a plurality of times, the plurality of tomographic images having their pixel values corrected.
  9.  An image processing apparatus comprising:
      image acquisition means for acquiring a cluster including a plurality of tomographic images acquired with the intention of scanning the same position of an eye to be examined a plurality of times; and
      motion contrast image generation means for generating a motion contrast image using the cluster including the plurality of tomographic images in which pixel values of shadow regions in the plurality of tomographic images have been corrected, the shadow regions being caused by an object included in the eye to be examined.
  10.  The image processing apparatus according to claim 8 or 9, further comprising display control means for causing display means to display at least one of the tomographic image whose pixel values have been corrected and the motion contrast image.
  11.  The image processing apparatus according to claim 10, further comprising input means for accepting, when the tomographic image whose pixel values have been corrected is displayed on the display means, a judgment as to whether the displayed tomographic image is acceptable.
  12.  The image processing apparatus according to claim 11, wherein, when the input means accepts a judgment that the displayed tomographic image is not acceptable, the display control means causes the display means to display a motion contrast image generated by the motion contrast image generation means using the plurality of acquired tomographic images.
  13.  The image processing apparatus according to claim 10, wherein the motion contrast image generation means is capable of generating a motion contrast image using the plurality of acquired tomographic images, and
      the display control means is capable of causing the display means to display a motion contrast image generated using the plurality of acquired tomographic images and a motion contrast image generated using the plurality of tomographic images whose pixel values have been corrected.
  14.  The image processing apparatus according to any one of claims 10 to 13, wherein the display control means causes the display means to display a user interface for switching whether correction of the image values of the shadow regions is applied to the plurality of acquired tomographic images, and at least one of a character string or mark indicating the application state of the correction of the image values of the shadow regions and the value of the correction amount of the pixel values, and
      when an instruction to apply the correction of the image values of the shadow regions is input to the user interface, the display control means causes the display means to display the tomographic image whose pixel values have been corrected.
  15.  The image processing apparatus according to any one of claims 8 to 14, further comprising region identification means for identifying a blood vessel region or an avascular region from the motion contrast image generated using the plurality of tomographic images whose pixel values have been corrected.
  16.  The image processing apparatus according to claim 15, further comprising calculation means for calculating a measured value for at least one of the blood vessel region and the avascular region identified by the region identification means.
  17.  The image processing apparatus according to any one of claims 1 to 7, further comprising display control means for causing display means to display the tomographic image whose pixel values have been corrected,
      wherein the display control means causes the display means to display a user interface for switching whether correction of the image values of the shadow region is applied to the acquired tomographic image, and at least one of a character string or mark indicating the application state of the correction of the image values of the shadow region and the value of the correction amount of the pixel values, and
      when an instruction to apply the correction of the image values of the shadow region is input to the user interface, the display control means causes the display means to display the tomographic image whose pixel values have been corrected.
  18.  The image processing apparatus according to any one of claims 1 to 17, wherein the learned model is obtained using training data in which a tomographic image of an eye to be examined before correction is the input data, and a corrected tomographic image obtained by correcting at least some of the pixel values of the tomographic image before correction is the output data.
  19.  An image processing method comprising:
      a step of acquiring a tomographic image of an eye to be examined; and
      a step of correcting, using a learned model, pixel values of a shadow region in the tomographic image, the shadow region being caused by an object included in the eye to be examined.
  20.  An image processing method comprising:
      a step of acquiring a tomographic image of an eye to be examined; and
      a step of generating, using a learned model, a tomographic image in which pixel values of a shadow region in the tomographic image have been corrected, the shadow region being caused by an object included in the eye to be examined.
  21.  An image processing method comprising:
      a step of acquiring a cluster including a plurality of tomographic images acquired with the intention of scanning the same position of an eye to be examined a plurality of times; and
      a step of generating a motion contrast image using the cluster including the plurality of tomographic images in which pixel values of shadow regions in the plurality of tomographic images have been corrected, the shadow regions being caused by an object included in the eye to be examined.
  22.  A program which, when executed by a processor, causes the processor to execute each step of the image processing method according to any one of claims 19 to 21.

PCT/JP2019/034752 2018-09-06 2019-09-04 Image processing device, image processing method and program WO2020050308A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-167195 2018-09-06
JP2018167195A JP2020039430A (en) 2018-09-06 2018-09-06 Image processing device, image processing method and program

Publications (1)

Publication Number Publication Date
WO2020050308A1 true WO2020050308A1 (en) 2020-03-12

Family

ID=69723238

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/034752 WO2020050308A1 (en) 2018-09-06 2019-09-04 Image processing device, image processing method and program

Country Status (2)

Country Link
JP (1) JP2020039430A (en)
WO (1) WO2020050308A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11382794B2 (en) 2018-07-02 2022-07-12 Belkin Laser Ltd. Direct selective laser trabeculoplasty
US11564836B2 (en) 2010-05-10 2023-01-31 Ramot At Tel Aviv University Ltd. System and method for treating an eye
US11771596B2 (en) 2010-05-10 2023-10-03 Ramot At Tel-Aviv University Ltd. System and method for treating an eye

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013153844A (en) * 2012-01-27 2013-08-15 Canon Inc Image processing apparatus, image processing method, and program
JP2017047127A (en) * 2015-09-04 2017-03-09 株式会社ニデック Oct motion contrast data analysis device and oct motion contrast data analysis program
US20170169590A1 (en) * 2015-12-09 2017-06-15 Oregon Health & Science University Systems and methods to remove shadowgraphic flow projections in oct angiography
WO2017143300A1 (en) * 2016-02-19 2017-08-24 Optovue, Inc. Methods and apparatus for reducing artifacts in oct angiography using machine learning techniques
JP2018068748A (en) * 2016-10-31 2018-05-10 キヤノン株式会社 Information processing apparatus, information processing method, and program


Also Published As

Publication number Publication date
JP2020039430A (en) 2020-03-19

Similar Documents

Publication Publication Date Title
US11935241B2 (en) Image processing apparatus, image processing method and computer-readable medium for improving image quality
WO2020036182A1 (en) Medical image processing device, medical image processing method, and program
JP7374615B2 (en) Information processing device, information processing method and program
US20210183019A1 (en) Image processing apparatus, image processing method and computer-readable medium
WO2020183791A1 (en) Image processing device and image processing method
WO2020050308A1 (en) Image processing device, image processing method and program
WO2020137678A1 (en) Image processing device, image processing method, and program
JP2015160105A (en) Image processing device, image processing method and program
JP7362403B2 (en) Image processing device and image processing method
JP2021122559A (en) Image processing device, image processing method, and program
JP7195745B2 (en) Image processing device, image processing method and program
WO2020075719A1 (en) Image processing device, image processing method, and program
JP7009265B2 (en) Image processing equipment, image processing methods and programs
JP7106304B2 (en) Image processing device, image processing method and program
WO2019230643A1 (en) Information processing device, information processing method, and program
JP2022189963A (en) Ophthalmologic apparatus
JP2018191761A (en) Information processing device, information processing method, and program
JP7204345B2 (en) Image processing device, image processing method and program
JP2022062620A (en) Image processing device, image processing method and program
JP7387812B2 (en) Image processing device, image processing method and program
JP7297952B2 (en) Information processing device, information processing method and program
US20240057861A1 (en) Grade evaluation apparatus, ophthalmic imaging apparatus, non-transitory computer-readable storage medium, and grade evaluation method
JP7446730B2 (en) Image processing device, image processing method and program
JP2018047084A (en) Ophthalmologic examination apparatus
WO2020049828A1 (en) Image processing apparatus, image processing method, and program

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application
     Ref document number: 19856552
     Country of ref document: EP
     Kind code of ref document: A1
NENP Non-entry into the national phase
     Ref country code: DE
122  Ep: pct application non-entry in european phase
     Ref document number: 19856552
     Country of ref document: EP
     Kind code of ref document: A1