WO2020209012A1 - Image analysis method and image analysis device - Google Patents

Image analysis method and image analysis device

Info

Publication number
WO2020209012A1
WO2020209012A1 PCT/JP2020/011457
Authority
WO
WIPO (PCT)
Prior art keywords
image
analysis
region
image analysis
blood vessel
Prior art date
Application number
PCT/JP2020/011457
Other languages
English (en)
Japanese (ja)
Inventor
宏治 野里
和英 宮田
Original Assignee
キヤノン株式会社 (Canon Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2019114966A external-priority patent/JP2020171664A/ja
Application filed by キヤノン株式会社 (Canon Inc.)
Publication of WO2020209012A1 publication Critical patent/WO2020209012A1/fr

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for looking at the eye fundus, e.g. ophthalmoscopes

Definitions

  • the present invention relates to an image analysis method and an image analysis apparatus, and more particularly to an image analysis method and an image analysis apparatus for analyzing a fundus image captured by an ophthalmologic examination.
  • OCT (Optical Coherence Tomography): optical coherence tomography
  • TD-OCT (Time Domain OCT): time-domain method
  • SD-OCT (Spectral Domain OCT): spectral-domain method
  • Non-Patent Document 1 shows an example of AO-OCT.
  • AO-SLO and AO-OCT generally measure the wavefront of the eye with a Shack-Hartmann wavefront sensor.
  • the Shack-Hartmann wavefront sensor measures the wavefront by injecting measurement light into the eye and receiving the reflected light with a CCD camera through a microlens array.
  • AO-SLO and AO-OCT can perform high-resolution imaging.
  • OCTA (OCT Angiography): angiography using OCT
  • a blood vessel image (hereinafter referred to as an OCTA image) is generated by projecting three-dimensional motion contrast data acquired by OCT onto a two-dimensional plane.
  • the motion contrast data is data obtained by repeatedly imaging the same cross section of the measurement target by OCT and detecting temporal changes of the measurement target between the repeated acquisitions.
  • the motion contrast data can be obtained, for example, by calculating the temporal change of the phase, vector, and intensity of the complex OCT signal from the difference, ratio, correlation, and the like (for example, Patent Document 1).
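  • as an illustrative sketch of this calculation (not the patented implementation): the normalized temporal variance of the intensity of repeated complex B-scans is one simple motion contrast measure; the function name and array layout below are assumptions.

```python
import numpy as np

def motion_contrast(frames: np.ndarray) -> np.ndarray:
    """Motion contrast from m repeated complex B-scans of shape (m, z, x).

    The text only states that temporal changes of phase, vector, or
    intensity may be computed from differences, ratios, correlations,
    etc.; here the intensity variance across repeats, normalized by the
    mean intensity, stands in for that temporal-change measure.
    """
    intensity = np.abs(frames) ** 2
    mean = intensity.mean(axis=0)
    var = intensity.var(axis=0)
    return var / (mean + 1e-12)  # static tissue -> ~0, flowing tissue -> large
```

static regions give near-zero contrast, while regions whose complex signal fluctuates between repeats (blood flow) give large values.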
  • for SLO and AO-SLO, a method of generating a blood vessel image (hereinafter referred to as an SLOA or AO-SLOA image) from the motion contrast data of the planar image is also being studied.
  • the present invention aims to realize analysis of cell bodies such as ganglion cells from retinal inner layer images based on high-resolution images such as AO-OCT images, and to obtain clinically useful information.
  • the image analysis method of the present invention is an image analysis method for analyzing a tomographic image of an eye fundus generated based on return light from a subject's eye irradiated with measurement light, and is characterized by having an extraction step of extracting a predetermined layer from the tomographic image, a specifying step of specifying a blood vessel region in the predetermined layer, and an analysis step of analyzing ganglion cells based on the predetermined layer and the specified blood vessel region.
  • FIG. 1 is a diagram showing the configuration of a fundus imaging device (OCT device).
  • FIG. 2 is a configuration diagram of an image analysis unit that analyzes cells from output data (image) of the fundus imaging device.
  • 110 is an OCT unit, which is composed of a light source 101, a fiber coupler 102, a reference optical system 111, a spectroscope 112, and an eyepiece optical system as main components.
  • Reference numeral 101 denotes a light source, and an SLD light source having a wavelength of 840 nm was used.
  • the light source 101 need only have low coherence, and an SLD having a wavelength width of 30 nm or more is preferably used.
  • an ultrashort pulse laser such as a titanium sapphire laser can be used as a light source.
  • the light emitted from the light source 101 is guided to the fiber coupler 102 through the single mode optical fiber.
  • the fiber coupler 102 branches the measurement optical path 103 and the reference optical path 113.
  • a fiber coupler having a branching ratio of 10:90 is used, and is configured so that 10% of the input light amount goes to the measurement light path 103.
  • the light passing through the measurement optical path 103 is emitted as a parallel beam by the collimator 104.
  • the polarization of the emitted light is adjusted by a polarization regulator (not shown) provided in the path of the single-mode optical fiber 103.
  • an optical component for adjusting polarization is arranged in the optical path after being emitted from the collimator 104.
  • an optical element for adjusting the dispersion characteristic of the measurement light and an optical element for adjusting the chromatic aberration characteristic may be provided in the optical path.
  • the measurement light 105 is relayed by reflection mirrors 106-1 to 3 and a lens (not shown), and is scanned one-dimensionally or two-dimensionally on the eye 109 to be inspected by the scanning optical system 107-1.
  • two galvano scanners are used in the scanning optical system 107-1, one for the main scanning (horizontal direction of the fundus) and one for the sub-scanning (vertical direction of the fundus).
  • an optical element such as a mirror or a lens is used between the scanners to keep them optically conjugate.
  • the scanning optical system further has a tracking mirror 107-2.
  • the tracking mirror 107-2 is composed of two galvano scanners, and the imaging region can be further moved in two directions.
  • the scanning optical system 107-1 also serves as a tracking mirror 107-2.
  • a relay optical system (not shown) is often used.
  • the measurement light 105 scanned by the scanning optical systems 107-1 and 107-2 is applied to the eye 109 to be inspected through the eyepieces 108-1 and 108-2.
  • the measurement light applied to the eye 109 to be inspected is reflected or scattered at the fundus.
  • a lens is used for the eyepiece, but a spherical mirror or the like may be used.
  • the reflected light reflected or scattered from the retina of the eye 109 to be inspected travels in the opposite direction when it is incident, enters the optical fiber 103 through the collimator 104, and returns to the fiber coupler 102.
  • the reference light passing through the reference optical path 113 is emitted by the collimator 114, reflected by the optical path length variable portion 116, and returned to the fiber coupler 102 again.
  • the return light and the reference light that have reached the fiber coupler 102 are combined and guided to the spectroscope 112 through the optical fiber 117 as interference light.
  • the interference light entering the spectroscope 112 is collimated by the collimator 118 and dispersed wavelength by wavelength by the grating 119.
  • the dispersed light is applied to the line sensor 121 through the lens system 120.
  • the line sensor 121 may be composed of a CCD sensor or a CMOS sensor.
  • a tomographic image of the fundus is constructed by the control unit 122 based on the interference light information dispersed by the spectroscope 112.
  • the control unit 122 controls the optical path length variable unit 116 and can acquire an image at a desired depth position. Further, the control unit 122 also controls the scanning units 107-1 and 107-2 at the same time, and can acquire an interference signal at an arbitrary position. Generally, the scanning units 107-1 and 107-2 perform a raster scan of an arbitrary range on the fundus, and the interference signal at each position is recorded at the same time as the position. Three-dimensional volume data can be acquired by creating a tomographic image from the obtained interference signal.
  • when the ganglion cells are analyzed from a B scan, it is sufficient to obtain a plurality of tomographic images along one line at a specific position, and it is not necessary to perform a raster scan to acquire a 3D volume.
  • a tomographic image of the fundus is generated by the tomographic image generation unit 202 from the obtained signal.
  • the blood vessel information is acquired as OCTA (OCT Angiography) data generated from the OCT tomographic image
  • the tomographic information generated by the tomographic image generation unit 202 is processed by the blood vessel position analysis unit 203.
  • a blood vessel image is generated from the tomographic image by OCTA processing.
  • since OCTA measures the temporal change of the OCT interference signal caused by blood flow, it is necessary to measure multiple times at the same place.
  • a B scan at the same location is repeated m times, and the scan position is then moved in the y direction; this is performed at n locations.
  • that is, the B scan is repeated m times at each of the n positions y1 to yn on the fundus plane.
  • the scanning time becomes long, which causes motion artifacts in the image due to eye movements (involuntary fixational eye movements) during scanning and increases the burden on the subject.
  • m = 4 (FIG. 5B) was set in consideration of the balance between the two.
  • the number of repetitions m may be set according to the A scan speed of the OCT apparatus and the motion analysis of the fundus surface image of the subject.
  • p indicates the number of A-scan samples in one B scan. That is, the planar image size is determined by p × n. If p × n is large, a wide range can be scanned at the same measurement pitch, but the scanning time becomes long, which causes the above-mentioned motion artifacts and increases the burden on the subject.
  • n and p can be arbitrarily set as appropriate.
  • ⁇ x in FIG. 5A is the interval (x pitch) between adjacent x positions
  • ⁇ y is the interval (y pitch) between adjacent y positions.
  • the x-pitch is determined as 1/2 of the beam spot diameter of the irradiation light on the fundus; in the present embodiment it is 10 μm (FIG. 5B). Since there is only one scan line, the y position takes only the single value y1.
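  • a tiny worked example of this sampling geometry (only the 10 μm x-pitch comes from the text; the spot diameter and p are assumed values):

```python
# x-pitch = half the beam spot diameter on the fundus, as in the text.
spot_diameter_um = 20.0             # assumed spot diameter giving the 10 um pitch
x_pitch_um = spot_diameter_um / 2   # 10 um, matching the embodiment
p = 300                             # assumed number of A-scan samples per B scan
scan_width_um = x_pitch_um * (p - 1)  # lateral extent of one B scan
```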
  • FIG. 6 is an example of an OCTA image.
  • FIG. 6A is an OCT tomographic image, and a plurality of tomographic images at the same position are captured for constructing the OCTA image, and the images are 601 to 604, respectively.
  • a motion contrast image is generated by calculating the difference between the respective tomographic images 601 to 604.
  • the image shown in FIG. 6B is a tomographic image, and the motion contrast image obtained from the plurality of tomographic images is the image shown in FIG. 6C.
  • What is required in this embodiment is a motion contrast image in such a tomographic image.
  • motion contrast images are aligned and stacked into a three-dimensional volume, and motion contrast data in an arbitrary layer range is extracted to generate an OCTA image of a specific layer (FIG. 6D).
  • in step S101, the tomographic image generation unit 202 extracts the repeated B scan interference signals (m frames) at the position yk.
  • in step S102, the tomographic image generation unit 202 extracts the j-th tomographic data.
  • in step S103, the tomographic image generation unit 202 subtracts the acquired background data from the interference signal.
  • in step S104, the tomographic image generation unit 202 performs wavenumber conversion on the background-subtracted interference signal and then performs a Fourier transform.
  • a fast Fourier transform (FFT: Fast Fourier Transform) is used for the Fourier transform.
  • the number of samples of the interference signal may be increased by zero padding before the Fourier transform. Zero padding increases the gradation of the depth profile after the Fourier transform, which improves the alignment accuracy in step S109 described later.
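  • the effect of zero padding can be sketched as follows (a simplified model: wavenumber resampling and background subtraction are omitted, and the function name is an assumption):

```python
import numpy as np

def spectrum_to_ascan(spectrum: np.ndarray, pad_factor: int = 2) -> np.ndarray:
    """Turn a background-subtracted spectral fringe into an A-scan profile.

    Zero padding before the Fourier transform interpolates the depth
    profile (finer gradation), which helps the sub-pixel alignment in
    step S109.
    """
    n = spectrum.size
    padded = np.zeros(n * pad_factor, dtype=spectrum.dtype)
    padded[:n] = spectrum                # append zeros after the fringe
    ascan = np.abs(np.fft.fft(padded))   # depth profile magnitude
    return ascan[: n * pad_factor // 2]  # keep the positive-depth half
```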
  • in step S105, the tomographic image generation unit 202 calculates the absolute value of the complex signal obtained by the Fourier transform executed in step S104. This value is the intensity of the tomographic image of the scan.
  • in step S106, the tomographic image generation unit 202 determines whether the index j has reached the predetermined number (m). That is, it is determined whether the intensity calculation of the tomographic image at the position yk has been repeated m times. If not, the process returns to step S102, and the intensity calculation of the tomographic image at the same Y position is repeated. When the predetermined number is reached, the process proceeds to the next step.
  • in step S107, the tomographic image generation unit 202 calculates the similarity between the m frames of tomographic images at a given position yk. Specifically, the tomographic image generation unit 202 selects an arbitrary one of the m frames as a template and calculates the correlation value with each of the remaining m−1 frames. In step S108, the tomographic image generation unit 202 selects the highly correlated images whose correlation values calculated in step S107 are at or above a certain threshold.
  • the threshold value can be set arbitrarily, and is set so that frames whose correlation as an image has deteriorated due to the subject's blinking or fixation tremor can be excluded.
  • OCTA is a technique for distinguishing, among the sample tissues, flowing tissue (for example, blood) from static tissue based on the correlation values between images. That is, flowing tissue is extracted on the premise that static tissue gives high inter-frame correlation; if the correlation between whole images is low, false detections occur when calculating the motion contrast, and the entire image is judged as if it were flowing tissue. In this step, to avoid such false positives, tomographic images with low correlation are excluded in advance, and only images with high correlation are selected. As a result of this selection, the m frames acquired at the same position yk are reduced to q frames.
  • the possible value of q is 1 ⁇ q ⁇ m.
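  • steps S107–S108 can be sketched as follows (frame 0 is used as the template for brevity; the threshold value is an assumed placeholder):

```python
import numpy as np

def select_frames(frames: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Keep the template plus every frame whose Pearson correlation with
    the template is at or above the threshold (steps S107-S108)."""
    template = frames[0].ravel()
    kept = [frames[0]]
    for frame in frames[1:]:
        r = np.corrcoef(template, frame.ravel())[0, 1]
        if r >= threshold:
            kept.append(frame)
    return np.stack(kept)  # q frames, 1 <= q <= m
```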
  • in step S109, the tomographic image generation unit 202 aligns the tomographic images of the q frames selected in step S108.
  • alternatively, the correlation may be calculated for all pairs of frames, the sum of the correlation coefficients obtained for each frame, and the frame having the maximum sum selected as the template.
  • the amount of misalignment (δX, δY, δθ) is obtained by collating each frame with the template.
  • Normalized Cross-Correlation (NCC), an index showing the degree of similarity, is calculated while changing the position and angle of the template image, and the difference in image position at which this value is maximized is obtained as the amount of misalignment.
  • the index showing the degree of similarity can be variously changed as long as it is a measure of the similarity between image features in the template and the frame; for example, Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD), Zero-mean Normalized Cross-Correlation (ZNCC), Phase-Only Correlation (POC), and the like may be used.
  • the tomographic image generation unit 202 applies position correction to the (q−1) frames other than the template according to the amount of misalignment (δX, δY, δθ) and aligns the frames. If q is 1, this step is not performed.
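  • a minimal alignment sketch under simplifying assumptions (integer-pixel translation only; the text also searches over angle δθ and can reach sub-pixel precision thanks to zero padding):

```python
import numpy as np

def zncc(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation between two equal-size images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def register_shift(template: np.ndarray, frame: np.ndarray, max_shift: int = 3):
    """Exhaustive search for the (dy, dx) shift maximizing ZNCC."""
    best_score, best_shift = -2.0, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
            score = zncc(template, shifted)
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift
```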
  • in step S110, the tomographic image generation unit 202 calculates the motion contrast.
  • the variance is calculated for each pixel at the same position across the intensity images of the q frames selected in step S108 and aligned in step S109, and this variance is used as the motion contrast.
  • any index may be used as the motion contrast as long as it represents the change of each pixel across the plurality of tomographic images at the same Y position.
  • when the feature amount cannot be calculated, the step may be completed with the feature amount set to 0; alternatively, when the motion contrast of the adjacent images at yk−1 and yk+1 can be obtained, the value may be interpolated from the preceding and following variance values. In this case, the interpolated value may be flagged as an abnormality for the feature amount that could not be calculated correctly. Further, the Y position where the feature amount could not be calculated may be stored, and the frame that could not be calculated may be automatically rescanned. Alternatively, a message prompting measurement of the m frames again may be issued without automatically rescanning.
  • in step S111, the tomographic image generation unit 202 averages the intensity images aligned in step S109 and generates an intensity-averaged image.
  • in step S112, the tomographic image generation unit 202 performs threshold processing of the motion contrast output in step S110.
  • as the threshold value, an area in which only random noise is displayed on the noise floor is extracted from the intensity-averaged image output by the tomographic image generation unit 202 in step S111, the standard deviation σ is calculated, and the threshold is set to the average motion contrast value of the noise floor + 2σ.
  • for each pixel, the motion contrast value below the threshold is set to 0. By this threshold processing, motion contrast derived from random noise is removed and noise is reduced.
  • the threshold value is set here as the average motion contrast value of the noise floor + 2σ, but the threshold value is not limited to this.
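  • a sketch of this threshold processing (the text is ambiguous about whether σ is taken over the intensity or the motion contrast in the noise-floor region; here it is taken over the motion contrast, and noise_mask is assumed to come from the intensity-averaged image):

```python
import numpy as np

def threshold_motion_contrast(mc: np.ndarray, noise_mask: np.ndarray) -> np.ndarray:
    """Zero out motion contrast below (noise-floor mean + 2*sigma), step S112."""
    noise = mc[noise_mask]               # region containing only random noise
    thr = noise.mean() + 2 * noise.std()
    out = mc.copy()
    out[out < thr] = 0                   # remove noise-derived motion contrast
    return out
```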
  • in step S113, the tomographic image generation unit 202 determines whether the index k has reached the predetermined number (n). That is, it is determined whether the image correlation calculation, image selection, alignment, intensity-image averaging, motion contrast calculation, and threshold processing have been performed at all n Y positions. If not, the process returns to step S101; when the predetermined number is reached, the process proceeds to the next step S114. In this embodiment, since an OCTA image of a single tomographic plane is generated, this step completes with only one Y position.
  • at the completion of step S113, 3D volume data (3D OCTA data) of the intensity-averaged images and motion contrast of the tomographic images at all Y positions has been generated.
  • in step S114, it is determined whether an OCTA tomographic image or an en-face image is generated.
  • when generating an OCTA tomographic image, the process proceeds to step S115 and a motion contrast tomographic image is generated.
  • when generating an en-face image, the process proceeds to step S116, and a motion contrast front image integrated in the depth direction is generated from the generated three-dimensional OCTA data.
  • the integrated image depth range may be arbitrarily set.
  • the layer boundaries of the fundus retina are extracted based on the intensity-averaged image generated in step S111, and a motion contrast front image is generated so as to include a desired layer.
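  • the en-face generation of step S116 can be sketched as an integration of the 3D motion contrast between two segmented layer boundaries (array layout and function name are assumptions):

```python
import numpy as np

def enface_projection(mc_volume, z_top, z_bottom):
    """Sum 3D motion contrast (z, y, x) over a per-pixel depth range.

    z_top and z_bottom are (y, x) integer maps of layer-boundary depths,
    e.g. obtained by segmenting the averaged intensity volume.
    """
    nz = mc_volume.shape[0]
    z = np.arange(nz)[:, None, None]
    mask = (z >= z_top[None]) & (z < z_bottom[None])  # inside the layer range
    return (mc_volume * mask).sum(axis=0)             # integrate along depth
```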
  • the tomographic image generation unit 202 ends the signal processing flow.
  • the tomographic image generation unit 202 generates a tomographic image as shown in FIG. 7A.
  • the fundus has a multilayered structure composed of the choroid 701, retinal pigment epithelium layer 702, photoreceptor layer 703, external limiting membrane 704, outer nuclear layer 705, outer plexiform layer 706, inner nuclear layer 707, inner plexiform layer 708, ganglion cell layer 709, optic nerve fiber layer 710, vitreous body 711, and the like. Large and small blood vessels 712 are present in specific layers. In this embodiment, analysis of the ganglion cells 700 present in the ganglion cell layer 709 will be described.
  • blood vessel position information is extracted by OCTA processing from the tomographic image generated by the tomographic image generation unit 202 in the blood vessel position analysis unit 203, and the blood vessel site 712 is extracted as shown in FIG. 7B.
  • Blood vessels 712 are present in a plurality of layers in the tomographic image, and are also present in the ganglion cell layer 709 and the optic nerve fiber layer 710, which are the layers to be analyzed. Since blood flow is also present near the choroid 701, it is often visualized by OCTA.
  • the obtained blood vessel position information is sent to the cell analysis region determination unit 204.
  • the tomographic image is also sent from the tomographic image generation unit 202 to the cell analysis region determination unit 204.
  • the region of the ganglion cell layer 709 to be analyzed is extracted from the tomographic image (FIG. 7A) and used as the analysis candidate region (white portion in FIG. 7C).
  • the blood vessel position 712 and the artifact region 713 caused by the blood vessel (a region having the same width as the blood vessel and located deeper than it) are determined and set as the analysis exclusion region (white portion in FIG. 7D).
  • the final analysis region (white part in FIG. 7E) is determined from the analysis candidate region (FIG. 7C) and the exclusion region (FIG. 7D).
  • the white portion 715 in FIG. 7E is an analysis candidate region
  • the shaded hatched portion 714 is an exclusion region (also referred to as a blood vessel region) due to the influence of blood vessels.
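  • the mask logic of FIG. 7C–7E can be sketched with boolean arrays of shape (depth, x); the artifact model (everything directly below a vessel) follows the parenthetical description above, and both function names are assumptions:

```python
import numpy as np

def vessel_artifact(vessel: np.ndarray) -> np.ndarray:
    """Shadow artifact: same lateral extent as the vessel, strictly deeper."""
    below = np.maximum.accumulate(vessel.astype(np.uint8), axis=0).astype(bool)
    return below & ~vessel

def final_analysis_region(candidate, vessel, artifact):
    """Analysis region = candidate layer region minus the exclusion region."""
    return candidate & ~(vessel | artifact)
```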
  • the analysis area can be further modified manually using a mouse, a stylus, or the like. Since detection of the ganglion cell layer and blood vessels does not always succeed, the ability to manually modify the analysis area is useful.
  • the tomographic image (FIG. 7A) and the information of the analysis area (FIG. 7E) are sent to the cell analysis unit 205, and the ganglion cells 700 included in the analysis area are analyzed as shown by the circles shown in FIG.
  • since the ganglion cell contains a low-luminance region near its center, it is effective to detect cells by, for example, detecting valleys of brightness in the tomographic image.
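  • a naive version of such valley detection (3×3 local-minimum test with an assumed brightness threshold; a practical implementation would add smoothing and size constraints):

```python
import numpy as np

def detect_dark_centers(img, region_mask, thr):
    """Candidate cell centres: pixels inside the analysis region that are
    darker than thr and are the minimum of their 3x3 neighbourhood."""
    h, w = img.shape
    centers = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            if region_mask[y, x] and img[y, x] < thr and img[y, x] == patch.min():
                centers.append((y, x))
    return centers
```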
  • the cell analysis includes analysis of the number, density, nearest-neighbor distance, arrangement, size, shape, and the like of the cells to be analyzed, and the analysis results are displayed on the display 206.
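  • the count/density/nearest-neighbour part of the analysis can be sketched as follows (inputs and units are assumptions):

```python
import numpy as np

def cell_metrics(centers: np.ndarray, area_mm2: float) -> dict:
    """Basic statistics over detected cell centres (N x 2 coordinates)."""
    n = len(centers)
    d = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore self-distance
    return {
        "count": n,
        "density": n / area_mm2,                      # cells per unit area
        "mean_nn_dist": float(d.min(axis=1).mean()),  # mean nearest-neighbour distance
    }
```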
  • the analysis result may be displayed numerically, or may be displayed by superimposing it on a tomographic image, a plane image, or a 3D image.
  • the display color may be changed from that of a tomographic image or the like in order to improve visibility.
  • the density in a unit area may be displayed in different colors according to the height thereof.
  • FIG. 9 shows an example of display by the display unit 206.
  • the fundus observation image 901 and the entire tomographic image 902 are displayed on the display GUI 900, and the tomographic image 904 of the analysis target area 903 is also displayed.
  • the ganglion cell analysis region 905 based on the tomographic image 904 is displayed, and the white round dots 906 indicate the positions of individual ganglion cells in the ganglion cell analysis region.
  • the numerical analysis result 907 obtained based on the extracted ganglion cells is displayed.
  • pressing the button 908 shifts the GUI to manual mode, in which corrections can be made by designating the analysis area, the detected cell positions, and any undetected cell positions.
  • 318 is an AO-SLO unit and 324 is an AO-OCT unit.
  • 301 is a light source, and an SLD light source (Super Luminescent Diode) having a wavelength of 760 nm was used.
  • the wavelength of the light source 301 is not particularly limited, but for fundus imaging, about 750 to 1500 nm is preferably used in order to reduce the glare of the subject and maintain the resolution.
  • an SLD light source is used here, but a laser or the like may also be used.
  • the light sources for fundus imaging and wavefront measurement are shared here, but separate light sources may be used and their beams combined partway along the optical path.
  • a wavelength different from that of the AO-OCT light source is selected so that the light can be branched off from the AO-OCT optical path, and the optical path is configured to branch at a dichroic mirror.
  • the light emitted from the light source 301 passes through the single mode optical fiber 302 and is emitted as parallel light rays (measurement light 305) by the collimator 303.
  • the polarization of the emitted light is adjusted by a polarization regulator (not shown) provided in the path of the single-mode optical fiber 302.
  • the emitted measurement light 305 passes through the light dividing unit 304 including a beam splitter, further passes through the beam splitter 319 for branching the optical path with the OCT, and is guided to the adaptive optics system.
  • the adaptive optics system is composed of a light dividing unit 306, a wavefront sensor 314, a wavefront correction device 308, and reflection mirrors 307-1 to 307-4 for guiding the light to them.
  • the reflection mirrors 307-1 to 307-4 are installed so that at least the pupil of the eye 311 to be inspected, the wavefront sensor 314, and the wavefront correction device 308 are optically conjugate.
  • as the light dividing unit 306, a beam splitter was used in this embodiment.
  • the measurement light 305 transmitted through the light dividing unit 306 is reflected by the reflection mirrors 307-1 and 307-2 and is incident on the wavefront correction device 308.
  • the measurement light 305 reflected by the wavefront correction device 308 is further reflected by the reflection mirrors 307-3 and 307-4 and guided to the scanning optical system 309-1.
  • a deformable mirror is used as the wavefront correction device 308.
  • the deformable mirror is a mirror whose reflecting surface is divided into a plurality of regions; the wavefront of the reflected light can be changed by changing the angle of each region.
  • as the wavefront correction device, a spatial phase modulator using a liquid crystal element may be used instead of the deformable mirror. In that case, two spatial phase modulators may be used to correct the wavefronts of both polarization components of the light from the eye 311 to be inspected.
  • the light reflected by the reflection mirrors 307-3 and 4 is scanned one-dimensionally or two-dimensionally by the scanning optical system 309-1.
  • one resonance scanner and one galvano scanner are used for the scanning optical system 309-1 for the main scanning (horizontal direction of the fundus) and for the sub-scanning (vertical direction of the fundus).
  • two galvano scanners may be used for the scanning optical system 309-1.
  • an optical element such as a mirror or a lens is used between the scanners in order to bring each scanner in the scanning optical system 309-1 into an optically conjugated state.
  • the scanning optical system further has a tracking mirror 309-2.
  • the tracking mirror 309-2 is composed of two galvano scanners, and the imaging region can be further moved in two directions.
  • the scanning optical system 309-1 also serves as the tracking mirror 309-2
  • the tracking mirror 309-2 is configured only in the resonance scanner direction of the scanning optical system 309-1
  • the tracking mirror 309-2 is a two-dimensional mirror.
  • a relay optical system (not shown) is often used.
  • the measurement light 305 scanned by the scanning optical systems 309-1 and 309-2 is applied to the eye 311 to be inspected through the eyepieces 310-1 and 310-2.
  • the measurement light applied to the eye 311 to be inspected is reflected or scattered at the fundus.
  • by adjusting the positions of the eyepieces 310-1 and 310-2, it is possible to perform optimum irradiation according to the diopter of the eye 311 to be inspected.
  • a lens is used for the eyepiece, but a spherical mirror or the like may be used.
  • the reflected light reflected or scattered from the retina of the eye 311 to be inspected travels back along the incident path, and part of it is reflected by the light dividing unit 306 toward the wavefront sensor 314 and used to measure the wavefront of the beam.
  • the light reflected by the light dividing unit 306 toward the wavefront sensor 314 passes through the relay optical systems 316-1 and 316-2 and enters the wavefront sensor 314.
  • an aperture 317 is installed between the relay optical systems 316-1 and 316-2 to prevent unnecessary light reflected or scattered from lenses and the like from entering the wavefront sensor 314.
  • a Shack-Hartmann sensor is used as the wavefront sensor 314.
  • the wavefront sensor 314 is connected to the adaptive optics control unit 315 and transmits the measured wavefront to the adaptive optics control unit 315.
  • the wavefront correction device 308 is also connected to the adaptive optics control unit 315 and performs the modulation instructed by the adaptive optics control unit 315.
  • the adaptive optics control unit 315 calculates, based on the wavefront obtained from the measurement result of the wavefront sensor 314, the modulation amount (correction amount) of each element of the wavefront correction device required to correct the wavefront to an aberration-free one, and instructs the wavefront correction device 308 accordingly.
  • the wavefront measurement and the instruction to the wavefront correction device are repeated, and feedback control is performed so that the wavefront is always optimized.
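  • this feedback loop can be sketched as a classic integrator-based AO controller; the interaction matrix, its calibration, and the gain are assumptions, since the text only states that measurement and correction are repeated:

```python
import numpy as np

def ao_step(command, slopes, interaction_matrix, gain=0.3):
    """One iteration: map Shack-Hartmann slope measurements to actuator
    updates through the pseudo-inverse of a calibrated interaction matrix,
    applied with an integrator gain so the residual error shrinks each loop."""
    reconstructor = np.linalg.pinv(interaction_matrix)
    return command - gain * reconstructor @ slopes
```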
  • Part of the reflected light transmitted through the light dividing unit 306 is reflected by the light dividing unit 304, condensed by the condensing lens 312 on the optical sensor 313 having a pinhole, and the light is converted into an electric signal.
  • the optical sensor 313 is connected to the control unit 334, and the control unit 334 constructs a planar image based on the obtained electric signal and the position of the optical scan, and displays it on the display 335 as an AO-SLO image.
  • 324 is the AO-OCT unit, which is composed of a light source 320, a fiber coupler 321, a reference optical system 325, and a spectroscope 326 as main components.
  • 320 is a light source; an SLD light source having a wavelength of 840 nm was used.
  • the light source 320 need only have low coherence, and an SLD having a wavelength width of 30 nm or more is preferably used.
  • an ultrashort pulse laser such as a titanium sapphire laser can be used as a light source.
  • the wavelength is different from that of the AO-SLO light source and the wavelength is branched by a dichroic mirror or the like.
  • the light emitted from the light source 320 is guided to the fiber coupler 321 through the single mode optical fiber.
  • the fiber coupler 321 branches the measurement optical path 322 and the reference optical path.
  • a fiber coupler having a branching ratio of 10:90 was used, and 10% of the input light amount was configured to go to the measurement light path 322.
  • The light that has passed through the measurement optical path 322 is emitted as a parallel beam of measurement light by the collimator 323.
  • The polarization of the emitted light is adjusted by a polarization controller (not shown) provided in the path of the single-mode optical fiber 322.
  • Alternatively, an optical component for adjusting polarization may be arranged in the optical path after the light is emitted from the collimator 323.
  • An optical element for adjusting the dispersion characteristics of the measurement light and an optical element for adjusting the chromatic aberration characteristics may also be provided in the optical path.
  • The measurement light is combined with the AO-SLO measurement light by the beam splitter 319 for optical branching, and as measurement light 305 it follows the same optical path as the AO-SLO light to irradiate the eye to be examined 311.
  • The return light scattered and reflected by the eye to be examined 311 travels the outward path in the reverse direction together with the AO-SLO light, is reflected by the beam splitter 319 for optical branching, and returns to the fiber coupler 321 through the optical fiber 322.
  • The wavefront of the AO-OCT light is also measured by the wavefront sensor 314 and corrected by the wavefront correction device 308.
  • However, the method of wavefront correction is not limited to this: when only the wavefront of the AO-OCT light or only that of the AO-SLO light is to be measured, an optical filter is added in front of the wavefront sensor 314. It is also possible to switch the light to be measured by dynamically inserting and removing the optical filter.
  • The reference light that has passed through the reference optical path is emitted by the collimator 327, reflected by the optical path length variable unit 329, and returned to the fiber coupler 321.
  • The return light and the reference light that reach the fiber coupler 321 are combined and guided as interference light to the spectrometer 326 through an optical fiber.
  • The interference light entering the spectrometer 326 is collimated by the collimator 330 and separated by wavelength by the grating 331.
  • The dispersed light is focused onto the line sensor 333 through the lens system 332.
  • The line sensor 333 may be composed of a CCD sensor or a CMOS sensor.
  • A tomographic image of the fundus is constructed by the control unit 334 based on the interference light information dispersed by the spectrometer 326.
  • The control unit 334 controls the optical path length variable unit 329 and can acquire an image at a desired depth position. The control unit 334 also controls the scanning units 309-1 and 309-2 at the same time and can acquire an interference signal at an arbitrary position. Three-dimensional volume data is acquired by creating tomographic images from the obtained interference signals.
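As a rough illustration of how a depth profile is obtained from the spectrally dispersed interference light, a simplified spectral-domain reconstruction is shown below; a real instrument would also resample from wavelength to wavenumber and compensate dispersion, and the function name is an assumption:

```python
import numpy as np

def ascan_from_spectrum(spectral_fringe):
    """Reconstruct a depth profile (A-scan) from one spectral interference
    fringe as recorded by the line sensor: remove the DC term, then an
    inverse FFT maps the spectral modulation frequency to depth."""
    fringe = spectral_fringe - spectral_fringe.mean()
    depth_profile = np.abs(np.fft.ifft(fringe))
    return depth_profile[: len(fringe) // 2]  # keep the non-mirrored half

# Toy fringe: a single reflector at depth bin 40 produces a cosine modulation.
k = np.arange(1024)
fringe = 1.0 + 0.5 * np.cos(2 * np.pi * 40 * k / 1024)
profile = ascan_from_spectrum(fringe)
peak_depth = int(np.argmax(profile))
```

A tomographic image (B-scan) is then a column-wise stack of such A-scans as the scanning units sweep the beam laterally.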
  • the scan pattern control of AO-OCT and the method of constructing an OCTA image are the same as those in the first embodiment.
  • AO-OCT can obtain very high resolution and high definition tomographic images, so that the cell structure of ganglion cells and the like can be clearly visualized, which is very useful for cell level analysis.
  • However, because the resolution is very high, slight displacements of the structure and changes in brightness occur between frames when the same part is imaged continuously. Therefore, when a motion contrast image such as OCTA is created, these minute changes become a noise component and make the motion contrast image noisy.
  • When the blood vessel structure is visualized by a motion contrast image as in the AO-OCT of the present embodiment, the blood vessel structure is not so minute, so it is better to reduce the resolution to some extent to obtain a better motion contrast image.
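A minimal sketch of why frame-to-frame fluctuation yields a motion contrast signal: pixel-wise variance across repeated frames stays low for static tissue and rises where blood flows. The actual OCTA computation (for example, decorrelation) is not specified here, and all names are illustrative:

```python
import numpy as np

def motion_contrast(frames):
    """Pixel-wise motion contrast from repeated tomograms of the same
    position: static tissue varies little between frames, flowing blood
    varies a lot."""
    frames = np.asarray(frames, dtype=float)
    return np.var(frames, axis=0)

rng = np.random.default_rng(0)
frames = np.full((8, 16, 16), 100.0)          # 8 repeats of static tissue
frames[:, 8, 8] += rng.normal(0.0, 20.0, 8)   # one fluctuating "vessel" pixel
mc = motion_contrast(frames)                  # high only at the vessel pixel
```

This also shows why a very high resolution hurts: any sub-pixel tissue displacement between frames adds variance everywhere, not just at vessels.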
  • FIG. 2B shows the unit configuration of the present embodiment, most of which is the same as in the first embodiment.
  • The tomographic image generated by the tomographic image generation unit 202 is sent to the blood vessel position analysis unit 203 after its resolution has been reduced by the resolution changing unit 207.
  • For the resolution reduction, general filtering such as a Gaussian filter or a median filter can be used.
  • Alternatively, the resolution may be improved by performing super-resolution processing in this unit.
  • The blood vessel position analysis unit 203 analyzes the blood vessel position in the same manner as in the first embodiment using the low-resolution tomographic image sent from the resolution changing unit 207, and the analysis region is extracted by the cell analysis region determination unit 204.
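The resolution reduction performed by the resolution changing unit 207 before blood vessel analysis can be sketched with simple block averaging, a stand-in for the Gaussian or median filtering mentioned above (function name and factor are illustrative):

```python
import numpy as np

def reduce_resolution(image, factor=2):
    """Lower the effective resolution of a tomogram by averaging
    factor-by-factor blocks of pixels."""
    h, w = image.shape
    h2, w2 = h - h % factor, w - w % factor  # crop to a multiple of factor
    blocks = image[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)
low = reduce_resolution(img, factor=2)       # a 4x4 image becomes 2x2
```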
  • the tomographic image sent to the cell analysis region determination unit 204 maintains the original high resolution.
  • The tomographic image may be a single frame, or may be an averaged image obtained by adding and averaging a plurality of frames.
  • The analysis region information and the tomographic image are finally sent to the cell analysis unit 205, and the cell analysis is executed in the same manner as in the first embodiment.
  • The tomographic image to be sent may likewise be a single frame or an averaged image obtained by adding and averaging a plurality of frames.
  • the analyzed result is displayed on the display as in the first embodiment and provided to the operator.
  • The present embodiment is an example of an adaptive optics apparatus that has both AO-SLO and AO-OCT functions in the same device, with the eye as the object to be measured, and is configured to obtain blood vessel information from the outside.
  • FIG. 2C shows the unit configuration of the present embodiment, and most of them are the same as those of the first embodiment.
  • the tomographic image generated by the tomographic image generation unit 202 is sent to the cell analysis area determination unit 204.
  • the OCTA capture unit 208 captures an OCTA image from the outside.
  • the OCTA image may be an image captured and generated by another OCT device, and there is no limitation on the OCTA generation method.
  • The blood vessel position analysis unit 203 detects blood vessels in the same region as the current imaging position from the tomographic image and the planar image associated with the acquired OCTA image, and generates blood vessel position information.
  • the obtained blood vessel position information is sent to the cell analysis region determination unit 204.
  • the analysis area information and tomographic image are finally sent to the cell analysis unit 205, and cell analysis is executed.
  • the tomographic image to be sent may be one frame, or may be an averaging image obtained by adding and averaging a plurality of frames.
  • the analyzed result is displayed on the display as in the first embodiment and provided to the operator.
  • In the present embodiment, the tomographic image is divided into a plurality of regions, and the ganglion cells in each region are analyzed.
  • Here, the divided regions have the same size; however, the number of regions may be specified by the examiner within a predetermined range (for example, 2 to 10), and the size may be determined according to the specified number and the size of the tomographic image.
  • For the region setting, the macula is automatically detected, region 1 is set at a predetermined distance from the macula (for example, a distance slightly shorter than the distance from the macula at which ganglion cells are anatomically present), and the subsequent regions are set sequentially so that each touches the region set before it. Region 1 may also be set by the examiner, with the remaining regions then set automatically.
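The automatic region setting described above — region 1 beginning at a predetermined distance from the macula, with equal-width regions set sequentially so that each touches the previous one — can be sketched as follows (names and numeric values are illustrative):

```python
import numpy as np

def assign_regions(distances_from_macula, start, width, n_regions):
    """Map each sample's distance from the macula center to one of
    n_regions contiguous, equal-width analysis regions; region 1 starts
    at `start`. Samples outside every region get index 0."""
    d = np.asarray(distances_from_macula, dtype=float)
    idx = np.floor((d - start) / width).astype(int) + 1
    idx[(d < start) | (idx > n_regions)] = 0
    return idx

# Five regions of width 0.5 mm starting 0.3 mm from the macula (toy values).
regions = assign_regions([0.1, 0.35, 0.6, 1.4, 2.6],
                         start=0.3, width=0.5, n_regions=5)
```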
  • FIG. 10 shows an example of display by the display unit 206.
  • the display GUI 1000 displays a fundus observation image 1001 and an overall tomographic image 1002.
  • In the analysis region 1003, five analysis regions 1 to 5 are set that differ in distance from the center of the macula and are in contact with each other.
  • The analysis area included in any of the analysis regions 1 to 5 is displayed as the ganglion cell analysis area 1005, and the white round dots indicate the positions of individual ganglion cells in the ganglion cell analysis area.
  • The numerical analysis results for each analysis region obtained from the extracted ganglion cells are displayed in 1007; here the results of analysis regions 1 to 5 are displayed side by side. Further, in order to make the tendency across the analysis regions easier to see, a graph such as 1008 may be generated and displayed.
  • In graph 1008, the densities of ganglion cells in each analysis region are plotted (normalized densities are used because the size of the blood vessel region in each analysis region differs).
  • In addition to the density, other analysis results may be displayed, such as the number of cells (a number normalized to the size of the narrowest analysis area is used because the size of the area excluding the blood vessel region differs between the analysis regions) and the average nearest-neighbor distance.
  • The average nearest-neighbor distance is the average of the shortest distances between the ganglion cells analyzed in each analysis region. The present embodiment may also be applied to the apparatus having the configuration described in the second and third embodiments.
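The per-region numbers discussed above — cell count, density normalized by the area left after excluding the blood vessel region, and the average nearest-neighbor distance — can be sketched as follows (function name and units are illustrative assumptions):

```python
import numpy as np

def cell_metrics(cell_xy, region_area, vessel_area):
    """Per-region metrics for detected ganglion cells: count, density over
    the non-vessel area, and mean nearest-neighbour distance."""
    cells = np.asarray(cell_xy, dtype=float)
    usable_area = region_area - vessel_area   # exclude the blood vessel region
    density = len(cells) / usable_area
    # All pairwise distances; each cell's nearest neighbour, then the mean.
    diff = cells[:, None, :] - cells[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)
    mean_nn = dist.min(axis=1).mean()
    return len(cells), density, mean_nn

count, density, mean_nn = cell_metrics(
    [(0, 0), (0, 3), (4, 0)], region_area=100.0, vessel_area=20.0)
```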
  • According to the present embodiment, the ganglion cells in the inner layer of the retina can be analyzed and presented in more detail, and information of high clinical value can be obtained from the tomographic image.
  • The acquired tomographic image is analyzed by the same method as in the first embodiment, and the analysis of the ganglion cells is performed on a plurality of tomographic images obtained by photographing substantially the same position of the same subject over an arbitrary period.
  • The analysis region of each tomographic image is determined by aligning the planar image and the tomographic image so that the analysis positions coincide.
  • FIG. 11 shows an example of display by the display unit 206.
  • The display GUI 1100 displays fundus observation images 1101-1 to 1101-4 and overall tomographic images 1102-1 to 1102-4 taken on different shooting dates. In order to improve visibility for the operator, the shooting date and time may be displayed together with each tomographic image.
  • 1103 is the analysis region; the analysis region in each of the tomographic images 1102-1 to 1102-4 is determined after the tomographic images have been aligned with each other so that the analysis is performed on the same area.
  • The numerical analysis results obtained from the ganglion cells extracted in each analysis region are displayed in 1107; here the results of shooting dates 1 to 4 are displayed side by side. Further, in order to make the tendency between the shooting dates easier to see, a graph such as 1108 may be generated and displayed. In graph 1108 the density of ganglion cells is plotted, but in addition to the density, other analysis results such as the number of cells and the average nearest-neighbor distance may be displayed as a graph. The present embodiment may also be applied to the apparatus having the configuration described in the second and third embodiments.
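The alignment of images from different shooting dates so that the same area is analyzed can be illustrated with integer-pixel phase correlation, a common registration technique; the patent does not specify the alignment algorithm, and subpixel refinement is omitted here:

```python
import numpy as np

def estimate_shift(reference, moved):
    """Estimate the integer (row, col) translation of `moved` relative to
    `reference` by phase correlation, so analysis regions from different
    shooting dates can be placed over the same tissue."""
    cross = np.fft.fft2(reference) * np.conj(np.fft.fft2(moved))
    cross /= np.abs(cross) + 1e-12            # keep phase information only
    corr = np.abs(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = []
    for p, n in zip(peak, corr.shape):
        if p > n // 2:                        # unwrap to a signed offset
            p -= n
        shift.append(int(-p))
    return tuple(shift)

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
moved = np.roll(ref, shift=(5, -3), axis=(0, 1))  # simulate eye movement
dy, dx = estimate_shift(ref, moved)
```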
  • According to the present embodiment, the temporal course of the ganglion cells in the inner layer of the retina can be grasped, and information of high clinical value can be obtained from the tomographic image.
  • (FIG. 6) Another analysis example using the OCT apparatus having the configuration described in the first embodiment will be described.
  • This example is an example of analyzing ganglion cells together with information on the eye to be examined.
  • The acquired tomographic image is analyzed by the same method as in the first embodiment; when the ganglion cells are analyzed, the tomographic image is divided into a plurality of regions and the ganglion cells in each region are analyzed, as in the fourth embodiment.
  • The operator sets information about the photographed eye at the time of shooting or at another timing, such as before or after shooting.
  • As information related to the photographed eye, basic information such as age, gender, and race, and eye-related information such as visual acuity, diopter, axial length, and intraocular pressure can be set. A disease or a medical history (disease history) may also be set.
  • five analysis regions 1 to 5 are set according to the distance from the center of the macula and the analysis is executed.
  • FIG. 12 shows an example of the analysis result displayed by the display unit 206.
  • 1208-1 is an example of analysis with respect to age, and 1208-2 is an example of analyzing the disease risk of the age group to which the subject belongs.
  • Statistical data on ganglion cell density in each age group is set in advance in this device, and when the analysis result is displayed in 1208-1, the normal range of each age group is displayed in a different color.
  • 1210-1 indicates the statistical normal range for the age group of 40 years or younger, 1210-2 for the 40-60 age group, 1210-3 for the 60-70 age group, and 1210-4 for the age group of 80 years or older. By displaying the result together with the age of the subject in this way, it is possible to easily grasp whether or not the ganglion cell density of the subject is within the statistically normal range for that age.
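The comparison of a subject's density against age-group statistics can be sketched as a simple lookup. The ranges and age bins below are placeholder values, not the statistical data actually set in the device:

```python
# Hypothetical statistical normal ranges of ganglion cell density
# (cells per mm^2); a real device would hold measured population data.
NORMAL_RANGES = {
    "under_40": (25000, 35000),
    "40_to_60": (22000, 32000),
    "60_to_80": (19000, 29000),
    "over_80":  (16000, 26000),
}

def age_group(age):
    """Pick the age bin a subject belongs to (illustrative bins)."""
    if age < 40:
        return "under_40"
    if age < 60:
        return "40_to_60"
    if age < 80:
        return "60_to_80"
    return "over_80"

def within_normal_range(age, density):
    """True if the measured density lies inside the statistical normal
    range of the subject's age group."""
    low, high = NORMAL_RANGES[age_group(age)]
    return low <= density <= high

ok = within_normal_range(65, 21000)   # inside the 60-80 placeholder range
```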
  • (FIG. 7) Another analysis example using the OCT apparatus having the configuration described in the first embodiment will be described.
  • This example is an example of analyzing the analysis result of the ganglion cells together with other measurement information of the eye to be examined.
  • The acquired tomographic image is analyzed by the same method as in the first embodiment, and the ganglion cells are analyzed.
  • As the other measurement information, layer thickness information such as the total retinal thickness and the nerve fiber layer thickness analyzed from the tomographic image, visual field sensitivity information acquired from another measuring device, autofluorescence intensity, and the like can be used.
  • Here, the region is divided so as to match the analysis region of the other acquired measurement result, and the ganglion cells in each region are analyzed.
  • As the analysis target, three-dimensional data acquired by a radial scan is suitable for this analysis, but the scanning method is not particularly limited as long as volume data is available.
  • FIG. 13 shows an example of the analysis result displayed by the display unit 206.
  • the fundus observation image 1301 is displayed on the display GUI 1300.
  • the acquired visual field sensitivity information is superimposed and displayed on the fundus observation image 1301, and the analysis areas 1 to 4 are set accordingly.
  • The layer thickness map 1313 obtained by analyzing the tomographic image is also displayed, together with the corresponding analysis regions. The present embodiment may also be applied to the apparatus having the configuration described in the second and third embodiments.
  • the results of each analysis area 1 to 4 are displayed as 1307.
  • The visual field sensitivity 1314 and the layer thickness information 1315 of each analysis region are also displayed in 1307, and can be viewed together with analysis results such as the ganglion cell density.
  • According to the present embodiment, the ganglion cells in the inner layer of the retina can be analyzed in detail together with their correlation with the visual field sensitivity and the layer thickness, and information of high clinical value can be obtained from the tomographic image.
  • In the above embodiments, when the analysis target region is set, it is set after excluding the blood vessel region, which is the region related to blood vessels; alternatively, regions of low brightness may first be extracted as ganglion cell candidates, and the process of removing the regions related to blood vessels may then be performed among them.
  • The process of removing a region related to a blood vessel may be performed by determining whether or not the extracted region is circular, that is, whether its length in one direction is much longer than its length in the other direction. In other words, it may be determined whether or not the extracted ganglion cell candidates are isolated.
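The shape test described above — keeping roughly circular low-brightness regions as isolated ganglion cell candidates and removing elongated ones as blood vessels — can be sketched with a bounding-box aspect ratio (the threshold value is an illustrative assumption):

```python
import numpy as np

def is_isolated_cell(region_mask):
    """True if a candidate region is roughly circular (similar extent in
    both directions); an elongated region is treated as a blood vessel."""
    ys, xs = np.nonzero(region_mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    aspect = max(height, width) / min(height, width)
    return bool(aspect < 2.0)            # threshold chosen for illustration

cell = np.zeros((10, 10), dtype=bool)
cell[4:7, 4:7] = True                    # compact, roughly circular blob
vessel = np.zeros((10, 10), dtype=bool)
vessel[5, 1:9] = True                    # long, thin structure
```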
  • The present invention can also be achieved by configuring the device as follows. A recording medium (or storage medium) on which the program code (computer program) of software that realizes the functions of the above-described embodiments is recorded is supplied to the system or apparatus; the recording medium is a computer-readable recording medium. The computer (or CPU or MPU) of the system or apparatus then reads and executes the program code stored in the recording medium. In this case, the program code itself read from the recording medium realizes the functions of the above-described embodiments, and the recording medium on which the program code is recorded constitutes the present invention.
  • The embodiments can also be realized by a circuit (for example, an ASIC) that realizes one or more of the functions.

Abstract

The present invention relates to an image analysis method for analyzing a tomographic image of an eye to be examined, generated on the basis of return light from the eye to be examined irradiated with measurement light, the image analysis method being characterized in that it comprises: an extraction step of extracting a predetermined layer from the tomographic image; a specification step of specifying a blood vessel region in the predetermined layer; and an analysis step of analyzing ganglion cells on the basis of the predetermined layer and the specified blood vessel region.
PCT/JP2020/011457 2019-04-09 2020-03-16 Procédé d'analyse d'image et dispositif d'analyse d'image WO2020209012A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2019-074366 2019-04-09
JP2019074366 2019-04-09
JP2019114966A JP2020171664A (ja) 2019-04-09 2019-06-20 画像解析方法及び画像解析装置
JP2019-114966 2019-06-20

Publications (1)

Publication Number Publication Date
WO2020209012A1 true WO2020209012A1 (fr) 2020-10-15

Family

ID=72751020

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/011457 WO2020209012A1 (fr) 2019-04-09 2020-03-16 Procédé d'analyse d'image et dispositif d'analyse d'image

Country Status (1)

Country Link
WO (1) WO2020209012A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007325831A (ja) * 2006-06-09 2007-12-20 Topcon Corp 眼底観察装置、眼科画像処理装置及び眼科画像処理プログラム
JP2010279440A (ja) * 2009-06-02 2010-12-16 Canon Inc 画像処理装置及びその制御方法、コンピュータプログラム
JP2015131107A (ja) * 2013-12-13 2015-07-23 株式会社ニデック 光コヒーレンストモグラフィ装置、及びプログラム
JP2017176341A (ja) * 2016-03-29 2017-10-05 キヤノン株式会社 情報処理装置、情報処理装置の制御方法、及び該制御方法の実行プログラム
JP2019042376A (ja) * 2017-09-06 2019-03-22 キヤノン株式会社 画像処理装置、画像処理方法およびプログラム


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU, Z. L.: "Imaging and quantifying ganglion cells and other transparent neurons in the living human retina", PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, vol. 114, no. 48, 28 November 2017 (2017-11-28), pages 12803 - 12808, XP055748364 *
WELLS-GRAY, E. M.: "Inner retinal changes in primary open-angle glaucoma revealed through adaptive optics-optical coherence tomography", JOURNAL OF GLAUCOMA, vol. 27, no. 11, November 2018 (2018-11-01), pages 1025 - 1028, XP055748362 *

Similar Documents

Publication Publication Date Title
US9456748B2 (en) Ophthalmological apparatus, alignment method, and non-transitory recording medium
US9044167B2 (en) Image processing device, imaging system, image processing method, and program for causing computer to perform image processing
US9408532B2 (en) Image processing apparatus and image processing method
JP6322042B2 (ja) 眼科撮影装置、その制御方法、およびプログラム
US10660514B2 (en) Image processing apparatus and image processing method with generating motion contrast image using items of three-dimensional tomographic data
US10165939B2 (en) Ophthalmologic apparatus and ophthalmologic apparatus control method
US10561311B2 (en) Ophthalmic imaging apparatus and ophthalmic information processing apparatus
JP6652281B2 (ja) 光断層撮像装置、その制御方法、及びプログラム
RU2637851C2 (ru) Устройство обработки изображений и способ управления устройством обработки изображений
JP2022040372A (ja) 眼科装置
JP7348374B2 (ja) 眼科情報処理装置、眼科撮影装置、眼科情報処理方法、及びプログラム
WO2020209012A1 (fr) Procédé d'analyse d'image et dispositif d'analyse d'image
JP7262929B2 (ja) 画像処理装置、画像処理方法及びプログラム
JP2018191761A (ja) 情報処理装置、情報処理方法及びプログラム
JP2018114121A (ja) 情報処理装置、情報処理方法及びプログラム
JP6882242B2 (ja) 眼科装置およびその制御方法
JP2020171664A (ja) 画像解析方法及び画像解析装置
WO2019198629A1 (fr) Dispositif de traitement d'images et son procédé de commande
JP6839310B2 (ja) 光断層撮像装置、その制御方法、及びプログラム
JP7327954B2 (ja) 画像処理装置および画像処理方法
JP6594499B2 (ja) 画像処理装置及び画像処理装置の作動方法
WO2023282339A1 (fr) Procédé de traitement d'image, programme de traitement d'image, dispositif de traitement d'image et dispositif ophtalmique
JP6419238B2 (ja) 眼科装置および位置合わせ方法
JP2018000373A (ja) 眼科撮影装置及びその制御方法、並びに、プログラム
JP2019154933A (ja) 画像処理装置およびその制御方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20788235

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20788235

Country of ref document: EP

Kind code of ref document: A1