WO2019171986A1 - Image processing apparatus and method, and program - Google Patents


Info

Publication number
WO2019171986A1
Authority
WO
WIPO (PCT)
Prior art keywords
blood vessel
image
image processing
motion contrast
processing apparatus
Application number
PCT/JP2019/006835
Other languages
English (en)
Japanese (ja)
Inventor
勇貴 村岡
牧平 朋之
Original Assignee
キヤノン株式会社 (Canon Inc.)
Application filed by キヤノン株式会社 (Canon Inc.)


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions

Definitions

  • the present disclosure relates to an image processing apparatus, an image processing method, and a program.
  • An optical coherence tomography apparatus (hereinafter referred to as an OCT apparatus), which uses interference of low-coherence light, has been put into practical use as an ophthalmic device.
  • An OCT apparatus can image and depict the three-dimensional structure of the retina of the fundus.
  • In recent years, OCT angiography (OCTA), which non-invasively images retinal blood vessels by detecting changes in signals between successively acquired tomographic images, has been developed in addition to imaging of the retinal structure.
  • Patent Document 1 discloses a method for determining whether a predetermined blood vessel is an artery or a vein from a tomographic image.
  • the present disclosure has been made in view of the above problems, and an object thereof is to assist in determining whether a predetermined blood vessel is an artery or a vein based on a motion contrast image.
  • the present invention is not limited to the above-described object; achieving operational effects that are derived from each configuration shown in the embodiments described later, and that cannot be obtained by conventional techniques, can also be positioned as another object of the present disclosure.
  • the image processing apparatus disclosed in the present specification includes an acquisition unit that acquires a motion contrast image of the fundus, and a determination unit that determines whether a blood vessel is an artery or a vein based on a motion contrast value of an evaluation area, which is an area within a predetermined distance from an area corresponding to the blood vessel in the motion contrast image.
  • FIG. 1A is a diagram illustrating an example of an apparatus configuration of an OCT optical system that captures a fundus tomographic image.
  • the light source 001 is an SLD light source, and the low-coherence light emitted from the light source 001 is split by the coupler 002 into measurement light and reference light at a desired branching ratio.
  • the measurement light branched by the coupler 002 becomes collimated light from the collimator 021 and is emitted to the sample optical system 102.
  • In the sample optical system 102, a focus lens 022, a variable-angle X galvanometric mirror 023 and Y galvanometric mirror 024, and lenses 025 and 026 forming an objective lens system are arranged.
  • a beam spot of the measurement light is thereby formed on the fundus of the eye 027 to be examined.
  • the beam spot guided onto the fundus is scanned two-dimensionally on the fundus by driving the X galvanometric mirror 023 and the Y galvanometric mirror 024.
  • the measurement light reflected and scattered by the fundus of the subject eye 027 is guided to the coupler 002 after passing through the sample optical system 102.
  • the reference light branched by the coupler 002 is guided to the reference optical system 103, becomes collimated light through the collimator lens 031, and is attenuated to a predetermined light amount by passing through the ND filter 032. Thereafter, the reference light is reflected by the mirror 033, which can move in the optical axis direction to correct the optical path length difference with the sample optical system 102 while maintaining the collimated state, and is folded back along the same optical path.
  • the folded reference light is guided to the coupler 002 after passing through the ND filter 032 and the collimator lens 031. Further, the polarization state of the reference light is adjusted by the polarization controller 003 so as to correspond to the polarization state of the measurement light.
  • the measurement light and the reference light that have returned to the coupler 002 are combined by the coupler 002 and guided to the detection system (spectrometer) 104.
  • the combined light is emitted as collimated light by the collimator 042, dispersed by the diffraction grating 043, received by the line sensor 045 through the lens 044, and output as an interference signal corresponding to the light intensity.
  • the line sensor 045 is disposed so that each pixel receives light corresponding to the wavelength component of light dispersed by the diffraction grating 043.
  • FIG. 1B is a diagram illustrating an example of a hardware configuration of an OCT apparatus including an image processing apparatus.
  • a focus drive unit 061 for moving the focus lens 022, a galvano drive unit 062 for driving the X galvanometric mirror 023 and the Y galvanometric mirror 024, and a mirror drive unit 063 for moving the mirror 033 in the optical axis direction are each provided.
  • Each drive unit, the light source 001, the line sensor 045, the sampling unit 051, the memory 052, the signal processing unit 053, the operation input unit 056, the monitor 055, and so on are connected to the control unit 054, which controls the operation of the entire apparatus.
  • the image processing apparatus includes a signal processing unit 053 and a control unit 054.
  • the image processing apparatus may include at least one of the sampling unit 051, the memory 052, the monitor 055, and the operation input unit 056.
  • the signal processing unit 053 and the control unit 054 are realized by a processor such as a CPU provided in the image processing apparatus executing a program stored in the memory.
  • a processor such as a CPU may function as the sampling unit 051 by executing a program stored in a memory.
  • the output signal from the line sensor 045 is output as an interference signal by the sampling unit 051 according to an arbitrary driving position of the galvanometric mirror driven by the galvano driving unit 062. Subsequently, the drive position of the galvanometric mirror is offset by the galvano drive unit 062, and an interference signal at that position is output. Thereafter, interference signals are generated one after another by repeating this process.
  • the interference signal generated by the sampling unit 051 is stored in the memory 052 together with the driving position of the galvanometric mirror.
  • the interference signal stored in the memory 052 is frequency-analyzed by the signal processing unit 053, becomes a tomographic image of the fundus of the subject eye 027, and is displayed on the monitor 055.
  • a three-dimensional fundus volume image can be generated and displayed on the monitor 055 based on information on the driving position of the galvanometric mirror.
  • the control unit 054 acquires background data at an arbitrary timing during shooting.
  • the background data refers to a signal acquired in a state where the measurement light is not incident on the eye 027, that is, a signal of only the reference light.
  • the galvanometric mirror is driven by the galvano drive unit 062, and the background data is acquired by performing signal acquisition in a state where the measurement light irradiation position is adjusted so that the measurement light does not return from the sample optical system.
  • FIG. 2A is a diagram showing an arbitrary scan pattern
  • FIG. 2B is a diagram showing the scan pattern with the specific numerical values used in the present embodiment.
  • the OCT apparatus performs a scan that moves through n positions in y while repeating the B-scan m times at each location. A specific scan pattern is shown in FIG. 2A.
  • the B-scan is repeated m times at each of the n positions y1 to yn on the fundus plane. If m is large, the number of measurements at the same place increases, so blood flow detection accuracy improves. On the other hand, the total scan time becomes long, which causes motion artifacts in the image due to eye movement (involuntary fixational movement) during scanning and increases the burden on the subject.
  • In this embodiment, m = 4 (FIG. 2B) was used in consideration of the balance between the two.
  • the number of repetitions m may be changed according to the A-scan speed of the OCT apparatus and the motion analysis of the fundus surface image of the eye 027 to be examined.
  • p indicates the number of A-scan samplings in one B-scan. That is, the plane image size is determined by p × n.
  • Δx is the interval between adjacent x positions (x pitch)
  • Δy is the interval between adjacent y positions (y pitch).
  • the x pitch and the y pitch are determined as 1/2 of the beam spot diameter of the irradiation light on the fundus, and in this embodiment are 10 µm (FIG. 2B).
  • An image to be generated can be formed with high definition by setting the x pitch and the y pitch to 1/2 of the beam spot diameter on the fundus. Even if the x pitch and the y pitch are made smaller than 1/2 of the fundus beam spot diameter, the effect of further increasing the definition of the generated image is small.
  • If the x pitch and the y pitch are conversely made larger than 1/2 of the beam spot diameter, the definition deteriorates, but a wide range of images can be acquired with a small data capacity.
  • the x pitch and y pitch may be freely changed according to clinical requirements.
  • the signal processing unit 053 extracts the m B-scan interference signals repeatedly acquired at position y_k.
  • the signal processing unit 053 extracts the j-th tomographic data.
  • the signal processing unit 053 subtracts the acquired background data from the interference signal.
  • the signal processing unit 053 converts the background-subtracted interference signal into a function of wavenumber and applies a Fourier transform to it. In the present embodiment, the Fast Fourier Transform (FFT) is applied. Note that zero-padding processing may be performed before the Fourier transform to increase the number of points of the interference signal.
  • In step S405, the signal processing unit 053 calculates the absolute value of the complex signal obtained by the Fourier transform executed in step S404. This value becomes the intensity of the tomographic image for that scan.
  • In step S406, the signal processing unit 053 determines whether the index j has reached the predetermined number m, that is, whether the intensity calculation of the tomographic image at position y_k has been repeated m times. If it has not, the process returns to S402 and the intensity calculation at the same Y position is repeated; when the predetermined number is reached, the process proceeds to the next step.
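As a sketch of steps S403 to S405, the per-A-scan reconstruction (background subtraction, optional zero padding, FFT, absolute value) might look like the following; the array sizes, the synthetic fringe, and the assumption that the signal is already uniformly sampled in wavenumber are illustrative, not taken from the patent.

```python
import numpy as np

def ascan_intensity(interference, background):
    """Sketch of S403-S405: subtract the reference-only background from a
    spectral interference signal, zero-pad, Fourier-transform, and take the
    absolute value as the tomographic intensity along depth."""
    sig = interference - background           # S403: background subtraction
    sig = np.pad(sig, (0, sig.size))          # optional zero padding before the FFT
    depth_profile = np.fft.fft(sig)           # S404: FFT over wavenumber
    return np.abs(depth_profile)              # S405: intensity of the A-scan

# Usage: a synthetic fringe at one optical path difference yields one peak.
k = np.arange(1024)
background = np.ones(1024)
fringe = background + 0.5 * np.cos(2 * np.pi * 40 * k / 1024)
profile = ascan_intensity(fringe, background)   # peak near depth bin 80 (after padding)
```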
  • In step S407, the signal processing unit 053 calculates the image similarity among the m frames of tomographic images at a given position y_k. Specifically, the signal processing unit 053 selects one of the m frames as a template and calculates correlation values with the remaining (m − 1) frames.
  • the correlation value calculation method may be another method.
  • the signal processing unit 053 selects, from the correlation values calculated in step S407, the highly correlated images that are equal to or greater than a certain threshold value.
  • the threshold value can be arbitrarily set, and is set so as to exclude frames in which the correlation as an image has decreased due to blinking of the subject or slight eye movement.
  • OCTA is a technique for discriminating, based on the correlation value between images, the contrast between flowing tissue (for example, blood) and non-flowing tissue among the tissues of the eye being examined.
  • tissue with no flow is extracted on the premise that it is highly correlated between images; therefore, if the correlation between images is low for some other reason, the entire image is erroneously judged as flowing tissue when the MC (motion contrast) is calculated.
  • For this reason, tomographic images with low correlation are excluded in advance, and only images with high correlation are selected.
  • the m frames of images acquired at the same position y_k are thus appropriately selected down to q frames.
  • a possible value of q satisfies 1 ≤ q ≤ m.
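The frame selection of steps S407 and S408 can be sketched as follows; the Pearson correlation, the 0.8 threshold, and taking the first frame as the template are assumptions for illustration.

```python
import numpy as np

def select_correlated_frames(frames, threshold=0.8, template_idx=0):
    """Sketch of S407-S408: correlate every frame against a template frame
    and keep only the highly correlated ones (the template is always kept),
    so m frames are narrowed down to q frames with 1 <= q <= m."""
    template = frames[template_idx].ravel()
    kept = []
    for i, frame in enumerate(frames):
        r = np.corrcoef(template, frame.ravel())[0, 1]
        if i == template_idx or r >= threshold:
            kept.append(frame)
    return kept

# Usage: a near-duplicate survives, an uncorrelated frame (e.g. a blink) is dropped.
rng = np.random.default_rng(0)
base = rng.random((32, 32))
noisy = base + 0.01 * rng.random((32, 32))
blink = rng.random((32, 32))
kept = select_correlated_frames([base, noisy, blink])
```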
  • In step S409, the signal processing unit 053 performs alignment of the q frames of tomographic images selected in step S408.
  • As for which frame to select as the alignment template, the correlation may be calculated for all combinations of frames, the sum of the correlation coefficients obtained for each frame, and the frame having the maximum sum selected.
  • the positional deviation amounts (δX, δY, δθ) are obtained by matching each frame with the template. Specifically, Normalized Cross-Correlation (NCC), an index representing similarity, is calculated while changing the position and angle of the template image, and the difference between the image positions at which this value is maximized is obtained as the amount of positional deviation.
  • the index representing the similarity can be variously changed as long as it is a measure of the similarity between the template and the image features in the frame; for example, Sum of Absolute Differences (SAD), Sum of Squared Differences (SSD), Zero-mean Normalized Cross-Correlation (ZNCC), or Phase-Only Correlation (POC) may be used.
  • the signal processing unit 053 applies position correction to the (q − 1) frames other than the template according to the positional deviation amounts (δX, δY, δθ) and aligns the frames. If q is 1, this step is not executed.
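A translation-only version of the template matching in step S409 might look like this; the rotation search (δθ) and sub-pixel refinement implied by the text are omitted, and the exhaustive ±3-pixel search range is an assumption.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally shaped patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def estimate_shift(template, frame, max_shift=3):
    """Sketch of S409 (translation only): score every integer (dx, dy)
    offset with NCC over the overlapping region and return the offset
    that maximizes it."""
    h, w = template.shape
    best, best_dxy = -2.0, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            t = template[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            f = frame[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            score = ncc(t, f)
            if score > best:
                best, best_dxy = score, (dx, dy)
    return best_dxy

# Usage: a frame cut out two rows down and one column left of the template
# is recovered as the shift (dx, dy) = (-1, 2).
rng = np.random.default_rng(0)
base = rng.random((40, 40))
template = base[4:36, 4:36]
frame = base[2:34, 5:37]    # frame[y, x] = template[y - 2, x + 1]
```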
  • In step S410, the signal processing unit 053 calculates the MC.
  • MC is a value indicating the decorrelation of luminance between tomographic images, for example.
  • a variance value is calculated for each pixel at the same position among the q intensity images selected in step S408 and aligned in step S409, and this variance value is defined as the MC.
  • any index representing the change in luminance value of each pixel among the plurality of tomographic images at the same Y position can be applied as the MC.
  • If the feature amount (MC) cannot be calculated correctly, it may be set to 0 and the step terminated; alternatively, when the MCs of the preceding and following images at y_(k−1) and y_(k+1) have been obtained, the value may be interpolated from the preceding and following variance values.
  • In that case, the user may be notified that the feature amount that could not be calculated correctly has been replaced by an interpolated value.
  • the Y position where the feature amount could not be calculated may be stored, and rescanning may be performed automatically.
  • a warning prompting remeasurement may be issued without performing automatic rescanning.
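The MC computation of step S410 can be sketched as a per-pixel variance over the aligned frames; any other index of inter-frame luminance change could be substituted, as the text notes.

```python
import numpy as np

def motion_contrast(aligned_frames):
    """Sketch of S410: stack the q aligned intensity frames at one Y
    position and take the per-pixel variance across frames as the MC."""
    stack = np.stack(aligned_frames)   # shape (q, H, W)
    return stack.var(axis=0)

# Usage: a static pixel gives MC = 0, a "flowing" pixel a large variance.
frames = [np.array([[1.0, 10.0]]), np.array([[1.0, 2.0]]), np.array([[1.0, 6.0]])]
mc = motion_contrast(frames)
```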
  • In step S411, the signal processing unit 053 averages the intensity images that have been aligned in step S409 and generates an intensity averaged image.
  • Next, the signal processing unit 053 performs threshold processing on the MC output in step S410.
  • the threshold value is obtained by extracting, from the intensity averaged image output by the signal processing unit 053 in step S411, an area of the noise floor where only random noise is displayed, calculating its standard deviation σ, and setting the threshold to the average luminance (intensity) value of the noise floor + 2σ.
  • the signal processing unit 053 sets to 0 the MC value of every region whose intensity is equal to or less than the threshold value. This threshold processing removes MC derived from random noise and thereby reduces noise. The smaller the threshold value, the higher the MC detection sensitivity, but the more the noise component increases; conversely, the larger the threshold value, the less the noise, but the lower the MC detection sensitivity.
  • In this embodiment the threshold value is set to the average luminance (intensity) value of the noise floor + 2σ, but the threshold value is not limited to this.
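The noise-floor thresholding above can be sketched as follows; how the noise-only region is chosen is not specified here, so a boolean mask is passed in as an assumption.

```python
import numpy as np

def threshold_mc(mc, intensity_avg, noise_region):
    """Sketch of the threshold processing: estimate the noise floor from a
    region of the averaged intensity image containing only random noise,
    set the threshold to its mean + 2*sigma, and zero the MC wherever the
    averaged intensity is at or below that threshold."""
    noise = intensity_avg[noise_region]
    thresh = noise.mean() + 2 * noise.std()
    out = mc.copy()
    out[intensity_avg <= thresh] = 0   # drop MC derived from random noise
    return out, thresh

# Usage: MC survives only where the intensity rises above the noise floor.
intensity_avg = np.array([[0.1, 0.1, 5.0], [0.1, 0.1, 5.0]])
mc = np.ones_like(intensity_avg)
cleaned, thresh = threshold_mc(mc, intensity_avg, intensity_avg < 1.0)
```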
  • In step S413, the signal processing unit 053 determines whether the index k has reached the predetermined number n, that is, whether image correlation calculation, image selection, alignment, intensity image averaging, MC calculation, and threshold processing have been performed at all n Y positions.
  • If it has not, the process returns to S401; when the predetermined number is reached, the process proceeds to the next step S414.
  • When step S413 is completed, the intensity averaged images and the MC three-dimensional volume data (three-dimensional OCTA data) for the tomographic images at all Y positions have been generated.
  • a three-dimensional MC image is an example of MC three-dimensional volume data.
  • In step S414, the signal processing unit 053 generates an MC front image (MC EnFace image) by integrating the generated three-dimensional OCTA data in the depth direction. That is, the signal processing unit 053 corresponds to an example of an acquisition unit that acquires a motion contrast image of the fundus.
  • FIG. 3A is an example of an MC front image.
  • the signal processing unit 053 extracts the layer boundary of the fundus retina based on the intensity averaged image generated in step S411, and generates an MC front image so as to include a desired layer. After generating the MC front image, the signal processing unit 053 ends the signal processing flow.
  • the MC image includes a three-dimensional MC image and a two-dimensional MC image such as an EnFace image.
  • For the three-dimensional OCTA data obtained by imaging the macular region, the integrated depth range is set to several layers on the surface side of the retina (from the inner boundary membrane 302 to the central part of the INL 303) as shown in FIG. 3B, and an MC front image of the superficial capillary layer of the retina is obtained as shown in FIG. 3A.
  • the signal processing unit 053 can extract the fundus blood vessel 301 at the center of the macula from the MC front image.
  • the depth range for generating the MC front image is not limited to the above example.
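Step S414's depth integration can be sketched as below; real layer boundaries vary per (y, x) pixel and come from segmenting the averaged intensity image, so the flat scalar boundaries here are a simplifying assumption.

```python
import numpy as np

def mc_enface(mc_volume, upper, lower):
    """Sketch of S414: integrate a three-dimensional MC volume, indexed as
    [z, y, x], over the depth range [upper, lower) to obtain a
    two-dimensional MC front (EnFace) image."""
    return mc_volume[upper:lower].sum(axis=0)

# Usage: signal confined to depths 3-5 integrates to 3 per pixel.
vol = np.zeros((10, 4, 4))
vol[3:6] = 1.0                 # "vessel" signal at depths 3, 4, 5
enface = mc_enface(vol, 2, 7)  # e.g. ILM-side to INL-side depth indices
```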
  • In step S501, the control unit 054 displays on the monitor 055 a GUI prompting the user to input the MC acquisition number F, and in step S502 the user inputs F (F: an integer equal to or greater than 1).
  • the MC acquisition number F may be a predetermined value regardless of user input.
  • In step S503, the control unit 054 controls the OCT optical system while incrementing the MC image acquisition index i sequentially from one, and acquires MC images up to the number F designated by the user, as shown in steps S504, S505, and S506.
  • In step S505, the process of FIG. 4 is executed.
  • the MC image generation may be executed in step S505, may be performed after the scans for the MC acquisition number F are completed, or may be performed in parallel with those scans.
  • After the desired number of MC images is obtained, the signal processing unit 053 stores the data (the data of the plurality of MC images) in the memory 052 in step S507. In step S508, the signal processing unit 053 generates an MC superimposed image from the plurality of stored MC images, and stores it in the memory 052 as separate data (step S509). In this embodiment, ten MC images are acquired. By this processing, an MC superimposed image can be acquired.
  • the MC image to be superimposed may be a three-dimensional MC image or a two-dimensional MC image (MC EnFace image).
  • the superposition of MC images is not an essential process, and the arteriovenous determination may be performed on an MC image that has not been superimposed.
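One plausible reading of the superposition in steps S507-S509 is a simple average of the F registered MC images, which suppresses uncorrelated noise; inter-acquisition registration is assumed to be done already.

```python
import numpy as np

def superimpose_mc(mc_images):
    """Sketch: average F registered MC images (three-dimensional or EnFace)
    to form an MC superimposed image with reduced noise."""
    return np.mean(np.stack(mc_images), axis=0)

# Usage: averaging ten noisy copies pulls the image back toward the truth.
rng = np.random.default_rng(1)
truth = np.zeros((16, 16))
truth[8] = 1.0                                  # a single bright "vessel" row
noisy = [truth + 0.3 * rng.standard_normal((16, 16)) for _ in range(10)]
superimposed = superimpose_mc(noisy)
```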
  • the signal processing unit 053 acquires the MC superimposed image 601 shown in FIG. 6A from the memory 052.
  • the signal processing unit 053 acquires the MC image 602 shown in FIG. 6B from the image 601 by excluding the inner boundary membrane (internal limiting membrane) and the nerve fiber layer (RNFL).
  • this makes it easy to perform the arteriovenous determination on the MC front image.
  • the MC image 602 may be generated by removing the layers from the inner boundary membrane down to the upper part of the inner granular layer. That is, the MC image 602 is an MC front image of the blood vessel layer located below the inner granular layer.
  • the MC image 602 may or may not include the choroid-side layers below the inner granular layer.
  • the signal processing unit 053 generates the MC image 602 from a three-dimensional MC image. Note that the MC front image generated for the determination of the arteriovenous is not limited to the above example.
  • the MC front image generated for the arteriovenous determination may be an image that includes the ganglion cell layer but not the inner boundary membrane and the nerve fiber layer, or an image that includes part of the inner granular layer but not the inner boundary membrane, the nerve fiber layer, the ganglion cell layer, and the inner network layer (inner plexiform layer).
  • the MC front image generated for the determination of the arteriovenous may not include a layer on the choroid side from a part of the inner granule layer.
  • the part of the inner granular layer is, for example, one of the parts obtained when the inner granular layer is divided into an upper part on the vitreous side and a lower part on the choroid side.
  • the signal processing unit 053 binarizes the MC image 602 and acquires the image 603 shown in FIG. 6C.
  • the signal processing unit 053 obtains an eroded image 604 by performing erosion on the image 603 (any other noise removal method may be used instead).
  • the signal processing unit 053 acquires main blood vessels (606 to 610) from the image 604.
  • the acquisition of blood vessels may be performed based on the user's designation with respect to the MC image 602, or may be performed automatically regardless of the user's designation.
  • blood vessel acquisition may be performed based on a three-dimensional MC image, or may be performed based on a two-dimensional MC image.
  • a blood vessel having a blood vessel length of a predetermined value or more may be extracted as a main blood vessel.
  • the signal processing unit 053 acquires an image 605 obtained by adapting the extracted blood vessel to the original image 602.
  • the signal processing unit 053 identifies the positions of the main blood vessels 606 to 610 in the MC image 602 by associating the positional relationship between the MC image 602 and the image 605, for example.
  • the signal processing unit 053 treats the blood vessel 606 and the blood vessel 609 as the same blood vessel. Therefore, the independent blood vessels in this processing are 606, 607, 608, and 610.
  • the signal processing unit 053 measures the luminance around each of the independent blood vessels (606, 607, 608, 610), within 2 pixels along the perpendicular to the blood vessel wall (about 20 µm).
  • the signal processing unit 053 calculates the luminance per unit pixel (Iave: luminance value per unit pixel) around the blood vessel as a representative value of the motion contrast value.
  • the area where the luminance is measured is an example of an evaluation area.
  • This evaluation region may be a region adjacent to the blood vessel or may not be adjacent to the blood vessel.
  • the evaluation region may be a region located within 2 pixels from the blood vessel even if it is not adjacent to the blood vessel.
  • the signal processing unit 053 determines whether the blood vessel is an artery or a vein based on the evaluation region that satisfies such a condition. That is, the signal processing unit 053 is an example of a determination unit that determines whether a blood vessel is an artery or a vein based on a motion contrast value of an evaluation region that is a region located within a predetermined distance from a region corresponding to a blood vessel in a motion contrast image. Equivalent to.
  • the evaluation area is not limited to the area of 2 pixels from the blood vessel, and may be larger than 2 pixels or less than 2 pixels.
  • the evaluation region may be determined based on the blood vessel diameter.
  • the evaluation region may be closer to the blood vessel as the blood vessel diameter is shorter. That is, the predetermined distance from the region corresponding to the blood vessel, which is the distance defining the location where the evaluation region is located, may be determined based on the thickness of the region corresponding to the blood vessel. Specifically, the predetermined distance may be shorter as the thickness of the region corresponding to the blood vessel is smaller.
  • the width of the evaluation region in the direction intersecting the blood vessel traveling direction may be shortened as the blood vessel diameter is shorter. That is, the size of the evaluation region in the direction intersecting the traveling direction of the blood vessel may be determined based on the thickness of the region corresponding to the blood vessel.
  • the size of the evaluation region in the running direction of the blood vessel is one of the important parameters in the arteriovenous determination. Therefore, the size of the evaluation region in the blood vessel traveling direction may be made larger than a predetermined value.
  • the signal processing unit 053 may set an evaluation region for the entire blood vessel. Further, the ratio of the evaluation region to the travel distance of the blood vessel in the travel direction of the blood vessel may be larger than a predetermined value.
  • the evaluation area may be set to a two-dimensional MC image (MC front image) or may be set to a three-dimensional MC image.
  • If the representative value is greater than the threshold value, the signal processing unit 053 determines that the main blood vessel is a vein; if not, it determines that the main blood vessel is an artery. That is, the signal processing unit 053 determines the blood vessel to be a vein when the representative value is greater than the threshold value and to be an artery when the representative value is less than the threshold value.
  • the signal processing unit 053 can acquire data as shown in FIG. 6F. Since the luminance average value Iave of the blood vessel 606 is lower than Is, the signal processing unit 053 can determine that the blood vessel 606 is an artery. Further, since the luminance average values Iave of the blood vessels 607, 608, and 610 are higher than Is, the signal processing unit 053 can determine that the blood vessels 607, 608, and 610 are veins.
  • the threshold value Is may be fixed or variable. For example, the signal processing unit 053 may change the threshold Is according to the blood vessel diameter. The signal processing unit 053 may increase the threshold Is as the blood vessel diameter is shorter. That is, the threshold value Is may be a constant value regardless of the blood vessel, or may be a value that is adaptively set for each blood vessel.
  • In step S701, the signal processing unit 053 reads from the memory 052 the superimposed MC image (three-dimensional MC image) obtained by the processing of FIG.
  • In step S702, the signal processing unit 053 acquires an EnFace image from the three-dimensional MC image.
  • the signal processing unit 053 acquired an EnFace image whose depth range extends from the GCL (ganglion cell layer)/IPL (inner plexiform layer) boundary to a position −15 µm from the IPL/INL (inner granular layer) boundary (the negative sign indicates the choroid side).
  • In step S703, the signal processing unit 053 performs image processing on the image obtained in step S702 to extract the main blood vessels.
  • the signal processing unit 053 binarizes the MC image and then performs erosion processing once (see FIG. 6D).
  • the signal processing unit 053 extracts sets (lines) of points (pixels) that continue for a certain length or more in the image.
  • the signal processing unit 053 extracts lines of 80 pixels or more (about 1.0 mm or more) as main blood vessels. Note that the numerical value for blood vessel extraction is not limited to 80 pixels.
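Steps S703 and S704 (binarize, one erosion pass, keep and number runs of 80 pixels or more) can be sketched with scipy.ndimage; the binarization threshold and the default 4-connected structuring element are assumptions.

```python
import numpy as np
from scipy import ndimage

def extract_main_vessels(mc_front, bin_thresh, min_pixels=80):
    """Sketch of S703-S704: binarize the MC front image, apply one erosion
    pass to remove noise, then keep and number only connected runs of
    min_pixels or more pixels (about 1.0 mm at 80 pixels) as main vessels."""
    binary = mc_front > bin_thresh
    eroded = ndimage.binary_erosion(binary)          # one erosion pass
    labels, n = ndimage.label(eroded)                # number the candidates
    sizes = ndimage.sum(eroded, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
    main = np.where(np.isin(labels, keep), labels, 0)
    return main, keep

# Usage: a long thick band survives; a 2x2 blob is erased by the erosion.
img = np.zeros((20, 120))
img[5:9, 10:110] = 1.0
img[15:17, 5:7] = 1.0
main, kept = extract_main_vessels(img, 0.5)
```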
  • In step S704, the signal processing unit 053 numbers the main blood vessels obtained in step S703.
  • In step S705, the signal processing unit 053 fits the numbered blood vessels to the MC image acquired in S702 (see FIG. 6E).
  • In step S706, the signal processing unit 053 measures the luminance (motion contrast value) around each blood vessel using the image 605 obtained in step S705.
  • the luminance around each blood vessel (a 2-pixel area: about 20 µm) is measured, and the average luminance value (Iave) is calculated for each blood vessel.
  • the signal processing unit 053 may use the median luminance instead of the average luminance.
  • In step S707, the signal processing unit 053 compares the luminance average (Iave) of each blood vessel obtained in step S706 with the threshold value (Is). In step S708, the signal processing unit 053 determines that the blood vessel is an artery if Iave is lower than Is. In step S709, when Iave is higher than Is, it determines that the blood vessel is a vein.
  • the median, standard deviation, maximum value, or the like may be used as the representative value of the evaluation area instead of Iave.
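The decision in steps S706-S709 can be sketched as follows; building the roughly 20 µm evaluation ring by binary dilation of the vessel mask is an assumption, as is the toy threshold Is.

```python
import numpy as np
from scipy import ndimage

def classify_vessel(mc_front, vessel_mask, Is, ring_pixels=2):
    """Sketch of S706-S709: average the MC value (Iave) over a perivascular
    ring within ring_pixels of the vessel but outside it, then call the
    vessel an artery when Iave < Is (capillary-free surroundings) and a
    vein otherwise."""
    ring = ndimage.binary_dilation(vessel_mask, iterations=ring_pixels) & ~vessel_mask
    iave = float(mc_front[ring].mean())
    return ("artery" if iave < Is else "vein"), iave

# Usage: a vessel with a dark (avascular) surround reads as an artery,
# one with capillaries up to the wall as a vein. Values are toy numbers.
vessel = np.zeros((11, 11), dtype=bool)
vessel[:, 5] = True
mc_artery = np.full((11, 11), 0.8)
mc_artery[:, 3:8] = 0.05            # capillary-free zone around the vessel
mc_vein = np.full((11, 11), 0.8)    # capillary network right up to the wall
```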
  • In the above, the arteriovenous determination is performed based on the luminance at a position within 20 µm of the perpendicular to the blood vessel wall; in this embodiment, the arteriovenous is instead determined by measuring the luminance profile from the blood vessel center as follows.
  • the signal processing unit 053 extracts blood vessels from the MC image (FIG. 10A). As shown in FIG. 10A, the signal processing unit 053 sets areas 1011, 1012, and 1013 of 500 µm perpendicular to the traveling direction of the extracted blood vessels 1010 and 1014.
  • the areas 1011, 1012, and 1013 may be defined based on user input, or may be set regardless of user input. In the present embodiment, the term "perpendicular" is a concept including both the exactly perpendicular case and the substantially perpendicular case. Further, the size of the areas 1011, 1012, and 1013 is not limited to 500 µm and may be another value; for example, each blood vessel may have an individually sized area.
  • the size of the area may be changed according to the blood vessel diameter. For example, the area size may be reduced as the blood vessel diameter is shorter.
  • the signal processing unit 053 measures a luminance profile (motion contrast value profile) for each of the areas 1011, 1012, and 1013. The luminance profile of the area 1011 was measured as shown in FIG. 10B; similarly, that of the area 1012 as shown in FIG. 10C and that of the area 1013 as shown in FIG. 10D.
  • the signal processing unit 053 determines the arteriovenous from the luminance profile illustrated in FIG. 10B. For example, the signal processing unit 053 sets Is (brightness threshold) as a luminance value (motion contrast value) of a region where blood vessels do not exist (a level equivalent to background noise).
  • The signal processing unit 053 can divide the profile into a profile region 1001 having a luminance higher than Is, a profile region 1002 having a luminance lower than Is, a profile region 1003 having a luminance higher than Is, and a profile region 1004 having a luminance lower than Is.
  • the profile region 1001 is a capillary network
  • the profile region 1002 is a non-blood-vessel (avascular) region
  • the profile region 1003 is a thick blood vessel of several tens of μm
  • it can be confirmed that the profile region 1004 is an avascular region
  • The avascular regions of the profile regions 1002 and 1004 exist at a plurality of points (pixels), that is, over regions of several tens of μm or more.
  • Based on the vascular structure of the fundus, the blood vessel 1010, which has avascular regions around it, can be determined to be an artery.
  • the luminance profile of the area 1012 will be described with reference to FIG. 10C.
  • The signal processing unit 053 can divide the profile shown in FIG. 10C into a profile area 1005 having a luminance higher than Is, a profile area 1006 having a luminance lower than Is, a profile area 1007 having a luminance higher than Is, and a profile area 1008 having a luminance lower than Is. Comparing this with the luminance profile in FIG. 10B, it can be confirmed that:
  • the profile region 1005 is a capillary network
  • the profile region 1006 is an avascular region
  • the profile region 1007 is a thick blood vessel of several tens of μm (thicker than the blood vessel 1010)
  • the profile area 1008 is an avascular area
  • The avascular regions of the profile regions 1006 and 1008 are present at a plurality of points, that is, over regions of several tens of μm or more.
  • Since the area 1012 also has avascular regions around the blood vessel, the signal processing unit 053 can determine from the MC image that the blood vessel in the area 1012 is an artery.
  • That the blood vessel in the area 1012 is an artery is a natural result, because it branches from the blood vessel 1010.
  • a plurality of areas 1011 and 1012 are set for one blood vessel 1010.
  • Alternatively, the arteriovenous determination may be made based on the profile of one area, and the determination result may be reflected on the connected blood vessels.
  • the determination target blood vessel may be determined as an artery (or vein) only when it is determined as an artery (or vein) in a plurality of areas.
  • For the blood vessel 1014, the signal processing unit 053 can divide the profile into profile areas 1009 and 1016 having a luminance higher than Is and a profile area 1015 having a luminance lower than Is.
  • In the profile regions 1009 and 1016, whose luminance is higher than Is, some blood vessels such as capillaries exist.
  • The profile region 1015, which has a low luminance, is a single point and corresponds to a gap between blood vessels rather than an avascular region.
  • The signal processing unit 053 can determine that the blood vessel 1014 is a vein because there is no avascular region around it (or because the avascular region is equal to or smaller than a threshold value).
  • the signal processing unit 053 can determine the arteriovenous from the size of the avascular region obtained from the profile of the motion contrast value.
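  • The division of a luminance profile at the threshold Is into vessel regions and avascular regions, and the use of the avascular-region size, can be sketched as follows. The function names and the sample-based representation are assumptions introduced for illustration only:

```python
from itertools import groupby

def segment_profile(profile, Is):
    """Split a motion contrast profile into contiguous regions that lie
    above Is (capillary/vessel) or below Is (avascular), analogous to
    the profile regions 1001-1004 of FIG. 10B.

    Returns a list of (label, start, end) tuples, with end exclusive.
    """
    labels = ["vessel" if v >= Is else "avascular" for v in profile]
    regions, i = [], 0
    for label, group in groupby(labels):
        n = len(list(group))
        regions.append((label, i, i + n))
        i += n
    return regions

def largest_avascular_run(profile, Is):
    """Size (in samples) of the widest avascular region in the profile."""
    runs = [end - start
            for label, start, end in segment_profile(profile, Is)
            if label == "avascular"]
    return max(runs, default=0)
```

  A vessel whose surrounding profile contains a wide avascular run would then be classified as an artery, and one without such a run as a vein, as described above.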
  • In step S1101, the signal processing unit 053 reads the superimposed MC image obtained by the process of FIG.
  • In step S1102, the signal processing unit 053 extracts a blood vessel from the MC image.
  • In step S1103, the signal processing unit 053 sets a 200 μm straight line perpendicular to the traveling direction of the blood vessel, and measures the luminance (motion contrast value) on the straight line.
  • In step S1104, the signal processing unit 053 reads the threshold value Is from the memory 052.
  • In step S1105, the signal processing unit 053 measures the number (number of pixels) n of regions whose luminance is lower than the threshold Is.
  • The threshold value Is corresponds to an example of a first threshold value. In the present embodiment, only the number of pixels is counted. In steps S1106 and S1107, if the number of pixels measured in step S1105 satisfies n > 4, the signal processing unit 053 determines that the blood vessel is an artery because the avascular region is larger than the threshold. In steps S1106 and S1108, if n ≤ 4, the signal processing unit 053 determines that the blood vessel is a vein because the avascular region is equal to or smaller than the threshold. That is, it can be said that the signal processing unit 053 determines whether the blood vessel is an artery or a vein based on the size of the region where the motion contrast value is less than the first threshold in the evaluation region.
  • The above "4" corresponds to an example of a second threshold value. That is, it can be said that the signal processing unit 053 determines that the blood vessel is an artery when the size of the region where the motion contrast value is less than the first threshold in the evaluation region is larger than the second threshold, and determines that the blood vessel is a vein when that size is smaller than the second threshold.
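  • Steps S1105 to S1108 can be expressed compactly as follows. This is a sketch under the assumption that the 200 μm line has already been sampled into a 1-D profile; the function name and return format are illustrative:

```python
import numpy as np

def arteriovenous_by_count(profile, Is, n_threshold=4):
    """Steps S1105-S1108 in miniature: count the pixels n on the sampled
    perpendicular line whose motion contrast value is below the first
    threshold Is, then compare n with the second threshold (here 4)."""
    n = int(np.count_nonzero(np.asarray(profile) < Is))  # S1105
    # S1107: n > 4 -> artery (large avascular region)
    # S1108: n <= 4 -> vein (avascular region at or below the threshold)
    return ("artery" if n > n_threshold else "vein"), n
```

  The returned n could also be shown to the user as the information indicating whether the vessel is an artery or a vein, as described later for the monitor 055.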
  • In step S1110, the control unit 054 causes the monitor 055 to display the result.
  • the control unit 054 causes the monitor 055 to display an MC image in which an artery and a vein are displayed in different colors.
  • the arteriovenous can be determined by measuring the luminance around the blood vessel using the MC image.
  • the profile region is a straight region, but a two-dimensional region, a three-dimensional region, a region including a set of points, or the like may be used.
  • The size of the region may be other than that described in the embodiment.
  • luminance average, standard deviation, maximum value, and the like may be used as representative values of the evaluation area.
  • The threshold value Is can also be determined with high accuracy by changing it according to the installation environment, user settings, and the race of the subject. Although blood vessel extraction is described in the flow of FIG. 11, the extraction of blood vessels from the image may be manual or automatic.
  • In the present embodiment, superposition of MC images is used, but the arteriovenous determination can also be made using a single MC image.
  • In step S702, when acquiring the MC image, if the MC front image is acquired based on the INL (inner nuclear layer), the arteriovenous determination becomes more stable. Specifically, as shown in FIG. 3C, the signal processing unit 053 divides the retina from the line 307 (inner limiting membrane) to the line 308 (below the nerve fiber layer), from the line 308 to the line 309 (below the ganglion cell layer), from the line 309 to the line 306 (INL center), and from the line 306 to the line 310 (below the outer plexiform layer).
  • Using this division information, the signal processing unit 053 separates the four layers as described below; the data becomes stable when the above-described arteriovenous determination is performed in the second layer.
  • The first layer is defined as the layer between the line 307 and a position about 5 μm below (in the choroidal direction) the line 308.
  • The second layer is defined as the layer between a position about 5 μm below the line 308 and a position about 5 μm above the line 309.
  • The third layer is defined as the layer between a position about 5 μm above the line 309 and the line 306.
  • the fourth layer is defined as the layer between line 306 and line 310.
  • The MC image using the second layer corresponds to an image that includes the ganglion cell layer but does not include the inner limiting membrane and the nerve fiber layer.
  • line 304 indicates the upper end of INL
  • line 305 indicates the lower end of INL.
  • the third layer and the fourth layer may be used for the arteriovenous determination.
  • The MC image using the fourth layer does not include the inner limiting membrane, the nerve fiber layer, or the ganglion cell layer, and corresponds to an image including only a part of the inner nuclear layer.
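  • Under the assumption that each boundary line is represented as a depth in μm (larger values lying deeper, toward the choroid), the four slab definitions above can be written as follows; the function name and dictionary layout are illustrative only:

```python
def define_layers(l307, l308, l309, l306, l310, offset=5.0):
    """Return (top, bottom) depth pairs in um for the four en-face slabs.
    l307: inner limiting membrane, l308: below the nerve fiber layer,
    l309: below the ganglion cell layer, l306: INL center,
    l310: below the outer plexiform layer; offset is approximately 5 um."""
    return {
        1: (l307, l308 + offset),           # ILM down to ~5 um below l308
        2: (l308 + offset, l309 - offset),  # ganglion cell layer slab
        3: (l309 - offset, l306),           # down to the INL center
        4: (l306, l310),                    # INL center down to l310
    }
```

  Note that adjacent slabs share a boundary, so the four layers tile the depth range from line 307 to line 310 without gaps or overlap.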
  • Although the line 306 is the center of the INL in this example, the present invention is not limited to this; the line 306 may be a position that separates the INL into upper and lower layers based on the arrangement of blood vessels in the INL. Since the blood vessels in the INL are linearly arranged in the X direction (the direction orthogonal to the depth direction), for example in two rows, upper and lower, the signal processing unit 053 can determine the position at which the INL is separated into two layers based on this blood vessel arrangement.
  • For the two rows of blood vessels in the INL, the signal processing unit 053 obtains a line connecting the blood vessels in the X direction (or Y direction) for each row, and obtains the midpoint in the depth direction between the obtained lines.
  • A line connecting the midpoints obtained along the X direction may be used as the line 306. That is, the signal processing unit 053 corresponds to an example of a determination unit that determines a part of the inner nuclear layer (the upper layer or the lower layer of the inner nuclear layer) based on the arrangement of blood vessels included in the inner nuclear layer.
  • The signal processing unit 053 generates the MC images of the third layer and the fourth layer using the line 306. That is, it corresponds to an example of a generation unit that generates a motion contrast image based on a part of the inner nuclear layer.
  • An MC image of only a part (the upper layer or the lower layer) of the inner nuclear layer defined by the line 306 may be generated. This modification can also be applied to the first, third, and fourth embodiments.
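  • The midpoint construction of the line 306 can be sketched as follows, assuming the depths of the upper and lower vessel rows have already been located at each X position; the function name and per-X representation are assumptions:

```python
import numpy as np

def estimate_line_306(upper_row_depths, lower_row_depths):
    """Estimate the line separating the INL into upper and lower layers:
    at each X position, take the midpoint in depth between the line
    connecting the upper row of INL vessels and the line connecting the
    lower row. Inputs are per-X depth values of the two vessel rows."""
    upper = np.asarray(upper_row_depths, dtype=float)
    lower = np.asarray(lower_row_depths, dtype=float)
    return (upper + lower) / 2.0  # midpoint at every X
```

  The resulting per-X curve, connected along X, plays the role of the line 306 in this modification.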
  • Modification 2: In this example, a clear MC image is acquired by separating blood vessels, further improving the analysis of the first and second embodiments.
  • FIG. 9A is an image displaying only the MC signal in the XZ section. A black point 902 in FIG. 9A is an area (point) having a temporal change in the OCT image, that is, a portion whose motion contrast value is equal to or greater than a predetermined value: a blood vessel portion. The same applies to the black points 903.
  • Black dots 902 indicate blood vessels in the surface layer of the retina, and black dots 903 indicate blood vessels in the choroid area.
  • When the luminance (motion contrast value) is added up in the x direction at the arrow positions a, b, c, and d in FIG. 9A, the result is as shown in FIG. 9B.
  • the positions of the arrows a, b, c, and d can be specified from the peak position of the profile in the A scan direction of the luminance tomographic image.
  • The A scan for specifying the positions of the arrows a, b, c, and d may use an integrated value of a plurality of A scans. It was found that the blood vessels existing in the superficial layer (Superficial Capillary) and the deep layer (Deep Capillary) can be classified into the four blood vessel layers shown in FIG. 9A.
  • The blood vessel layer a has low luminance, and it is considered to belong to the first layer in the above-described embodiment.
  • the blood vessel layer b is considered to belong to the second layer, the blood vessel layer c to the third layer, and the blood vessel layer d to the fourth layer.
  • The signal processing unit 053 changes the angle of the line along which the luminance is integrated in the x direction so that the integrated value of the luminance of the blood vessel layer b corresponding to the second layer becomes high, and can generate an MC front image for the arteriovenous determination with reference to the line of the angle at which the integrated value is maximized.
  • the signal processing unit 053 generates the MC front image using a region having a predetermined width in a direction orthogonal to the line having an angle at which the integrated value of the luminance of the blood vessel layer b is maximized.
  • the predetermined width may be an arbitrary value determined in advance, or a variance value or a standard deviation value of the motion contrast value of the blood vessel layer may be used. That is, the signal processing unit 053 generates an MC front image based on the blood vessel array.
  • The position of the blood vessel layer b may be specified by the user, or may be determined based on the peak position obtained by the signal processing unit 053 sequentially performing luminance integration in the x direction.
  • the signal processing unit 053 can identify the second peak position from the choroid side as the blood vessel layer b.
  • An MC image suitable for the arteriovenous determination can be acquired by generating the MC front image based on the blood vessel arrangement in this way.
  • an MC front image of a desired blood vessel is generated, which is suitable for diagnosis. This modification can also be applied to the first, third, and fourth embodiments.
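  • The angle search for maximizing the integrated luminance of the blood vessel layer b can be approximated, for example, by testing a set of per-column depth shifts (linear tilts) and keeping the one whose strongest row integral is largest. This is a simplified sketch, not the apparatus's implementation; the tilt parameterization and function name are assumptions:

```python
import numpy as np

def best_tilt(mc_slice, max_shift):
    """Find the linear tilt (total depth shift in pixels across the B-scan
    width) that maximizes the integrated luminance of a blood vessel layer.
    Summing along a tilted line is approximated by shifting each A-scan
    (column) by a linearly increasing offset before summing in x."""
    depth, width = mc_slice.shape
    best, best_value = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.empty_like(mc_slice)
        for x in range(width):
            dz = int(round(s * x / max(width - 1, 1)))
            shifted[:, x] = np.roll(mc_slice[:, x], -dz)
        value = shifted.sum(axis=1).max()  # strongest aligned row
        if value > best_value:
            best, best_value = s, value
    return best
```

  The MC front image would then be generated from a band of predetermined width around the row that is best aligned at the selected tilt.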
  • The control unit 054 causes the monitor 055 to display at least one of Iave obtained in step S706 of FIG. 7 and the number n of pixels indicating the avascular region obtained in step S1105 of FIG. 11.
  • Iave or number n corresponds to an example of information indicating whether a blood vessel is an artery or a vein.
  • the control unit 054 corresponds to an example of a display control unit that displays information indicating whether an artery or a vein is displayed on the display unit.
  • The control unit 054 causes the monitor 055 to display at least one of Iave and the number n for a blood vessel selected by the user or automatically selected by the device. Further, the control unit 054 may cause the monitor 055 to display the threshold value serving as the determination reference for at least one of Iave and n. The MC image may also be displayed on the monitor 055. For example, when a blood vessel on the MC image is designated by the user, at least one of Iave and n for that blood vessel, together with the corresponding reference threshold value, is displayed on the monitor 055.
  • FAZ is an avascular region in the fovea and is a region 311 in FIG. 3A.
  • Conventionally, the FAZ measurement was not stable. The reason is that the dividing boundary between the superficial layer (Superficial Capillary) and the deep layer (Deep Capillary) is ambiguous and depends on the layer segmentation definition.
  • In step S801, the signal processing unit 053 acquires the MC image after superposition from the memory 052.
  • In step S802, the signal processing unit 053 separates the image acquired in step S801 into the first, second, third, and fourth layers described above.
  • In step S803, the signal processing unit 053 extracts the MC image of the third layer divided in step S802.
  • That is, the signal processing unit 053 generates a third-layer MC image (MC front image) based on the superimposed three-dimensional MC image acquired in step S801.
  • In step S804, the signal processing unit 053 measures the FAZ using the third-layer MC image acquired in step S803. In this embodiment, the FAZ was measured by binarizing the third-layer MC image.
  • The FAZ measured by the above-described process showed improved repeatability compared to the measurement using the conventional superficial layer (Superficial Capillary). Moreover, in diseased eyes, the FAZ value was further stabilized by performing the same processing.
  • The processing of the present embodiment is particularly effective when measuring changes over time, because slight changes can be accurately extracted when acquiring time-change information for the same subject.
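  • Step S804 can be sketched as a binarization followed by a connected-region measurement. The flood fill, the use of the image center as the foveal seed, and the function name are assumptions introduced here, not the apparatus's actual algorithm:

```python
import numpy as np

def faz_area(mc_front, Is, pixel_area=1.0):
    """Binarize the third-layer MC front image at the threshold Is and
    measure the area of the connected avascular region containing the
    image center (taken here as the fovea) by a simple flood fill."""
    avascular = np.asarray(mc_front) < Is
    h, w = avascular.shape
    seed = (h // 2, w // 2)
    if not avascular[seed]:
        return 0.0
    seen, stack = {seed}, [seed]
    while stack:
        y, x = stack.pop()
        # 4-connected neighbors inside the image that are also avascular
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and avascular[ny, nx] and (ny, nx) not in seen:
                seen.add((ny, nx))
                stack.append((ny, nx))
    return len(seen) * pixel_area
```

  The `pixel_area` factor converts the pixel count to a physical area when the lateral sampling pitch is known.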
  • the disclosed technology can take an embodiment as a system, apparatus, method, program, recording medium (storage medium), or the like.
  • The present invention may be applied to a system composed of a plurality of devices (for example, a host computer, an interface device, an imaging device, a web application, etc.), or to an apparatus consisting of a single device.
  • A recording medium (or storage medium) that records the program code (computer program) of software implementing the functions of the above-described embodiments is supplied to the system or apparatus.
  • The storage medium is a computer-readable storage medium.
  • The computer (or CPU or MPU) of the system or apparatus reads and executes the program code stored in the recording medium.
  • The program code itself read from the recording medium realizes the functions of the above-described embodiments, and the recording medium on which the program code is recorded constitutes the present invention.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The present invention relates to an image processing apparatus that supports determining whether a given blood vessel is an artery or a vein based on a motion contrast image. The image processing apparatus comprises an acquisition means for acquiring a motion contrast image of a fundus, and a determination means for determining whether a blood vessel is an artery or a vein based on a motion contrast value of an evaluation region, which is a region located at a predetermined distance from a region corresponding to the blood vessel in the motion contrast image.
PCT/JP2019/006835 2018-03-05 2019-02-22 Dispositif et procédé de traitement d'image, et programme WO2019171986A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018038354A JP7140512B2 (ja) 2018-03-05 2018-03-05 画像処理装置、画像処理方法及びプログラム
JP2018-038354 2018-03-05

Publications (1)

Publication Number Publication Date
WO2019171986A1 true WO2019171986A1 (fr) 2019-09-12

Family

ID=67846663

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/006835 WO2019171986A1 (fr) 2018-03-05 2019-02-22 Dispositif et procédé de traitement d'image, et programme

Country Status (2)

Country Link
JP (1) JP7140512B2 (fr)
WO (1) WO2019171986A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11900593B2 (en) 2021-04-23 2024-02-13 Fujifilm Sonosite, Inc. Identifying blood vessels in ultrasound images
US11896425B2 (en) 2021-04-23 2024-02-13 Fujifilm Sonosite, Inc. Guiding instrument insertion

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220361840A1 (en) * 2021-04-23 2022-11-17 Fujifilm Sonosite, Inc. Displaying blood vessels in ultrasound images

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003019119A (ja) * 2001-07-10 2003-01-21 Canon Inc 眼底血流計
WO2008069062A1 (fr) * 2006-12-01 2008-06-12 Kyushu Tlo Company, Limited Dispositif de création d'image de vitesse de débit sanguin
JP2014504523A (ja) * 2011-01-20 2014-02-24 ユニバーシティ オブ アイオワ リサーチ ファウンデーション 血管画像における動静脈比の自動測定
JP2017077413A (ja) * 2015-10-21 2017-04-27 株式会社ニデック 眼科解析装置、眼科解析プログラム
JP2017202369A (ja) * 2017-08-23 2017-11-16 株式会社トプコン 眼科画像処理装置



Also Published As

Publication number Publication date
JP7140512B2 (ja) 2022-09-21
JP2019150345A (ja) 2019-09-12

Similar Documents

Publication Publication Date Title
JP7193343B2 (ja) 機械学習技法を用いたoctアンギオグラフィにおけるアーチファクトを減少させるための方法及び装置
KR102046309B1 (ko) 화상생성장치, 화상생성방법, 및 기억매체
US20210224997A1 (en) Image processing apparatus, image processing method and computer-readable medium
US8251511B2 (en) Method for finding the lateral position of the fovea in an SDOCT image volume
US10022047B2 (en) Ophthalmic apparatus
US10383516B2 (en) Image generation method, image generation apparatus, and storage medium
US9839351B2 (en) Image generating apparatus, image generating method, and program
US10327635B2 (en) Systems and methods to compensate for reflectance variation in OCT angiography
CA2844433A1 (fr) Correction de mouvement et normalisation de caracteristiques dans une tomographie par coherence optique
US10251550B2 (en) Systems and methods for automated segmentation of retinal fluid in optical coherence tomography
KR20120064625A (ko) 피검안의 단층 화상을 처리하는 화상 처리장치, 촬상 시스템, 화상 처리방법 및 기록매체
US20160287071A1 (en) Methods of measuring total retinal blood flow using en face doppler oct
WO2019171986A1 (fr) Dispositif et procédé de traitement d'image, et programme
US20230108071A1 (en) Systems and methods for self-tracking real-time high resolution wide-field optical coherence tomography angiography
JP7254682B2 (ja) 画像処理装置、画像処理方法、及びプログラム
JP7111874B2 (ja) 眼科撮影装置
JP2019150554A (ja) 画像処理装置およびその制御方法
JP2018191761A (ja) 情報処理装置、情報処理方法及びプログラム
JP7130989B2 (ja) 眼科画像処理装置、および眼科画像処理プログラム
JP6976818B2 (ja) 画像処理装置、画像処理方法及びプログラム
WO2019172043A1 (fr) Dispositif de traitement d'image et son procédé de commande
JP6992030B2 (ja) 画像生成装置、画像生成方法およびプログラム
WO2021210295A1 (fr) Procédé de traitement d'images, dispositif de traitement d'images et programme
Li Computational Methods for Enhancements of Optical Coherence Tomography
JP2023066198A (ja) 情報出力装置、眼底画像撮影装置、情報出力方法、及び情報出力プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19765043

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19765043

Country of ref document: EP

Kind code of ref document: A1