WO2021040258A1 - Apparatus and method for automatically diagnosing a disease using blood vessel segmentation in an ophthalmic image - Google Patents


Info

Publication number
WO2021040258A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
blood vessel
graph
fag
fundus
Prior art date
Application number
PCT/KR2020/010274
Other languages
English (en)
Korean (ko)
Inventor
이수찬
노경진
박상준
Original Assignee
국민대학교산학협력단
서울대학교병원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 국민대학교산학협력단 (Kookmin University Industry-Academic Cooperation Foundation) and 서울대학교병원 (Seoul National University Hospital)
Publication of WO2021040258A1

Classifications

    • A61B 3/00: Apparatus for testing the eyes; instruments for examining the eyes
    • A61B 3/12: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for looking at the eye fundus, e.g. ophthalmoscopes
    • A61B 3/14: Arrangements specially adapted for eye photography
    • A61B 3/145: Arrangements specially adapted for eye photography by video means
    • A61B 5/00: Measuring for diagnostic purposes; identification of persons
    • A61B 5/7275: Determining trends in physiological measurement data; predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • G06N 3/02: Neural networks
    • G06T 7/00: Image analysis
    • G16H 50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems

Definitions

  • The present invention relates to an apparatus and method for automatically determining a disease using an eyeball image, and more particularly, to an apparatus and method for automatically determining a disease using blood vessel segmentation in an eyeball image.
  • Fundus imaging is one of the most frequently used ophthalmic photographs for diagnosis or record-keeping in ophthalmology.
  • A fundus image closely resembles the fundus of the subject as observed during treatment and is intuitive to read, so it is used for eye disease examination.
  • Clinicians have been trying to develop systems that quantitatively analyze blood vessel characteristics from such fundus images and diagnose diseases on that basis, but precise blood vessel region extraction is still limited in accuracy.
  • Deep learning is defined as a set of machine learning algorithms that attempt high-level abstraction (summarizing key content or functions in large or complex data) through a combination of several nonlinear transformations. Broadly, deep learning can be seen as the branch of machine learning that teaches computers to reason the way people do.
  • An object of the present invention is to provide an apparatus and method capable of automatically determining a disease using a result of analyzing an ocular blood vessel.
  • a processor; and a memory electrically connected to the processor and storing a convolutional neural network, wherein the memory includes instructions that generate a first blood vessel segmentation image for a first eyeball image and a second blood vessel segmentation image for a second eyeball image according to a preset method, generate a first blood vessel graph using the first blood vessel segmentation image and a second blood vessel graph using the second blood vessel segmentation image, and compare the first and second blood vessel graphs through the convolutional neural network to determine the presence or absence of a disease. An automatic disease determination apparatus including these elements is disclosed, wherein the first eyeball image is an image generated earlier than the second eyeball image.
  • the memory may further include instructions that extract an arterial center line from the first or second blood vessel segmentation image, extract artery bifurcation points using the arterial center line, and connect the artery bifurcation points according to a preset method to generate the first or second blood vessel graph.
  • the memory may further include instructions that extract a vein center line from the first or second blood vessel segmentation image, extract vein bifurcation points using the vein center line, and connect the vein bifurcation points according to a preset method to generate the first or second blood vessel graph.
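The bifurcation-point step above can be sketched compactly: on a binary centerline skeleton, a pixel with three or more skeleton neighbors is a candidate bifurcation point. This is an illustrative numpy sketch under our own naming, not the patent's implementation; centerline (skeleton) extraction itself is assumed done, and a production version would prune the cluster of flagged pixels around each junction down to a single point.

```python
import numpy as np

def find_bifurcation_points(skeleton):
    """Return (row, col) pixels of a binary centerline skeleton that have
    three or more 8-connected skeleton neighbors: candidate bifurcations.
    Pixels adjacent to a junction may also be flagged; real pipelines
    prune each such cluster to one representative point."""
    sk = (np.asarray(skeleton) > 0).astype(np.uint8)
    padded = np.pad(sk, 1)
    # Count 8-connected neighbors for every pixel via shifted sums.
    neighbors = sum(
        padded[1 + dr:padded.shape[0] - 1 + dr, 1 + dc:padded.shape[1] - 1 + dc]
        for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)
    )
    return [tuple(p) for p in np.argwhere((sk == 1) & (neighbors >= 3))]

# A tiny T-shaped centerline: the junction pixel (2, 2) has 3 neighbors.
skel = np.zeros((5, 5), dtype=np.uint8)
skel[2, 0:5] = 1   # horizontal branch
skel[2:5, 2] = 1   # vertical branch
points = find_bifurcation_points(skel)
```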
  • the memory may further include instructions that generate the first or second blood vessel graph by reflecting the thickness and length of the arteries connecting the artery bifurcation points, and by reflecting the thickness and length of the veins connecting the vein bifurcation points.
  • the memory may further include instructions that match the first blood vessel graph and the second blood vessel graph, observe the differing parts, and determine the presence or absence of the disease using a convolutional neural network trained in advance on such differing parts.
  • the convolutional neural network may be trained in advance on the correlation between the type of disease and the change of the blood vessel graph over time.
  • An automatic disease determination method is also disclosed, including: generating a first blood vessel segmentation image for a first eyeball image and a second blood vessel segmentation image for a second eyeball image according to a preset method; generating a first blood vessel graph using the first blood vessel segmentation image and a second blood vessel graph using the second blood vessel segmentation image; and determining the presence or absence of a disease by comparing the first blood vessel graph and the second blood vessel graph, wherein the first eyeball image is an image generated earlier than the second eyeball image.
  • generating the first blood vessel graph may include: extracting an arterial center line from the first or second blood vessel segmentation image; extracting artery bifurcation points using the arterial center line; and generating the first or second blood vessel graph by connecting the artery bifurcation points according to a preset method.
  • generating the second blood vessel graph may include: extracting a vein center line from the first or second blood vessel segmentation image; extracting vein bifurcation points using the vein center line; and generating the first or second blood vessel graph by connecting the vein bifurcation points according to a preset method.
  • generating the first and second blood vessel graphs may include: generating the graph by reflecting the thickness and length of the arteries connecting the artery bifurcation points; and generating the graph by reflecting the thickness and length of the veins connecting the vein bifurcation points.
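A blood vessel graph of the kind described, with bifurcation points as nodes and segment thickness and length as edge attributes, can be sketched as a plain adjacency dictionary. All names below are ours, and the straight-line distance between endpoints stands in for the true centerline length, which the patent would measure along the vessel.

```python
import math

def build_vessel_graph(bifurcations, segments):
    """Build an undirected graph whose nodes are bifurcation points and whose
    edges carry the thickness and length of the connecting vessel segment.
    `bifurcations`: list of (x, y) points; `segments`: list of
    (i, j, thickness) tuples indexing into `bifurcations`."""
    graph = {i: {} for i in range(len(bifurcations))}
    for i, j, thickness in segments:
        (x1, y1), (x2, y2) = bifurcations[i], bifurcations[j]
        length = math.hypot(x2 - x1, y2 - y1)  # straight-line proxy for centerline length
        graph[i][j] = {"thickness": thickness, "length": length}
        graph[j][i] = {"thickness": thickness, "length": length}
    return graph

# Three hypothetical bifurcation points connected by two segments.
points = [(0, 0), (3, 4), (6, 8)]
g = build_vessel_graph(points, [(0, 1, 2.0), (1, 2, 1.5)])
```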
  • determining the presence or absence of the disease may include: observing the differing parts by matching the first blood vessel graph and the second blood vessel graph; and determining the presence or absence of the disease using a convolutional neural network trained in advance on such differing parts, wherein the convolutional neural network may be trained in advance on the correlation between the type of disease and the change of the blood vessel graph over time.
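The "observe the differing parts" step can be illustrated by diffing two matched graphs that share node identities: edges that appear, disappear, or change thickness between visits are the candidate inputs to the classifier. This sketch only extracts the differences; the subsequent CNN-based determination is not reproduced, and the graph layout and threshold are our assumptions.

```python
def graph_differences(g1, g2, thickness_tol=0.1):
    """Report edges of two matched vessel graphs (same node ids) that
    appeared, disappeared, or changed thickness by more than
    `thickness_tol` (relative) between the two time points."""
    def edge_set(g):
        return {(min(i, j), max(i, j)): attrs["thickness"]
                for i in g for j, attrs in g[i].items()}
    edges1, edges2 = edge_set(g1), edge_set(g2)
    diffs = []
    for e in sorted(set(edges1) | set(edges2)):
        if e not in edges2:
            diffs.append((e, "disappeared"))
        elif e not in edges1:
            diffs.append((e, "appeared"))
        elif abs(edges2[e] - edges1[e]) / edges1[e] > thickness_tol:
            diffs.append((e, "thickness_changed"))
    return diffs

# Hypothetical graphs at two visits: adjacency dict node -> node -> attributes.
g1 = {0: {1: {"thickness": 2.0}, 2: {"thickness": 1.0}},
      1: {0: {"thickness": 2.0}}, 2: {0: {"thickness": 1.0}}}
g2 = {0: {1: {"thickness": 1.0}}, 1: {0: {"thickness": 1.0}}, 2: {}}
diffs = graph_differences(g1, g2)
```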
  • generating the first blood vessel segmentation image may include: acquiring frames of a patient's fundus image (FP image) and fluorescein angiography image (FAG image); rigidly registering each frame of the FAG image using a feature point matching technique; performing blood vessel extraction of the registered FAG image based on deep learning suited to the characteristics of the FAG image; integrating the blood vessel extraction results of the FAG image frames into an average value; performing blood vessel extraction of the fundus image based on deep learning suited to the characteristics of the fundus image; registering the FAG image and the FP image from the extracted blood vessels; and segmenting the blood vessels based on the registration result.
  • FP image: fundus photography image
  • FAG image: fluorescein angiography image
  • rigidly registering each frame of the fluorescein angiography image (FAG image) using a feature point matching technique may include detecting feature points, extracting feature point descriptors, and performing registration of the FAG image using RANSAC (RANdom SAmple Consensus) during the feature point matching process.
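The RANSAC idea in the step above can be shown in isolation, separate from the descriptor machinery: hypothesize a transform from a random correspondence, count inliers, keep the best hypothesis, then refit on its inliers. A minimal numpy sketch, assuming point correspondences are already given and estimating only a translation rather than the full perspective transform the patent uses; all names are ours.

```python
import numpy as np

def ransac_translation(src, dst, iters=200, thresh=2.0, seed=0):
    """Estimate a 2-D translation between matched point sets with RANSAC:
    hypothesize from one random correspondence, keep the hypothesis with
    the most inliers, then refit on those inliers."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        k = rng.integers(len(src))
        t = dst[k] - src[k]                       # hypothesis from one match
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return (dst[best_inliers] - src[best_inliers]).mean(axis=0)

# 8 correct matches translated by (5, -3), plus 2 gross outliers.
src = np.array([[0, 0], [1, 0], [2, 1], [3, 3], [4, 1],
                [5, 2], [6, 0], [7, 4], [0, 9], [9, 0]], float)
dst = src + np.array([5.0, -3.0])
dst[8] = [50, 50]
dst[9] = [-40, 10]
t = ransac_translation(src, dst)
```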
  • the deep learning is a trained convolutional neural network (CNN), and performing blood vessel extraction of the registered FAG image based on deep learning suited to the characteristics of the fluorescein angiography image (FAG image) may include deriving a deep-learning-based FAG Vessel Probability map (FAGVP) for the blood vessels in the FAG image.
  • FAGVP: FAG Vessel Probability map
  • integrating the blood vessel extraction results of the FAG image frames into an average value may include performing non-rigid registration of the FAGVPs based on free-form deformation of a coordinate grid expressed by a B-Spline model, and deriving an Average FAG Vessel Probability map (A-FAGVP) as the per-pixel average of the registered FAGVPs and a Maximum FAG Vessel Probability map (M-FAGVP) as the per-pixel maximum.
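Once the frames are registered, the per-pixel fusion described above (average for A-FAGVP, maximum for M-FAGVP) reduces to elementwise statistics over the frame stack. A small numpy sketch with hypothetical names:

```python
import numpy as np

def fuse_fagvp(frames):
    """Fuse already-registered per-frame vessel probability maps into the
    average map (A-FAGVP) and the per-pixel maximum map (M-FAGVP)."""
    stack = np.stack([np.asarray(f, float) for f in frames])
    return stack.mean(axis=0), stack.max(axis=0)

# Two toy 2x2 probability maps standing in for registered FAGVP frames.
frames = [np.array([[0.2, 0.8], [0.0, 0.4]]),
          np.array([[0.6, 0.6], [0.2, 1.0]])]
a_fagvp, m_fagvp = fuse_fagvp(frames)
```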
  • performing blood vessel extraction of the fundus image based on deep learning suited to the characteristics of the fundus image may include deriving a deep-learning-based Fundus Photo Vessel Probability map (FPVP) for registration between the fundus image (FP image) and A-FAGVP.
  • performing registration of the fluorescein angiography image (FAG image) and the fundus image (FP image) from the extracted blood vessels may include chamfer matching of the blood vessels derived from FPVP and A-FAGVP.
  • segmenting the blood vessels based on the registration result may include generating a binary vessel segmentation mask by applying a hysteresis thresholding technique to the per-pixel probability values of A-FAGVP.
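Hysteresis thresholding, as named above, keeps every pixel above a low threshold that is connected to a pixel above a high threshold. A self-contained sketch using a plain BFS over the 8-neighborhood; the thresholds and names are our assumptions, and in practice a library routine such as scikit-image's `apply_hysteresis_threshold` would typically be used instead.

```python
import numpy as np
from collections import deque

def hysteresis_threshold(prob, low=0.3, high=0.7):
    """Keep every pixel above `low` that is 8-connected to a pixel above
    `high`: the hysteresis rule used to binarize a probability map."""
    weak = prob >= low
    mask = prob >= high
    queue = deque(map(tuple, np.argwhere(mask)))
    while queue:                      # grow strong seeds into weak pixels
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (0 <= rr < prob.shape[0] and 0 <= cc < prob.shape[1]
                        and weak[rr, cc] and not mask[rr, cc]):
                    mask[rr, cc] = True
                    queue.append((rr, cc))
    return mask

# Left component contains a strong pixel (0.9) and survives; the isolated
# weak component on the right is discarded.
prob = np.array([[0.9, 0.5, 0.0, 0.5],
                 [0.0, 0.4, 0.0, 0.4],
                 [0.0, 0.0, 0.0, 0.0]])
mask = hysteresis_threshold(prob)
```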
  • the disease determination apparatus and method according to embodiments of the present invention can automatically extract a precise region of the retinal blood vessels in the fundus image by matching the fundus image and the fluorescent angiography image, so that accurate ocular blood vessel analysis is possible.
  • the apparatus and method for determining a disease may increase accuracy of automatic disease determination based on an accurate ocular blood vessel analysis.
  • FIG. 1 is a block diagram of an apparatus for determining a disease according to an embodiment of the present invention.
  • FIG. 2 is a flowchart of an operation and method for determining a disease according to an embodiment of the present invention.
  • FIG. 3 is a logical configuration diagram of a blood vessel extraction module according to an embodiment of the present invention.
  • FIG. 4 is a diagram for a result of matching using the SIFT technique according to an embodiment of the present invention.
  • FIG. 5 is a diagram illustrating a result of deriving a FAG Vessel Probability map (FAGVP) in a fluorescent fundus angiography image (FAG image) according to an embodiment of the present invention.
  • FIG. 6 is a diagram illustrating the result of very precise registration, through the B-Spline technique, of two images already roughly matched by rigid registration according to an embodiment of the present invention.
  • FIG. 7 is a diagram for a result of a precisely segmented blood vessel image according to an embodiment of the present invention.
  • FIG. 8 is a flowchart illustrating a method of matching a fundus image and a fluorescent fundus angiography image according to an embodiment of the present invention.
  • FIG. 9 is a diagram illustrating a FAG Vessel Probability map using deep learning according to an embodiment of the present invention.
  • FIG. 10 is a view showing the results of rigid body registration (left) and non-rigid body registration (right) according to an embodiment of the present invention.
  • FIG. 11 is a diagram showing an average FAGVP map (left) and a maximum FAGVP map (right) according to an embodiment of the present invention.
  • FIG. 12 is a diagram illustrating an FP image (left) and a FAGVP map derived using deep learning according to an embodiment of the present invention.
  • FIG. 13 is a view showing a result of rigid body matching using chamfer matching according to an embodiment of the present invention.
  • FIG. 14 is a diagram showing a result of non-rigid matching using a B-Spline technique according to an embodiment of the present invention.
  • FIG. 15 is a diagram for explaining the schematic flow of the operation in which a blood vessel segmentation image is generated according to an embodiment of the present invention.
  • FIG. 16 is a diagram for explaining the operation of extracting artery bifurcation points and vein bifurcation points from a blood vessel segmentation image according to an embodiment of the present invention.
  • FIG. 17 is a diagram illustrating a blood vessel graph according to an embodiment of the present invention.
  • FIG. 18 is a diagram illustrating a comparison between a first blood vessel graph and an n-th blood vessel graph according to an embodiment of the present invention.
  • when one component is referred to as being "connected" or "coupled" to another component, the one component may be directly connected or directly coupled to the other component, but it should be understood that, unless expressly stated otherwise, it may also be connected via another component in between.
  • FIG. 1 is a block diagram of an apparatus for determining a disease according to an embodiment of the present invention.
  • a disease determination apparatus 100 may include a receiver 110, a processor 120, and a memory 130.
  • the receiving unit 110 may be connected to an external device to receive various types of information.
  • the receiving unit 110 may receive an eyeball image from an external device.
  • the receiving unit 110 may include a communication modem, a USB port, and the like.
  • the receiving unit 110 may be connected to a mobile communication terminal (not shown) by wire or wirelessly to receive an eyeball image through a mobile communication terminal (not shown).
  • the receiving unit 110 may be connected to an external memory device (not shown) by wire to receive an eyeball image stored in the external memory device (not shown).
  • the eyeball image may be an image corresponding to the eyeball of an arbitrary patient.
  • the eyeball image may include a fundus image (FP image, Fundus Photography image) corresponding to the eyeball of a patient.
  • the eyeball image may be a Fluorescein Angiography image (FAG image) corresponding to the eyeball of a patient.
  • the eye image may be a concept including both an FP image and a fluorescein angiography image (FAG image) of an arbitrary patient.
  • the receiver 110 may output the received eyeball image to the processor 120.
  • the processor 120 may analyze eye information using instructions stored in the memory 130. That is, the processor 120 may analyze the input eye information using instructions stored in the memory 130 and then determine whether the patient has a disease. Hereinafter, an operation in which the processor 120 analyzes eye information will be described in detail.
  • FIG. 2 is a schematic flowchart of a disease determination operation and a disease determination method of the processor 120 according to an embodiment of the present invention.
  • the processor 120 may receive an eyeball image from the receiver 110.
  • the eyeball image received by the processor 120 may include a first eyeball image and a second eyeball image.
  • the first eyeball image may be an image generated before the second eyeball image.
  • for example, the first eyeball image may be an eyeball image generated in August 2017, and the second eyeball image may be an eyeball image generated in August 2019.
  • the first eyeball image may include a first fundus image (FP image) and a first fluorescent fundus angiography image (FAG image).
  • the second eyeball image may include a second fundus image (FP image) and a second fluorescent fundus angiography image (FAG image).
  • the processor 120 may analyze eye information using instructions corresponding to the blood vessel extraction module 140.
  • the blood vessel extraction module 140 may include instructions for dividing only information on blood vessels from eye information. Accordingly, the processor 120 may generate a first blood vessel segmentation image for the first eyeball image by using instructions corresponding to the blood vessel extraction module 140. Likewise, the processor 120 may generate a second blood vessel segmented image for the second eyeball image by using instructions corresponding to the blood vessel extraction module 140.
  • the first blood vessel segmentation image may be an image in which only the information about the blood vessels of the patient's eye has been segmented out using the first eyeball image, and the second blood vessel segmentation image may likewise be an image in which only the blood vessel information has been segmented out using the second eyeball image.
  • FIG. 3 is a logical configuration diagram of a blood vessel extraction module according to an embodiment of the present invention.
  • the blood vessel extraction module 140 may include logical components such as a FAG matching unit 310, a FAG blood vessel extraction unit 320, an integration unit 330, an FP blood vessel extraction unit 340, a FAG-FP matching unit 350, and a blood vessel segmentation unit 360.
  • An operation of the processor 120 generating the first blood vessel segmentation image and the operation of generating the second vessel segmentation image may be the same. Therefore, hereinafter, the processor 120 will collectively be described as generating a blood vessel segmentation image without distinction.
  • the processor 120 acquires frames of a patient's fundus image (FP image) and a fluorescent fundus angiography image (FAG image).
  • the fundus image (FP image) can be acquired through a fundus imaging device for the examination of eye diseases in ophthalmology
  • the fluorescein angiography image (FAG image) is acquired by injecting a fluorescent substance (fluorescein) into a vein and optically photographing, with a device that displays blood vessels, the substance circulating through the retinal circulatory system.
  • the above-described fundus image and/or fluorescent fundus angiography image may be configured as multiple frames according to the passage of time.
  • the FAG matching unit 310 includes instructions for matching each frame of a fluorescent fundus angiography image composed of multiple frames according to the passage of time using a feature point matching technique. More specifically, the FAG matching unit 310 includes instructions for performing rigid registration of a fluorescent fundus angiography image (FAG image) using a Scale Invariant Feature Transform (SIFT) technique. The FAG matching unit 310 may perform registration of a fluorescent fundus angiography image (FAG image) using RANSAC (RANdom Sample Consensus) during a feature point detection, feature point descriptor extraction, and feature point matching process.
  • RANSAC: RANdom SAmple Consensus
  • Fluorescein angiography images may show changes in both blood vessels and background over time. Accordingly, in the present invention, a Scale Invariant Feature Transform (SIFT) technique may be used to detect diverse features while minimizing sensitivity to such changes in the image.
  • SIFT: Scale Invariant Feature Transform
  • various regional features, such as the optic disc, the fovea, and local vessel structures, can be found in a FAG image.
  • the processor 120 may match features between two images and estimate a perspective transform based on the RANSAC (RANdom SAmple Consensus) technique.
  • the processor 120 may then perform rigid registration between the two images using the estimated perspective transform.
  • the processor 120 may perform SIFT-based registration of each successive image against the image for which registration was previously performed.
  • the processor 120 may repeat this registration over all of the fluorescein angiography images (FAG images); after this operation has been performed on all frames, the final result is registered with respect to the first image.
  • the FAG blood vessel extraction unit 320 may include instructions for performing blood vessel extraction of a fluorescent fundus angiography image (FAG image) matched based on deep learning in accordance with the characteristics of a fluorescent fundus angiography image (FAG image).
  • deep learning may be a learned convolutional neural network (CNN).
  • the processor 120 can derive a deep-learning-based FAG Vessel Probability map (FAGVP) for the blood vessels in a fluorescein angiography image (FAG image) using the instructions of the FAG blood vessel extraction unit 320.
  • although the FAG image is rigidly registered via the perspective transform through the instructions of the FAG matching unit 310 described above, it still retains properties of the original FAG image, such as change over time, the optic disc, and the background; for very close registration between blood vessels, these other properties that interfere with registration need to be removed.
  • the processor 120 applies the high-precision deep learning (DL) based blood vessel segmentation instructions included in the FAG blood vessel extraction unit 320 to the registered fluorescein angiography image (FAG image).
  • DL: deep learning
  • deep-learning-based blood vessel segmentation techniques are widely known, and they can remove properties other than blood vessels with very high probability.
  • the public databases for retinal vessel segmentation (DRIVE, STARE, CHASE, HRF) all consist of FP images, so their characteristics differ from those of FAG images; in particular, blood vessels appear very differently.
  • when the fundus image (FP image) is converted to grayscale, a blood vessel appears with lower pixel values than its surroundings, whereas in a FAG image a blood vessel appears brighter than its surroundings.
  • the existing fundus image (FP image) databases can be converted so that these opposite characteristics are reflected for the fluorescein angiography image (FAG image): if the pixel values of the green channel of the FP image are inverted around the blood vessels, the image comes to have characteristics similar to a FAG image.
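The green-channel inversion described above is a simple pixelwise transform: dark vessels in the FP image become bright, FAG-like structures. A hedged numpy sketch; the function name, the max-based inversion, and the normalization are our assumptions, not the patent's exact recipe.

```python
import numpy as np

def fp_to_fag_like(fp_rgb):
    """Invert the green channel of a fundus photograph so that vessels,
    darker than their surroundings in the FP image, become brighter,
    mimicking the appearance of a fluorescein angiography (FAG) frame."""
    green = np.asarray(fp_rgb)[..., 1].astype(np.float64)
    inverted = green.max() - green        # dark vessels -> bright structures
    span = inverted.max() - inverted.min()
    return inverted / span if span > 0 else inverted  # normalize to [0, 1]

# Toy 2x2 RGB image; in the green channel, the dark pixels are the vessels.
fp = np.zeros((2, 2, 3), dtype=np.uint8)
fp[..., 1] = [[200, 40], [180, 60]]
fag_like = fp_to_fag_like(fp)
```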
  • the deep learning (DL) based vessel segmentation model can then be trained against the same ground truth (GT) using the converted fundus images (FP images), and through this a FAG Vessel Probability map (FAGVP) can be derived from a fluorescein angiography image (FAG image).
  • the blood vessels are estimated very precisely, as shown in FIG. 5, where the upper left is a color fundus photo image, the upper right a gray fundus photo image, the lower left an inverse gray fundus photo, and the lower right a FAG image.
  • the processor 120 may integrate the blood vessel extraction results of the FAG image frames into an average value using the instructions of the integration unit 330. More specifically, the processor 120 performs non-rigid registration of the FAGVPs based on free-form deformation of a coordinate grid expressed by a B-Spline model, and derives the Average FAG Vessel Probability map (A-FAGVP) of the registered FAGVPs.
  • A-FAGVP: Average FAG Vessel Probability map
  • the SIFT-based rigid registration performed with the instructions of the FAG matching unit 310 described above cannot register the images precisely, because of surrounding structures other than blood vessels and of non-rigid motion. Accordingly, the processor 120 performs high-precision non-rigid registration between blood vessels on the FAGVPs, from which the other structures have been removed.
  • the processor 120 computes correspondence information by repeated registration of the FAGVPs, which have already been matched fairly closely by the rigid registration, and performs non-rigid registration using a B-Spline registration technique that reduces the error on this basis.
  • the processor 120 may extract blood vessels of the fundus image based on deep learning suited to the characteristics of the fundus image, using the instructions of the FP blood vessel extraction unit 340. More specifically, the processor 120 may derive a deep-learning-based Fundus Photo Vessel Probability map (FPVP) for registration between the fundus image (FP image) and A-FAGVP. After the non-rigid registration using the instructions of the integration unit 330, the processor 120 may compute the average of each pixel position along the time axis to derive an Average FAG Vessel Probability map (A-FAGVP), which includes all changes of the entire blood vessel over time, and the maximum of each pixel position to derive a Maximum FAG Vessel Probability map (M-FAGVP).
  • FPVP: Fundus Photo Vessel Probability map
  • because A-FAGVP is computed as the per-pixel average along the time axis, it not only reflects the change in blood vessels over time but also effectively suppresses small noise that may occur.
  • a deep learning (DL) model is also included in the FP blood vessel extraction unit 340, and it is preferable that the processor 120 derive the FPVP using the deep learning (DL) model of the FP blood vessel extraction unit 340.
  • for example, a DL model trained on DRIVE, one of the known public databases, can be used.
  • the processor 120 may perform registration of a fluorescent fundus angiography image (FAG image) and a fundus image (FP image) from blood vessels extracted using instructions of the FAG-FP matching unit 350.
  • the reason for this registration is that the characteristics of blood vessels appear differently in the fluorescence fundus angiography image (FAG image) and the fundus image (FP image).
  • based on the FPVP derived from the fundus image (FP image), the processor 120 may perform registration between the fluorescein angiography image (FAG image) and the fundus image (FP image) using the deep learning technique of the FAG-FP matching unit 350.
  • the processor 120 may perform matching of a fluorescent fundus angiography image (FAG image) and a fundus image (FP image) by using the instructions of the FAG-FP matching unit 350 through two major processes.
  • the processor 120 performs rigid registration between the fundus image-fluorescent fundus angiography image (FP-FAG) using a chamfer matching technique from blood vessels derived from FPVP and A-FAGVP.
  • the FAG-FP matching unit 350 is a non-rigid body between the final fundus image-fluorescent fundus angiography image (FP-FAG) based on the free-form deformation technique of the coordinate grid represented by the B-Spline model. Non-rigid registration can be performed.
  • First, the processor 120 binarizes the A-FAGVP and FPVP derived from the above-described process using an appropriate threshold value, and may perform rigid registration using the chamfer matching technique. Second, the processor 120 may perform non-rigid registration based on the free-form deformation technique of a coordinate grid represented by a B-Spline model for precise registration between blood vessels.
  • A-FAGVP may be registered based on FPVP.
  • Since only partial information about the blood vessels is available, the chamfer matching technique is used instead of the above-described SIFT technique.
  • The SIFT technique detects features from an image and generates descriptors; matching points are then found from the feature descriptors, and a perspective transform is computed from them using RANSAC (RANdom SAmple Consensus).
  • The chamfer matching technique calculates a distance transform over all pixels of the target and source binary images and computes the similarity between the two distance images. Therefore, the chamfer matching technique requires far less computation and time than the SIFT technique and is effective for registration between vessels. Meanwhile, the chamfer matching technique according to the present embodiment is a customized chamfer matching technique that improves the conventional translation-only chamfer matching by additionally considering rotation.
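A minimal sketch of translation-plus-rotation chamfer matching, assuming binary vessel masks and using SciPy's distance transform. The function names, search ranges, and scoring below are illustrative, not the patent's exact implementation:

```python
import numpy as np
from scipy import ndimage

def chamfer_score(source_pts, target_dist):
    """Mean target distance-transform value sampled at source vessel pixels."""
    r, c = source_pts[:, 0], source_pts[:, 1]
    h, w = target_dist.shape
    inside = (r >= 0) & (r < h) & (c >= 0) & (c < w)
    if not inside.any():
        return np.inf
    return target_dist[r[inside], c[inside]].mean()

def chamfer_register(source_mask, target_mask,
                     shifts=range(-10, 11), angles_deg=(-5, 0, 5)):
    """Customized chamfer matching over translation AND rotation (a sketch).

    The distance transform of the target's background gives, at every
    pixel, the distance to the nearest target vessel pixel; the best
    rigid pose minimizes the mean distance at the transformed source
    vessel pixels.
    """
    target_dist = ndimage.distance_transform_edt(~target_mask.astype(bool))
    best_score, best_pose = np.inf, (0, 0, 0)
    for ang in angles_deg:
        rotated = ndimage.rotate(source_mask.astype(float), ang,
                                 reshape=False, order=0) > 0.5
        pts = np.argwhere(rotated)
        for dy in shifts:
            for dx in shifts:
                score = chamfer_score(pts + np.array([dy, dx]), target_dist)
                if score < best_score:
                    best_score, best_pose = score, (dy, dx, ang)
    return best_pose
```

Exhaustive search over the pose grid is shown for clarity; a practical implementation would restrict or coarsen the search.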
  • the processor 120 uses the instructions of the FAG-FP matching unit 350 to perform non-rigid registration for matching the final fundus image-fluorescent fundus angiography image (FP-FAG). can do.
  • The processor 120 performs non-rigid registration between the fundus image and the fluorescent fundus angiography image (FP-FAG) based on the free-form deformation technique of the coordinate grid represented by the B-Spline model.
  • Finally, to match the fundus image and fluorescent fundus angiography image (FP-FAG) precisely, the processor 120 takes non-rigid motion into account and performs non-rigid registration based on the free-form deformation technique of the coordinate grid expressed by the B-Spline model.
  • Two images that are already roughly aligned by rigid registration can yield a very precise registration result, as shown in FIG. 6, through the free-form deformation technique of the coordinate grid represented by the B-Spline model.
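A rough sketch of a free-form deformation driven by a coarse control grid, using SciPy's cubic-spline zoom to densify the displacement field. This only illustrates the idea of a control-grid deformation; it is not the B-Spline FFD implementation of the patent, and all names and sizes are assumptions:

```python
import numpy as np
from scipy import ndimage

def ffd_warp(image, ctrl_dy, ctrl_dx):
    """Free-form deformation of an image by a coarse control grid (a sketch).

    ctrl_dy, ctrl_dx: (Gy, Gx) arrays of control-point displacements in
    pixels. They are upsampled to full resolution with cubic-spline
    interpolation, giving a smooth dense deformation field, and the
    image is resampled along that field.
    """
    h, w = image.shape
    gy, gx = ctrl_dy.shape
    zoom = (h / gy, w / gx)
    # Densify the coarse displacement grid with an order-3 spline.
    dy = ndimage.zoom(ctrl_dy, zoom, order=3, mode='nearest')
    dx = ndimage.zoom(ctrl_dx, zoom, order=3, mode='nearest')
    rows, cols = np.mgrid[0:h, 0:w].astype(float)
    # Sample the image at the displaced coordinates.
    return ndimage.map_coordinates(image, [rows + dy, cols + dx],
                                   order=1, mode='nearest')
```

In an actual registration, the control-point displacements would be optimized to maximize the similarity between the warped A-FAGVP and the FPVP; only the warping step is sketched here.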
  • the upper left is the fundus image
  • the upper right is the blood vessel probability map of the fundus image
  • the lower left is the result before registration between the blood vessel probability map (white) of the fundus image and the blood vessel probability map (different color) of the fluorescent fundus angiography image
  • the lower right is the result after matching between the blood vessel probability map of the fundus image (white) and the blood vessel probability map (different color) of the fluorescent fundus angiography image.
  • The processor 120 may segment the blood vessels based on the matched result, using the instructions of the blood vessel segmentation unit 360.
  • Since a very precise vessel segmentation mask that can be used as a GT (Ground Truth) must be derived from the matched A-FAGVP, the processor 120 executes the instructions of the vessel segmentation unit 360 as follows.
  • Using a hysteresis thresholding technique based on the probability value of each pixel of the A-FAGVP, a binary vessel segmentation mask is derived, and a connected component analysis technique can be applied to the derived vessel segmentation mask to remove noise.
  • The processor 120 needs to fill holes occurring in vein regions, because veins fill with contrast late in a FAG image. Accordingly, the blood vessel segmentation unit 360 may detect holes inside veins in the matched A-FAGVP and reinforce the relatively low pixel probabilities there. Thereafter, the processor 120 may segment thin blood vessels and thick blood vessels separately by applying the concept of a hysteresis threshold to the matched A-FAGVP. Finally, the processor 120 removes very small regions among the connected components to remove noise.
  • In the A-FAGVP, a gap may occur mainly at the center of a vein region. This occurs because of the time difference between the contrast passing through the artery and reaching the vein.
  • the average probability corresponding to the vessel wall may be considerably high.
  • Since the center of the vein is filled only for a short period near the end of the sequence, its average probability is low.
  • the processor 120 must fill a gap occurring in a vein in order to obtain a vessel segmentation mask.
  • The processor 120 may fill the gap in the binarized image using a morphological closing technique.
  • The calculated subtraction image is a binary image of the vein gap.
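The closing-then-subtract step can be sketched with SciPy's binary morphology. The function name and the structuring-element size are assumed parameters for illustration:

```python
import numpy as np
from scipy import ndimage

def vein_gap_mask(binary_vessels, closing_size=5):
    """Binary image of the vein gaps (a sketch of closing-then-subtract).

    Morphological closing fills the dark slit running along the vein
    center; removing the original binarized vessels from the closed
    image leaves only the filled-in gap pixels.
    """
    structure = np.ones((closing_size, closing_size), dtype=bool)
    closed = ndimage.binary_closing(binary_vessels, structure=structure)
    return closed & ~binary_vessels.astype(bool)  # gap pixels only
```

The gap mask can then be used to reinforce the low probabilities inside the vein before thresholding.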
  • The processor 120 may perform segmentation based on the matched A-FAGVP reinforced at the gaps occurring in veins, using the instructions of the blood vessel segmentation unit 360. At this time, segmentation can be performed based on the concept of hysteresis rather than a simple threshold.
  • the processor 120 acquires a binary vessel mask centered on a thick vessel through a first threshold value.
  • The processor 120 derives the second binary vessel mask by setting the second threshold low to detect thin vessels. Thereafter, the processor 120 derives the vessel centerline mask from the second binary vessel mask through a skeletonization technique.
  • The vessel centerline mask represents even thin vessels with a thickness close to one pixel. Because veins fill slowly, holes occurring in the vein area must be filled; accordingly, the processor 120 detects holes inside veins in the matched A-FAGVP and reinforces the relatively low pixel probabilities. Thereafter, the processor 120 segments thin blood vessels and thick blood vessels separately by applying the concept of a hysteresis threshold to the matched A-FAGVP, and finally removes very small regions among the connected components to remove noise.
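The hysteresis-plus-connected-component cleanup can be sketched as follows with SciPy. The thresholds and the minimum component size are assumed values, and the skeletonization step is omitted:

```python
import numpy as np
from scipy import ndimage

def hysteresis_vessel_mask(prob_map, t_high=0.7, t_low=0.3, min_size=20):
    """Hysteresis segmentation of a vessel probability map (a sketch).

    Pixels above t_high seed the thick vessels; pixels above t_low are
    kept only if their 8-connected component contains a seed, which
    recovers thin vessels without admitting isolated low-probability
    noise. Connected components smaller than min_size pixels are then
    discarded as noise.
    """
    high = prob_map >= t_high
    low = prob_map >= t_low
    eight = np.ones((3, 3), dtype=int)  # 8-connectivity
    labels, n = ndimage.label(low, structure=eight)
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[high])] = True  # components touching a seed
    keep[0] = False                       # background label
    mask = keep[labels]
    # Connected component analysis: drop very small regions as noise.
    labels2, _ = ndimage.label(mask, structure=eight)
    sizes = np.bincount(labels2.ravel())
    return mask & (sizes[labels2] >= min_size)
```

The two-threshold rule mirrors the description above: the first (high) threshold captures thick vessels, the second (low) threshold extends them along thin vessels.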
  • A precisely segmented blood vessel image as shown in FIG. 7 can be obtained, and early diagnosis and treatment of blindness-causing diseases and/or systemic vascular diseases is possible through the resulting image.
  • the upper left is the whole fundus image
  • the upper right is the vessel segmentation result of the whole fundus image
  • the lower left shows, from the left, the fundus image enlarged at the optic disc center and the fundus image enlarged at the fovea center
  • the lower right shows, from the left, the enlarged vessel segmentation result at the optic disc center and the enlarged vessel segmentation result at the fovea center.
  • FIG. 8 is a flowchart of a method of matching a fundus image and a fluorescent fundus angiography image according to an embodiment of the present invention.
  • The image matching method according to the present embodiment is performed by the automatic disease determination apparatus 100 using the above-described eyeball images. Since the subject performing each step described below is therefore the automatic disease determination apparatus 100, its mention may be omitted.
  • frames of a patient's fundus image (FP image) and a fluorescent fundus angiography image (FAG image) are acquired (S810).
  • The fundus image (FP image) can be acquired through a fundus imaging device used for the examination of eye diseases in ophthalmology, and the fluorescein angiography image (FAG image) can be acquired by injecting a fluorescent substance (fluorescein) into a vein and optically photographing the fluorescent substance circulating through the retinal circulatory system with a device that displays blood vessels.
  • Each frame of the fluorescent fundus angiography image is matched using a feature point matching technique (S820). More specifically, rigid registration of the fluorescent fundus angiography image (FAG image) is performed using the Scale Invariant Feature Transform (SIFT) technique.
  • Fluorescent fundus angiography (FAG) images exhibit changes in the blood vessels and background over time; thus, in the present invention, the Scale Invariant Feature Transform (SIFT) technique is used to detect a variety of features while remaining robust to these changes in the image.
  • Blood vessel extraction from the matched fluorescent fundus angiography image (FAG image) is performed based on deep learning, according to the characteristics of the FAG image (S830).
  • the deep learning may be a learned convolutional neural network (CNN).
  • The blood vessel extraction results of the frames of the fluorescent fundus angiography image are integrated as an average value (S840). More specifically, non-rigid registration is performed on the FAGVP based on the free-form deformation technique of the coordinate grid represented by the B-Spline model, and the Average FAG Vessel Probability map (A-FAGVP) of the matched FAGVPs is derived.
  • Next, blood vessels are extracted from the fundus image (FP image) using deep learning to derive the FPVP (Fundus Photo Vessel Probability map).
  • Registration between the fluorescence fundus angiography image (FAG image) and the fundus image (FP image) is then performed from the extracted blood vessels (S860).
  • the reason for this registration is that the characteristics of blood vessels appear differently in the fluorescence fundus angiography image (FAG image) and the fundus image (FP image).
  • matching of a fluorescein angiography image (FAG image) and a fundus image (FP image) is performed based on FPVP derived from a fundus image (FP image) using a deep learning technique.
  • The blood vessels are segmented based on the matched result (S870). More specifically, a vessel segmentation mask is derived from the matched A-FAGVP using a vessel segmentation mask generation technique, and segmentation is performed based on the matched A-FAGVP reinforced at the gaps occurring in the veins.
  • the applicant of the present invention conducted the following experiment to confirm the result of automatic blood vessel segmentation using the matching of the fundus image and the fluorescent fundus angiography image.
  • The experiment was conducted on hardware consisting of an Intel(R) Core(TM) i7-6850K CPU @ 3.6GHz, 32GB RAM, and a GeForce GTX 1080 Ti 11GB, with Ubuntu 16.04 LTS as the OS and a Python 2.7 development environment.
  • The database used in the experiment consists of a total of 10 FP-FAG sets, each composed of one FP image and several FAG images.
  • For training the deep learning network, the 20 FP images and GT (ground truth) of each of the train and test sets of the publicly released DRIVE database were used.
  • The perspective transform was calculated, and matching performed, through feature detection and feature matching using the SIFT technique together with the RANSAC technique.
  • Non-rigid registration is performed through the free-form deformation technique of the coordinate grid expressed by the B-Spline model, in order to achieve more precise matching using the FAG Vessel Probability map derived from the trained deep learning network.
  • The FAG matching result, derived up to the non-rigid matching stage, shows very precise results, as seen in FIG. 10.
  • A map that synthesizes the entire sequence must be derived using the FAGVP maps that have been fully non-rigidly matched. Therefore, results including information along the time axis are derived as the Average FAGVP map and the Maximum FAGVP map.
  • The aggregated image shows that blood vessels are estimated with great precision.
  • the Vessel Probability map of the FP image is similarly derived using deep learning.
  • The database used was DRIVE, and the same network model was used.
  • The network was trained using the FP images of the DRIVE database as input, pre-processed by subtracting the mean and dividing by the standard deviation.
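The stated pre-processing is a standard per-image normalization; a minimal sketch (the epsilon guard against constant images is my addition, not from the text):

```python
import numpy as np

def normalize_fp(image):
    """Pre-processing described above: subtract the per-image mean and
    divide by the per-image standard deviation."""
    image = np.asarray(image, dtype=np.float64)
    return (image - image.mean()) / (image.std() + 1e-8)
```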
  • The result shown in FIG. 12 was derived by using our FP image as the input.
  • FP-FAG matching, the final matching process, registers the A-FAGVP map to the FPVP map.
  • the first step is the FP-FAG rigid body registration.
  • the processor 120 may generate a first blood vessel segmentation image for the first eyeball image, and may generate a second vessel segmentation image for the second eyeball image.
  • the blood vessel segmentation image may mean an image obtained by extracting blood vessels corresponding to the eyeball of the patient from the eyeball image.
  • the first blood vessel segmentation image may mean an image from which only blood vessels are extracted from the first eyeball image
  • the second blood vessel segmentation image may mean an image from which only blood vessels are extracted from the second eyeball image.
  • the processor 120 may generate a first arteriovenous segmented image by separating an artery and a vein from the first vessel segmented image using instructions included in the analysis module 150.
  • the processor 120 may generate a second arteriovenous separated image by separating an artery and a vein from the second blood vessel segmentation image using instructions included in the analysis module 150.
  • the analysis module 150 may include a convolution neural network (CNN).
  • the convolutional neural network of the analysis module 150 may be pre-trained based on pre-set labeled information. Accordingly, the processor 120 may generate the first arteriovenous segmented image by using the first blood vessel segmentation image as an input of the convolutional neural network. Similarly, the processor 120 may generate a second arteriovenous segmented image by using the second blood vessel segmented image as an input of the convolutional neural network.
  • the convolutional neural network of the analysis module 150 may be a pre-learned program using a myriad of labeled blood vessel segmentation images.
  • The convolutional neural network of the analysis module 150 may be a program pre-trained using A artery images (i.e., images in which only the arteries of a vessel segmentation image are displayed) and B vein images (i.e., images in which only the veins of a vessel segmentation image are displayed), where A and B are natural numbers. Accordingly, the convolutional neural network of the analysis module 150 may output the first or second arteriovenous segmented image when the first or second blood vessel segmentation image is input.
  • FIG. 15 illustrates a first arteriovenous segmented image (or a second arteriovenous segmented image) in which arteries and veins are divided.
  • The processor 120 may generate a first blood vessel graph using the first blood vessel segmentation image (i.e., the first arteriovenous segmentation image), and may generate a second blood vessel graph using the second blood vessel segmentation image (i.e., the second arteriovenous segmentation image).
  • FIG. 16 is a diagram for explaining an operation of extracting arterial bifurcation points and vein bifurcation points from a blood vessel segmentation image according to an embodiment of the present invention
  • FIG. 17 is a diagram illustrating a blood vessel graph according to an embodiment of the present invention.
  • the analysis module 150 may store instructions for extracting the arterial center line and the arterial branch point, and the vein center line and the vein branch point from the arteriovenous segmentation image, respectively.
  • the processor 120 may extract the arterial centerline using the instructions of the analysis module 150. That is, the processor 120 may extract only the arterial center line (ie, a line ignoring the thickness of the artery) from the arteriovenous segmented image. In addition, the processor 120 may detect a branch point from the center line of the artery as an artery branch point. Similarly, the processor 120 may extract only the vein center line (ie, a line ignoring the thickness of the vein) from the arteriovenous segmentation image. In addition, the processor 120 may detect a branching point from the vein center line as a vein branching point.
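Branch-point detection on a centerline mask can be sketched by counting 8-neighbors with a convolution. This neighbor-count rule is a common heuristic sketch, not necessarily the exact criterion of the patent:

```python
import numpy as np
from scipy import ndimage

def branch_points(centerline):
    """Detect bifurcation points on a one-pixel-wide centerline (a sketch).

    A pixel in the middle of a vessel centerline has two 8-connected
    skeleton neighbors; a pixel with three or more neighbors is taken
    as a branch point. Near a junction, neighboring pixels may also be
    flagged, so a real pipeline would cluster adjacent detections.
    """
    cl = np.asarray(centerline, dtype=bool)
    kernel = np.ones((3, 3), dtype=int)
    kernel[1, 1] = 0  # do not count the pixel itself
    neighbors = ndimage.convolve(cl.astype(int), kernel,
                                 mode='constant', cval=0)
    return cl & (neighbors >= 3)
```

The same routine applies to the artery centerline and the vein centerline separately, yielding artery branch points and vein branch points.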
  • the processor 120 may detect a portion of the optic nerve papilla in the arteriovenous segmented image (a central portion of a circle in FIG. 16).
  • the processor 120 may define an optic nerve papilla as a root start point, define a branch point of a detected artery as a vertex, and arrange each vertex in the order of branching from the root start point and connect to generate a blood vessel graph.
  • the thickness of the line connecting each vertex may correspond to the thickness of the corresponding blood vessel.
  • the length of the line connecting each vertex may correspond to the length of the corresponding blood vessel. That is, the thickness of a line connecting the first vertex and the second vertex may correspond to the thickness of a corresponding blood vessel, and the length of the line may correspond to the length of the blood vessel.
  • The graph root of the n-th blood vessel graph may correspond to the optic nerve papilla as the root starting point, the graph branching from the graph root to the left may correspond to the artery graph, and the graph branching from the graph root to the right may correspond to the vein graph. Accordingly, the processor 120 can easily recognize the distribution of blood vessels in the n-th eyeball image through the n-th blood vessel graph.
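The described graph structure can be sketched as a simple rooted tree in Python; all names and fields below are illustrative assumptions, not identifiers from the patent:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VesselNode:
    """A vertex of the blood vessel graph (a bifurcation point).

    thickness and length describe the vessel segment connecting this
    vertex to its parent, mirroring the line thickness/length
    correspondence described above.
    """
    name: str
    thickness: float = 0.0   # thickness of the segment from the parent (px)
    length: float = 0.0      # length of the segment from the parent (px)
    children: List["VesselNode"] = field(default_factory=list)

def build_vessel_graph(artery_root: VesselNode,
                       vein_root: VesselNode) -> VesselNode:
    """Root the graph at the optic nerve papilla, with the artery graph
    branching to one side and the vein graph to the other."""
    return VesselNode("optic_papilla", children=[artery_root, vein_root])

def count_vertices(node: VesselNode) -> int:
    """Number of bifurcation vertices in a (sub)graph."""
    return 1 + sum(count_vertices(c) for c in node.children)
```

Because the graph is only a tree of scalar-labeled edges, two such graphs from different visits can be compared with far less computation than comparing full segmentation images.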
  • The processor 120 compares the first blood vessel graph and the second blood vessel graph using the instructions of the determination module 160, and can then use the comparison result to observe changes in the patient's ocular blood vessels over time. As described above, the processor 120 can easily recognize the blood vessel distribution of the patient through the blood vessel graph.
  • the determination module 160 may include instructions for detecting different portions by comparing the first blood vessel graph and the second blood vessel graph. Accordingly, the processor 120 may recognize a difference between the first blood vessel graph and the second blood vessel graph using the instructions of the determination module 160. In the example of FIG. 18, the processor 120 may easily recognize that the circle portion is different.
  • The processor 120 can recognize the temporal change of the blood vessels with a simple operation. Since the vessel segmentation images are very complex, comparing them directly would require a huge amount of computation; but since the blood vessel graph is only a simple connection of lines, the processor 120 can compare and observe the first blood vessel graph and the second blood vessel graph with a simple operation.
  • In step S250, the processor 120 may determine the presence or absence of a disease of the patient according to the observation result, using the instructions of the determination module 160.
  • the determination module 160 may also include a convolution neural network (CNN).
  • The convolutional neural network of the determination module 160 may also be trained in advance, based on pre-labeled information, on the correlation between the type of disease and the change of the blood vessel graph over time. Accordingly, the processor 120 may determine whether the patient has a disease by using the comparison result of the first blood vessel graph and the second blood vessel graph as an input of the convolutional neural network of the determination module 160.
  • the convolutional neural network of the determination module 160 may be a pre-learned program using a myriad of labeled comparison results.
  • The convolutional neural network of the determination module 160 may be a program trained in advance using the results of comparing C blood vessel graphs corresponding to glaucoma and the results of comparing D blood vessel graphs corresponding to cataract, where C and D are natural numbers. Accordingly, the convolutional neural network of the determination module 160 may automatically determine the corresponding disease when the comparison result of the first blood vessel graph and the second blood vessel graph is input.
  • The operation of the automatic disease determination apparatus may be implemented as computer-readable code on a computer-readable recording medium and executed as an automatic disease determination method by a computer.
  • Computer-readable recording media include all types of recording media in which data that can be read by a computer system are stored. For example, there may be read only memory (ROM), random access memory (RAM), magnetic tape, magnetic disk, flash memory, optical data storage device, and the like.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Pathology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Epidemiology (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Primary Health Care (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention relates to a device and method for automatically diagnosing a disease by means of an ophthalmic image and, more specifically, to a device and method for automatically diagnosing a disease using blood vessel segmentation in an ophthalmic image. The automatic disease diagnosis device according to one aspect of the present invention comprises: a processor; and a memory electrically connected to the processor and having a convolutional neural network stored therein, wherein the memory may comprise instructions for: generating a first blood vessel segmentation image for a first ophthalmic image and a second blood vessel segmentation image for a second ophthalmic image according to a predefined method; generating a first blood vessel graph using the first blood vessel segmentation image and a second blood vessel graph using the second blood vessel segmentation image; and diagnosing the presence or absence of a disease through the convolutional neural network by comparing the first blood vessel graph with the second blood vessel graph. According to the present invention, a precise area of a retinal blood vessel in a fundus image can be automatically extracted by registering the fundus image with a fluorescein angiography image, thereby making it possible to accurately analyze ophthalmic blood vessels.
PCT/KR2020/010274 2019-08-30 2020-08-04 Device and method for automatic diagnosis of a disease using blood vessel segmentation in an ophthalmic image WO2021040258A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020190107602A KR102250694B1 (ko) 2019-08-30 2019-08-30 안구 영상 내 혈관 분할을 이용한 자동 질환 판단 장치 및 그 방법
KR10-2019-0107602 2019-08-30

Publications (1)

Publication Number Publication Date
WO2021040258A1 true WO2021040258A1 (fr) 2021-03-04

Family

ID=74684508

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/010274 WO2021040258A1 (fr) 2019-08-30 2020-08-04 Dispositif et procédé de diagnostic automatique d'une maladie à l'aide d'une segmentation de vaisseau sanguin dans une image ophtalmique

Country Status (2)

Country Link
KR (1) KR102250694B1 (fr)
WO (1) WO2021040258A1 (fr)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4273955A1 (fr) 2021-02-26 2023-11-08 Lg Energy Solution, Ltd. Électrode positive et batterie secondaire au lithium la comprenant
WO2023277589A1 (fr) 2021-06-30 2023-01-05 주식회사 타이로스코프 Procédé de guidage de visite pour examen de maladie oculaire thyroïdienne active et système pour la mise en œuvre de celui-ci
WO2023277548A1 (fr) 2021-06-30 2023-01-05 주식회사 타이로스코프 Procédé d'acquisition d'image latérale pour analyse de protrusion oculaire, dispositif de capture d'image pour sa mise en œuvre et support d'enregistrement
WO2023277622A1 (fr) 2021-06-30 2023-01-05 주식회사 타이로스코프 Procédé d'accompagnement à la consultation à l'hôpital en vue du traitement d'une ophtalmopathie thyroïdienne active et système pour sa mise en œuvre
KR102477694B1 (ko) * 2022-06-29 2022-12-14 주식회사 타이로스코프 활동성 갑상선 눈병증 진료를 위한 내원 안내 방법 및 이를 수행하는 시스템
KR102580279B1 (ko) * 2021-10-25 2023-09-19 아주대학교산학협력단 알츠하이머병 진단에 필요한 정보를 제공하는 방법 및 이를 수행하기 위한 장치
KR20230172106A (ko) * 2022-06-15 2023-12-22 경상국립대학교산학협력단 딥러닝 모델 학습 방법, 딥러닝 모델을 이용한 안과질환 진단 방법 및 이를 수행하는 프로그램이 기록된 컴퓨터 판독이 가능한 기록매체

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018114031A (ja) * 2017-01-16 2018-07-26 大日本印刷株式会社 眼底画像処理装置
JP2018143427A (ja) * 2017-03-03 2018-09-20 キヤノン株式会社 眼科装置、装置の制御方法及びプログラム
JP2018171177A (ja) * 2017-03-31 2018-11-08 大日本印刷株式会社 眼底画像処理装置
KR101977645B1 (ko) * 2017-08-25 2019-06-12 주식회사 메디웨일 안구영상 분석방법
KR20190087272A (ko) * 2018-01-16 2019-07-24 한국전자통신연구원 안저영상을 이용한 녹내장 진단 방법 및 이를 위한 장치

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470107A (zh) * 2021-06-04 2021-10-01 广州医科大学附属第一医院 支气管中心线提取方法及其系统和存储介质
CN113470107B (zh) * 2021-06-04 2023-07-14 广州医科大学附属第一医院 支气管中心线提取方法及其系统和存储介质
CN114732431A (zh) * 2022-06-13 2022-07-12 深圳科亚医疗科技有限公司 对血管病变进行检测的计算机实现方法、装置及介质
CN114732431B (zh) * 2022-06-13 2022-10-18 深圳科亚医疗科技有限公司 对血管病变进行检测的计算机实现方法、装置及介质
CN115690124A (zh) * 2022-11-02 2023-02-03 中国科学院苏州生物医学工程技术研究所 高精度单帧眼底荧光造影图像渗漏区域分割方法及系统

Also Published As

Publication number Publication date
KR102250694B1 (ko) 2021-05-11
KR20210026597A (ko) 2021-03-10

Similar Documents

Publication Publication Date Title
WO2021040258A1 Device and method for automatic diagnosis of a disease using blood vessel segmentation in an ophthalmic image
Gagnon et al. Procedure to detect anatomical structures in optical fundus images
US6373968B2 (en) System for identifying individuals
Kavitha et al. Early detection of glaucoma in retinal images using cup to disc ratio
WO2020122672A1 (fr) Appareil et procédé de segmentation automatique des vaisseaux sanguins par appariement d'une image fp et d'une image fag
US20120157820A1 (en) Method and system for detecting disc haemorrhages
US20080123906A1 (en) Image Processing Apparatus And Method, Image Sensing Apparatus, And Program
CN106157279A (zh) 基于形态学分割的眼底图像病变检测方法
KR19990016896A (ko) 얼굴영상에서 눈영역 검출방법
CN115578783B (zh) 基于眼部图像进行眼部疾病识别的装置、方法及相关产品
Mithun et al. Automated detection of optic disc and blood vessel in retinal image using morphological, edge detection and feature extraction technique
Dai et al. A novel meibomian gland morphology analytic system based on a convolutional neural network
KR102250689B1 (ko) 안저 영상과 형광안저혈관조영 영상의 정합을 이용한 자동 혈관 분할 장치 및 방법
CN106960199B (zh) 一种真彩色眼象图白睛区域的完整提取方法
JP2007252707A (ja) 画像解析装置及び画像解析プログラム
Akut FILM: finding the location of microaneurysms on the retina
CN105869151A (zh) 舌分割及舌苔舌质分离方法
WO2020226455A1 (fr) Dispositif de prédiction de neuropathie optique à l'aide d'une image de fond d'œil et procédé de fourniture d'un résultat de prédiction de neuropathie optique
Ali et al. Optic Disc Localization in Retinal Fundus Images Based on You Only Look Once Network (YOLO).
WO2022158843A1 (fr) Procédé d'affinage d'image d'échantillon de tissu, et système informatique le mettant en œuvre
Thanh et al. A Real-Time Classification Of Glaucoma from Retinal Fundus Images Using AI Technology
CN115272333A (zh) 一种杯盘比数据的存储系统
Zhang et al. Automated segmentation of optic disc and cup depicted on color fundus images using a distance-guided deep learning strategy
Doan et al. Implementation of complete glaucoma diagnostic system using machine learning and retinal fundus image processing
Saranya et al. Detecting Exudates in Color Fundus Images for Diabetic Retinopathy Detection Using Deep Learning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20858560

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20858560

Country of ref document: EP

Kind code of ref document: A1