WO2020122672A1 - Apparatus and method for automatically segmenting blood vessels by matching fp image and fag image - Google Patents


Info

Publication number: WO2020122672A1
Authority: WO (WIPO, PCT)
Prior art keywords: image, FAG, matching, fundus, blood vessel
Application number: PCT/KR2019/017720
Other languages: French (fr), Korean (ko)
Inventors: 이수찬, 박상준, 노경진
Original assignee: 서울대학교병원 (Seoul National University Hospital)
Application filed by 서울대학교병원
Publication of WO2020122672A1


Classifications

    • A61B 3/00 — Apparatus for testing the eyes; instruments for examining the eyes
    • A61B 3/12 — Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions, for looking at the eye fundus, e.g. ophthalmoscopes
    • A61B 3/1241 — Objective types for looking at the eye fundus, specially adapted for observation of ocular blood flow, e.g. by fluorescein angiography
    • G06N 3/02 — Neural networks
    • G06N 3/08 — Learning methods
    • G06T 7/30 — Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30041 — Eye; retina; ophthalmic
    • G06T 2207/30101 — Blood vessel; artery; vein; vascular

Definitions

  • The present invention relates to an apparatus and method for automatic blood vessel segmentation using registration of a fundus photography (FP) image and a fluorescein angiography (FAG) image, and more specifically to an apparatus and method that can automatically extract a precise retinal blood vessel region from the fundus image.
  • Fundus photography is one of the most widely used forms of ophthalmic imaging, employed for diagnosis and documentation.
  • Because a fundus image closely resembles the fundus of the subject as observed during examination, it is intuitive to interpret and is therefore widely used in examining for ophthalmic diseases.
  • Clinicians wish to analyze blood vessel characteristics quantitatively from such fundus images and to develop systems for diagnosing diseases based on them, but the accuracy of existing techniques for extracting blood vessel regions remains limited.
  • Patent Document 1: Korean Registered Patent No. 10-1761510 (published on July 26, 2017)
  • The present invention is proposed to solve the above problems. It is an object of the present invention to provide an apparatus and method for automatic blood vessel segmentation that can automatically extract a precise retinal blood vessel region from the fundus image by registering the fundus image with the fluorescein angiography image.
  • An automatic blood vessel segmentation method for achieving this object comprises: acquiring frames of a patient's fundus photography image (FP image) and fluorescein angiography image (FAG image); performing rigid registration of each frame of the FAG image using a feature point matching technique; performing blood vessel extraction on the registered FAG image based on deep learning suited to the characteristics of the FAG image; integrating the blood vessel extraction results of the FAG frames into an average value; performing blood vessel extraction on the fundus image based on deep learning suited to the characteristics of the fundus image; performing registration of the FAG image and the FP image from the extracted blood vessels; and segmenting blood vessels based on the registration result.
  • Performing rigid registration of each frame of the FAG image using a feature point matching technique comprises performing registration of the FAG image using RANSAC (RANdom Sample Consensus) during feature point detection, feature point descriptor extraction, and feature point matching.
  • RANSAC: Random Sample Consensus
  • The deep learning model is a trained convolutional neural network (CNN), and the step of performing blood vessel extraction on the registered FAG image based on deep learning comprises deriving a deep-learning-based FAG Vessel Probability map (FAGVP) of the blood vessels in the FAG image.
  • FAGVP: deep-learning-based FAG Vessel Probability map
  • Integrating the blood vessel extraction results of the FAG frames into an average value comprises performing non-rigid registration based on free-form deformation of a coordinate grid represented by a B-spline model on the FAGVPs, deriving an Average FAG Vessel Probability map (A-FAGVP) as the per-pixel mean of the registered FAGVPs, and deriving a Maximum FAG Vessel Probability map (M-FAGVP) as the per-pixel maximum.
  • M-FAGVP: Maximum FAG Vessel Probability map
  • Performing blood vessel extraction on the fundus image based on deep learning comprises deriving a deep-learning-based Fundus Photo Vessel Probability map (FPVP) for registration between the FP image and the A-FAGVP.
  • FPVP: Fundus Photo Vessel Probability map
  • Segmenting blood vessels based on the registration result comprises: deriving a binary vessel segmentation mask by applying a hysteresis thresholding technique to the per-pixel probability values of the A-FAGVP; removing noise by applying a connected component analysis technique to the derived vessel segmentation mask; and performing segmentation based on the registered A-FAGVP reinforced at the gaps occurring in veins.
  • An automatic blood vessel segmentation apparatus using registration of a fundus image and a fluorescein angiography image for achieving the above object comprises:
  • an image acquisition unit that acquires frames of a patient's fundus photography image (FP image) and fluorescein angiography image (FAG image);
  • a FAG matching unit for rigid registration of each frame of the FAG image using a feature point matching technique;
  • a FAG blood vessel extraction unit performing blood vessel extraction on the registered FAG image based on deep learning suited to the characteristics of the FAG image;
  • an FP blood vessel extraction unit performing blood vessel extraction on the fundus image based on deep learning suited to the characteristics of the fundus image;
  • a FAG-FP matching unit performing registration of the FAG image and the FP image; and a blood vessel segmentation unit segmenting blood vessels based on the registration result.
  • According to the present invention, early diagnosis and treatment of blinding diseases and/or chronic vascular diseases can be achieved.
  • FIG. 1 is a schematic configuration diagram of an apparatus for matching a fundus image and a fluorescein angiography image according to an embodiment of the present invention
  • FIG. 2 shows a result of registration using the SIFT technique according to an embodiment of the present invention
  • FIG. 3 shows results of deriving the FAG Vessel Probability map (FAGVP) from a fluorescein angiography image (FAG image) according to an embodiment of the present invention
  • FIG. 4 shows two images, already roughly aligned by rigid registration, precisely registered through the B-spline technique according to an embodiment of the present invention
  • FIG. 6 is a flowchart of a method of matching a fundus image and a fluorescein angiography image according to an embodiment of the present invention
  • FIG. 7 shows a FAG Vessel Probability map obtained using deep learning according to an embodiment of the present invention
  • FIG. 8 shows results of rigid registration (left) and non-rigid registration (right) according to an embodiment of the present invention
  • FIG. 9 shows an Average FAGVP map (left) and a Maximum FAGVP map (right) according to an embodiment of the present invention
  • FIG. 10 shows an FP image (left) and the vessel probability map derived from it using deep learning (right) according to an embodiment of the present invention
  • FIG. 11 shows results of rigid registration using chamfer matching according to an embodiment of the present invention
  • FIG. 12 shows results of non-rigid registration using the B-spline technique according to an embodiment of the present invention
  • The term "unit" herein means an element that processes at least one function or operation, and may be implemented in hardware, software, or a combination of hardware and software.
  • FIG. 1 is a schematic configuration diagram of an apparatus for matching a fundus image and a fluorescein angiography image according to an embodiment of the present invention, and FIG. 2 shows a result of registration using the SIFT technique according to an embodiment of the present invention.
  • Referring to FIG. 1, an apparatus for matching a fundus image and a fluorescein angiography image includes an image acquisition unit 110, a FAG matching unit 120, a FAG blood vessel extraction unit 130, an integration unit 140, an FP blood vessel extraction unit 150, a FAG-FP matching unit 160, and a blood vessel segmentation unit 170.
  • the image acquisition unit 110 acquires frames of a patient's fundus image (FP image) and a fluorescein angiography image (FAG image).
  • The fundus image (FP image) can be acquired with a fundus camera used for eye disease examination in ophthalmology.
  • The fluorescein angiography image (FAG image) can be acquired with a device that injects a fluorescent substance (fluorescein) into a vein and optically photographs the substance as it circulates through the retinal circulatory system, thereby visualizing the blood vessels. The FAG image consists of multiple frames captured over time.
  • The FAG matching unit 120 registers the frames of the FAG image, which consists of multiple frames over time, using a feature point matching technique. More specifically, the FAG matching unit 120 performs rigid registration of the FAG image using the Scale Invariant Feature Transform (SIFT) technique.
  • SIFT: Scale Invariant Feature Transform
  • The FAG matching unit 120 may perform registration of the FAG image using RANSAC (RANdom Sample Consensus) during feature point detection, feature point descriptor extraction, and feature point matching. Because the blood vessels and background of a FAG image change over time, the present invention uses the SIFT technique to detect a variety of features while minimizing sensitivity to these changes.
  • In this way, various regional features such as the optic disc, fovea, and local vessel structures can be found in the fluorescein angiography image.
  • A perspective transform is then estimated based on the RANSAC (RANdom Sample Consensus) technique.
  • SIFT-based registration is performed with respect to the previously registered image. Through this process, all fluorescein angiography frames are registered sequentially; after all frames have been processed, the final results are registered with respect to the first image. The result of this series of processes is shown in FIG. 2.
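  • The feature-based rigid registration above (feature detection, descriptor matching, RANSAC transform estimation) can be sketched as follows. This is a minimal, illustrative sketch rather than the patent's implementation: it assumes the matched point pairs have already been obtained from SIFT descriptor matching, and it estimates an affine transform instead of the full perspective transform for brevity; the function name `ransac_affine` is hypothetical.

```python
import numpy as np

def ransac_affine(src, dst, n_iter=200, tol=2.0, seed=0):
    """Robustly estimate a 2D affine transform A (3x2) with dst ~ [src|1] @ A.

    src, dst: (N, 2) arrays of matched feature-point coordinates,
    e.g. produced by SIFT descriptor matching (that step is assumed done).
    """
    rng = np.random.default_rng(seed)
    X = np.hstack([src, np.ones((len(src), 1))])       # homogeneous coordinates
    best_count, inlier_mask = -1, None
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)  # minimal sample
        A, *_ = np.linalg.lstsq(X[idx], dst[idx], rcond=None)
        err = np.linalg.norm(X @ A - dst, axis=1)          # residual per match
        mask = err < tol
        if mask.sum() > best_count:
            best_count, inlier_mask = int(mask.sum()), mask
    # Refit on the consensus set (inliers of the best hypothesis).
    A, *_ = np.linalg.lstsq(X[inlier_mask], dst[inlier_mask], rcond=None)
    return A
```

Because the consensus step discards mismatched feature pairs, a few gross outliers among the matches do not corrupt the estimated frame-to-frame transform.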
  • The FAG blood vessel extraction unit 130 performs blood vessel extraction on the registered FAG image based on deep learning suited to the characteristics of the FAG image.
  • The deep learning model may be a trained convolutional neural network (CNN).
  • CNN: convolutional neural network
  • the FAG blood vessel extraction unit 130 may derive a deep learning-based FAG vessel probability map (FAGVP) based on blood vessels in a fluorescein angiography image (FAG image).
  • Although the FAG frames are rigidly registered by the perspective transform through the FAG matching unit 120 described above, properties of the original FAG image, such as changes over time, the optic disc, and the background, are retained and interfere with registration; these must be removed in order to align the blood vessels very closely. Accordingly, the FAG blood vessel extraction unit 130 applies a high-precision deep learning (DL) based vessel segmentation technique to perform blood vessel extraction on the registered FAG image. Deep-learning-based vessel segmentation is a known technique and can remove attributes other than blood vessels with very high probability.
  • DL: deep learning
  • The databases used by prior-art retinal vessel segmentation techniques (DRIVE, STARE, CHASE, HRF) all consist of fundus images (FP images), whose characteristics differ from those of fluorescein angiography images (FAG images).
  • In particular, the appearance of blood vessels differs greatly between the two modalities.
  • When a fundus image is converted to grayscale, a blood vessel is expressed with a lower pixel value than its surroundings, whereas in a FAG image a blood vessel appears brighter than its surroundings.
  • Therefore, the existing fundus image (FP image) database is appropriately converted so that these opposite characteristics are reflected to match the FAG image.
  • As a result, the blood vessels are estimated very precisely as shown in FIG. 3. In FIG. 3, the upper left is a color fundus photo, the upper right a grayscale fundus photo, the lower left an inverse grayscale fundus photo, and the lower right a FAG image.
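  • The conversion described above (grayscale conversion followed by intensity inversion, so that dark vessels in an FP image become bright as in a FAG frame) can be sketched as follows; the function name and the luminance weights are illustrative.

```python
import numpy as np

def fp_to_fag_like(rgb):
    """rgb: HxWx3 uint8 fundus photo -> inverted grayscale float image."""
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)  # BT.601 luma
    gray = rgb.astype(np.float32) @ weights
    return 255.0 - gray  # dark FP vessels become bright, as in a FAG image
```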
  • The integration unit 140 integrates the blood vessel extraction results of the FAG frames into an average value. More specifically, the integration unit 140 performs non-rigid registration based on the free-form deformation of a coordinate grid represented by a B-spline model on the FAGVPs, and derives the Average FAG Vessel Probability map (A-FAGVP) as the average of the registered FAGVPs.
  • A-FAGVP: Average FAG Vessel Probability map
  • Non-rigid registration is performed using a B-spline registration technique that starts from the FAGVPs already roughly aligned by rigid registration and iteratively re-registers them to reduce the remaining error.
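  • The free-form deformation of a coordinate grid can be sketched as follows, assuming SciPy is available: a coarse grid of control-point displacements is interpolated to a dense deformation field with cubic splines (standing in for the B-spline model) and used to warp the moving map. The optimization loop that adjusts the control points to maximize similarity is omitted; all names are illustrative.

```python
import numpy as np
from scipy import ndimage

def ffd_warp(image, ctrl_dy, ctrl_dx):
    """Warp `image` by displacements defined on a coarse control grid."""
    h, w = image.shape
    gy, gx = ctrl_dy.shape
    # Upsample control displacements to a dense deformation field (cubic splines).
    dy = ndimage.zoom(ctrl_dy, (h / gy, w / gx), order=3)
    dx = ndimage.zoom(ctrl_dx, (h / gy, w / gx), order=3)
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    # Sample the image at the displaced coordinates (bilinear interpolation).
    return ndimage.map_coordinates(image, [yy + dy, xx + dx], order=1, mode='nearest')
```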
  • The FP blood vessel extraction unit 150 performs blood vessel extraction on the fundus image based on deep learning suited to the characteristics of the fundus image. More specifically, the FP blood vessel extraction unit 150 derives a deep-learning-based Fundus Photo Vessel Probability map (FPVP) for registration between the FP image and the A-FAGVP. After non-rigid registration through the integration unit 140, the Average FAG Vessel Probability map (A-FAGVP) is computed as the average at each pixel position along the time axis, so as to include all changes of the blood vessels over time, and the Maximum FAG Vessel Probability map (M-FAGVP) is computed as the maximum value at each pixel position.
  • The A-FAGVP, computed as the per-pixel average along the time axis, not only reflects changes in the blood vessels over time but also effectively suppresses small noise that may occur. Because the registration of the FAG image and the FP image according to this embodiment is based on blood vessels, it is desirable to derive the FPVP from the fundus image (FP image) using deep learning (DL) as well. Many vessel segmentation techniques for fundus images (FP images) are already known; the present invention uses a DL model trained on DRIVE, one of the known public databases.
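  • The integration along the time axis can be sketched as follows: the per-pixel mean over the registered frames gives the A-FAGVP (suppressing sporadic noise), while the per-pixel maximum gives the M-FAGVP (keeping every vessel that appears in any frame). The function name is illustrative.

```python
import numpy as np

def aggregate_fagvp(fagvp_stack):
    """fagvp_stack: (T, H, W) registered per-frame vessel probability maps."""
    a_fagvp = fagvp_stack.mean(axis=0)  # Average FAGVP: robust to per-frame noise
    m_fagvp = fagvp_stack.max(axis=0)   # Maximum FAGVP: keeps transient vessels
    return a_fagvp, m_fagvp
```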
  • The FAG-FP matching unit 160 performs registration of the fluorescein angiography image (FAG image) and the fundus image (FP image) from the extracted blood vessels.
  • The reason registration is performed from the extracted blood vessels is that the blood vessel characteristics differ between the FAG image and the FP image.
  • That is, the FAG-FP matching unit 160 performs registration of the FAG image and the FP image based on the FPVP derived from the fundus image using the deep learning technique described above.
  • The FAG-FP matching unit 160 may carry out the registration of the FAG image and the FP image in two stages. First, the FAG-FP matching unit 160 performs rigid registration between the fundus image and the fluorescein angiography image (FP-FAG) using a chamfer matching technique on the blood vessels derived from the FPVP and the A-FAGVP. Second, the FAG-FP matching unit 160 performs non-rigid registration between the final FP-FAG pair based on the free-form deformation of a coordinate grid represented by a B-spline model.
  • The FAG-FP matching unit 160 binarizes the A-FAGVP and FPVP derived in the process described above using an appropriate threshold value, and performs rigid registration using a chamfer matching technique.
  • The FAG-FP matching unit 160 then performs non-rigid registration based on the free-form deformation of a coordinate grid represented by a B-spline model, for precise registration between the blood vessels.
  • Here, the A-FAGVP is registered onto the FPVP.
  • Since the input sources for registration are both vessel probability maps and thus carry only the information concentrated on the blood vessels, a chamfer matching technique is used instead of the SIFT technique described above.
  • The SIFT technique detects features in an image and creates descriptors; matching points are then found from the feature descriptors, and a perspective transform is computed from them using RANSAC.
  • The chamfer matching technique computes, for the target and source binary images, the distance of every pixel to the nearest foreground pixel, and evaluates the similarity between the two distance images. The chamfer matching technique therefore requires far less computation and time than the SIFT technique, and is also effective for registration between vessels.
  • The chamfer matching technique according to the present embodiment is a customized version of the conventional chamfer matching technique, which computes only translation, extended to also consider rotation.
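  • The rotation-extended chamfer matching can be sketched as follows, assuming SciPy is available: the distance transform of the target vessel mask is sampled at the source vessel pixels for each candidate (rotation, translation), and the pose with the smallest mean distance wins. The search ranges, step sizes, and function name are illustrative; a real implementation would search more finely.

```python
import numpy as np
from scipy import ndimage

def chamfer_match(target_mask, source_mask, shifts=range(-5, 6), angles=(-2, 0, 2)):
    """Return (score, angle, dy, dx) of the best pose of source over target."""
    # Distance from every pixel to the nearest target vessel pixel.
    dist = ndimage.distance_transform_edt(~target_mask)
    best = (np.inf, 0, 0, 0)
    for ang in angles:
        rot = ndimage.rotate(source_mask.astype(float), ang,
                             reshape=False, order=0) > 0.5
        for dy in shifts:
            for dx in shifts:
                moved = np.roll(np.roll(rot, dy, axis=0), dx, axis=1)
                ys, xs = np.nonzero(moved)
                if len(ys) == 0:
                    continue
                score = dist[ys, xs].mean()   # mean chamfer distance
                if score < best[0]:
                    best = (score, ang, dy, dx)
    return best
```

Because each candidate pose only requires a lookup into the precomputed distance transform, this costs far less than repeating feature extraction, which is the advantage noted above.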
  • Then, the FAG-FP matching unit 160 performs non-rigid registration for the final FP-FAG alignment.
  • The non-rigid registration is performed between the FP-FAG pair based on the free-form deformation of a coordinate grid represented by a B-spline model.
  • That is, to obtain the final precise FP-FAG registration, the FAG-FP matching unit 160 performs non-rigid registration based on the free-form deformation of a coordinate grid represented by a B-spline model, taking non-rigid motion into account.
  • The two images, already roughly aligned by rigid registration, yield very precise registration results, as shown in FIG. 4, through the free-form deformation of the coordinate grid represented by the B-spline model.
  • In FIG. 4, the upper left is the fundus image, the upper right is the vessel probability map of the fundus image, the lower left is the result before registration between the vessel probability map of the fundus image (white) and that of the fluorescein angiography image (blue), and the lower right is the result after registration between the two.
  • The blood vessel segmentation unit 170 segments the blood vessels based on the registration result.
  • To this end, a very precise vessel segmentation mask that can be used as ground truth (GT) must be derived from the registered A-FAGVP.
  • More specifically, the blood vessel segmentation unit 170 derives a binary vessel segmentation mask by applying a thresholding technique to the per-pixel probability values of the A-FAGVP, and removes noise by applying a connected component analysis technique to the derived vessel segmentation mask.
  • In addition, because the contrast agent arrives late in the veins of a fluorescein angiography image, the blood vessel segmentation unit 170 must fill holes occurring in vein regions.
  • To this end, the blood vessel segmentation unit 170 detects holes inside veins in the registered A-FAGVP and reinforces the relatively low pixel probabilities there. Thereafter, the blood vessel segmentation unit 170 segments thin and thick vessels separately by applying the concept of hysteresis thresholding to the registered A-FAGVP, and then removes very small connected components as noise.
  • In the A-FAGVP, a gap occurs mainly at the center of a vein. This is due to the time difference between the contrast agent reaching the arteries and the veins.
  • In contrast, the average probability at the vessel wall is quite high, because the contrast agent first reaches the vessel wall, which is close to the capillaries.
  • The gap is filled from the binarized image using a morphological closing technique.
  • The A-FAGVP is binarized (Y) and then morphologically closed (X); the difference (X − Y) between the two images is computed and negative values are discarded.
  • The resulting subtraction image represents the vein gaps as a binary image.
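  • The gap-filling step can be sketched as follows, taking Y as the binarized A-FAGVP and X as its morphological closing, as described above; the structuring-element size and function name are illustrative.

```python
import numpy as np
from scipy import ndimage

def vein_gap_mask(binary_vessels):
    """binary_vessels (Y): bool mask from thresholding the A-FAGVP."""
    # X: morphological closing bridges narrow gaps inside vessels.
    closed = ndimage.binary_closing(binary_vessels, structure=np.ones((3, 3)))
    # (X - Y) with negative values discarded: exactly the pixels added by closing.
    return closed & ~binary_vessels
```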
  • The blood vessel segmentation unit 170 performs segmentation based on the registered A-FAGVP reinforced at the gaps occurring in veins.
  • The segmentation is based on the concept of hysteresis rather than a single simple threshold.
  • First, a binary vessel mask of the coarse (thick) vessels is obtained with the first threshold.
  • The second threshold is set low enough that thin vessels can also be detected, yielding a second binary vessel mask.
  • A vessel center line mask is then derived from the second binary vessel mask through a skeletonization technique.
  • The vessel center line mask, in which thin terminal vessels are represented at a thickness close to one pixel, is merged with the first binary vessel mask containing the thick vessels. Because the center line mask is merged in, some broken regions of the first binary vessel mask become connected. Finally, very small regions that are not connected to the main vessels are removed through a connected component labeling technique.
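  • The two-threshold segmentation above can be sketched as follows, using connected-component propagation for the hysteresis step: a high threshold yields the confident (thick) vessels, a low threshold yields a permissive mask that includes thin vessels, only low-threshold components containing a confident pixel are kept, and very small residual components are removed as noise. The skeletonization and merging refinement is omitted, and the threshold values are illustrative.

```python
import numpy as np
from scipy import ndimage

def hysteresis_segment(prob_map, t_high=0.8, t_low=0.3, min_size=2):
    strong = prob_map >= t_high                   # first (high) threshold
    weak = prob_map >= t_low                      # second (low) threshold
    labels, _ = ndimage.label(weak)               # 4-connected components
    keep = np.unique(labels[strong])              # components touching a strong pixel
    mask = np.isin(labels, keep[keep > 0])
    # Connected component analysis: drop very small regions as noise.
    labels2, n2 = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels2, index=np.arange(1, n2 + 1))
    small = np.flatnonzero(sizes < min_size) + 1  # labels of tiny components
    mask[np.isin(labels2, small)] = False
    return mask
```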
  • According to the present invention described above, a precisely segmented blood vessel image can be obtained, as shown in FIG. 5, and through this result image, early diagnosis and treatment of blinding diseases and/or chronic vascular diseases becomes possible.
  • In FIG. 5, the upper left is the whole fundus image, the upper right is the vessel region result for the whole fundus image, the lower left shows the fundus image enlarged around the optic disc center (left) and around the fovea center (right), and the lower right shows the corresponding enlarged vessel region results.
  • FIG. 6 is a flowchart of a method of matching a fundus image and a fluorescein angiography image according to an embodiment of the present invention.
  • The image matching method according to this embodiment is performed in the apparatus for matching a fundus image and a fluorescein angiography image described above.
  • First, frames of a patient's fundus photography image (FP image) and fluorescein angiography image (FAG image) are acquired (S510).
  • The fundus image (FP image) can be acquired with a fundus camera used for eye disease examination in ophthalmology.
  • The fluorescein angiography image (FAG image) can be acquired with a device that injects a fluorescent substance (fluorescein) into a vein and optically photographs the substance as it circulates through the retinal circulatory system, thereby visualizing the blood vessels.
  • Next, each frame of the fluorescein angiography image is registered using a feature point matching technique (S520). More specifically, rigid registration of the FAG image is performed using the Scale Invariant Feature Transform (SIFT) technique. Because the blood vessels and background of a FAG image change over time, the present invention uses the SIFT technique to detect a variety of features while minimizing sensitivity to the changes in the image.
  • The deep learning model may be a trained convolutional neural network (CNN).
  • Next, the blood vessel extraction results of the FAG frames are integrated into an average value (S540). More specifically, non-rigid registration based on the free-form deformation of a coordinate grid represented by a B-spline model is performed on the FAGVPs, and the Average FAG Vessel Probability map (A-FAGVP) of the registered FAGVPs is derived.
  • Next, registration of the fluorescein angiography image (FAG image) and the fundus image (FP image) is performed from the extracted blood vessels (S560).
  • The reason registration is performed from the extracted blood vessels is that the blood vessel characteristics differ between the FAG image and the FP image.
  • That is, registration of the FAG image and the FP image is performed based on the FPVP derived from the fundus image using a deep learning technique.
  • Finally, the blood vessels are segmented based on the registration result (S570). More specifically, a vessel segmentation mask is derived from the registered A-FAGVP using a vessel segmentation mask generation technique, and segmentation is performed based on the registered A-FAGVP reinforced at the gaps occurring in veins.
  • the applicant conducted the following experiment to confirm the result of automatic blood vessel segmentation using matching of the fundus image and fluorescein angiography image.
  • The experiment was conducted on hardware consisting of an Intel(R) Core(TM) i7-6850K CPU @ 3.6 GHz, 32 GB RAM, and a GeForce GTX 1080 Ti 11 GB GPU, under Ubuntu 16.04 LTS with a Python 2.7 development environment.
  • The database used in the experiment comprises a total of 10 FP-FAG sets, each set consisting of a single FP image and multiple FAG images.
  • For deep learning, the 20 FP images and ground truth (GT) of each of the train and test sets of the public DRIVE database were used.
  • A perspective transform was calculated through feature detection and feature matching using the SIFT technique together with the RANSAC technique, and the frames were registered accordingly.
  • deep learning was used to derive the vessel probability map of all FAG images.
  • The input to most previously studied deep-learning-based vessel segmentation methods is an FP image. Therefore, the FP images of the DRIVE database were converted to grayscale and then inverted so as to show characteristics similar to a FAG image, and the network was trained using the inverted FP images as input.
  • The network model used for training was based on SSA-Vessel Segmentation, which recently showed the best performance by applying scale-space theory. During training, each input image was preprocessed by subtracting its mean and dividing by its standard deviation. At test time, FAG images were fed to the trained network to derive the results. The derived results show that even very fine, thin blood vessels are detected, as shown in FIG. 7.
  • Next, non-rigid registration is performed through the free-form deformation of the coordinate grid represented by the B-spline model.
  • The FAG registration results obtained after the non-rigid registration are very precise.
  • Next, a map that synthesizes the entire sequence is derived from the FAGVP maps completed up to the non-rigid registration. Accordingly, results including information along the time axis were derived as the Average FAGVP map and the Maximum FAGVP map.
  • The aggregated images show that the blood vessels are estimated very precisely.
  • the vessel probability map of the FP image is derived using deep learning.
  • the database used was DRIVE, and the network model was the same as above.
  • the network was trained using the FP images of the DRIVE database as-is, with mean subtraction and division by standard deviation as preprocessing. Based on the trained network, the results shown in FIG. 10 were derived using our FP image as input.
  • the final step of registration matches the A-FAGVP map to the FPVP map.
  • the first step is FP-FAG rigid registration.
  • Methods according to an embodiment of the present invention may be implemented as an application or in the form of program instructions that can be executed through various computer components and recorded in a computer-readable recording medium.
  • the computer-readable recording medium may include program instructions, data files, data structures, or the like alone or in combination.
  • the program instructions recorded on the computer-readable recording medium may be specially designed and configured for the present invention, or may be known and available to those skilled in the computer software field. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Examples of program instructions include not only machine code produced by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
  • the hardware device may be configured to operate as one or more software modules to perform processing according to the present invention, and vice versa.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Surgery (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Evolutionary Computation (AREA)
  • Ophthalmology & Optometry (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Vascular Medicine (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Hematology (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The present invention discloses an apparatus and a method for automatically segmenting blood vessels by matching an FP image and an FAG image. A method, according to one aspect of the present invention, may automatically extract a precise region of retinal blood vessels in an FP image by matching the FP image and an FAG image.

Description

Automatic blood vessel segmentation apparatus and method using registration of a fundus image and a fluorescein angiography image
The present invention relates to an apparatus and method for automatic blood vessel segmentation using registration of a fundus photography (FP) image and a fluorescein angiography (FAG) image, and more particularly, to such an apparatus and method that can automatically extract precise retinal blood vessel regions in the fundus image.
This application claims priority based on Korean Patent Application No. 10-2018-0161131 filed on December 13, 2018, and all contents disclosed in the specification and drawings of that application are incorporated herein by reference.
Fundus photography is one of the most widely used ophthalmic imaging modalities for diagnosis and documentation. Because a fundus image is relatively similar to the subject's fundus as observed during examination, and is therefore intuitive, it is used for examining ophthalmic diseases. Clinicians wish to quantitatively analyze blood vessel characteristics based on such fundus images and to develop systems that diagnose diseases on that basis, but the accuracy of existing techniques for precise vessel region extraction is still limited.
(Patent Document 1) Korean Registered Patent No. 10-1761510 (published on July 26, 2017)
The present invention is proposed to solve the above problems, and an object thereof is to provide an apparatus and method for automatic blood vessel segmentation that can automatically extract precise retinal blood vessel regions in a fundus image by registering the fundus image with a fluorescein angiography image.
Other objects and advantages of the present invention can be understood from the following description and will become more apparent from the embodiments of the present invention. It will also be readily appreciated that the objects and advantages of the present invention can be realized by the means set forth in the appended claims and combinations thereof.
In order to achieve the above object, an automatic blood vessel segmentation method using registration of a fundus image and a fluorescein angiography image according to one aspect of the present invention comprises: acquiring frames of a patient's fundus photography (FP) image and fluorescein angiography (FAG) image; rigidly registering the frames of the FAG image using a feature point matching technique; performing deep-learning-based blood vessel extraction on the registered FAG images according to the characteristics of FAG images; integrating the vessel extraction results of the FAG frames into an average; performing deep-learning-based blood vessel extraction on the FP image according to the characteristics of fundus images; registering the FAG image and the FP image based on the extracted blood vessels; and segmenting the blood vessels based on the registration result.
The rigid registration of the frames of the FAG image using the feature point matching technique is characterized by including performing registration of the FAG images using RANSAC (RANdom SAmple Consensus) during feature point detection, feature descriptor extraction, and feature point matching.
The deep learning is a trained convolutional neural network (CNN), and the performing of the deep-learning-based blood vessel extraction on the registered FAG images is characterized by including deriving a deep-learning-based FAG Vessel Probability map (FAGVP) based on the blood vessels in the FAG images.
The integrating of the vessel extraction results of the FAG frames into an average is characterized by including performing non-rigid registration of the FAGVPs based on free-form deformation of a coordinate grid represented by a B-spline model, and deriving an Average FAG Vessel Probability map (A-FAGVP) as the average of the registered FAGVPs and a Maximum FAG Vessel Probability map (M-FAGVP) extracting the maximum value at each pixel position.
The performing of the deep-learning-based blood vessel extraction on the FP image according to the characteristics of fundus images is characterized by including deriving a deep-learning-based Fundus Photo Vessel Probability map (FPVP) for registration between the FP image and the A-FAGVP.
The registering of the FAG image and the FP image based on the extracted blood vessels is characterized by including: performing rigid FP-FAG registration using a Chamfer matching technique on the blood vessels derived from the FPVP and the A-FAGVP; and performing final non-rigid FP-FAG registration based on free-form deformation of a coordinate grid represented by a B-spline model.
The segmenting of the blood vessels based on the registration result is characterized by including: deriving a binary vessel segmentation mask by applying hysteresis thresholding based on the per-pixel probability values of the A-FAGVP; removing noise by applying connected component analysis to the derived vessel segmentation mask; and performing segmentation based on the registered A-FAGVP, which is reinforced even for gaps occurring in veins.
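A minimal sketch of the two post-processing steps named in this aspect, hysteresis thresholding followed by connected component analysis, is shown below. The thresholds, minimum component size, and 4-connectivity are illustrative assumptions, not values from the patent.

```python
from collections import deque

def hysteresis_mask(prob, lo=0.3, hi=0.7):
    """Binary vessel mask via hysteresis thresholding: pixels above `hi`
    are seeds, and pixels above `lo` are kept only if 4-connected to a seed."""
    rows, cols = len(prob), len(prob[0])
    mask = [[False] * cols for _ in range(rows)]
    q = deque((r, c) for r in range(rows) for c in range(cols)
              if prob[r][c] >= hi)
    for r, c in q:
        mask[r][c] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and not mask[nr][nc] and prob[nr][nc] >= lo):
                mask[nr][nc] = True
                q.append((nr, nc))
    return mask

def remove_small_components(mask, min_size=3):
    """Connected component analysis: drop 4-connected regions smaller than
    `min_size`, removing speckle noise from the mask."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    out = [[False] * cols for _ in range(rows)]
    for r0 in range(rows):
        for c0 in range(cols):
            if mask[r0][c0] and not seen[r0][c0]:
                comp, q = [], deque([(r0, c0)])
                seen[r0][c0] = True
                while q:
                    r, c = q.popleft()
                    comp.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and mask[nr][nc] and not seen[nr][nc]):
                            seen[nr][nc] = True
                            q.append((nr, nc))
                if len(comp) >= min_size:
                    for r, c in comp:
                        out[r][c] = True
    return out
```

Hysteresis keeps weak vessel pixels only when they connect to confident ones, and the size filter then removes isolated high-probability speckles.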
In order to achieve the above object, an automatic blood vessel segmentation apparatus using registration of a fundus image and a fluorescein angiography image according to another aspect of the present invention comprises: an image acquisition unit that acquires frames of a patient's FP image and FAG image; a FAG registration unit that rigidly registers the frames of the FAG image using a feature point matching technique; a FAG vessel extraction unit that performs deep-learning-based blood vessel extraction on the registered FAG images according to the characteristics of FAG images; an integration unit that integrates the vessel extraction results of the FAG frames into an average; an FP vessel extraction unit that performs deep-learning-based blood vessel extraction on the FP image according to the characteristics of fundus images; a FAG-FP registration unit that registers the FAG image and the FP image based on the extracted blood vessels; and a vessel segmentation unit that segments blood vessels based on the registration result.
According to one aspect of the present invention, by registering a fundus image and a fluorescein angiography image and automatically extracting precise retinal blood vessel regions in the fundus image, blindness-inducing diseases and/or chronic vascular diseases can be diagnosed and treated early.
The effects obtainable from the present invention are not limited to the effects mentioned above, and other effects not mentioned will be clearly understood by those skilled in the art from the description below.
The following drawings attached to this specification illustrate preferred embodiments of the present invention and serve, together with the detailed description, to further the understanding of the technical idea of the invention; therefore, the present invention should not be construed as being limited to the matters described in such drawings.
FIG. 1 is a schematic configuration diagram of an apparatus for registering a fundus image and a fluorescein angiography image according to an embodiment of the present invention;
FIG. 2 shows registration results obtained using the SIFT technique according to an embodiment of the present invention;
FIG. 3 shows the result of deriving the FAG Vessel Probability map (FAGVP) from a fluorescein angiography (FAG) image according to an embodiment of the present invention;
FIG. 4 shows two images, roughly aligned by rigid registration, that have been registered very precisely through the B-spline technique according to an embodiment of the present invention;
FIG. 5 shows the result of a precisely segmented blood vessel image according to an embodiment of the present invention;
FIG. 6 is a flowchart of a method of registering a fundus image and a fluorescein angiography image according to an embodiment of the present invention;
FIG. 7 shows a FAG Vessel Probability map obtained using deep learning according to an embodiment of the present invention;
FIG. 8 shows results of rigid registration (left) and non-rigid registration (right) according to an embodiment of the present invention;
FIG. 9 shows the Average FAGVP map (left) and the Maximum FAGVP map (right) according to an embodiment of the present invention;
FIG. 10 shows an FP image (left) and the vessel probability map derived from it using deep learning (right) according to an embodiment of the present invention;
FIG. 11 shows results of rigid registration using Chamfer matching according to an embodiment of the present invention;
FIG. 12 shows results of non-rigid registration using the B-spline technique according to an embodiment of the present invention.
The above objects, features, and advantages will become more apparent from the following detailed description taken in conjunction with the accompanying drawings, whereby those skilled in the art to which the present invention pertains can easily practice the technical idea of the present invention. In describing the present invention, detailed descriptions of known technologies related to the present invention will be omitted when they are judged to unnecessarily obscure the subject matter of the invention. Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Throughout the specification, when a part "comprises" a certain component, this means that it may further include other components, rather than excluding other components, unless specifically stated otherwise. Also, terms such as "...unit" described in the specification mean a unit that processes at least one function or operation, which may be implemented in hardware, software, or a combination of hardware and software.
FIG. 1 is a schematic configuration diagram of an apparatus for registering a fundus image and a fluorescein angiography image according to an embodiment of the present invention, and FIG. 2 shows registration results obtained using the SIFT technique according to an embodiment of the present invention.
Referring to FIG. 1, the apparatus for registering a fundus image and a fluorescein angiography image according to the present embodiment includes an image acquisition unit 110, a FAG registration unit 120, a FAG vessel extraction unit 130, an integration unit 140, an FP vessel extraction unit 150, and a FAG-FP registration unit 160.
The image acquisition unit 110 acquires frames of a patient's fundus photography (FP) image and fluorescein angiography (FAG) image. Here, the FP image may be acquired through a fundus camera used in ophthalmology for examining eye diseases, and the FAG image may be acquired through a device that injects a fluorescent substance (fluorescein) into a vein and optically photographs its circulation through the retinal circulatory system to visualize the blood vessels. The above-described images consist of multiple frames over time.
The FAG registration unit 120 registers the frames of the FAG image, which consists of multiple frames over time, using a feature point matching technique. More specifically, the FAG registration unit 120 performs rigid registration of the FAG images using the Scale Invariant Feature Transform (SIFT) technique. The FAG registration unit 120 may register the FAG images using RANSAC (RANdom SAmple Consensus) during feature point detection, feature descriptor extraction, and feature point matching. In a FAG sequence, the blood vessels and the background change over time. Therefore, the present invention utilizes the SIFT technique so that various features can be detected while minimizing the effect of these image changes. Using SIFT, various local features such as the optic disc, fovea, and local vessel structures can be found in the FAG images. Based on these features, and so as to minimize the effect of visual changes, features in two images are matched, and from these matches a perspective transform is estimated based on the RANSAC technique. Rigid registration between the two images is then performed with the estimated perspective transform. Each subsequent image in the sequence is registered with SIFT relative to the previously registered image. The result of this series of steps is shown in FIG. 2. Through this process, all FAG images are registered iteratively. Once all FAG images have been processed, the final registration result is aligned to the first image.
The FAG vessel extraction unit 130 performs deep-learning-based blood vessel extraction on the registered FAG images according to the characteristics of FAG images. Here, the deep learning may be a trained convolutional neural network (CNN). More specifically, the FAG vessel extraction unit 130 may derive a deep-learning-based FAG Vessel Probability map (FAGVP) based on the blood vessels in the FAG images. Although the FAG images are rigidly registered by the perspective transform through the FAG registration unit 120 described above, they still retain the properties of the original FAG images, such as changes over time, the optic disc, and the background. For very close vessel-to-vessel registration, these other properties, which interfere with registration, need to be removed. Accordingly, the FAG vessel extraction unit 130 applies a high-precision deep learning (DL) based vessel segmentation technique to extract the blood vessels of the registered FAG images. Deep-learning-based vessel segmentation techniques are well known and varied, and can remove attributes other than blood vessels with very high probability. However, the databases used by existing retinal vessel segmentation techniques (DRIVE, STARE, CHASE, HRF) all consist of FP images, which have different characteristics from FAG images. In particular, the appearance of the blood vessels is very different: when an FP image is converted to grayscale, vessels have lower pixel values than their surroundings, whereas in a FAG image vessels have higher pixel values than their surroundings. Therefore, the existing FP image database is suitably transformed so that these opposite characteristics are reflected. By inverse-transforming the pixel values of the green channel of the FP image, the vessels can be made to have characteristics similar to those in a FAG image. A network is trained on the transformed FP images against the same ground truth (GT), and through this the FAG Vessel Probability map (FAGVP) is derived from the FAG images. As shown in FIG. 3, the derived results show that the blood vessels are estimated very precisely. In FIG. 3, the upper left is the color fundus photo image, the upper right is the gray fundus photo image, the lower left is the inverse gray fundus photo, and the lower right is the FAG image.
The integration unit 140 integrates the vessel extraction results of the FAG frames into an average. More specifically, the integration unit 140 performs non-rigid registration of the FAGVPs based on the B-spline technique and derives the Average FAG Vessel Probability map (A-FAGVP) of the registered FAGVPs. In more detail, the integration unit 140 performs non-rigid registration based on free-form deformation of a coordinate grid represented by a B-spline model and derives the A-FAGVP as the average of the registered FAGVPs. The SIFT-based rigid registration described above cannot register the images precisely, due to surrounding structures other than the blood vessels and non-rigid motion. Therefore, the present invention performs high-precision non-rigid registration between blood vessels from the FAGVPs, from which the other structures have been removed. Starting from FAGVPs that have already been aligned quite similarly by the rigid registration, the B-spline registration technique is used to perform the non-rigid registration by iteratively computing correspondence information and reducing the error on that basis.
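The free-form deformation idea behind the B-spline registration can be sketched as interpolating a dense displacement field from a coarse control-point grid. The sketch below uses bilinear weights over a 2x2 control neighborhood for brevity, whereas classic B-spline FFD uses cubic B-spline weights over a 4x4 neighborhood; the grid layout and names are illustrative assumptions.

```python
def ffd_displacement(ctrl, x, y, spacing):
    """Displacement at pixel (x, y) interpolated from a coarse control grid.

    `ctrl[i][j]` is the (dx, dy) displacement of control point (i, j),
    with control points placed every `spacing` pixels. An optimizer would
    adjust the `ctrl` entries to minimize the registration error.
    """
    gx, gy = x / spacing, y / spacing
    i, j = int(gx), int(gy)
    fx, fy = gx - i, gy - j
    w = [(1 - fx) * (1 - fy), fx * (1 - fy), (1 - fx) * fy, fx * fy]
    pts = [ctrl[i][j], ctrl[i + 1][j], ctrl[i][j + 1], ctrl[i + 1][j + 1]]
    dx = sum(wk * p[0] for wk, p in zip(w, pts))
    dy = sum(wk * p[1] for wk, p in zip(w, pts))
    return dx, dy
```

Because each pixel's displacement depends only on nearby control points, moving one control point deforms the image locally, which is what lets the registration correct non-rigid vessel motion.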
The FP blood vessel extraction unit 150 performs deep-learning-based vessel extraction tailored to the characteristics of the fundus image. More specifically, the FP blood vessel extraction unit 150 derives a deep-learning-based Fundus Photo Vessel Probability map (FPVP) for registration between the fundus (FP) image and the A-FAGVP. After the non-rigid registration by the integration unit 140, the pixel-wise average along the time axis is computed so as to capture the overall temporal change of the vessels, yielding the Average FAG Vessel Probability map (A-FAGVP), and the pixel-wise maximum is extracted as the Maximum FAG Vessel Probability map (M-FAGVP). The A-FAGVP, computed as the per-pixel average along the time axis, not only reflects the change of the vessels over the whole time span but also effectively suppresses small noise that may occur. Because the registration between the FAG image and the FP image in this embodiment is a precise, vessel-based registration, it is desirable to derive the FPVP from the FP image using deep learning (DL) as well. Meanwhile, many vessel segmentation techniques for FP images are already known; the present invention uses a DL model trained on DRIVE, a publicly available database.
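The aggregation of the registered frames can be sketched with a few lines of NumPy (a minimal sketch; the stack shape and values are illustrative):

```python
import numpy as np

# Stack of registered FAG vessel probability maps: (T frames, H, W), each
# pixel holding a vessel probability in [0, 1].
fagvp_stack = np.random.default_rng(0).random((8, 64, 64))

# A-FAGVP: per-pixel mean along the time axis. It reflects the overall
# temporal change of the vessels and suppresses small noise.
a_fagvp = fagvp_stack.mean(axis=0)

# M-FAGVP: per-pixel maximum along the time axis. It keeps the strongest
# vessel response observed at any time point.
m_fagvp = fagvp_stack.max(axis=0)
```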
The FAG-FP registration unit 160 registers the FAG image and the FP image using the extracted vessels. Registration is performed this way because the vessels appear with different characteristics in the FAG image and in the FP image. As described above, the FAG-FP registration unit 160 performs the registration based on the FPVP derived from the FP image by the deep learning technique.
The FAG-FP registration unit 160 registers the FAG image and the FP image in two main stages. First, it performs rigid FP-FAG registration using the chamfer matching technique on the vessels derived from the FPVP and the A-FAGVP. Second, it performs the final non-rigid FP-FAG registration based on the free-form deformation of a coordinate grid represented by a B-Spline model. More specifically, the FAG-FP registration unit 160 binarizes the A-FAGVP and the FPVP with an appropriate threshold and performs rigid registration using chamfer matching, and then performs the B-Spline-based non-rigid registration for precise vessel-to-vessel alignment.
As described above, the A-FAGVP is registered to the FPVP. In the present invention, since the input sources for the registration are both vessel probability maps and therefore carry only partial information about the vessels, the chamfer matching technique is used instead of the SIFT technique described earlier. The SIFT technique detects features in an image and generates descriptors, finds matching points between feature descriptors, and then computes a perspective transform using RANSAC (RANdom SAmple Consensus). This series of steps requires a very large amount of computation and is effective mainly for images with many complex local features. Chamfer matching, in contrast, computes a distance transform of every pixel of the target and source binary images and measures the similarity between the two distance images. Chamfer matching therefore requires far less computation and time than SIFT, and it is also effective for registration between vessels. The chamfer matching technique of this embodiment is a customized version that improves on the conventional technique, which computes only a translation, by also considering rotation.
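A minimal sketch of chamfer matching extended with rotation, in the spirit of the customization described above (the search ranges, toy images, and function names are illustrative assumptions, not the disclosed implementation):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, rotate, shift

def chamfer_score(target_bin, source_bin):
    # Distance from every pixel to the nearest target vessel pixel.
    dist = distance_transform_edt(~target_bin)
    # Mean distance accumulated over the source vessel pixels: lower is better.
    return dist[source_bin].mean() if source_bin.any() else np.inf

def chamfer_register(target_bin, source_bin, shifts=range(-3, 4), angles=(-2, 0, 2)):
    # Exhaustive search over translation and rotation; conventional chamfer
    # matching considers translation only, rotation is the customization.
    best_score, best_pose = np.inf, (0, 0, 0)
    for ang in angles:
        rot = rotate(source_bin.astype(float), ang, reshape=False, order=0) > 0.5
        for dy in shifts:
            for dx in shifts:
                moved = shift(rot.astype(float), (dy, dx), order=0) > 0.5
                s = chamfer_score(target_bin, moved)
                if s < best_score:
                    best_score, best_pose = s, (dy, dx, ang)
    return best_score, best_pose

# Toy example: a vessel-like cross, offset by (+2 rows, -1 columns).
target = np.zeros((32, 32), bool)
target[16, 4:28] = True
target[4:28, 16] = True
source = np.roll(np.roll(target, 2, axis=0), -1, axis=1)

score, (dy, dx, ang) = chamfer_register(target, source)
```

Because only a distance transform and lookups are needed per candidate pose, this search is much cheaper than recomputing feature descriptors and correspondences.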
Finally, the FAG-FP registration unit 160 performs non-rigid registration for the final FP-FAG alignment. As described above, this non-rigid registration is performed based on the free-form deformation of a coordinate grid represented by a B-Spline model. To achieve the final, precise FP-FAG registration, the FAG-FP registration unit 160 also accounts for non-rigid motion through this B-Spline-based non-rigid registration. Since the two images have already been closely aligned by the rigid registration, the free-form deformation yields a very precise registration result, as shown in FIG. 4. In FIG. 4, the top left is the fundus image, the top right is the vessel probability map of the fundus image, the bottom left is the result before registration between the vessel probability map of the fundus image (white) and that of the FAG image (blue), and the bottom right is the result after registration between the two.
The blood vessel segmentation unit 170 segments the vessels based on the registered result. According to this embodiment, a very precise vessel segmentation mask usable as ground truth (GT) must be derived from the registered A-FAGVP; accordingly, the segmentation unit 170 applies a hysteresis thresholding technique to the per-pixel probability values of the A-FAGVP to derive a binary vessel segmentation mask, and applies connected component analysis to the derived mask to remove noise. First, because the contrast agent fills the veins late in a FAG image, the segmentation unit 170 must fill the holes that occur in the vein regions. It therefore detects the holes inside the veins in the registered A-FAGVP and reinforces the relatively low pixel probabilities there. Next, it applies the hysteresis threshold concept to the registered A-FAGVP to segment thin vessels and thick vessels separately, and then removes very small regions from the connected components to eliminate noise. In the A-FAGVP, gaps appear mainly at the center of the veins. This is caused by the time it takes the contrast agent to pass through the arteries and reach the veins. In addition, because the contrast agent first reaches the vessel wall nearest the capillaries, the average probability along the vessel wall is quite high, whereas the center of a vein fills only near the end and over a short interval, so its average probability is low. The gaps occurring in the veins must therefore be filled to obtain the vessel segmentation mask. More specifically, to find the gaps, the A-FAGVP is first binarized with a low threshold (t=0.3), giving an image X. Next, the gaps in the binarized image are filled using morphological closing. Finally, the A-FAGVP is binarized with a high threshold (t=0.7), giving an image Y; the difference (X-Y) between the two images is computed and negative values are discarded. The resulting subtracted image represents the vein gaps as a binary image. The probabilities of the corresponding region of the A-FAGVP are then reinforced; in the present invention, they are reinforced with a fixed probability (p=0.5).
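The gap-filling step above can be sketched as follows. The thresholds t=0.3 and t=0.7 and the reinforcement probability p=0.5 follow the text; the 3x3 structuring element and the toy probability map are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_closing

def fill_vein_gaps(a_fagvp, t_low=0.3, t_high=0.7, p=0.5):
    # X: low-threshold binarization, closed morphologically to bridge the
    # low-probability gap along the vein center.
    x = binary_closing(a_fagvp > t_low, structure=np.ones((3, 3)))
    # Y: high-threshold binarization keeps only the confident vessel pixels.
    y = a_fagvp > t_high
    # "X - Y" with negative values discarded reduces, for binary images, to
    # the set difference: pixels of the closed mask that are missing from Y.
    gap = x & ~y
    # Reinforce the gap region with the fixed probability p.
    out = a_fagvp.copy()
    out[gap] = np.maximum(out[gap], p)
    return out, gap

# Toy vein: high probability on the walls, a low-probability gap in the center.
vein = np.zeros((16, 16))
vein[6:10, 2:14] = 0.8   # vessel body, filled early by the contrast agent
vein[8, 4:12] = 0.1      # central gap where the contrast arrives last
filled, gap = fill_vein_gaps(vein)
```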
Next, the blood vessel segmentation unit 170 performs segmentation based on the registered A-FAGVP reinforced up to the gaps in the veins. The segmentation is based on the hysteresis concept rather than a single simple threshold. First, a binary vessel mask dominated by thick vessels is obtained with a first threshold. Next, a second, lower threshold is applied so that thin vessels can also be detected, yielding a second binary vessel mask. A vessel center line mask is then derived from the second binary vessel mask by skeletonization. This center line mask, which represents even the thin terminal vessels down to a pixel width close to 1, is merged with the first binary vessel mask containing the thick vessels. Because the center line mask is merged in, some broken regions of the first binary vessel mask become connected. Finally, very small regions that remain unconnected to the main vessels are removed by connected component labeling.
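The hysteresis idea — accepting weak (thin-vessel) responses only where they connect to strong (thick-vessel) responses — can be sketched as below. Note that this simplified sketch keeps whole weak components that touch strong pixels, a standard hysteresis formulation, instead of the skeleton-merging variant described above; thresholds, sizes, and names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import label

def hysteresis_vessel_mask(prob, t_strong=0.7, t_weak=0.4, min_size=5):
    strong = prob > t_strong
    weak = prob > t_weak
    # Label connected components of the weak (thin-vessel) mask and keep
    # those touching at least one strong (thick-vessel) pixel.
    labels, n = label(weak)
    keep = np.zeros(n + 1, bool)
    keep[np.unique(labels[strong])] = True
    keep[0] = False
    mask = keep[labels]
    # Remove very small regions not connected to the main vessels.
    labels2, _ = label(mask)
    sizes = np.bincount(labels2.ravel())
    mask &= (sizes[labels2] >= min_size)
    return mask

# Toy map: a strong vessel trunk, a weak branch attached to it,
# and an isolated weak blob (noise).
prob = np.zeros((16, 16))
prob[8, 2:10] = 0.9   # thick vessel (strong response)
prob[8:13, 9] = 0.5   # thin branch (weak, connected to the trunk)
prob[2, 14] = 0.5     # isolated weak noise pixel
mask = hysteresis_vessel_mask(prob)
```

The weak branch survives because it is connected to the strong trunk, while the isolated weak pixel is discarded — the same behavior the two-threshold scheme above aims for.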
According to the present invention described above, a precisely segmented vessel image can be obtained as shown in FIG. 5, and this result image enables early diagnosis and treatment of blindness-inducing diseases and/or chronic vascular diseases. In FIG. 5 A and B, the top left is the whole fundus image, the top right is the vessel segmentation result for the whole fundus image, the bottom left shows, from the left, the enlarged fundus image around the optic disc center and around the fovea center, and the bottom right shows, from the left, the enlarged vessel segmentation results around the optic disc center and around the fovea center.
FIG. 6 is a flowchart of a method of registering a fundus image and a fluorescein angiography image according to an embodiment of the present invention.
The image registration method according to this embodiment is performed by the apparatus for registering a fundus image and a fluorescein angiography image described above.
Referring to FIG. 6, first, frames of a patient's fundus (FP) image and fluorescein angiography (FAG) image are acquired (S510). The FP image may be acquired with a fundus camera used in ophthalmology to examine eye diseases, and the FAG image may be acquired with a device that injects a fluorescent dye (fluorescein) into a vein and optically photographs the dye circulating through the retinal circulatory system, thereby visualizing the blood vessels.
Next, the frames of the FAG image are registered using a feature point matching technique (S520). More specifically, rigid registration of the FAG image is performed using the Scale Invariant Feature Transform (SIFT) technique. Because the vessels and the background of a FAG image change over time, the present invention uses SIFT so that a variety of features can be detected while the effect of these changes is minimized.
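The consensus step of this registration can be illustrated in isolation. The sketch below replaces the SIFT descriptors and the perspective transform with synthetic point matches and a pure translation model, purely to show how RANSAC rejects mismatched feature pairs; every name and number is an illustrative assumption, not part of the disclosure.

```python
import numpy as np

def ransac_translation(src, dst, n_iter=100, tol=1.0, seed=0):
    # RANSAC: repeatedly fit a model to a random minimal sample (for a pure
    # 2-D translation a single match suffices) and keep the model with the
    # largest inlier set, then refine it on all inliers.
    rng = np.random.default_rng(seed)
    best_t, best_inliers = np.zeros(2), np.zeros(len(src), bool)
    for _ in range(n_iter):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                              # candidate translation
        inliers = np.linalg.norm(dst - (src + t), axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = t, inliers
    best_t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return best_t, best_inliers

# Synthetic matches: 20 correct correspondences shifted by (5, -3),
# plus 5 gross outliers mimicking mismatched feature descriptors.
rng = np.random.default_rng(1)
src = rng.random((25, 2)) * 100
dst = src + np.array([5.0, -3.0])
dst[20:] += rng.random((5, 2)) * 50 + 20                 # corrupt the last 5
t, inliers = ransac_translation(src, dst)
```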
Next, vessel extraction is performed on the registered FAG images based on deep learning tailored to the characteristics of the FAG image (S530). Here, the deep learning model may be a trained convolutional neural network (CNN).
Next, the vessel extraction results of the FAG image frames are integrated into an average (S540). More specifically, non-rigid registration based on the free-form deformation of a coordinate grid represented by a B-Spline model is performed on the FAGVPs, and the Average FAG Vessel Probability map (A-FAGVP) of the registered FAGVPs is derived.
Next, vessel extraction is performed on the fundus image based on deep learning tailored to its characteristics (S550). More specifically, a deep-learning-based Fundus Photo Vessel Probability map (FPVP) is derived for registration between the FP image and the A-FAGVP.
Next, the FAG image and the FP image are registered using the extracted vessels (S560). Registration is performed this way because the vessels appear with different characteristics in the FAG image and in the FP image. According to this embodiment, the registration is performed based on the FPVP derived from the FP image by the deep learning technique.
Next, the vessels are segmented based on the registered result (S570). More specifically, a vessel segmentation mask is derived from the registered A-FAGVP using a vessel segmentation mask generation technique, and segmentation is performed based on the registered A-FAGVP reinforced up to the gaps occurring in the veins.
Meanwhile, the applicant conducted the following experiment to verify the results of the automatic vessel segmentation based on registration of the fundus image and the fluorescein angiography image.
The experiment was conducted on hardware consisting of an Intel(R) Core(TM) i7-6850K CPU @ 3.6 GHz, 32 GB RAM, and a GeForce GTX 1080 Ti 11 GB, under Ubuntu 16.04 LTS with a Python 2.7 development environment. The database used in the experiment comprises ten FP-FAG sets in total, each set consisting of one FP image and multiple FAG images. For deep learning training, the 20 FP images and ground truth (GT) masks of each of the train and test sets of the public DRIVE database were used. First, for registration between the FAG images, feature detection and feature matching with the SIFT technique were performed, and a perspective transform was computed and applied using the RANSAC technique. Next, deep learning was used to derive the vessel probability maps of all FAG images. The input of existing, well-studied deep-learning-based vessel segmentation models is an FP image. We therefore converted the FP images of the DRIVE database to gray scale and inverted their intensities so that they exhibit characteristics similar to FAG images, and then trained the deep learning model with the inverted FP images as input. The network model used for training was based on SSA-Vessel Segmentation, which recently showed the best performance by applying scale-space theory. As preprocessing during training, the mean of the input image was subtracted and the result was divided by the standard deviation. At test time, the trained network was fed FAG images as input to derive the results. As shown in FIG. 7, the derived results capture even very thin vessels precisely.
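The training-time preparation of the input can be sketched as follows. The normalization (subtract the mean, divide by the standard deviation) follows the text; implementing the "inverse transform" as `max - pixel` is an assumption, one common way to make vessels appear bright on a dark background as in a FAG image.

```python
import numpy as np

def prepare_training_input(fp_gray):
    # Invert the gray-scale FP image so that, as in a FAG image, the vessels
    # appear bright on a dark background (assumed inversion: max - pixel),
    # then normalize: subtract the mean and divide by the standard deviation.
    inv = fp_gray.max() - fp_gray
    return (inv - inv.mean()) / inv.std()

# Toy 2x2 gray-scale "FP image" with intensities in [0, 1].
img = np.array([[0.0, 0.2], [0.6, 1.0]])
x = prepare_training_input(img)
```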
Then, for more precise registration, non-rigid registration was performed on the FAG vessel probability maps derived from the trained deep learning model, using the free-form deformation of a coordinate grid represented by a B-Spline model. As shown in FIG. 8, the FAG registration results obtained after the non-rigid registration are very precise. From the fully registered FAGVP maps, a map aggregating the entire sequence must then be derived; we therefore derived the results containing the information along the time axis as the Average FAGVP map and the Maximum FAGVP map. As shown in FIG. 9, the aggregated image shows that the vessels are estimated very precisely.
Now, for registration between the FP image and the FAG image, the vessel probability map of the FP image is likewise derived using deep learning. As when deriving the FAGVP maps, the DRIVE database and the same network model were used. For training, the FP images of the DRIVE database were used as input without modification, with the mean subtracted and the result divided by the standard deviation as preprocessing. Based on the trained model, our FP images were used as input in the experiment to derive the results shown in FIG. 10.
The final registration step, FP-FAG registration, registers the A-FAGVP map to the FPVP map. The first stage is rigid FP-FAG registration. Since the vessel probability maps for each image have already been derived, binary images were generated from the vessel probability maps and rigid registration was performed using chamfer matching. However, as shown in FIG. 11, although the images are brought to similar positions, slight errors still occur.
Therefore, as in the FAG registration stage, we performed non-rigid registration using the free-form deformation of a coordinate grid represented by a B-Spline model. As shown in FIG. 12, the result of the non-rigid registration is far more precise than the previous result.
The methods according to embodiments of the present invention may be implemented as an application, or in the form of program instructions that can be executed through various computer components and recorded on a computer-readable recording medium. The computer-readable recording medium may contain program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be specially designed and constructed for the present invention, or may be known and available to those skilled in the computer software field. Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include not only machine code produced by a compiler but also high-level language code that can be executed by a computer using an interpreter. The hardware device may be configured to operate as one or more software modules to perform the processing according to the present invention, and vice versa.
While this specification contains many features, these features should not be construed as limiting the scope of the invention or of the claims. Features described in separate embodiments of this specification may also be implemented in combination in a single embodiment. Conversely, various features described in a single embodiment may be implemented in multiple embodiments separately or in any suitable combination.
Although operations are depicted in the drawings in a particular order, this should not be understood as requiring that the operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments; the described application components and systems can generally be integrated together in a single software product or packaged into multiple software products.
The present invention described above is not limited by the foregoing embodiments and the accompanying drawings, since those of ordinary skill in the art to which the present invention pertains can make various substitutions, modifications, and changes without departing from the technical spirit of the present invention.

Claims (8)

  1. An automatic blood vessel segmentation method using registration of a fundus image and a fluorescein angiography image, the method comprising:
    acquiring frames of a patient's fundus (FP) image and fluorescein angiography (FAG) image;
    performing rigid registration of the frames of the FAG image using a feature point matching technique;
    performing vessel extraction on the registered FAG image based on deep learning tailored to the characteristics of the FAG image;
    integrating the vessel extraction results of the FAG image frames into an average;
    performing vessel extraction on the fundus image based on deep learning tailored to the characteristics of the fundus image;
    performing registration of the FAG image and the FP image using the extracted vessels; and
    segmenting blood vessels based on the registered result.
  2. The method of claim 1, wherein the rigid registration of the frames of the FAG image using the feature point matching technique comprises:
    performing registration of the FAG image using RANSAC (RANdom SAmple Consensus) in the course of feature point detection, feature point descriptor extraction, and feature point matching.
  3. The method of claim 1, wherein the deep learning is a trained convolutional neural network (CNN), and
    wherein the vessel extraction of the registered FAG image based on deep learning comprises:
    deriving a deep-learning-based FAG Vessel Probability map (FAGVP) from the vessels in the FAG image.
  4. The method according to claim 1, wherein
    the step of integrating the blood vessel extraction results of the fluorescein angiography image (FAG image) frames into an average value comprises:
    performing non-rigid registration of the FAGVPs based on a free-form deformation of a coordinate grid represented by a B-spline model, and deriving an Average FAG Vessel Probability map (A-FAGVP) as the per-pixel average of the registered FAGVPs and a Maximum FAG Vessel Probability map (M-FAGVP) as the per-pixel maximum.
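Once the per-frame FAGVPs are non-rigidly registered, the integration step of claim 4 reduces to two per-pixel reductions. The sketch below uses toy maps and omits the B-spline free-form deformation entirely; it only shows how A-FAGVP (average) and M-FAGVP (maximum) relate to the frame stack.

```python
import numpy as np

# Stack of per-frame FAG vessel probability maps, assumed already registered;
# three toy 4x4 maps stand in for the real FAGVP frames.
fagvp = np.stack([
    np.full((4, 4), 0.2),
    np.full((4, 4), 0.6),
    np.full((4, 4), 0.7),
])
fagvp[2, 1, 1] = 0.95            # a vessel segment visible in one frame only

a_fagvp = fagvp.mean(axis=0)     # A-FAGVP: per-pixel average over frames
m_fagvp = fagvp.max(axis=0)      # M-FAGVP: per-pixel maximum over frames
```

The maximum map preserves vessels that fill with dye in only some frames, while the average map suppresses frame-specific noise, which is why both are derived.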
  5. The method according to claim 1, wherein
    the step of extracting blood vessels from the fundus image using deep learning tailored to the characteristics of the fundus image comprises:
    deriving a deep-learning-based Fundus Photo Vessel Probability map (FPVP) for registration between the fundus image (FP image) and the A-FAGVP.
  6. The method according to claim 1, wherein
    the step of performing registration between the fluorescein angiography image (FAG image) and the fundus image (FP image) based on the extracted blood vessels comprises:
    performing rigid registration between the fundus image and the fluorescein angiography image (FP-FAG) by applying a chamfer matching technique to the blood vessels derived from the FPVP and the A-FAGVP; and
    performing a final FP-FAG non-rigid registration based on a free-form deformation of a coordinate grid represented by a B-spline model.
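Chamfer matching scores a candidate transform by the average distance-transform value of one vessel map sampled at the transformed vessel pixels of the other. A minimal sketch, restricted to integer translations and a brute-force distance transform (the claim covers full rigid registration, and real code would use a fast distance transform such as SciPy's), might look like this; all names and parameters are illustrative.

```python
import numpy as np

def distance_transform(mask):
    """Brute-force Euclidean distance to the nearest 'on' pixel (tiny images only)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1)
    H, W = mask.shape
    yy, xx = np.mgrid[0:H, 0:W]
    d = np.sqrt((yy[..., None] - pts[:, 0]) ** 2 + (xx[..., None] - pts[:, 1]) ** 2)
    return d.min(axis=-1)

def chamfer_shift(src_mask, dst_mask, max_shift=3):
    """Find the integer translation of src vessel pixels that minimizes the
    mean distance-transform value of dst: the chamfer matching score."""
    dt = distance_transform(dst_mask)
    ys, xs = np.nonzero(src_mask)
    H, W = dst_mask.shape
    best, best_shift = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            y = np.clip(ys + dy, 0, H - 1)
            x = np.clip(xs + dx, 0, W - 1)
            score = dt[y, x].mean()
            if score < best:
                best, best_shift = score, (dy, dx)
    return best_shift

# A short diagonal "vessel" shifted by (2, 1) between the two binary maps.
src = np.zeros((12, 12), bool)
dst = np.zeros((12, 12), bool)
for i in range(4, 8):
    src[i, i] = True
    dst[i + 2, i + 1] = True
shift = chamfer_shift(src, dst)
```

The recovered shift aligns the two vessel masks exactly; the subsequent B-spline free-form deformation would then absorb the remaining local (non-rigid) misalignment.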
  7. The method according to claim 1, wherein
    the step of segmenting blood vessels based on the registration result comprises:
    deriving a binary vessel segmentation mask by applying a hysteresis thresholding technique based on the per-pixel probability values of the A-FAGVP;
    removing noise by applying a connected component analysis technique to the derived vessel segmentation mask; and
    performing segmentation based on the registered A-FAGVP, in which even gaps occurring in veins are reinforced.
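The two post-processing operations in claim 7 can be sketched directly: hysteresis thresholding keeps strong pixels plus weak pixels connected to them, and connected-component analysis drops small isolated blobs. The thresholds, connectivity choice, and minimum size below are invented for the example.

```python
import numpy as np
from collections import deque

def hysteresis_threshold(prob, low=0.3, high=0.7):
    """Keep pixels >= high, plus pixels >= low that are 4-connected to a
    strong pixel: the dual-threshold scheme named in the claim."""
    strong = prob >= high
    weak = prob >= low
    out = strong.copy()
    q = deque(zip(*np.nonzero(strong)))
    H, W = prob.shape
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W and weak[ny, nx] and not out[ny, nx]:
                out[ny, nx] = True
                q.append((ny, nx))
    return out

def remove_small_components(mask, min_size=3):
    """Connected-component analysis: drop 4-connected blobs below min_size."""
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    H, W = mask.shape
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        comp, q = [], deque([(sy, sx)])
        seen[sy, sx] = True
        while q:
            y, x = q.popleft()
            comp.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    q.append((ny, nx))
        if len(comp) >= min_size:
            for y, x in comp:
                out[y, x] = True
    return out

# A vessel ridge (strong core, weak fringe) plus an isolated weak speck.
prob = np.zeros((6, 8))
prob[2, 1:7] = 0.8        # strong vessel centreline
prob[3, 1:7] = 0.4        # weak pixels attached to it -> kept
prob[5, 0] = 0.4          # weak, unattached -> dropped by hysteresis
mask = hysteresis_threshold(prob)
mask = remove_small_components(mask, min_size=3)
```

Libraries such as scikit-image ship equivalents of both operations, so a production pipeline would likely not hand-roll them.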
  8. An automatic blood vessel segmentation apparatus using registration of a fundus image and a fluorescein angiography image, comprising:
    an image acquisition unit which acquires a fundus image (FP image) and fluorescein angiography image (FAG image) frames of a patient;
    an FAG registration unit which rigidly registers the frames of the FAG image using a feature-point matching technique;
    an FAG vessel extraction unit which extracts blood vessels from the registered FAG image using deep learning tailored to the characteristics of the FAG image;
    an integration unit which integrates the blood vessel extraction results of the FAG image frames into an average value;
    an FP vessel extraction unit which extracts blood vessels from the fundus image using deep learning tailored to the characteristics of the fundus image;
    an FAG-FP registration unit which performs registration between the FAG image and the FP image based on the extracted blood vessels; and
    a vessel segmentation unit which segments blood vessels based on the registration result.
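The way the claimed units chain together can be shown as a purely illustrative skeleton: every class and method name below is invented, and each unit is a trivial stand-in (identity registration, normalization as a fake probability map, a fixed threshold) rather than the deep-learning and registration machinery the claims describe.

```python
import numpy as np

class VesselSegmentationPipeline:
    """Hypothetical skeleton mirroring the units of the claimed apparatus."""

    def register_fag_frames(self, frames):       # FAG registration unit
        return frames                            # identity stand-in for rigid reg.

    def extract_fag_vessels(self, frame):        # FAG vessel extraction unit (CNN)
        return frame / frame.max()               # normalization as a toy "FAGVP"

    def integrate(self, maps):                   # integration unit -> A-FAGVP
        return np.mean(maps, axis=0)             # per-pixel average over frames

    def extract_fp_vessels(self, fp):            # FP vessel extraction unit (CNN)
        return fp / fp.max()                     # toy "FPVP"

    def register_fp_fag(self, fpvp, a_fagvp):    # FAG-FP registration unit
        return a_fagvp                           # identity stand-in

    def segment(self, prob, thr=0.5):            # vessel segmentation unit
        return prob >= thr                       # fixed threshold stand-in

    def run(self, fp_image, fag_frames):
        frames = self.register_fag_frames(fag_frames)
        maps = [self.extract_fag_vessels(f) for f in frames]
        a_fagvp = self.integrate(maps)
        fpvp = self.extract_fp_vessels(fp_image)
        warped = self.register_fp_fag(fpvp, a_fagvp)
        return self.segment(warped)

# Toy inputs: a flat FP image and three FAG "frames" with a diagonal vessel.
fp = np.ones((4, 4))
fag = [np.eye(4) * (i + 1) + 0.1 for i in range(3)]
mask = VesselSegmentationPipeline().run(fp, fag)
```

Only the data flow between units is meaningful here; each stub would be replaced by the corresponding technique from claims 2 through 7.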
PCT/KR2019/017720 2018-12-13 2019-12-13 Apparatus and method for automatically segmenting blood vessels by matching fp image and fag image WO2020122672A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020180161131A KR102250688B1 (en) 2018-12-13 2018-12-13 Method and device for automatic vessel extraction of fundus photography using registration of fluorescein angiography
KR10-2018-0161131 2018-12-13

Publications (1)

Publication Number Publication Date
WO2020122672A1

Family

ID=71077498

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/017720 WO2020122672A1 (en) 2018-12-13 2019-12-13 Apparatus and method for automatically segmenting blood vessels by matching fp image and fag image

Country Status (2)

Country Link
KR (1) KR102250688B1 (en)
WO (1) WO2020122672A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114305321A (en) * 2022-03-15 2022-04-12 汕头大学·香港中文大学联合汕头国际眼科中心 Method and system for measuring thickness of retinal vessel wall
CN115690124A (en) * 2022-11-02 2023-02-03 中国科学院苏州生物医学工程技术研究所 High-precision single-frame fundus fluorography image leakage area segmentation method and system

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
KR20230127762A (en) 2022-02-25 2023-09-01 서울대학교병원 Device and method for detecting lesions of disease related to body component that conveyes fluid from medical image
CN114565620B (en) * 2022-03-01 2023-04-18 电子科技大学 Fundus image blood vessel segmentation method based on skeleton prior and contrast loss

Citations (5)

Publication number Priority date Publication date Assignee Title
KR20150012894A (en) * 2013-07-26 2015-02-04 서울여자대학교 산학협력단 Adaptive vessel segmentation system and the method for CTA
JP2015506730A (en) * 2011-12-09 2015-03-05 バードナー,スティーブン A method for combining multiple eye images into a plenoptic multifocal image
KR101761510B1 (en) * 2016-05-27 2017-07-26 이화여자대학교 산학협력단 Apparatus and method for generating fundus image filters for vascular visualization of fundus image
WO2018109640A1 (en) * 2016-12-15 2018-06-21 Novartis Ag Adaptive image registration for ophthalmic surgery
JP2018171177A (en) * 2017-03-31 2018-11-08 大日本印刷株式会社 Fundus image processing device



Also Published As

Publication number Publication date
KR20200075152A (en) 2020-06-26
KR102250688B1 (en) 2021-05-12

Similar Documents

Publication Publication Date Title
WO2020122672A1 (en) Apparatus and method for automatically segmenting blood vessels by matching fp image and fag image
WO2021040258A1 (en) Device and method for automatically diagnosing disease by using blood vessel segmentation in ophthalmic image
Wong et al. Level-set based automatic cup-to-disc ratio determination using retinal fundus images in ARGALI
Kavitha et al. Early detection of glaucoma in retinal images using cup to disc ratio
JP6842481B2 (en) 3D quantitative analysis of the retinal layer using deep learning
CN106157279A (en) Eye fundus image lesion detection method based on morphological segment
KR102155381B1 (en) Method, apparatus and software program for cervical cancer decision using image analysis of artificial intelligence based technology
KR102015224B1 (en) Method and apparatus for diagnosing cerebral hemorrhagic and brain tumor lesions based on deep learning
Giachetti et al. Multiresolution localization and segmentation of the optical disc in fundus images using inpainted background and vessel information
CN113436070B (en) Fundus image splicing method based on deep neural network
WO2010071597A1 (en) A method and system for determining the position of an optic cup boundary
KR102250689B1 (en) Method and device for automatic vessel extraction of fundus photography using registration of fluorescein angiography
CN112950737A (en) Fundus fluorescence radiography image generation method based on deep learning
Zhao et al. Attention residual convolution neural network based on U-net (AttentionResU-Net) for retina vessel segmentation
Prentasic et al. Weighted ensemble based automatic detection of exudates in fundus photographs
CN111326238A (en) Cancer cell detection device based on sliding window
CN111292285B (en) Automatic screening method for diabetes mellitus based on naive Bayes and support vector machine
Peroni et al. A deep learning approach for semantic segmentation of gonioscopic images to support glaucoma categorization
Pal et al. A fully connected reproducible SE-UResNet for multiorgan chest radiographs segmentation
Nugroho et al. Multithresholding approach for segmenting plasmodium parasites
CN114612484B (en) Retina OCT image segmentation method based on unsupervised learning
CN115272333A (en) Storage system of cup-to-disk ratio data
CN109816665A (en) A kind of fast partition method and device of optical coherence tomographic image
KR102304609B1 (en) Method for refining tissue specimen image, and computing system performing the same
CN114140830A (en) Repeated identification inhibition method based on circulating tumor cell image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19895086

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19895086

Country of ref document: EP

Kind code of ref document: A1