US10510145B2 - Medical image comparison method and system thereof - Google Patents

Medical image comparison method and system thereof

Info

Publication number
US10510145B2
US10510145B2
Authority
US
United States
Prior art keywords
image
window
window areas
overlapping
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US15/855,082
Other versions
US20190197689A1 (en)
Inventor
Jian-Ren Chen
Guan-An Chen
Su-Chen Huang
Yue-Min Jiang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial Technology Research Institute ITRI
Original Assignee
Industrial Technology Research Institute ITRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial Technology Research Institute ITRI filed Critical Industrial Technology Research Institute ITRI
Priority to US15/855,082 priority Critical patent/US10510145B2/en
Assigned to INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE reassignment INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHEN, GUAN-AN, CHEN, JIAN-REN, HUANG, SU-CHEN, JIANG, Yue-min
Priority to TW107101310A priority patent/TWI656323B/en
Priority to CN201810029729.XA priority patent/CN109978811A/en
Publication of US20190197689A1 publication Critical patent/US20190197689A1/en
Application granted granted Critical
Publication of US10510145B2 publication Critical patent/US10510145B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • A61B 3/0025: Apparatus for testing the eyes; operational features characterised by electronic signal processing, e.g. eye models
    • A61B 3/12: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for looking at the eye fundus, e.g. ophthalmoscopes
    • A61B 3/1241: Ophthalmoscopes specially adapted for observation of ocular blood flow, e.g. by fluorescein angiography
    • A61B 3/145: Arrangements specially adapted for eye photography by video means
    • G06F 18/23: Pattern recognition; analysing; clustering techniques
    • G06K 9/2054, G06K 9/4671, G06K 9/6202, G06K 9/6218 (legacy image-recognition codes)
    • G06T 7/0014: Biomedical image inspection using an image reference approach
    • G06T 7/0016: Biomedical image inspection using an image reference approach involving temporal comparison
    • G06T 7/11: Region-based segmentation
    • G06T 7/33: Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06T 7/337: Image registration using feature-based methods involving reference images or patches
    • G06V 10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 10/50: Extraction of image or video features by performing operations within image blocks or by using histograms, e.g. histogram of oriented gradients [HoG]
    • G06V 10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V 40/193: Eye characteristics: preprocessing; feature extraction
    • G06V 40/197: Eye characteristics: matching; classification
    • G06T 2207/20021: Indexing scheme: dividing image into blocks, subimages or windows
    • G06T 2207/30041: Indexing scheme: eye; retina; ophthalmic
    • G06T 2207/30168: Indexing scheme: image quality inspection

Definitions

  • the present disclosure relates to a medical image comparison method and system thereof, and more particularly, to a medical image comparison method and system for assisting medical doctors in medical diagnosis.
  • retinal examination is a diagnostic procedure that allows the ophthalmologist to obtain a better view of the retina of the eye and to look for signs of eye disease such as retinal detachment, optic neuritis, macular degeneration, glaucoma and other retinal issues.
  • the eye is the only organ whose nerves can be observed in a non-invasive manner, and thus it can generally be treated as a microcosm of all the important organs in the body. Therefore, retinal images can not only be used by ophthalmologists for diagnosing eye diseases, but can also clinically reveal representative pathological changes in organs other than the eyes. That is, physicians of other disciplines, such as metabolism or neurology, can use retinal images for detecting early pathological changes of other diseases, such as diabetes, high blood pressure, high blood cholesterol, autoimmune disease, etc.
  • the present disclosure provides a medical image comparison method and system, which can be used for allowing a physician to compare medical images of a specific area in a patient that are taken at different times so as to locate a region of variation from the medical images, thus assisting physicians in medical diagnosis.
  • the present disclosure provides a medical image comparison method, which comprises the steps of: obtaining a plurality of images of a body at different time points, while allowing the plural images to include a first image captured at a first time point and a second image captured at a second time point; obtaining a first feature point group by detecting feature points in the first image, while obtaining a second feature point group by detecting feature points in the second image; enabling an overlapping image information to be generated by aligning the second image with the first image according to the first feature point group and the second feature point group, while allowing the overlapping image information to include a first matching image corresponding to the first image and a second matching image corresponding to the second image; and sequentially extracting corresponding window areas from the first matching image and the second matching image in the overlapping image information respectively by the use of a sliding window mask, while calculating an image difference ratio for each of the window areas according to the ratio between the number of matching points and the number of unmatched points in the corresponding window areas of the first and the second matching images.
  • the present disclosure provides a medical image comparison system, which comprises: an image processing device and an image calculation device.
  • the image processing device further comprises: an image capturing module, a feature extracting module and an information alignment module.
  • the image capturing module is used for obtaining a plurality of images of a body at different time points, while allowing the plural images to include a first image captured at a first time point and a second image captured at a second time point.
  • the feature extracting module is used for obtaining a first feature point group by detecting feature points in the first image, while obtaining a second feature point group by detecting feature points in the second image.
  • the information alignment module is coupled to the feature extracting module and is used for aligning the second image with the first image according to the first feature point group and the second feature point group so as to generate an overlapping image information, whereas the overlapping image information includes a first matching image corresponding to the first image and a second matching image corresponding to the second image.
  • the image calculation device, which is coupled to the image processing device, further comprises: a difference comparison module, provided for sequentially extracting corresponding window areas from the first matching image and the second matching image in the overlapping image information respectively by the use of a sliding window mask, while calculating an image difference ratio for each of the window areas according to the ratio between the number of matching points and the number of unmatched points in the corresponding window areas of the first and the second matching images.
  • the medical image comparison method and system of the present disclosure are capable of rapidly comparing medical images of a specific area in a patient that are taken at different times according to the image difference ratio, so as to locate a region of variation from the medical images, whereby physicians can be relieved not only from the time-consuming and labor-intensive task of manually searching for all the differences between retinal images taken at different times, but also from possible misdiagnosis caused by human error.
  • physicians are enabled to make a more objective evaluation to determine whether or not the region of variation is an indication of deterioration, assisting the physicians in arranging corresponding tracking procedures.
  • FIG. 1 is a schematic diagram showing a medical image comparison system according to an embodiment of the present disclosure.
  • FIG. 2 is a flow chart depicting steps performed in a medical image comparison method according to an embodiment of the present disclosure.
  • FIG. 3A is a schematic diagram showing a retinal image of an eye that is captured at a first time point according to an embodiment of the present disclosure.
  • FIG. 3B is a schematic diagram showing another retinal image of the same eye of FIG. 3A that is captured at a second time point.
  • FIG. 4A is a schematic diagram showing feature points detected from the first image of FIG. 3A .
  • FIG. 4B is a schematic diagram showing feature points detected from the second image of FIG. 3B .
  • FIG. 5A is a schematic diagram showing the aligning of the second image to the first image according to an embodiment of the present disclosure.
  • FIG. 5B is a schematic diagram showing an overlapping image information extracted from FIG. 5A.
  • FIG. 6 is a schematic diagram showing a sliding window mask according to an embodiment of the present disclosure.
  • FIG. 7A is a schematic of statistic distribution between the image difference ratio and cumulative amount of window area according to an embodiment of the present disclosure.
  • FIG. 7B is a schematic of statistic distribution between the image difference ratio and cumulative amount of window area according to another embodiment of the present disclosure.
  • FIG. 8A is a schematic of statistic distribution using a connected component labeling method.
  • FIG. 8B is a schematic diagram showing an overlapping image information using a connected component labeling method according to an embodiment of the present disclosure.
  • FIG. 8C is a schematic diagram showing an overlapping image information using a connected component labeling method according to another embodiment of the present disclosure.
  • FIG. 9A is a schematic of statistic distribution between the image difference ratio and cumulative amount in window area according to an embodiment of the present disclosure.
  • FIG. 9B is a schematic of statistic distribution between the image difference ratio and the amount in window area according to an embodiment of the present disclosure.
  • FIG. 9C is a schematic of statistic distribution of connected region area in window area.
  • FIG. 1 is a schematic diagram showing a medical image comparison system according to an embodiment of the present disclosure.
  • an exemplified medical image comparison system is disclosed, which comprises: an image processing device 12 , an image calculation device 14 and an output device 16 .
  • the image processing device 12 is used for performing a series of image processing procedures on images of an object that are captured at different time points, including image import, feature extraction, feature point matching, and image alignment, so as to obtain a correlated image area of the images to be used as a region of interest (ROI) for analysis.
  • the image processing device further comprises: an image capturing module 122 , a feature extracting module 124 and an information alignment module 126 .
  • the image capturing module 122 can be any electronic device with imaging capability, such as a camcorder with one or more CCD/CMOS sensors, but it is not limited thereto.
  • the image capturing module 122 can be an image information transceiver interface for receiving image information from another imaging unit or for transmitting image information to other units.
  • the image capturing module 122 is provided for capturing images of an object to be detected, which can be a body part of a user, while the images to be captured can be a map of the vein network depicting blood vessel caliber, crotch angle and vessel angulation.
  • the image capturing module 122 is an eye exam device used for capturing images of the eyes of a body so as to obtain corresponding retinal images.
  • the feature extracting module 124 can be hardware (e.g. an integrated circuit), software (e.g. a program), or a combination of the two.
  • the feature extracting module 124 is coupled to the image capturing module 122 for receiving images from the image capturing module 122 and detecting feature points in the received images.
  • the image calculation device 14 can be hardware (e.g. an integrated circuit), software (e.g. a program), or a combination of the two.
  • the image calculation device 14 is coupled to the image processing device 12 for receiving the overlapping image information from the information alignment module 126 of the image processing device 12.
  • the image calculation device 14 is enabled to perform a series of image processing procedures on the overlapping image information for detecting and calculating image difference ratios of window areas defined in the overlapping image information, while clustering the window areas according to their correlation, which is determined based upon the image difference ratios, and thus labeling an ROI as a region of variation.
  • the image calculation device 14 further comprises: a difference comparison module 142 , a cluster connection module 144 , and an image labeling module 146 .
  • the difference comparison module 142 uses a sliding window mask to sequentially extract corresponding window areas from the two images in the overlapping image information, and then calculates an image difference ratio for each of the window areas according to the ratio between the number of matching points and the number of unmatched points in the corresponding window areas.
  • the cluster connection module 144 is coupled to the difference comparison module 142 and performs a connected component labeling algorithm for clustering the connectivity of the window areas in the overlapping image information according to the image difference ratio of each of the window areas.
  • the image labeling module 146 is coupled to the cluster connection module 144 , which is provided for labeling an ROI as a region of variation according to the connectivity of the window areas in the overlapping image information.
  • the output device 16 is coupled to the image calculation device 14 for outputting the labeled region of variation of the overlapping image information.
  • the output device 16 can be a display panel that is provided for displaying the labeled region of variation graphically for assisting the physicians to do medical diagnosis.
  • the output device 16 is not limited thereby, and in other embodiments, the labeled region of variation can be outputted and presented by sounds, characters, lighting, and so on.
  • FIG. 2 is a flow chart depicting steps performed in a medical image comparison method according to an embodiment of the present disclosure.
  • the medical image comparison method S100 of FIG. 2 is adapted for the medical image comparison system 10 of FIG. 1.
  • the medical image comparison method S100 includes steps S110 to S160.
  • a plurality of images of a body at different time points are obtained, whereas the plural images include a first image captured at a first time point and a second image captured at a second time point.
  • the first time point and the second time point are different: the first image, captured earlier, is used as a reference image, while the second image, captured later in time, is used as a comparison image.
  • the two images of the body that are captured at different time points can be any two images of the same body, one captured earlier and one later in time, without being limited by shooting angle, luminance, contrast, saturation or sharpness.
  • FIG. 3A is a schematic diagram showing a retinal image of an eye that is captured at a first time point according to an embodiment of the present disclosure.
  • FIG. 3B is a schematic diagram showing another retinal image of the same eye of FIG. 3A that is captured at a second time point.
  • the image capturing module 122 is enabled to capture retinal images on the same eye of a body at different time points.
  • the first image 302 that is captured at a first time point includes a first blood vessel 312; and as shown in FIG. 3B, the second image 304 that is captured at a second time point includes a second blood vessel 314. Compared with the position of the first blood vessel 312 in the first image 302, the position of the second blood vessel 314 in the second image 304 is shifted.
  • a first feature point group is obtained by detecting feature points in the first image.
  • a second feature point group is obtained by detecting feature points in the second image.
  • the feature points can be the singular points detected in the color space or illumination space of the images.
  • the feature extracting module 124 uses image feature extraction techniques, such as the scale invariant feature transform (SIFT) algorithm and the speeded-up robust features (SURF) algorithm, to obtain the first feature point group of the first image and the second feature point group of the second image.
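As an illustration of this step, the following minimal Python sketch (using OpenCV) detects a feature point group for each retinal image. The file names are placeholders, and SIFT is used here on the assumption that either SIFT or SURF fits the described extraction:

```python
# Minimal sketch of the feature-extraction step using OpenCV's SIFT.
# "retina_t1.png" / "retina_t2.png" are placeholder file names.
import cv2

def detect_feature_points(image_path):
    """Return SIFT keypoints and descriptors for one retinal image."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()  # SURF can be substituted where it is available
    keypoints, descriptors = sift.detectAndCompute(image, None)
    return keypoints, descriptors

# First and second feature point groups (first image = reference image)
kp1, des1 = detect_feature_points("retina_t1.png")
kp2, des2 = detect_feature_points("retina_t2.png")
```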
  • FIG. 4A is a schematic diagram showing feature points detected from the first image of FIG. 3A .
  • FIG. 4B is a schematic diagram showing feature points detected from the second image of FIG. 3B. In the embodiment shown in FIG. 4A and FIG. 4B,
  • the feature extracting module 124 uses the aforesaid image feature extraction techniques to obtain the feature points P11 to P16 of the first image 302 and the feature points P21 to P25 of the second image 304, which are defined respectively as the first feature point group and the second feature point group.
  • the feature points P11 to P15 match the feature points P21 to P25 in a one-to-one manner, while the feature point P16 stands alone with no matching point to be found in the second image 304.
  • the present disclosure is not limited by the aforesaid embodiment.
  • an overlapping image information 306 is generated by aligning the second image 304 with the first image 302 according to the first feature point group and the second feature point group.
  • FIG. 5A is a schematic diagram showing the aligning of the second image to the first image according to an embodiment of the present disclosure.
  • FIG. 5B is a schematic diagram showing an overlapping image information extracted from FIG. 5A.
  • the information alignment module 126 aligns the second image 304 with the first image 302 by the use of a random sample consensus (RANSAC) algorithm with affine mapping, as shown in FIG. 5A.
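A hedged sketch of this alignment step follows: brute-force descriptor matching, then cv2.estimateAffine2D, which fits an affine mapping under RANSAC. Variable names continue the previous sketch; the matcher choice and norm are our assumptions, not the patent's:

```python
# Sketch: align the second image with the first via RANSAC + affine mapping.
# kp1/des1/kp2/des2 come from the previous sketch.
import cv2
import numpy as np

first = cv2.imread("retina_t1.png", cv2.IMREAD_GRAYSCALE)
second = cv2.imread("retina_t2.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = matcher.match(des1, des2)

dst = np.float32([kp1[m.queryIdx].pt for m in matches])  # first-image points
src = np.float32([kp2[m.trainIdx].pt for m in matches])  # second-image points

# RANSAC rejects outlier correspondences while fitting the affine transform
affine, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)

# Warp the second image into the first image's frame; the overlap of the two
# aligned images plays the role of the overlapping image information
h, w = first.shape
aligned_second = cv2.warpAffine(second, affine, (w, h))
```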
  • FIG. 6 is a schematic diagram showing a sliding window mask according to an embodiment of the present disclosure.
  • the images shown in FIG. 6 are drawn with the first matching image 402 separated from the second matching image 404 of the overlapping image information 308 simply for illustration, whereas in reality the first matching image 402 overlaps the second matching image 404.
  • a sliding window mask 50 is used for sequentially extracting corresponding window areas from the first matching image 402 and the second matching image 404 in the overlapping image information 308, while calculating an image difference ratio for each of the window areas according to the ratio between the number of matching points and the number of unmatched points in the corresponding window areas of the first and the second matching images 402, 404.
  • the sliding window mask 50 used in the difference comparison module 142 is formed in a rectangular shape, but it is not limited thereby and can be in any shape according to actual requirements.
  • the first matching image 402 and the second matching image 404 are partitioned and segmented into a plurality of window areas r_xy, where x and y represent respectively the horizontal coordinate and the vertical coordinate in a coordinate system.
  • any two neighboring window areas that are defined by two consecutive movements of the sliding window mask 50 do not overlap each other, so that there is no overlap between window areas r_xy.
  • the present disclosure is not limited thereby; in other embodiments, there can be partial overlap between any two neighboring window areas defined by two consecutive movements of the sliding window mask 50, i.e. there can be overlap between window areas r_xy.
  • the image difference ratio f(r_xy) of a window area r_xy is represented by the following formulas:

    f(r_xy) = (n_i + n_j) / (M_i + M_j + n_i + n_j)  (1)

    f(r_xy) = 1 − (M_i + M_j) / (M_i + M_j + n_i + n_j)  (2)

    where formula (2) is an equivalent form of formula (1), and:
  • n_i is the number of unmatched points in the i-th image;
  • n_j is the number of unmatched points in the j-th image;
  • M_i is the number of matched points in the i-th image;
  • M_j is the number of matched points in the j-th image;
  • the i-th image and the j-th image are images of the same object that are captured at different time points. In this embodiment, the first matching image 402 is defined to be the i-th image, and the second matching image 404 is defined to be the j-th image.
  • the image difference ratio f(r_xy) thus relates the number of unmatched points to the number of matching points, as shown in formula (1) and formula (2), but it is not limited thereby; the ratio is 0 when every feature point in the window is matched and approaches 1 when none are.
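The reconstructed ratio translates directly into code. The sketch below is a plain transcription of formula (1); the guard for a window without feature points is our own assumption:

```python
def image_difference_ratio(m_i, n_i, m_j, n_j):
    """Difference ratio of one window area: unmatched points over all points.

    m_i, m_j: matched point counts in the i-th / j-th image windows
    n_i, n_j: unmatched point counts in the i-th / j-th image windows
    Returns 0.0 for a window without feature points (our assumption).
    """
    total = m_i + m_j + n_i + n_j
    return (n_i + n_j) / total if total else 0.0

# All points matched -> ratio 0, mirroring the r_11 example in the text
assert image_difference_ratio(m_i=8, n_i=0, m_j=8, n_j=0) == 0.0
```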
  • the sliding window mask 50 is enabled to move simultaneously in the first and the second matching images 402, 404, extracting corresponding window areas r_xy respectively from the first matching image 402 and the second matching image 404 in each movement.
  • for example, the initial position of the sliding window mask 50 is the window area r_11 in the first matching image 402 and the second matching image 404.
  • first, a calculation is performed to obtain the number of matching points and the number of unmatched points in the window area r_11.
  • the numbers of matching points and unmatched points are then used in formula (1) or formula (2) for obtaining an image difference ratio f(r_11) of the window area r_11.
  • thereafter, the sliding window mask 50 is moved from the window area r_11 to the neighboring window area r_21,
  • and an image difference ratio f(r_21) of the window area r_21 can be obtained in the same manner.
  • in one embodiment, if the image difference ratio f(r_11) of the window area r_11 is 0, all the feature points detected in the window area of the first matching image 402 match entirely to those detected in the second matching image 404;
  • that is, the difference between the window area r_11 of the first matching image 402 and the window area r_11 of the second matching image 404 is almost negligible.
  • by repeating this process, the image difference ratios f(r_xy) for all the window areas r_xy defined in the overlapping image information 308 can be obtained and used for determining the variation between the first matching image 402 and the second matching image 404, assisting physicians in medical diagnosis.
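The window-by-window pass can be sketched as follows, assuming non-overlapping movements of the mask (stride equal to the window size) and reusing image_difference_ratio from the previous sketch; the point coordinates and matched flags are assumed inputs:

```python
# Sketch of the sliding-window pass over the overlapping image information:
# the mask visits both matching images in lockstep and the ratio of each
# window area is computed from the feature points falling inside it.
import numpy as np

def window_ratios(pts_i, matched_i, pts_j, matched_j, shape, win=64):
    """pts_*: (N, 2) arrays of (x, y) feature point coordinates.
    matched_*: boolean arrays, True where the point has a match.
    Returns f[gy, gx], the image difference ratio of each window area."""
    h, w = shape
    f = np.zeros((h // win, w // win))
    for gy in range(f.shape[0]):
        for gx in range(f.shape[1]):
            x0, y0 = gx * win, gy * win

            def counts(pts, matched):
                inside = ((pts[:, 0] >= x0) & (pts[:, 0] < x0 + win) &
                          (pts[:, 1] >= y0) & (pts[:, 1] < y0 + win))
                return matched[inside].sum(), (~matched[inside]).sum()

            m_i, n_i = counts(pts_i, matched_i)
            m_j, n_j = counts(pts_j, matched_j)
            f[gy, gx] = image_difference_ratio(m_i, n_i, m_j, n_j)
    return f
```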
  • a process for clustering the connectivity of the window areas in the overlapping image information is enabled according to the image difference ratio of each of the window areas.
  • a cumulative amount of window areas r_xy is calculated with respect to each image difference ratio f(r_xy) of the window areas r_xy in the overlapping image information 308 by the use of the cluster connection module 144.
  • the process then defines a series of cumulative cluster intervals, and the window areas r_xy whose image difference ratios f(r_xy) fall within the same cumulative cluster interval are clustered into the same group.
  • the image difference ratios f(r_xy) are values ranging between 0 and 1, and the series of ten cumulative cluster intervals can be defined as follows: the first cumulative cluster interval ranges between 0 and 0.1, the second cumulative cluster interval ranges between 0 and 0.2, and so on, up to the tenth cumulative cluster interval ranging between 0 and 1.0.
  • the window areas r_xy can be clustered and assigned to cluster groups according to their respective image difference ratios f(r_xy), and then the cumulative amount of window areas r_xy for each of the cumulative cluster intervals can be calculated and obtained.
  • accordingly, a statistic distribution between the image difference ratio f(r_xy) and the cumulative amount of window areas can be generated, and the statistic distribution can be represented as a histogram, a bar chart, a pie chart or a line chart; in other embodiments, the statistic distribution is represented as a table. As shown in FIG. 7A and FIG. 7B, the statistic distributions are represented as line charts, in which the horizontal coordinate represents the image difference ratio f(r_xy) and the vertical coordinate represents the cumulative amount n of window areas r_xy.
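A minimal sketch of this cumulative counting, assuming the ten intervals described above and a stand-in ratio grid f:

```python
# Sketch of the cumulative clustering: with ratios in [0, 1] and ten
# cumulative intervals 0-0.1, 0-0.2, ..., 0-1.0, count the window areas
# whose ratio falls within each interval.
import numpy as np

f = np.random.rand(8, 8)                  # stand-in for the computed ratio grid
uppers = np.linspace(0.1, 1.0, 10)        # upper bounds of the cumulative intervals
cumulative = [(f <= u).sum() for u in uppers]
for u, n in zip(uppers, cumulative):
    print(f"interval 0 to {u:.1f}: cumulative amount n = {n}")
```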
  • FIG. 8A is a schematic of statistic distribution using a connected component labeling method.
  • FIG. 8B is a schematic diagram showing an overlapping image information using a connected component labeling method according to an embodiment of the present disclosure.
  • FIG. 8C is a schematic diagram showing an overlapping image information using a connected component labeling method according to another embodiment of the present disclosure. It is noted that the amount of window area in the overlapping image information 308 of FIG. 8B as well as the amount of window area in the overlapping image information 308 of FIG. 8C are only used for illustration, and thus the present disclosure is not limited thereby.
  • the image difference ratios f(r_xy) are scanned from small to large in value.
  • in other embodiments, the image difference ratios f(r_xy) can be scanned from large to small in value.
  • first, a value k0 is defined as a specific image difference ratio, and a scan is performed on the overlapping image information 308 for locating all the window areas whose image difference ratios f(r_xy) conform to the specific value k0, so as to label those window areas r_xy as the selected window areas of the specific value.
  • as shown in FIG. 8B, the image difference ratio f(r_23) of the window area r_23, the image difference ratio f(r_24) of the window area r_24, the image difference ratio f(r_31) of the window area r_31, and the image difference ratio f(r_33) of the window area r_33 conform to the specific value k0, and thus the window areas r_23, r_24, r_31 and r_33 are defined as the selected window areas of the specific value.
  • then, a detection is performed on those selected window areas of the specific value for identifying and grouping the neighboring window areas among them into regions of variation.
  • a connected component labeling method is used in the detection for identifying and grouping the neighboring window areas in the selected window areas of the specific value into regions of variation, whereas the connected component labeling method can be an 8-neighbor connected component labeling method or a 4-neighbor connected component labeling method.
  • as shown in FIG. 8B, the window area r_23, the window area r_24, and the window area r_33 are neighbors, so that they are connected to form one region of variation 20. That is, the present disclosure is able to connect individual smaller window areas, according to the defining of the image difference ratio, into one larger region of variation.
  • thereby, the total area of the region of variation can be obtained as the totality of the window areas in the region of variation; that is, as shown in FIG. 8B, the total area of the region of variation 20 is the total area of the window area r_23, the window area r_24, and the window area r_33.
  • on the other hand, the window area r_31 itself forms one individual variation region 22 as it is not connected to any other selected window area of the specific value, and thus it is defined as a variation region of one-unit-area, whereas the variation region 20 of FIG. 8B is defined as a variation region of three-unit-area as it is the collection of three window areas.
  • it is noted that in other embodiments there can be partial overlap between the neighboring window areas in each movement of the sliding window mask 50, in which case the total area of the variation region is the totality of the neighboring window areas after subtracting the overlapping area.
  • as shown in FIG. 8C, the window area r_21 is a stand-alone area that is not connected to any other selected area and thus is assigned as a variation region of one-unit-area; the window area r_43 and the window area r_44 are connected into a variation region 24 of two-unit-area; the window area r_25, the window area r_35 and the window area r_36 are connected into a variation region 25 of three-unit-area; the window area r_51, the window area r_52, the window area r_61, and the window area r_62 are connected into a variation region 26 of four-unit-area; and similarly the window area r_55, the window area r_56, the window area r_65, and the window area r_66 are connected into another variation region 26 of four-unit-area.
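A sketch of this labeling step using scipy.ndimage.label with an 8-neighbor structure; the tolerance used to decide that a ratio "conforms to" k0 is our own assumption, and f continues the stand-in grid from the earlier sketch:

```python
# Sketch of 8-neighbor connected component labeling over the selected
# window areas, using scipy.ndimage.label.
import numpy as np
from scipy import ndimage

k0, tol = 0.7, 0.05
selected = np.abs(f - k0) < tol               # boolean grid of selected windows
eight = np.ones((3, 3), dtype=int)            # 8-neighbor connectivity structure
labels, num_regions = ndimage.label(selected, structure=eight)

# Unit area of each variation region = number of member window areas
areas = ndimage.sum(selected, labels, index=range(1, num_regions + 1))
```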
  • in step S150, the connectivity of the window areas in the overlapping image information is clustered according to the image difference ratio of each of the window areas; then the flow proceeds to step S160.
  • in step S160, the region of variation on the overlapping image information is identified and labeled.
  • the output device 16 is used for outputting the overlapping image information 308 and assisting physicians or medical personnel in diagnosis by labeling the variation regions according to a specific value b.
  • FIG. 9A is a schematic of statistic distribution between the image difference ratio and cumulative amount in window area according to an embodiment of the present disclosure.
  • the statistic distribution shown in FIG. 9A is a histogram, in which the horizontal coordinate represents the image difference ratio f(r_xy) and the vertical coordinate represents the cumulative amount n of window areas r_xy.
  • in other embodiments, the statistic distribution can be represented using a table.
  • the process defines a series of cumulative cluster intervals d for clustering the window areas r_xy whose image difference ratios f(r_xy) fall within the same cumulative interval into the same group, while counting the total amount of the window areas r_xy in the group as its cumulative amount n.
  • in FIG. 9A, the bar at the image difference ratio of k represents the window areas r_xy whose image difference ratios range between 0 and k, with a cumulative amount n of 18;
  • the bar at k−d represents the window areas r_xy whose image difference ratios range between 0 and k−d, with a cumulative amount n of 15;
  • the bar at k−2d represents the window areas r_xy whose image difference ratios range between 0 and k−2d, with a cumulative amount n of 10;
  • and the bar at k−3d represents the window areas r_xy whose image difference ratios range between 0 and k−3d, with a cumulative amount n of 5, and so on.
  • FIG. 9B is a schematic of statistic distribution between the image difference ratio and the amount of window areas according to an embodiment of the present disclosure.
  • the statistic distribution of FIG. 9B is also a histogram, similar to that shown in FIG. 9A, but is not limited thereby.
  • the statistic distribution shown in FIG. 9B is a histogram in which the horizontal coordinate represents the image difference ratio f(r_xy) and the vertical coordinate represents the amount n1 of window areas r_xy.
  • a cumulative cluster interval d is used for clustering the window areas.
  • the bar at the image difference ratio of k−6d in FIG. 9B represents the window areas r_xy whose image difference ratios range between 0 and k−6d, and the amount n1 of those window areas is 1, whereas in FIG. 9A,
  • the bar at k−5d represents the window areas r_xy whose image difference ratios range between 0 and k−5d, with a cumulative amount n of 2; therefore, after subtracting the one window area with an image difference ratio of k−6d, the amount n1 of window areas whose image difference ratios range between k−6d and k−5d is 1, which is the bar at k−5d in FIG. 9B.
  • similarly, the amount n1 of the window areas r_xy with image difference ratios of k−4d is 1; the amount n1 at k−3d is 2; the amount n1 at k−2d is 5; the amount n1 at k−d is 5; and the amount n1 at k is 3.
  • thereby, the region with maximum variation is the region with image difference ratio of k, where the amount n1 of window areas with ratios ranged between k−d and k is 3, while the region with minimum variation is the region with image difference ratio of k−6d, where the amount n1 of window areas with ratios ranged between 0 and k−6d is 1.
  • a connected component labeling algorithm is used with the defining of a specific value b; that is, when a physician inputs a value b, he/she intends to locate the b regions of variation that are most obvious in the overlapping image information.
  • for example, the specific value b is 3, and in FIG. 9B the amount n1 of the window areas r_xy with image difference ratio of k is 3, by which the required amount of variation regions is located and labeled, providing those variation regions to the physician for diagnosis.
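A deliberately simplified sketch of this b-region selection: rather than scanning ratio bins as the text describes, it just returns the b windows with the largest ratios, which illustrates the physician-supplied value b without reproducing the full flow:

```python
# Simplified reading of the "b most obvious variations" selection.
import numpy as np

def most_varied_windows(f, b=3):
    """Return (row, col) indices of the b windows with the largest ratios."""
    order = np.argsort(f, axis=None)[::-1][:b]
    return np.column_stack(np.unravel_index(order, f.shape))

print(most_varied_windows(f, b=3))   # e.g. the three most obvious variations
```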
  • FIG. 9C is a schematic of statistic distribution of connected region area in window area.
  • in FIG. 9C, the horizontal coordinate represents the connected region area, from small to large: one-unit-area, two-unit-area, three-unit-area and four-unit-area, while the vertical coordinate represents the amount of connected regions of each size. It is noted that the histogram shown in FIG. 9C represents the window areas with image difference ratios f(r_xy) of k−d of FIG. 8C.
  • in FIG. 8C, the window area r_21, indicated as region 23, is a stand-alone area that is not connected to any other selected area and thus is assigned as a variation region of one-unit-area, and there is only one such variation region of one-unit-area; similarly, there is one variation region of two-unit-area 24, which is a connected region of the window area r_43 and the window area r_44; there is one variation region of three-unit-area 25, which is a connected region of the window area r_25, the window area r_35 and the window area r_36; and there are two variation regions of four-unit-area 26, which are respectively a region of the window area r_51, the window area r_52, the window area r_61 and the window area r_62, and a region of the window area r_55, the window area r_56, the window area r_65 and the window area r_66.
  • in this embodiment, the identified variation regions are ordered according to their size, and the four largest variation regions are selected; that is, the two variation regions of four-unit-area 26 are selected first, followed by the variation region of three-unit-area 25 and the variation region of two-unit-area 24.
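The size ordering can be sketched directly from the per-label areas computed in the labeling sketch above; keeping the four largest regions mirrors the example:

```python
# Order labeled variation regions by unit area, largest first.
# `areas` is the per-label array from the connected-component sketch.
import numpy as np

order = np.argsort(areas)[::-1]                      # label indices by size, desc
top = [(int(lbl) + 1, int(areas[lbl])) for lbl in order[:4]]
print(top)                                           # [(label, unit_area), ...]
```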
  • the medical image comparison method and system of the present disclosure are capable of rapidly comparing medical images of a specific area in a patient that are taken at different times according to the image difference ratio, so as to locate a region of variation from the medical images, whereby physicians can be relieved not only from the time-consuming and labor-intensive task of manually searching for all the differences between retinal images taken at different times, but also from possible misdiagnosis caused by human error.
  • physicians are enabled to make a more objective evaluation to determine whether or not the region of variation is an indication of deterioration, assisting the physicians in arranging corresponding tracking procedures.
  • the degree of matching between two images of the same target area can be evaluated and determined without being limited by the differences in imaging angle, light source, luminance and parameter configuration between different retinal imaging processes.
  • window areas can be clustered and connected so as to label variation regions according to a specific label value, and thus the labeled variation regions can be provided to a physician for diagnosis.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Ophthalmology & Optometry (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Quality & Reliability (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Hematology (AREA)
  • Vascular Medicine (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A medical image comparison method first obtains plural images of the same body at different time points. Then, a first feature point group is obtained by detecting feature points in the first image captured at a first time point, and a second feature point group is obtained by detecting feature points in the second image captured at a second time point. Overlapping image information is generated by aligning the second image with the first image according to the first and second feature point groups. Then, window areas corresponding to a first matching image and a second matching image of the overlapping image information are extracted one by one by sliding a window mask, and an image difference ratio for each of the window areas is calculated. In addition, a medical image comparison system is also provided.

Description

TECHNICAL FIELD
The present disclosure relates to a medical image comparison method and system thereof, and more particularly, to a medical image comparison method and system for assisting medical doctors in medical diagnosis.
BACKGROUND
With rapid advances in medical image diagnosis, medical personnel are becoming more and more accustomed to using medical imaging as an assisting means for diagnosing the clinical condition of a patient, whereby minute pathological changes in living organisms can be detected before the appearance of symptoms.
Retinal examination is a diagnostic procedure that allows the ophthalmologist to obtain a better view of the retina of the eye and to look for signs of eye disease such as retinal detachment, optic neuritis, macular degeneration, glaucoma and other retinal issues. It is noted that the eye is the only organ whose nerves can be observed in a non-invasive manner, and thus it can generally be treated as a microcosm of all the important organs in the body. Therefore, retinal images can not only be used by ophthalmologists for diagnosing eye diseases, but can also clinically reveal representative pathological changes in organs other than the eyes. That is, physicians of other disciplines, such as metabolism or neurology, can use retinal images for detecting early pathological changes of other diseases, such as diabetes, high blood pressure, high blood cholesterol, autoimmune disease, etc.
For those conventional medical imaging techniques that are currently available, such as the aforesaid fundus imaging, patients have to be subjected to a retinal examination once every two years for tracking the morphology of a target area. However, since there may be differences in imaging angle, light source, luminance and parameter configuration between different retinal imaging processes, the resulting retinal images differ accordingly. Thus, physicians have to manually search and find all the differences between retinal images that are taken at different times, which is not only a time-consuming and labor-intensive task, but the manual difference identification may also easily lead to misdiagnosis, as human error is not easy to prevent. Moreover, owing to subjective differences in evaluation, different physicians may reach different identification results for the same retinal image.
Therefore, there is a need for an improved medical image comparison method and system capable of overcoming the aforesaid problems.
SUMMARY
The present disclosure provides a medical image comparison method and system, which can be used for allowing a physician to compare medical images of a specific area in a patient that are taken at different times so as to locate a region of variation from the medical images, thus assisting physicians in medical diagnosis.
In an embodiment, the present disclosure provides a medical image comparison method, which comprises the steps of: obtaining a plurality of images of a body at different time points, while allowing the plural images to include a first image captured at a first time point and a second image captured at a second time point; obtaining a first feature point group by detecting feature points in the first image, while obtaining a second feature point group by detecting feature points in the second image; enabling an overlapping image information to be generated by aligning the second image with the first image according to the first feature point group and the second feature point group, while allowing the overlapping image information to include a first matching image corresponding to the first image and a second matching image corresponding to the second image; and sequentially extracting corresponding window areas from the first matching image and the second matching image in the overlapping image information respectively by the use of a sliding window mask, while calculating an image difference ratio for each of the window areas according to the ratio between the number of matching points and the number of unmatched points in the corresponding window areas of the first and the second matching images.
In an embodiment, the present disclosure provides a medical image comparison system, which comprises: an image processing device and an image calculation device. The image processing device further comprises: an image capturing module, a feature extracting module and an information alignment module. The image capturing module is used for obtaining a plurality of images of a body at different time points, while allowing the plural images to include a first image captured at a first time point and a second image captured at a second time point. The feature extracting module is used for obtaining a first feature point group by detecting feature points in the first image, while obtaining a second feature point group by detecting feature points in the second image. The information alignment module is coupled to the feature extracting module and is used for aligning the second image with the first image according to the first feature point group and the second feature point group so as to generate an overlapping image information, whereas the overlapping image information includes a first matching image corresponding to the first image and a second matching image corresponding to the second image. The image calculation device, which is coupled to the image processing device, further comprises: a difference comparison module, provided for sequentially extracting corresponding window areas from the first matching image and the second matching image in the overlapping image information respectively by the use of a sliding window mask, while calculating an image difference ratio for each of the window areas according to the ratio between the number of matching points and the number of unmatched points in the corresponding window areas of the first and the second matching images.
From the above description, the medical image comparison method and system of the present disclosure are capable of rapidly comparing medical images of a specific area in a patient that are taken at different times according to the image difference ratio, so as to locate a region of variation from the medical images, whereby physicians can be relieved not only from the time-consuming and labor-intensive task of manually searching for all the differences between retinal images taken at different times, but also from possible misdiagnosis caused by human error. Moreover, physicians are enabled to make a more objective evaluation to determine whether or not the region of variation is an indication of deterioration, assisting the physicians in arranging corresponding tracking procedures.
Further scope of applicability of the present application will become more apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating exemplary embodiments of the disclosure, are given by way of illustration only, since various changes and modifications within the spirit and scope of the disclosure will become apparent to those skilled in the art from this detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
The present disclosure will become more fully understood from the detailed description given herein below and the accompanying drawings which are given by way of illustration only, and thus are not limitative of the present disclosure and wherein:
FIG. 1 is a schematic diagram showing a medical image comparison system according to an embodiment of the present disclosure.
FIG. 2 is a flow chart depicting steps performed in a medical image comparison method according to an embodiment of the present disclosure.
FIG. 3A is a schematic diagram showing a retinal image of an eye that is captured at a first time point according to an embodiment of the present disclosure.
FIG. 3B is a schematic diagram showing another retinal image of the same eye of FIG. 3A that is captured at a second time point.
FIG. 4A is a schematic diagram showing feature points detected from the first image of FIG. 3A.
FIG. 4B is a schematic diagram showing feature points detected from the second image of FIG. 3B.
FIG. 5A is a schematic diagram showing the aligning of the second image to the first image according to an embodiment of the present disclosure.
FIG. 5B is a schematic diagram showing an overlapping image information extracted from FIG. 5A.
FIG. 6 is a schematic diagram showing a sliding window mask according to an embodiment of the present disclosure.
FIG. 7A is a schematic of statistic distribution between the image difference ratio and cumulative amount of window area according to an embodiment of the present disclosure.
FIG. 7B is a schematic of statistic distribution between the image difference ratio and cumulative amount of window area according to another embodiment of the present disclosure.
FIG. 8A is a schematic of statistic distribution using a connected component labeling method.
FIG. 8B is a schematic diagram showing an overlapping image information using a connected component labeling method according to an embodiment of the present disclosure.
FIG. 8C is a schematic diagram showing an overlapping image information using a connected component labeling method according to another embodiment of the present disclosure.
FIG. 9A is a schematic of statistic distribution between the image difference ratio and cumulative amount in window area according to an embodiment of the present disclosure.
FIG. 9B is a schematic of statistic distribution between the image difference ratio and the amount in window area according to an embodiment of the present disclosure.
FIG. 9C is a schematic of statistic distribution of connected region area in window area.
DETAILED DESCRIPTION
In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosed embodiments. It will be apparent, however, that one or more embodiments may be practiced without these specific details. In other instances, well-known structures and devices are schematically shown in order to simplify the drawing.
FIG. 1 is a schematic diagram showing a medical image comparison system according to an embodiment of the present disclosure. In FIG. 1, an exemplified medical image comparison system is disclosed, which comprises: an image processing device 12, an image calculation device 14 and an output device 16.
In this embodiment, the image processing device 12 is used for performing a series of image processing procedures on images of an object that are captured at different time points, including image importing, feature extraction, feature point matching, and image alignment, so as to obtain a correlated image area of the images to be used as a region of interest (ROI) for analysis.
In detail, the image processing device 12 further comprises: an image capturing module 122, a feature extracting module 124 and an information alignment module 126.
In this embodiment, the image capturing module 122 can be any electronic device with imaging capability, such as a camcorder with one or more CCD/CMOS sensors, but it is not limited thereby. In one embodiment, the image capturing module 122 can be an image information transceiver interface that is used for receiving image information from other imaging units or for transmitting image information to other units.
The image capturing module 122 is provided for capturing images of an object to be detected, which can be a body part of a user, while the images to be captured can be a map of a vein network depicting blood vessel caliber, crotch angle and vessel angulation. In an embodiment, the image capturing module 122 is an eye exam device to be used for capturing images of the eyes of a body so as to obtain corresponding retinal images.
In this embodiment, the feature extracting module 124 can be hardware, e.g. an integrated circuit, software, e.g. a program, or a combination of the two. The feature extracting module 124 is coupled to the image capturing module 122 for receiving images from the image capturing module 122, and is used for detecting feature points in the received images.
In this embodiment, the image calculation device 14 can be hardware, e.g. an integrated circuit, software, e.g. a program, or a combination of the two. The image calculation device 14 is coupled to the image processing device 12 for receiving the overlapping image information from the information alignment module 126 of the image processing device 12. The image calculation device 14 is enabled to perform a series of image processing procedures on the overlapping image information for detecting and calculating image difference ratios of window areas defined in the overlapping image information, while clustering the window areas according to their correlation that is determined based upon the image difference ratios, and thus labeling an ROI as a region of variation.
In detail, the image calculation device 14 further comprises: a difference comparison module 142, a cluster connection module 144, and an image labeling module 146.
In this embodiment, the difference comparison module 142 uses a sliding window mask to sequentially extract corresponding window areas from two images in the overlapping image information, and then calculates an image difference ratio for each of the window areas according to the ratio between the number of matching points and the number of unmatched points in the corresponding window areas.
In this embodiment, the cluster connection module 144 is coupled to the difference comparison module 142, and operates based upon a connected component labeling algorithm for clustering the connectivity of the window areas in the overlapping image information according to the image difference ratio of each of the window areas.
In this embodiment, the image labeling module 146 is coupled to the cluster connection module 144, which is provided for labeling an ROI as a region of variation according to the connectivity of the window areas in the overlapping image information.
In this embodiment, the output device 16 is coupled to the image calculation device 14 for outputting the labeled region of variation of the overlapping image information. The output device 16 can be a display panel that is provided for displaying the labeled region of variation graphically for assisting physicians in medical diagnosis. Nevertheless, the output device 16 is not limited thereby, and in other embodiments, the labeled region of variation can be outputted and presented by sounds, characters, lighting, and so on.
FIG. 2 is a flow chart depicting steps performed in a medical image comparison method according to an embodiment of the present disclosure. The medical image comparison method S100 of FIG. 2 is adapted for the medical image comparison system 10 of FIG. 1.
In this embodiment, the medical image comparison method S100 includes the step S110 to step S160.
At step S110, a plurality of images of a body at different time points are obtained, whereas the plural images include a first image captured at a first time point and a second image captured at a second time point.
In an embodiment, the first time point and the second time point are not set to be the same, such that the first image is an image captured earlier in time to be used as a reference image, while the second image is an image captured later in time to be used as a comparison image. In addition, the two images of the body that are captured at different time points can be any two images of the same body that only have to be captured earlier and later in time, without being limited by shooting angle, luminance, contrast, saturation and sharpness. FIG. 3A is a schematic diagram showing a retinal image of an eye that is captured at a first time point according to an embodiment of the present disclosure. FIG. 3B is a schematic diagram showing another retinal image of the same eye of FIG. 3A that is captured at a second time point. In this embodiment, the image capturing module 122 is enabled to capture retinal images of the same eye of a body at different time points. As shown in FIG. 3A, the first image 302 that is captured at a first time point includes a first blood vessel 312; and as shown in FIG. 3B, the second image 304 that is captured at a second time point includes a second blood vessel 314. Compared with the position of the first blood vessel 312 in the first image 302, the position of the second blood vessel 314 in the second image 304 is shifted.
At step S120, with reference to FIG. 2, a first feature point group is obtained by detecting feature points in the first image, and a second feature point group is obtained by detecting feature points in the second image. It is noted that the feature points can be the singular points detected in the color space or illumination space of the images.
In this embodiment, the feature extracting module 124 uses image feature extraction techniques, such as the scale invariant feature transform (SIFT) algorithm and the speeded-up robust features (SURF) algorithm, to obtain the first feature point group of the first image and the second feature point group of the second image. FIG. 4A is a schematic diagram showing feature points detected from the first image of FIG. 3A. FIG. 4B is a schematic diagram showing feature points detected from the second image of FIG. 3B. As shown in the embodiment of FIG. 4A and FIG. 4B, the feature extracting module 124 uses the aforesaid image feature extraction techniques to obtain the feature points P11˜P16 of the first image 302 and the feature points P21˜P25 of the second image 304, which are defined respectively as the first feature point group and the second feature point group. In this embodiment, the feature points P11˜P15 match the feature points P21˜P25 in a one-to-one manner, while leaving the feature point P16 to stand alone without a matching point to be found in the second image 304. However, the present disclosure is not limited by the aforesaid embodiment.
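As an illustration of this step, the following is a minimal sketch of the feature detection using OpenCV's SIFT implementation (the disclosure also names SURF as an alternative); the image file names and the grayscale reading are assumptions introduced here for illustration, not part of the disclosure:

```python
# Hedged sketch of step S120: detect the feature point groups with SIFT.
# File names "retina_t1.png"/"retina_t2.png" are illustrative assumptions.
import cv2

first_image = cv2.imread("retina_t1.png", cv2.IMREAD_GRAYSCALE)   # first image (first time point)
second_image = cv2.imread("retina_t2.png", cv2.IMREAD_GRAYSCALE)  # second image (second time point)

sift = cv2.SIFT_create()

# Detect feature points and compute descriptors for both retinal images;
# kp1/des1 correspond to the first feature point group, kp2/des2 to the second.
kp1, des1 = sift.detectAndCompute(first_image, None)
kp2, des2 = sift.detectAndCompute(second_image, None)
```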
At step S130, an overlapping image information 308 is generated by aligning the second image 304 with the first image 302 according to the first feature point group and the second feature point group. FIG. 5A is a schematic diagram showing the aligning of the second image to the first image according to an embodiment of the present disclosure. FIG. 5B is a schematic diagram showing an overlapping image information extracted from FIG. 5A. In this embodiment, the aligning of the second image 304 with the first image 302 by the information alignment module 126 is enabled by the use of a random sample consensus (RANSAC) algorithm with affine mapping, as shown in FIG. 5A. After aligning the second image 304 with the first image 302, a correlated image area 306 of the images is detected and used as a region of interest (ROI) for generating the overlapping image information 308 accordingly, as shown in FIG. 5B, whereas the overlapping image information 308 includes a first matching image 402 corresponding to the first image 302, and a second matching image 404 corresponding to the second image 304, as shown in FIG. 6. FIG. 6 is a schematic diagram showing a sliding window mask according to an embodiment of the present disclosure. For clarification, the images shown in FIG. 6 are simply provided for separating the first matching image 402 from the second matching image 404 in the overlapping image information 308, whereas in reality, the first matching image 402 overlaps the second matching image 404.
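Continuing the sketch above, the alignment step can be illustrated with OpenCV's RANSAC-based affine estimation; the ratio-test threshold of 0.75 and the brute-force matcher choice are illustrative assumptions, not values taken from the disclosure:

```python
import cv2
import numpy as np

# Match descriptors between the two feature point groups with a ratio
# test, then estimate an affine transform under RANSAC (step S130).
matcher = cv2.BFMatcher(cv2.NORM_L2)
raw_matches = matcher.knnMatch(des2, des1, k=2)
good = [m for m, n in raw_matches if m.distance < 0.75 * n.distance]

src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# estimateAffine2D runs RANSAC internally; inliers marks the consensus set.
affine, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)

# Warp the second image into the first image's coordinate frame; the
# overlapping region of the two frames yields the first and the second
# matching images of the overlapping image information.
h, w = first_image.shape[:2]
second_aligned = cv2.warpAffine(second_image, affine, (w, h))
```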
At step S140, a sliding window mask 50 is used for sequentially extracting corresponding window areas from the first matching image 402 and the second matching image 404 in the overlapping image information 308, while calculating an image difference ratio for each of the window areas according to the ratio between the number of matching points and the number of unmatched points in the corresponding window areas of the first and the second matching images 402, 404.
In an embodiment, the sliding window mask 50 used in the difference comparison module 142 is formed in a rectangular shape, but it is not limited thereby and can be in any shape according to actual requirements. By the sequential extraction performed by the sliding window mask 50 respectively on the first matching image 402 and the second matching image 404 for obtaining corresponding window areas rxy, the first matching image 402 and the second matching image 404 are partitioned and segmented into a plurality of the window areas rxy, whereas x and y represent respectively the horizontal coordinate and the vertical coordinate in a coordinate system.
In this embodiment, any two neighboring window areas that are defined by two consecutive movements of the sliding window mask 50 are not overlapped with each other, so that there is no overlapping between window areas rxy. However, the present disclosure is not limited thereby, and thus in other embodiments, there can be partial overlapping between any two neighboring window areas that are defined by two consecutive movements of the sliding window mask 50, i.e. there can be overlapping between window areas rxy.
In an embodiment, the image difference ratio f(rxy) is represented by the following formula:
f(rxy) = (ni + nj) / ((ni + Mi) + (nj + Mj));  (1)
f(rxy) = (ni × nj) / ((ni + Mi) × (nj + Mj));  (2)
wherein 0 ≤ f(rxy) ≤ 1.
In formula (1) and formula (2), ni is the number of unmatched points in the ith image; nj is the number of unmatched points in the jth image; Mi is the number of matched points in the ith image; Mj is the number of matched points in the jth image; and the ith image and the jth image are images of the same object that are captured at different time points. In this embodiment, the first matching image 402 is defined to be the ith image, and the second matching image 404 is defined to be the jth image. In the present disclosure, the image difference ratio f(rxy) is a ratio relating the number of matching points and the number of unmatched points, as shown in formula (1) and formula (2), but it is not limited thereby.
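For illustration, formulas (1) and (2) can be written directly as a small Python function; the guard for a window containing no feature points is an assumption added here, as the disclosure does not address that case:

```python
def image_difference_ratio(n_i, M_i, n_j, M_j, use_product=False):
    """Image difference ratio f(r_xy) per formula (1), or formula (2)
    when use_product is True.

    n_i, n_j: numbers of unmatched points in the i-th and j-th images;
    M_i, M_j: numbers of matched points in the i-th and j-th images.
    Returns a value in [0, 1]; 0 means every feature point matched.
    """
    if use_product:  # formula (2)
        denom = (n_i + M_i) * (n_j + M_j)
        return (n_i * n_j) / denom if denom else 0.0
    denom = (n_i + M_i) + (n_j + M_j)  # formula (1)
    return (n_i + n_j) / denom if denom else 0.0
```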
In this embodiment, the sliding window mask 50 is enabled to move simultaneously in the first and the second matching images 402, 404, while extracting corresponding window areas rxy respectively from the first matching image 402 and the second matching image 404 in each movement. In FIG. 6, assuming the initial position of the sliding window mask 50 is the window area r11 in the first matching image 402 and the second matching image 404, a calculation is first performed to obtain the ratio between the number of matching points and the number of unmatched points in the window area r11. That is, by detecting the first feature point group in the first matching image 402 and the second feature point group in the second matching image 404, and matching the first feature point group with the second feature point group, the number of matching points and the number of unmatched points are obtained; thereby, the ratio between the number of matching points and the number of unmatched points can be obtained and used in formula (1) or formula (2) for obtaining an image difference ratio f(r11) of the window area r11.
Then, the sliding window mask 50 is moved from the window area r11 to the neighboring window area r21. Similarly, an image difference ratio f(r21) of the window area r21 can be obtained. In this embodiment, there is no overlapping between the window area r11 and the window area r21. However, in another embodiment that is not illustrated in this description, there can be overlap between the window area r11 and the window area r21.
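The window-by-window pass of step S140 can be sketched as follows; the window size, the stride (equal to the window size for the non-overlapping variant), and the count_points helper that tallies matched and unmatched feature points inside one window are all hypothetical names introduced here for illustration:

```python
# Sketch of step S140: slide the window mask over both matching images
# simultaneously and compute f(r_xy) for every window area.
WIN, STRIDE = 64, 64  # illustrative; STRIDE < WIN would give overlapping windows

ratios = {}
h, w = first_image.shape[:2]
for y in range(0, h - WIN + 1, STRIDE):
    for x in range(0, w - WIN + 1, STRIDE):
        # count_points is a hypothetical helper returning (unmatched, matched)
        # feature-point counts inside the window at (x, y) for one matching image.
        n_i, M_i = count_points(first_matching_points, x, y, WIN)
        n_j, M_j = count_points(second_matching_points, x, y, WIN)
        ratios[(x // STRIDE, y // STRIDE)] = image_difference_ratio(n_i, M_i, n_j, M_j)
```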
In an embodiment, when formula (1) is used for defining the image difference ratio f(rxy), an image difference ratio f(r11) of 0 for the window area r11 indicates that all the feature points detected in the first matching image 402 match entirely to those detected in the second matching image 404. Thus, it can be concluded that the difference between the window area r11 of the first matching image 402 and the window area r11 of the second matching image 404 is negligible. Accordingly, by the use of the step S140, the image difference ratios f(rxy) for all the window areas rxy defined in the overlapping image information 308 can be obtained and used for determining the variation between the first matching image 402 and the second matching image 404 for assisting physicians in medical diagnosis.
At the step S150, a process for clustering the connectivity of the window areas in the overlapping image information is enabled according to the image difference ratio of each of the window areas.
In this embodiment, a cumulative amount of window areas rxy is calculated with respect to each image difference ratio f(rxy) of the window area rxy in the overlapping image information 308 by the use of the cluster connection module 144.
Operationally, since the image difference ratio f(rxy) for each window area rxy in the overlapping image information 308 can be obtained from the step S140, the process then defines a series of cumulative cluster intervals to be used for clustering the window areas rxy whose corresponding image difference ratios f(rxy) conform to one cumulative cluster interval in the series into the same group. For instance, as the image difference ratios f(rxy) are values ranged between 0 and 1, a series of ten cumulative cluster intervals can be defined as follows: the first cumulative cluster interval is ranged between 0˜0.1, the second cumulative cluster interval is ranged between 0˜0.2, . . . , and the tenth cumulative cluster interval is ranged between 0˜1; accordingly, the window areas rxy can be clustered and assigned to cluster groups according to their respective image difference ratios f(rxy), and then the cumulative amount of window areas rxy for each of the cumulative cluster intervals can be calculated and obtained. In this embodiment, a statistic distribution between the image difference ratio f(rxy) and the cumulative amount of window areas can be generated, whereas the statistic distribution can be represented as a histogram, a bar chart, a pie chart or a line chart. In other embodiments, the statistic distribution is represented as a table. As shown in FIG. 7A and FIG. 7B, the statistic distributions are represented as line charts, whereas the horizontal coordinate represents the image difference ratio f(rxy) and the vertical coordinate represents the cumulative amount n of window areas rxy. As shown in FIG. 7A, as the image difference ratio f(rxy) increases, the growth of the cumulative amount n of window areas rxy slows down, which indicates that the variation is concentrated at window areas with smaller image difference ratios f(rxy), and there is little difference between the two images. In contrast, in FIG. 7B, as the image difference ratio f(rxy) increases, the cumulative amount n of window areas rxy rises abruptly, which indicates that the variation is concentrated at window areas with larger image difference ratios f(rxy), and there is a big difference between the two images.
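A minimal sketch of this cumulative statistic, assuming the per-window ratios computed above and the example series of ten cumulative cluster intervals:

```python
import numpy as np

# Cumulative cluster intervals 0~0.1, 0~0.2, ..., 0~1: count how many
# window areas fall inside each interval (the curves of FIG. 7A/7B).
values = np.array(list(ratios.values()))
interval_tops = np.linspace(0.1, 1.0, 10)
cumulative = np.array([(values <= top).sum() for top in interval_tops])

# A curve that flattens as f(r_xy) grows (FIG. 7A) means the two images
# differ little; a late, abrupt rise (FIG. 7B) means a big difference.
```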
FIG. 8A is a schematic of statistic distribution using a connected component labeling method. FIG. 8B is a schematic diagram showing an overlapping image information using a connected component labeling method according to an embodiment of the present disclosure. FIG. 8C is a schematic diagram showing an overlapping image information using a connected component labeling method according to another embodiment of the present disclosure. It is noted that the amounts of window areas in the overlapping image information 308 of FIG. 8B and of FIG. 8C are only used for illustration, and thus the present disclosure is not limited thereby. In the statistic distribution shown in FIG. 8A, the image difference ratios f(rxy) are scanned from small to large in value. In another embodiment, the image difference ratios f(rxy) can be scanned from large to small in value. For instance, in an embodiment, a value k0 is defined as a specific image difference ratio f(rxy), and then a scan is performed on the overlapping image information 308 for locating all the window areas whose image difference ratios f(rxy) conform to the specific value k0, so as to label those window areas rxy as the selected window areas of the specific value. In the overlapping image information 308 shown in FIG. 8B, the image difference ratios f(r23), f(r24), f(r31) and f(r33) of the window areas r23, r24, r31 and r33 conform to the specific value k0, and thus the window areas r23, r24, r31 and r33 are defined as the selected window areas of the specific value. In another embodiment shown in FIG. 8C, a specific value range k˜d of the image difference ratio f(rxy) is defined, and the scan indicates that the image difference ratios of the window areas r21, r51, r61, r52, r62, r43, r44, r25, r35, r36, r55, r65, r56 and r66 conform to the specific value range k˜d, and thus those window areas are defined as the selected window areas of the specific value. Thereafter, a detection is performed on those selected window areas of the specific value for identifying and grouping the neighboring window areas among them into regions of variation. In this embodiment, a connected component labeling method is used in the detection for identifying and grouping the neighboring window areas in the selected window areas of the specific value into regions of variation, whereas the connected component labeling method can be an 8-neighbor connected component labeling method or a 4-neighbor connected component labeling method.
As shown in FIG. 8B, the window area r23, the window area r24, and the window area r33 are neighbors, so that they are connected to form one region of variation 20. That is, the present disclosure is able to connect individual smaller window areas, according to the defining of the image difference ratio, into one larger region of variation. In addition, as each window area is defined by the use of the sliding window mask 50 with a known area size, the total area of the region of variation can be obtained as the totality of the window areas in the region of variation. That is, as shown in FIG. 8B, the total area of the region of variation 20 is the total area of the window area r23, the window area r24, and the window area r33. In FIG. 8B, the window area r31 itself forms one individual region of variation 22, as it is not connected to any other selected window areas of the specific value, and thus it is defined as a variation region of one-unit-area, whereas the region of variation 20 of FIG. 8B is defined as a variation region of three-unit-area as it is the collection of three window areas. However, in other embodiments, there can be partial overlapping between the neighboring window areas in each movement of the sliding window mask 50, so that the total area of the variation region should be the totality of the neighboring window areas after subtracting the overlapping area.
As shown in FIG. 8C, the window area r21, indicated as the region 23, is a stand-alone area that is not connected to other selected areas and thus is assigned as a variation region of one-unit-area; the window area r43 and the window area r44 are connected into a variation region 24 of two-unit-area; the window area r25, the window area r35 and the window area r36 are connected into a variation region 25 of three-unit-area; the window area r51, the window area r52, the window area r61, and the window area r62 are connected into a variation region 26 of four-unit-area; and similarly the window area r55, the window area r56, the window area r65, and the window area r66 are connected into another variation region 26 of four-unit-area.
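The grouping described above can be sketched with scipy's connected component labeling; the grid dimensions are carried over from the illustrative sliding-window code above, and the threshold value k0 is an assumed example value:

```python
import numpy as np
from scipy import ndimage

# Sketch of step S150: mark the selected window areas on the window grid
# and group neighbors with 8-neighbor connected component labeling.
k0 = 0.5  # illustrative specific value of the image difference ratio
grid_w = (w - WIN) // STRIDE + 1
grid_h = (h - WIN) // STRIDE + 1
selected = np.zeros((grid_h, grid_w), dtype=bool)
for (gx, gy), f in ratios.items():
    selected[gy, gx] = (f >= k0)  # windows conforming to the specific value

eight_neighbor = np.ones((3, 3), dtype=int)  # 8-neighbor structuring element
labels, n_regions = ndimage.label(selected, structure=eight_neighbor)

# Unit-area of each region of variation = number of connected windows.
region_sizes = np.bincount(labels.ravel())[1:]
```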
At step S150, the connectivity of the window areas in the overlapping image information is clustered according to the image difference ratio of each of the window areas; then the flow proceeds to step S160. At step S160, the region of variation on the overlapping image information is identified and labeled. In this embodiment, the output device 16 is used for outputting the overlapping image information 308, in which the variation regions are labeled according to a specific value b, for assisting physicians or medical personnel in diagnosis.
FIG. 9A is a schematic of statistic distribution between the image difference ratio and cumulative amount in window area according to an embodiment of the present disclosure. The statistic distribution shown in FIG. 9A is a histogram, in which the horizontal coordinate represents the image difference ratio f(rxy) and the vertical coordinate represents the cumulative amount n of window areas rxy. In other embodiments, the statistic distribution can be represented using a table. Similar to the foregoing description, the process defines a series of cumulative cluster intervals d to be used for clustering the window areas rxy whose corresponding image difference ratios f(rxy) conform to one cumulative cluster interval in the series into the same group, while counting the total amount of the window areas rxy of the same group as its cumulative amount n. In the embodiment shown in FIG. 9A, the bar at the image difference ratio of k represents the window areas rxy whose image difference ratios f(rxy) range between 0˜k, with a cumulative amount n of 18; the bar at k−d represents the window areas whose ratios range between 0˜k−d, with a cumulative amount of 15; the bar at k−2d, ranging between 0˜k−2d, has a cumulative amount of 10; the bar at k−3d, ranging between 0˜k−3d, has a cumulative amount of 5; the bar at k−4d, ranging between 0˜k−4d, has a cumulative amount of 3; the bar at k−5d, ranging between 0˜k−5d, has a cumulative amount of 2; and the bar at k−6d, ranging between 0˜k−6d, has a cumulative amount of 1.
From the cumulative amounts n of the window areas rxy shown in FIG. 9A, the amount of window areas for each image difference ratio f(rxy) can be known. FIG. 9B is a schematic of statistic distribution between the image difference ratio and the amount in window area according to an embodiment of the present disclosure. The statistic distribution of FIG. 9B is also a histogram, similar to that shown in FIG. 9A but not limited thereby, in which the horizontal coordinate represents the image difference ratio f(rxy) and the vertical coordinate represents the amount n1 of window areas rxy. In the embodiment shown in FIG. 9B, a cumulative cluster interval d is used for clustering the window areas. As shown in FIG. 9B, the bar at the image difference ratio of k−6d represents the window areas rxy whose image difference ratios f(rxy) range between 0˜k−6d, and the amount n1 of those window areas rxy is 1; whereas in FIG. 9A, the bar at k−5d represents the window areas whose ratios range between 0˜k−5d, with a cumulative amount n of 2. Therefore, after subtracting the one window area with an image difference ratio of k−6d, the amount n1 of window areas with image difference ratios f(rxy) ranging between k−6d˜k−5d is 1; that is, the bar at k−5d in FIG. 9B represents an amount n1 of 1. Consequently, in FIG. 9B, the amount n1 of window areas rxy at k−4d is 1; at k−3d it is 2; at k−2d it is 5; at k−d it is 5; and at k it is 3. In other words, the region with maximum variation is the region with image difference ratio k, and the amount n1 of window areas rxy with image difference ratios f(rxy) ranging between k−d˜k is 3; while the region with minimum variation is the region with image difference ratio k−6d, and the amount n1 of window areas rxy with image difference ratios f(rxy) ranging between 0˜k−6d is 1.
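The per-interval amounts n1 of FIG. 9B follow from the cumulative amounts n of FIG. 9A by simple differencing, as this short continuation of the sketch shows:

```python
# Recover the per-interval amount n1 (FIG. 9B) from the cumulative
# amount n (FIG. 9A): each interval 0~k contains the interval 0~(k-d),
# so neighboring cumulative counts differ by exactly the bin's count.
per_interval = np.diff(cumulative, prepend=0)

# With the FIG. 9A example, cumulative = [1, 2, 3, 5, 10, 15, 18]
# (ordered from k-6d up to k) yields per_interval = [1, 1, 1, 2, 5, 5, 3].
```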
Thereafter, a connected component labeling algorithm is used with the defining of a specific value b; that is, when a physician inputs a value b, he/she intends to locate the b regions of variation that are most obvious in the overlapping image information. In an embodiment, if the specific value b is 3, then in FIG. 9B, where the amount n1 of the window areas rxy with image difference ratio k is 3, the required amount of variation regions is located and labeled, so that the variation regions can be provided to the physician for diagnosis.
In another embodiment, if the specific value b is 7, then in FIG. 9B, where the amount n1 of the window areas rxy with image difference ratio k is 3 and the amount n1 of the window areas rxy with image difference ratio k−d is 5, there are 8 window areas to be located, which is more than the specified amount of 7. Please refer to FIG. 9C, which is a schematic of statistic distribution of connected region area in window area. In FIG. 9C, the horizontal coordinate represents the connected region area, which is one-unit-area, two-unit-area, three-unit-area and four-unit-area from small to large, and the vertical coordinate represents the amount of the corresponding regions of different sizes. It is noted that the histogram shown in FIG. 9C represents the window areas with image difference ratio k−d of FIG. 8C. In FIG. 9C, using the window areas with image difference ratio k−d, the window area r21, indicated as the region 23, is a stand-alone area that is not connected to other selected areas and thus is assigned as a variation region of one-unit-area, and there is only one variation region of one-unit-area; similarly, there is one variation region of two-unit-area 24, which is a connected region of the window area r43 and the window area r44; there is one variation region of three-unit-area 25, which is a connected region of the window area r25, the window area r35 and the window area r36; and there are two variation regions of four-unit-area 26, which are respectively a region of the window area r51, the window area r52, the window area r61, and the window area r62, and a region of the window area r55, the window area r56, the window area r65, and the window area r66. In this embodiment, regardless of whether a variation region contains only one window area or is an assembly of multiple window areas, the identified variation regions are ordered according to their size, and the four largest variation regions are selected. That is, in this embodiment, the two variation regions of four-unit-area 26 are selected first, followed by the variation region of three-unit-area 25 and the variation region of two-unit-area 24.
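A sketch of this final ranking step, reusing the labels array from the connected component labeling sketch above; the function name is introduced here purely for illustration:

```python
def regions_by_size(labels):
    """Rank the connected regions of variation from largest to smallest
    total unit-area (cf. FIG. 9C), so the most obvious regions can be
    labeled first when more windows qualify than the requested value b.
    Returns (region_id, unit_area) pairs, largest regions first."""
    sizes = np.bincount(labels.ravel())[1:]  # unit-area of region ids 1..n
    order = np.argsort(sizes)[::-1]          # largest first
    return [(int(r) + 1, int(sizes[r])) for r in order]
```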
To sum up, the medical image comparison method and system of the present disclosure are capable of rapidly comparing medical images of a specific area in a patient that are taken at different times according to the image difference ratio so as to locate a region of variation from the medical images. Thereby, physicians are not only relieved from the time-consuming and labor-intensive task of having to manually search and find all the differences between retinal images that are taken at different times, but also from possible misdiagnosis caused by human error. Moreover, physicians are enabled to make a more objective evaluation to determine whether or not the region of variation is an indication of deterioration, for assisting the physicians to arrange corresponding tracking procedures.
By the image difference ratios obtained in the present disclosure, the degree of matching between two images of the same target area can be evaluated and determined without being limited by differences in imaging angle, light source, luminance and parameter configuration between the different retinal imaging processes.
In addition, by the image difference ratios obtained in the present disclosure for defining the connectivity of the window areas, the window areas can be clustered and connected so as to be used for labeling variation regions according to a specific label value, and thus the labeled variation regions can be provided to a physician for diagnosis.
With respect to the above description then, it is to be realized that the optimum dimensional relationships for the parts of the disclosure, to include variations in size, materials, shape, form, function and manner of operation, assembly and use, are deemed readily apparent and obvious to one skilled in the art, and all equivalent relationships to those illustrated in the drawings and described in the specification are intended to be encompassed by the present disclosure.

Claims (15)

What is claimed is:
1. A medical image comparison method, comprising the steps of:
obtaining a plurality of images of a body at different time points, while allowing the plural images to include a first image captured at a first time point and a second image captured at a second time point;
obtaining a first feature point group by detecting feature points in the first image, while obtaining a second feature point group by detecting feature points in the second image;
enabling an overlapping image information to be generated by aligning the second image with the first image according to the first feature point group and the second feature point group, while allowing the overlapping image information to include a first matching image corresponding to the first image and a second matching image corresponding to the second image; and
sequentially extracting corresponding window areas from the first matching image and the second matching image in the overlapping image information respectively by the use of a sliding window mask, while calculating an image difference ratio for each of the window areas according to the ratio between the number of matching points and the number of unmatched points in the corresponding window areas of the first and the second matching images.
2. The method of claim 1, further comprising the step of:
clustering the connectivity of the window areas in the overlapping image information according to the image difference ratio of each of the window areas.
3. The method of claim 2, wherein the clustering of the connectivity of the window areas in the overlapping image information further comprises the step of:
calculating a cumulative amount of window areas according to the corresponding image difference ratio.
4. The method of claim 3, wherein the clustering of the connectivity of the window areas in the overlapping image information further comprises the step of:
labeling the window areas with image difference ratios that are conforming to a specific value as a specific window area; and
determining whether the labeled window areas are neighboring to one another, while connecting the neighboring labeled window areas into a region of variation.
5. The method of claim 4, further comprising the step of:
identifying and labeling the region of variation on the overlapping image information.
6. The method of claim 1, wherein the obtaining of the first feature point group of the first image and the second feature point group of the second image further comprises the step of:
using a scale invariant feature transform (SIFT) algorithm and a speeded-up robust features (SURF) algorithm to obtain the first feature point group of the first image and the second feature point group of the second image.
7. The method of claim 1, wherein the aligning of the second image with the first image is enabled by the use of a random sample consensus (RANSAC) algorithm with affine mapping.
8. The method of claim 1, wherein the use of the sliding window mask further comprises the step of:
partitioning and segmenting the first and the second images into a plurality of the window areas.
9. The method of claim 8, wherein the window areas are arranged in a manner selected from the group consisting of: the window areas are arranged for allowing partial overlapping areas against one another, and the window areas are arranged without overlapping.
10. A medical image comparison system, comprising:
an image processing device, further comprising:
an image capturing module, for obtaining a plurality of images of a body at different time points, while allowing the plural images to include a first image captured at a first time point and a second image captured at a second time point;
a feature extracting module, coupling to the image capturing module, for obtaining a first feature point group by detecting feature points in the first image, while obtaining a second feature point group by detecting feature points in the second image; and
an information alignment module, coupling to the feature extracting module, for aligning the second image with the first image according to the first feature point group and the second feature point group so as to generate an overlapping image information, while enabling the overlapping image information to include a first matching image corresponding to the first image and a second matching image corresponding to the second image; and
an image calculation device, coupling to the image processing device, further comprising: a difference comparison module, provided for sequentially extracting corresponding window areas from the first matching image and the second matching image in the overlapping image information respectively by the use of a sliding window mask, while calculating an image difference ratio for each of the window areas according to the ratio between the number of matching points and the number of unmatched points in the corresponding window areas of the first and the second matching images.
11. The system of claim 10, further comprising:
a cluster connection module, coupling to the difference comparison module, for clustering the connectivity of the window areas in the overlapping image information according to the image difference ratio of each of the window areas.
12. The system of claim 11, further comprising:
an image labeling module, coupling to the cluster connection module, for labeling a region of variation on the overlapping image information.
13. The system of claim 12, further comprising:
an output device, coupling to the image labeling module, for outputting the labeled region of variation of the overlapping image information.
14. The system of claim 10, wherein the feature extracting module uses a scale invariant feature transform (SIFT) algorithm and a speeded-up robust features (SURF) algorithm to obtain the first feature point group of the first image and the second feature point group of the second image.
15. The system of claim 10, wherein the information alignment module uses a random sample consensus (RANSAC) algorithm with affine mapping to align the second image with the first image.
US15/855,082 2017-12-27 2017-12-27 Medical image comparison method and system thereof Active 2038-04-06 US10510145B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US15/855,082 US10510145B2 (en) 2017-12-27 2017-12-27 Medical image comparison method and system thereof
TW107101310A TWI656323B (en) 2017-12-27 2018-01-12 Medical image difference comparison method and system thereof
CN201810029729.XA CN109978811A (en) 2017-12-27 2018-01-12 Medical image difference comparison method and its system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/855,082 US10510145B2 (en) 2017-12-27 2017-12-27 Medical image comparison method and system thereof

Publications (2)

Publication Number Publication Date
US20190197689A1 US20190197689A1 (en) 2019-06-27
US10510145B2 true US10510145B2 (en) 2019-12-17

Family

ID=66948973

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/855,082 Active 2038-04-06 US10510145B2 (en) 2017-12-27 2017-12-27 Medical image comparison method and system thereof

Country Status (3)

Country Link
US (1) US10510145B2 (en)
CN (1) CN109978811A (en)
TW (1) TWI656323B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110766735B (en) * 2019-10-21 2020-06-26 北京推想科技有限公司 Image matching method, device, equipment and storage medium

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100530220C (en) 2006-05-10 2009-08-19 航伟科技股份有限公司 Body image abnormal area statistical detection method
US7856135B1 (en) 2009-12-02 2010-12-21 Aibili—Association for Innovation and Biomedical Research on Light and Image System for analyzing ocular fundus images
TW201225003A (en) 2010-12-08 2012-06-16 Hon Hai Prec Ind Co Ltd System and method for marking differences of two images
TW201250608A (en) 2011-06-15 2012-12-16 Hon Hai Prec Ind Co Ltd Image comparison system and method
TWI452998B (en) 2009-06-17 2014-09-21 Univ Southern Taiwan System and method for establishing and analyzing skin parameters using digital image multi-area analysis
TWI459821B (en) 2007-12-31 2014-11-01 Altek Corp Identification device of image feature pixel and its identification method
TW201509363A (en) 2013-05-29 2015-03-16 Capso Vision Inc Reconstruction of images from an in vivo multi-camera capsule
US20150110348A1 (en) 2013-10-22 2015-04-23 Eyenuk, Inc. Systems and methods for automated detection of regions of interest in retinal images
US20160012311A1 (en) * 2014-07-09 2016-01-14 Ditto Labs, Inc. Systems, methods, and devices for image matching and object recognition in images
US20160078622A1 (en) 2014-09-16 2016-03-17 National Taiwan University Method and wearable apparatus for disease diagnosis
TWM528483U (en) 2016-05-11 2016-09-11 jun-qi Wang Evaluation system of performing feature identification of living body by using image
US20170055814A1 (en) * 2014-06-01 2017-03-02 Capsovision ,Inc. Reconstruction of Images from an in Vivo Multi-Camera Capsule with Confidence Matching
US20170076145A1 (en) 2015-09-11 2017-03-16 EyeVerify Inc. Image enhancement and feature extraction for ocular-vascular and facial recognition
TWI579173B (en) 2014-07-28 2017-04-21 國立中興大學 An driver fatigue monitoring and detection method base on an ear-angle
CN106846317A (en) 2017-02-27 2017-06-13 北京连心医疗科技有限公司 A kind of feature based extracts the method for retrieving medicine image with Similarity matching

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102006029718A1 (en) * 2006-06-28 2008-01-10 Siemens Ag Organ system`s e.g. brain, images evaluating method for detecting pathological change in medical clinical picture, involves extending registration for area to extended registration, such that another area is detected
CN102081799B (en) * 2011-01-10 2012-12-12 西安电子科技大学 Method for detecting change of SAR images based on neighborhood similarity and double-window filtering
GB201208088D0 (en) * 2012-05-09 2012-06-20 Ncam Sollutions Ltd Ncam
JP6125281B2 (en) * 2013-03-06 2017-05-10 東芝メディカルシステムズ株式会社 Medical image diagnostic apparatus, medical image processing apparatus, and control program
CN104809732B (en) * 2015-05-07 2017-06-20 山东鲁能智能技术有限公司 A kind of power equipment appearance method for detecting abnormality compared based on image
CN106934807B (en) * 2015-12-31 2022-03-01 深圳迈瑞生物医疗电子股份有限公司 Medical image analysis method and system and medical equipment
CN106067168A (en) * 2016-05-25 2016-11-02 深圳市创驰蓝天科技发展有限公司 A kind of unmanned plane image change recognition methods
CN107248172A (en) * 2016-09-27 2017-10-13 中国交通通信信息中心 A kind of remote sensing image variation detection method based on CVA and samples selection

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100530220C (en) 2006-05-10 2009-08-19 航伟科技股份有限公司 Body image abnormal area statistical detection method
TWI459821B (en) 2007-12-31 2014-11-01 Altek Corp Identification device of image feature pixel and its identification method
TWI452998B (en) 2009-06-17 2014-09-21 Univ Southern Taiwan System and method for establishing and analyzing skin parameters using digital image multi-area analysis
US7856135B1 (en) 2009-12-02 2010-12-21 Aibili—Association for Innovation and Biomedical Research on Light and Image System for analyzing ocular fundus images
TW201225003A (en) 2010-12-08 2012-06-16 Hon Hai Prec Ind Co Ltd System and method for marking differences of two images
TW201250608A (en) 2011-06-15 2012-12-16 Hon Hai Prec Ind Co Ltd Image comparison system and method
TW201509363A (en) 2013-05-29 2015-03-16 Capso Vision Inc Reconstruction of images from an in vivo multi-camera capsule
US20150110348A1 (en) 2013-10-22 2015-04-23 Eyenuk, Inc. Systems and methods for automated detection of regions of interest in retinal images
US20170055814A1 (en) * 2014-06-01 2017-03-02 Capsovision ,Inc. Reconstruction of Images from an in Vivo Multi-Camera Capsule with Confidence Matching
US20160012311A1 (en) * 2014-07-09 2016-01-14 Ditto Labs, Inc. Systems, methods, and devices for image matching and object recognition in images
TWI579173B (en) 2014-07-28 2017-04-21 國立中興大學 An driver fatigue monitoring and detection method base on an ear-angle
US20160078622A1 (en) 2014-09-16 2016-03-17 National Taiwan University Method and wearable apparatus for disease diagnosis
US20170076145A1 (en) 2015-09-11 2017-03-16 EyeVerify Inc. Image enhancement and feature extraction for ocular-vascular and facial recognition
TWM528483U (en) 2016-05-11 2016-09-11 jun-qi Wang Evaluation system of performing feature identification of living body by using image
CN106846317A (en) 2017-02-27 2017-06-13 北京连心医疗科技有限公司 A kind of feature based extracts the method for retrieving medicine image with Similarity matching

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Satoshi Sakuma et al., Automated detection of changes in sequential color ocular fundus images. Part of the SPIE Conference on Image Processing, 1998, vol. 3338.
Taiwan Patent Office, "Office Action", dated Sep. 3, 2018, Taiwan.

Also Published As

Publication number Publication date
TWI656323B (en) 2019-04-11
TW201928295A (en) 2019-07-16
US20190197689A1 (en) 2019-06-27
CN109978811A (en) 2019-07-05

Similar Documents

Publication Publication Date Title
CN107564580B (en) Gastroscope visual aids processing system and method based on integrated study
CN105513077B (en) A kind of system for diabetic retinopathy screening
JP6025311B2 (en) Ophthalmic diagnosis support apparatus and method
CN111227864B (en) Device for detecting focus by using ultrasonic image and computer vision
CN106909778A (en) A kind of Multimodal medical image recognition methods and device based on deep learning
US20220172828A1 (en) Endoscopic image display method, apparatus, computer device, and storage medium
CN111214255B (en) Medical ultrasonic image computer-aided method
KR102338018B1 (en) Ultrasound diagnosis apparatus for liver steatosis using the key points of ultrasound image and remote medical-diagnosis method using the same
CN112465772B (en) Fundus colour photographic image blood vessel evaluation method, device, computer equipment and medium
CN108806776A (en) A method of the Multimodal medical image based on deep learning
CN110838114B (en) Pulmonary nodule detection method, device and computer storage medium
CN111797900A (en) Arteriovenous classification method and device of OCT-A image
US10510145B2 (en) Medical image comparison method and system thereof
KR20110019992A (en) Diagnosing system and method using iridology
JP2006263127A (en) Ocular fundus diagnostic imaging support system and ocular fundus diagnostic imaging support program
CN117495817A (en) Method and device for judging abnormal images of blood vessels under endoscope
CN112308888A (en) Full-modal medical image sequence grouping method based on deep learning physical sign structure
Giancardo et al. Quality assessment of retinal fundus images using elliptical local vessel density
Hu et al. Multi-image stitching for smartphone-based retinal fundus stitching
WO2017193581A1 (en) Automatic processing system and method for mammary gland screening image
US20230169644A1 (en) Computer vision system and method for assessing orthopedic spine condition
Lee et al. A postoperative free flap monitoring system: Circulatory compromise detection based on visible-light image
Zheng et al. New simplified fovea and optic disc localization method for retinal images
WO2017046377A1 (en) Method and computer program product for processing an examination record comprising a plurality of images of at least parts of at least one retina of a patient
CN112862754A (en) System and method for prompting missing detection of retained image based on intelligent identification

Legal Events

Date Code Title Description
AS Assignment

Owner name: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE, TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHEN, JIAN-REN;CHEN, GUAN-AN;HUANG, SU-CHEN;AND OTHERS;REEL/FRAME:044490/0152

Effective date: 20171226

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4