CN106651827A - Fundus image registering method based on SIFT characteristics - Google Patents


Info

Publication number: CN106651827A (granted as CN106651827B)
Application number: CN201610813202.7A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 吴健, 韩玉强, 陈亮, 梁婷婷, 万瑶, 应豪超, 高维, 邓水光, 李莹, 尹建伟, 吴朝晖
Original and current assignee: Zhejiang University (ZJU)
Application filed by Zhejiang University (ZJU); priority to CN201610813202.7A
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic


Abstract

The invention discloses a fundus image registration method based on SIFT features. The method classifies a batch of input fundus images by viewing angle, computes the transformation relationships between the images, transforms the images onto the same background, and reveals which parts of the fundus have changed by switching rapidly between the aligned images. The method mainly uses a fuzzy-convergence optic disc localization algorithm to classify the input fundus images into two angle classes, left and right, according to the position of the optic disc. Within each class, the first uploaded image serves as the reference to which the other images are registered: SIFT feature points are extracted from all images, pairwise matching relationships between the points are computed, and the RANSAC algorithm then estimates the transformation model parameters between each pair of images. The images are transformed onto the same background according to the transformation model, and an image-switching interval is configured so that changes between images can be found quickly and accurately by switching between them.

Description

Fundus image registration method based on SIFT features
Technical field
The invention belongs to the technical field of medical image processing, and in particular relates to a fundus image registration method based on SIFT features.
Background technology
Fundus image examination is an objective, standard diagnostic method in ophthalmology. Fundus images are significant for the early detection, diagnosis, and treatment guidance of fundus lesions caused by diabetes and hypertension, as well as age-related maculopathy, fundus arteriosclerosis, and retinopathy. A patient typically has multiple examination records and therefore multiple fundus images. By comparing images taken at different times, the progression of a patient's fundus lesions can be tracked and discovered quickly and accurately.
Fundus images contain rich vascular and texture structure and have great diagnostic value for eye diseases such as drusen and macular disorders, as well as systemic diseases such as diabetes, leukemia, and arteriosclerosis. Doctors often compare fundus images from different times to find the changes that have occurred in the fundus and guide follow-up diagnosis and treatment. However, because fundus images contain such rich vascular and texture structure, and there are usually many of them, doctors can hardly complete this comparison with the naked eye. Multiple fundus images therefore need to be registered. The purpose of image registration is to compare images of the same target acquired under different conditions, for example from different acquisition devices, at different times, or from different viewing angles. Specifically, for two images in an image set, a spatial transformation is found that maps one image onto the other, so that points corresponding to the same spatial position in the two images are brought into one-to-one correspondence, achieving information fusion.
Fundus image registration has long been a research focus: by registering fundus images from different times and switching rapidly between them, changes between the images can be found, helping doctors diagnose fundus lesions. Because fundus images contain rich vascular and texture structure, comparing them by eye is not only laborious and time-consuming but also likely to miss subtle changes. Traditional manual registration suffers from low accuracy, heavy workload, and poor repeatability, and the large volume of fundus scan data makes manual registration even more impractical, so automatic registration of fundus images has great practical value.
Summary of the invention
To address the above technical deficiencies of the prior art, the invention provides a fundus image registration method based on SIFT features. Using the SIFT feature points of the fundus images, the images are transformed onto the same background, and differences between images can be found quickly by switching dynamically between them.
A fundus image registration method based on SIFT features comprises the following steps:
(1) dividing all fundus images to be registered into two classes according to whether they correspond to the left or right eye;
(2) performing SIFT (scale-invariant feature transform) feature detection on all fundus images to be registered to extract their SIFT feature points; each SIFT feature point has a multi-dimensional feature description, and these descriptions constitute the corresponding SIFT feature vector;
(3) choosing one image from each class as the reference image, and matching the SIFT feature points of the other images in the class against the reference image based on the SIFT feature vectors;
(4) for each fundus image in a class other than the reference image, computing the affine transformation matrix between that image and the reference image, and then affine-transforming every pixel of that image into the reference image according to the matrix to achieve registration; all non-reference images in the class are processed in this way.
In step (1), each fundus image is processed with the fuzzy convergence algorithm to locate the optic disc. First, the fundus image is binarized to separate target from background. For each vessel pixel in the image, the number of target pixels within the N × N image block centered on that vessel pixel is counted and taken as that pixel's vote count; all vessel pixels in the image are traversed in this way. Then, the top η% of vessel pixels by vote count are selected, and the coordinates of these pixels are averaged to obtain the center of the optic disc. Finally, the horizontal position of the optic disc center is compared with the image center, and the fundus image is classified as a left-eye or right-eye image accordingly. N and η are preset values.
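The voting step above can be sketched as follows (an illustrative numpy implementation, not code from the patent; `window` and `top_percent` stand in for the preset values N and η, and a precomputed binary vessel mask is assumed as input):

```python
import numpy as np

def locate_optic_disc(vessel_mask, window=25, top_percent=10.0):
    """Sketch of the fuzzy-convergence voting step.

    vessel_mask : 2-D binary array (1 = vessel pixel after binarization).
    Each vessel pixel's vote count = number of vessel pixels inside the
    window x window block centred on it; the centroid of the top
    `top_percent` percent voted vessel pixels approximates the disc centre.
    """
    h, w = vessel_mask.shape
    pad = window // 2
    padded = np.pad(vessel_mask, pad, mode="constant")
    # An integral image makes each N x N box count O(1) per pixel.
    ii = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    votes = (ii[window:, window:] - ii[:-window, window:]
             - ii[window:, :-window] + ii[:-window, :-window])
    ys, xs = np.nonzero(vessel_mask)
    v = votes[ys, xs]
    k = max(1, int(len(v) * top_percent / 100.0))
    top = np.argsort(v)[-k:]                    # highest-voted vessel pixels
    cy, cx = ys[top].mean(), xs[top].mean()     # disc centre estimate
    side = "left" if cx < w / 2.0 else "right"  # classify by disc position
    return (cy, cx), side
```

A vessel segmentation step (here assumed given as `vessel_mask`) would precede this in practice.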
In step (3), the fundus image with the earliest acquisition time is chosen from each class as the reference image.
In step (3), the SIFT feature points of the other images in a class are matched against the reference image as follows: for each fundus image in the class other than the reference image, the Euclidean distance between the SIFT feature vector of each SIFT feature point in that image and every SIFT feature point in the reference image is computed; the minimum min and second minimum secmin of these distances are taken, and if min < k·secmin, the SIFT feature point in the fundus image is judged to match the reference-image feature point at distance min, where k is a preset value. All SIFT feature points in the image are traversed in this way to complete the matching between the image and the reference image.
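The min < k·secmin ratio test can be sketched as follows (illustrative numpy code, not the patent's implementation; `desc_a` and `desc_b` are assumed arrays of SIFT descriptors, one row per feature point):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, k=0.7):
    """Match descriptors of one image (desc_a) against the reference
    image's descriptors (desc_b) with the nearest/second-nearest
    Euclidean-distance test. Returns (index_in_a, index_in_b) pairs.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every b
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < k * dists[second]:         # unambiguous match
            matches.append((i, int(best)))
    return matches
```

Ambiguous points, whose nearest and second-nearest distances are similar, are simply discarded rather than matched.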
In step (4), for each fundus image in a class other than the reference image, the affine transformation matrix between that image and the reference image is computed with the RANSAC (random sample consensus) algorithm as follows:
4.1 Randomly select four pairs from all matched SIFT feature point pairs between the fundus image and the reference image, and determine the affine transformation matrix M between the two images from the coordinates of these four point pairs according to:

M = | m0  m1  Δx |
    | m3  m4  Δy |
    |  0   0   1 |

[x′, y′, 1]T = M · [x, y, 1]T

where m0, m1, m3 and m4 are scaling/rotation factors, Δx and Δy are the offsets of the fundus image relative to the reference image in the x and y directions, and [x′, y′, 1]T and [x, y, 1]T are the homogeneous coordinates of a matched pair of SIFT feature points in their respective images;
4.2 Apply the affine transformation M to all matched SIFT feature points in the fundus image to obtain their mapped positions in the reference image;
4.3 Compute the Euclidean distance between each mapped point's SIFT feature vector and that of its matched feature point in the reference image; normalize all the distances to form an estimation error vector; take the matched pairs whose error is below a threshold as inliers; count the inliers and record the corresponding affine transformation matrix M;
4.4 Repeat steps 4.1 to 4.3 several times, and take the affine transformation matrix M with the most inliers as the final affine transformation matrix between the fundus image and the reference image.
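Steps 4.1 to 4.4 can be sketched as follows (an illustrative numpy implementation under simplifying assumptions: the four-pair model is fitted by least squares, and geometric reprojection error is used as the inlier test instead of the normalized descriptor-distance error described in step 4.3):

```python
import numpy as np

def estimate_affine_ransac(pts_src, pts_dst, iters=200, thresh=3.0, seed=0):
    """RANSAC sketch: repeatedly fit the 6-parameter affine model
    [x', y', 1]^T = M [x, y, 1]^T from random point pairs, count
    inliers by reprojection error, and keep the best M.
    pts_src, pts_dst : (n, 2) arrays of matched point coordinates.
    """
    rng = np.random.default_rng(seed)
    n = len(pts_src)
    src_h = np.hstack([pts_src, np.ones((n, 1))])   # homogeneous coords
    best_M, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.choice(n, size=4, replace=False)  # 4 random matched pairs
        A, B = src_h[idx], pts_dst[idx]
        # least-squares solve for the top two rows of M: A @ P = B
        P, *_ = np.linalg.lstsq(A, B, rcond=None)
        M = np.vstack([P.T, [0.0, 0.0, 1.0]])       # 3x3 affine matrix
        proj = src_h @ M.T                          # map all source points
        err = np.linalg.norm(proj[:, :2] - pts_dst, axis=1)
        inliers = int((err < thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_M = inliers, M
    return best_M, best_inliers
```

Samples containing a wrong match yield few inliers and are discarded, which is how RANSAC excludes the errors that incorrect matches would otherwise introduce.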
The invention classifies a batch of input fundus images by angle, computes the transformation relationships between the images, transforms them onto the same background, and reveals which parts of the fundus have changed by switching rapidly between the aligned images. The invention mainly uses the fuzzy-convergence optic disc localization algorithm to classify the input fundus images into two angle classes, left and right, according to the optic disc position. Within each class, the first uploaded image is selected as the reference and the other images are registered to it; the SIFT feature points of all images are extracted and the pairwise matching relationships between the points are computed. Finally, the RANSAC algorithm estimates the transformation model parameters between each pair of images, the images are transformed onto the same background according to the model, and an image-switching interval is set so that changes between images can be found quickly and accurately by switching between them. The invention therefore has the following advantageous effects:
(1) Fundus images are classified automatically according to the optic disc position, so users can upload fundus images of different eyes in batches.
(2) Registration uses the SIFT feature points of the fundus images, which are more robust than traditional features.
(3) Transformation model parameter estimation uses the robust RANSAC (random sample consensus) algorithm, which can exclude the errors introduced by incorrect matches.
(4) The invention does not stitch the registered images together but transforms them onto the same background, so that differences between images can be found quickly by switching dynamically between them.
Brief description of the drawings
Fig. 1 is a flowchart of the fundus image registration method of the invention.
Fig. 2 is a flowchart of fundus image angle classification.
Fig. 3 is a flowchart of feature point detection and matching.
Fig. 4 is a flowchart of transformation model parameter estimation.
Detailed description of the embodiments
To describe the invention more concretely, the technical scheme of the invention is described in detail below with reference to the drawings and a specific embodiment.
As shown in Fig. 1, the fundus image registration method based on SIFT features mainly comprises three parts performed in strict order: fundus image angle classification, feature point detection and matching, and transformation model parameter estimation. After the user uploads a batch of fundus images, the system first locates the optic disc in each image and obtains the position coordinates of the disc center. These coordinates are compared with the image center coordinates to determine the image's angle, and the image is classified accordingly. The first image in each class, usually the earliest one, is then selected as the reference, and all other images are registered to it. The second part detects the feature points of every image, computes the similarity between each image's feature points and those of the reference, and matches the points. Given the characteristics of fundus images, this step uses SIFT feature points: SIFT is a scale-space-based feature transform algorithm for feature point detection and image registration that is invariant to image scaling, rotation, and translation. After the SIFT feature points are detected, the matching relationship between points is found by computing their similarity, for example the Euclidean distance. Once the matched features of two images are found, the two images may still differ by translation, rotation, and scale, so a transformation matrix must be computed. Because the obtained matches may contain outliers, that is, incorrect matches, the outliers must first be excluded. The robust transformation estimation algorithm RANSAC (random sample consensus) is therefore used; it further removes incorrect matches by exploiting the intrinsic constraints of the feature point set, and finally yields a transformation matrix M. With this matrix, the two images can be transformed onto the same background, ready for subsequent dynamic display.
Fig. 2 shows the flow of fundus image angle classification. Some patients have fundus images of both eyes taken during examination, so the invention divides fundus images into two classes by angle. The main and most prominent basis for classification is the position of the optic disc, so the optic disc is first located with the fuzzy convergence algorithm. Fuzzy convergence is a voting-based algorithm with strong robustness; even for retinas with lesions, the method still performs well. The process has two main steps: first, each vessel centerline pixel casts votes within a sliding N × N window around it; then, according to the voting results, an adaptive threshold selects the top 10% of pixels by votes as the optic disc candidate set, and the coordinates in the candidate set are averaged to obtain the center coordinates of the optic disc. Comparing this coordinate with the fundus image center coordinate yields the classification.
Fig. 3 shows the process of feature point detection and matching. Fundus images have no obvious corner or boundary information but do have abundant SIFT keypoints, which are more robust than traditional features. The SIFT feature detection algorithm is a scale-space-based local feature description operator that is invariant to image scaling and rotation and even to affine transformation. The algorithm first performs feature detection in scale space and determines the keypoint positions and the scales at which they reside, then takes the principal gradient direction of each keypoint's neighborhood as the point's orientation, making the operator independent of scale and direction. It is invariant to rotation, scaling, and brightness changes, remains stable to a certain degree under viewpoint changes, affine transformation, and noise, and produces a 128-dimensional feature descriptor for each keypoint to serve as the basis for matching. A scale-space kernel function is first established; the two-dimensional Gaussian function is defined as:

G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))

where σ is the variance of the Gaussian normal distribution. The scale-space representation of a two-dimensional image I(x, y) at different scales is obtained by convolving the image with the Gaussian kernel:
L(x, y, σ) = G(x, y, σ) * I(x, y)
Here σ is the scale-space factor in L(x, y, σ): the smaller its value, the less the image is smoothed and the smaller the corresponding scale. Large scales correspond to the overall appearance of the image, small scales to its fine details; L represents the scale space of the image. Feature point detection is performed on the middle layers of the difference-of-Gaussians pyramid: each candidate pixel is compared with its 8 neighbors at the same scale and the 2 × 9 corresponding pixels at adjacent scales, and is selected as a candidate feature point if it is a local extremum of the two-dimensional image space within the scale space. The direction of each feature point is determined by a histogram of the gradient directions of the neighborhood pixels in the corresponding scale-space image L, and a descriptor is then built for each feature point, describing it with a 128-dimensional feature vector. Once the feature points and their descriptors are determined, feature point matching is performed according to the similarity of the neighborhood information of corresponding feature points in the two images; that is, whether two feature points match is judged from the Euclidean distance between their feature vectors. Let the two images L1 and L2 have N1 and N2 keypoints respectively. For each keypoint in L1, the Euclidean distances between its feature vector and the feature vectors of all keypoints in L2 are computed, giving N2 distances; the minimum min and second minimum secmin are selected, and if min < k·secmin (e.g., k = 0.7), the keypoint in L1 is considered to match the keypoint in L2 that produced the distance min.
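The scale-space construction above can be sketched as follows (illustrative numpy code, not the patent's implementation; the blur is applied as two 1-D passes, which is equivalent to convolving with the 2-D Gaussian G(x, y, σ), and the layer spacing k = 1.6 is the conventional SIFT choice):

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Normalized 1-D Gaussian kernel."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def gaussian_blur(img, sigma):
    """L(x, y, sigma) = G(x, y, sigma) * I(x, y), via two separable passes."""
    g = gaussian_kernel(sigma)
    # convolve rows, then columns ('same' keeps the image size)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, tmp)

def difference_of_gaussians(img, sigma, k=1.6):
    """DoG layer whose extrema (against 8 same-scale neighbours and
    2 x 9 neighbours at adjacent scales) are SIFT candidate keypoints."""
    return gaussian_blur(img, k * sigma) - gaussian_blur(img, sigma)
```

In a full SIFT implementation several such DoG layers per octave would be stacked and searched for 3-D extrema; the sketch shows only the scale-space and DoG construction itself.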
Fig. 4 shows the flow of transformation model parameter estimation. After the matched features of two images are found, the two images may still differ by translation, rotation, and scale, so an affine transformation matrix must be computed:

M = | m0  m1  Δx |
    | m3  m4  Δy |
    |  0   0   1 |

where M is the affine transformation matrix, m0, m1, m3 and m4 are scaling/rotation factors, and Δx and Δy are the offsets of the two images in the x and y directions. The transformation relationship between the two images can then be expressed as:
[x′, y′, 1]T = M · [x, y, 1]T
where [x, y, 1]T and [x′, y′, 1]T are the homogeneous coordinates of a matched point pair in the two images.
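The matrix form and homogeneous-coordinate product above can be checked with a few lines (illustrative code; the parameter names follow the matrix given above):

```python
import numpy as np

def make_affine(m0, m1, m3, m4, dx, dy):
    """Assemble the 3x3 affine matrix M from the scaling/rotation
    factors m0, m1, m3, m4 and the x/y offsets dx, dy."""
    return np.array([[m0, m1, dx],
                     [m3, m4, dy],
                     [0.0, 0.0, 1.0]])

def apply_affine(M, x, y):
    """[x', y', 1]^T = M [x, y, 1]^T for one match point."""
    xp, yp, _ = M @ np.array([x, y, 1.0])
    return xp, yp
```

For example, a pure translation has m0 = m4 = 1 and m1 = m3 = 0, while a rotation by angle θ has m0 = m4 = cos θ and m1 = −m3 = −sin θ.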
The invention adopts the robust transformation estimation algorithm RANSAC, which can further remove incorrect matches by exploiting the intrinsic constraints of the feature point set.
The specific flow of estimating the affine transformation matrix M with RANSAC is as follows:
(1) Data preparation: find the matched point pairs between the two images (L1 and L2) and their coordinate information.
(2) Model estimation: given n > 4 matched point pairs, randomly select 4 pairs and solve the equations above for the affine model parameters m0, m1, m3, m4, Δx, Δy.
(3) Model evaluation: apply the above affine transformation to all matched points (x, y) in image L1 to obtain a series of transformed coordinates (x′, y′). Compute the Euclidean distance between each transformed coordinate and the corresponding matched point (u, v) in L2, giving the estimation error E of the affine transformation. Normalize each error by the scale of the corresponding feature point in L2, so that each feature match yields a normalized estimation error; the result is an estimation error vector.
(4) Compare the estimation error vector with an error threshold T; the feature matches whose error is below T are inliers.
(5) If the number of inliers exceeds the current maximum, the current best affine transformation has been found; save this transformation matrix.
(6) If the iteration has not finished, repeat steps (2) to (5). The affine transformation finally obtained is taken as the best affine transformation model between L1 and L2.
After the transformation model is obtained, all fundus images in each class are transformed onto the same background. The images are not stitched together; instead, an equal number of transformed images are generated and switched dynamically at a set time interval, so that changes between the fundus images can be found quickly and clearly.
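Transforming each image onto the reference background amounts to an affine warp; a minimal sketch follows (illustrative numpy code, not the patent's implementation), using inverse mapping with nearest-neighbour sampling, where M maps source coordinates to reference coordinates:

```python
import numpy as np

def warp_to_reference(img, M, out_shape):
    """Map `img` onto the reference image's background using the
    affine matrix M (source -> reference). Every output pixel is
    looked up at its inverse-mapped position in the source image.
    """
    H, W = out_shape
    Minv = np.linalg.inv(M)
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1)
    sx, sy, _ = Minv @ coords          # reference frame -> source frame
    sx = np.round(sx).astype(int).reshape(H, W)
    sy = np.round(sy).astype(int).reshape(H, W)
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out = np.zeros(out_shape, dtype=img.dtype)
    out[valid] = img[sy[valid], sx[valid]]  # pixels outside stay zero
    return out
```

Once every image in a class is warped this way, a viewer can cycle through the aligned frames at the configured interval so that any change between acquisitions stands out.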
The above description of the embodiment is intended to help those skilled in the art understand and apply the invention. Those skilled in the art can obviously make various modifications to the above embodiment and apply the general principles described here to other embodiments without creative effort. Therefore, the invention is not limited to the above embodiment; improvements and modifications made by those skilled in the art according to this disclosure shall all fall within the protection scope of the invention.

Claims (5)

1. A fundus image registration method based on SIFT features, comprising the following steps:
(1) dividing all fundus images to be registered into two classes according to whether they correspond to the left or right eye;
(2) performing SIFT feature detection on all fundus images to be registered to extract their SIFT feature points; each SIFT feature point has a multi-dimensional feature description, and these descriptions constitute the corresponding SIFT feature vector;
(3) choosing one image from each class as the reference image, and matching the SIFT feature points of the other images in the class against the reference image based on the SIFT feature vectors;
(4) for each fundus image in a class other than the reference image, computing the affine transformation matrix between that image and the reference image, and then affine-transforming every pixel of that image into the reference image according to the matrix to achieve registration; all non-reference images in the class are processed in this way.
2. The fundus image registration method according to claim 1, wherein in step (1) each fundus image is processed with the fuzzy convergence algorithm to locate the optic disc: first, the fundus image is binarized to separate target from background; for each vessel pixel in the image, the number of target pixels within the N × N image block centered on that vessel pixel is counted and taken as that pixel's vote count, all vessel pixels in the image being traversed in this way; then, the top η% of vessel pixels by vote count are selected, and the coordinates of these pixels are averaged to obtain the center of the optic disc; finally, the horizontal position of the optic disc center is compared with the image center, and the fundus image is classified as a left-eye or right-eye image accordingly; N and η are preset values.
3. The fundus image registration method according to claim 1, wherein in step (3) the fundus image with the earliest acquisition time is chosen from each class as the reference image.
4. The fundus image registration method according to claim 1, wherein in step (3) the SIFT feature points of the other images in a class are matched against the reference image as follows: for each fundus image in the class other than the reference image, the Euclidean distance between the SIFT feature vector of each SIFT feature point in that image and every SIFT feature point in the reference image is computed; the minimum min and second minimum secmin of these distances are taken, and if min < k·secmin, the SIFT feature point in the fundus image is judged to match the reference-image feature point at distance min, where k is a preset value; all SIFT feature points in the image are traversed in this way to complete the matching between the image and the reference image.
5. The fundus image registration method according to claim 1, wherein in step (4), for each fundus image in a class other than the reference image, the affine transformation matrix between that image and the reference image is computed with the RANSAC algorithm as follows:
4.1 randomly selecting four pairs from all matched SIFT feature point pairs between the fundus image and the reference image, and determining the affine transformation matrix M between the two images from the coordinates of these four point pairs according to:

M = | m0  m1  Δx |
    | m3  m4  Δy |
    |  0   0   1 |

[x′, y′, 1]T = M · [x, y, 1]T

wherein m0, m1, m3 and m4 are scaling/rotation factors, Δx and Δy are the offsets of the fundus image relative to the reference image in the X and Y directions, and [x′, y′, 1]T and [x, y, 1]T are the homogeneous coordinates of a matched pair of SIFT feature points in their respective images;
4.2 applying the affine transformation M to all matched SIFT feature points in the fundus image to obtain their mapped positions in the reference image;
4.3 computing the Euclidean distance between each mapped point's SIFT feature vector and that of its matched feature point in the reference image, normalizing all the distances to form an estimation error vector, taking the matched pairs whose error is below a threshold as inliers, counting the inliers, and recording the corresponding affine transformation matrix M;
4.4 repeating steps 4.1 to 4.3 several times, and taking the affine transformation matrix M with the most inliers as the final affine transformation matrix between the fundus image and the reference image.
CN201610813202.7A 2016-09-09 2016-09-09 Fundus image registration method based on SIFT features Active CN106651827B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610813202.7A CN106651827B (en) 2016-09-09 2016-09-09 Fundus image registration method based on SIFT features


Publications (2)

Publication Number Publication Date
CN106651827A true CN106651827A (en) 2017-05-10
CN106651827B CN106651827B (en) 2019-05-07

Family

ID=58851966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610813202.7A Active CN106651827B (en) Fundus image registration method based on SIFT features

Country Status (1)

Country Link
CN (1) CN106651827B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101593351A (en) * 2008-05-28 2009-12-02 中国科学院自动化研究所 Ocular fundus image registration method based on range conversion and rigid transformation parameters estimation
CN101732031A (en) * 2008-11-25 2010-06-16 中国大恒(集团)有限公司北京图像视觉技术分公司 Method for processing fundus images
CN102908120A (en) * 2012-10-09 2013-02-06 北京大恒图像视觉有限公司 Eye fundus image registration method, eye fundus image optic disk nerve and vessel measuring method and eye fundus image matching method
CN104933715A (en) * 2015-06-16 2015-09-23 山东大学(威海) Registration method applied to retina fundus image
US20160150182A1 (en) * 2014-11-25 2016-05-26 Electronics And Telecommunications Research Institute Method and apparatus for providing eye-contact function to multiple points of attendance using stereo image in video conference system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yu Qing: "Research and Implementation of Fundus Image Stitching Based on the SIFT Algorithm", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107256410A (en) * 2017-05-26 2017-10-17 北京郁金香伙伴科技有限公司 Method and device for classifying mirror-like images
CN107256410B (en) * 2017-05-26 2021-05-14 上海鹰瞳医疗科技有限公司 Fundus image classification method and device
CN107330449A (en) * 2017-06-13 2017-11-07 瑞达昇科技(大连)有限公司 BDR sign detection method and device
CN107292877A (en) * 2017-07-05 2017-10-24 北京至真互联网技术有限公司 Left and right eye recognition method based on fundus image features
CN107292877B (en) * 2017-07-05 2020-07-03 北京至真互联网技术有限公司 Left and right eye identification method based on fundus image characteristics
CN107980140A (en) * 2017-10-16 2018-05-01 厦门中控智慧信息技术有限公司 Palm vein recognition method and device
CN107980140B (en) * 2017-10-16 2021-09-14 厦门熵基科技有限公司 Palm vein identification method and device
CN107958445A (en) * 2017-12-29 2018-04-24 四川和生视界医药技术开发有限公司 Splicing method and splicing device for retinal images
CN107958445B (en) * 2017-12-29 2022-02-25 四川和生视界医药技术开发有限公司 Splicing method and splicing device of retina images
CN110543802A (en) * 2018-05-29 2019-12-06 北京大恒普信医疗技术有限公司 Method and device for identifying left eye and right eye in fundus image
CN108876770A (en) * 2018-06-01 2018-11-23 山东师范大学 Joint registration method and system for multispectral fundus images
CN108961334A (en) * 2018-06-26 2018-12-07 电子科技大学 Retinal vessel wall thickness measurement method based on image registration
CN108961334B (en) * 2018-06-26 2020-05-08 电子科技大学 Retinal vessel wall thickness measuring method based on image registration
CN109166117A (en) * 2018-08-31 2019-01-08 福州依影健康科技有限公司 Automatic fundus image analysis and comparison method, and storage device
WO2020042406A1 (en) * 2018-08-31 2020-03-05 福州依影健康科技有限公司 Fundus image automatic analysis and comparison method and storage device
WO2020103288A1 (en) * 2018-11-21 2020-05-28 福州依影健康科技有限公司 Analysis method and system for feature data change of diabetic retinopathy fundus, and storage device
CN111292286A (en) * 2018-11-21 2020-06-16 福州依影健康科技有限公司 Method, system and storage device for analyzing changes in diabetic retinopathy fundus feature data
GB2593824A (en) * 2018-11-21 2021-10-06 Fuzhou Yiying Health Tech Co Ltd Analysis method and system for feature data change of diabetic retinopathy fundus, and storage device
CN111222361B (en) * 2018-11-23 2023-12-19 福州依影健康科技有限公司 Method and system for analyzing characteristic data of change of blood vessel of retina in hypertension
CN111222361A (en) * 2018-11-23 2020-06-02 福州依影健康科技有限公司 Method and system for analyzing hypertensive retinal vascular change feature data
CN109658393A (en) * 2018-12-06 2019-04-19 代黎明 Fundus image stitching method and system
CN110033422A (en) * 2019-04-10 2019-07-19 北京科技大学 Fundus OCT image fusion method and device
CN110033422B (en) * 2019-04-10 2021-03-23 北京科技大学 Fundus OCT image fusion method and device
CN110544274B (en) * 2019-07-18 2022-03-29 山东师范大学 Multispectral-based fundus image registration method and system
CN110544274A (en) * 2019-07-18 2019-12-06 山东师范大学 Multispectral-based fundus image registration method and system
CN110664435A (en) * 2019-09-23 2020-01-10 东软医疗系统股份有限公司 Method and device for acquiring cardiac data and ultrasonic imaging equipment
CN110660089A (en) * 2019-09-25 2020-01-07 云南电网有限责任公司电力科学研究院 Satellite image registration method and device
CN113379808A (en) * 2021-06-21 2021-09-10 昆明理工大学 Method for registration of multiband solar images
CN114926659A (en) * 2022-05-16 2022-08-19 上海贝特威自动化科技有限公司 Deformation target positioning algorithm based on SIFT and CM
CN114926659B (en) * 2022-05-16 2023-08-08 上海贝特威自动化科技有限公司 Deformation target positioning algorithm based on SIFT and CM

Also Published As

Publication number Publication date
CN106651827B (en) 2019-05-07

Similar Documents

Publication Publication Date Title
CN106651827A (en) Fundus image registering method based on SIFT characteristics
Lu et al. Automatic optic disc detection from retinal images by a line operator
Cheng et al. Sparse dissimilarity-constrained coding for glaucoma screening
Li et al. Automated feature extraction in color retinal images by a model based approach
Lupascu et al. Automated detection of optic disc location in retinal images
Boyer et al. Automatic recovery of the optic nervehead geometry in optical coherence tomography
CN112465772B (en) Fundus colour photographic image blood vessel evaluation method, device, computer equipment and medium
AU2021202217B2 (en) Methods and systems for ocular imaging, diagnosis and prognosis
JP2017016593A (en) Image processing apparatus, image processing method, and program
Hu et al. Automated segmentation of 3-D spectral OCT retinal blood vessels by neural canal opening false positive suppression
Loureiro et al. Using a skeleton gait energy image for pathological gait classification
CN105869166A (en) Human body action identification method and system based on binocular vision
Kafieh et al. An accurate multimodal 3-D vessel segmentation method based on brightness variations on OCT layers and curvelet domain fundus image analysis
Zhang et al. Optic disc localization by projection with vessel distribution and appearance characteristics
CN108665474B (en) B-COSFIRE-based retinal vessel segmentation method for fundus image
Parvathi et al. Automatic drusen detection from colour retinal images
Hsu A hybrid approach for brain image registration with local constraints
Niu et al. Registration of SD-OCT en-face images with color fundus photographs based on local patch matching
Sumathy et al. Distance-based method used to localize the eyeball effectively for cerebral palsy rehabilitation
TW200807309A (en) Method and system for reconstructing 3-D endoscopic images
Wei et al. The retinal image registration based on scale invariant feature
CN115205241A (en) Metering method and system for apparent cell density
Bathina et al. Robust matching of multi-modal retinal images using radon transform based local descriptor
Kim et al. Robust Detection Model of Vascular Landmarks for Retinal Image Registration: A Two‐Stage Convolutional Neural Network
Patankar et al. Orthogonal moments for determining correspondence between vessel bifurcations for retinal image registration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant