CN106651827B - Ocular fundus image registration method based on SIFT features - Google Patents


Info

Publication number
CN106651827B
CN106651827B (application CN201610813202.7A; publication CN106651827A)
Authority
CN
China
Prior art keywords
image
eye fundus
fundus image
sift feature
benchmark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610813202.7A
Other languages
Chinese (zh)
Other versions
CN106651827A (en)
Inventor
吴健
韩玉强
陈亮
梁婷婷
万瑶
应豪超
高维
邓水光
李莹
尹建伟
吴朝晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201610813202.7A priority Critical patent/CN106651827B/en
Publication of CN106651827A publication Critical patent/CN106651827A/en
Application granted granted Critical
Publication of CN106651827B publication Critical patent/CN106651827B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses an ocular fundus image registration method based on SIFT features. The batch of input fundus images is first classified by viewing angle; the transformation relations between images are then computed and the images are transformed into a common background, so that by rapidly switching between pictures it can be found which parts of the fundus have changed. The invention mainly uses the fuzzy-convergence optic disc localization algorithm to classify the batch-input fundus images by viewing angle, according to the optic disc position, into two classes: left and right. Within each class, the first uploaded picture is selected as the reference, and the other pictures are registered to it: the SIFT feature points of all pictures are extracted and the pairwise matching relations between feature points are computed. Finally, the RANSAC algorithm is used to compute the pairwise transformation model parameters; the images are transformed to the same background according to the transformation model, an image switching interval is set, and the changes between images can be found quickly and accurately by switching between them.

Description

An ocular fundus image registration method based on SIFT features
Technical field
The invention belongs to the technical field of medical image processing, and in particular relates to an ocular fundus image registration method based on SIFT features.
Background art
Fundus image diagnosis is an objective, standard diagnostic method in ophthalmology. Fundus images are of great significance for the early detection, diagnosis and treatment guidance of fundus lesions caused by diabetes, hypertension and other diseases, as well as maculopathy, fundus arteriosclerosis and retinopathy. Under normal circumstances, each patient has multiple diagnosis records, generating multiple fundus images. By comparing the images from different times, the progression of a patient's fundus lesions can be tracked and discovered quickly and accurately.
Fundus images contain rich blood vessels and texture structure, and have great diagnostic value for eye diseases such as small fundus drusen and macular disease, and for systemic diseases such as diabetes, leukaemia and arteriosclerosis. Doctors often find the changes occurring in the fundus by comparing fundus images taken at different times, so as to carry out follow-up diagnosis and treatment. However, because fundus images contain rich blood vessels and texture, and the number of fundus images is relatively large, it is difficult for doctors to complete this process by naked-eye observation alone. It is therefore necessary to register multiple fundus images. The purpose of image registration is to compare images of the same object obtained under different conditions, for example from different acquisition devices, at different times, or from different shooting angles. Specifically, for two images in an image data set, a spatial transformation is sought that maps one image onto the other, so that points corresponding to the same spatial position in the two images are brought into one-to-one correspondence, thereby achieving information fusion.
Fundus image registration has long been a research hot spot: fundus images taken at different times can be registered, and the changes between them can be found by rapidly switching pictures, helping doctors diagnose fundus lesions. However, because fundus images contain rich blood vessels and texture, comparing images by the naked eye is not only time-consuming and laborious but also makes it hard to spot subtle changes. Traditional manual registration suffers from low accuracy, heavy workload and poor repeatability, and the large data volume of fundus scans makes manual registration all the more impractical; the automatic registration of fundus images therefore has great application value.
Summary of the invention
In view of the above technical deficiencies in the prior art, the present invention provides an ocular fundus image registration method based on SIFT features. Using the SIFT feature points of fundus images, the fundus images are transformed into an identical background, and by dynamically switching between pictures the differences between images can be found quickly.
An ocular fundus image registration method based on SIFT features, comprising the following steps:
(1) dividing all fundus images to be registered into two classes according to whether they correspond to the left or right eyeball;
(2) performing SIFT (Scale-Invariant Feature Transform) feature detection on all fundus images to be registered so as to extract the SIFT feature points in each fundus image; each SIFT feature point is described by a multidimensional feature description, and these feature descriptions constitute the corresponding SIFT feature vectors;
(3) choosing one image from each class of fundus images as the reference image, and matching the SIFT feature points of the other fundus images in the class against the reference image based on the SIFT feature vectors;
(4) for any fundus image in a class other than the reference image, computing the affine transformation matrix between that fundus image and the reference image, and then affinely transforming each pixel of the fundus image into the reference image according to the affine transformation matrix so as to realize registration; all fundus images in the class other than the reference image are traversed in this way.
For any fundus image in step (1), optic disc localization is performed using the fuzzy convergence algorithm. First the fundus image is binarized to distinguish target from background. For each blood-vessel pixel in the image, the number of pixels belonging to the target within the N×N image block centred on that vessel pixel is counted and taken as the vote count of that vessel pixel; all vessel pixels in the image are traversed in this way. Then the vessel pixels with the top η% of votes are selected, and the coordinates of these selected vessel pixels are averaged to obtain the centre position of the optic disc. Finally, the left-right relation between the optic disc centre coordinate and the image centre coordinate is compared, and the fundus image is correspondingly classified as a left-eyeball image or a right-eyeball image; N and η are preset values.
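As a rough illustration of this voting step, the following numpy-only sketch (all names are hypothetical, and it assumes a pre-computed binary vessel mask rather than the full binarization pipeline) tallies votes in an N×N block around each vessel pixel, averages the coordinates of the top-η% voters, and classifies by which side of the image centre the disc centre falls on:

```python
import numpy as np

def locate_optic_disc(vessel_mask, n=15, eta=10.0):
    """Simplified fuzzy-convergence-style optic disc localization.

    vessel_mask: 2-D binary array, 1 where a vessel pixel was segmented.
    Each vessel pixel "votes" with the number of target pixels inside the
    n x n block centred on it; the coordinates of the top eta% voters are
    averaged to estimate the disc centre (n and eta play the role of the
    preset N and eta of the method).
    """
    h, w = vessel_mask.shape
    r = n // 2
    ys, xs = np.nonzero(vessel_mask)
    votes = np.empty(len(ys), dtype=int)
    for i, (y, x) in enumerate(zip(ys, xs)):
        block = vessel_mask[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
        votes[i] = int(block.sum())           # ballot of this vessel pixel
    k = max(1, int(np.ceil(len(votes) * eta / 100.0)))
    top = np.argsort(votes)[-k:]              # top eta% vote-getters
    cy, cx = ys[top].mean(), xs[top].mean()   # averaged disc centre estimate
    side = "left" if cx < w / 2 else "right"  # compare with image centre
    return (cy, cx), side
```

On a synthetic mask whose dense blob lies in the right half, the blob interior collects the most votes and the image is classified to the right-hand class.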
In step (3), the fundus image with the earliest shooting time in each class of fundus images is chosen as the reference image.
In step (3), the other fundus images in the class are matched against the reference image by SIFT feature points as follows: for any fundus image in the class other than the reference image, the Euclidean distance between the SIFT feature vector of each SIFT feature point in that image and that of every SIFT feature point in the reference image is computed; the minimum min and second minimum secmin of these Euclidean distances are taken, and if min < k*secmin it is determined that the SIFT feature point in the fundus image matches the SIFT feature point in the reference image corresponding to the minimum distance min, where k is a preset value. All SIFT feature points in the fundus image are traversed in this way, completing the SIFT feature-point matching between the fundus image and the reference image.
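The min < k*secmin ratio test can be sketched as follows (a brute-force numpy version with illustrative names; a production implementation would use OpenCV's matcher or a k-d tree rather than an explicit loop):

```python
import numpy as np

def ratio_test_match(desc_img, desc_ref, k=0.7):
    """Match SIFT descriptors by the min < k * secmin ratio test.

    desc_img, desc_ref: arrays of shape (n, d) of SIFT feature vectors
    (128-dimensional in the original method).  Returns a list of (i, j)
    index pairs of accepted matches from desc_img into desc_ref.
    """
    matches = []
    for i, d in enumerate(desc_img):
        dists = np.linalg.norm(desc_ref - d, axis=1)  # Euclidean distances
        order = np.argsort(dists)
        mn, secmn = dists[order[0]], dists[order[1]]
        if mn < k * secmn:                            # distinctive enough
            matches.append((i, int(order[0])))
    return matches
```

A match is accepted only when the nearest reference descriptor is markedly closer than the second nearest, which is exactly what rejects ambiguous points in the vessel-rich fundus texture.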
For any fundus image in a class other than the reference image in step (4), the RANSAC (Random Sample Consensus) algorithm is used to compute the affine transformation matrix between the fundus image and the reference image, as follows:
4.1 Four pairs are randomly selected from all SIFT feature-point matches between the fundus image and the reference image, and the affine transformation matrix M of the fundus image and the reference image is determined from the coordinates of these four pairs of SIFT feature points via:
[x′, y′, 1]T = M*[x, y, 1]T
M = | m0  m1  Δx |
    | m3  m4  Δy |
    | 0   0   1  |
where m0, m1, m3 and m4 are scaling-rotation factors, Δx and Δy are the offsets of the fundus image relative to the reference image in the X and Y directions respectively, and [x′, y′, 1]T and [x, y, 1]T are the homogeneous coordinates of any pair of matched SIFT feature points in their respective images;
4.2 All matched SIFT feature points in the fundus image are affinely transformed according to M, giving the mapping positions of these feature points in the reference image;
4.3 The Euclidean distance between each mapped point and its corresponding matching SIFT feature point in the reference image is computed; after normalization, all the obtained distances form an estimation-error vector. The feature-point matches whose distance is below a certain threshold are taken as inliers; the number of inliers is counted and the corresponding affine transformation matrix M is saved;
4.4 Steps 4.1–4.3 are executed repeatedly several times, and the affine transformation matrix M corresponding to the largest number of inliers is taken as the final affine transformation matrix between the fundus image and the reference image.
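Steps 4.1–4.4 can be sketched as below, assuming matched point coordinates are already available (function names are illustrative; for simplicity the four sampled pairs are fit by least squares and the error is the unnormalized reprojection distance, whereas the method normalizes by feature scale):

```python
import numpy as np

def estimate_affine(src, dst):
    """Solve [x', y'] = A @ [x, y] + t from point pairs by least squares.

    Returns the 3x3 homogeneous matrix M = [[m0, m1, dx],
                                            [m3, m4, dy],
                                            [ 0,  0,  1]].
    """
    n = len(src)
    A = np.zeros((2 * n, 6))
    b = np.empty(2 * n)
    for i, ((x, y), (xp, yp)) in enumerate(zip(src, dst)):
        A[2 * i]     = [x, y, 1, 0, 0, 0]
        A[2 * i + 1] = [0, 0, 0, x, y, 1]
        b[2 * i], b[2 * i + 1] = xp, yp
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([[p[0], p[1], p[2]],
                     [p[3], p[4], p[5]],
                     [0.0,  0.0,  1.0]])

def ransac_affine(src, dst, iters=200, thresh=3.0, seed=0):
    """Repeatedly fit M from 4 random pairs, count inliers,
    and keep the M with the most inliers (steps 4.1-4.4)."""
    rng = np.random.default_rng(seed)
    src_h = np.c_[src, np.ones(len(src))]          # homogeneous coordinates
    best_M, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.choice(len(src), size=4, replace=False)
        M = estimate_affine(src[idx], dst[idx])
        proj = (M @ src_h.T).T[:, :2]              # map into reference image
        err = np.linalg.norm(proj - dst, axis=1)   # estimation-error vector
        inliers = int((err < thresh).sum())
        if inliers > best_inliers:                 # save the best model so far
            best_M, best_inliers = M, inliers
    return best_M, best_inliers
```

Because any clean 4-pair sample reproduces the true transform exactly on noise-free data, a single corrupted match is outvoted and excluded from the inlier set.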
The present invention classifies the batch of input fundus images by viewing angle, then computes the transformation relations between images and transforms the images into a common background; by rapidly switching between pictures it can be found which parts of the fundus have changed. The invention mainly uses the fuzzy-convergence optic disc localization algorithm to classify the batch-input fundus images by viewing angle, according to the optic disc position, into two classes: left and right. Within each class, the first uploaded picture is selected as the reference, and the other pictures are registered to it: the SIFT feature points of all pictures are extracted and the pairwise matching relations between feature points are computed. Finally, the RANSAC algorithm is used to compute the pairwise transformation model parameters; the images are transformed to the same background according to the transformation model, an image switching interval is set, and the changes between images can be found quickly and accurately by switching between them. As a result, the present invention has the following advantageous technical effects:
(1) Fundus images are classified automatically according to optic disc position, so users can upload fundus images of different eyeballs in batches.
(2) The registration process uses the SIFT feature points of the fundus images, which are more robust than traditional features.
(3) The transformation-model parameter estimation uses the robust RANSAC (Random Sample Consensus) algorithm, which can eliminate the error introduced by wrong match points.
(4) The present invention does not stitch the registered images but transforms them to an identical background; by dynamically switching between pictures, the differences between images can be found quickly.
Brief description of the drawings
Fig. 1 is a flow diagram of the ocular fundus image registration method of the present invention.
Fig. 2 is a flow diagram of fundus-image angle classification.
Fig. 3 is a flow diagram of feature detection and matching.
Fig. 4 is a flow diagram of transformation-model parameter estimation.
Detailed description of the embodiments
In order to describe the present invention more specifically, the technical solution of the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
As shown in Fig. 1, the ocular fundus image registration method based on SIFT features of the present invention mainly comprises three parts: fundus-image angle classification, feature-point detection and matching, and transformation-model parameter estimation, executed in strict order. After the user inputs fundus images in batches, the system first performs optic disc localization on each fundus image to obtain the position coordinates of the optic disc (i.e. the centre coordinates of the disc); this coordinate is then compared with the centre coordinates of the fundus image to judge the angle of the image and classify it. In each class the first picture, which is usually also the earliest one in time, is selected as the reference picture, and all other pictures are registered to it. The second part detects the feature points of all pictures, computes the similarity between the feature points of each picture and those of the reference picture, and matches the feature points. In view of the characteristics of fundus images, this part uses SIFT feature points, detected with the SIFT (Scale-Invariant Feature Transform) algorithm: a scale-space-based feature-point detection and image registration algorithm that is invariant to image scaling, rotation and affine transformation. After the SIFT feature points are detected, the matching relations between points are found by computing a similarity between points, such as the Euclidean distance. Once the matching features of two images are found, the two pictures will in general differ by translation, rotation and scale, so a transformation matrix must be computed. Because the obtained match points may contain outliers, i.e. wrong match points, the outliers must first be excluded; the robust transformation estimation algorithm RANSAC (Random Sample Consensus) is therefore used, which can further remove wrong matches by exploiting the intrinsic constraint relations of the feature-point set, finally yielding a transformation matrix M. According to this transformation matrix the two pictures can be transformed into an identical background, convenient for subsequent dynamic display and use.
Fig. 2 shows the process of fundus-image angle classification. Some patients have both eyes photographed during a fundus examination; accordingly, the present invention divides fundus images into two classes by angle. The main basis, and the most obvious mark, of the classification is the position of the optic disc, so optic disc localization is performed first using the fuzzy convergence algorithm. Fuzzy convergence is a voting-based algorithm with strong robustness; even for retinas where lesions have occurred, the method still obtains good results. The process is divided into two main steps: first, voting is performed over a sliding N×N window around the pixels of each vessel centreline; then, according to the voting results, a threshold is adaptively selected to pick out the 10% of pixels with the highest votes, yielding an optic disc candidate set, and the centre coordinates of the optic disc are obtained by averaging all coordinates in the candidate set. This coordinate is compared with the centre coordinate of the fundus image to perform the classification.
Fig. 3 shows the process of feature-point detection and matching. Fundus images have no obvious corner or boundary information but are rich in SIFT key points; compared with traditional feature description methods, this feature has better robustness. The SIFT feature detection algorithm is a scale-space-based local image feature description operator that is invariant to image scaling, rotation and even affine transformation. The algorithm first performs feature detection in scale space, determining the position of each key point and the scale at which it lies, and then uses the principal direction of the gradient in the key point's neighbourhood as the direction feature of the point, so as to make the operator independent of scale and direction. It is invariant to rotation, scaling and brightness changes, keeps a certain degree of stability under viewpoint changes, affine transformations and noise, and generates a 128-dimensional feature descriptor for each key point as the basis for matching. First the scale-space kernel function is established; the two-dimensional Gaussian function is defined as follows:
G(x, y, σ) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²))
where σ is the standard deviation of the Gaussian normal distribution. The scale-space representation of a two-dimensional image I(x, y) at different scales can be obtained by convolving the image with the Gaussian kernel:
L(x, y, σ) = G(x, y, σ) * I(x, y)
where σ in L(x, y, σ) denotes the scale-space factor: the smaller its value, the less the image is smoothed and the smaller the corresponding scale. Large scales correspond to the overall appearance of the image, while small scales correspond to its fine details; L represents the scale space of the image. Feature-point detection is carried out on the middle layers of the difference-of-Gaussian pyramid: each examined pixel is compared with its 8 neighbours at the same scale and the 9×2 corresponding pixels at the adjacent scales, and if the examined pixel is a local extremum of the 2-D image space and of scale space, it is selected as a candidate feature point. The direction of each feature point is determined by the histogram of gradient directions over the neighbourhood pixels of the scale-space image L corresponding to the feature point's scale; then a descriptor is established for each feature point, describing it with a 128-dimensional feature vector. Once the feature points and their descriptors are determined, feature-point matching can be performed according to the similarity between the neighbourhood information of corresponding feature points in the two images, i.e. whether two feature points match is judged by the Euclidean distance between their feature vectors. Let the two pictures L1 and L2 have N1 and N2 key points respectively. The feature vector of a key point in L1 is compared in turn with the feature vectors of all key points in L2 to compute N2 Euclidean distances; the minimum min and second minimum secmin among them are selected, and if min < k*secmin (e.g. k = 0.7), the key point in L1 is considered to match the point in L2 that produced the distance min.
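A minimal numpy sketch of the two formulas above, the discrete Gaussian kernel G(x, y, σ) and the scale-space layer L = G * I, using direct convolution and illustrative names (real SIFT implementations use separable filtering and an image pyramid):

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Discrete 2-D Gaussian G(x, y, sigma) = exp(-(x^2+y^2)/(2 sigma^2)) / (2 pi sigma^2)."""
    if radius is None:
        radius = int(3 * sigma)           # truncate at ~3 sigma
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return g / g.sum()                    # normalize the discrete kernel

def scale_space(image, sigma):
    """L(x, y, sigma) = G(x, y, sigma) * I(x, y): direct 2-D convolution."""
    k = gaussian_kernel(sigma)
    r = k.shape[0] // 2
    padded = np.pad(image, r, mode="edge")
    out = np.empty(image.shape, dtype=float)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = (padded[y:y + 2 * r + 1, x:x + 2 * r + 1] * k).sum()
    return out
```

Increasing σ smooths more, so the variance of L(x, y, σ) drops as the scale grows, matching the remark above that a smaller σ means the image is smoothed less.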
Fig. 4 shows the process of transformation-model parameter estimation. After the matching features of two images have been found, the two pictures will in general differ by translation, rotation and scale, so an affine transformation matrix must be computed:
M = | m0  m1  Δx |
    | m3  m4  Δy |
    | 0   0   1  |
where M is the affine transformation matrix, m0, m1, m3, m4 are scaling-rotation factors, and Δx and Δy are the offsets of the two pictures in the x and y directions respectively. The transformation relation between the two pictures can then be expressed as:
[x′, y′, 1]T = M*[x, y, 1]T
where [x, y, 1]T and [x′, y′, 1]T are the homogeneous coordinates of a pair of match points in the two pictures.
The present invention uses the robust transformation estimation algorithm RANSAC, which can further remove wrong matches by exploiting the intrinsic constraint relations of the feature-point set.
The detailed process by which RANSAC estimates the affine transformation matrix M is as follows:
(1) Data preparation: find the matching point pairs between the two images L1 and L2 and their coordinate information.
(2) Model estimation: assuming there are n > 4 matching pairs, randomly select 4 pairs and solve the above formula for the affine model parameters m0, m1, m3, m4, Δx, Δy.
(3) Model evaluation: apply the above affine transformation to all matched points (x, y) in image L1, obtaining a series of transformed coordinates (x′, y′). Compute the Euclidean distance between each obtained coordinate and its corresponding match point (u, v) in L2, i.e. the estimation error E of the affine transformation, and normalize it by the scale of the corresponding feature point in L2, so that each feature match yields a normalized estimation error, finally giving an estimation-error vector.
(4) Compare the estimation-error vector with an error threshold T; the feature matches whose error in the vector is below T are inliers.
(5) If the number of inliers is greater than the current maximum, the current best affine transformation has been found; save this transformation matrix.
(6) If the iterations are not finished, repeat steps (2)–(5); the finally obtained affine transformation is taken as the best affine transformation model between L1 and L2.
After the transformation model is obtained, all fundus images in each class are transformed to an identical background. The pictures are not stitched together; instead, images under the same set of transformations are generated, and a certain time interval is set for dynamically switching between them, so that the changes between fundus images can be found quickly and clearly.
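The "transform to an identical background" step can be sketched as an inverse-mapped nearest-neighbour warp (numpy-only, with illustrative names; an actual implementation would typically call cv2.warpAffine and drive the display loop with the chosen switching interval):

```python
import numpy as np

def warp_to_reference(image, M, ref_shape):
    """Resample `image` into the reference frame via inverse mapping.

    For each pixel (x', y') of the reference-sized output canvas, the
    inverse affine M^-1 gives the source location (x, y); nearest-neighbour
    sampling keeps this sketch dependency-free.  Pixels that fall outside
    the source image stay 0, giving every warped picture the same canvas
    to be switched between.
    """
    h, w = ref_shape
    Minv = np.linalg.inv(M)
    ys, xs = np.mgrid[0:h, 0:w]
    ones = np.ones_like(xs)
    src = Minv @ np.stack([xs, ys, ones]).reshape(3, -1)   # [x, y, 1] columns
    sx = np.rint(src[0]).astype(int).reshape(h, w)
    sy = np.rint(src[1]).astype(int).reshape(h, w)
    out = np.zeros(ref_shape, dtype=image.dtype)
    valid = (0 <= sx) & (sx < image.shape[1]) & (0 <= sy) & (sy < image.shape[0])
    out[valid] = image[sy[valid], sx[valid]]
    return out
```

With M the matrix estimated above, applying this warp to every non-reference image in a class puts the whole class on one canvas, ready for interval-based switching.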
The above description of the embodiments is provided so that those skilled in the art can understand and apply the invention. Persons skilled in the art can obviously make various modifications to the above embodiments and apply the general principles described herein to other embodiments without creative labour. Therefore, the present invention is not limited to the above embodiments; improvements and modifications made by those skilled in the art according to the disclosure of the present invention shall all fall within the protection scope of the present invention.

Claims (4)

1. An ocular fundus image registration method based on SIFT features, comprising the following steps:
(1) dividing all fundus images to be registered into two classes according to whether they correspond to the left or right eyeball, specifically:
for any fundus image, performing optic disc localization using the fuzzy convergence algorithm: first binarizing the fundus image to distinguish target from background; for each blood-vessel pixel in the image, counting the number of pixels belonging to the target within the N×N image block centred on that vessel pixel and taking it as the vote count of that vessel pixel, all vessel pixels in the image being traversed in this way; then selecting the vessel pixels with the top η% of votes and averaging the coordinates of these selected vessel pixels to obtain the centre position of the optic disc; finally, comparing the left-right relation between the optic disc centre and the image centre coordinate and correspondingly classifying the fundus image as a left-eyeball image or a right-eyeball image, N and η being preset values;
(2) performing SIFT feature detection on all fundus images to be registered so as to extract the SIFT feature points in each fundus image, each SIFT feature point being described by a multidimensional feature description, these feature descriptions constituting the corresponding SIFT feature vectors;
(3) choosing one image from each class of fundus images as the reference image, and matching the SIFT feature points of the other fundus images in the class against the reference image based on the SIFT feature vectors;
(4) for any fundus image in a class other than the reference image, computing the affine transformation matrix between that fundus image and the reference image, and then affinely transforming each pixel of the fundus image into the reference image according to the affine transformation matrix so as to realize registration; all fundus images in the class other than the reference image being traversed in this way.
2. The ocular fundus image registration method according to claim 1, characterized in that: in step (3), the fundus image with the earliest shooting time in each class of fundus images is chosen as the reference image.
3. The ocular fundus image registration method according to claim 1, characterized in that: in step (3), the other fundus images in the class are matched against the reference image by SIFT feature points as follows: for any fundus image in the class other than the reference image, the Euclidean distance between the SIFT feature vector of each SIFT feature point in that image and that of every SIFT feature point in the reference image is computed; the minimum min and second minimum secmin of the Euclidean distances are taken, and if min < k*secmin it is determined that the SIFT feature point in the fundus image matches the SIFT feature point in the reference image corresponding to the minimum distance min, k being a preset value; all SIFT feature points in the fundus image are traversed in this way, completing the SIFT feature-point matching between the fundus image and the reference image.
4. The ocular fundus image registration method according to claim 1, characterized in that: for any fundus image in a class other than the reference image in step (4), the affine transformation matrix between the fundus image and the reference image is computed using the RANSAC algorithm as follows:
4.1 four pairs are randomly selected from all SIFT feature-point matches between the fundus image and the reference image, and the affine transformation matrix M of the fundus image and the reference image is determined from the coordinates of these four pairs of SIFT feature points via:
[x′, y′, 1]T = M*[x, y, 1]T
M = | m0  m1  Δx |
    | m3  m4  Δy |
    | 0   0   1  |
wherein m0, m1, m3 and m4 are scaling-rotation factors, Δx and Δy are respectively the offsets of the fundus image relative to the reference image in the X direction and the Y direction, and [x′, y′, 1]T and [x, y, 1]T are the homogeneous coordinates of any pair of matched SIFT feature points in their respective images;
4.2 all matched SIFT feature points in the fundus image are affinely transformed according to the affine transformation matrix M, giving the mapping positions of these feature points in the reference image;
4.3 the Euclidean distance between each mapped point and its corresponding matching SIFT feature point in the reference image is computed; after all the obtained distances are normalized they form an estimation-error vector; the feature-point matches whose distance is below a certain threshold are taken as inliers, the number of inliers is counted, and the corresponding affine transformation matrix M is saved;
4.4 steps 4.1–4.3 are executed repeatedly several times, and the affine transformation matrix M corresponding to the largest number of inliers is taken as the final affine transformation matrix between the fundus image and the reference image.
CN201610813202.7A 2016-09-09 2016-09-09 A kind of ocular fundus image registration method based on SIFT feature Active CN106651827B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610813202.7A CN106651827B (en) 2016-09-09 2016-09-09 A kind of ocular fundus image registration method based on SIFT feature

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610813202.7A CN106651827B (en) 2016-09-09 2016-09-09 A kind of ocular fundus image registration method based on SIFT feature

Publications (2)

Publication Number Publication Date
CN106651827A CN106651827A (en) 2017-05-10
CN106651827B true CN106651827B (en) 2019-05-07

Family

ID=58851966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610813202.7A Active CN106651827B (en) 2016-09-09 2016-09-09 A kind of ocular fundus image registration method based on SIFT feature

Country Status (1)

Country Link
CN (1) CN106651827B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107256410B (en) * 2017-05-26 2021-05-14 上海鹰瞳医疗科技有限公司 Fundus image classification method and device
CN107330449A (en) * 2017-06-13 2017-11-07 瑞达昇科技(大连)有限公司 A kind of BDR sign detection method and device
CN107292877B (en) * 2017-07-05 2020-07-03 北京至真互联网技术有限公司 Left and right eye identification method based on fundus image characteristics
WO2019075601A1 (en) * 2017-10-16 2019-04-25 厦门中控智慧信息技术有限公司 Palm vein recognition method and device
CN107958445B (en) * 2017-12-29 2022-02-25 四川和生视界医药技术开发有限公司 Splicing method and splicing device of retina images
CN110543802A (en) * 2018-05-29 2019-12-06 北京大恒普信医疗技术有限公司 Method and device for identifying left eye and right eye in fundus image
CN108876770B (en) * 2018-06-01 2021-06-25 山东师范大学 Fundus multispectral image joint registration method and system
CN108961334B (en) * 2018-06-26 2020-05-08 电子科技大学 Retinal vessel wall thickness measuring method based on image registration
CN109166117B (en) * 2018-08-31 2022-04-12 福州依影健康科技有限公司 Automatic eye fundus image analysis and comparison method and storage device
CN111292286B (en) * 2018-11-21 2023-07-11 福州依影健康科技有限公司 Analysis method and system for change of characteristic data of sugar mesh bottom and storage device
CN111222361B (en) * 2018-11-23 2023-12-19 福州依影健康科技有限公司 Method and system for analyzing characteristic data of change of blood vessel of retina in hypertension
CN109658393B (en) * 2018-12-06 2022-11-22 代黎明 Fundus image splicing method and system
CN110033422B (en) * 2019-04-10 2021-03-23 北京科技大学 Fundus OCT image fusion method and device
CN110544274B (en) * 2019-07-18 2022-03-29 山东师范大学 Multispectral-based fundus image registration method and system
CN110664435A (en) * 2019-09-23 2020-01-10 东软医疗系统股份有限公司 Method and device for acquiring cardiac data and ultrasonic imaging equipment
CN110660089A (en) * 2019-09-25 2020-01-07 云南电网有限责任公司电力科学研究院 Satellite image registration method and device
CN113379808B (en) * 2021-06-21 2022-08-12 昆明理工大学 Method for registration of multiband solar images
CN114926659B (en) * 2022-05-16 2023-08-08 上海贝特威自动化科技有限公司 Deformation target positioning algorithm based on SIFT and CM

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101593351A (en) * 2008-05-28 2009-12-02 中国科学院自动化研究所 Ocular fundus image registration method based on range conversion and rigid transformation parameters estimation
CN101732031A (en) * 2008-11-25 2010-06-16 中国大恒(集团)有限公司北京图像视觉技术分公司 Method for processing fundus images
CN102908120A (en) * 2012-10-09 2013-02-06 北京大恒图像视觉有限公司 Eye fundus image registration method, eye fundus image optic disk nerve and vessel measuring method and eye fundus image matching method
CN104933715A (en) * 2015-06-16 2015-09-23 山东大学(威海) Registration method applied to retina fundus image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9807340B2 (en) * 2014-11-25 2017-10-31 Electronics And Telecommunications Research Institute Method and apparatus for providing eye-contact function to multiple points of attendance using stereo image in video conference system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of Fundus Image Mosaicking Based on the SIFT Algorithm; Yu Qing; China Master's Theses Full-text Database, Information Science and Technology; 2011-07-15; pages I, 9-10, 34-36, 39, 43-44, 53

Also Published As

Publication number Publication date
CN106651827A (en) 2017-05-10

Similar Documents

Publication Publication Date Title
CN106651827B (en) A kind of ocular fundus image registration method based on SIFT feature
Niemeijer et al. Fast detection of the optic disc and fovea in color fundus photographs
Lu et al. Automatic optic disc detection from retinal images by a line operator
Delibasis et al. Automatic model-based tracing algorithm for vessel segmentation and diameter estimation
Li et al. Automated feature extraction in color retinal images by a model based approach
CN105719278B (en) A kind of medical image cutting method based on statistics deformation model
CN104809480B (en) A kind of eye fundus image Segmentation Method of Retinal Blood Vessels based on post-class processing and AdaBoost
Kolar et al. Hybrid retinal image registration using phase correlation
Lupascu et al. Automated detection of optic disc location in retinal images
Hu et al. Automated segmentation of 3-D spectral OCT retinal blood vessels by neural canal opening false positive suppression
Kafieh et al. An accurate multimodal 3-D vessel segmentation method based on brightness variations on OCT layers and curvelet domain fundus image analysis
Zhang et al. Optic disc localization by projection with vessel distribution and appearance characteristics
Bogunović et al. Geodesic graph cut based retinal fluid segmentation in optical coherence tomography
Hacihaliloglu et al. Statistical shape model to 3D ultrasound registration for spine interventions using enhanced local phase features
Ashok et al. Detection of retinal area from scanning laser ophthalmoscope images (SLO) using deep neural network
Hsu A hybrid approach for brain image registration with local constraints
Niu et al. Registration of SD-OCT en-face images with color fundus photographs based on local patch matching
Sumathy et al. Distance-based method used to localize the eyeball effectively for cerebral palsy rehabilitation
Wei et al. The retinal image registration based on scale invariant feature
Bathina et al. Robust matching of multi-modal retinal images using radon transform based local descriptor
Elseid et al. Glaucoma detection using retinal nerve fiber layer texture features
Liang et al. Location of optic disk in the fundus image based on visual attention
Charoenpong et al. Accurate pupil extraction algorithm by using integrated method
Ramasubramanian et al. A novel approach for automated detection of exudates using retinal image processing
Charoenpong et al. Pupil extraction system for Nystagmus diagnosis by using K-mean clustering and Mahalanobis distance technique

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant