CN113436070B - Fundus image splicing method based on deep neural network

Fundus image splicing method based on deep neural network

Info

Publication number
CN113436070B
CN113436070B
Authority
CN
China
Prior art keywords
image
fundus
splicing
algorithm
images
Prior art date
Legal status
Active
Application number
CN202110682282.8A
Other languages
Chinese (zh)
Other versions
CN113436070A (en)
Inventor
邹耀徵
龚炜
文一帆
文怀敏
付源溟
王沐珊
王秋昊
李鑫宇
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Priority date
Filing date
Publication date
Application filed by Sichuan University
Priority to CN202110682282.8A
Publication of CN113436070A
Application granted
Publication of CN113436070B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic

Abstract

The invention discloses a fundus image splicing method based on a deep neural network, comprising the following steps: S1, reading multiple acquired fundus images and processing them into fundus blood vessel maps; S2, removing the black frame from each fundus image and fundus blood vessel map; S3, determining a reference image and preliminarily judging the type of eye disease with a pre-trained deep neural network, and giving each fundus image a label; S4, extracting feature points from the fundus images and fundus blood vessel maps with the SURF, HOG and LBP algorithms and assigning the feature points different weight values; S5, matching all feature points; S6, screening all feature point pairs with the RANSAC algorithm, preferentially retaining pairs with large weight values; S7, computing perspective transformation matrices from the feature point pairs and splicing the images; S8, inputting the spliced image into the deep neural network for verification. The invention improves both the accuracy of the spliced image and the splicing efficiency.

Description

Fundus image splicing method based on deep neural network
Technical Field
The invention relates to a fundus image splicing method based on a deep neural network, and belongs to the technical field of medical image processing.
Background
Currently, fundus images are generally captured with a fundus camera. Because of the camera's limited field of view, each captured image covers only a local region of the fundus, so in clinical diagnosis and treatment an ophthalmologist can only observe the images and align them manually by eye, which is inefficient and offers no guarantee of accuracy. There are two ways to address this problem. One is to enlarge the imaging field of view of the device, but this usually comes at considerably higher cost and is impractical for most hospitals. The other is to splice multiple fundus images together so that the patient's entire fundus is presented in a single image, meeting the needs of clinical diagnosis and treatment.
The existing fundus image splicing techniques mainly have the following defects: first, too few feature points prevent registration, or too many mismatched points corrupt the estimated matching parameters; second, the splicing scheme cannot be adjusted to the type of eye disease, which degrades the splicing result; third, splicing is slow and the result cannot be verified.
Disclosure of Invention
The object of the invention is to provide, in view of the defects of the prior art, a fundus image splicing method based on a deep neural network. The method extracts feature points from both the fundus image and the fundus blood vessel map with three algorithms, which yields more feature points, and assigning the feature points different weight values allows effective screening of feature point pairs, making the spliced image more accurate and the splicing more efficient.
The purpose of the invention is realized by the following technical scheme:
a fundus image splicing method based on a deep neural network comprises the following steps:
s1: reading a plurality of acquired fundus images, and processing all the fundus images into a fundus blood vessel map by adopting a U-NET algorithm;
s2: performing black frame removing processing on the fundus image and the fundus blood vessel image;
s3: determining a reference image and preliminarily judging the type of eye diseases through a pre-trained deep neural network, and endowing each fundus image with a label, wherein the label records whether the fundus image is the reference image, the position of the fundus image relative to the reference image and whether the fundus image expressed by int-type data has focuses and focus types;
s4: extracting characteristic points of all fundus images and fundus blood vessel images by using a SURF (Speed-Up route Features) algorithm, a HOG (histogram of ordered gradient) algorithm and an LBP (local Binary Pattern) algorithm, endowing different weight values to the characteristic points which simultaneously meet the three algorithms, meet any two algorithms in the three algorithms or meet any one algorithm in the three algorithms, and additionally increasing the weight values to the characteristic points on the focus image according to different eye diseases;
s5: matching all the characteristic points, converting the characteristic point pairs matched with the fundus blood vessel map into corresponding characteristic point pairs of the fundus image after matching is finished, and recalculating the weight values of the coincident characteristic point pairs;
s6: screening all feature point pairs by using a RANSAC (Random Sample Consensus) algorithm according to the principle of preferentially reserving feature point pairs with large weight values, and removing mismatching points;
s7: cutting the image into a plurality of small blocks, performing perspective Transformation matrix calculation on the characteristic point pairs of each small block by adopting a DLT (Direct Linear Transformation-DLT) algorithm, then performing local accurate splicing on the image according to the position relation on the label relative to a reference image, and reserving a reference image part if an overlapped area occurs during splicing;
s8: eliminating a splicing gap of the spliced images, inputting the images into a deep neural network to detect the int data representing the eye disease type, and finishing splicing if the numerical value of the detected int data is the same as the numerical value of the int data of the image with the focus in the step S2; and if the detected numerical value of the int data is not the same as the numerical value of the int data of the image with the focus in the step S2, performing image splicing again according to the steps by taking the fundus image with the focus as a reference image until the numerical value of the int data of the spliced image is the same as the numerical value of the int data of the image with the focus in the step S2, and finishing the image splicing.
Further, the black frame removal in step S2 detects each row of the image matrix and removes all pixels whose value is zero.
Further, in step S6 the RANSAC algorithm takes the weight values into account when counting and inputting the feature points.
Further, in step S8 a weighted average is used to eliminate the splicing gap.
The invention has the following effects:
(1) a trained deep neural network preliminarily diagnoses the eye disease and the splicing scheme is adjusted to the disease type, so that more fundus images can be spliced, and spliced faster and more accurately;
(2) feature points are extracted from both the original fundus images and the fundus blood vessel maps with the SURF, HOG and LBP algorithms, yielding more feature points; assigning the feature points different weight values allows effective screening of feature point pairs, so the spliced image is more accurate and the splicing more efficient;
(3) verifying the splicing result guarantees the accuracy of the spliced image.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
Fig. 2 is a fundus vascular map obtained by processing a fundus image by a U-NET algorithm.
Fig. 3 is a fundus image before black frame removal.
Fig. 4 is a fundus image after black frame removal.
Fig. 5 is a spliced fundus image before the splicing gap is eliminated.
Fig. 6 is a spliced fundus image after the splicing gap is eliminated.
Detailed Description
As shown in Figs. 1 to 6, the fundus image splicing method based on a deep neural network provided in this embodiment comprises the following steps:
s1: reading a plurality of acquired fundus images, processing all the fundus images into a fundus blood vessel map shown in figure 2 by adopting a U-NET algorithm, and respectively storing the fundus images and the fundus blood vessel map in different storage spaces;
s2: performing black frame removal processing on the fundus image and the fundus blood vessel image, specifically adopting a method that each line of an image matrix is detected, and removing all pixel points with zero values;
s3: determining a reference image and preliminarily judging the type of eye diseases through a depth neural network trained in advance, and endowing each fundus image with a label, wherein the label records whether the fundus image is the reference image, the position of the fundus image relative to the reference image and whether the fundus image expressed by int-type data has focus and eye disease type;
s4: extracting characteristic points of all fundus images and fundus blood vessel images by adopting an SURF algorithm, an HOG algorithm (used for dealing with the influence of lighting on eyeball images when fundus is photographed) and an LBP algorithm (used for reducing the influence caused by rotation), giving maximum weight values to the characteristic points simultaneously meeting the three algorithms, meeting the requirement that the weight values of the characteristic points of any two algorithms in the three algorithms are the next to the weight values of the characteristic points of any one algorithm in the three algorithms are the minimum, additionally adding weight values to the characteristic points on the image with focuses, and additionally adding different weight values to different eye diseases;
the SURF algorithm adopted in this embodiment respectively performs vector combination for setting weight distribution on wavelet features in the horizontal direction and the vertical direction, and in the horizontal absolute value direction and the vertical absolute value direction, to obtain a new round of feature point descriptors, which have two directions in total, 4 × 2 — 32 descriptors in total, and performs dimension reduction again on the basis of the original SURF algorithm, and these descriptions will be used for matching of subsequent feature points;
s5: matching all the characteristic points, converting the characteristic point pairs matched with the fundus blood vessel map into corresponding characteristic point pairs of the fundus image after matching is finished, and if the characteristic points of the blood vessel map are just overlapped with the characteristic points of the fundus image when the characteristic points of the fundus image are converted into the characteristic points of the fundus image, selecting the weight value with a larger weight value between the characteristic point pairs and the characteristic points;
s6: the RANSAC algorithm is adopted to screen all feature point pairs, the RANSAC algorithm is improved, the improved RANSAC algorithm takes values according to weight values when counting and inputting feature points, namely the weight values of all the feature points in the original RANSAC algorithm are the same, the improved RANSAC algorithm needs to be added with the weight values when inputting the feature points, and meanwhile, the improved RANSAC algorithm preferentially keeps the feature points with large weight values during iterative screening;
s7: cutting the image into a plurality of small blocks, calculating a perspective transformation matrix of a characteristic point pair of each small block by adopting a DLT algorithm, then carrying out local accurate splicing on the image according to the position relation on the label relative to a reference image through the perspective transformation matrix, and reserving a reference image part if an overlapping area occurs during splicing, thereby avoiding the generation of ghost images;
s8: eliminating splicing gaps of the spliced images by adopting weighted average, inputting the images into a deep neural network to detect the int data of the eye disease type, and finishing splicing if the numerical value of the detected int data is the same as the numerical value of the int data of the image with the focus in the step S2; and if the detected numerical value of the int data is not the same as the numerical value of the int data of the image with the focus in the step S2, selecting the eye bottom image with the focus as a reference image, and performing image splicing again according to the steps S3-S8 until the numerical value of the int data of the spliced image is the same as the numerical value of the int data of the image with the focus in the step S3, and finishing image splicing.
The above is only a preferred embodiment of the invention; the scope of the invention is not limited to it, and any modification or replacement based on the technical solution and inventive concept provided herein falls within the scope of the invention.

Claims (4)

1. A fundus image splicing method based on a deep neural network, characterized by comprising the following steps:
S1: reading a plurality of acquired fundus images, and processing all of them into fundus blood vessel maps with a U-NET algorithm;
S2: removing the black frame from each fundus image and fundus blood vessel map;
S3: determining a reference image and preliminarily judging the type of eye disease with a pre-trained deep neural network, and giving each fundus image a label; the label records whether the image is the reference image, the image's position relative to the reference image, and an int-typed field indicating whether the image contains a lesion and, if so, the eye disease type;
S4: extracting feature points from all fundus images and fundus blood vessel maps with the SURF (Speeded-Up Robust Features), HOG (Histogram of Oriented Gradients) and LBP (Local Binary Pattern) algorithms; feature points detected by all three algorithms, by any two of them, or by only one of them receive correspondingly different weight values, and feature points on lesion images receive an additional weight value according to the type of eye disease;
S5: matching all feature points; after matching, converting the matched feature point pairs of the fundus blood vessel maps into the corresponding feature point pairs of the fundus images, and recalculating the weight values of coinciding feature point pairs;
S6: screening all feature point pairs with the RANSAC (Random Sample Consensus) algorithm, preferentially retaining pairs with large weight values;
S7: cutting the images into small blocks, computing a perspective transformation matrix from each block's feature point pairs with the DLT (Direct Linear Transformation) algorithm, and then splicing the images locally and accurately according to the position relative to the reference image recorded on the label; where an overlapping area occurs during splicing, the reference image part is kept;
S8: eliminating the splicing gap of the spliced image and inputting the image into the deep neural network to detect the int-typed eye-disease value; if the detected value equals the int value of the lesion image from step S3, splicing is complete; if not, the fundus image containing the lesion is selected as the reference image and the above steps are repeated until the spliced image's int value equals that of the lesion image from step S3, completing the splicing.
2. The fundus image splicing method based on a deep neural network according to claim 1, characterized in that: the black frame removal in step S2 detects each row of the image matrix and removes all pixels whose value is zero.
3. The fundus image splicing method based on a deep neural network according to claim 1, characterized in that: in step S6, the RANSAC algorithm takes the weight values into account when counting and inputting the feature points.
4. The fundus image splicing method based on a deep neural network according to claim 1, characterized in that: in step S8, a weighted average is used to eliminate the splicing gap.
CN202110682282.8A 2021-06-20 2021-06-20 Fundus image splicing method based on deep neural network Active CN113436070B (en)

Priority Applications (1)

Application Number: CN202110682282.8A; Priority Date: 2021-06-20; Filing Date: 2021-06-20; Title: Fundus image splicing method based on deep neural network

Applications Claiming Priority (1)

Application Number: CN202110682282.8A; Priority Date: 2021-06-20; Filing Date: 2021-06-20; Title: Fundus image splicing method based on deep neural network

Publications (2)

Publication Number Publication Date
CN113436070A CN113436070A (en) 2021-09-24
CN113436070B (en) 2022-05-17

Family

ID=77756774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110682282.8A Active CN113436070B (en) 2021-06-20 2021-06-20 Fundus image splicing method based on deep neural network

Country Status (1)

Country Link
CN (1) CN113436070B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114862760B (en) * 2022-03-30 2023-04-28 中山大学中山眼科中心 Retinopathy of prematurity detection method and device
CN115619747B (en) * 2022-10-26 2023-09-19 中山大学中山眼科中心 Child fundus retina panoramic image map generation and follow-up data alignment method
CN116152073B (en) * 2023-04-04 2023-08-22 江苏富翰医疗产业发展有限公司 Improved multi-scale fundus image stitching method based on Loftr algorithm

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679636A (en) * 2013-12-23 2014-03-26 江苏物联网研究发展中心 Rapid image splicing method based on point and line features
CN104376548A (en) * 2014-11-07 2015-02-25 中国电子科技集团公司第二十八研究所 Fast image splicing method based on improved SURF algorithm
CN106447708A (en) * 2016-10-10 2017-02-22 吉林大学 OCT eye fundus image data registration method
CN107256398A (en) * 2017-06-13 2017-10-17 河北工业大学 Individual dairy cow identification method based on feature fusion
CN108022228A (en) * 2016-10-31 2018-05-11 天津工业大学 Color fundus image splicing method based on SIFT transform and Otsu matching
CN109241905A (en) * 2018-08-31 2019-01-18 北方工业大学 Image processing method and device
CN112164043A (en) * 2020-09-23 2021-01-01 苏州大学 Method and system for splicing multiple fundus images

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107633206B (en) * 2017-08-17 2018-09-11 平安科技(深圳)有限公司 Eyeball motion capture method, device and storage medium
CN110838116B (en) * 2019-11-14 2023-01-03 上海联影医疗科技股份有限公司 Medical image acquisition method, device, equipment and computer-readable storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Jyostna Devi Bodapati et al.; Deep convolution feature aggregation: an application to diabetic retinopathy severity level prediction; 2021-01-04; pp. 923-930 *
Zhang Huihui; Calibrating a landslide displacement field using high-resolution imagery and a feature matching algorithm (利用高分影像与特征匹配算法标定滑坡位移场); Bulletin of Surveying and Mapping (测绘通报); 2017-08-25; (8): 41-44 *
Huang Na; Image retrieval based on fusion of deep features and local features (基于深度特征与局部特征融合的图像检索); Journal of Beijing University of Technology (北京工业大学学报); 2020-12-10; 46(12): 1345-1354 *

Also Published As

Publication number Publication date
CN113436070A (en) 2021-09-24

Similar Documents

Publication Publication Date Title
CN113436070B (en) Fundus image splicing method based on deep neural network
Al-Bander et al. Multiscale sequential convolutional neural networks for simultaneous detection of fovea and optic disc
Niemeijer et al. Segmentation of the optic disc, macula and vascular arch in fundus photographs
EP2888718B1 (en) Methods and systems for automatic location of optic structures in an image of an eye, and for automatic retina cup-to-disc ratio computation
CN107292835B (en) Method and device for automatically vectorizing retinal blood vessels of fundus image
Jaafar et al. Automated detection of red lesions from digital colour fundus photographs
CN107564048A (en) Registration method based on bifurcation point features
Almazroa et al. An automatic image processing system for glaucoma screening
CN108510493A (en) Boundary alignment method, storage medium and the terminal of target object in medical image
CN106846293A (en) Image processing method and device
CN112164043A (en) Method and system for splicing multiple fundus images
CN113643354B (en) Measuring device of vascular caliber based on fundus image with enhanced resolution
CN114627067A (en) Wound area measurement and auxiliary diagnosis and treatment method based on image processing
CN114937024A (en) Image evaluation method and device and computer equipment
Kusumaningtyas et al. Auto cropping for application of heart abnormalities detection through Iris based on mobile devices
Shaik et al. Glaucoma identification based on segmentation and fusion techniques
Goldbaum et al. Image understanding for automated retinal diagnosis
CN116030042B (en) Diagnostic device, method, equipment and storage medium for doctor's diagnosis
Lu et al. Adaboost-based detection and segmentation of bioresorbable vascular scaffolds struts in IVOCT images
Kusuma et al. Retracted: Heart Abnormalities Detection Through Iris Based on Mobile
WO2023103609A1 (en) Eye tracking method and apparatus for anterior segment octa, device, and storage medium
CN116092667A (en) Disease detection method, system, device and storage medium based on multi-mode images
CN110930346A (en) Automatic detection method and storage device for fundus image microangioma
Niemeijer Automatic detection of diabetic retinopathy in digital fundus photographs
Ghorab et al. Computer-Based Detection of Glaucoma Using Fundus Image Processing

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant