CN114862760A - Method and device for detecting retinopathy of prematurity - Google Patents
- Publication number
- CN114862760A CN114862760A CN202210327065.1A CN202210327065A CN114862760A CN 114862760 A CN114862760 A CN 114862760A CN 202210327065 A CN202210327065 A CN 202210327065A CN 114862760 A CN114862760 A CN 114862760A
- Authority
- CN
- China
- Prior art keywords
- image
- wide
- fundus
- angle
- matching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/12—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention discloses a method and a device for detecting retinopathy of prematurity (ROP), wherein the method comprises the following steps: S1: acquiring wide-angle fundus images of the left and right eyes of an individual to be screened in different orientations; S2: feeding the wide-angle fundus images of each individual to be screened into a pre-trained lesion detection model to obtain a lesion detection result for each image; S3: stitching and fusing the wide-angle fundus images carrying lesion detection results from the different orientations to obtain a final stitched image; S4: deriving the retinal zones of the fundus from the final stitched image; S5: visualizing the lesion detection results and zone boundaries in the zoned image, and giving a staging result according to the lesion types. The invention realizes a complete diagnostic workflow of automatic lesion detection, multi-orientation image stitching and fusion, zoning, staging and identification of each subtype.
Description
Technical Field
The invention relates to the field of image processing, in particular to a method and a device for detecting retinopathy of prematurity.
Background
Retinopathy of prematurity (ROP) is a vasoproliferative retinal disease commonly seen in premature or very-low-birth-weight infants and is the leading cause of blindness in infants. According to statistics, the incidence of ROP among infants in China with a birth weight of 1500 g or less is about 26%. Because the therapeutic window of ROP is narrow, early screening and timely intervention are the key to preventing ROP blindness. The current consensus fundus-screening workflow is as follows: a set of neonatal fundus images is collected with specialized equipment, and a professional ophthalmologist then analyzes the images and issues a screening report.
The difficulty of ROP detection is that the workflow, from lesion detection and analysis, to zone localization over the whole fundus, to disease staging that combines all of this information, is complicated, and an ophthalmologist needs years of specialized training to give a correct diagnosis proficiently. Existing methods do not solve all the problems of the complete ROP detection and diagnosis workflow. Patent 107945870B predicts the ROP lesion stage by establishing a deep neural network model; patent 108392174B sets up a detection pipeline of lesion detection, key-landmark detection, zoning, stage prediction and plus-disease detection following the consensus of clinical guidelines; patents 111259982A and 112308830A similarly use attention convolutional neural networks to implement ROP staging and zoning, respectively; patent 111374632A proposes a method that realizes key-landmark and lesion detection through modules such as quality control and enhancement. Other work proposes an ROP lesion detection pipeline based on a deep convolutional network model that classifies the disease from detected lesions (Chinese Journal of Experimental Ophthalmology, 2019, 37(008): 647-; a depth-feature fusion network model was designed to predict ROP image classification (IEEE Trans Med Imaging, 2021 Mar 12); James et al. used deep convolutional networks to recognize pre-plus, plus-disease and normal types (JAMA Ophthalmol, 2018 Jul 1).
The main problems of existing methods are: (1) most published work focuses on individual sub-problems of the ROP detection workflow, such as plus-disease detection or disease staging; (2) a few works describe the full workflow of the clinical guidelines but do not propose feasible methods for each part, such as zoning and key-landmark detection; (3) compared with the traditional diagnostic process, existing methods lack interpretability: they typically express disease-related regions through a probability score or a heat map and cannot provide a clear diagnostic basis; (4) existing methods usually analyze single images to produce a prediction, whereas actual practice evaluates multiple images per subject jointly — because ROP lesions involve the whole retina and existing imaging devices usually cannot cover the whole area, the final diagnosis must combine images from multiple orientations, and most existing methods lose the orientation information carried by the image set.
Disclosure of Invention
The invention aims to provide a method for detecting retinopathy of prematurity that realizes a complete diagnostic workflow of automatic lesion detection, multi-orientation image stitching and fusion, zoning, staging and identification of each subtype.
It is a further object of the present invention to provide a retinopathy of prematurity detection apparatus.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a method for detecting retinopathy of prematurity, comprising the following steps:
s1: acquiring wide-angle fundus images of the left and right eyes of an individual to be screened in different orientations;
s2: feeding the wide-angle fundus images of each individual to be screened into a pre-trained lesion detection model to obtain a lesion detection result for each image;
s3: stitching and fusing the wide-angle fundus images carrying lesion detection results from the different orientations to obtain a final stitched image;
s4: deriving the retinal zones of the fundus from the final stitched image;
s5: visualizing the lesion detection results and zone boundaries in the zoned image, and giving a staging result according to the lesion types.
Preferably, in step S2, training the lesion detection model requires annotating a batch of image data. The annotation information comprises the key landmarks and the specific lesion types in each wide-angle fundus image, where the key landmarks include the optic disc and the macula, and the lesion types include, but are not limited to, vascular dilation, neovascularization, hemorrhage, retinal detachment (partial/complete), and the like; the annotated image data are then used to train the lesion detection model, yielding the pre-trained lesion detection model.
Preferably, in step S3, stitching and merging the wide-angle fundus images carrying lesion detection results from the different orientations comprises:
s31: performing feature matching on the wide-angle fundus images from different orientations to obtain a stably matched image pair a and b;
s32: iteratively fitting the set of matched point pairs of the stably matched image pair to obtain a homography matrix H from image b to image a and a matching score;
s33: pairing the fundus images from different orientations in turn to obtain and rank all matching scores, stitching and fusing the highest-scoring image pair, determining from the matching score whether stitching succeeded, and recording the ids of images that failed to stitch in a list L;
s34: repeating the matching and fusion of steps S31 to S33 until every image in the wide-angle fundus image sequence has been processed, yielding the multi-orientation stitched retinal fundus image Image_1; then checking the record list L, repeating steps S31 to S33 once for the images that failed to stitch to obtain a stitched image Image_2, and stitching Image_1 and Image_2 according to steps S31 to S33 to obtain the final stitched image.
Preferably, in step S31, the feature matching of the wide-angle fundus images from different orientations is performed as follows:
s311: traversing the local key points of the screened images to obtain the corresponding local feature descriptor sequences F = (f1, f2, …, fn);
s312: computing the feature descriptors Fa = (fa1, fa2, …, fan) and Fb = (fb1, fb2, …, fbn) of image a and image b respectively, traversing Fa and Fb and computing their pairwise distance similarity to obtain the element-pair matching relation S of the two sequences, and keeping only the closest-distance pair for each element after screening, giving S-s;
s313: dividing image a and image b into M×M grid cells and analyzing the matching pairs in S-s cell by cell, on the following principle: if the feature point bi matched in S-s to a feature point ai in cell ma of image a lies in cell mb of image b, and at least x feature points in the neighborhood of ai within cell ma also have their matches in cell mb, then ai and bi are considered stably matched and the match is kept, otherwise the match is discarded; after traversing all cells, the stable matching relation is obtained, giving the stably matched image pair a and b.
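The grid-based screening of steps S311 to S313 resembles grid-motion-statistics match filtering: a match survives only if enough neighboring matches agree on the same cell-to-cell correspondence. A minimal sketch under that reading — the function name, grid size M and support threshold x_min are illustrative, not taken from the patent:

```python
def grid_filter_matches(kps_a, kps_b, matches, img_shape, M=8, x_min=2):
    """Keep a match (ai, bi) only if at least x_min other matches whose
    a-side points share ai's grid cell also land in bi's grid cell."""
    h, w = img_shape

    def cell(pt):
        # map an (x, y) point to its (row, col) grid-cell index
        return (min(int(pt[1] * M / h), M - 1), min(int(pt[0] * M / w), M - 1))

    # pre-compute the (a-cell, b-cell) pair of every match
    cells = [(cell(kps_a[i]), cell(kps_b[j])) for i, j in matches]
    stable = []
    for k, (ca, cb) in enumerate(cells):
        # count supporting matches: same a-side cell AND same b-side cell
        support = sum(1 for k2, (ca2, cb2) in enumerate(cells)
                      if k2 != k and ca2 == ca and cb2 == cb)
        if support >= x_min:
            stable.append(matches[k])
    return stable
```

With a cluster of consistent matches plus one outlier, the outlier has no cell-level support and is discarded while the cluster survives.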
Preferably, in step S33, the highest-scoring image pair is stitched and fused as follows:
s331: applying the homography matrix H to image b as a perspective transformation to obtain image b';
s332: computing the circumscribed circle of the retinal area in image a and in image b' respectively, obtaining their centers and radii;
s333: fusing on the principle that pixels closer to an image edge receive lower weight and pixels closer to the image center receive higher weight, the fusion dividing the blending area into three types — intersecting non-boundary, intersecting boundary, and non-intersecting:
(1) non-intersecting areas are filled directly with the corresponding pixel values of image a or image b';
(2) in intersecting non-boundary areas, the distances La and Lb' from the current position to the circle centers of image a and image b' are computed, and the pixel-filling weights of image a and image b' are set according to the ratio of the two distances;
(3) in intersecting boundary areas, La and Lb' are computed in the same way, a threshold t is introduced to compress or stretch the resulting weights, and pixel filling is completed with the final weights.
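The center-weighted blending idea of step S333 can be sketched as follows for two already-aligned grayscale images; the linear fall-off of weight toward each circle's edge and all names are illustrative assumptions (the patent only fixes the principle "closer to the edge, lower weight"), and the boundary-threshold refinement of case (3) is omitted:

```python
import numpy as np

def blend_pair(img_a, img_b, center_a, center_b, radius_a, radius_b):
    """Distance-weighted blend of two aligned grayscale fundus images:
    each source's weight falls off linearly from its circle center."""
    h, w = img_a.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # distance of every pixel to each retina's circle center (x, y)
    da = np.hypot(xx - center_a[0], yy - center_a[1])
    db = np.hypot(xx - center_b[0], yy - center_b[1])
    in_a, in_b = da <= radius_a, db <= radius_b
    # weight grows toward the center, zero at and beyond the edge
    wa = np.clip(1.0 - da / radius_a, 0.0, None)
    wb = np.clip(1.0 - db / radius_b, 0.0, None)
    out = np.zeros_like(img_a, dtype=float)
    only_a, only_b, both = in_a & ~in_b, in_b & ~in_a, in_a & in_b
    out[only_a] = img_a[only_a]          # non-intersecting: copy pixels
    out[only_b] = img_b[only_b]
    s = np.maximum(wa[both] + wb[both], 1e-6)  # avoid divide-by-zero
    out[both] = (wa[both] * img_a[both] + wb[both] * img_b[both]) / s
    return out
```

A constant-valued pair makes the behavior visible: overlap pixels take a weighted average, non-overlap pixels are copied unchanged.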
Preferably, after the final stitched image is obtained in step S3, the lesion information in the image sequence is further coordinate-mapped according to the stitched positions, specifically:
the position coordinates (x, y) are stored per lesion type or key-landmark point, and the homography matrix H obtained when matching each image is applied to the position coordinates to obtain the fused coordinates (x', y') = (x, y) × H; lesion information of the same type with nearby coordinate regions is then merged, and the positions of the optic disc and the macula — the key landmarks for zoning — are determined.
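The coordinate mapping written compactly above as (x', y') = (x, y) × H is, in the usual convention, the homography applied in homogeneous coordinates followed by a perspective divide. A minimal sketch (function name illustrative):

```python
import numpy as np

def map_points(points, H):
    """Map (x, y) lesion coordinates into the stitched image via a 3x3
    homography H: lift to homogeneous coordinates, apply H, divide by w."""
    pts = np.asarray(points, dtype=float)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N, 3)
    mapped = homog @ H.T                                  # apply H per point
    return mapped[:, :2] / mapped[:, 2:3]                 # perspective divide
```

For a pure-translation homography the mapped point is simply the input shifted by the translation.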
Preferably, after the positions of the optic disc and the macula among the zoning landmarks have been determined, the position of the ora serrata (serrated edge) is computed from a designed mapping relationship F between the optic-disc position Loc_disc and the macular position Loc_macula in the wide-angle fundus image and the wide-angle image width I_width:

Loc_saw = F(Loc_disc, Loc_macula, I_width),

where Loc_saw is the position of the ora serrata.
Preferably, the mapping relationship F is determined as follows:
several groups of wide-angle fundus image sets {[a1, a2, …, an], [b1, b2, …, bn], …} acquired from the same individual at different orientations, together with complete wide-area retinal fundus images [A, B, …] that contain the ora serrata, are prepared in advance, and a position mapping from the wide-angle image sets to the wide-area images is established. The wide-angle images a1, a2, …, an are each matched against image A to obtain the corresponding homography matrices h1, h2, …, hn, which map each image in the set to its position in image A. Using the ora serrata landmarks annotated in advance in image A, the coordinate information Loc_saw of the ora serrata points appearing in each wide-angle image is determined, so that Loc_saw = F(Loc_disc, Loc_macula, I_width) holds. A fitting polynomial F is then trained on these groups of data as training data, yielding the mapping function F from the optic-disc position, the macular position and the image width on a wide-angle image to the ora serrata position, after which the ora serrata position can be estimated directly with the fitted model F.
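The fitting of F can be sketched with ordinary least squares. The patent does not fix the polynomial's degree, so the sketch below assumes the simplest case, a linear model with a bias term, and all names and data are illustrative:

```python
import numpy as np

def fit_mapping(disc_xy, macula_xy, widths, saw_xy):
    """Fit Loc_saw ~ F(Loc_disc, Loc_macula, I_width) by linear least
    squares with a bias term (a stand-in for the 'fitting polynomial')."""
    X = np.column_stack([disc_xy, macula_xy, widths, np.ones(len(widths))])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(saw_xy, float), rcond=None)
    return coeffs  # shape (6, 2): features -> (x, y) of the ora serrata

def predict_saw(coeffs, disc_xy, macula_xy, width):
    # assemble the same feature vector used in training
    x = np.concatenate([disc_xy, macula_xy, [width, 1.0]])
    return x @ coeffs
```

On synthetic data generated by an exactly linear rule, the fit recovers the rule and predictions match the targets.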
Preferably, in step S4, the retinal zones of the fundus are obtained from the final stitched image as follows:
a circle centered on the optic disc with a radius of twice the disc-to-macula distance delimits zone I; a circle centered on the optic disc with a radius equal to the disc-to-ora-serrata distance, minus zone I, gives the annular zone II; and the rest of the fundus area outside zones I and II is zone III.
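The zoning rule above reduces to two radius comparisons per point. A minimal sketch (names illustrative):

```python
import numpy as np

def assign_zone(pt, disc, macula, saw):
    """Return the ROP zone (1, 2 or 3) of a point: zone I radius is
    2 * |disc - macula|, zone II radius is |disc - ora serrata|,
    zone III is everything beyond."""
    disc = np.asarray(disc, float)
    r = np.linalg.norm(np.asarray(pt, float) - disc)
    r1 = 2.0 * np.linalg.norm(np.asarray(macula, float) - disc)
    r2 = np.linalg.norm(np.asarray(saw, float) - disc)
    if r <= r1:
        return 1
    if r <= r2:
        return 2
    return 3
```

For example, with the disc at the origin, the macula 10 units away and the ora serrata 50 units away, points at distance 5, 30 and 60 fall in zones I, II and III respectively.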
A device for detecting retinopathy of prematurity, comprising:
an image acquisition module, which acquires wide-angle fundus images of the left and right eyes of the individual to be screened in different orientations;
a detection module, which feeds the wide-angle fundus images of each individual to be screened into a pre-trained lesion detection model to obtain a lesion detection result for each image;
an image merging module, which stitches and fuses the wide-angle fundus images carrying lesion detection results from the different orientations to obtain a final stitched image;
a zoning module, which obtains the retinal zones of the fundus from the final stitched image;
and an analysis module, which visualizes the lesion detection results and zone boundaries in the zoned image and gives a staging result according to the lesion types.
Compared with the prior art, the technical solution of the invention has the following beneficial effects:
(1) the invention realizes a complete diagnostic workflow of automatic lesion detection, multi-orientation image stitching and fusion, zoning, staging and identification of each subtype;
(2) by integrating multi-orientation image information, the invention visualizes the lesions, zones and clock-hour statistics over the complete retinal fundus area, and the doctor can choose to use the analysis result directly or to give a diagnosis independently based on the visualized evidence;
(3) the invention provides an objective and feasible method for automatically locating the key fundus landmark (the ora serrata) by modeling the relation between wide-angle and wide-area images;
(4) by displaying the detection results through image stitching and fusion, the invention matches the routine diagnostic workflow better than the prior practice of feeding images in sequentially and outputting predictions one by one.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
FIG. 2 is a schematic view of a process of image stitching and merging according to the present invention.
Fig. 3 is a schematic flow chart of image feature matching according to the present invention.
FIG. 4 is a flow diagram illustrating the process of stitching and fusing the highest scoring image pairs according to the present invention.
Fig. 5 is a schematic diagram of an image stitching fusion result provided by the embodiment.
Fig. 6 is a schematic view of a lesion information fusion result provided in the embodiment.
FIG. 7 is a schematic diagram illustrating wide-area image jagged edge labeling according to an embodiment.
Fig. 8 is a schematic diagram illustrating the mapping of the wide-angle image of fig. 7 to a wide-area image.
Fig. 9 is a schematic view of lesion merging and partition identification provided in the examples.
FIG. 10 is a schematic view of a device module according to the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
The present embodiment provides a method for detecting retinopathy of prematurity, as shown in fig. 1, comprising the following steps:
s1: acquiring wide-angle fundus images of the left and right eyes of an individual to be screened in different orientations;
s2: feeding the wide-angle fundus images of each individual to be screened into a pre-trained lesion detection model to obtain a lesion detection result for each image;
s3: stitching and fusing the wide-angle fundus images carrying lesion detection results from the different orientations to obtain a final stitched image;
s4: deriving the retinal zones of the fundus from the final stitched image;
s5: visualizing the lesion detection results and zone boundaries in the zoned image, and giving a staging result according to the lesion types.
In the specific implementation, this embodiment provides a clinical automatic auxiliary-diagnosis workflow comprising lesion detection, fundus zone localization, multi-orientation image merging, ROP staging analysis and other functions. The embodiment detects the type and position of each lesion in the multi-orientation fundus images acquired for each case, and then merges and stitches the images of different orientations, as shown in fig. 5; identical lesions in images of different orientations are merged at the same time, as shown in fig. 6; the zoning landmarks are localized by designing and fitting a mapping model between wide-angle and wide-area fundus images to complete the zone division, as shown in figs. 7 and 8; the relation between zones and lesions is combined to determine the disease stage and to judge the various additional (plus) lesions, and all previous detection results are visualized for the operating doctor, as shown in fig. 9; the doctor can choose to adopt the analysis results directly or to adjust the model's analysis results, and finally issues a diagnosis report.
Example 2
Based on embodiment 1, this embodiment provides the training process of the pre-trained lesion detection model of step S2:
training the lesion detection model requires annotating a batch of image data, the annotation information being the key landmarks and specific lesion types in each wide-angle fundus image, where the key landmarks include the optic disc and the macula; the annotated image data are then used to train the lesion detection model, yielding the pre-trained lesion detection model.
Example 3
Based on embodiment 1, this embodiment provides the specific procedure of step S3 for stitching and merging the wide-angle fundus images carrying lesion detection results from the different orientations, as shown in fig. 2:
s31: performing feature matching on the wide-angle fundus images from different orientations to obtain a stably matched image pair a and b;
s32: iteratively fitting the set of matched point pairs of the stably matched image pair to obtain a homography matrix H from image b to image a and a matching score;
s33: pairing the fundus images from different orientations in turn to obtain and rank all matching scores, stitching and fusing the highest-scoring image pair, determining from the matching score whether stitching succeeded, and recording the ids of images that failed to stitch in a list L;
s34: repeating the matching and fusion of steps S31 to S33 until every image in the wide-angle fundus image sequence has been processed, yielding the multi-orientation stitched retinal fundus image Image_1; then checking the record list L, repeating steps S31 to S33 once for the images that failed to stitch to obtain a stitched image Image_2, and stitching Image_1 and Image_2 according to steps S31 to S33 to obtain the final stitched image.
In step S31, feature matching of the wide-angle fundus images from different orientations is performed as shown in fig. 3:
s311: traversing the local key points of the screened images to obtain the corresponding local feature descriptor sequences F = (f1, f2, …, fn);
s312: computing the feature descriptors Fa = (fa1, fa2, …, fan) and Fb = (fb1, fb2, …, fbn) of image a and image b respectively, traversing Fa and Fb and computing their pairwise distance similarity to obtain the element-pair matching relation S of the two sequences, and keeping only the closest-distance pair for each element after screening, giving S-s;
s313: dividing image a and image b into M×M grid cells and analyzing the matching pairs in S-s cell by cell, on the following principle: if the feature point bi matched in S-s to a feature point ai in cell ma of image a lies in cell mb of image b, and at least x feature points in the neighborhood of ai within cell ma also have their matches in cell mb, then ai and bi are considered stably matched and the match is kept, otherwise the match is discarded; after traversing all cells, the stable matching relation is obtained, giving the stably matched image pair a and b.
The step S32 obtains a single mapping transformation matrix H and a matching score from the image b to the image a, specifically:
s321: filling the boundary of the image a and the image b;
s322: separating image channels and extracting G channels;
s323: segmenting a fundus image region ROI;
s324: corroding the ROI to obtain a mask;
s325: traversing, detecting and calculating local feature vectors in the mask;
s326: calculating nearest neighbor distances pairwise according to the local feature vectors of the image a and the image b;
s327: based on the characteristic matching of the grid statistics, eliminating error matching;
s328: judging whether the matching score needs to be calculated or not, if so, evaluating the matching score according to the quality and the quantity of the matched feature vectors, and if not, entering the step S329;
s329: based on a random sampling consistency algorithm, further eliminating error matching, reserving main matching pairs, and fitting to obtain a single mapping matrix;
s3210: obtaining a single mapping transformation matrix H from the image b to the image a according to the matching pair, and evaluating scale, angle and displacement transformation indexes in the affine transformation matrix;
s3211: and judging whether the index is proper, if so, obtaining a single mapping transformation matrix H from the image b to the image a, and if not, failing to match.
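The index evaluation of steps S3210 and S3211 can be sketched as follows: the scale, angle and displacement implied by the fitted matrix H are read off and compared against limits. The threshold values below are illustrative assumptions; the patent does not state concrete limits.

```python
import math

def homography_sanity(H, scale_range=(0.7, 1.4), max_angle_deg=30.0, max_shift=400.0):
    """Check the scale / angle / displacement transformation indexes of a
    single mapping matrix H (given as a 3x3 nested list), as in S3210-S3211."""
    a, b = H[0][0], H[0][1]
    c, d = H[1][0], H[1][1]
    # singular values of the 2x2 block [[a, b], [c, d]] give the two scale factors
    q1 = a * a + b * b + c * c + d * d
    q2 = math.hypot(a * a + b * b - c * c - d * d, 2 * (a * c + b * d))
    s_max = math.sqrt((q1 + q2) / 2.0)
    s_min = math.sqrt(max((q1 - q2) / 2.0, 0.0))
    angle = math.degrees(math.atan2(c, a))      # in-plane rotation estimate
    shift = math.hypot(H[0][2], H[1][2])        # translation magnitude
    return (scale_range[0] <= s_min and s_max <= scale_range[1]
            and abs(angle) <= max_angle_deg and shift <= max_shift)
```

An identity-like H passes, while an implausibly large scaling or displacement fails the check and the match is rejected.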
In step S33, the image pair with the highest score is stitched and fused, as shown in fig. 4, specifically:
s331: carrying out affine transformation on the image b by using a single mapping transformation matrix H to obtain an image b';
s332: respectively calculating circumscribed circles of the retinal areas in the image a and the image b' to obtain the information of the circle center and the radius;
s333: the splicing and fusion process performs fusion according to the principle that the closer the edges are, the lower the weight is, and the closer the center of the image is, the higher the weight is, and the fusion process divides a fusion area into three types of intersecting non-boundary, intersecting boundary and non-intersecting:
firstly, in the disjoint regions, the corresponding areas are filled directly with the pixel values of image a and image b' respectively;
secondly, in the intersecting non-boundary region, the distances La and Lb' from the current position to the circle centers of image a and image b' are calculated, and the pixel-filling weights from image a and image b' are set according to the ratio of the two distances;
and thirdly, in the intersecting boundary region, La and Lb' are calculated in the same way, a threshold t is introduced to compress or stretch the resulting weights, and pixel filling is completed according to the final weights.
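The three-case weighting of step S333 can be sketched for a single overlapping grayscale pixel as follows. The inverse-distance weighting and the clipping rule for the boundary case are assumptions consistent with, but not specified by, the text: the patent only states that pixels nearer an image center weigh more and that a threshold t compresses the boundary weights.

```python
def blend_pixel(pa, pb, La, Lb, boundary=False, t=0.8):
    """Distance-weighted fusion of one overlapping pixel, sketched from S333.

    La / Lb: distances from the current position to the retina-circle centers
    of image a and image b'. A pixel closer to a center gets a higher weight.
    For color images the same weights would be applied per channel."""
    wa = 1.0 / (La + 1e-6)                  # closer to center of a => larger weight
    wb = 1.0 / (Lb + 1e-6)
    wa, wb = wa / (wa + wb), wb / (wa + wb) # normalize to wa + wb = 1
    if boundary:                            # intersecting-boundary region:
        wa = min(max(wa, 1.0 - t), t)       # compress the weight into [1-t, t]
        wb = 1.0 - wa
    return wa * pa + wb * pb
```

Equidistant pixels blend half-and-half; near a stitching boundary the clipping keeps either image from dominating completely, which softens visible seams.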
Example 4
In this embodiment, on the basis of embodiment 1, after the final stitched image is obtained in step S3, coordinate mapping is further performed on the lesion information in the image sequence according to the stitched position, specifically:
storing the position coordinates (x, y) according to the type of each focus or key position point, applying the single mapping transformation matrix H obtained in matching for each image to the position coordinates to obtain the fused position coordinates (x', y') = (x, y) × H, merging focus information on the principle that foci of the same type with nearby coordinate regions are merged, and determining the positions of the optic disc and the macula among the partitioned key positions.
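The coordinate mapping can be sketched as follows; note that (x', y') = (x, y) × H is shorthand for the homogeneous transform [x', y', w]^T = H·[x, y, 1]^T followed by division by w, which the sketch makes explicit.

```python
def map_lesion(xy, H):
    """Map a focus coordinate into the stitched image using the single
    mapping (homography) matrix H, given as a 3x3 nested list."""
    x, y = xy
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]   # homogeneous scale factor
    return (xh / w, yh / w)
```

For a pure translation H the division by w is a no-op; for a general homography it is what keeps the mapped coordinates consistent with the warped image.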
After the optic disc and macula positions among the partitioned key positions are determined, the position of the sawtooth edge is calculated from the previously designed mapping relationship F, based on the fundus wide-angle image, between the optic disc position Loc_disc, the macular position Loc_macula and the wide-angle image width I_width, with the formula:
Loc_saw = F(Loc_disc, Loc_macula, I_width),
where Loc_saw is the position of the sawtooth edge.
The determination of the mapping relationship F specifically includes:
several groups of wide-angle fundus image sets {[a1, a2, …, an], [b1, b2, …, bn], …} acquired from the same individual at different orientations, together with complete retinal fundus wide-area images [A, B, …] containing the sawtooth edge, are prepared in advance, and the position mapping relation from each wide-angle image set to the wide-area image is established: the wide-angle images a1, a2, …, an are matched with image A respectively to obtain the corresponding single mapping transformation matrices h1, h2, …, hn, which map the images in the set to their corresponding positions in image A; through the sawtooth-edge key positions annotated in advance in image A, the coordinate information Loc_saw of the sawtooth edge appearing in each wide-angle image is determined, so that Loc_saw = F(Loc_disc, Loc_macula, I_width) holds; a fitting polynomial F is then trained with these groups of data as training data, giving the mapping function F from the optic disc position, macular position and image width of a wide-angle image to the sawtooth-edge position, after which the sawtooth-edge position can be estimated directly with the fitted mapping function F.
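The training of F can be sketched as follows, assuming a first-order (linear) basis for the "fitting polynomial" — the patent does not specify the polynomial order — and hypothetical annotated samples (Loc_disc, Loc_macula, I_width, Loc_saw); the helper names and sample values are illustrative, not from the patent.

```python
def solve4(A, b):
    """Solve a 4x4 linear system by Gaussian elimination with partial pivoting."""
    n = 4
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                  # back substitution
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_F(samples):
    """Fit F(d, m, w) = c0*d + c1*m + c2*w + c3 from annotated samples
    (d, m, w, loc_saw) via the normal equations (X^T X) c = X^T y."""
    feats = [(d, m, w, 1.0) for d, m, w, _ in samples]
    ys = [s[3] for s in samples]
    XtX = [[sum(f[i] * f[j] for f in feats) for j in range(4)] for i in range(4)]
    Xty = [sum(f[i] * y for f, y in zip(feats, ys)) for i in range(4)]
    c = solve4(XtX, Xty)
    return lambda d, m, w: c[0] * d + c[1] * m + c[2] * w + c[3]
```

Once fitted, F estimates the sawtooth-edge position of a new wide-angle image directly from its optic disc position, macular position and width.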
Example 5
In this embodiment, based on embodiment 1, it is disclosed that the fundus retina partition is obtained from the final stitched image in step S4, specifically:
taking the optic disc position as the center and twice the distance from the optic disc to the macula lutea as the radius, the enclosed circular area is area I; a circle is then drawn with the optic disc as the center and the distance from the optic disc to the sawtooth edge as the radius, and the annular area left after removing area I is area II; the remaining fundus area outside areas I and II is area III.
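The partition rule above amounts to a point classifier. The sketch below assumes planar pixel coordinates for the optic disc, macula and query point, and a precomputed disc-to-sawtooth-edge radius.

```python
import math

def rop_zone(pt, disc, macula, ora_radius):
    """Assign a fundus point to area I, II or III as in step S4: area I is the
    circle centered on the optic disc with radius twice the disc-to-macula
    distance; area II extends out to the disc-to-sawtooth-edge radius
    (ora_radius); area III is everything beyond."""
    r = math.dist(pt, disc)
    if r <= 2.0 * math.dist(disc, macula):
        return 1
    return 2 if r <= ora_radius else 3
```

Classifying each fused focus coordinate this way yields the partition information used for staging.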
Visualizing the focus detection result and the partition identification in the partitioned image and giving a staging result according to the focus type; determining the various ROP subtypes according to the partition information corresponding to the type and position of each focus; and, from the partition distribution of the foci, counting the number of clock hours over which a given focus type is distributed and displaying the integrated information as structured information.
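The "clock number" presumably refers to the clock-hour convention used to report the circumferential extent of ROP lesions. A minimal sketch, assuming image coordinates with y growing downward and 12 o'clock at the top:

```python
import math

def clock_hour(pt, center):
    """Convert a focus position to a clock hour around the given center
    (12 at the top, hours increasing clockwise). Image y grows downward,
    so 'up' is the negative y direction."""
    dx = pt[0] - center[0]
    dy = pt[1] - center[1]
    angle = math.degrees(math.atan2(dx, -dy)) % 360.0   # clockwise from 12 o'clock
    hour = round(angle / 30.0) % 12                     # 30 degrees per hour
    return 12 if hour == 0 else hour
```

Counting the distinct clock hours occupied by foci of one type then gives the circumferential-extent statistic mentioned above.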
Example 6
The present embodiment provides a retinopathy of prematurity detection apparatus, as shown in fig. 10, including:
the image acquisition module is used for acquiring wide-angle images of the eyeground at the left side and the right side of the individual to be screened in different directions;
the detection module sends the fundus wide-angle images in different directions into a focus detection model trained in advance by taking an individual to be screened as a unit to obtain a focus detection result of each fundus wide-angle image;
the image merging module is used for splicing and merging the eyeground wide-angle images with focus detection results in different directions to obtain a final spliced image;
the partitioning module is used for obtaining fundus retina partitions according to the final spliced image;
and the analysis module visualizes the focus detection result and the partition identification in the partitioned image and gives a staging result according to the focus type.
The same or similar reference numerals correspond to the same or similar parts;
the terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent;
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the claims of the present invention.
Claims (10)
1. A method for detecting retinopathy of prematurity, which is characterized by comprising the following steps:
s1: acquiring wide-angle images of the eyeground at the left side and the right side of an individual to be screened in different directions;
s2: sending the fundus wide-angle images in different directions into a pre-trained focus detection model by taking an individual to be screened as a unit to obtain a focus detection result of each fundus wide-angle image;
s3: splicing and merging the fundus wide-angle images with focus detection results in different directions to obtain a final spliced image;
s4: obtaining fundus retina subareas according to the final spliced image;
s5: and visualizing a focus detection result and a partition identification in the partitioned image, and giving a staging result according to the focus type.
2. The method for detecting retinopathy of prematurity as in claim 1, wherein the training of the focus detection model in step S2 requires labeling a batch of image data, the labeling information comprising the key parts and the specific focus types in the fundus wide-angle image, wherein the key parts include the optic disc and the macula lutea; and the labeled image data are used to train the focus detection model, yielding the pre-trained focus detection model.
3. The method for detecting retinopathy of prematurity as in claim 1, wherein in step S3, the fundus wide-angle images in different orientations with the focus detection result are merged, specifically:
s31: performing characteristic matching on the wide-angle fundus images in different directions to obtain stably matched image pairs a and b;
s32: performing iterative fitting on the set according to the matching point pairs of the stably matched image pair to obtain a single mapping transformation matrix H and a matching score from the image b to the image a;
s33: pairing fundus images in different directions in sequence to obtain all matching scores and sequencing, splicing and fusing the image pairs with the highest scores, determining whether splicing is successful according to the matching scores, recording image id of splicing failure, and storing the image id into a list L;
s34: repeating the matching and fusion steps of steps S31 to S33 until all fundus wide-angle images in the fundus wide-angle image sequence are processed, obtaining the fundus retina image Image_1 stitched from the different orientations; then checking the record list L, repeating steps S31 to S33 once for each image id that failed to stitch to obtain a stitched image Image_2, and stitching Image_1 and Image_2 according to steps S31 to S33 to obtain the final stitched image.
4. The retinopathy of prematurity detection method of claim 3, wherein in the step S31, feature matching is performed on fundus wide-angle images in different orientations, specifically:
s311: traversing the local feature operator sequence F = (f1, f2, …, fn) corresponding to the screened local key points of each image;
s312: computing the feature operator sequences Fa = (fa1, fa2, …, fan) and Fb = (fb1, fb2, …, fbn) of image a and image b respectively, traversing them in turn to calculate the distance similarity between Fa and Fb to obtain the element-pair matching relation S of the two sequences, and after screening keeping only the closest-distance pair of each element, denoted S-S;
s313: dividing image a and image b into M×M grids and analysing the matching pairs in S-S grid by grid, with the following analysis principle: if the feature point ai lies in grid ma of image a, its matched feature point bi in S-S lies in grid mb of image b, and at least x further feature points in the neighborhood of ai within grid ma also have their matched points in grid mb, then ai and bi are considered stably matched and the matching relation is kept; non-conforming matching relations are eliminated, and after traversing all grids the stable matching relation, and thereby the stably matched image pair a and b, is obtained.
5. The method for detecting retinopathy of prematurity as in claim 4, wherein the image pair with the highest score in step S33 is stitched and fused, specifically:
s331: carrying out affine transformation on the image b by using a single mapping transformation matrix H to obtain an image b';
s332: respectively calculating circumscribed circles of the retinal areas in the image a and the image b' to obtain the information of the circle center and the radius;
s333: the splicing and fusion process performs fusion according to the principle that the closer the edges are, the lower the weight is, and the closer the center of the image is, the higher the weight is, and the fusion process divides a fusion area into three types of intersecting non-boundary, intersecting boundary and non-intersecting:
firstly, in the disjoint regions, the corresponding areas are filled directly with the pixel values of image a and image b' respectively;
secondly, in the intersecting non-boundary region, the distances La and Lb' from the current position to the circle centers of image a and image b' are calculated, and the pixel-filling weights from image a and image b' are set according to the ratio of the two distances;
and thirdly, in the intersecting boundary region, La and Lb' are calculated in the same way, a threshold t is introduced to compress or stretch the resulting weights, and pixel filling is completed according to the final weights.
6. The method for detecting retinopathy of prematurity as in claim 5, wherein after the final stitched image is obtained in step S3, coordinate mapping is further performed on the lesion information in the image sequence according to the stitched position, specifically:
storing the position coordinates (x, y) according to the type of each focus or key position point, applying the single mapping transformation matrix H obtained in matching for each image to the position coordinates to obtain the fused position coordinates (x', y') = (x, y) × H, merging focus information on the principle that foci of the same type with nearby coordinate regions are merged, and determining the positions of the optic disc and the macula among the partitioned key positions.
7. The method for detecting retinopathy of prematurity as in claim 6, wherein after the optic disc and macula positions among the partitioned key positions are determined, the position of the sawtooth edge is calculated from the previously designed mapping relationship F, based on the fundus wide-angle image, between the optic disc position Loc_disc, the macular position Loc_macula and the wide-angle image width I_width, with the formula:
Loc_saw = F(Loc_disc, Loc_macula, I_width),
where Loc_saw is the position of the sawtooth edge.
8. The method for detecting retinopathy of prematurity as in claim 7, wherein the determination of the mapping relationship F is specifically as follows:
several groups of wide-angle fundus image sets {[a1, a2, …, an], [b1, b2, …, bn], …} acquired from the same individual at different orientations, together with complete retinal fundus wide-area images [A, B, …] containing the sawtooth edge, are prepared in advance, and the position mapping relation from each wide-angle image set to the wide-area image is established: the wide-angle images a1, a2, …, an are matched with image A respectively to obtain the corresponding single mapping transformation matrices h1, h2, …, hn, which map the images in the set to their corresponding positions in image A; through the sawtooth-edge key positions annotated in advance in image A, the coordinate information Loc_saw of the sawtooth edge appearing in each wide-angle image is determined, so that Loc_saw = F(Loc_disc, Loc_macula, I_width) holds; a fitting polynomial F is then trained with these groups of data as training data, giving the mapping function F from the optic disc position, macular position and image width of a wide-angle image to the sawtooth-edge position, after which the sawtooth-edge position can be estimated directly with the fitted mapping function F.
9. The method for detecting retinopathy of prematurity as claimed in claim 8, wherein the step S4 obtains fundus retina partitions from the final stitched image, specifically:
taking the optic disc position as the center and twice the distance from the optic disc to the macula lutea as the radius, the enclosed circular area is area I; a circle is then drawn with the optic disc as the center and the distance from the optic disc to the sawtooth edge as the radius, and the annular area left after removing area I is area II; the remaining fundus area outside areas I and II is area III.
10. A retinopathy of prematurity detection device comprising:
the image acquisition module is used for acquiring wide-angle images of the eyeground at the left side and the right side of the individual to be screened in different directions;
the detection module sends the fundus wide-angle images in different directions into a focus detection model trained in advance by taking an individual to be screened as a unit to obtain a focus detection result of each fundus wide-angle image;
the image merging module is used for splicing and merging the eyeground wide-angle images with focus detection results in different directions to obtain a final spliced image;
the partitioning module is used for obtaining fundus retina partitions according to the final spliced image;
and the analysis module visualizes the focus detection result and the partition identification in the partitioned image and gives a staging result according to the focus type.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210327065.1A CN114862760B (en) | 2022-03-30 | 2022-03-30 | Retinopathy of prematurity detection method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210327065.1A CN114862760B (en) | 2022-03-30 | 2022-03-30 | Retinopathy of prematurity detection method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114862760A true CN114862760A (en) | 2022-08-05 |
CN114862760B CN114862760B (en) | 2023-04-28 |
Family
ID=82629219
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210327065.1A Active CN114862760B (en) | 2022-03-30 | 2022-03-30 | Retinopathy of prematurity detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114862760B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115619747A (en) * | 2022-10-26 | 2023-01-17 | 中山大学中山眼科中心 | Method for generating panoramic image map of eye fundus retina of infant and aligning follow-up data |
CN117132590A (en) * | 2023-10-24 | 2023-11-28 | 威海天拓合创电子工程有限公司 | Image-based multi-board defect detection method and device |
CN117854700A (en) * | 2024-01-19 | 2024-04-09 | 首都医科大学宣武医院 | Postoperative management method and system based on wearable monitoring equipment |
CN118195924A (en) * | 2024-05-17 | 2024-06-14 | 南昌大学第二附属医院 | Premature infant retinopathy analysis system based on image recognition |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130271728A1 (en) * | 2011-06-01 | 2013-10-17 | Tushar Mahendra Ranchod | Multiple-lens retinal imaging device and methods for using device to identify, document, and diagnose eye disease |
CN108022228A (en) * | 2016-10-31 | 2018-05-11 | 天津工业大学 | Based on the matched colored eye fundus image joining method of SIFT conversion and Otsu |
CN108392174A (en) * | 2018-04-19 | 2018-08-14 | 梁建宏 | A kind of automatic check method and system of retinopathy of prematurity |
CN109255753A (en) * | 2018-08-27 | 2019-01-22 | 重庆贝奥新视野医疗设备有限公司 | A kind of eye fundus image joining method |
CN109934787A (en) * | 2019-03-18 | 2019-06-25 | 湖南科技大学 | A kind of image split-joint method based on high dynamic range |
CN112164043A (en) * | 2020-09-23 | 2021-01-01 | 苏州大学 | Method and system for splicing multiple fundus images |
CN113298742A (en) * | 2021-05-20 | 2021-08-24 | 广东省人民医院 | Multi-modal retinal image fusion method and system based on image registration |
CN113436070A (en) * | 2021-06-20 | 2021-09-24 | 四川大学 | Fundus image splicing method based on deep neural network |
CN113425248A (en) * | 2021-06-24 | 2021-09-24 | 平安科技(深圳)有限公司 | Medical image evaluation method, device, equipment and computer storage medium |
CN113674157A (en) * | 2021-10-21 | 2021-11-19 | 广东唯仁医疗科技有限公司 | Fundus image stitching method, computer device and storage medium |
2022-03-30 CN CN202210327065.1A patent/CN114862760B/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130271728A1 (en) * | 2011-06-01 | 2013-10-17 | Tushar Mahendra Ranchod | Multiple-lens retinal imaging device and methods for using device to identify, document, and diagnose eye disease |
CN108022228A (en) * | 2016-10-31 | 2018-05-11 | 天津工业大学 | Based on the matched colored eye fundus image joining method of SIFT conversion and Otsu |
CN108392174A (en) * | 2018-04-19 | 2018-08-14 | 梁建宏 | A kind of automatic check method and system of retinopathy of prematurity |
CN109255753A (en) * | 2018-08-27 | 2019-01-22 | 重庆贝奥新视野医疗设备有限公司 | A kind of eye fundus image joining method |
CN109934787A (en) * | 2019-03-18 | 2019-06-25 | 湖南科技大学 | A kind of image split-joint method based on high dynamic range |
CN112164043A (en) * | 2020-09-23 | 2021-01-01 | 苏州大学 | Method and system for splicing multiple fundus images |
CN113298742A (en) * | 2021-05-20 | 2021-08-24 | 广东省人民医院 | Multi-modal retinal image fusion method and system based on image registration |
CN113436070A (en) * | 2021-06-20 | 2021-09-24 | 四川大学 | Fundus image splicing method based on deep neural network |
CN113425248A (en) * | 2021-06-24 | 2021-09-24 | 平安科技(深圳)有限公司 | Medical image evaluation method, device, equipment and computer storage medium |
CN113674157A (en) * | 2021-10-21 | 2021-11-19 | 广东唯仁医疗科技有限公司 | Fundus image stitching method, computer device and storage medium |
Non-Patent Citations (3)
Title |
---|
LINGJIAO PAN ET AL.: "Retinal OCT Image Registration Methods and Applications", IEEE Reviews in Biomedical Engineering *
XI JINGWEI: "Research on Multi-Fundus-Image Stitching Technology", China Master's Theses Full-text Database, Medicine and Health Sciences *
LIAN XIANFENG ET AL.: "A Deep-Learning-Based Retinal Lesion Image Recognition Method", Computer Applications and Software *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115619747A (en) * | 2022-10-26 | 2023-01-17 | 中山大学中山眼科中心 | Method for generating panoramic image map of eye fundus retina of infant and aligning follow-up data |
CN115619747B (en) * | 2022-10-26 | 2023-09-19 | 中山大学中山眼科中心 | Child fundus retina panoramic image map generation and follow-up data alignment method |
CN117132590A (en) * | 2023-10-24 | 2023-11-28 | 威海天拓合创电子工程有限公司 | Image-based multi-board defect detection method and device |
CN117132590B (en) * | 2023-10-24 | 2024-03-01 | 威海天拓合创电子工程有限公司 | Image-based multi-board defect detection method and device |
CN117854700A (en) * | 2024-01-19 | 2024-04-09 | 首都医科大学宣武医院 | Postoperative management method and system based on wearable monitoring equipment |
CN117854700B (en) * | 2024-01-19 | 2024-05-28 | 首都医科大学宣武医院 | Postoperative management method and system based on wearable monitoring equipment |
CN118195924A (en) * | 2024-05-17 | 2024-06-14 | 南昌大学第二附属医院 | Premature infant retinopathy analysis system based on image recognition |
Also Published As
Publication number | Publication date |
---|---|
CN114862760B (en) | 2023-04-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114862760A (en) | Method and device for detecting retinopathy of prematurity | |
CN105513077B (en) | A kind of system for diabetic retinopathy screening | |
CN110570421B (en) | Multitask fundus image classification method and apparatus | |
CN102458225B (en) | Image processing apparatus and control method thereof | |
CN110555845A (en) | Fundus OCT image identification method and equipment | |
CN100530204C (en) | Assessment of lesions in an image | |
CN110428421A (en) | Macula lutea image region segmentation method and apparatus | |
CN110517219B (en) | Corneal topography distinguishing method and system based on deep learning | |
US10878574B2 (en) | 3D quantitative analysis of retinal layers with deep learning | |
CN109961848A (en) | Macula lutea image classification method and equipment | |
CN111126180B (en) | Facial paralysis severity automatic detection system based on computer vision | |
CN102186407A (en) | Image processing apparatus for ophthalmic tomogram, and image processing method | |
CN112837805B (en) | Eyelid topological morphology feature extraction method based on deep learning | |
CN111179258A (en) | Artificial intelligence method and system for identifying retinal hemorrhage image | |
CN116758038A (en) | Infant retina disease information identification method and system based on training network | |
CN114445666A (en) | Deep learning-based method and system for classifying left eye, right eye and visual field positions of fundus images | |
Giancardo | Automated fundus images analysis techniques to screen retinal diseases in diabetic patients | |
CN115619747B (en) | Child fundus retina panoramic image map generation and follow-up data alignment method | |
CN112634221A (en) | Image and depth-based cornea level identification and lesion positioning method and system | |
CN111402246A (en) | Eye ground image classification method based on combined network | |
Zhou et al. | Computer aided diagnosis for diabetic retinopathy based on fundus image | |
Perez-Rovira et al. | Robust optic disc location via combination of weak detectors | |
CN113558564B (en) | Data processing system based on simple high myopia database construction | |
CN112991289B (en) | Processing method and device for standard section of image | |
US11302006B2 (en) | 3D quantitative analysis with deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||