CN114862760B - Retinopathy of prematurity detection method and device - Google Patents

Retinopathy of prematurity detection method and device

Info

Publication number
CN114862760B
CN114862760B (application CN202210327065.1A)
Authority
CN
China
Prior art keywords
image
wide
fundus
images
angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210327065.1A
Other languages
Chinese (zh)
Other versions
CN114862760A (en)
Inventor
丁小燕
谢志
周昊
孙立梅
何尧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongshan Ophthalmic Center
Original Assignee
Zhongshan Ophthalmic Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongshan Ophthalmic Center filed Critical Zhongshan Ophthalmic Center
Priority to CN202210327065.1A priority Critical patent/CN114862760B/en
Publication of CN114862760A publication Critical patent/CN114862760A/en
Application granted granted Critical
Publication of CN114862760B publication Critical patent/CN114862760B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Ophthalmology & Optometry (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Computing Systems (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses a method and a device for detecting retinopathy of prematurity, wherein the method comprises the following steps: s1: acquiring fundus wide-angle images of the left and right sides of an individual to be screened in different directions; s2: sending the fundus wide-angle images with different directions into a pre-trained focus detection model by taking an individual to be screened as a unit to obtain focus detection results of each fundus wide-angle image; s3: the fundus wide-angle images with focus detection results in different directions are spliced and combined to obtain a final spliced image; s4: obtaining fundus retina partitions according to the final spliced image; s5: and visualizing the focus detection result and the partition identification in the partitioned image, and giving a stage result according to the focus type. The invention realizes the complete diagnosis flow of automatically detecting focus, splicing and fusing multi-azimuth images, partitioning, staging and identifying each subtype.

Description

Retinopathy of prematurity detection method and device
Technical Field
The invention relates to the field of image processing, in particular to a method and a device for detecting retinopathy of prematurity.
Background
Retinopathy of prematurity (ROP) is a vasoproliferative retinal disease commonly seen in premature or very low birth weight infants and is the leading cause of blindness in infancy. In China, the incidence of ROP among infants with a birth weight of 1500 g or less is about 26%. Because the treatment window of ROP is narrow, early screening and timely intervention are the key factors in preventing ROP-related blindness. The typical workflow of current fundus screening is as follows: a set of fundus images of the neonate is acquired with specialized equipment, and the image set is then analyzed and diagnosed by a specialized ophthalmologist, who issues the screening report.
The difficulty of ROP screening is that the procedure, from lesion detection and analysis, to zonal localization over the entire fundus area, to disease staging that combines all of this information, is cumbersome, and an ophthalmologist needs years of specialized training and experience to give accurate and proficient diagnoses. Existing methods do not solve all the problems in the complete ROP detection and diagnosis workflow well. Patent 107945870B predicts the ROP lesion stage by building a deep neural network model; patent 108392174B establishes a detection pipeline of lesion detection, key-position detection, zoning, stage prediction and additional (plus) lesion detection according to the consensus of clinical guidelines; patents 111259982A and 112308830A similarly implement ROP classification and zoning, respectively, using attention convolutional neural networks; patent 111374632A proposes a method for key-position and lesion detection assisted by quality-control, enhancement and other modules; Tong Yan et al. propose an ROP lesion detection procedure based on a deep convolutional network model that classifies the disease from detected lesions (Chinese Journal of Experimental Ophthalmology 2019, 37(008): 647-651; Eye Vis (Lond). 2020 Aug 1); another paper (IEEE Trans Med Imaging. 2021 Mar 12) designs a novel deep feature-fusion network to predict ROP image classification; and James et al. use a deep convolutional network to identify pre-plus, plus disease and normal cases (JAMA Ophthalmol. 2018 Jul 1).
The main problems of the existing methods are: (1) most published work focuses on solving only part of the ROP detection workflow, such as plus-lesion detection or disease staging; (2) a few works describe the full detection workflow according to clinical guidelines but do not propose practical methods for each part, such as zoning or key-position detection; (3) compared with the traditional diagnostic workflow, the methods proposed in prior work lack interpretability of the diagnostic result: regions highly related to the disease are usually represented only by a probability of illness or a heat map, so no clear diagnostic basis can be given; (4) existing methods usually analyze single images to produce a prediction, which differs from the actual practice of jointly evaluating multiple images per subject; because ROP lesions involve the whole retinal area and existing imaging devices cannot cover it in one shot, the final diagnosis must combine images from multiple orientations, whereas existing methods mostly discard the orientation information across images.
Disclosure of Invention
The primary aim of the invention is to provide a method for detecting retinopathy of prematurity, which realizes the complete diagnosis flow of automatic detection of focus, multi-azimuth image splicing and fusion, zoning, stage and subtype identification.
It is a further object of the present invention to provide a retinopathy of prematurity detection device.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a method of retinopathy of prematurity detection comprising the steps of:
s1: acquiring fundus wide-angle images of the left and right sides of an individual to be screened in different directions;
s2: sending the fundus wide-angle images with different directions into a pre-trained focus detection model by taking an individual to be screened as a unit to obtain focus detection results of each fundus wide-angle image;
s3: the fundus wide-angle images with focus detection results in different directions are spliced and combined to obtain a final spliced image;
s4: obtaining fundus retina partitions according to the final spliced image;
s5: and visualizing the focus detection result and the partition identification in the partitioned image, and giving a stage result according to the focus type.
Preferably, for the training of the focus detection model in step S2, a batch of image data is first labeled; the labeling information consists of the key parts and the specific focus types in each fundus wide-angle image, wherein the key parts include the optic disc and the macula, and the focus types include, but are not limited to, vasodilation, neovascularization, hemorrhage and retinal detachment (incomplete/total); the pre-trained focus detection model is then obtained by training the focus detection model on the labeled image data.
Preferably, in the step S3, the fundus wide-angle images with different orientations of the focus detection result are combined, specifically:
s31: performing feature matching on fundus wide-angle images in different directions to obtain stably matched image pairs a and b;
s32: performing iterative fitting according to a matching point pair set of the stably matched image pairs to obtain a single-mapping transformation matrix H and a matching score from the image b to the image a;
s33: sequentially pairing fundus images in different directions in pairs to obtain all matching scores, sequencing, splicing and fusing the image pair with the highest score, determining whether splicing is successful or not according to the matching score, recording image ids with failed splicing, and storing the image ids in a list L;
s34: repeating the matching and fusing steps of steps S31 to S33 until all the fundus wide-angle images in the fundus wide-angle image sequence are processed, at which point the fundus retina image image_1 spliced from the different orientations is obtained; the record list L is then checked, steps S31 to S33 are repeated once for the image ids that failed to be spliced to obtain a spliced image image_2, and image_1 and image_2 are spliced according to steps S31 to S33 to obtain the final spliced image.
Preferably, in the step S31, feature matching is performed on fundus wide-angle images in different directions, specifically:
s311: traversing a local feature operator sequence F (F1, F2, …, fn) corresponding to the local key points of the screened image;
s312: calculating feature operators Fa (Fa 1, fa2, …, fan) and Fb (Fb 1, fb2, … fbn) of the image a and the image b respectively, traversing and calculating distance similarity in Fa and Fb in sequence to obtain element pair matching relation S of two sequences, and only reserving nearest distance pairing S-S of each element pair after screening;
s313: dividing an image a and an image b into M grids, and respectively analyzing matching pairs in S-S by taking each grid as a unit, wherein the analysis principle is as follows: if the feature points bi in the matching relation S-S corresponding to the feature points ai in the grid ma in the image a are located in the grid mb in the image b, and at least x matching relation feature points corresponding to the feature points ai in the surrounding neighborhood of the feature points ai in the grid ma are also located in the grid mb, the feature points ai and the feature points bi are considered to be stably matched, the matching relation is reserved, the matching relation which is not matched is removed, and the stable matching relation is obtained after traversing all the grids, so that the stably matched image pairs a and b are obtained.
Preferably, in the step S33, the image pair with the highest score is spliced and fused, specifically:
s331: carrying out affine transformation on the image b by utilizing a single-mapping transformation matrix H to obtain an image b';
s332: respectively calculating the circumscribed circles of the retina areas in the image a and the image b' to obtain circle center and radius information;
s333: the fusion is performed according to the principle that pixels closer to an image edge receive lower weight and pixels closer to an image center receive higher weight, and the fusion area is divided into three types of regions: non-intersecting, intersecting non-boundary and intersecting boundary:
(1) non-intersecting regions are filled directly with the corresponding pixel values of image a or image b';
(2) in intersecting non-boundary regions, the distances La and Lb' from the current position to the circle centers of image a and image b' are calculated, and the pixel filling weights from image a and image b' are set according to the ratio of the two distances;
(3) in intersecting boundary regions, La and Lb' are likewise calculated; at this point a threshold t is introduced to compress or stretch the weights obtained from La and Lb', and the pixel filling is completed according to the final weights.
Preferably, after the final stitched image is obtained in step S3, coordinate mapping is further performed on focus information in the image sequence according to the stitched position, specifically:
The position coordinates (x, y) are stored separately according to the category of each focus or key position point; for each image, the single-mapping transformation matrix H obtained during matching is applied to these coordinates to give the fused position coordinates (x', y') = (x, y)·H; the focus information is then merged according to the principle of combining regions of the same type with similar coordinates, thereby determining the positions of the optic disc and the macula among the key parts used for partitioning.
Preferably, after the positions of the optic disc and the macula among the key parts used for partitioning are determined, the position of the serrated edge is calculated from a pre-designed mapping relation F, based on the optic disc position Loc_disc, the macula position Loc_macula and the width I_width of the fundus wide-angle image, according to the formula:

Loc_saw = F(Loc_disc, Loc_macula, I_width),

where Loc_saw is the position of the serrated edge.
Preferably, the determining of the mapping relation F specifically includes:
Multiple groups of wide-angle fundus images of different orientations acquired from the same individual, {[a1, a2, …, an], [b1, b2, …, bn], …}, are prepared together with complete wide-area retinal fundus images containing the serrated edge, [A, B, …], and a position mapping relation from the wide-angle image sets to the wide-area images is established: the wide-angle images a1, a2, …, an are matched with image A respectively to obtain the corresponding single-mapping transformation matrices h1, h2, …, hn, so that the images in the set are mapped to their corresponding positions in image A; from the serrated-edge key positions annotated in advance in image A, the coordinate information Loc_saw of the serrated-edge points appearing in each wide-angle image is determined, so that Loc_saw = F(Loc_disc, Loc_macula, I_width) holds. These groups of data are used as training data to fit a polynomial F, yielding a mapping function F from the optic disc position, macula position and image width on the wide-angle image to the serrated edge; thereafter the fitted mapping function F is used directly to estimate the serrated-edge position.
Preferably, in the step S4, a fundus retina partition is obtained according to the final stitched image, specifically:
A circle is drawn with the optic disc position as the center and twice the distance from the optic disc to the macula as the radius; this circular region is area I. A second circle is drawn with the optic disc as the center and the distance from the optic disc to the serrated edge as the radius; the annular region remaining after removing area I is area II. The rest of the whole fundus area, excluding area I and area II, is area III.
A retinopathy of prematurity detection device comprising:
the image acquisition module is used for acquiring fundus wide-angle images of the left side and the right side of the individual to be screened in different directions;
the detection module sends fundus wide-angle images with different directions into a focus detection model trained in advance by taking an individual to be screened as a unit to obtain focus detection results of each fundus wide-angle image;
the image merging module is used for merging the fundus wide-angle images with focus detection results in different directions to obtain a final merged image;
the partitioning module is used for obtaining fundus retina partitions according to the final spliced image;
and the analysis module visualizes the focus detection result and the partition identification in the partitioned image and gives a stage result according to the focus type.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
(1) The invention realizes the complete diagnosis flow of automatically detecting focus, splicing and fusing multi-azimuth images, partitioning, staging and identifying each subtype;
(2) By integrating multi-orientation image information, the invention intuitively displays the lesions and partitions over the complete fundus retina area together with clock-hour statistics, and the doctor can choose to use the analysis result directly or give an independent diagnosis based on the visualized evidence;
(3) The invention provides an objective and feasible method for automatically identifying the key fundus parts (the serrated edge) by modeling with wide-angle and wide-area images;
(4) Compared with the method of sequentially inputting and sequentially outputting the predicted result in the prior work, the method of the invention is more in line with the conventional diagnosis and analysis flow.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
Fig. 2 is a flow chart of image merging according to the present invention.
Fig. 3 is a flow chart of image feature matching according to the present invention.
Fig. 4 is a schematic flow chart of the highest scoring image pair stitching fusion of the present invention.
Fig. 5 is a schematic diagram of an image stitching fusion result provided in the embodiment.
Fig. 6 is a schematic diagram of a focus information fusion result provided in the example.
Fig. 7 is a schematic diagram of a wide area image sawtooth edge labeling provided in an embodiment.
Fig. 8 is a schematic diagram of the wide-angle image map of fig. 7 to a wide-area image.
Fig. 9 is a schematic diagram of lesion merging and zoning identification provided in the examples.
Fig. 10 is a schematic view of the device module of the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
for the purpose of better illustrating the embodiments, certain elements of the drawings may be omitted, enlarged or reduced and do not represent the actual product dimensions;
it will be appreciated by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical scheme of the invention is further described below with reference to the accompanying drawings and examples.
Example 1
The present embodiment provides a method for detecting retinopathy of prematurity, as shown in fig. 1, comprising the steps of:
s1: acquiring fundus wide-angle images of the left and right sides of an individual to be screened in different directions;
s2: sending the fundus wide-angle images with different directions into a pre-trained focus detection model by taking an individual to be screened as a unit to obtain focus detection results of each fundus wide-angle image;
s3: the fundus wide-angle images with focus detection results in different directions are spliced and combined to obtain a final spliced image;
s4: obtaining fundus retina partitions according to the final spliced image;
s5: and visualizing the focus detection result and the partition identification in the partitioned image, and giving a stage result according to the focus type.
In a specific implementation, this embodiment provides a clinical automated auxiliary diagnosis process comprising focus detection, fundus partition localization, multi-orientation image merging, ROP staging analysis and related functions. For each case, the embodiment detects the focus type and position in each of the multi-orientation fundus images and then stitches and merges the images from the different orientations, as shown in fig. 5; identical lesions appearing in images of different orientations are merged at the same time, as shown in fig. 6; the positions of the key partition parts are obtained by designing and defining a mapping relation model between wide-angle and wide-area fundus images, completing the regional division, as shown in fig. 7 and fig. 8; finally, the relationship between regions and lesions determines the disease stage and the presence of the various additional (plus) lesions, and the detection results are visualized for the operator, as shown in fig. 9; the operator may either adopt the analysis result directly or adjust the model's analysis result before issuing the final diagnosis report.
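For clarity, the following is a minimal, hedged sketch of how the S1 to S5 flow in fig. 1 could be composed in code; the four callables (detect, stitch, zone, stage) are hypothetical placeholders for the components detailed in the later examples, not interfaces defined by this patent.

```python
from typing import Callable, Dict, List, Sequence

def rop_screening_pipeline(images: Sequence, detect: Callable, stitch: Callable,
                           zone: Callable, stage: Callable) -> Dict:
    """Compose the S1-S5 flow of fig. 1 for one individual to be screened.
    `images` are the multi-orientation fundus wide-angle images (S1)."""
    detections: List = [detect(img) for img in images]      # S2: per-image focus detection
    panorama, fused = stitch(images, detections)            # S3: stitch images, merge focus info
    zones = zone(panorama, fused)                           # S4: area I / II / III division
    report = stage(fused, zones)                            # S5: staging + subtype identification
    return {"panorama": panorama, "lesions": fused, "zones": zones, "report": report}
```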
Example 2
The present embodiment provides the training procedure of the focus detection model trained in advance in step S2 on the basis of embodiment 1:
Firstly, a batch of image data is labeled for training the focus detection model; the labeling information consists of the key parts and the specific focus types in each fundus wide-angle image, and the key parts include the optic disc and the macula; the pre-trained focus detection model is then obtained by training the focus detection model on the labeled image data.
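As an illustration only, such a detector could be trained with any generic object-detection framework; the sketch below assumes torchvision's Faster R-CNN and an illustrative class list, neither of which is prescribed by this embodiment.

```python
import torch
import torchvision

# Illustrative label set: key parts plus focus types mentioned above.
CLASSES = ["background", "optic_disc", "macula", "vasodilation",
           "neovascularization", "hemorrhage", "retinal_detachment"]

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=len(CLASSES))
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

def train_one_epoch(loader):
    """loader yields (images, targets); each target is {"boxes": (N, 4), "labels": (N,)}."""
    model.train()
    for images, targets in loader:
        loss_dict = model(images, targets)   # detection losses returned in train mode
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```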
Example 3
In this embodiment, based on embodiment 1, a specific flow is provided for merging fundus wide-angle images with different orientations of the focus detection result in step S3, as shown in fig. 2, specifically:
s31: performing feature matching on fundus wide-angle images in different directions to obtain stably matched image pairs a and b;
s32: performing iterative fitting according to a matching point pair set of the stably matched image pairs to obtain a single-mapping transformation matrix H and a matching score from the image b to the image a;
s33: sequentially pairing fundus images in different directions in pairs to obtain all matching scores, sequencing, splicing and fusing the image pair with the highest score, determining whether splicing is successful or not according to the matching score, recording image ids with failed splicing, and storing the image ids in a list L;
s34: repeating the matching and fusing steps of steps S31 to S33 until all the fundus wide-angle images in the fundus wide-angle image sequence are processed, at which point the fundus retina image image_1 spliced from the different orientations is obtained; the record list L is then checked, steps S31 to S33 are repeated once for the image ids that failed to be spliced to obtain a spliced image image_2, and image_1 and image_2 are spliced according to steps S31 to S33 to obtain the final spliced image.
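A simplified sketch of the greedy pairwise splicing loop in S31 to S34 is given below; match_pair and fuse_pair stand in for the matching (S31 to S32) and fusion (S33) routines described next, and the score threshold is an assumed parameter rather than a value fixed by this embodiment.

```python
def stitch_sequence(images, match_pair, fuse_pair, score_thresh=0.5):
    """Greedy pairwise splicing (S31-S34): repeatedly fuse the best-scoring pair,
    record ids that fail to splice in the list L, and return them for a second pass.
    match_pair(a, b) -> (H, score); fuse_pair(a, b, H) -> fused image."""
    pool = dict(enumerate(images))           # image id -> image
    failed_ids = []                          # the record list L
    while len(pool) > 1:
        scored = [(match_pair(pool[i], pool[j])[1], i, j)
                  for i in pool for j in pool if i < j]
        score, i, j = max(scored)            # highest matching score first
        if score < score_thresh:             # best remaining pair cannot be spliced
            failed_ids.append(j)
            pool.pop(j)
            continue
        H, _ = match_pair(pool[i], pool[j])  # transformation b -> a for the chosen pair
        pool[i] = fuse_pair(pool[i], pool[j], H)
        pool.pop(j)
    image_1 = next(iter(pool.values()))
    return image_1, failed_ids               # failed ids are re-spliced into image_2
```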
In step S31, feature matching is performed on fundus wide-angle images in different directions, as shown in fig. 3, specifically:
s311: traversing a local feature operator sequence F (F1, F2, …, fn) corresponding to the local key points of the screened image;
s312: calculating feature operators Fa (Fa 1, fa2, …, fan) and Fb (Fb 1, fb2, … fbn) of the image a and the image b respectively, traversing and calculating distance similarity in Fa and Fb in sequence to obtain element pair matching relation S of two sequences, and only reserving nearest distance pairing S-S of each element pair after screening;
s313: dividing an image a and an image b into M grids, and respectively analyzing matching pairs in S-S by taking each grid as a unit, wherein the analysis principle is as follows: if the feature points bi in the matching relation S-S corresponding to the feature points ai in the grid ma in the image a are located in the grid mb in the image b, and at least x matching relation feature points corresponding to the feature points ai in the surrounding neighborhood of the feature points ai in the grid ma are also located in the grid mb, the feature points ai and the feature points bi are considered to be stably matched, the matching relation is reserved, the matching relation which is not matched is removed, and the stable matching relation is obtained after traversing all the grids, so that the stably matched image pairs a and b are obtained.
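A simplified sketch of the grid-consistency filtering in S311 to S313 follows; the neighbourhood-support test is approximated here by counting matches that agree on the same grid-cell correspondence, and the grid size and support threshold (the value x above) are illustrative assumptions.

```python
import numpy as np

def grid_filter_matches(kp_a, kp_b, matches, shape_a, shape_b,
                        grid=20, min_support=3):
    """Keep only matches whose grid-cell correspondence is supported by at least
    `min_support` matches, a simplified version of the stability check in S313.
    kp_a, kp_b: (N, 2) keypoint coordinates; matches: list of (i, j) index pairs."""
    ha, wa = shape_a[:2]
    hb, wb = shape_b[:2]

    def cell(pt, w, h):
        return (int(pt[0] * grid / w), int(pt[1] * grid / h))

    votes = {}
    cells = []
    for i, j in matches:                     # count votes per (cell_a -> cell_b) pair
        key = (cell(kp_a[i], wa, ha), cell(kp_b[j], wb, hb))
        votes[key] = votes.get(key, 0) + 1
        cells.append(key)

    # A match is kept as "stable" only if enough other matches support its cell pair.
    return [m for m, key in zip(matches, cells) if votes[key] >= min_support]
```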
The step S32 obtains a single-mapping transformation matrix H and a matching score from the image b to the image a, specifically:
s321: filling the boundary of the image a and the image b;
s322: separating image channels and extracting G channels;
s323: segmenting a fundus image region ROI;
s324: corroding the ROI to obtain a mask;
s325: traversing to detect and calculate local feature vectors in the mask;
s326: calculating nearest neighbor distances according to the local feature vectors of the image a and the image b in pairs;
s327: based on the feature matching of grid statistics, eliminating error matching;
s328: judging whether the matching score is required to be calculated, if so, evaluating the matching score according to the quality and the quantity of the matched feature vector pair, and if not, entering step S329;
s329: further eliminating error matching based on a random sampling consistency algorithm, reserving main matching pairs, and fitting to obtain a single mapping matrix;
s3210: obtaining a single-mapping transformation matrix H from the image b to the image a according to the matching pair, and evaluating scale, angle and displacement transformation indexes in the affine transformation matrix;
s3211: judging whether the index is suitable, if so, obtaining a single-mapping transformation matrix H from the image b to the image a, and if not, failing to match.
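The homography fitting and plausibility check in S326 to S3211 might look like the sketch below, using OpenCV's RANSAC-based findHomography; the scale and displacement thresholds are illustrative assumptions, not values fixed by this embodiment.

```python
import cv2
import numpy as np

def estimate_transform(pts_b, pts_a, img_size=1600,
                       max_scale=1.5, max_shift_ratio=0.8):
    """Fit the single-mapping transformation matrix H from image b to image a
    from the filtered match coordinates (S329-S3210), then check that its scale
    and displacement indices are plausible (S3211)."""
    H, inlier_mask = cv2.findHomography(np.float32(pts_b), np.float32(pts_a),
                                        cv2.RANSAC, ransacReprojThreshold=5.0)
    if H is None:
        return None
    scale = float(np.sqrt(abs(np.linalg.det(H[:2, :2]))))   # rough scale index
    shift = float(np.linalg.norm(H[:2, 2]))                 # rough displacement index
    if not (1.0 / max_scale < scale < max_scale) or shift > max_shift_ratio * img_size:
        return None                                         # matching fails (S3211)
    return H, inlier_mask
```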
In the step S33, the image pair with the highest score is spliced and fused, as shown in fig. 4, specifically:
s331: carrying out affine transformation on the image b by utilizing a single-mapping transformation matrix H to obtain an image b';
s332: respectively calculating the circumscribed circles of the retina areas in the image a and the image b' to obtain circle center and radius information;
s333: the fusion is performed according to the principle that pixels closer to an image edge receive lower weight and pixels closer to an image center receive higher weight, and the fusion area is divided into three types of regions: non-intersecting, intersecting non-boundary and intersecting boundary:
(1) non-intersecting regions are filled directly with the corresponding pixel values of image a or image b';
(2) in intersecting non-boundary regions, the distances La and Lb' from the current position to the circle centers of image a and image b' are calculated, and the pixel filling weights from image a and image b' are set according to the ratio of the two distances;
(3) in intersecting boundary regions, La and Lb' are likewise calculated; at this point a threshold t is introduced to compress or stretch the weights obtained from La and Lb', and the pixel filling is completed according to the final weights.
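A compressed sketch of the distance-weighted fusion in S331 to S333 follows; it assumes image b has already been warped into image a's canvas (same array shape), handles the three region types implicitly through the weight formula, and the exact weighting and clamping scheme is an illustrative assumption rather than the patent's precise recipe.

```python
import numpy as np

def blend_pair(img_a, img_b_warped, center_a, center_b, t=0.15):
    """Fuse image a with the warped image b' using centre-distance weights:
    pixels nearer an image's own circumscribed-circle centre get higher weight,
    and weights near the overlap boundary are clamped by the threshold t."""
    mask_a = img_a.sum(axis=2) > 0                    # valid retina pixels of a
    mask_b = img_b_warped.sum(axis=2) > 0             # valid retina pixels of b'

    yy, xx = np.mgrid[0:img_a.shape[0], 0:img_a.shape[1]]
    la = np.hypot(xx - center_a[0], yy - center_a[1])  # distance to centre of a
    lb = np.hypot(xx - center_b[0], yy - center_b[1])  # distance to centre of b'

    wa = np.where(mask_a, lb / (la + lb + 1e-6), 0.0)  # closer to a's centre -> larger
    wb = np.where(mask_b, la / (la + lb + 1e-6), 0.0)
    wa = np.clip(wa, t, 1 - t) * mask_a                # compress extremes near boundaries
    wb = np.clip(wb, t, 1 - t) * mask_b
    norm = wa + wb + 1e-6                              # non-intersecting regions keep one source
    out = img_a * (wa / norm)[..., None] + img_b_warped * (wb / norm)[..., None]
    return out.astype(img_a.dtype)
```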
Example 4
This embodiment further discloses, on the basis of embodiment 1, that after the final stitched image is obtained in step S3, coordinate mapping is additionally performed on the focus information in the image sequence according to the spliced positions, specifically:
The position coordinates (x, y) are stored separately according to the category of each focus or key position point; for each image, the single-mapping transformation matrix H obtained during matching is applied to these coordinates to give the fused position coordinates (x', y') = (x, y)·H; the focus information is then merged according to the principle of combining regions of the same type with similar coordinates, thereby determining the positions of the optic disc and the macula among the key parts used for partitioning.
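The coordinate mapping and same-type merging just described could be sketched as follows; the homogeneous-coordinate convention and the distance threshold used to decide "similar coordinate region" are assumptions made for illustration.

```python
import numpy as np

def map_points(points_xy, H):
    """Map focus / key-point coordinates (x, y) of one source image into the
    final spliced image with that image's transformation matrix H, using
    homogeneous coordinates."""
    pts = np.hstack([points_xy, np.ones((len(points_xy), 1))])   # (N, 3)
    mapped = pts @ np.asarray(H).T
    return mapped[:, :2] / mapped[:, 2:3]                        # back to (x', y')

def merge_same_type(lesions, dist_thresh=30.0):
    """Merge records of the same focus type whose mapped centres lie within
    dist_thresh pixels; lesions is a list of (type, (x, y)) tuples."""
    merged = []
    for kind, pt in lesions:
        for m_kind, m_pt in merged:
            if m_kind == kind and np.hypot(pt[0] - m_pt[0], pt[1] - m_pt[1]) < dist_thresh:
                break                                            # duplicate of an existing record
        else:
            merged.append((kind, pt))
    return merged
```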
After the positions of the optic disc and the macula among the key parts used for partitioning are determined, the position of the serrated edge is calculated from a pre-designed mapping relation F, based on the optic disc position Loc_disc, the macula position Loc_macula and the width I_width of the fundus wide-angle image, according to the formula:

Loc_saw = F(Loc_disc, Loc_macula, I_width),

where Loc_saw is the position of the serrated edge.
The determination of the mapping relation F specifically comprises the following steps:
Multiple groups of wide-angle fundus images of different orientations acquired from the same individual, {[a1, a2, …, an], [b1, b2, …, bn], …}, are prepared together with complete wide-area retinal fundus images containing the serrated edge, [A, B, …], and a position mapping relation from the wide-angle image sets to the wide-area images is established: the wide-angle images a1, a2, …, an are matched with image A respectively to obtain the corresponding single-mapping transformation matrices h1, h2, …, hn, so that the images in the set are mapped to their corresponding positions in image A; from the serrated-edge key positions annotated in advance in image A, the coordinate information Loc_saw of the serrated-edge points appearing in each wide-angle image is determined, so that Loc_saw = F(Loc_disc, Loc_macula, I_width) holds. These groups of data are used as training data to fit a polynomial F, yielding a mapping function F from the optic disc position, macula position and image width on the wide-angle image to the serrated edge; thereafter the fitted mapping function F is used directly to estimate the serrated-edge position.
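One way to fit the mapping relation F as a polynomial, as described above, is sketched below with ordinary least squares; the feature construction and polynomial degree are illustrative assumptions, since the embodiment does not fix the exact form of F.

```python
import numpy as np

def fit_serrated_edge_mapping(disc_xy, macula_xy, widths, saw_xy, degree=2):
    """Fit F: (Loc_disc, Loc_macula, I_width) -> Loc_saw by polynomial least squares.
    disc_xy, macula_xy, saw_xy: (N, 2) arrays; widths: (N,) array of image widths."""
    X = np.hstack([disc_xy, macula_xy, widths.reshape(-1, 1)])   # (N, 5) raw features
    feats = [np.ones((len(X), 1)), X]
    if degree >= 2:                                              # add pairwise product terms
        feats.append(np.einsum('ni,nj->nij', X, X).reshape(len(X), -1))
    A = np.hstack(feats)
    W, *_ = np.linalg.lstsq(A, saw_xy, rcond=None)               # coefficient matrix

    def F(disc, macula, width):
        x = np.hstack([disc, macula, [width]])
        f = [1.0, *x]
        if degree >= 2:
            f += list(np.outer(x, x).ravel())
        return np.asarray(f) @ W                                 # predicted Loc_saw (x, y)

    return F
```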
Example 5
The present embodiment further discloses, based on embodiment 1, that in step S4, the fundus retina partition is obtained according to the final stitched image, specifically:
A circle is drawn with the optic disc position as the center and twice the distance from the optic disc to the macula as the radius; this circular region is area I. A second circle is drawn with the optic disc as the center and the distance from the optic disc to the serrated edge as the radius; the annular region remaining after removing area I is area II. The rest of the whole fundus area, excluding area I and area II, is area III.
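The assignment of a single lesion point to area I, II or III in the final spliced image, following the circle construction just described, can be sketched as:

```python
import numpy as np

def assign_area(point_xy, disc_xy, macula_xy, saw_xy):
    """Return the retinal area ("I", "II" or "III") of a point, using the optic
    disc as the common centre, twice the disc-to-macula distance as the radius
    of area I, and the disc-to-serrated-edge distance as the radius of area II."""
    d = np.hypot(point_xy[0] - disc_xy[0], point_xy[1] - disc_xy[1])
    r1 = 2.0 * np.hypot(macula_xy[0] - disc_xy[0], macula_xy[1] - disc_xy[1])
    r2 = np.hypot(saw_xy[0] - disc_xy[0], saw_xy[1] - disc_xy[1])
    if d <= r1:
        return "I"
    if d <= r2:
        return "II"
    return "III"
```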
The focus detection results and partition identifiers are visualized in the partitioned image, and a staging result is given according to the focus types; the various ROP subtypes are determined from the partition information corresponding to the focus types and positions; according to the regional distribution of the lesions, the number of clock hours occupied by each focus type is counted, and the integrated information is displayed in a structured form.
Example 6
The present embodiment provides a retinopathy of prematurity detection apparatus, as shown in fig. 10, comprising:
the image acquisition module is used for acquiring fundus wide-angle images of the left side and the right side of the individual to be screened in different directions;
the detection module sends fundus wide-angle images with different directions into a focus detection model trained in advance by taking an individual to be screened as a unit to obtain focus detection results of each fundus wide-angle image;
the image merging module is used for merging the fundus wide-angle images with focus detection results in different directions to obtain a final merged image;
the partitioning module is used for obtaining fundus retina partitions according to the final spliced image;
and the analysis module visualizes the focus detection result and the partition identification in the partitioned image and gives a stage result according to the focus type.
The same or similar reference numerals correspond to the same or similar components;
the terms describing the positional relationship in the drawings are merely illustrative, and are not to be construed as limiting the present patent;
it is to be understood that the above examples of the present invention are provided by way of illustration only and not by way of limitation of the embodiments of the present invention. Other variations or modifications of the above teachings will be apparent to those of ordinary skill in the art. It is not necessary here nor is it exhaustive of all embodiments. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the invention are desired to be protected by the following claims.

Claims (8)

1. A method for detecting retinopathy of prematurity comprising the steps of:
s1: acquiring fundus wide-angle images of the left and right sides of an individual to be screened in different directions;
s2: sending the fundus wide-angle images with different directions into a pre-trained focus detection model by taking an individual to be screened as a unit to obtain focus detection results of each fundus wide-angle image;
s3: the fundus wide-angle images with focus detection results in different directions are spliced and combined to obtain a final spliced image;
s4: obtaining fundus retina partitions according to the final spliced image;
s5: visualizing the focus detection result and the partition mark in the partitioned image, and giving a stage result according to the focus type;
in the step S3, the fundus wide-angle images with focus detection results in different directions are combined, specifically:
s31: performing feature matching on fundus wide-angle images in different directions to obtain stably matched image pairs a and b;
s32: performing iterative fitting according to a matching point pair set of the stably matched image pairs to obtain a single-mapping transformation matrix H and a matching score from the image b to the image a;
s33: sequentially pairing fundus images in different directions in pairs to obtain all matching scores, sequencing, splicing and fusing the image pair with the highest score, determining whether splicing is successful or not according to the matching score, recording image ids with failed splicing, and storing the image ids in a list L;
s34: repeating the matching and fusing steps of steps S31 to S33 until all the fundus wide-angle images in the fundus wide-angle image sequence are processed, at which point the fundus retina image image_1 spliced from the different orientations is obtained; then checking the record list L, repeating steps S31 to S33 once for the image ids that failed to be spliced to obtain a spliced image image_2, and splicing image_1 and image_2 according to steps S31 to S33 to obtain the final spliced image;
in the step S33, the image pair with the highest score is spliced and fused, specifically:
s331: carrying out affine transformation on the image b by utilizing a single-mapping transformation matrix H to obtain an image b';
s332: respectively calculating the circumscribed circles of the retina areas in the image a and the image b' to obtain circle center and radius information;
s333: the fusion is performed according to the principle that pixels closer to an image edge receive lower weight and pixels closer to an image center receive higher weight, and the fusion area is divided into three types of regions: non-intersecting, intersecting non-boundary and intersecting boundary:
(1) non-intersecting regions are filled directly with the corresponding pixel values of image a or image b';
(2) in intersecting non-boundary regions, the distances La and Lb' from the current position to the circle centers of image a and image b' are calculated, and the pixel filling weights from image a and image b' are set according to the ratio of the two distances;
(3) in intersecting boundary regions, La and Lb' are likewise calculated; at this point a threshold t is introduced to compress or stretch the weights obtained from La and Lb', and the pixel filling is completed according to the final weights.
2. The method according to claim 1, wherein the training of the focus detection model in step S2 requires labeling a batch of image data, the labeling information being the key parts and the specific focus types in the fundus wide-angle image, wherein the key parts include the optic disc and the macula; and the pre-trained focus detection model is obtained by training the focus detection model on the labeled image data.
3. The method according to claim 1, wherein the feature matching is performed on fundus wide-angle images of different orientations in step S31, specifically:
s311: traversing a local feature operator sequence F (F1, F2, …, fn) corresponding to the local key points of the screened image;
s312: calculating feature operators Fa (Fa 1, fa2, …, fan) and Fb (Fb 1, fb2, … fbn) of the image a and the image b respectively, traversing and calculating distance similarity in Fa and Fb in sequence to obtain element pair matching relation S of two sequences, and only reserving nearest distance pairing S-S of each element pair after screening;
s313: dividing an image a and an image b into M grids, and respectively analyzing matching pairs in S-S by taking each grid as a unit, wherein the analysis principle is as follows: if the feature points bi in the matching relation S-S corresponding to the feature points ai in the grid ma in the image a are located in the grid mb in the image b, and at least x matching relation feature points corresponding to the feature points ai in the surrounding neighborhood of the feature points ai in the grid ma are also located in the grid mb, the feature points ai and the feature points bi are considered to be stably matched, the matching relation is reserved, the matching relation which is not matched is removed, and the stable matching relation is obtained after traversing all the grids, so that the stably matched image pairs a and b are obtained.
4. The method of claim 3, wherein after obtaining the final stitched image in step S3, the coordinate mapping is further performed on the lesion information in the image sequence according to the stitched position, specifically:
The position coordinates (x, y) are stored separately according to the category of each focus or key position point; for each image, the single-mapping transformation matrix H obtained during matching is applied to these coordinates to give the fused position coordinates (x', y') = (x, y)·H; the focus information is then merged according to the principle of combining regions of the same type with similar coordinates, thereby determining the positions of the optic disc and the macula among the key parts used for partitioning.
5. The method according to claim 4, wherein, after the positions of the optic disc and the macula among the key parts used for partitioning are determined, the position of the serrated edge is calculated from a pre-designed mapping relation F, based on the optic disc position Loc_disc, the macula position Loc_macula and the width I_width of the fundus wide-angle image, according to the formula:

Loc_saw = F(Loc_disc, Loc_macula, I_width),

where Loc_saw is the position of the serrated edge.
6. The method for detecting retinopathy of prematurity according to claim 5, wherein the determining of the mapping relation F specifically comprises:
Multiple groups of wide-angle fundus images of different orientations acquired from the same individual, {[a1, a2, …, an], [b1, b2, …, bn], …}, are prepared together with complete wide-area retinal fundus images containing the serrated edge, [A, B, …], and a position mapping relation from the wide-angle image sets to the wide-area images is established: the wide-angle images a1, a2, …, an are matched with image A respectively to obtain the corresponding single-mapping transformation matrices h1, h2, …, hn, so that the images in the set are mapped to their corresponding positions in image A; from the serrated-edge key positions annotated in advance in image A, the coordinate information Loc_saw of the serrated-edge points appearing in each wide-angle image is determined, so that Loc_saw = F(Loc_disc, Loc_macula, I_width) holds. These groups of data are used as training data to fit a polynomial F, yielding a mapping function F from the optic disc position, macula position and image width on the wide-angle image to the serrated edge; thereafter the fitted mapping function F is used directly to estimate the serrated-edge position.
7. The method according to claim 6, wherein the step S4 is performed to obtain the fundus retina partition from the final stitched image, specifically:
taking the position of the optic disc as the center of a circle and twice the distance from the optic disc to the macula as the radius, the resulting circular region being area I; taking the optic disc as the center of a circle and the distance from the optic disc to the serrated edge as the radius to make a second circle, the annular region remaining after removing area I being area II; and the rest of the whole fundus area, excluding area I and area II, being area III.
8. A retinopathy of prematurity detection device comprising:
the image acquisition module is used for acquiring fundus wide-angle images of different directions on the left side and the right side of an individual to be screened;
the detection module sends fundus wide-angle images with different directions into a focus detection model trained in advance by taking an individual to be screened as a unit to obtain focus detection results of each fundus wide-angle image;
the image merging module is used for merging the fundus wide-angle images with focus detection results in different directions to obtain a final merged image;
the partitioning module is used for obtaining fundus retina partitions according to the final spliced image;
the analysis module visualizes the focus detection result and the partition mark in the partitioned image and gives a stage result according to the focus type;
the image merging module is used for merging fundus wide-angle images with focus detection results in different directions, and specifically comprises the following steps:
performing feature matching on fundus wide-angle images in different directions to obtain stably matched image pairs a and b;
performing iterative fitting according to a matching point pair set of the stably matched image pairs to obtain a single-mapping transformation matrix H and a matching score from the image b to the image a;
sequentially pairing fundus images in different directions in pairs to obtain all matching scores, sequencing, splicing and fusing the image pair with the highest score, determining whether splicing is successful or not according to the matching score, recording image ids with failed splicing, and storing the image ids in a list L;
repeating the matching and fusing steps until all the fundus wide-angle images in the fundus wide-angle image sequence are processed, at which point the fundus retina image image_1 spliced from the different orientations is obtained; the record list L is then checked, the matching and fusing steps are repeated once for the image ids that failed to be spliced to obtain a spliced image image_2, and image_1 and image_2 are spliced in the same manner to obtain the final spliced image;
the image pair with the highest score is spliced and fused, specifically:
carrying out affine transformation on the image b by utilizing a single-mapping transformation matrix H to obtain an image b';
respectively calculating the circumscribed circles of the retina areas in the image a and the image b' to obtain circle center and radius information;
The fusion is performed according to the principle that pixels closer to an image edge receive lower weight and pixels closer to an image center receive higher weight, and the fusion area is divided into three types of regions: non-intersecting, intersecting non-boundary and intersecting boundary:
(1) non-intersecting regions are filled directly with the corresponding pixel values of image a or image b';
(2) in intersecting non-boundary regions, the distances La and Lb' from the current position to the circle centers of image a and image b' are calculated, and the pixel filling weights from image a and image b' are set according to the ratio of the two distances;
(3) in intersecting boundary regions, La and Lb' are likewise calculated; at this point a threshold t is introduced to compress or stretch the weights obtained from La and Lb', and the pixel filling is completed according to the final weights.
CN202210327065.1A 2022-03-30 2022-03-30 Retinopathy of prematurity detection method and device Active CN114862760B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210327065.1A CN114862760B (en) 2022-03-30 2022-03-30 Retinopathy of prematurity detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210327065.1A CN114862760B (en) 2022-03-30 2022-03-30 Retinopathy of prematurity detection method and device

Publications (2)

Publication Number Publication Date
CN114862760A CN114862760A (en) 2022-08-05
CN114862760B true CN114862760B (en) 2023-04-28

Family

ID=82629219

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210327065.1A Active CN114862760B (en) 2022-03-30 2022-03-30 Retinopathy of prematurity detection method and device

Country Status (1)

Country Link
CN (1) CN114862760B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115619747B (en) * 2022-10-26 2023-09-19 中山大学中山眼科中心 Child fundus retina panoramic image map generation and follow-up data alignment method
CN117132590B (en) * 2023-10-24 2024-03-01 威海天拓合创电子工程有限公司 Image-based multi-board defect detection method and device
CN117854700B (en) * 2024-01-19 2024-05-28 首都医科大学宣武医院 Postoperative management method and system based on wearable monitoring equipment
CN118195924B (en) * 2024-05-17 2024-07-26 南昌大学第二附属医院 Premature infant retinopathy analysis system based on image recognition

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113425248A (en) * 2021-06-24 2021-09-24 平安科技(深圳)有限公司 Medical image evaluation method, device, equipment and computer storage medium
CN113674157A (en) * 2021-10-21 2021-11-19 广东唯仁医疗科技有限公司 Fundus image stitching method, computer device and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130271728A1 (en) * 2011-06-01 2013-10-17 Tushar Mahendra Ranchod Multiple-lens retinal imaging device and methods for using device to identify, document, and diagnose eye disease
CN108022228A (en) * 2016-10-31 2018-05-11 天津工业大学 Based on the matched colored eye fundus image joining method of SIFT conversion and Otsu
CN108392174B (en) * 2018-04-19 2021-01-19 梁建宏 Automatic examination method and system for retinopathy of prematurity
CN109255753B (en) * 2018-08-27 2023-04-11 重庆贝奥新视野医疗设备有限公司 Fundus image splicing method
CN109934787B (en) * 2019-03-18 2022-11-25 湖南科技大学 Image splicing method based on high dynamic range
CN112164043A (en) * 2020-09-23 2021-01-01 苏州大学 Method and system for splicing multiple fundus images
CN113298742A (en) * 2021-05-20 2021-08-24 广东省人民医院 Multi-modal retinal image fusion method and system based on image registration
CN113436070B (en) * 2021-06-20 2022-05-17 四川大学 Fundus image splicing method based on deep neural network

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113425248A (en) * 2021-06-24 2021-09-24 平安科技(深圳)有限公司 Medical image evaluation method, device, equipment and computer storage medium
CN113674157A (en) * 2021-10-21 2021-11-19 广东唯仁医疗科技有限公司 Fundus image stitching method, computer device and storage medium

Also Published As

Publication number Publication date
CN114862760A (en) 2022-08-05

Similar Documents

Publication Publication Date Title
CN114862760B (en) Retinopathy of prematurity detection method and device
CN112288706B (en) Automatic chromosome karyotype analysis and abnormality detection method
CN105513077B (en) A kind of system for diabetic retinopathy screening
CN102626305B (en) Image processing equipment and image processing method
CN110555845B (en) Fundus OCT image recognition method and device
US11080850B2 (en) Glaucoma diagnosis method using fundus image and apparatus for the same
CN103315702B (en) Image processing apparatus
CN110570421B (en) Multitask fundus image classification method and apparatus
CN102458225B (en) Image processing apparatus and control method thereof
US9149179B2 (en) System and method for identifying eye conditions
CN110327013B (en) Fundus image detection method, device and equipment and storage medium
CN103717122B (en) Ophthalmic diagnosis holding equipment and ophthalmic diagnosis support method
JP4767570B2 (en) Corneal shape analysis system
CN110517219B (en) Corneal topography distinguishing method and system based on deep learning
KR20190087272A (en) Method for diagnosing glaucoma using fundus image and apparatus therefor
CN110428421A (en) Macula lutea image region segmentation method and apparatus
CN109829882A (en) A kind of stages of DR prediction technique
CN109961848A (en) Macula lutea image classification method and equipment
CN109697719A (en) A kind of image quality measure method, apparatus and computer readable storage medium
CN114343563A (en) Method, device and system for assisting dry eye diagnosis and typing through multi-modal fusion
CN115115841A (en) Shadow spot image processing and analyzing method and system
CN113903082A (en) Human body gait monitoring algorithm based on dynamic time planning
CN109549619B (en) Fundus disc edge width determination method, glaucoma disease diagnosis device and system
CN115619747B (en) Child fundus retina panoramic image map generation and follow-up data alignment method
Zhou et al. Computer aided diagnosis for diabetic retinopathy based on fundus image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant