CN114119588A - Method, device and system for training fundus macular lesion region detection model - Google Patents

Method, device and system for training fundus macular lesion region detection model

Info

Publication number
CN114119588A
Authority
CN
China
Prior art keywords: lesion region, region, fundus, predicted, area
Prior art date
Legal status
Pending
Application number
CN202111457655.8A
Other languages
Chinese (zh)
Inventor
钟利伟
赵雷
唐轶
金蒙
李博超
何彬
Current Assignee
Beijing Daheng Prust Medical Technology Co ltd
Original Assignee
Beijing Daheng Prust Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Daheng Prust Medical Technology Co ltd filed Critical Beijing Daheng Prust Medical Technology Co ltd
Priority to CN202111457655.8A
Publication of CN114119588A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

Some embodiments of the present application provide a method, apparatus and system for training a fundus macular lesion region detection model, the method comprising: acquiring a plurality of fundus image data, wherein the fundus image data includes the original data and labeled region data of each fundus image; obtaining a training data set from the plurality of fundus image data; and repeating the following training process on the training data set: inputting the original data of a fundus image in the training data set into the fundus macular lesion region detection model under training to obtain a predicted lesion region; and confirming the accuracy of the detection result according to the predicted lesion region, and adjusting the values of the trained parameters according to the accuracy of the detection result. The embodiments of the present application can detect the fundus macular lesion region quickly and accurately and present it to doctors, and therefore have practical application and popularization value.

Description

Method, device and system for training fundus macular lesion region detection model
Technical Field
The present application relates to the technical field of medical detection, and in particular to a method, a device and a system for training a fundus macular lesion region detection model.
Background
The macular area is an important, avascular region of the fundus retina located at the posterior pole of the eye, and is mainly responsible for visual functions such as fine vision and color vision. If the macular area becomes diseased, vision usually deteriorates and dark shadows appear in front of the eye. Regular fundus examination is therefore required to guard against pathological changes in the macular region.
The inventors of the present application have found in their research that, because there are few algorithms for detecting macular region lesions, few fundus data sets for such detection, and considerable difficulty in training a detection model, fundus data currently cannot be detected accurately, quickly and in batches.
Therefore, how to provide a technical solution for efficiently detecting the fundus macular lesion region has become a technical problem that urgently needs to be solved.
Disclosure of Invention
An object of some embodiments of this application is to provide a method, device and system for training a fundus macular lesion region detection model. Through the technical solutions of some embodiments of this application, a plurality of fundus image data are used to train, verify and test the fundus macular lesion region detection model multiple times and to modify the model parameters, so as to obtain the final fundus macular lesion region detection model. Quick and accurate detection of fundus macular lesion data is thereby achieved, with high detection efficiency.
In a first aspect, some embodiments of the present application provide a method for training a fundus macular lesion region detection model, including: acquiring a plurality of fundus image data, wherein the fundus image data includes the original data and labeled region data of each fundus image; obtaining a training data set from the plurality of fundus image data; and repeating the following training process on the training data set: inputting the original data of a fundus image in the training data set into the fundus macular lesion region detection model under training to obtain a predicted lesion region; and determining the accuracy of the detection result according to the predicted lesion region, and adjusting the values of the trained parameters according to the accuracy of the detection result, wherein the accuracy is characterized by a decision factor: if it is determined that the predicted lesion region includes the labeled lesion region of the fundus image, or that the labeled lesion region of the fundus image includes the predicted lesion region, the value of the decision factor is set to a target value, the target value representing that the detection result is accurate; or, if the predicted lesion region and the labeled lesion region of the fundus image partially overlap or do not overlap, the accuracy of the detection result is determined according to the calculated value of the decision factor.
Since some embodiments of the present application redefine the decision factor, i.e. assign it directly when it is confirmed that the predicted lesion region includes the lesion region labeled on the image or that the labeled lesion region of the fundus image includes the predicted lesion region, while for predicted lesion regions that are not in an inclusion relation the value of the decision factor is obtained by the conventional calculation, missed detection of lesions can be effectively reduced. In the prior art, when the predicted lesion region and the labeled lesion region are in an inclusion relation, the directly calculated decision factor (IOU) value may be very low, so the related art would treat the predicted lesion region as a negative sample, whereas the embodiments of the present application judge this detection result to be accurate. The embodiments of the present application can therefore effectively reduce missed detection of lesions.
In some embodiments, the predicted lesion region is confirmed to include the labeled lesion region or the labeled lesion region is confirmed to include the predicted lesion region by a relationship between the positional information of the predicted lesion region and the positional information of the labeled lesion region.
In the embodiments of the present application, whether the predicted lesion region includes the labeled lesion region, or the labeled lesion region includes the predicted lesion region, is confirmed from the positional relationship between the two regions. The algorithm is simple, effectively solves the problem that other methods cannot accurately judge whether the two regions are in an inclusion relation because lesion regions labeled by different experts differ in size, and can effectively increase the speed of training the model.
In some embodiments, the confirming that the predicted lesion region includes the labeled lesion region of the fundus image, or that the labeled lesion region of the fundus image includes the predicted lesion region, comprises: confirming that the predicted lesion region includes the labeled lesion region if the distance between the center position of the predicted lesion region and the center position of the labeled lesion region is less than or equal to a set threshold and the boundary of the predicted lesion region is confirmed to be outside, or to overlap, the boundary of the labeled lesion region; or confirming that the labeled lesion region includes the predicted lesion region if the distance between the two center positions is less than or equal to the set threshold and the boundary of the labeled lesion region is confirmed to be outside, or to overlap, the boundary of the predicted lesion region.
In the embodiments of the present application, the inclusion relation is determined in a double manner, from both the distance between the center position of the predicted lesion region and the center position of the labeled lesion region and the positional relationship between the boundaries of the two regions. This further guarantees the accuracy of adjusting the model parameters during training, and can ultimately greatly reduce the probability of false or missed detection of the predicted lesion region in an input image to be detected.
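A minimal sketch of this double check is given below, assuming that both regions are represented as binary masks of the same image size and that the set threshold is given in pixels; the function and variable names are illustrative, not part of the claimed method.

```python
import numpy as np

def pred_contains_labeled(pred_mask: np.ndarray, gt_mask: np.ndarray,
                          center_threshold: float) -> bool:
    """Judge whether the predicted region includes the labeled (ground-truth) region.

    Both masks are boolean arrays of identical shape. The check combines the distance
    between the region centers with the boundary relation described above.
    """
    pred_center = np.argwhere(pred_mask).mean(axis=0)  # centroid of predicted pixels
    gt_center = np.argwhere(gt_mask).mean(axis=0)      # centroid of labeled pixels
    if np.linalg.norm(pred_center - gt_center) > center_threshold:
        return False
    # The predicted boundary lies outside (or coincides with) the labeled boundary
    # exactly when every labeled pixel is also a predicted pixel.
    return bool(np.all(pred_mask[gt_mask]))

def labeled_contains_pred(pred_mask, gt_mask, center_threshold):
    """Symmetric check: the labeled region includes the predicted region."""
    return pred_contains_labeled(gt_mask, pred_mask, center_threshold)
```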
In some embodiments, the confirming that the predicted lesion region includes the labeled lesion region of the fundus image, or that the labeled lesion region of the fundus image includes the predicted lesion region, comprises: confirming, from the boundary of the circumscribed polygon corresponding to the predicted lesion region, that it includes the boundary of the circumscribed polygon corresponding to the labeled lesion region; or confirming, from the boundary of the circumscribed polygon corresponding to the labeled lesion region, that it includes the boundary of the circumscribed polygon corresponding to the predicted lesion region.
In the embodiments of the present application, the relation between the predicted lesion region and the labeled lesion region is confirmed by comparing the boundary of the circumscribed polygon corresponding to the predicted lesion region with the boundary of the circumscribed polygon corresponding to the labeled lesion region. Because both the predicted lesion region and the labeled lesion region are irregular in shape, using circumscribed polygons makes it simpler, faster and more accurate to judge their positional relationship.
In some embodiments, the boundary circumscribing the polygon is characterized using coordinates of the vertices comprised by the boundary.
In the embodiments of the present application, by comparing the coordinate information of the circumscribed polygons of the predicted lesion region and the labeled lesion region, the positional relationship between the two regions can be located intuitively and quickly.
In some embodiments, the confirming that the predicted lesion region and the labeled lesion region of the fundus image partially overlap or do not overlap comprises: confirming that the predicted lesion region and the labeled lesion region of the fundus image partially overlap or do not overlap if the distance between the center position of the predicted lesion region and the center position of the labeled lesion region is greater than a set threshold and the boundary of the predicted lesion region is confirmed to intersect, or not to overlap, the boundary of the labeled lesion region, or if this distance is greater than the set threshold and the boundary of the labeled lesion region is confirmed to be outside, or not to overlap, the boundary of the predicted lesion region; or confirming that the predicted lesion region and the labeled lesion region of the fundus image partially overlap or do not overlap if the distance between the two center positions is less than or equal to the set threshold and the boundary of the predicted lesion region is confirmed to intersect, or not to overlap, the boundary of the labeled lesion region.
In the embodiments of the present application, from the distance between the center position of the predicted lesion region and the center position of the labeled lesion region and the relationship between their boundaries, it is inferred that the predicted lesion region and the labeled lesion region partially overlap or do not overlap. It can thus be quickly determined that the predicted lesion region does not contain the labeled lesion region, which provides a basis for adjusting the model parameters.
In some embodiments, the target value is set in a range of 0.9 to 1.
In the embodiments of the present application, when the predicted lesion region includes the labeled lesion region, the decision factor is directly set to a number close or equal to the maximum value of 1 to represent that the prediction result is accurate, which remedies the error caused by the low decision factor value that would be obtained in this case by the conventional calculation.
In some embodiments, the plurality of fundus image data is divided into the training data set, a verification data set and a test data set; after the above training process is repeated on the training data set, the method further comprises repeating the following verification process on the verification data set: inputting the original data of a fundus image included in the verification data set into the fundus macular lesion region detection model under verification to obtain a lesion region to be verified, wherein the model under verification in the first verification pass is the fundus macular lesion region detection model to be verified obtained from the training process; and confirming the precision of the detection result according to the lesion region to be verified, and adjusting the values of the parameters of the fundus macular lesion region detection model to be verified according to the precision of the detection result.
In the embodiments of the present application, the data set is divided into a training set and a verification set, the trained model is verified with the verification data set, and the model parameters are adjusted, so that the prediction accuracy of the obtained model on lesion regions is further improved.
In some embodiments, after the above verification process is repeated on the verification data set, the method further comprises repeating the following test process on the test data set: inputting the original data of a fundus image in the test data set into the fundus macular lesion region detection model under test to obtain an image to be tested containing at least one detection region, wherein the model under test in the first test pass is the fundus macular lesion region detection model to be tested obtained through the verification process; confirming an image of the detection region to be tested according to at least one confidence corresponding to the at least one detection region, wherein one confidence corresponds to one detection region; and confirming the accuracy of the detection result according to the image of the detection region to be tested, and adjusting the values of the parameters of the fundus macular lesion region detection model to be tested according to the accuracy of the detection result.
In the embodiments of the present application, the verified model is tested with the test data set, and the accuracy of the test results is then confirmed in combination with the confidences, so that the generalization capability of the model can be better evaluated and the prediction accuracy of the obtained model on lesion regions further improved.
In some embodiments, confirming the image of the detection region to be tested according to the at least one confidence corresponding to the at least one detection region comprises: confirming the image of the detection region to be tested according to a non-maximum suppression algorithm and the confidences, wherein the image of the detection region to be tested comprises a fundus macular region and one or more lesion type regions.
In the embodiments of the present application, the image of the detection region to be tested is obtained according to the non-maximum suppression algorithm and the confidences, which ensures that more reliable test results are selected, reduces the probability of false detection in subsequent model applications, and improves detection accuracy.
In a second aspect, an embodiment of the present application provides a method for detecting a fundus macular lesion region, including: obtaining, according to a fundus image to be detected and a fundus macular lesion target detection model obtained by performing the method of any embodiment of the first aspect, a fundus macular lesion target detection result comprising at least one detection region and at least one confidence corresponding to the at least one detection region; and confirming a fundus macular detection result according to a non-maximum suppression algorithm and the confidences, wherein the fundus macular detection result comprises a fundus macular region and one or more lesion type regions.
In a third aspect, an embodiment of the present application provides an apparatus for training a fundus macular lesion region detection model, including: a data acquisition module configured to acquire a plurality of fundus image data, wherein the fundus image data includes the original data and labeled region data of each fundus image; a training data set module configured to obtain a training data set from the plurality of fundus image data; and a model training module configured to repeat the following training process on the training data set: inputting the original data of a fundus image in the training data set into the fundus macular lesion region detection model under training to obtain a predicted lesion region; and determining the accuracy of the detection result according to the predicted lesion region, and adjusting the values of the trained parameters according to the accuracy of the detection result, wherein the accuracy is characterized by a decision factor: if it is determined that the predicted lesion region includes the labeled lesion region of the fundus image, or that the labeled lesion region of the fundus image includes the predicted lesion region, the value of the decision factor is set to a target value representing an accurate detection result; or, if the predicted lesion region and the labeled lesion region of the fundus image partially overlap or do not overlap, the accuracy of the detection result is determined according to the calculated value of the decision factor.
In a fourth aspect, the present embodiments provide a system comprising one or more computers and one or more storage devices storing instructions that, when executed by the one or more computers, cause the one or more computers to perform the operations of the respective methods of any of the embodiments of the first aspect.
In a fifth aspect, the present embodiments provide one or more computer storage media storing instructions that, when executed by one or more computers, cause the one or more computers to perform the operations of the respective methods of any of the embodiments in the first aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be regarded as limiting the scope; other related drawings can be obtained from these drawings by those skilled in the art without inventive effort.
Fig. 1 is a system diagram of a fundus macular lesion region detection method according to an embodiment of the present application;
FIG. 2 is a flowchart of a method for training a fundus macular lesion region detection model according to an embodiment of the present application;
FIG. 3 is a schematic diagram of the circumscribed boxes of a predicted lesion region and a ground-truth result according to an embodiment of the present application;
FIG. 4 is a flowchart of a method for training a fundus macular lesion region detection model based on an artificial neural network model according to an embodiment of the present application;
FIG. 5 is a schematic diagram of labeled fundus macular lesion data for training the artificial neural network model according to an embodiment of the present application;
FIG. 6 is a schematic diagram of labeled fundus macular non-lesion data for training the artificial neural network model according to an embodiment of the present application;
FIG. 7 is a schematic diagram of the positional relationship between a ground-truth result and a predicted lesion region according to an embodiment of the present application;
FIG. 8 is a schematic diagram of the annotation result of a test image in the test data set according to an embodiment of the present application;
FIG. 9 is a schematic diagram of the detection result of an image of a detection region to be tested according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a detection result containing regions of multiple lesion types according to an embodiment of the present application;
FIG. 11 is a schematic diagram of the lesion regions to be tested after processing within the same category according to an embodiment of the present application;
FIG. 12 is a schematic diagram of the lesion regions to be tested after processing between different categories according to an embodiment of the present application;
Fig. 13 is a block diagram of a device for training a fundus macular lesion region detection model according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
In the related art, on the one hand, there are few algorithms focusing on the detection of the fundus macular lesion region. On the other hand, the publicly available fundus data sets for fundus macular lesion detection are few and labeled data are scarce, so the detection accuracy of a fundus disease detection model obtained by training is low and it is difficult to train a model with high accuracy. It can be seen from the above that efficient detection of the fundus macular lesion region has not yet been achieved.
In view of this, some embodiments of the present application provide a method, an apparatus and a system for training a fundus macular lesion region detection model, in which a large amount of fundus data is collected and labeled by multiple experts to serve as the data set for training the fundus macular lesion region detection model, and the training parameters are modified during training, so as to improve the detection rate and accuracy of the fundus macular lesion region detection model.
The process of training a fundus macular lesion region detection model is described below by way of example.
As shown in fig. 1, the figure provides a system diagram of a fundus macular lesion region detection method to which the embodiments of the present application can be applied. The diagram includes an image capturing terminal device 100 and a server detection terminal device 200. The image capturing terminal device 100 can be used to capture fundus image data 101, and the captured fundus image data 101 is sent to the server detection terminal device 200 for processing to obtain a predicted lesion region.
In addition, it should be noted that in other embodiments of the present application, the image capturing terminal device 100 has both capturing and detection functions and can itself obtain a predicted lesion region, in which case the server detection terminal device may be omitted. The image capturing terminal device may be a PC terminal or a mobile terminal.
The server detection terminal device 200 of fig. 1 is provided with a fundus macular lesion region detection model, and it is this trained model that enables the server detection terminal device 200 to obtain a predicted lesion region. It should be noted that, unlike the related-art methods for training a fundus macular lesion region detection model, the fundus macular lesion region detection model deployed on the detection terminal device 200 in fig. 1 is trained with redefined decision factor parameters.
It will be appreciated that in order to provide the fundus macular lesion region detection model on the server with the ability to predict the lesion region, this model needs to be trained first.
The following describes an exemplary process for training a fundus macular lesion region detection model according to some embodiments of the present application with reference to fig. 2.
As shown in fig. 2, some embodiments of the present application provide a flowchart of a method of training a fundus macular lesion region detection model, the method including: S210, acquiring a plurality of fundus image data, wherein the fundus image data includes the original data and labeled region data of each fundus image; S220, obtaining a training data set from the plurality of fundus image data; S230, repeating the following training process on the training data set: inputting the original data of a fundus image in the training data set into the fundus macular lesion region detection model under training to obtain a predicted lesion region; determining the accuracy of the detection result according to the predicted lesion region, and adjusting the values of the trained parameters according to the accuracy of the detection result, wherein the accuracy is characterized by a decision factor: if it is determined that the predicted lesion region includes the labeled lesion region of the fundus image, or that the labeled lesion region of the fundus image includes the predicted lesion region, the value of the decision factor is set to a target value representing an accurate detection result; or, if the predicted lesion region and the labeled lesion region of the fundus image partially overlap or do not overlap, the accuracy of the detection result is determined according to the calculated value of the decision factor.
In some embodiments of the present application, the raw data of the plurality of fundus image data in S210 may include a plurality of fundus images with macular lesions and a plurality of fundus images without macular lesions, and the labeled region data may include the fundus macular lesion region labeled on each lesion image (i.e., the labeled lesion region) and the fundus macular region (MAC) labeled on each image. In order to balance the two types of data, the lesion data and the non-lesion data are mixed in a certain proportion, and only the MAC position is labeled on the non-lesion fundus macular images.
Note that, in some embodiments of the present application, the labeled data in S210 is obtained by collecting a large amount of fundus data and having it labeled by multiple experts. Different specialists differ to some extent in their judgment of the size, severity and classification of the same fundus maculopathy. In order to reduce the differences among the labels of different specialists, the large amount of fundus data is distributed among multiple specialists for cross-labeling, thereby obtaining high-quality labeled data and improving the accuracy of the trained model.
In some embodiments of the present application, the fundus macular lesion region detection model trained in S210 may be any segmentation network model that can output both a rectangular-box target detection result (i.e., a rectangular predicted lesion region) and a segmentation result, such as the artificial neural network Mask-RCNN model, the Cascade-RCNN model or the GCNet model. Since the shape of a macular lesion is irregular, in some embodiments of the present application the detection result may be presented as a polygonal segmentation result.
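As an illustration only (the patent does not prescribe a specific framework), such a segmentation network can be instantiated with the torchvision implementation of Mask R-CNN; the assumed three classes (background, MAC region, lesion region) and all names below are hypothetical.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_detector(num_classes: int = 3):  # background + MAC region + lesion region (assumed)
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights=None)
    # Replace the box head and mask head so the network predicts our classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model

model = build_detector()
model.eval()
with torch.no_grad():
    outputs = model([torch.rand(3, 512, 512)])  # one dummy fundus image
# outputs[0] holds 'boxes', 'labels', 'scores' and 'masks'; a polygonal lesion
# contour can be extracted from the per-instance masks.
```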
The process, referred to in S230, of confirming that the predicted lesion region includes the labeled lesion region of the fundus image is illustrated below. In some embodiments of the present application, the positional relationship between the predicted lesion region and the labeled lesion region may be confirmed directly from the boundary position information of the two regions. Alternatively, the positional relationship may be confirmed from the center position of the predicted lesion region, the center position of the labeled lesion region, and the boundary position information of the two regions. In other embodiments of the present application, the positional relationship between the predicted lesion region and the labeled lesion region may also be determined directly from the vertex coordinate information of the circumscribed polygons of the two regions.
In order to evaluate the prediction accurately and improve the accuracy of the trained model, in some embodiments of the present application, S230 may confirm that the predicted lesion region includes the labeled lesion region, or that the labeled lesion region includes the predicted lesion region, according to the relationship between the position information of the predicted lesion region and the position information of the labeled lesion region.
It should be noted that, when the predicted lesion region includes the labeled lesion region or the labeled lesion region is confirmed to include the predicted lesion region, the sizes and positions of the two regions are substantially the same; the predicted lesion region may completely contain the labeled lesion region, or the labeled lesion region may completely contain the predicted lesion region.
In some other embodiments of the present application, the confirmation method of S230 may include: confirming that the predicted lesion region includes the labeled lesion region if the distance between the center position of the predicted lesion region and the center position of the labeled lesion region is less than or equal to a set threshold and the boundary of the predicted lesion region is confirmed to be outside, or to overlap, the boundary of the labeled lesion region; or confirming that the labeled lesion region includes the predicted lesion region if the distance between the two center positions is less than or equal to the set threshold and the boundary of the labeled lesion region is confirmed to be outside, or to overlap, the boundary of the predicted lesion region.
In some other embodiments of the present application, the confirmation method of S230 may further include: confirming, from the boundary of the circumscribed polygon corresponding to the predicted lesion region, that it includes the boundary of the circumscribed polygon corresponding to the labeled lesion region; or confirming, from the boundary of the circumscribed polygon corresponding to the labeled lesion region, that it includes the boundary of the circumscribed polygon corresponding to the predicted lesion region. The boundary of the circumscribed polygon is represented by the coordinates of the vertices included in the boundary.
For example, in some embodiments of the present application, different specialists label regions of different sizes for the same fundus lesion, but the positions of the lesion regions are consistent. It is determined that the predicted lesion region includes the labeled lesion region (i.e., the ground-truth result, GT for short) when the distance between the center position of the predicted lesion region and the center position of the labeled lesion region is less than or equal to a set threshold and the boundary of the predicted lesion region is confirmed to lie outside the boundary of the labeled lesion region; otherwise, the predicted lesion region and the labeled lesion region do not belong to an inclusion relation (i.e., they may partially overlap or not overlap). In addition, the predicted lesion region and the labeled lesion region can both be displayed in the form of a circumscribed polygon, and both results contain the position coordinate information of the labeled box, so the inclusion relation can be determined by comparing this coordinate information. For example, as shown in fig. 3, the coordinates of the opposite corners of the rectangle circumscribing the predicted lesion region (the solid irregular region) are bbox1(x1, y1) and bbox1(x2, y2), and the coordinates of the opposite corners of the rectangle circumscribing the ground-truth result (the dotted irregular region) are bbox2(s1, t1) and bbox2(s2, t2); according to this information, if the circumscribed rectangle of the predicted lesion region contains the circumscribed rectangle of the labeled lesion region, the predicted lesion region is determined to include the ground-truth result. It should be understood that the acquired diagonal coordinate information is not limited to that shown in the figure; at least two pieces of coordinate information at other positions may also be acquired to confirm the positional relationship between the predicted lesion region and the ground-truth result.
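A minimal sketch of the coordinate comparison of fig. 3, assuming each circumscribed rectangle is given by its top-left and bottom-right corners; the function name and the example coordinates are illustrative only.

```python
def rect_contains(bbox1, bbox2):
    """Return True if bbox1 contains bbox2.

    Each box is (x_min, y_min, x_max, y_max); bbox1 is the rectangle circumscribing
    the predicted lesion region and bbox2 the one circumscribing the ground truth.
    """
    x1, y1, x2, y2 = bbox1
    s1, t1, s2, t2 = bbox2
    return x1 <= s1 and y1 <= t1 and x2 >= s2 and y2 >= t2

# The predicted box fully encloses the labeled box, so the prediction
# is treated as including the ground-truth result.
print(rect_contains((10, 10, 200, 180), (40, 50, 150, 160)))  # True
```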
In some embodiments of the present application, the confirmation in S230 that the predicted lesion region and the labeled lesion region of the fundus image partially overlap or do not overlap exemplarily includes: confirming that the predicted lesion region and the labeled lesion region of the fundus image partially overlap or do not overlap if the distance between the center position of the predicted lesion region and the center position of the labeled lesion region is greater than a set threshold and the boundary of the predicted lesion region is confirmed to intersect, or not to overlap, the boundary of the labeled lesion region, or if this distance is greater than the set threshold and the boundary of the labeled lesion region is confirmed to be outside, or not to overlap, the boundary of the predicted lesion region; or confirming that the predicted lesion region and the labeled lesion region of the fundus image partially overlap or do not overlap if the distance between the two center positions is less than or equal to the set threshold and the boundary of the predicted lesion region is confirmed to intersect, or not to overlap, the boundary of the labeled lesion region.
In some embodiments of the present application, the target value referred to in S230 may be set to any value from 0.9 to 1, for example, in some embodiments, the target value may be directly set to 1, or the target value may be set to 0.9, and the like.
In some embodiments of the present application, when it is determined that the predicted lesion region includes the labeled lesion region, it is readily seen that the prediction result output by the model under training (i.e., the predicted lesion region) is consistent with the actual value (i.e., the labeled lesion region). In order to avoid this case being judged as a detection error, the embodiments of the present application directly set the decision factor to a larger value (i.e., directly assign the determined target value to the decision factor for this case). It can be understood that a higher target value indicates higher accuracy; for example, the target value may be set to 1 to represent that the prediction accuracy of the predicted lesion region is high. When it is confirmed that the predicted lesion region and the labeled lesion region partially overlap or do not overlap, the accuracy of the detection result is determined according to the calculated value of the decision factor, i.e., the larger the value, the higher the accuracy. As an example, the decision factor may be represented by the Intersection over Union (IOU). The IOU takes values from 0 to 1: an IOU of 0 means that the predicted lesion region does not overlap the labeled lesion region at all and the prediction is wrong, while an IOU of 1 means that the predicted lesion region coincides with the labeled lesion region and the prediction is highly accurate. A larger value thus indicates a larger overlap between the predicted and labeled lesion regions, and hence a better detection effect of the model.
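A minimal sketch of the redefined decision factor, assuming axis-aligned circumscribed rectangles and a target value of 1; `rect_contains` is the hypothetical helper sketched above, and the conventional IOU is used for the non-inclusion cases.

```python
def iou(box_a, box_b):
    """Conventional Intersection over Union of two (x_min, y_min, x_max, y_max) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def decision_factor(pred_box, gt_box, target_value=1.0):
    """Redefined decision factor: inclusion in either direction is assigned the target
    value; otherwise the conventional IOU is calculated."""
    if rect_contains(pred_box, gt_box) or rect_contains(gt_box, pred_box):
        return target_value
    return iou(pred_box, gt_box)
```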
In order to guarantee the detection effect of the model during training and improve the detection accuracy of the final model, in some embodiments of the present application the plurality of fundus image data is divided into the training data set, a verification data set and a test data set; after S230, the method further comprises S240 (not shown in the figure): repeating the following verification process on the verification data set: inputting the original data of a fundus image included in the verification data set into the fundus macular lesion region detection model under verification to obtain a lesion region to be verified, wherein the model under verification in the first verification pass is the fundus macular lesion region detection model to be verified obtained from the training process; and confirming the precision of the detection result according to the lesion region to be verified, and adjusting the values of the parameters of the fundus macular lesion region detection model to be verified according to the precision of the detection result.
In some embodiments of the present application, the plurality of fundus image data may be divided according to a set ratio. For example, the plurality of fundus image data are divided into a training data set, a verification data set and a test data set in the ratio 7:2:1, and the three data sets are converted into format files suitable for training, verifying and testing the fundus macular lesion region detection model. The fundus macular lesion region detection model to be verified is then verified with the verification data set, and when the precision of the model's detection results no longer improves, or the preset number of training iterations is reached, the training and verification of the model are complete.
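A minimal sketch of the 7:2:1 split, assuming the fundus image records are held in a Python list; the shuffling and the exact record structure are illustrative only.

```python
import random

def split_dataset(records, ratios=(0.7, 0.2, 0.1), seed=0):
    """Split fundus image records into training, verification and test sets (7:2:1)."""
    records = list(records)
    random.Random(seed).shuffle(records)
    n_train = int(len(records) * ratios[0])
    n_val = int(len(records) * ratios[1])
    train = records[:n_train]
    val = records[n_train:n_train + n_val]
    test = records[n_train + n_val:]
    return train, val, test
```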
Before practical application, the detection effect of the model needs to be verified. In some embodiments of the present application, after S240, the method further comprises S250 (not shown): repeating the following test process on the test data set: inputting the original data of a fundus image in the test data set into the fundus macular lesion region detection model under test to obtain an image to be tested containing at least one detection region, wherein the model under test in the first test pass is the fundus macular lesion region detection model to be tested obtained through the verification process; confirming the image of the detection region to be tested according to at least one confidence corresponding to the at least one detection region, wherein one confidence corresponds to one detection region; and confirming the accuracy of the detection result according to the image of the detection region to be tested, and adjusting the values of the parameters of the fundus macular lesion region detection model to be tested according to the accuracy of the detection result.
In some embodiments of the present application, when the test data set is used in S250 to test the fundus macular lesion region detection model under test, after the raw data of a fundus image is input, a detection result to be tested containing at least one detection region and at least one confidence is obtained. Analysis of the detection results shows that inclusion or partial overlap may exist between lesions of the same category or between lesions of different categories; for this reason, the embodiments of the present application remove redundant detection results with a post-processing technique (for example, a non-maximum suppression algorithm).
In some embodiments of the present application, in S250, the image of the detection region to be tested may be confirmed according to a non-maximum suppression algorithm and the confidences, wherein the image of the detection region to be tested includes a fundus macular region and one or more lesion type regions.
For example, in some embodiments of the present application, in order to remove repeated detection results reasonably, the confidences corresponding to the lesion type regions may be sorted in descending or ascending order, and unreasonable results are then removed with a Non-Maximum Suppression (NMS) algorithm to obtain the image of the detection region to be tested. For example, 10 detection results are obtained in the present embodiment, including lesions of the same category (e.g., 6 detection results) and lesions of different categories (e.g., 4 detection results). Lesions of the same category are processed with the NMS algorithm first: the 6 confidences corresponding to the 6 same-category detection results are sorted in descending order, and the NMS algorithm then calculates the overlap between the detection result with the highest confidence in the queue and the remaining 5 detection results, yielding 5 overlap values. Each of the 5 overlap values is compared with a set threshold: if the overlap is greater than the threshold, the corresponding detection result of that lesion type is rejected; if it is less than or equal to the threshold, the detection result is retained. The detection results of lesions of different categories are then processed with the NMS algorithm in the same way as for the same category, which is not repeated here to avoid repetition.
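A minimal sketch of the per-category suppression described above, assuming rectangular detections and using the conventional IOU (e.g., the `iou` helper sketched earlier) as the overlap measure; all names are illustrative.

```python
def nms(detections, overlap_threshold):
    """detections: list of dicts with 'box' = (x_min, y_min, x_max, y_max) and 'score'.
    Returns the detections kept after suppressing overlapping, lower-confidence ones."""
    ordered = sorted(detections, key=lambda d: d["score"], reverse=True)  # descending confidence
    kept = []
    while ordered:
        best = ordered.pop(0)  # highest-confidence detection remaining in the queue
        kept.append(best)
        # Reject remaining detections whose overlap with 'best' exceeds the threshold.
        ordered = [d for d in ordered if iou(best["box"], d["box"]) <= overlap_threshold]
    return kept

# Detections of the same lesion category are suppressed first; the surviving boxes of
# different categories can then be passed through the same routine with its own threshold.
```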
In addition, in some embodiments of the present application, a target detection post-processing algorithm with the same function as the NMS algorithm (e.g., the soft-NMS algorithm) may also be used to reject unreasonable detection results.
The process of fig. 2 for training the fundus macular lesion region detection model, based on an artificial neural network model, is specifically described below with reference to fig. 4.
Referring to fig. 4, fig. 4 is a flowchart of a method for training a fundus macular lesion region detection model based on an artificial neural network model according to an embodiment of the present application.
The above process is exemplarily set forth below.
First, data and labels are obtained.
A plurality of fundus image data is acquired, wherein the fundus image data includes the raw data and labeled region data of each fundus image.
For example, a large amount of fundus data is collected and cross-labeled by multiple experts, yielding more accurate labeling results. In order to balance the various types of data, the ratio of lesion data to non-lesion data in the data set is 1:1. The labeled data are shown in fig. 5 and fig. 6: fig. 5 shows fundus macular lesion data, on which the MAC region and the lesion region are labeled, and fig. 6 shows fundus macular non-lesion data, on which only the MAC position is labeled.
Second, fundus data annotation files are generated.
The plurality of fundus image data in the first step is segmented into a training data set, a verification data set, and a test data set.
For example, all fundus images are divided into a training set, a verification set and a test set in the ratio 7:2:1, and an annotation file format suitable for the model is generated from the doctors' labeling results.
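The patent does not specify the annotation file format; purely as an illustration, a COCO-style entry for one labeled fundus image might look like the following, where every field value is hypothetical.

```python
annotation_file = {
    "images": [
        {"id": 1, "file_name": "fundus_0001.jpg", "width": 1956, "height": 1934}
    ],
    "categories": [
        {"id": 1, "name": "MAC"},     # fundus macular region
        {"id": 2, "name": "lesion"},  # fundus macular lesion region
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 2,
            "bbox": [512, 430, 180, 160],  # x, y, width, height of the circumscribed box
            "segmentation": [[512, 430, 690, 435, 685, 590, 515, 585]],  # polygon vertices
            "iscrowd": 0,
        }
    ],
}
```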
Third, the model is trained.
The following training process is repeated on the training data set: inputting the original data of a fundus image in the training data set into the fundus macular lesion region detection model under training to obtain a predicted lesion region.
For example, an artificial neural network model is trained with the training set. The fundus data in the training set serve as the input of the artificial neural network model, and the detection result of the predicted lesion region serves as its output. The center position and boundary of the predicted lesion region are compared with the center position and boundary of the ground-truth result (i.e., the labeled lesion region): if the distance between the center position of the predicted lesion region and the center position of the ground-truth result is less than or equal to a set threshold and the boundary of the predicted lesion region is outside, or overlaps, the boundary of the ground-truth result, it is confirmed that the predicted lesion region includes the ground-truth result, the prediction is accurate, and the value of the IOU (as an example of the decision factor) is set to 1. If the distance between the two center positions is greater than the set threshold and the boundary of the predicted lesion region intersects, or does not overlap, the boundary of the labeled lesion region, or if this distance is greater than the set threshold and the boundary of the labeled lesion region is confirmed to be outside, or not to overlap, the boundary of the predicted lesion region, or if this distance is less than or equal to the set threshold and the two boundaries intersect or do not overlap, it is confirmed that the predicted lesion region and the labeled lesion region of the fundus image partially overlap or do not overlap; in these cases, the value of the IOU is calculated in the conventional way, and a higher value indicates a more accurate predicted lesion region. As shown in fig. 7, the ground-truth result (indicated by the white line in the figure) falls within the predicted lesion region (indicated by the black line in the figure) and their center positions are substantially consistent; the IOU value is set to 1 in this case to represent that the prediction result is accurate.
In addition, as another example of the present application, the positional relationship between the predicted lesion region and the ground-truth result is confirmed from the vertex coordinate information of the circumscribed polygon of the predicted lesion region and the vertex coordinate information of the circumscribed polygon of the ground-truth result. Alternatively, the distance between the center position of the predicted lesion region output by the artificial neural network model under training and the center position of the ground-truth result can be compared with a set threshold, together with the boundary positions, to confirm that the predicted lesion region includes the ground-truth result. When this distance is greater than the set threshold and the boundary of the predicted lesion region intersects, or does not overlap, the boundary of the labeled lesion region, it is confirmed that the predicted lesion region and the ground-truth result partially overlap or do not overlap; the same conclusion is drawn when the distance is greater than the set threshold and the boundary of the labeled lesion region is confirmed to be outside the boundary of the predicted lesion region, or when the distance is less than or equal to the set threshold and the boundary of the predicted lesion region intersects, or does not overlap, the boundary of the labeled lesion region.
Fourth, the model is verified.
After the training process of the third step has been repeated on the training data set, the method further comprises repeating the following verification process on the verification data set: the fundus macular lesion region detection model under verification is run on the verification data set to obtain the lesion regions to be verified.
For example, the artificial neural network model to be verified is verified with the verification set: the fundus data in the verification set are input into the model to be verified, and the lesion regions to be verified output by the model are obtained. The lesion regions to be verified are compared with the labeled lesion region data in the verification set to obtain the precision of the lesion regions to be verified. When the precision of the lesion regions to be verified is low, the model parameters need to be adjusted. When the precision of the lesion regions to be verified no longer increases, verification is complete, and the artificial neural network model to be tested is obtained and saved.
Fifth, the model is tested.
The following test process is repeated, using the test data set, on the artificial neural network model to be tested obtained through the verification in the fourth step.
For example, a fundus image from the test set is input into the artificial neural network model to be tested, and the output result of the model is obtained, the output comprising an image to be tested with at least one detection region and at least one confidence, where one detection region corresponds to one confidence. As shown in fig. 8 and fig. 9, fig. 8 shows the labeling result of a test image in the test data set, and fig. 9 is a schematic diagram of the resulting image of the detection region to be tested, which includes a MAC region with a confidence of 0.38 and a lesion region HBBBB with a confidence of 0.34.
It can be understood that, since the non-lesion data in the test set are labeled with the MAC position, the test result may contain both lesion region detections and the MAC position; if the MAC position is detected with a sufficiently high confidence, it is shown in the detection result.
And sixthly, analyzing the test result and post-processing.
The image to be tested obtained in the fifth step, containing at least one detection region and at least one confidence level, is analyzed, and redundant results are removed by a post-processing technique.
For example, in the acquired images to be tested that contain regions of various lesion types, it is found that detections of the same lesion type, or of different lesion types, may include or intersect one another. As shown in fig. 10, which displays a detection result containing regions of various lesion types, labeled regions of several lesion types are present in the same image.
To reasonably remove duplicate test results, as an example of this application, the non-maximum suppression (NMS) algorithm is used to cull unreasonable results from fig. 10. For lesions of the same category, the NMS algorithm removes test results in fig. 10 whose positions largely coincide and whose confidences are below the set confidence threshold, leaving the detection regions with higher confidence; the result is shown in fig. 11. Then, for lesions of different types in fig. 11, results whose position coincidence exceeds the set threshold and whose confidence is below the set confidence threshold are removed by the NMS algorithm; the result is shown in fig. 12.
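A minimal, self-contained version of the NMS step described here is sketched below (pure Python, axis-aligned boxes assumed; the IoU and confidence thresholds are illustrative values, not values fixed by the application):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(detections, iou_threshold=0.5, confidence_threshold=0.3):
    """Suppress overlapping detections, keeping the most confident ones.

    detections -- list of dicts with keys 'box', 'label', 'confidence'
    """
    kept = []
    # Consider detections from most to least confident.
    for det in sorted(detections, key=lambda d: d["confidence"], reverse=True):
        if det["confidence"] < confidence_threshold:
            continue
        # Drop a detection whose box largely coincides with an already kept one.
        if all(iou(det["box"], k["box"]) < iou_threshold for k in kept):
            kept.append(det)
    return kept
```

In the two-pass scheme described above, such a routine would be applied first within each lesion category and then across categories with a stricter position-coincidence threshold.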
In addition, for detections of the macular region position alone, besides applying the NMS algorithm, the MAC region with the highest confidence is selected as the final detection result, so that one fundus image contains only one MAC region.
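This additional rule could be expressed as follows (a sketch only; `kept` is the NMS output from the previous example, and "MAC" is assumed to be the label of the macular region):

```python
def keep_single_mac(kept):
    """Keep at most one MAC detection: the one with the highest confidence."""
    mac = [d for d in kept if d["label"] == "MAC"]
    others = [d for d in kept if d["label"] != "MAC"]
    if mac:
        others.append(max(mac, key=lambda d: d["confidence"]))
    return others
```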
And seventhly, analyzing the processing result and adjusting the model parameters.
The accuracy of the image of the detection region to be tested is analyzed, and the values of the parameters of the fundus macular lesion region detection model to be tested are adjusted accordingly. For example, associated parameters of the model (e.g., the learning rate) can be fine-tuned according to the accuracy to improve the model's performance in application.
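Where the model is implemented in PyTorch, one common way to fine-tune the learning rate according to a measured accuracy is the standard ReduceLROnPlateau scheduler; the snippet below is only an illustration of that idea, with a stand-in network, and is not part of the application:

```python
import torch
from torch import nn

def make_plateau_scheduler(model: nn.Module, base_lr: float = 1e-3):
    """Optimizer plus a scheduler that lowers the learning rate when accuracy plateaus."""
    optimizer = torch.optim.SGD(model.parameters(), lr=base_lr)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="max", factor=0.1, patience=1)
    return optimizer, scheduler

# Usage sketch: after each round of testing, feed the measured accuracy to the scheduler.
model = nn.Linear(4, 2)                      # stand-in for the detection network
optimizer, scheduler = make_plateau_scheduler(model)
for accuracy in [0.60, 0.62, 0.62, 0.61]:    # illustrative accuracies from successive tests
    scheduler.step(accuracy)                 # lowers the lr once accuracy stops improving
print(optimizer.param_groups[0]["lr"])       # reduced from 1e-3 to 1e-4 in this toy run
```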
In the eighth step, the target artificial neural network model (one specific example of a fundus macular lesion region detection model) is obtained.
In addition, some embodiments of the present application further provide a method for detecting a fundus macular lesion region. From a fundus image to be detected and a fundus macular lesion target detection model obtained by the method of any of the embodiments of fig. 2, a fundus macular lesion target detection result is obtained that contains at least one detection region and at least one confidence corresponding to the at least one detection region; a fundus macular detection result, comprising a fundus macular region and one or more lesion type regions, is then confirmed according to a non-maximum suppression algorithm and the confidences. In this way the fundus macular lesion detection result can be obtained quickly and accurately, which is convenient for a doctor to review or consult, and batches of fundus data can be detected efficiently and accurately.
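Pulling the pieces together, one detection call of the kind described here might look like the following sketch (the model object and label names are hypothetical; `nms` and `keep_single_mac` are the helper sketches given earlier):

```python
def detect_macular_lesions(model, fundus_image,
                           iou_threshold=0.5, confidence_threshold=0.3):
    """Return the final fundus macular detection result for one image."""
    detections = model.predict(fundus_image)   # list of {'box', 'label', 'confidence'}
    kept = nms(detections, iou_threshold, confidence_threshold)
    return keep_single_mac(kept)               # one MAC region plus lesion-type regions
```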
Referring to fig. 13, fig. 13 is a block diagram of a device for training a fundus macular lesion region detection model according to an embodiment of the present application. It should be understood that the device corresponds to the method embodiment of fig. 2 and can perform the steps of that embodiment; for its specific functions, refer to the description above, and detailed description is omitted here where appropriate to avoid repetition.
The device for training a fundus macular lesion region detection model of fig. 13 includes at least one software functional module that can be stored in a memory as software or firmware or solidified in the device, and comprises: a data acquisition module S1310 configured to acquire a plurality of fundus image data, wherein the fundus image data includes the original data and labeled region data of each fundus image; a training data set module S1320 configured to obtain a training data set from the plurality of fundus image data; and a model training module S1330 configured to repeat the following training process on the training data set: input the original data of a fundus image in the training data set into the fundus macular lesion region detection model in training to obtain a predicted lesion region; determine the accuracy of the detection result from the predicted lesion region, and adjust the values of the parameters being trained according to that accuracy, wherein the accuracy is characterized by a decision factor: if the predicted lesion region is confirmed to include the labeled lesion region of the fundus image, or the labeled lesion region is confirmed to include the predicted lesion region, the decision factor is set to a target value representing an accurate detection result; otherwise, if the predicted lesion region and the labeled lesion region partially overlap or do not overlap, the accuracy of the detection result is determined from the calculated value of the decision factor.
Some embodiments of the present application also provide a system comprising one or more computers and one or more storage devices storing instructions that, when executed by the one or more computers, cause the one or more computers to perform the operations of the respective methods of any of the embodiments of fig. 2.
Some embodiments of the present application also provide one or more computer storage media storing instructions that, when executed by one or more computers, cause the one or more computers to perform the operations of the respective methods in any of the embodiments of fig. 2.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

Claims (14)

1. A method of training a fundus macular degeneration region detection model, the method comprising:
acquiring a plurality of fundus image data, wherein the fundus image data includes: original data and labeled area data of each fundus image;
obtaining a training dataset from the plurality of fundus image data;
repeating the following training process according to the training data set:
inputting the original data of the fundus image in the training data set into a fundus macular lesion region detection model in training to obtain a predicted lesion region;
determining the accuracy of a detection result according to the predicted lesion region, and adjusting the values of the parameters being trained according to the accuracy of the detection result, wherein the accuracy is characterized by a decision factor: if the predicted lesion region is confirmed to include the labeled lesion region of the fundus image, or the labeled lesion region of the fundus image is confirmed to include the predicted lesion region, the value of the decision factor is set to a target value, the target value representing that the detection result is accurate; or, if the predicted lesion region and the labeled lesion region of the fundus image partially overlap or do not overlap, the accuracy of the detection result is determined according to the calculated value of the decision factor.
2. The method of claim 1, wherein the predicted lesion region is confirmed to include the labeled lesion region or the labeled lesion region is confirmed to include the predicted lesion region by a relationship between the positional information of the predicted lesion region and the positional information of the labeled lesion region.
3. The method of any of claims 1-2, wherein the confirming that the predicted lesion region includes the labeled lesion region of the fundus image, or that the labeled lesion region of the fundus image includes the predicted lesion region, comprises:
confirming that the predicted lesion region includes the labeled lesion region if the distance between the center position of the predicted lesion region and the center position of the labeled lesion region is less than or equal to a set threshold and the boundary of the predicted lesion region is confirmed to lie outside, or coincide with, the boundary of the labeled lesion region; or confirming that the labeled lesion region includes the predicted lesion region if the distance between the center position of the predicted lesion region and the center position of the labeled lesion region is less than or equal to the set threshold and the boundary of the labeled lesion region is confirmed to lie outside, or coincide with, the boundary of the predicted lesion region.
4. The method of claim 1, wherein the confirming that the predicted lesion region includes the labeled lesion region of the fundus image, or that the labeled lesion region of the fundus image includes the predicted lesion region, comprises:
confirming, according to the boundary of the circumscribed polygon corresponding to the predicted lesion region, that it includes the boundary of the circumscribed polygon corresponding to the labeled lesion region; or confirming, according to the boundary of the circumscribed polygon corresponding to the labeled lesion region, that it includes the boundary of the circumscribed polygon corresponding to the predicted lesion region.
5. The method of claim 4, wherein the boundary of the circumscribed polygon is characterized by the coordinates of the vertices comprised by the boundary.
6. The method of claim 1, wherein the confirming that the predicted lesion region and the labeled lesion region for the fundus image partially overlap or do not overlap comprises:
confirming that the predicted lesion region and the labeled lesion region of the fundus image partially overlap or do not overlap if the distance between the center position of the predicted lesion region and the center position of the labeled lesion region is greater than a set threshold and the boundary of the predicted lesion region and the boundary of the labeled lesion region are confirmed to intersect or not overlap; or confirming that the predicted lesion region and the labeled lesion region of the fundus image partially overlap or do not overlap if the distance between the center position of the predicted lesion region and the center position of the labeled lesion region is less than or equal to the set threshold and the boundary of the predicted lesion region and the boundary of the labeled lesion region are confirmed to intersect or not overlap.
7. The method according to claim 1, wherein the target value is set in a range of 0.9 to 1.
8. The method of claim 1, wherein the plurality of fundus image data is divided into the training dataset, the verification dataset, and the test dataset;
wherein,
after said repeating the following training process from said training data set, said method further comprises:
repeating the following verification process according to the verification data set:
inputting original data of the fundus image included in the verification data set into a fundus macular degeneration area detection model in verification to obtain a lesion area to be verified, wherein the fundus macular degeneration area detection model in verification obtained in the first verification process is the fundus macular degeneration area detection model to be verified obtained in the training process;
and confirming the precision of a detection result according to the lesion area to be verified, and adjusting the value of the detection model parameter of the fundus macular lesion area to be verified according to the precision of the detection result.
9. The method of claim 8, wherein after said repeating the following verification process from said verification data set, said method further comprises:
repeating the following test procedure according to the test data set:
inputting the original data of the fundus image in the test data set into a fundus macular degeneration area detection model in the test to obtain an image to be tested containing at least one detection area, wherein the corresponding fundus macular degeneration area detection model in the test is the fundus macular degeneration area detection model to be tested obtained through the verification process when the test process is executed for the first time;
confirming an image of a detection region to be tested according to at least one confidence coefficient corresponding to the at least one detection region, wherein one confidence coefficient corresponds to one detection region;
and confirming the accuracy of a detection result according to the image of the detection area to be tested, and adjusting the value of the detection model parameter of the fundus macular degeneration area to be tested according to the accuracy of the detection result.
10. The method of claim 9, wherein validating the image of the detection region to be tested according to the at least one confidence level corresponding to the at least one detection region comprises:
and confirming a detection area image to be tested according to a non-maximum suppression algorithm and the confidence coefficient, wherein the detection area image to be tested comprises a fundus macular region and one or more lesion type regions.
11. A method for detecting a macular degeneration area of the fundus of the eye, comprising:
acquiring a fundus macular degeneration target detection result containing at least one detection area and at least one confidence corresponding to the at least one detection area according to a fundus image to be detected and a fundus macular degeneration target detection model obtained by the method according to any one of claims 1 to 10;
and confirming a fundus macular detection result according to a non-maximum value inhibition algorithm and the confidence coefficient, wherein the fundus macular detection result comprises a fundus macular region and one or more lesion type regions.
12. An apparatus for training a fundus macular degeneration region detection model, the apparatus comprising:
a data acquisition module configured to acquire a plurality of fundus image data, wherein the fundus image data includes: original data and labeled area data of each fundus image;
a training data set module configured to obtain a training data set from the plurality of fundus image data;
a model training module configured to repeat the following training process on the training data set: inputting the original data of the fundus image in the training data set into a fundus macular lesion region detection model in training to obtain a predicted lesion region; determining the accuracy of a detection result according to the predicted lesion region, and adjusting the values of the parameters being trained according to the accuracy of the detection result, wherein the accuracy is characterized by a decision factor: if the predicted lesion region is confirmed to include the labeled lesion region of the fundus image, or the labeled lesion region of the fundus image is confirmed to include the predicted lesion region, the value of the decision factor is set to a target value, the target value representing that the detection result is accurate; or, if the predicted lesion region and the labeled lesion region of the fundus image partially overlap or do not overlap, the accuracy of the detection result is determined according to the calculated value of the decision factor.
13. A system comprising one or more computers and one or more storage devices storing instructions that, when executed by the one or more computers, cause the one or more computers to perform operations of the respective methods of any of claims 1-11.
14. One or more computer storage media storing instructions that, when executed by one or more computers, cause the one or more computers to perform operations of the respective methods of any of claims 1-11.
CN202111457655.8A 2021-12-02 2021-12-02 Method, device and system for training fundus macular lesion region detection model Pending CN114119588A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111457655.8A CN114119588A (en) 2021-12-02 2021-12-02 Method, device and system for training fundus macular lesion region detection model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111457655.8A CN114119588A (en) 2021-12-02 2021-12-02 Method, device and system for training fundus macular lesion region detection model

Publications (1)

Publication Number Publication Date
CN114119588A true CN114119588A (en) 2022-03-01

Family

ID=80369873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111457655.8A Pending CN114119588A (en) 2021-12-02 2021-12-02 Method, device and system for training fundus macular lesion region detection model

Country Status (1)

Country Link
CN (1) CN114119588A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020147263A1 (en) * 2019-01-18 2020-07-23 平安科技(深圳)有限公司 Eye fundus image quality evaluation method, device and storage medium
CN110009090A (en) * 2019-04-02 2019-07-12 北京市商汤科技开发有限公司 Neural metwork training and image processing method and device
WO2020211530A1 (en) * 2019-04-19 2020-10-22 京东方科技集团股份有限公司 Model training method and apparatus for detection on fundus image, method and apparatus for detection on fundus image, computer device, and medium
WO2020259209A1 (en) * 2019-06-26 2020-12-30 平安科技(深圳)有限公司 Fundus image recognition method, apparatus and device, and storage medium
WO2020215672A1 (en) * 2019-08-05 2020-10-29 平安科技(深圳)有限公司 Method, apparatus, and device for detecting and locating lesion in medical image, and storage medium
CN110503636A (en) * 2019-08-06 2019-11-26 腾讯医疗健康(深圳)有限公司 Parameter regulation means, lesion prediction technique, parameter adjustment controls and electronic equipment
CN110738637A (en) * 2019-09-19 2020-01-31 华中科技大学 Automatic classification method and system for breast cancer pathological sections
CN113033582A (en) * 2019-12-09 2021-06-25 杭州海康威视数字技术股份有限公司 Model training method, feature extraction method and device
CN111179247A (en) * 2019-12-27 2020-05-19 上海商汤智能科技有限公司 Three-dimensional target detection method, training method of model thereof, and related device and equipment
WO2021128825A1 (en) * 2019-12-27 2021-07-01 上海商汤智能科技有限公司 Three-dimensional target detection method, method and device for training three-dimensional target detection model, apparatus, and storage medium
CN112837325A (en) * 2021-01-26 2021-05-25 南京英沃夫科技有限公司 Medical image processing method, device, electronic equipment and medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114764812A (en) * 2022-03-14 2022-07-19 什维新智医疗科技(上海)有限公司 Focal region segmentation device

Similar Documents

Publication Publication Date Title
US11922626B2 (en) Systems and methods for automatic detection and quantification of pathology using dynamic feature classification
CN111325739B (en) Method and device for detecting lung focus and training method of image detection model
US20220198230A1 (en) Auxiliary detection method and image recognition method for rib fractures based on deep learning
KR20200095504A (en) 3D medical image analysis method and system for identifying vertebral fractures
CN111753643B (en) Character gesture recognition method, character gesture recognition device, computer device and storage medium
CN109697716B (en) Identification method and equipment of cyan eye image and screening system
CN115063425B (en) Reading knowledge graph-based structured inspection finding generation method and system
US11410300B2 (en) Defect inspection device, defect inspection method, and storage medium
CN114240978B (en) Cell edge segmentation method and device based on adaptive morphology
CN112884782A (en) Biological object segmentation method, apparatus, computer device and storage medium
CN112381762A (en) CT rib fracture auxiliary diagnosis system based on deep learning algorithm
CN115222713A (en) Method and device for calculating coronary artery calcium score and storage medium
CN116188879A (en) Image classification and image classification model training method, device, equipment and medium
CN114037868B (en) Image recognition model generation method and device
CN115187566A (en) Intracranial aneurysm detection method and device based on MRA image
CN114757908A (en) Image processing method, device and equipment based on CT image and storage medium
CN114119588A (en) Method, device and system for training fundus macular lesion region detection model
CN111401102A (en) Deep learning model training method and device, electronic equipment and storage medium
CN111612749B (en) Focus detection method and device based on lung image
CN113537407B (en) Image data evaluation processing method and device based on machine learning
Thammarach et al. AI chest 4 all
CN109671091B (en) Non-calcified plaque detection method and non-calcified plaque detection equipment
CN114283114A (en) Image processing method, device, equipment and storage medium
CN113052799A (en) Osteosarcoma and osteochondroma prediction method based on Mask RCNN network
CN111369532A (en) Method and device for processing mammary gland X-ray image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination