CN112801940A - Model evaluation method, device, equipment and medium - Google Patents


Info

Publication number
CN112801940A
CN112801940A (application CN202011626216.0A)
Authority
CN
China
Prior art keywords
model
prediction result
result
image data
training
Prior art date
Legal status
Pending
Application number
CN202011626216.0A
Other languages
Chinese (zh)
Inventor
刘应龙
Current Assignee
Shenzhen United Imaging Research Institute of Innovative Medical Equipment
Original Assignee
Shenzhen United Imaging Research Institute of Innovative Medical Equipment
Priority date
Application filed by Shenzhen United Imaging Research Institute of Innovative Medical Equipment
Priority to CN202011626216.0A
Publication of CN112801940A
Priority claimed by US17/559,473 (published as US20220207742A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/2193Validation; Performance evaluation; Active pattern learning techniques based on specific statistical tests
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Abstract

The embodiments of the invention disclose a model evaluation method, apparatus, device, and medium. The method comprises: acquiring first image data and inputting it into a trained segmentation model to obtain a first prediction result, where the trained segmentation model is obtained by training on a sample set of image data and annotation data; and inputting the first prediction result into a regression model to obtain an evaluation result for the trained segmentation model, where the regression model computes the similarity between the distribution of the prediction result and the distribution of the annotation data. The technical scheme of the embodiments avoids the subjectivity that affects evaluation of a segmentation model by medical experts, reduces labor cost, and improves the accuracy of model evaluation.

Description

Model evaluation method, device, equipment and medium
Technical Field
The embodiments of the invention relate to image processing technology, and in particular to a model evaluation method, apparatus, device, and medium.
Background
In recent decades, artificial intelligence has developed rapidly in the medical field, largely thanks to advances in machine learning, which has become a new engine of innovation in medicine. Unlike traditional machine learning techniques, whose capability is largely limited by their shallow structure, deep learning mimics the deep organizational structure of the human brain to process and represent information at multiple levels. Image segmentation networks based on deep learning are therefore widely applied in medical imaging.
Because deep learning is a supervised machine learning approach, a deep learning model must be trained, and its performance evaluated, on a large amount of annotated data (often called the "gold standard" in medical imaging) in order to improve the model's accuracy and generalization. Researchers and engineers therefore place ever higher demands on the quantity and quality of annotated data, and a great deal of manual effort is needed to produce it.
When the accuracy and other performance of a trained machine learning model are evaluated, medical experts are usually organized to judge the model's outputs. This evaluation scheme, however, incurs a high labor cost, and an expert's judgment of a clinical result is shaped by experience and cognition and is therefore subjective. The evaluation schemes for machine learning models in the prior art thus need improvement.
Disclosure of Invention
The embodiments of the invention provide a model evaluation method, apparatus, device, and medium that optimize the evaluation of a machine learning model, reduce labor cost, and improve evaluation accuracy.
In a first aspect, an embodiment of the present invention provides a model evaluation method, where the method includes:
acquiring first image data, and inputting the first image data into a trained segmentation model to obtain a first prediction result, where the trained segmentation model is obtained by training on a sample set of image data and annotation data; and
inputting the first prediction result into a regression model to obtain an evaluation result for the trained segmentation model, where the regression model is used to compute the similarity between the distribution of the prediction result and the distribution of the annotation data.
In a second aspect, an embodiment of the present invention further provides a model evaluation apparatus, where the apparatus includes:
the first prediction result obtaining module, configured to acquire first image data and input the first image data into a trained segmentation model to obtain a first prediction result, where the trained segmentation model is obtained by training on a sample set of image data and annotation data; and
the evaluation result obtaining module, configured to input the first prediction result into a regression model to obtain an evaluation result for the trained segmentation model, where the regression model is used to compute the similarity between the distribution of the prediction result and the distribution of the annotation data.
In a third aspect, an embodiment of the present invention further provides a model evaluating apparatus, where the model evaluating apparatus includes:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the model evaluation method provided by any embodiment of the invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the model evaluation method according to any embodiment of the present invention.
According to the technical scheme of the embodiments of the invention, first image data is acquired and input into a trained segmentation model to obtain a first prediction result, the trained segmentation model being obtained by training on a sample set of image data and annotation data; a segmentation prediction result is thus obtained for the test data, and evaluating that prediction result in turn evaluates the performance of the segmentation model. The first prediction result is input into a regression model, which computes the similarity between the distribution of the prediction result and the distribution of the annotation data, to obtain an evaluation result for the trained segmentation model. Evaluating the segmentation model through the regression model avoids the subjectivity that affects evaluation by medical experts, reduces labor cost, and improves the accuracy of model evaluation.
Drawings
FIG. 1 is a flow chart of a model evaluation method according to a first embodiment of the present invention;
FIG. 2 is a flowchart of a model evaluation method according to a second embodiment of the present invention;
FIG. 3 is a structural diagram of a model evaluation apparatus according to a third embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a model evaluation device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of the model evaluation method according to the first embodiment of the present invention. This embodiment is applicable to evaluating a trained model, and the method may be executed by a model evaluation apparatus. It specifically includes the following steps:
S110: acquire first image data, and input the first image data into a trained segmentation model to obtain a first prediction result, where the trained segmentation model is obtained by training on a sample set of image data and annotation data.
After the segmentation model has been trained on the training data, its performance must be tested on test data it has not seen. If the test passes, the model's performance meets the requirements and the model can be used to segment images. If the test fails, the network parameters of the segmentation model are adjusted according to the test data and the outputs produced on it, and training continues until the target performance is reached.
Optionally, the segmentation model labels a region of interest in the image data and outputs contour data of the region of interest as the prediction result; the image data is an image of human tissue, and the region of interest is a lesion region. The segmentation model may segment the region of interest and output it as image information; it may also label the lesion type of the lesion region, for example whether the lesion is malignant or benign, or whether it is a liver lesion or a heart lesion.
Generally, when a segmentation model is tested with test data, the region of interest in the test data must first be annotated. The model is then tested with the test data and the corresponding annotations: the test data is input into the segmentation model to obtain a predicted annotation result, and the model's performance is assessed by comparing the predicted result with the actual annotations. However, annotating the test data costs additional manual effort and reduces the efficiency of developing the segmentation model. This embodiment instead tests the performance of the segmentation model through a regression model.
The segmentation model is trained on the sample set of image data and corresponding annotation data: a loss function is computed between the model's training output and the corresponding annotations, propagated back into the model by the backpropagation algorithm, and the network parameters are adjusted by gradient descent. This procedure is iterated until a preset number of training rounds is completed or the segmentation accuracy reaches a preset level, at which point training of the segmentation model is considered complete. Optionally, the first image data may be sample data for testing the trained segmentation model. Inputting the first image data into the trained segmentation model yields the first prediction result, which is the model's segmentation of the first image data.
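The training iteration just described — forward pass, loss, gradient, parameter update, repeated until convergence — can be illustrated with a deliberately tiny one-parameter model. A real segmentation network would use a deep learning framework; every name here is illustrative.

```python
def train_step(w, x, y, lr=0.05):
    """One training iteration on a toy one-weight model."""
    pred = w * x                   # forward pass
    grad = 2.0 * (pred - y) * x    # gradient of the squared loss w.r.t. w
    return w - lr * grad           # gradient-descent parameter update

def train(w, samples, epochs=200, lr=0.05):
    """Iterate training until the preset number of epochs is finished."""
    for _ in range(epochs):
        for x, y in samples:
            w = train_step(w, x, y, lr)
    return w
```

For the sample (x=1, y=2), repeated updates pull the weight toward the value that makes the prediction match the annotation, mirroring how backpropagation fits the segmentation model to its labels.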
S120: input the first prediction result into a regression model to obtain an evaluation result for the trained segmentation model, where the regression model is used to compute the similarity between the distribution of the prediction result and the distribution of the annotation data.
The first prediction result is input into the regression model, and the regression model's output is the evaluation result of the segmentation model; it reflects the similarity between the first prediction result and the corresponding true annotation data.
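Reduced to its essentials, this evaluation step is a single forward pass through the regression model. In the sketch below the trained regression model is stood in for by a linear predictor with hypothetical coefficients; the patent does not specify the regression model's architecture.

```python
def evaluate_segmentation(prediction_features, weights, bias=0.0):
    """Map features of a prediction result to an estimated similarity
    score between the prediction and the (unseen) true annotation."""
    score = bias + sum(w * f for w, f in zip(weights, prediction_features))
    return max(0.0, min(1.0, score))   # similarity clamped to [0, 1]
```

A score near 1 suggests the segmentation output is close to what an annotator would have produced, without any annotator being involved at evaluation time.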
Optionally, the model evaluation method further includes a training process for the regression model, which specifically includes: acquiring second image data and annotating its region of interest to obtain an annotation result (optionally, the second image data may be the sample data used to train the segmentation model); inputting the second image data into the trained segmentation model to obtain a second prediction result; and training the regression model with the annotation result and the second prediction result as samples. That is, the second image data used to train the segmentation model is input into the trained segmentation model, which outputs a second prediction result — the segmentation of the second image data; the annotation result corresponding to the second image data is then obtained, and the regression model is trained on the annotation result and the second prediction result.
Optionally, training the regression model with the annotation result and the second prediction result as samples includes: computing metric indices from the annotation result and the second prediction result, the metric indices reflecting the similarity between the distribution of the prediction result and the distribution of the annotation data; and training the regression model with the metric indices and the second prediction result as samples. Optionally, a metric index may be computed directly from the second prediction result and its corresponding annotation result, or obtained from the segmentation model. The distribution may be the shape and position of the region of interest, or the position and type of a lesion label. The regression model is trained with the second prediction result, together with the metric indices obtained from the second prediction result and the corresponding annotation result, as samples: the second prediction result is input into the regression model to be trained to obtain predicted metric indices, and a loss function is computed from the predicted and true metric indices. For example, when there are three metric indices (index 1, index 2, and index 3), the loss function of each index is computed separately and the three losses are summed to obtain the target loss; if needed, each of the three losses can be multiplied by a weight before summing to obtain the target loss. The target loss is then propagated back into the regression model, and its network parameters are adjusted by gradient descent.
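The combination of per-index losses into one target loss can be written directly. The squared-error loss per index is an illustrative assumption (the patent leaves the per-index loss open), as are all names below.

```python
def target_loss(pred_indices, true_indices, weights=None):
    """Combine per-metric-index losses into one target loss.

    With no weights the per-index losses are simply summed; otherwise
    each loss is multiplied by its weight before summing.
    """
    losses = [(p - t) ** 2 for p, t in zip(pred_indices, true_indices)]
    if weights is None:
        return sum(losses)
    return sum(w * l for w, l in zip(weights, losses))
```

Raising the weight of one index makes the regression model prioritize predicting that index accurately during training.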
Optionally, the loss function may be the Huber loss. The training procedure is iterated until a preset number of rounds is completed or the accuracy of the regression model's output metric indices reaches a preset level, at which point training of the regression model is considered complete.
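The Huber loss mentioned here has a standard closed form — quadratic for small residuals, linear for large ones — which keeps outlier metric errors from dominating the gradient:

```python
def huber_loss(pred, target, delta=1.0):
    """Standard Huber loss with threshold delta."""
    r = abs(pred - target)
    if r <= delta:
        return 0.5 * r * r               # quadratic region: small residuals
    return delta * (r - 0.5 * delta)     # linear region: large residuals
```

Near the target it behaves like squared error; far from it, it grows only linearly, so a single badly mispredicted metric index cannot swamp the update.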
Optionally, training the regression model with the metric indices and the second prediction result as samples includes: extracting features from the second prediction result to obtain second feature information, and training the regression model to be trained on the second feature information and the metric indices. For example, when the second prediction result is image information of a region of interest, the second feature information includes: the area of the region of interest, a first distance for the image data containing the current region of interest, and the position of the current region of interest within its image data, where the first distance is the distance between the current image data and the first frame of image data in the same batch. Image data in the same batch come from the same medical imaging device and share the same scan parameters and scan position.
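The three features named here — ROI area, distance from the first frame of the batch, and ROI position — can be extracted from a binary prediction mask. Using the centroid for "position" is an assumption for illustration; the patent does not specify how position is encoded.

```python
def extract_features(mask, frame_index):
    """Extract (area, first-frame distance, position) from a binary ROI mask.

    `mask` is a list of rows of 0/1 values; `frame_index` is the index of
    this frame within its batch, so it doubles as the distance from the
    batch's first frame.
    """
    pixels = [(r, c) for r, row in enumerate(mask)
                     for c, v in enumerate(row) if v]
    area = len(pixels)
    if area == 0:
        return {"area": 0, "frame_distance": frame_index, "position": None}
    centroid = (sum(r for r, _ in pixels) / area,
                sum(c for _, c in pixels) / area)
    return {"area": area, "frame_distance": frame_index, "position": centroid}
```

The resulting feature dictionary is what would be paired with the metric indices to form one training sample for the regression model.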
Optionally, the metric indices further include the accuracy, sensitivity, and specificity computed from the annotation result and the second prediction result, in addition to the similarity between the distribution of the annotation result and the distribution of the second prediction result. Accuracy is obtained by counting the pixels on which the second prediction result agrees with the annotation result and dividing by the total number of pixels in the second prediction result. Sensitivity reflects how completely the model recognizes the input: it is the fraction of annotated region-of-interest pixels that the prediction also labels, so the higher the sensitivity, the lower the probability that the corresponding segmentation model misses labels. Specificity is the fraction of pixels outside the annotated region that the prediction correctly leaves unlabeled; the higher the specificity, the more accurate the segmentation model's output. The second feature information and the metric indices are then used as samples to train the regression model to be trained.
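These three indices can be pinned down with their standard per-pixel definitions, which match the qualitative descriptions above (high sensitivity means few missed labels; high specificity means few spurious ones). This sketch assumes flat 0/1 masks.

```python
def confusion_metrics(pred, truth):
    """Accuracy, sensitivity, and specificity from per-pixel comparison
    of a predicted mask and its annotation (both flat 0/1 sequences)."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    return {
        "accuracy": (tp + tn) / len(pred),                  # pixels correct
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # annotated ROI found
        "specificity": tn / (tn + fp) if tn + fp else 0.0,  # background kept clean
    }
```

Computed over each second prediction result and its annotation, these values become part of the regression model's training targets.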
Through this training, the regression model learns the correspondence between feature information and metric indices, so that it can obtain the metric indices of the first prediction result from this correspondence and thereby evaluate the segmentation model.
According to the technical scheme of this embodiment, first image data is acquired and input into the trained segmentation model to obtain a first prediction result, the trained segmentation model being obtained by training on a sample set of image data and annotation data; a segmentation prediction result is thus obtained for the test data, and evaluating that prediction result in turn evaluates the performance of the segmentation model. The first prediction result is input into a regression model, which computes the similarity between the distribution of the prediction result and the distribution of the annotation data, to obtain an evaluation result for the trained segmentation model. Evaluating the segmentation model through the regression model avoids the subjectivity that affects evaluation by medical experts, reduces labor cost, and improves the accuracy of model evaluation.
Example two
Fig. 2 is a flowchart of the model evaluation method according to the second embodiment of the present invention, which further refines the previous embodiment. Here, inputting the first prediction result into the regression model includes: extracting features from the first prediction result to obtain first feature information, and inputting the first feature information into the regression model. Extracting features from the first prediction result and feeding the extracted feature information to the regression model makes it easier for the regression model to output, from the learned correspondence between feature information and metric indices, the metric indices for the first prediction result; the segmentation model is then evaluated from those metric indices, improving the efficiency and accuracy of model evaluation.
As shown in fig. 2, the method specifically includes the following steps:
s210, acquiring first image data, and inputting the first image data into the trained segmentation model to obtain a first prediction result; the trained segmentation model is obtained by training a sample set of image data and marking data.
S220, performing feature extraction on the first prediction result to obtain first feature information; inputting the first characteristic information into a regression model to obtain an evaluation result of the trained segmentation model; the regression model is used for calculating the similarity between the distribution rule of the prediction result and the distribution rule of the labeled data.
Optionally, features are extracted from the first prediction result in the same way as from the second prediction result. When the first prediction result is image information of a region of interest, the first feature information includes: the area of the region of interest, a second distance for the image data containing the current region of interest, and the position of the current region of interest within its image data, where the second distance is the distance between the current image data and the first frame of image data in the same batch. Image data in the same batch come from the same medical imaging device and share the same scan parameters and scan position. When the first prediction result is a lesion-type label, the first feature information includes the position of the lesion in the image, the area of the lesion, and the distance between the image containing the current lesion and the first frame of image data in the same batch. Inputting the extracted feature information into the regression model lets the regression model output the metric indices corresponding to the features of the first prediction result from the learned correspondence between feature information and metric indices; the segmentation model is then evaluated from those indices, improving the efficiency and accuracy of model evaluation.
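Putting this embodiment together — feature extraction on the first prediction result, then scoring by the regression model — yields a minimal end-to-end sketch. The linear regression stand-in and the use of the centroid row as the position feature are assumptions, as are all names.

```python
def evaluate_prediction(mask, frame_distance, coeffs, bias=0.0):
    """Extract (area, frame distance, centroid row) from a predicted ROI
    mask, then score the features with a linear stand-in for the trained
    regression model."""
    pixels = [(r, c) for r, row in enumerate(mask)
                     for c, v in enumerate(row) if v]
    area = len(pixels)
    centroid_row = sum(r for r, _ in pixels) / area if area else 0.0
    features = [area, frame_distance, centroid_row]
    return bias + sum(w * f for w, f in zip(coeffs, features))
```

No annotation of the test image is needed at this point: the learned feature-to-metric correspondence supplies the evaluation directly.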
According to the technical scheme of this embodiment, first image data is acquired and input into the trained segmentation model to obtain a first prediction result, the trained segmentation model being obtained by training on a sample set of image data and annotation data; a segmentation prediction result is thus obtained for the test data, and evaluating that prediction result in turn evaluates the performance of the segmentation model. Features are extracted from the first prediction result to obtain first feature information, and the first feature information is input into the regression model, which computes the similarity between the distribution of the prediction result and the distribution of the annotation data, to obtain an evaluation result for the trained segmentation model. Because the regression model can output the metric indices corresponding to the features of the first prediction result from the learned correspondence between feature information and metric indices, evaluating the segmentation model through the regression model avoids the subjectivity that affects evaluation by medical experts, reduces labor cost, and improves the accuracy of model evaluation.
Example three
Fig. 3 is a structural diagram of the model evaluation apparatus according to the third embodiment of the present invention. The apparatus includes: a first prediction result obtaining module 310 and an evaluation result obtaining module 320.
The first prediction result obtaining module 310 is configured to acquire first image data and input it into the trained segmentation model to obtain a first prediction result, where the trained segmentation model is obtained by training on a sample set of image data and annotation data. The evaluation result obtaining module 320 is configured to input the first prediction result into a regression model to obtain an evaluation result for the trained segmentation model, where the regression model is used to compute the similarity between the distribution of the prediction result and the distribution of the annotation data.
In the technical solution of the above embodiment, the model evaluating apparatus further includes:
the model training module is used for acquiring second image data and marking the region of interest of the second image data to obtain a marking result; inputting second image data into the trained segmentation model to obtain a second prediction result; and training a regression model by using the labeling result and the second prediction result as samples.
In the technical solution of the above embodiment, the model training module includes:
the metric index calculation unit, configured to compute metric indices from the annotation result and the second prediction result, the metric indices reflecting the similarity between the distribution of the prediction result and the distribution of the annotation data; and
the regression model training unit, configured to train the regression model with the metric indices and the second prediction result as samples.
Optionally, the metric indices further include the accuracy, sensitivity, and specificity computed from the annotation result and the second prediction result.
In the technical solution of the above embodiment, the regression model training unit includes:
the feature extraction subunit, configured to extract features from the second prediction result to obtain second feature information; and
the regression model training subunit, configured to train the regression model to be trained on the second feature information and the metric indices.
In the technical solution of the above embodiment, the evaluation result obtaining module 320 includes:
the first feature information obtaining unit, configured to extract features from the first prediction result to obtain first feature information; and
the feature information input unit, configured to input the first feature information into the regression model.
Optionally, the segmentation model labels a region of interest in the image data and outputs contour data of the region of interest as the prediction result; the image data is an image of human tissue, and the region of interest is a lesion region.
According to the technical scheme of this embodiment, first image data is acquired and input into the trained segmentation model to obtain a first prediction result, the trained segmentation model being obtained by training on a sample set of image data and annotation data; a segmentation prediction result is thus obtained for the test data, and evaluating that prediction result in turn evaluates the performance of the segmentation model. The first prediction result is input into a regression model, which computes the similarity between the distribution of the prediction result and the distribution of the annotation data, to obtain an evaluation result for the trained segmentation model. Evaluating the segmentation model through the regression model avoids the subjectivity that affects evaluation by medical experts, reduces labor cost, and improves the accuracy of model evaluation.
The model evaluation device provided by the embodiment of the present invention can execute the model evaluation method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
Example four
Fig. 4 is a schematic structural diagram of a model evaluation apparatus according to the fourth embodiment of the present invention. As shown in Fig. 4, the model evaluation apparatus includes a processor 410, a memory 420, an input device 430, and an output device 440. The number of processors 410 in the model evaluation apparatus may be one or more; one processor 410 is taken as an example in Fig. 4. The processor 410, the memory 420, the input device 430, and the output device 440 in the model evaluation apparatus may be connected by a bus or other means; connection by a bus is taken as an example in Fig. 4.
The memory 420, as a computer-readable storage medium, can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the model evaluation method in the embodiment of the present invention (for example, the first prediction result obtaining module 310 and the evaluation result obtaining module 320 in the model evaluation device). The processor 410 runs the software programs, instructions, and modules stored in the memory 420 to execute the various functional applications and data processing of the model evaluation apparatus, thereby implementing the model evaluation method described above.
The memory 420 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the terminal, and the like. Further, the memory 420 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 420 may further include memory located remotely from the processor 410, which may be connected to the model evaluation apparatus via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 430 may be used to receive input numeric or character information and to generate key signal inputs relating to user settings and function controls of the model evaluation apparatus. The output device 440 may include a display device such as a display screen.
Example five
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, where the computer-executable instructions are executed by a computer processor to perform a model evaluation method, and the method includes:
acquiring first image data, and inputting the first image data into the trained segmentation model to obtain a first prediction result; the trained segmentation model is obtained by training on a sample set of image data and labeled data;
inputting the first prediction result into a regression model to obtain an evaluation result of the trained segmentation model; the regression model is used for calculating the similarity between the distribution rule of the prediction result and the distribution rule of the labeled data.
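Beyond the distribution-rule similarity, the method can use accuracy, sensitivity, and specificity as metrics (as recited in claim 4 below). A sketch of their computation from a prediction and an annotation — the pixel-level granularity here is an assumption, since the embodiments do not specify it:

```python
# Pixel-wise accuracy, sensitivity, and specificity computed from the
# annotation result and a prediction result (assumed pixel-level
# definitions over flat binary lists).

def confusion_metrics(pred, label):
    """Return (accuracy, sensitivity, specificity) for flat binary lists."""
    tp = sum(p == 1 and l == 1 for p, l in zip(pred, label))
    tn = sum(p == 0 and l == 0 for p, l in zip(pred, label))
    fp = sum(p == 1 and l == 0 for p, l in zip(pred, label))
    fn = sum(p == 0 and l == 1 for p, l in zip(pred, label))
    accuracy = (tp + tn) / len(pred)
    sensitivity = tp / (tp + fn)  # true positive rate on the lesion region
    specificity = tn / (tn + fp)  # true negative rate on the background
    return accuracy, sensitivity, specificity

pred = [1, 1, 0, 0, 1, 0]
label = [1, 0, 0, 0, 1, 1]
acc, sens, spec = confusion_metrics(pred, label)
```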
Of course, the storage medium provided by the embodiment of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the operations of the method described above, and may also execute the relevant operations in the model evaluation method provided by any embodiment of the present invention.
From the above description of the embodiments, it will be clear to those skilled in the art that the present invention can be implemented by software plus necessary general-purpose hardware, and certainly can also be implemented by hardware, although the former is the preferred embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the embodiment of the model evaluation apparatus, the included units and modules are only divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It should be noted that the foregoing describes only the preferred embodiments of the present invention and the technical principles employed. Those skilled in the art will understand that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of the invention. Therefore, although the present invention has been described in detail through the above embodiments, the present invention is not limited to them and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the appended claims.

Claims (10)

1. A model evaluation method is characterized by comprising the following steps:
acquiring first image data, and inputting the first image data into the trained segmentation model to obtain a first prediction result; the trained segmentation model is obtained by training on a sample set of image data and labeled data;
inputting the first prediction result into a regression model to obtain an evaluation result of the trained segmentation model; the regression model is used for calculating the similarity between the distribution rule of the prediction result and the distribution rule of the labeled data.
2. The method according to claim 1, further comprising a training process of the regression model, the training process specifically comprising:
acquiring second image data, and carrying out region-of-interest annotation on the second image data to obtain an annotation result;
inputting second image data into the trained segmentation model to obtain a second prediction result;
and training a regression model by using the labeling result and the second prediction result as samples.
3. The method of claim 2, wherein training a regression model by using the labeling result and the second prediction result as samples comprises:
acquiring a metric based on the labeling result and the second prediction result, wherein the metric reflects the similarity between the distribution rule of the prediction result and the distribution rule of the labeled data;
and training a regression model by using the metric and the second prediction result as samples.
4. The method of claim 3, wherein the metric comprises: accuracy, sensitivity, and specificity calculated from the annotation result and the second prediction result.
5. The method of claim 3, wherein training a regression model by using the metric and the second prediction result as samples comprises:
performing feature extraction on the second prediction result to obtain second feature information;
and training the regression model to be trained based on the second feature information and the metric.
6. The method of claim 1, wherein inputting the first prediction into a regression model comprises:
performing feature extraction on the first prediction result to obtain first feature information;
inputting the first feature information into the regression model.
7. The method according to claim 1, wherein the segmentation model is used to label a region of interest in the image data and obtain contour data of the region of interest as a prediction result; the image data is a human tissue image, and the region of interest is a lesion region.
8. A model evaluating apparatus, characterized by comprising:
the first prediction result obtaining module is used for acquiring first image data and inputting the first image data into the trained segmentation model to obtain a first prediction result; the trained segmentation model is obtained by training on a sample set of image data and labeled data;
the evaluation result obtaining module is used for inputting the first prediction result into a regression model to obtain an evaluation result of the trained segmentation model; the regression model is used for calculating the similarity between the distribution rule of the prediction result and the distribution rule of the labeled data.
9. A model evaluating apparatus, characterized in that the model evaluating apparatus includes:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the model evaluation method according to any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the model evaluation method according to any one of claims 1-7.
CN202011626216.0A 2020-12-30 2020-12-31 Model evaluation method, device, equipment and medium Pending CN112801940A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011626216.0A CN112801940A (en) 2020-12-31 2020-12-31 Model evaluation method, device, equipment and medium
US17/559,473 US20220207742A1 (en) 2020-12-30 2021-12-22 Image segmentation method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112801940A true CN112801940A (en) 2021-05-14


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114003511A (en) * 2021-12-24 2022-02-01 支付宝(杭州)信息技术有限公司 Evaluation method and device for model interpretation tool
CN114119645A (en) * 2021-11-25 2022-03-01 推想医疗科技股份有限公司 Method, system, device and medium for determining image segmentation quality

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019056499A1 (en) * 2017-09-20 2019-03-28 平安科技(深圳)有限公司 Prediction model training method, data monitoring method, apparatuses, device and medium
CN110942072A (en) * 2019-12-31 2020-03-31 北京迈格威科技有限公司 Quality evaluation-based quality scoring and detecting model training and detecting method and device
CN111340123A (en) * 2020-02-29 2020-06-26 韶鼎人工智能科技有限公司 Image score label prediction method based on deep convolutional neural network
CN111402260A (en) * 2020-02-17 2020-07-10 北京深睿博联科技有限责任公司 Medical image segmentation method, system, terminal and storage medium based on deep learning
US10726356B1 (en) * 2016-08-01 2020-07-28 Amazon Technologies, Inc. Target variable distribution-based acceptance of machine learning test data sets
WO2020156361A1 (en) * 2019-02-02 2020-08-06 杭州睿琪软件有限公司 Training sample obtaining method and apparatus, electronic device and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
林成创, 赵淦森, 尹爱华, 丁笔超, 郭莉, 陈汉彪: "AS-PANet: overlapping chromosome instance segmentation with an improved path augmentation network", Journal of Image and Graphics, no. 10 *
田萱, 王子亚, 王建新: "Food label text detection based on semantic segmentation", Transactions of the Chinese Society for Agricultural Machinery, no. 08 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination