Data labeling method, model training device, image processing method, image processing device and storage medium
Technical Field
The invention relates to the field of image processing, in particular to a training method of an image processing model and a method for processing an image by using the model.
Background
Lung cancer is one of the most rapidly growing malignancies and poses a great threat to human health and life. The survival rate of lung cancer is closely related to the disease stage at first diagnosis: if the cancer is found at an early stage, the 5-year survival rate can reach 70-90%. Compared with other cancers, the biological characteristics of lung cancer are very complex; there are no obvious symptoms in the early stage, nearly 75% of patients are already in the middle or late stage when the disease is found, treatment costs are high, and outcomes are poor. Therefore, early detection and diagnosis of lung cancer is particularly important. Early lung cancer generally manifests as malignant nodules in the lung, so early screening generally begins with the detection of lung nodules. Clinical practice has shown that the most effective means for detecting lung nodules is low-dose computed tomography (LDCT), which provides high-resolution images of the lungs.
A traditional medical-image lung nodule detection algorithm comprises the following steps: preprocessing the medical image data, segmenting the lung parenchymal region, extracting candidate regions, extracting features, and classifying and identifying lung nodule targets. The key links are feature extraction and classification. Feature extraction is mostly done by hand, drawing on pathological knowledge and image information such as the morphological, textural, and local features of lung nodules; such handcrafted features have limited expressiveness, a complicated extraction process, and low efficiency. The classification methods are usually statistical, such as the Bayes classification algorithm, artificial neural networks, and fuzzy clustering; these are shallow models that usually require strong prior knowledge, or obtain satisfactory features only through repeated parameter selection and feature trials, which complicates the classification problem.
In recent years, deep learning methods have developed rapidly, with many breakthroughs in computer vision in particular, and medical image analysis based on deep learning has become a major trend. Researchers have applied deep-learning-based pulmonary nodule detection methods. However, pulmonary nodules are varied and complex in type, exhibiting spiculation, lobulation, and calcification signs, and may adhere to the pleura, bronchi, or blood vessels, or appear as isolated nodules. How to reduce the false positive rate while maintaining a high recall rate in CT-based pulmonary nodule detection has not yet been well solved.
Massive accurately labeled data are the current bottleneck in the development of medical image AI, and data acquisition and labeling face many difficulties. In lung CT images, lesions differ in morphology, size, distribution, density, etc. One of the most difficult types is diffuse, multiple lesions, which often occupy most of the whole lung, with the number of lesions running into the thousands, making the labeling task very difficult. There are three reasons: 1. because of the large number of lesions (e.g., diffuse infection of the whole lung, such as diffuse high-density shadows), finding and labeling every lesion carries a high risk of missed labels; 2. marking all lesions is a significant workload for the labeler; 3. some types of lesions may not be numerous but are difficult to find visually (typically tiny nodules under 3 mm). These three reasons together make missed lesion labels hard to avoid in the actual labeling process.
Disclosure of Invention
In view of at least one of the above technical problems, the invention provides new methods for data labeling, model training, and image processing, which train a machine learning model using weakly labeled data in which only part of the targets are labeled.
In view of this, an embodiment of the first aspect of the present invention provides a sample data labeling method for machine learning model training, including: acquiring a first image dataset and a second image dataset, wherein each image in the first image dataset comprises at least one detection target, at least one detection target in each image is labeled as a real target (ground truth) by a labeling box, each image in the second image dataset does not comprise a detection target, and the detection target is a lung lesion;
performing target detection on the first image dataset using a preset target detection model to obtain a preliminary detection result, the preset target detection model being a neural network model trained on a standard dataset;
determining a false positive result in the preliminary detection result;
labeling the false positive result as a first background image for generating a negative sample;
Preferably, all images in the second image dataset are labeled as second background images for generating negative samples.
Still further, the negative samples in the present invention include hard (hard-to-classify) samples and easy (easy-to-classify) samples: the false positive results can be further labeled for generating the hard samples among the negative samples, and the second image dataset can be further labeled for generating the easy samples among the negative samples.
In this embodiment, the step of determining false positive results in the recognition results may specifically include:
determining false positive results in the recognition results based on the existing labeling boxes in the first image dataset and the recognition results. Among the recognition results, those that coincide with regions already labeled as real targets in the first image dataset can be considered true positives; those not labeled as real targets can be judged by a preset program or by manual screening to determine whether they are real targets or false positive results.
Optionally, the manual screening proceeds as follows: for each image, two-stage diagnostic labeling is performed by 4 radiologists, of whom 3 are qualified doctors of equivalent level and 1 is a qualified doctor with richer clinical experience. The first 3 doctors are responsible for labeling the private data, and the senior doctor is responsible for the final adjudication of problematic cases. Specifically, for the CT images in the private dataset, each of the 3 equally qualified doctors independently diagnoses and marks the lesion positions; the labeling results are recorded, and the marked lesion positions on each CT image are tallied by a program. If the same region is marked by two or more doctors, the probability that it is a lesion is considered extremely high and no further judgment is performed; regions marked by only 1 doctor are fed back to the more clinically experienced physician for final judgment.
In this embodiment, the preset target detection model may be a deep neural network model trained using a standard data set. For example, RCNN, Faster R-CNN, YOLO, etc., the standard data set is a widely used data set containing accurately labeled results.
Typically, the method of the embodiment of the invention is used for processing lung CT images, and the detection target is a lung lesion.
An embodiment of another aspect of the present invention provides a training method of a neural network model for target detection, wherein the neural network model includes a feature extraction network (convolutional layers), a Region Proposal Network (RPN), a target region pooling network (ROI pooling layer), and a target classification network (classification). The training method comprises the following specific steps:
obtaining a sample image comprising a first image data set and a second image data set as described in the previous embodiments of the present invention;
generating positive samples from the real targets (ground truth) labeled by labeling boxes, generating negative samples from the first background images and second background images labeled in the previous embodiment, and training the neural network model;
and updating the parameters of the neural network according to its loss function, wherein in the loss function of the neural network model the weight of the negative samples is reduced, and further the weight of the easily classified samples among the negative samples is reduced.
In addition, in this embodiment, the real targets (ground truth) labeled by the labeling boxes may also be used to generate hard samples among the negative samples, and the generated negative samples may be used to train the neural network model.
Typically, the neural network model in this embodiment is a Faster R-CNN model, and the step of generating positive samples using the real targets (ground truth) labeled by the labeling boxes specifically includes:
in the region proposal network, a plurality of anchors of different sizes are generated in the sample image using a sliding window; when the intersection over union (IoU) between an anchor and a labeling box is greater than a preset first threshold, the anchor is labeled as a positive sample. Typically, the first threshold may be 0.7, 0.75, 0.8, 0.9, etc., and is preferably 0.75.
In this embodiment, the step of generating hard samples among the negative samples using the real targets (ground truth) labeled by the labeling boxes specifically includes:
when an anchor partially overlaps a labeling box and the IoU is smaller than a preset second threshold, the anchor is labeled as a negative sample. Typical values of the second threshold are 0.05, 0.1, 0.15, 0.2, etc., preferably 0.1.
The loss function in this embodiment differs from that of the ordinary Faster R-CNN model:
FL(p_t) = -α_t · (1 - p_t)^γ · log(p_t)
wherein p_t is the probability value predicted by the model, between 0 and 1; the parameter α_t controls the balance between the numbers of positive and negative samples, with a value between 0 and 1; and the parameter γ is used to reduce the weight of easy samples (easy examples) among the negative samples, with a value greater than or equal to 0.
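As a minimal sketch of the loss above (scalar inputs; the default values of α_t and γ here are illustrative assumptions, not values prescribed by this document), the focal loss can be written as:

```python
import math

def focal_loss(p_t, alpha_t=0.25, gamma=2.0):
    """Focal loss FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p_t: the model's predicted probability for the true class (0 < p_t <= 1).
    alpha_t: balances the numbers of positive and negative samples (0..1).
    gamma: >= 0; down-weights easy samples whose p_t is close to 1.
    """
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# An easy sample (p_t = 0.9) contributes far less loss than a hard one (p_t = 0.1):
easy = focal_loss(0.9)
hard = focal_loss(0.1)
```

Note that with γ = 0 and α_t = 1 the expression reduces to the ordinary cross-entropy -log(p_t), which is why γ controls how strongly easy negatives are suppressed.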
Updating the network parameters of the neural network model in a direction that reduces the value of the loss function.
In another aspect, another embodiment of the present invention provides an image processing method, including:
acquiring an image to be processed;
calling a neural network model to perform target detection on the image to obtain a target detection result in the image;
the neural network model is obtained by training a preset neural network model by adopting the model training method in the embodiment;
and outputting the target detection result.
In still another aspect, a further embodiment of the present invention provides a neural network model training apparatus, including:
an acquisition unit, configured to acquire a sample image, the labeling boxes labeling the real targets in the sample image, and the labeled negative samples;
the updating unit is used for updating the network parameters of the neural network model according to the loss function of the neural network model;
and the training unit is used for carrying out iterative training on the preset image processing model according to the updated network parameters to obtain a target image processing model.
In still another aspect, still another embodiment of the present invention provides an image processing apparatus including:
an acquisition unit configured to acquire an image to be processed;
the processing unit is used for calling a neural network model to carry out target detection on the image to obtain a target detection result; the neural network model is obtained by training a preset neural network model by adopting the model training method in the embodiment;
and the output unit is used for outputting the target detection result.
In yet another aspect, a further embodiment of the present invention provides a computer storage medium storing one or more first instructions adapted to be loaded by a processor and to perform the model training method of the preceding embodiment; alternatively, the computer storage medium stores one or more second instructions adapted to be loaded by the processor and to perform the image processing method in the foregoing embodiments.
Through the above technical scheme, weakly labeled data that are not completely labeled or are difficult to label accurately can be better utilized for model training, yielding better target detection results. In the detection of lung lesions, lesions differ in morphology, size, distribution, density, etc.; the most difficult cases are diffuse, multiple lesions, which often occupy most of the whole lung, with the number of lesions running into the thousands, making labeling difficult and making it hard to find and label all lesions completely. Labeled images of lung lesions are therefore often incompletely labeled data.
Drawings
FIG. 1 shows a schematic diagram of a neural network Faster R-CNN model in the prior art;
FIG. 2 is a diagram illustrating a data annotation method according to a first embodiment of the present invention;
FIG. 3 shows a schematic diagram of a model training method according to a second embodiment of the invention;
FIG. 4 is a diagram illustrating anchor points in a neural network model according to a second embodiment of the present invention;
FIG. 5 shows a schematic diagram of an image processing method according to a third embodiment of the invention;
fig. 6 shows some processing results of image processing by the image processing method according to the third embodiment of the present invention;
fig. 7 shows a schematic block diagram of a model training apparatus according to a fourth embodiment of the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced otherwise than as specifically described herein, and thus the scope of the present invention is not limited by the specific embodiments disclosed below.
Example one
FIG. 2 shows a schematic block diagram of a data annotation method according to one embodiment of the invention.
As shown in fig. 2, the data annotation method according to an embodiment of the present invention is explained taking target detection in a medical image as an example; the method of the present invention is not limited to processing medical images. In addition, in the embodiments of the present invention, a Faster R-CNN model is taken as an example of a neural network model for target detection; the neural network model in the present invention is not limited thereto, and neural networks of various different structures can be used, trained on data labeled by the labeling method of the present invention, to improve the detection effect of the model on particular types of images and targets.
The sample data labeling method for training the machine learning model comprises the following steps:
s201: and acquiring sample data of part of the labels.
Taking lung images as an example, the sample data is divided into a first image dataset and a second image dataset to better distinguish the data sources. Each image of the first image dataset comprises at least one detection target; the targets in the images are labeled with labeling boxes, and it is ensured that all labels in the first image dataset are correct, i.e., every labeled target is a real target (ground truth) and no false positive labels exist. However, not all detection targets in an image are labeled: there may be one or more unlabeled targets in an image of the first image dataset. Such data, in which only some targets are correctly labeled while multiple targets remain unlabeled, is referred to as "weakly labeled" data.
Incomplete supervision in traditional weakly supervised learning means that only part of the training data is given labels while the rest is unlabeled, so the labeling is incomplete. The labeled data satisfies the condition that every target contained in it is labeled; the unlabeled data has no labeling information, and it cannot be confirmed whether it contains targets. During learning, the input is some labeled data together with unlabeled data; the model is first trained on the labeled data, and the remaining unlabeled data is then "cluster analyzed" based on the resulting "experience".
In contrast, the weak supervision in the present invention lies in that only the identifiable targets in a sample are labeled while uncertain targets are not. For example, if a patient's CT image contains 5 nodules and only 3 identified nodules are labeled, the remaining 2 nodules stay unlabeled; such "weakly labeled" data is used to generate positive samples.
Because incompletely labeled images are used to generate positive samples, it cannot be determined whether the unlabeled regions of an image contain targets. If the unlabeled parts of the image were used to generate negative samples, real targets might be used as negative samples to train the model, which would greatly impair the detection effect. By avoiding this, the absolute authenticity of the positive and negative sample labels is guaranteed even under incomplete labeling.
To ensure accurate labeling, although CT images of healthy subjects are used, in some cases the images may also be manually screened and calibrated to ensure that each image in the second image dataset contains no detection target.
In addition, to obtain a better training effect, the proportion of images used for training from the first image dataset and the second image dataset can be adjusted to avoid an imbalance between positive and negative samples and achieve sample balance.
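One simple way to carry out the proportion adjustment described above is to subsample the larger pool of background images. The following is only a sketch under assumptions not stated in this document (datasets represented as Python lists, a hypothetical `neg_to_pos_ratio` parameter):

```python
import random

def balance_datasets(first_set, second_set, neg_to_pos_ratio=1.0, seed=0):
    """Subsample the second (background) dataset so that the number of
    background images is at most neg_to_pos_ratio times the number of
    images carrying labeled targets in the first dataset."""
    rng = random.Random(seed)  # fixed seed for reproducible sampling
    max_neg = int(len(first_set) * neg_to_pos_ratio)
    if len(second_set) > max_neg:
        second_set = rng.sample(second_set, max_neg)
    return first_set, second_set
```

For example, with 10 labeled images and 100 background images, a ratio of 2.0 keeps only 20 background images for training.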
S202: and carrying out target detection on the first image data set by using a preset target detection model to obtain a detection result.
The preset target detection model may be a conventional detection model or algorithm, such as pattern recognition algorithms, the Bayes classification algorithm, fuzzy clustering, feature matching, and the like, or may be a neural network model such as R-CNN, Faster R-CNN, SSD, YOLO, etc.
In the present embodiment, a Faster R-CNN model is taken as the example: the target detection model used here is a Faster R-CNN model trained on a dataset with standard labels, namely the LUNA16 public dataset. First, a Faster R-CNN model is trained using the labeled data in the LUNA16 public dataset to obtain a trained model; then this model is used to perform target detection on the images in the first image dataset to obtain detection results. The output detection results are targets labeled with labeling boxes, and each detected target has a confidence.
The detection results here are not completely accurate; there are cases of erroneous recognition (false positives) and missed recognition (false negatives). Both situations indicate that the corresponding regions present some difficulty for automatic recognition by the model and are error-prone.
S203: determining a false positive result in the test results.
The step of judging the false positive result can be manually judged, the method of the invention is generally used for medical images, particularly for lesions in lung CT images, the identification of the lung lesions in the CT images has larger difficulty and error, in order to ensure the accuracy of the labeling result, the manual judgment is used in the embodiment, and the manual judgment steps are as follows: for each image, two-stage diagnostic labeling was performed by 4 radiologists; wherein 3 doctors are qualified doctors with the same level, and 1 doctor is a qualified doctor with more abundant clinical experience; the first 3 doctors are responsible for marking private data, and the last 1 senior doctor is responsible for final adjudication of the case with problems, specifically: for the CT images in the private data set, each independent diagnosis in 3 qualified doctors with the equivalent level and marking the position of a focus, recording the marking result, and counting the position of the marked focus on each CT image through a program; if the same region is marked by two or more doctors, the possibility of the focus is considered to be extremely high, and subsequent judgment is not carried out; for the region marked by only 1 doctor, the marking result is fed back to the more experienced physician in clinic for final judgment.
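The two-stage adjudication logic above can be sketched as follows. This is a simplified illustration that treats each marked region as a hashable identifier; in practice, regions from different doctors would be matched by spatial overlap rather than by identity:

```python
from collections import Counter

def triage_annotations(doctor_marks):
    """doctor_marks: a list of sets, one per doctor, each containing
    identifiers of the regions that doctor marked as lesions.

    Regions marked by >= 2 doctors are accepted directly; regions
    marked by exactly 1 doctor are sent to the senior physician for
    final adjudication."""
    counts = Counter()
    for marks in doctor_marks:
        counts.update(marks)
    accepted = {r for r, n in counts.items() if n >= 2}
    needs_review = {r for r, n in counts.items() if n == 1}
    return accepted, needs_review

# Three doctors mark regions on one CT image:
accepted, review = triage_annotations([{"a", "b"}, {"a", "c"}, {"a"}])
# "a" is marked by all three doctors; "b" and "c" are each marked once
```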
The detection results of the model in step S202 are tallied, and the target results judged by the model are sorted by the confidence the model assigns. The three doctors then judge the model's detection results: for detected nodules with confidence higher than 85%, targets not yet labeled in the first image dataset continue to be labeled according to the independent-labeling-plus-final-adjudication review method described above.
In another alternative, the determination may be performed by a program: for example, another model with higher accuracy performs target detection on the images in the first image dataset again, the two sets of detection results are compared, and if a result detected by the preset model is judged not to be a target by the more accurate model, it is taken as a false positive result. In this step, the detection effect of the model trained on the standard dataset in step S202 on the real data cannot be determined directly, but its relative accuracy can be established by testing on several standard datasets and comparing its detection effect with that of other prior-art detection methods or models.
The above determination method is only an exemplary illustration, and any method capable of determining whether the detection result is a real lesion in the prior art can be used for determining the false positive result in the present invention, which is not limited in the present invention.
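The programmatic alternative described above can be sketched by matching the preset model's detections against a more accurate model's detections by IoU. The box format `(x1, y1, x2, y2)` and the 0.5 matching threshold are assumptions for illustration, not requirements of this document:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def find_false_positives(preset_dets, accurate_dets, match_iou=0.5):
    """A detection of the preset model that is not confirmed by any
    detection of the more accurate model (no IoU above match_iou)
    is treated as a false positive candidate."""
    return [d for d in preset_dets
            if all(iou(d, ref) < match_iou for ref in accurate_dets)]
```

A detection matched by the reference model is kept as a true positive; everything else is passed on as a false-positive candidate (and, in this scheme, becomes a source of hard negative samples).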
S204: labeling the false positive result as a first background image for generating a negative sample.
The false positive results determined in step S203 are not actually the targets to be detected. In lung nodule detection on a lung CT image, the image region of a false positive result is not actually a nodule but healthy lung tissue that the model wrongly detected as a target (lung nodule); such a region can therefore be considered a part that the preset model finds hard to recognize. In the weak data labeling process of this embodiment, following the idea of "hard example mining", when a physician judges the model's detection results, the false positive regions detected by the model are mapped back onto the weakly labeled dataset and marked as non-target regions (background image regions). These regions can then serve as the source of hard samples (hard examples) among the negative samples during subsequent model training, helping the detection model better identify targets in actual detection tasks.
In a preferred implementation manner of this embodiment, the data annotation method further includes:
s205: all images in the second image dataset are labeled as second background images for generating negative examples.
In the weakly labeled data used in the scheme of the invention, only part of the targets are labeled, and many unlabeled detection targets remain in the images that contain targets; thus the unlabeled parts of such images cannot be used directly as negative samples. The false positives screened in the previous step often cannot meet the demands of model training on their own, and more negative samples are needed. Therefore, images determined not to include any detection target are used as background images for generating negative samples during model training. For medical images of the human body, the detection target is generally a lesion, so medical images of healthy subjects can be used as background images containing no detection target.
In the embodiments of the present invention, a human lung image is taken as the example, and the detection target is the lung nodule parts in the image. Thus, in this step, images of healthy human lungs without nodules are used as the second image dataset to generate the large number of negative samples used for model training.
In addition, for embodiments applying the idea of "hard example mining", the negative samples generated from the second image dataset can serve as the easy examples, complementing the hard examples among the negative samples generated from the false positive results in step S204, thereby achieving a better training effect and improving target detection accuracy.
Example two
Fig. 3 shows a schematic block diagram according to another embodiment of the invention.
As shown in fig. 3, a second embodiment of the present invention provides a training method for a neural network for target detection. The neural network in this embodiment generally includes four parts: a feature extraction network (convolutional layers), a Region Proposal Network (RPN), a target region pooling network (ROI pooling layer), and a target classification network (classification). Typically, in this embodiment, the framework of a Faster R-CNN model is used to construct the neural network for target detection. The Faster R-CNN model is prior art, and a typical Faster R-CNN model is shown in fig. 1; the structure of an exemplary neural network in the present invention is briefly described below and not elaborated further:
The feature extraction network, or convolutional layers, generally uses a set of basic conv + relu + pooling layers to extract feature maps from the image. The feature maps are shared by the subsequent Region Proposal Network (RPN) and the fully connected layers.
The Region Proposal Network (RPN) is used to generate candidate regions (region proposals). In the prior art, the anchors of this layer are judged to belong to positive or negative samples through softmax; the invention improves this step so that it applies better to weakly labeled data. Bounding box regression is then used to correct the anchors and obtain accurate candidate regions.
The target region pooling network (ROI pooling layer) collects the input feature maps and candidate regions, synthesizes this information to extract the candidate region feature maps, and sends them to the subsequent fully connected layers to judge the target category.
The target classification network (classification) calculates the category of each candidate region using the candidate region feature maps, and at the same time performs bounding box regression again to obtain the final accurate position of the detection box.
The training method of the neural network model of the embodiment specifically includes:
s301: a sample image is acquired.
In this embodiment, the sample image used for training the neural network is the medical image labeled by the labeling method in the first embodiment, and specifically includes the first image data set that is not completely labeled and the second image data set that does not include the target to be detected in the image.
S302: the neural network model is trained by generating a positive sample by using an image of a real target (ground route) marked with a marking frame, and generating a negative sample by using the first background image and the second background image marked in the previous embodiment.
In the step of generating positive samples from images of real targets (ground truth) labeled with labeling boxes, the region proposal network of the neural network generates a plurality of anchors of different sizes (anchors, as shown in fig. 4) in the sample image using a sliding window; when the intersection over union (IoU) between an anchor and a labeling box in the sample image is greater than a preset first threshold, the anchor is labeled as a positive sample. In the invention, the IoU between an anchor and a labeling box (ground truth box) is calculated as the area of the overlap between the anchor and the labeling box divided by the area of the union of the anchor and the labeling box. Typically, the first threshold may be 0.7, 0.75, 0.8, 0.9, etc., and is preferably 0.75.
An anchor is a common concept in neural network image processing: in the Region Proposal Network (RPN), a sliding window is used to generate a plurality of rectangular boxes of different sizes, called anchors. To detect objects or lesions of different sizes, suitable anchors can be designed by counting the size distribution of the labeled targets in the dataset. In this embodiment, six anchors of different sizes are designed for each sliding window position: 4×4, 6×6, 10×10, 16×16, 22×22, and 32×32 (see fig. 4).
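The six-sizes-per-position anchor scheme above can be sketched as follows. The sliding-window stride of 8 is an illustrative assumption (the document does not state a stride); only the six side lengths are taken from this embodiment:

```python
ANCHOR_SIZES = [4, 6, 10, 16, 22, 32]  # square side lengths from this embodiment

def generate_anchors(feat_w, feat_h, stride=8):
    """Generate six square anchors centered at each sliding-window
    position of a feat_w x feat_h feature map, returned in image
    coordinates as (x1, y1, x2, y2)."""
    anchors = []
    for j in range(feat_h):
        for i in range(feat_w):
            # center of this sliding-window position in image coordinates
            cx, cy = i * stride + stride / 2, j * stride + stride / 2
            for s in ANCHOR_SIZES:
                anchors.append((cx - s / 2, cy - s / 2, cx + s / 2, cy + s / 2))
    return anchors
```

Each feature-map position thus contributes 6 anchors, so a w×h feature map yields 6·w·h anchors in total.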
In the prior-art Faster R-CNN target detection framework, training the region proposal network is supervised training, i.e., a class label must be given for each anchor. For each anchor, if it highly overlaps the labeling box (ground truth) of some target (e.g., IoU > 0.7), the anchor is considered to contain a target to be detected and is marked as a positive sample; conversely, if the anchor barely overlaps any lesion labeling box (IoU < 0.3), it is marked as a negative sample.
However, for the weakly labeled data used in the present invention, the situation differs from the prior art: if an anchor point does not overlap highly with any lesion box, it cannot be concluded that it contains no target to be detected. Therefore, positive samples are generated from anchor points in regions correctly labeled as nodules in the weakly labeled data (i.e., the labeling-box regions), and negative samples are generated from anchor points in medical images of healthy human bodies and in the non-nodule regions labeled during the weak data labeling process; the unlabeled parts of a labeled image are not used to generate negative samples. Here, the non-nodule regions labeled during the weak data labeling process refer to the negative sample parts labeled from false positive results in the first embodiment of the present invention, and training of the region proposal network then continues. The medical images of healthy persons in the second image dataset may serve as a source of easy samples (easy examples) among the negative samples, while the false positive parts of the images in the first image dataset may serve as hard samples (hard examples) among the negative samples.
The method for generating negative samples from the first background image is consistent with the method for generating positive samples from the labeling boxes: a plurality of anchor points of different sizes are generated in the sample image by the region proposal network, and when the overlap ratio (IOU) between an anchor point and a false positive result is greater than the preset first threshold, the corresponding anchor point is marked as a negative sample. For images of the first image dataset, apart from the positive samples generated from the labeling boxes and the negative samples generated from the false positive results, no other anchor points are used as positive or negative samples.
The process of generating negative samples from the second background image likewise produces a plurality of anchor points of different sizes in the sample image through the region proposal network; the difference is that all anchor points are marked as negative samples, except for anchor points that extend beyond the boundaries of the image itself.
During actual training, the number of negative samples, and in particular the number of easy samples among them, is often far greater than the number of positive samples and hard samples. In this case, a predetermined number of easy samples can be randomly drawn for training according to the required ratio of positive to negative samples and the ratio of hard to easy samples, so as to satisfy the requirement of sample balance.
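The subsampling step above can be sketched as follows; the `easy_per_hard` ratio is an illustrative assumption, since the text only says the proportions are preset.

```python
import random


def sample_negatives(hard_negs, easy_negs, easy_per_hard=3, rng=None):
    """Keep all hard negatives; randomly subsample easy negatives at a fixed ratio."""
    rng = rng or random.Random(0)
    n_easy = min(len(easy_negs), len(hard_negs) * easy_per_hard)
    return hard_negs + rng.sample(easy_negs, n_easy)
```

With two hard negatives and a 3:1 easy-to-hard ratio, only six of the (possibly thousands of) easy negatives enter the minibatch, which keeps the loss from being dominated by trivially classified background anchors.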
S303: updating the parameters of the neural network according to its loss function, wherein in the loss function of the neural network model the weight of the negative samples is reduced, and among the negative samples the weight of the easy samples is reduced.
Considering that the number of anchor points generating positive samples in the weakly labeled data used in the technical solution of the present invention is far smaller than the number generating negative samples, which easily causes an unbalanced ratio of positive to negative samples, and that the easy samples (easy examples) among the negative samples outnumber the hard samples (hard examples), this embodiment does not adopt the cross-entropy loss function commonly used in prior-art neural network frameworks, but instead uses the focal loss function.
Focal loss addresses the problem that, in target detection tasks, positive and negative samples are extremely unbalanced and the detection loss is easily dominated by a large number of simple negative samples. By adding an exponential modulating factor to the standard cross-entropy loss function, the contributions of positive and negative samples to the loss can be adjusted automatically.
The cross-entropy loss function for a binary classification task is:

CE(p, y) = -log(p) if y = 1; CE(p, y) = -log(1 - p) otherwise.

If p_t is defined as:

p_t = p if y = 1; p_t = 1 - p otherwise,

then for binary classification, focal loss is defined as:

FL(p_t) = -(1 - p_t)^γ · log(p_t)

On this basis, another hyperparameter for adjusting the weight can be introduced:

FL(p_t) = -α_t · (1 - p_t)^γ · log(p_t)

This function makes the model focus more on hard samples (hard examples) during training by reducing the weight of easy samples (easy examples). The α_t parameter controls the balance between the numbers of positive and negative samples, and the γ parameter reduces the weight of easy samples (easy examples) in the loss function. In the above formulas, y is the class label of the ground truth; for this patent, y takes the values 1 and 0, representing a nodule and a normal region respectively. p is the prediction probability obtained by the model for the class y = 1, and its value lies between 0 and 1.
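The focal loss defined above, FL(p_t) = -α_t (1 - p_t)^γ log(p_t), can be sketched for a single sample as follows. Applying α_t to the positive class and (1 - α_t) to the negative class is the conventional reading of the formula and is an assumption here; the default α_t = 0.9, γ = 2 follow the values chosen later in the text.

```python
import math


def focal_loss(p, y, alpha_t=0.9, gamma=2.0):
    """Focal loss for one sample.

    p: predicted probability of class y = 1 (between 0 and 1).
    y: ground-truth class label, 1 (nodule) or 0 (normal region).
    """
    if y == 1:
        p_t, alpha = p, alpha_t
    else:
        p_t, alpha = 1.0 - p, 1.0 - alpha_t
    # (1 - p_t)^gamma shrinks the loss of well-classified (easy) samples
    return -alpha * (1.0 - p_t) ** gamma * math.log(p_t)
```

With α_t = 1 and γ = 0 the expression reduces exactly to the cross-entropy loss, while for a confidently correct prediction (p_t near 1) the modulating factor (1 - p_t)^γ drives the loss toward zero, so easy samples contribute little to the gradient.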
In particular, in practical application, taking pulmonary nodule detection as an example: when the class of a certain ROI region is unambiguous, its contribution to the overall loss is small; when the class of the ROI region is hard to distinguish, its contribution to the overall loss is relatively large. The resulting overall loss thus guides the model toward distinguishing the target classes that are difficult to separate, thereby improving the accuracy of overall pulmonary nodule detection. In such detection tasks, the hyperparameters of the focal loss function are determined as follows:
the dynamic adjustment α may be based on the performance of the standard dataset or the first image dataset on the neural network model, the statistical detection rate and the false positive ratetWhen the detection rate of the model and the average false positive rate reach an acceptable balance (generally, the detection rate is more than or equal to 92%, the average number of false positive results of each CT image is about 2, and if the detection rate meets the requirement, but the average false positive rate is too high, α can also be reduced properlytOf) then α under this result may be selectedtMeanwhile, whether a higher gamma value needs to be set or not can be determined according to the trend of the detection rate and the false positive rate, the gamma value under the condition that the detection rate is high and the average false positive rate is low can be kept unchanged, and if the detection rate is high and the average false positive rate is low, the gamma value is increased appropriately, and similarly, in the case of nodule detection of a lung CT image, α which is finally set is taken as an exampletHas a value of0.9, γ is 2, and the neural network used for target detection performs well on the LUNA16 dataset under this parameter.
In another preferred embodiment, the real targets (ground truth) labeled by the labeling boxes can simultaneously be used to generate negative samples, and the neural network model is trained with the negative samples thus generated.
The step of generating negative samples from the real targets (ground truth) labeled by the labeling boxes is specifically as follows: when an anchor point partially overlaps a labeling box and the overlap ratio (IOU) is less than a preset second threshold, the anchor point is marked as a negative sample. Typical values of the second threshold are 0.05, 0.1, 0.15, 0.2, and the like, preferably 0.1.
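Combining this rule with the first threshold gives a three-way labeling of each anchor against a labeling box. The sketch below is illustrative: the `iou` callable is passed in as a parameter (assumed to compute overlap area over union area as defined earlier), and the label strings are my own naming.

```python
def label_anchor(anchor, gt_box, iou, t_pos=0.75, t_neg=0.1):
    """Label one anchor against one ground-truth box.

    IOU above the first threshold -> positive sample; a small but nonzero
    overlap below the second threshold -> hard negative sample near the
    nodule; everything else is left unlabeled for this box.
    """
    v = iou(anchor, gt_box)
    if v > t_pos:
        return "positive"
    if 0 < v < t_neg:
        return "hard_negative"
    return "ignore"
```

Anchors in the intermediate band (IOU between the two thresholds) are neither clearly nodule nor clearly background, so leaving them out of training avoids feeding the network ambiguous supervision.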
This preferred embodiment works well for lesion identification in medical images, in particular pulmonary nodule detection. Based on prior medical knowledge, nodules rarely appear repeatedly in the same region, so some regions around a nodule whose IOU with the nodule is less than 0.1 can be selected and marked as non-nodule regions. This part of the negative samples further compensates for the fact that the easy samples (easy examples) among the original negative samples far outnumber the hard samples (hard examples).
The network parameters of the neural network model are generally updated in the direction that reduces the value of the loss function; through repeated iterative training on a large number of samples, the network parameters are continuously updated, yielding a trained neural network that can finally be used for target detection.
Example Three
As shown in fig. 5, a third embodiment of the present invention provides a method for image processing using a neural network model to detect an object in an image. The method specifically comprises the following steps:
S401: acquiring an image to be processed.
The image to be processed is of the same type as the sample images used to train the neural network, and the target to be detected is of the same type as the targets labeled in the sample images.
As in the previous embodiments, this embodiment still takes the human lung image as an example, with lung nodules as the detection target. The scope of the invention is not limited thereto; according to the labeling data used, the data labeling method, the neural network training method and the image processing method of the invention can be used to detect many different types of images and targets.
S402: calling a neural network model to perform target detection on the image and obtain the target detection result in the image.
The neural network model used in this step is a trained model, obtained by training a preset neural network model with the model training method of the second embodiment; the labeled data used for model training is obtained with the data labeling method of the first embodiment.
S403: outputting the target detection result.
The target detection result comprises target labeling boxes used to indicate the targets, and for each detected target the neural network model also gives a confidence. Therefore, the final output of target detection marks the extent of each detected result on the original image with a labeling box and gives the confidence of each detection result.
In lesion detection of medical images, detection results are generally used to assist clinical diagnosis of doctors.
To test the effectiveness of the solution of the invention, again taking nodule detection in lung images as an example, a test was performed on the public dataset LUNA16. The results are presented in the table below and compared with the following two published methods:
1. Ding, J., Li, A., Hu, Z., Wang, L.: Accurate pulmonary nodule detection in computed tomography images using deep convolutional neural networks. In: MICCAI (2017) 559-567
2. Dou, Q., Chen, H., Jin, Y., et al.: Automated pulmonary nodule detection via 3D ConvNets with online sample filtering and hybrid-loss residual learning. In: MICCAI (2017) 630-638
the experiment was specifically set up as follows:
the LUNA16 dataset is a subset of the largest common pulmonary nodule dataset, LIDC-IDRI, which consists of 1018 low dose lung CT images. The LIDC-IDRI removed CT images with slice thicknesses greater than 3mm and lung nodules less than 3mm, leaving the LUNA16 dataset for a total of 888 CT images. In LUNA16, 5765 nodules satisfying a nodule size greater than 3mm, if two nodules are too close together (center distance less than the sum of radii, in the case of intersection), the two nodules are merged, the merged center and radius are the mean of the two nodules, and 2290 nodules remain after this process. Because four experts are adopted during labeling, 1186 nodules labeled by at least three experts are finally taken as the area to be detected finally. However, for marked areas which are not used as the candidate group channel, in the subsequent processing process, if the algorithm detects the irrelevant areas, the marked areas are not processed into false positive areas or true positive areas, and the detection result is not influenced.
The whole dataset is evenly divided into 10 parts, and ten-fold cross-validation is performed on it as follows:
(1) One part is taken as the test set and the other nine parts as the training set.
(2) The algorithm is trained on the training set.
(3) Testing is performed on the test set and the result file is saved (the result file contains the label of the CT image, the x, y, z coordinates of the nodule detected in that CT, and a score giving the confidence of the detection result).
(4) After the ten-fold cross-validation is completed, the results of the folds are merged into a single result.
The criterion for judging whether a detection result is a true positive nodule is as follows:
If the detected nodule coordinate lies within the radius of a nodule, the detection is a true positive; if several candidate regions all correspond to one nodule, the one with the highest confidence is selected. If a candidate region belongs to an irrelevant region, it is not counted in the results; the remaining candidate regions are false positive regions.
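The scoring criterion above can be sketched as follows; the data layouts (detections as (x, y, z, score), nodules and irrelevant regions as (x, y, z, r) spheres) are illustrative assumptions.

```python
import math


def score_detections(detections, nodules, irrelevant):
    """Split detections into per-nodule best hits and false positives.

    A detection inside a nodule's radius is a true positive (keeping only the
    highest confidence per nodule); one inside an irrelevant region is not
    counted either way; the rest are false positives.
    """
    best_hit = {}   # nodule index -> highest confidence among its hits
    false_pos = []
    for x, y, z, s in detections:
        hit = next((i for i, (nx, ny, nz, r) in enumerate(nodules)
                    if math.dist((x, y, z), (nx, ny, nz)) < r), None)
        if hit is not None:
            best_hit[hit] = max(best_hit.get(hit, 0.0), s)
        elif any(math.dist((x, y, z), (nx, ny, nz)) < r
                 for nx, ny, nz, r in irrelevant):
            continue  # irrelevant region: excluded from the statistics
        else:
            false_pos.append((x, y, z, s))
    return best_hit, false_pos
```

Two candidates on the same nodule collapse to the higher-confidence one, a candidate inside an irrelevant region disappears from the count, and everything else lands in the false positive list.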
The experimental results are evaluated using the FROC (Free-response Receiver Operating Characteristic) criterion commonly used for machine learning algorithms. Specifically, the FROC criterion in this patent describes the relationship between recall (the number of true nodules detected in all tested CT data divided by the number of true nodules in all tested CT data) and the average number of false positive nodules per CT image (the number of predicted nodules that are not true nodules divided by the number of CTs tested, FPs/scan).
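Reading recall off the FROC curve at a given FPs/scan operating point can be sketched as follows. This is a simplified illustration assuming per-detection confidence scores split into true positive and false positive lists; the exact thresholding and interpolation used by LUNA16's official evaluation script may differ.

```python
def froc_recall(tp_scores, fp_scores, n_nodules, n_scans, fps_per_scan):
    """Recall when the confidence threshold admits at most fps_per_scan * n_scans FPs."""
    fp_sorted = sorted(fp_scores, reverse=True)
    k = int(fps_per_scan * n_scans)  # false positive budget across all scans
    # lowest threshold that still keeps the number of admitted FPs within budget
    threshold = fp_sorted[k] if k < len(fp_sorted) else float("-inf")
    detected = sum(1 for s in tp_scores if s > threshold)
    return detected / n_nodules
```

Sweeping `fps_per_scan` over 0.125, 0.25, 0.5, 1, 2, 4 and 8 and averaging the resulting recalls yields the mean sensitivity reported in the comparison table below.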
Comparison of results:

Method        | 0.125 | 0.25  | 0.5   | 1     | 2     | 4     | 8     | Mean
Dou, et al.   | 0.659 | 0.745 | 0.819 | 0.865 | 0.906 | 0.933 | 0.946 | 0.839
Ding, et al.  | 0.748 | 0.853 | 0.887 | 0.922 | 0.938 | 0.944 | 0.946 | 0.891
The invention | 0.744 | 0.828 | 0.888 | 0.940 | 0.961 | 0.975 | 0.979 | 0.9023
The data in the table are the recall ratios of the corresponding nodules at 0.125, 0.25, 0.5, 1, 2, 4 and 8 FPs/scan respectively, and the last column is the average sensitivity. The solution of the invention achieves better results than the prior-art solutions. In addition, some examples of nodule detection results are given in fig. 6.
Example Four
As shown in fig. 7, a fourth embodiment of the present invention provides a neural network model training apparatus, which may be a computer program (including program code) running in a terminal. The model training apparatus may perform the model training method in the second embodiment, and specifically includes:
the system comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is used for acquiring a sample image, an annotation frame used for annotating a real target in the sample image and an annotated negative sample;
the updating unit is used for updating the network parameters of the neural network model according to the loss function of the neural network model;
and the training unit is used for carrying out iterative training on the preset image processing model according to the updated network parameters to obtain a target image processing model.
The units in the model training device may be combined, separately or entirely, into one or several other units, or some unit(s) may be further split into multiple functionally smaller units; this achieves the same operation without affecting the technical effect of the embodiments of the present invention. The units are divided based on logical functions; in practical application, the function of one unit may be realized by multiple units, or the functions of multiple units may be realized by one unit. In other embodiments of the present invention, the model training device may also include other units, and in practical application these functions may be realized with the assistance of other units, through the cooperation of multiple units.
According to another embodiment of the present invention, the model training device shown in fig. 7 may be constructed, and the model training method of the second embodiment of the present invention implemented, by running a computer program (including program code) capable of executing the steps of the corresponding method of the second embodiment on a general-purpose computing device, such as a computer comprising a central processing unit (CPU), random access memory (RAM), read-only memory (ROM) and other storage elements. The computer program may be recorded on, for example, a computer-readable recording medium, and loaded into and executed in the above computing device via the computer-readable recording medium.
Example Five
An embodiment of the present invention provides an image processing apparatus, including:
an acquisition unit configured to acquire an image to be processed;
the processing unit is used for calling a neural network model to carry out target detection on the image to obtain a target detection result; the neural network model is obtained by training a preset neural network model by adopting the model training method in the embodiment;
and the output unit is used for outputting the target detection result.
Example Six
An embodiment of the present invention provides a computer storage medium, where one or more first instructions are stored, where the one or more first instructions are adapted to be loaded by a processor and to execute the model training method in the foregoing embodiment; alternatively, the computer storage medium stores one or more second instructions adapted to be loaded by the processor and to perform the image processing method in the foregoing embodiments.
The steps in the method of each embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs.
The units in the device of each embodiment of the invention can be merged, divided and deleted according to actual needs.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical memory, magnetic disk, magnetic tape, or any other medium that can be used to carry or store data and that can be read by a computer.
The technical solutions of the present invention have been described in detail with reference to the accompanying drawings. The above description is only a preferred embodiment of the present invention and is not intended to limit it; various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included in its scope of protection.