WO2023030298A1 - Polyp typing method, model training method and related devices - Google Patents

Polyp typing method, model training method and related devices

Info

Publication number
WO2023030298A1
Authority
WO
WIPO (PCT)
Prior art keywords
sample
polyp
typing
noise
endoscopic image
Application number
PCT/CN2022/115758
Other languages
English (en)
French (fr)
Inventor
边成
赵秋阳
李永会
李剑
Original Assignee
北京字节跳动网络技术有限公司
Application filed by 北京字节跳动网络技术有限公司
Publication of WO2023030298A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10068Endoscopic image

Definitions

  • the present disclosure relates to the technical field of medical images, and in particular to a polyp typing method, a model training method, and related devices.
  • Deep learning usually relies on a large amount of accurately labeled data. If the data contains incorrect labels (that is, label noise), the prediction accuracy of the model is greatly affected.
  • in the medical field, image data is usually labeled manually by multiple doctors or generated automatically. Owing to the complexity of medical images, doctors cannot guarantee the accuracy of their judgments for some cases, so annotation by multiple doctors inevitably involves some disagreement. In addition, reading a large number of images easily fatigues experts and leads to misjudgment. Therefore, acquired medical data sets usually contain more or less label noise, and when the data set is limited, this label noise usually has a great impact on the training of the model.
  • the present disclosure provides a model training method, which is applied to a polyp typing model, the polyp typing model including a first recognition network and a second recognition network, the method including:
  • determining a plurality of sample endoscopic images, the sample endoscopic images being marked with polyp typing labels;
  • for each of the sample endoscopic images, determining a first sample prediction value for the polyp in the sample endoscopic image through the first recognition network, and determining a second sample prediction value for the polyp in the sample endoscopic image through the second recognition network;
  • classifying the sample endoscopic image as a clean sample or a noise sample according to the difference between the first sample prediction value and the second sample prediction value, the clean sample being a sample endoscopic image whose polyp typing label is correct, and the noise sample being a sample endoscopic image whose polyp typing label is incorrect;
  • training the polyp typing model according to the clean samples and the noise samples.
  • the present disclosure provides a polyp typing method, the method including:
  • acquiring an endoscopic image, the endoscopic image including a polyp to be typed;
  • determining a first typing prediction value for the polyp in the endoscopic image through the first recognition network in a polyp typing model, and determining a second typing prediction value for the polyp in the endoscopic image through the second recognition network in the polyp typing model, the polyp typing model being trained by the model training method described in the first aspect;
  • averaging the first typing prediction value and the second typing prediction value to obtain a target typing prediction value, and determining the target typing result for the polyp in the endoscopic image based on the target typing prediction value.
  • the present disclosure provides a model training device, which is applied to a polyp typing model, the polyp typing model including a first recognition network and a second recognition network, the device including:
  • a first training module, configured to determine a plurality of sample endoscopic images, the sample endoscopic images being marked with polyp typing labels;
  • a second training module, configured to determine, for each sample endoscopic image, a first sample prediction value for the polyp in the sample endoscopic image through the first recognition network, and determine a second sample prediction value for the polyp in the sample endoscopic image through the second recognition network;
  • a third training module, configured to classify the sample endoscopic image as a clean sample or a noise sample according to the difference between the first sample prediction value and the second sample prediction value, the clean sample being a sample endoscopic image whose polyp typing label is correct, and the noise sample being a sample endoscopic image whose polyp typing label is incorrect;
  • a fourth training module, configured to train the polyp typing model according to the clean samples and the noise samples.
  • the present disclosure provides a polyp typing device, the device including:
  • an acquisition module, configured to acquire an endoscopic image, the endoscopic image including a polyp to be typed;
  • a first processing module, configured to determine a first typing prediction value for the polyp in the endoscopic image through the first recognition network in the polyp typing model, and determine a second typing prediction value for the polyp in the endoscopic image through the second recognition network in the polyp typing model, the polyp typing model being trained by the model training method described in the first aspect;
  • a second processing module, configured to average the first typing prediction value and the second typing prediction value to obtain a target typing prediction value, and determine the target typing result for the polyp in the endoscopic image based on the target typing prediction value.
  • the present disclosure provides a non-transitory computer-readable storage medium on which a computer program is stored, and when the program is executed by a processing device, the steps of the method described in the first aspect or the second aspect are implemented.
  • an electronic device, including:
  • a storage device on which a computer program is stored;
  • a processing device configured to execute the computer program in the storage device, so as to implement the steps of the method in the first aspect or the second aspect.
  • through the above technical solution, the polyp typing model may include a first recognition network and a second recognition network, so that clean samples and noise samples can be distinguished by the difference between the sample prediction values output by the two recognition networks for the same endoscopic image. Combining the clean samples and the noise samples for model training then makes full use of the limited polyp sample data set, improves data utilization, and reduces the impact of noise samples on the model's prediction accuracy. In addition, since clean samples and noise samples are obtained through the joint learning of the first recognition network and the second recognition network, rather than by setting a fixed sample selection ratio, fewer noisy data are misclassified as clean samples, thereby improving the prediction accuracy of the polyp typing model.
  • Fig. 1 is a flowchart of a model training method according to an exemplary embodiment of the present disclosure.
  • Fig. 2 is a schematic diagram of inputting an image into a first recognition network and a second recognition network in a model training method according to an exemplary embodiment of the present disclosure.
  • Fig. 3 is a schematic diagram of a polyp classification model in a model training method shown according to an exemplary embodiment of the present disclosure
  • Fig. 4 is a flow chart showing a polyp typing method according to an exemplary embodiment of the present disclosure
  • Fig. 5 is a block diagram of a model training device according to an exemplary embodiment of the present disclosure.
  • Fig. 6 is a block diagram of a polyp typing device according to another exemplary embodiment of the present disclosure.
  • Fig. 7 is a block diagram of an electronic device according to another exemplary embodiment of the present disclosure.
  • the term "comprise" and its variations are open-ended, i.e., "including but not limited to".
  • the term "based on" means "based at least in part on".
  • the term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
  • the inventors have discovered that in other fields there are ways to reduce the impact of label noise on model prediction accuracy through sample selection, for example by treating samples with a small loss (loss function value) as clean samples.
  • however, such sample-selection methods usually select only the clean data to train the model while ignoring the noisy data, and therefore cannot make full use of limited medical data sets.
  • in addition, such methods are usually based on mini-batches (all samples are divided into equal-sized subsets, called mini-batches, to improve training efficiency). In real-world data, the proportion of noise samples in each mini-batch is likely to differ, so the noise sample selection ratio is difficult to choose; if the same selection ratio is set for every mini-batch, some noise samples are easily misclassified as clean samples, which affects the accuracy of model training.
  • therefore, the present disclosure proposes a model that is more robust to label noise in polyp typing data. The model includes a first recognition network and a second recognition network, so that clean samples can be distinguished from noise samples by the difference between the outputs of the two networks. This alleviates the problem, found in sample-selection-based methods in the related art, that the selection does not conform to real data noise, making the model more robust to real polyp data sets containing noisy labels.
  • Fig. 1 is a flowchart showing a model training method according to an exemplary embodiment of the present disclosure.
  • the model training method can be applied to a polyp typing model, and the polyp typing model includes a first recognition network and a second recognition network.
  • the method may include:
  • Step 101: a plurality of sample endoscopic images are determined, and the sample endoscopic images are marked with polyp typing labels.
  • Step 102: for each sample endoscopic image, a first sample prediction value for the polyp in the sample endoscopic image is determined through the first recognition network, and a second sample prediction value for the polyp in the sample endoscopic image is determined through the second recognition network.
  • Step 103: the sample endoscopic image is classified as a clean sample or a noise sample according to the difference between the first sample prediction value and the second sample prediction value.
  • the clean sample is a sample endoscopic image whose polyp typing label is correct, and the noise sample is a sample endoscopic image whose polyp typing label is incorrect.
  • Step 104: the polyp typing model is trained according to the clean samples and the noise samples.
  • through the above steps, the polyp typing model can include the first recognition network and the second recognition network, so that clean samples and noise samples can be distinguished by the difference between the sample prediction values output by the two recognition networks for the same endoscopic image. Combining the clean samples and the noise samples for model training then makes full use of the limited polyp sample data set, improves data utilization, and reduces the impact of noise samples on the prediction accuracy of the model.
  • moreover, since the clean samples and the noise samples are obtained through the joint learning of the first recognition network and the second recognition network, rather than by setting a fixed sample selection ratio, fewer noisy data are misclassified as clean samples, thereby improving the prediction accuracy of the polyp typing model.
  • a plurality of sample endoscopic images of patients including polyps may be collected, and the collected sample endoscopic images may include white light images and narrow-band images.
  • for some patients' polyps, both white light and narrow-band images may be collected, while for others only white light images are available. Therefore, the white light images can be selected from the collected sample endoscopic images as the final plurality of sample endoscopic images.
  • for each sample endoscopic image, a polyp typing label may be marked in advance in a manner known in the related art.
  • for example, the polyp typing label may include hyperplasia, tumor, or cancer.
  • for each sample endoscopic image, the first sample prediction value for the polyp in the sample endoscopic image can be determined through the first recognition network, and the second sample prediction value for the polyp in the sample endoscopic image can be determined through the second recognition network.
  • for example, each sample endoscopic image can be converted into image features of a first dimension through the first recognition network, and the first sample prediction value for the polyp in the sample endoscopic image can be determined based on the image features of the first dimension.
  • likewise, each sample endoscopic image can be converted into image features of a second dimension through the second recognition network, and the second sample prediction value for the polyp in the sample endoscopic image can be determined based on the image features of the second dimension.
  • the first dimension and the second dimension can be set according to actual conditions.
  • that is, each sample endoscopic image may be converted into features of different dimensions for training.
  • for example, suppose the dimension of the sample endoscopic image is H × W × C, where H represents the length of the sample endoscopic image, W represents the width of the sample endoscopic image, and C represents the number of channels of the sample endoscopic image.
  • feature transformation (reshape) is performed, so that an image of dimension H × W × C can be converted into patch sequences of dimension N1 × (H·W·C/N1) and N2 × (H·W·C/N2), where N1 represents the number of sub-images corresponding to the image features of the first dimension (16 in this embodiment), and N2 represents the number of sub-images corresponding to the image features of the second dimension (64 in this embodiment).
  • the first sample prediction value for the polyp in the sample endoscopic image may then be determined from the image features of the first dimension, and the second sample prediction value for the polyp in the sample endoscopic image may be determined from the image features of the second dimension.
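  • as an illustration only (not part of the disclosure), the two-scale feature transformation described above can be sketched in Python, assuming PyTorch tensors, a square input whose side is divisible by the patch grid, and N1 = 16, N2 = 64:

        import torch

        def to_patch_sequence(img: torch.Tensor, num_patches: int) -> torch.Tensor:
            """Reshape an H x W x C image into `num_patches` flattened square
            sub-images, each of dimension H*W*C / num_patches."""
            h, w, c = img.shape
            g = int(round(num_patches ** 0.5))   # sub-images per side (g x g grid)
            ph, pw = h // g, w // g              # sub-image height and width
            x = img.reshape(g, ph, g, pw, c)     # split rows and columns into the grid
            x = x.permute(0, 2, 1, 3, 4)         # group the grid cells together
            return x.reshape(num_patches, ph * pw * c)

        img = torch.randn(224, 224, 3)           # hypothetical input size
        seq16 = to_patch_sequence(img, 16)       # first dimension:  16 x 9408
        seq64 = to_patch_sequence(img, 64)       # second dimension: 64 x 2352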
  • before this, a certain proportion of clean samples can be selected, for example according to sample-selection methods in the related art, and converted into image features of different dimensions for preliminary training of the first recognition network and the second recognition network. That is, the two recognition networks are pre-trained with images of different resolutions, yielding recognition networks of different scales for extracting information of different scales.
  • in this way, the first recognition network and the second recognition network can extract image features of different scales from sample endoscopic images of undifferentiated type for prediction.
  • thus, the accuracy of distinguishing clean samples from noise samples through the two recognition networks can be improved, thereby improving the accuracy of the polyp typing model.
  • after the first sample prediction value and the second sample prediction value for the polyp in the sample endoscopic image are determined through the first recognition network and the second recognition network, the sample endoscopic image can be classified as a clean sample or a noise sample according to the difference between the first sample prediction value and the second sample prediction value.
  • the clean sample is a sample endoscopic image whose polyp typing label is correct, that is, the manually marked polyp typing label is consistent with the actual polyp typing result in the sample endoscopic image; the noise sample is a sample endoscopic image whose polyp typing label is incorrect, that is, the manually marked polyp typing label is inconsistent with the actual polyp typing result in the sample endoscopic image.
  • the inventors' research shows that the features learned by two recognition networks of different scales are different: the networks tend to agree on clean samples but diverge on noise samples. Therefore, in the embodiments of the present disclosure, the difference between the prediction values of the first recognition network and the second recognition network for the same sample endoscopic image can be determined first, so as to distinguish clean samples from noise samples according to the difference.
  • for example, the JS divergence distance between the first sample prediction value and the second sample prediction value can be determined first. If the numerical relationship between the JS divergence distance and a preset threshold satisfies a preset condition, the sample endoscopic image is classified as a clean sample; if the numerical relationship does not satisfy the preset condition, the sample endoscopic image is classified as a noise sample.
  • the preset threshold may be set according to actual conditions, which is not limited in this embodiment of the present disclosure.
  • for example, an initial preset threshold may be determined first, and then, during the training of the polyp typing model, the initial preset threshold is increased once the number of training epochs of the polyp typing model reaches a preset number.
  • that is, at the beginning of training, the preset threshold can be set small, so that the polyp typing model can be trained more easily; the preset threshold can then be slowly increased.
  • for example, the preset threshold can be set as a function of the number of training epochs, where:
  • τ represents the preset threshold;
  • epoch represents the current number of training epochs of the polyp typing model;
  • epoch_warm represents a hyperparameter for the number of warm-up training epochs of the polyp typing model, which can be 10;
  • τ_C represents a hyperparameter of the polyp typing model;
  • τ_m represents a preset constant, which can be 0.95;
  • epoch_max represents the total number of training epochs.
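  • the formula itself is not reproduced here, so the following Python sketch only illustrates the described behavior (a small threshold during warm-up that is then slowly increased); the ramp shape, the initial value tau_c, and epoch_max = 100 are assumptions, while epoch_warm = 10 and tau_m = 0.95 follow the values mentioned above:

        def preset_threshold(epoch: int,
                             epoch_warm: int = 10,    # warm-up epochs (from the text)
                             epoch_max: int = 100,    # total epochs (assumed)
                             tau_c: float = 0.3,      # initial threshold (hypothetical)
                             tau_m: float = 0.95) -> float:
            """Keep a small threshold while warming up, then increase it
            gradually toward tau_m as training proceeds (assumed schedule)."""
            if epoch < epoch_warm:
                return tau_c
            progress = (epoch - epoch_warm) / max(epoch_max - epoch_warm, 1)
            return tau_c + (tau_m - tau_c) * progress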
  • for example, the preset condition includes that the JS divergence distance between the first sample prediction value and the second sample prediction value is smaller than the preset threshold, or that a sample index is greater than the preset threshold, where the sample index is the difference obtained by subtracting the JS divergence distance from 1.
  • the JS divergence distance between the first sample prediction value and the second sample prediction value can be determined according to the following formula:
  • JS(p1 ‖ p2) = (1/2) · KL(p1 ‖ (p1 + p2)/2) + (1/2) · KL(p2 ‖ (p1 + p2)/2)
  • where JS represents the JS divergence distance between the first sample prediction value and the second sample prediction value; KL represents the Kullback-Leibler divergence; p1 represents the first sample prediction value; p2 represents the second sample prediction value; the distance is computed per sample endoscopic image, with the sums inside the KL terms running over the M polyp typing categories.
  • if the JS divergence distance between the first sample prediction value and the second sample prediction value is less than the preset threshold, the numerical relationship between the JS divergence distance and the preset threshold satisfies the preset condition, so the sample endoscopic image is classified as a clean sample. Otherwise, if the JS divergence distance is not less than (that is, greater than or equal to) the preset threshold, the numerical relationship does not satisfy the preset condition, and the sample endoscopic image is classified as a noise sample.
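  • a minimal Python sketch of this JS-divergence-based split, assuming the two networks output per-sample probability vectors over the M typing categories (the eps constant is an implementation detail added for numerical stability, not from the disclosure):

        import torch

        def js_divergence(p1: torch.Tensor, p2: torch.Tensor,
                          eps: float = 1e-12) -> torch.Tensor:
            """Jensen-Shannon divergence between two probability vectors,
            computed over the last (category) dimension."""
            m = 0.5 * (p1 + p2)
            kl1 = (p1 * ((p1 + eps) / (m + eps)).log()).sum(dim=-1)
            kl2 = (p2 * ((p2 + eps) / (m + eps)).log()).sum(dim=-1)
            return 0.5 * kl1 + 0.5 * kl2

        def is_clean(p1, p2, threshold):
            # Clean when the two networks' predictions agree closely enough.
            return js_divergence(p1, p2) < threshold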
  • in other embodiments, the difference between the first sample prediction value and the second sample prediction value can also be measured by means other than the JS divergence, such as the Wasserstein distance, which is not limited in this embodiment of the present disclosure.
  • in this way, the difference between the sample prediction values output by the first recognition network and the second recognition network can be measured by the JS divergence, so as to distinguish clean samples from noise samples more accurately and avoid the problem, found in sample-selection-based methods, that the sample selection does not conform to real data noise, making the polyp typing model more robust to polyp data sets with real noise labels.
  • after the sample endoscopic image is classified as a noise sample, the predicted typing result of the polyp typing model for the noise sample can also be determined according to the first sample prediction value and the second sample prediction value corresponding to the noise sample, and a noise pseudo-label for the noise sample can be determined according to the predicted typing result of the noise sample, the polyp typing label marked on the noise sample, and a hyperparameter of the polyp typing model.
  • accordingly, training the polyp typing model based on the clean samples and the noise samples can be: training the polyp typing model according to the predicted typing results of the polyp typing model for the clean samples, the polyp typing labels marked on the clean samples, the predicted typing results of the polyp typing model for the noise samples, and the noise pseudo-labels of the noise samples.
  • that is to say, a noise pseudo-label can be generated for each noise sample, so that the noise sample can be trained with the noise pseudo-label instead of the manually marked initial label. This further improves the prediction accuracy of the polyp typing model while reducing the cost of secondary manual labeling and improving the training efficiency of the polyp typing model.
  • since the polyp typing model in the embodiments of the present disclosure adopts a first recognition network and a second recognition network, the first sample prediction value output by the first recognition network and the second sample prediction value output by the second recognition network can be combined to determine the predicted typing result of the polyp typing model for the noise sample.
  • then, the noise pseudo-label of the noise sample can be determined according to the predicted typing result of the noise sample, the polyp typing label marked on the noise sample, and the hyperparameter of the polyp typing model.
  • for example, the noise pseudo-label of a noise sample can be determined as a combination of the predicted typing result and the marked label, where:
  • y′ represents the noise pseudo-label of the noise sample;
  • y_n represents the polyp typing label marked on the noise sample;
  • λ represents the hyperparameter of the polyp typing model, which ranges from 0 to 1 and can be 0.5.
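  • since the exact formula is not reproduced here, the sketch below assumes the common convex-combination form, with the predicted typing result taken as the average of the two networks' predictions; both are assumptions, not statements of the disclosure:

        import torch

        def noise_pseudo_label(y_n: torch.Tensor, p1: torch.Tensor,
                               p2: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
            """Assumed form: mix the marked one-hot label y_n with the predicted
            typing result y_hat, weighted by the hyperparameter lam."""
            y_hat = 0.5 * (p1 + p2)   # predicted typing result (assumed average)
            return lam * y_n + (1.0 - lam) * y_hat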
  • in this way, model training can combine clean samples and noise samples, making full use of the limited polyp sample data set, improving data utilization, and improving the robustness of the polyp typing model to noisy data.
  • specifically, for clean samples, a first loss function can be calculated according to the first sample prediction value corresponding to the clean sample and its polyp typing label, and a second loss function can be calculated according to the second sample prediction value corresponding to the clean sample and its polyp typing label; the parameters of the polyp typing model are then adjusted according to the calculation results of the first loss function and the second loss function.
  • for noise samples, a third loss function can be calculated according to the first sample prediction value corresponding to the noise sample and its noise pseudo-label, and a fourth loss function can be calculated according to the second sample prediction value corresponding to the noise sample and its noise pseudo-label; the parameters of the polyp typing model are then adjusted according to the calculation results of the third loss function and the fourth loss function.
  • for example, the overall training objective of the polyp typing model can be set as:
  • L = L_CE(y_c, y_clean1) + L_CE(y_c, y_clean2) + L_CE(y′, y_noisy1) + L_CE(y′, y_noisy2)
  • where L represents the loss function of the polyp typing model; L_CE represents the cross-entropy loss; y_c represents the polyp typing label of the clean sample; y_clean1 represents the first sample prediction value corresponding to the clean sample; y_clean2 represents the second sample prediction value corresponding to the clean sample; y′ represents the noise pseudo-label of the noise sample; y_noisy1 represents the first sample prediction value corresponding to the noise sample; and y_noisy2 represents the second sample prediction value corresponding to the noise sample.
  • that is, when the loss function of the polyp typing model is calculated according to the above formula, the first two terms are calculated for clean samples and the last two terms are calculated for noise samples. In other words, in the embodiments of the present disclosure, different loss function calculations can be used for clean samples and noise samples, thereby improving the training effect of the polyp typing model.
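  • a sketch of this objective in Python, assuming the two networks output logits, clean labels are class indices, and noise pseudo-labels are probability vectors (so the noisy terms use a soft-target cross entropy written out manually):

        import torch
        import torch.nn.functional as F

        def soft_cross_entropy(target_probs, logits):
            """Cross entropy against a probability-vector target."""
            return -(target_probs * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

        def polyp_typing_loss(logits1_c, logits2_c, y_c,       # clean samples
                              logits1_n, logits2_n, y_prime):  # noise samples
            """L = L_CE(y_c, y_clean1) + L_CE(y_c, y_clean2)
                 + L_CE(y', y_noisy1) + L_CE(y', y_noisy2)"""
            loss_clean = F.cross_entropy(logits1_c, y_c) + F.cross_entropy(logits2_c, y_c)
            loss_noisy = soft_cross_entropy(y_prime, logits1_n) + \
                         soft_cross_entropy(y_prime, logits2_n)
            return loss_clean + loss_noisy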
  • the model training method provided by the present disclosure is described below with reference to the schematic diagram of the model structure shown in Fig. 3.
  • the polyp typing model includes a first recognition network and a second recognition network.
  • the first recognition network and the second recognition network may each be a vision transformer network.
  • for example, the first recognition network and the second recognition network each include a linear mapping module (Linear Projection), a position encoder (Embed), a normalization module (Layer Normalization), a self-attention module (Multi-Head Attention), and a multilayer perceptron (MLP).
  • the linear mapping module is used to map the flattened image features to the dimensions of the hidden layer.
  • the position encoder is used to obtain the position information of the image.
  • the self-attention module is used to learn the key information parts in the image.
  • the first recognition network converts the sample endoscopic image into image features of the first dimension and determines the first sample prediction value based on them; the second recognition network converts the sample endoscopic image into image features of the second dimension and determines the second sample prediction value based on them.
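  • for illustration, a minimal vision-transformer-style recognition network with the modules listed above can be sketched as follows; the hidden width, number of heads, depth, and three typing categories are hypothetical choices, not values from the disclosure:

        import torch
        import torch.nn as nn

        class RecognitionNetwork(nn.Module):
            """Linear Projection + position embedding + transformer encoder
            (Layer Normalization, Multi-Head Attention, MLP) + typing head."""

            def __init__(self, num_patches: int, patch_dim: int,
                         hidden: int = 256, depth: int = 4, num_classes: int = 3):
                super().__init__()
                self.proj = nn.Linear(patch_dim, hidden)                       # Linear Projection
                self.pos = nn.Parameter(torch.zeros(1, num_patches, hidden))  # Embed
                layer = nn.TransformerEncoderLayer(
                    d_model=hidden, nhead=8, dim_feedforward=4 * hidden,
                    batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
                self.head = nn.Linear(hidden, num_classes)

            def forward(self, patches: torch.Tensor) -> torch.Tensor:
                # patches: (batch, num_patches, patch_dim)
                x = self.proj(patches) + self.pos
                x = self.encoder(x).mean(dim=1)       # mean-pool the token features
                return torch.softmax(self.head(x), dim=-1)

        # Two scales: 16 vs. 64 sub-images of a 224 x 224 x 3 image (assumed size).
        net1 = RecognitionNetwork(num_patches=16, patch_dim=9408)
        net2 = RecognitionNetwork(num_patches=64, patch_dim=2352)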
  • the JS module can determine the JS divergence distance between the first sample prediction value and the second sample prediction value, thereby distinguishing clean samples from noise samples; for noise samples, noise pseudo-labels can also be generated.
  • for a noise sample, the first sample prediction value output by the first recognition network can be input into a classifier (Classifier) to obtain a predicted typing result, and the loss function can be calculated according to the predicted typing result and the noise pseudo-label, so that the model parameters are adjusted according to the calculation result of the loss function.
  • similarly, the second sample prediction value output by the second recognition network can be input into a corresponding classifier to obtain a predicted typing result, and the loss function can be calculated according to that predicted typing result and the noise pseudo-label, so as to adjust the model parameters according to the calculation result of the loss function.
  • for a clean sample, the loss function can be calculated according to the predicted typing result and the polyp typing label marked on the clean sample, so that the model parameters are adjusted according to the calculation result of the loss function.
  • through the above technical solution, the polyp typing model can include a first recognition network and a second recognition network, so that clean samples and noise samples can be distinguished by the difference between the sample prediction values output by the two recognition networks for the same endoscopic image. Combining the clean samples and the noise samples for model training makes full use of the limited polyp sample data set, improves data utilization, and reduces the impact of noise samples on the prediction accuracy of the polyp typing model. Moreover, since the clean samples and the noise samples are obtained through the joint learning of the two recognition networks, rather than by setting a fixed sample selection ratio, fewer noisy data are misclassified as clean samples, thereby improving the prediction accuracy of the polyp typing model.
  • in addition, noise pseudo-labels can be generated for the noise samples, so that model training uses the noise pseudo-labels instead of the initial manual labels for those samples, which can further improve the prediction accuracy of the polyp typing model, reduce the cost of secondary manual labeling, and improve the training efficiency of the polyp typing model.
  • based on the same inventive concept, the present disclosure also provides a polyp typing method. Referring to Fig. 4, the method includes the following steps:
  • Step 401: an endoscopic image is acquired, the endoscopic image including a polyp to be typed.
  • Step 402: a first typing prediction value for the polyp in the endoscopic image is determined through the first recognition network in the polyp typing model, and a second typing prediction value for the polyp in the endoscopic image is determined through the second recognition network in the polyp typing model.
  • the polyp typing model is trained by any of the above-described model training methods.
  • Step 403: the first typing prediction value and the second typing prediction value are averaged to obtain a target typing prediction value, and the target typing result for the polyp in the endoscopic image is determined based on the target typing prediction value.
  • the endoscopic image may be acquired from an endoscope device.
  • for example, the polyp typing method provided by the present disclosure can be applied to the control unit of the endoscope device; after the control unit acquires the endoscopic image collected by the image acquisition unit of the endoscope device, it can execute the polyp typing method provided by the present disclosure, so as to determine the target typing result for the polyp in the endoscopic image through the trained polyp typing model.
  • alternatively, the polyp typing method provided by the present disclosure can be applied to a medical system including an endoscope device; a control device in the medical system can communicate with the endoscope device in a wired or wireless manner, acquire the endoscopic image from the endoscope device, and execute the polyp typing method provided by the present disclosure, so as to determine the target typing result for the polyp in the endoscopic image through the trained polyp typing model.
  • the trained polyp typing model includes a first recognition network and a second recognition network, so the first typing prediction value for the polyp in the endoscopic image can be determined through the first recognition network, and the second typing prediction value can be determined through the second recognition network; the target typing prediction value is then obtained according to the following formula:
  • y_test = (p_1 + p_2) / 2
  • where y_test represents the target typing prediction value; p_1 represents the first typing prediction value output by the first recognition network for the polyp in the endoscopic image; and p_2 represents the second typing prediction value output by the second recognition network for the polyp in the endoscopic image.
  • finally, the target typing result for the polyp in the endoscopic image can be determined through the classifier based on the target typing prediction value.
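  • an inference sketch under the same assumptions as the earlier examples (networks that output probability vectors; all names are hypothetical):

        import torch

        @torch.no_grad()
        def predict_typing(net1, net2, patches16, patches64):
            """Average the two typing prediction values and take the
            highest-scoring category as the target typing result."""
            p1 = net1(patches16)             # first typing prediction value
            p2 = net2(patches64)             # second typing prediction value
            y_test = 0.5 * (p1 + p2)         # target typing prediction value
            return y_test.argmax(dim=-1)     # target typing result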
  • through the above technical solution, the polyp typing model is obtained by distinguishing clean samples from noise samples through the difference between the sample prediction values output by the two recognition networks for the same endoscopic image, and then training on the clean samples and noise samples in combination. Since the clean samples and noise samples are fully utilized in the training process, the robustness of the polyp typing model is high, so typing based on this model can improve the accuracy of the polyp typing results.
  • moreover, the clean samples and noise samples are distinguished through the joint learning of the two recognition networks; compared with setting a fixed sample selection ratio, this reduces the misclassification of noisy data as clean samples, thereby further improving the accuracy of the polyp typing results.
  • based on the same inventive concept, the present disclosure also provides a model training device, which can be implemented as part or all of an electronic device by software, hardware, or a combination of the two.
  • the device is used for training a polyp typing model, and the polyp typing model includes a first recognition network and a second recognition network.
  • the model training device 500 includes:
  • the first training module 501 is configured to determine a plurality of sample endoscopic images, the sample endoscopic images being marked with polyp typing labels;
  • the second training module 502 is configured to determine, for each sample endoscopic image, a first sample prediction value for the polyp in the sample endoscopic image through the first recognition network, and determine a second sample prediction value for the polyp in the sample endoscopic image through the second recognition network;
  • the third training module 503 is configured to classify the sample endoscopic image as a clean sample or a noise sample according to the difference between the first sample prediction value and the second sample prediction value, the clean sample being a sample endoscopic image whose polyp typing label is correct and the noise sample being a sample endoscopic image whose polyp typing label is incorrect;
  • the fourth training module 504 is configured to train the polyp typing model according to the clean samples and the noise samples.
  • optionally, the device 500 further includes:
  • a fifth training module, configured to determine, after the sample endoscopic image is classified as a noise sample, the predicted typing result of the polyp typing model for the noise sample according to the first sample prediction value and the second sample prediction value corresponding to the noise sample;
  • a sixth training module, configured to determine the noise pseudo-label of the noise sample according to the predicted typing result of the noise sample, the polyp typing label marked on the noise sample, and the hyperparameter of the polyp typing model;
  • accordingly, the fourth training module 504 is configured to:
  • train the polyp typing model according to the predicted typing results of the polyp typing model for the clean samples, the polyp typing labels marked on the clean samples, the predicted typing results of the polyp typing model for the noise samples, and the noise pseudo-labels of the noise samples.
  • optionally, the fourth training module 504 is configured to: calculate a first loss function according to the first sample prediction value corresponding to the clean sample and its polyp typing label; calculate a second loss function according to the second sample prediction value corresponding to the clean sample and its polyp typing label; calculate a third loss function according to the first sample prediction value corresponding to the noise sample and its noise pseudo-label; calculate a fourth loss function according to the second sample prediction value corresponding to the noise sample and its noise pseudo-label; and adjust the parameters of the polyp typing model according to the calculation results of the first, second, third, and fourth loss functions.
  • optionally, the second training module 502 is configured to: convert the sample endoscopic image into image features of a first dimension through the first recognition network and determine the first sample prediction value based on the image features of the first dimension; and convert the sample endoscopic image into image features of a second dimension through the second recognition network and determine the second sample prediction value based on the image features of the second dimension.
  • optionally, the third training module 503 is configured to: determine the JS divergence distance between the first sample prediction value and the second sample prediction value; classify the sample endoscopic image as a clean sample if the numerical relationship between the JS divergence distance and the preset threshold satisfies the preset condition; and classify the sample endoscopic image as a noise sample if the numerical relationship does not satisfy the preset condition.
  • optionally, the preset condition includes that the JS divergence distance is less than the preset threshold, or that the sample index is greater than the preset threshold, where the sample index is the difference obtained by subtracting the JS divergence distance from 1.
  • optionally, the preset threshold is set by the following modules:
  • a determination module, configured to determine an initial preset threshold;
  • an adjustment module, configured to increase the initial preset threshold if the number of training epochs of the polyp typing model reaches a preset number during the training of the polyp typing model.
  • based on the same inventive concept, the present disclosure also provides a polyp typing device, which can be implemented as part or all of an electronic device by software, hardware, or a combination of the two. The electronic device can be an endoscope device or medical equipment including an endoscope device.
  • referring to Fig. 6, the polyp typing device 600 includes:
  • an acquisition module 601, configured to acquire an endoscopic image, the endoscopic image including a polyp to be typed;
  • a first processing module 602, configured to determine a first typing prediction value for the polyp in the endoscopic image through the first recognition network in the polyp typing model, and determine a second typing prediction value for the polyp in the endoscopic image through the second recognition network in the polyp typing model, the polyp typing model being trained by any of the above-described model training methods;
  • a second processing module 603, configured to average the first typing prediction value and the second typing prediction value to obtain a target typing prediction value, and determine the target typing result for the polyp in the endoscopic image based on the target typing prediction value.
  • based on the same inventive concept, an embodiment of the present disclosure also provides a non-transitory computer-readable storage medium on which a computer program is stored, and when the program is executed by a processing device, the steps of any of the above-described model training methods or polyp typing methods are implemented.
  • based on the same inventive concept, an embodiment of the present disclosure also provides an electronic device, including:
  • a storage device on which a computer program is stored;
  • a processing device configured to execute the computer program in the storage device, so as to implement the steps of any of the above-described model training methods or polyp typing methods.
  • referring to Fig. 7, it shows a schematic structural diagram of an electronic device 700 suitable for implementing the embodiments of the present disclosure.
  • the terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (such as car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 7 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
  • as shown in Fig. 7, the electronic device 700 may include a processing device (such as a central processing unit or a graphics processing unit) 701, which can execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data necessary for the operation of the electronic device 700.
  • the processing device 701, ROM 702, and RAM 703 are connected to each other through a bus 704.
  • An input/output (I/O) interface 705 is also connected to the bus 704 .
  • the following devices can be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 707 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 708 including, for example, a magnetic tape and a hard disk; and a communication device 709.
  • the communication device 709 may allow the electronic device 700 to communicate with other devices wirelessly or by wire to exchange data. While Fig. 7 shows an electronic device 700 having various devices, it should be understood that it is not required to implement or have all of the devices shown; more or fewer devices may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network via communication means 709, or from storage means 708, or from ROM 702.
  • when the computer program is executed by the processing device 701, the above functions defined in the methods of the embodiments of the present disclosure are performed.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • a computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • in some embodiments, communication may be performed using any currently known or future developed network protocol such as HTTP (HyperText Transfer Protocol), and interconnection may be achieved with digital data communication in any form or medium (for example, a communication network).
  • examples of communication networks include local area networks ("LANs"), wide area networks ("WANs"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • the above computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: determine a plurality of sample endoscopic images, the sample endoscopic images being marked with polyp typing labels; for each sample endoscopic image, determine a first sample prediction value for the polyp in the sample endoscopic image through the first recognition network, and determine a second sample prediction value for the polyp in the sample endoscopic image through the second recognition network; classify the sample endoscopic image as a clean sample or a noise sample according to the difference between the first sample prediction value and the second sample prediction value, the clean sample being a sample endoscopic image whose polyp typing label is correct and the noise sample being a sample endoscopic image whose polyp typing label is incorrect; and train the polyp typing model according to the clean samples and the noise samples.
  • alternatively, the above computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: acquire an endoscopic image, the endoscopic image including a polyp to be typed; determine a first typing prediction value for the polyp in the endoscopic image through the first recognition network in the polyp typing model, and determine a second typing prediction value for the polyp in the endoscopic image through the second recognition network in the polyp typing model, the polyp typing model being trained by any of the above-described model training methods; and average the first typing prediction value and the second typing prediction value to obtain a target typing prediction value, and determine the target typing result for the polyp in the endoscopic image based on the target typing prediction value.
  • computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • in cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or can be implemented by a combination of dedicated hardware and computer instructions.
  • the modules involved in the embodiments described in the present disclosure may be implemented by software or by hardware, and the name of a module does not, under certain circumstances, constitute a limitation on the module itself.
  • for example, and without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), and complex programmable logic devices (CPLDs).
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • Example 1 provides a model training method, which is applied to a polyp typing model, the polyp typing model including a first recognition network and a second recognition network, the method including:
  • determining a plurality of sample endoscopic images, the sample endoscopic images being marked with polyp typing labels;
  • for each of the sample endoscopic images, determining a first sample prediction value for the polyp in the sample endoscopic image through the first recognition network, and determining a second sample prediction value for the polyp in the sample endoscopic image through the second recognition network;
  • classifying the sample endoscopic image as a clean sample or a noise sample according to the difference between the first sample prediction value and the second sample prediction value, the clean sample being a sample endoscopic image whose polyp typing label is correct, and the noise sample being a sample endoscopic image whose polyp typing label is incorrect;
  • training the polyp typing model according to the clean samples and the noise samples.
  • Example 2 provides the method of Example 1, where, after the sample endoscopic image is classified as a noise sample, the method further includes:
  • determining the predicted typing result of the polyp typing model for the noise sample according to the first sample prediction value and the second sample prediction value corresponding to the noise sample;
  • determining the noise pseudo-label of the noise sample according to the predicted typing result of the noise sample, the polyp typing label marked on the noise sample, and the hyperparameter of the polyp typing model;
  • and where the training of the polyp typing model based on the clean samples and the noise samples includes:
  • training the polyp typing model according to the predicted typing results of the polyp typing model for the clean samples, the polyp typing labels marked on the clean samples, the predicted typing results of the polyp typing model for the noise samples, and the noise pseudo-labels of the noise samples.
  • Example 3 provides the method of Example 2, where training the polyp typing model according to the predicted typing results for the clean samples, the polyp typing labels marked on the clean samples, the predicted typing results for the noise samples, and the noise pseudo-labels of the noise samples includes:
  • calculating a first loss function according to the first sample prediction value corresponding to the clean sample and its polyp typing label, and calculating a second loss function according to the second sample prediction value corresponding to the clean sample and its polyp typing label;
  • calculating a third loss function according to the first sample prediction value corresponding to the noise sample and its noise pseudo-label, and calculating a fourth loss function according to the second sample prediction value corresponding to the noise sample and its noise pseudo-label;
  • adjusting the parameters of the polyp typing model according to the calculation results of the first, second, third, and fourth loss functions.
  • Example 4 provides the method of any one of Examples 1-3, where determining, for each sample endoscopic image, the first sample prediction value for the polyp in the sample endoscopic image through the first recognition network and the second sample prediction value for the polyp in the sample endoscopic image through the second recognition network includes: converting the sample endoscopic image into image features of a first dimension through the first recognition network and determining the first sample prediction value based on the image features of the first dimension; and converting the sample endoscopic image into image features of a second dimension through the second recognition network and determining the second sample prediction value based on the image features of the second dimension.
  • Example 5 provides the method of any one of Examples 1-3, where classifying the sample endoscopic image as a clean sample or a noise sample according to the difference between the first sample prediction value and the second sample prediction value includes: determining the JS divergence distance between the first sample prediction value and the second sample prediction value; classifying the sample endoscopic image as a clean sample if the numerical relationship between the JS divergence distance and the preset threshold satisfies the preset condition; and classifying the sample endoscopic image as a noise sample if the numerical relationship does not satisfy the preset condition.
  • Example 6 provides the method of Example 5, where the preset condition includes that the JS divergence distance is less than the preset threshold, or that the sample index is greater than the preset threshold, the sample index being the difference obtained by subtracting the JS divergence distance from 1.
  • Example 7 provides the method of Example 5, where the preset threshold is set in the following manner: determining an initial preset threshold; and, during the training of the polyp typing model, increasing the initial preset threshold if the number of training epochs of the polyp typing model reaches a preset number.
  • Example 8 provides a polyp typing method, the method including:
  • acquiring an endoscopic image, the endoscopic image including a polyp to be typed;
  • determining a first typing prediction value for the polyp in the endoscopic image through the first recognition network in a polyp typing model, and determining a second typing prediction value for the polyp in the endoscopic image through the second recognition network in the polyp typing model, the polyp typing model being trained by the model training method of any one of Examples 1-7;
  • averaging the first typing prediction value and the second typing prediction value to obtain a target typing prediction value, and determining the target typing result for the polyp in the endoscopic image based on the target typing prediction value.
  • Example 9 provides a model training device, which is applied to a polyp typing model, the polyp typing model including a first recognition network and a second recognition network, the device including:
  • a first training module, configured to determine a plurality of sample endoscopic images, the sample endoscopic images being marked with polyp typing labels;
  • a second training module, configured to determine, for each sample endoscopic image, a first sample prediction value for the polyp in the sample endoscopic image through the first recognition network, and determine a second sample prediction value for the polyp in the sample endoscopic image through the second recognition network;
  • a third training module, configured to classify the sample endoscopic image as a clean sample or a noise sample according to the difference between the first sample prediction value and the second sample prediction value, the clean sample being a sample endoscopic image whose polyp typing label is correct, and the noise sample being a sample endoscopic image whose polyp typing label is incorrect;
  • a fourth training module, configured to train the polyp typing model according to the clean samples and the noise samples.
  • Example 10 provides a polyp typing device, the device including:
  • an acquisition module, configured to acquire an endoscopic image, the endoscopic image including a polyp to be typed;
  • a first processing module, configured to determine a first typing prediction value for the polyp in the endoscopic image through the first recognition network in the polyp typing model, and determine a second typing prediction value for the polyp in the endoscopic image through the second recognition network in the polyp typing model, the polyp typing model being trained by the model training method of any one of Examples 1-7;
  • a second processing module, configured to average the first typing prediction value and the second typing prediction value to obtain a target typing prediction value, and determine the target typing result for the polyp in the endoscopic image based on the target typing prediction value.
  • Example 11 provides a non-transitory computer-readable storage medium having a computer program stored thereon, where the program, when executed by a processing device, implements the steps of the method of any one of Examples 1-8.
  • Example 12 provides an electronic device, comprising:
  • a storage device having a computer program stored thereon; and
  • a processing device configured to execute the computer program in the storage device to implement the steps of the method of any one of Examples 1-8.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biomedical Technology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Endoscopes (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to a polyp typing method, a model training method, and related apparatuses, so as to provide a model that is more robust to the label noise in polyp typing data. The model training method includes: determining a plurality of sample endoscopic images, the sample endoscopic images being annotated with polyp typing labels; for each sample endoscopic image, determining a first sample predicted value for the polyp in the sample endoscopic image through a first recognition network, and determining a second sample predicted value for the polyp in the sample endoscopic image through a second recognition network; and classifying the sample endoscopic image as a clean sample or a noise sample according to the difference between the first sample predicted value and the second sample predicted value, the clean sample being a sample endoscopic image whose polyp typing label is annotated correctly, and the noise sample being a sample endoscopic image whose polyp typing label is annotated incorrectly; and training the polyp typing model according to the clean samples and the noise samples.

Description

Polyp typing method, model training method, and related apparatus
Cross-Reference to Related Applications
This application is based on and claims priority to Chinese Patent Application No. 202111034220.2, filed on September 3, 2021 and entitled "Polyp typing method, model training method, and related apparatus", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of medical imaging, and in particular to a polyp typing method, a model training method, and related apparatuses.
Background
Deep learning typically relies on a large amount of accurately annotated data; if the data contains incorrect annotations (i.e., label noise), the accuracy of model predictions suffers greatly. In the medical field, image data is usually annotated manually by multiple physicians or generated automatically. Because medical images are complex, physicians cannot guarantee the accuracy of their judgment for every case, so multi-physician annotation inevitably involves some disagreement. Moreover, reading large numbers of images easily fatigues experts and leads to misjudgment. Acquired medical datasets therefore usually contain some amount of label noise, and when the dataset is limited this label noise usually has a large impact on model training.
In the field of polyp typing, however, a single convolutional neural network model is typically used, and the label noise that may exist in the polyp training data is usually not considered during training, so the prediction accuracy of polyp typing models is strongly affected by label noise.
Summary
This summary is provided to introduce concepts in a brief form; the concepts are described in detail in the detailed description below. This summary is not intended to identify key or essential features of the claimed technical solution, nor is it intended to limit the scope of the claimed technical solution.
In a first aspect, the present disclosure provides a model training method applied to a polyp typing model, the polyp typing model including a first recognition network and a second recognition network, the method including:
determining a plurality of sample endoscopic images, the sample endoscopic images being annotated with polyp typing labels;
for each sample endoscopic image, determining a first sample predicted value for the polyp in the sample endoscopic image through the first recognition network, and determining a second sample predicted value for the polyp in the sample endoscopic image through the second recognition network;
classifying the sample endoscopic image as a clean sample or a noise sample according to the difference between the first sample predicted value and the second sample predicted value, the clean sample being a sample endoscopic image whose polyp typing label is annotated correctly, and the noise sample being a sample endoscopic image whose polyp typing label is annotated incorrectly;
training the polyp typing model according to the clean samples and the noise samples.
In a second aspect, the present disclosure provides a polyp typing method, the method including:
acquiring an endoscopic image, the endoscopic image including a polyp to be typed;
determining a first typing predicted value for the polyp in the endoscopic image through the first recognition network in a polyp typing model, and determining a second typing predicted value for the polyp in the endoscopic image through the second recognition network in the polyp typing model, the polyp typing model being trained by the model training method of the first aspect;
averaging the first typing predicted value and the second typing predicted value to obtain a target typing predicted value, and determining a target typing result for the polyp in the endoscopic image based on the target typing predicted value.
In a third aspect, the present disclosure provides a model training apparatus applied to a polyp typing model, the polyp typing model including a first recognition network and a second recognition network, the apparatus including:
a first training module, configured to determine a plurality of sample endoscopic images, the sample endoscopic images being annotated with polyp typing labels;
a second training module, configured to determine, for each sample endoscopic image, a first sample predicted value for the polyp in the sample endoscopic image through the first recognition network, and determine a second sample predicted value for the polyp in the sample endoscopic image through the second recognition network;
a third training module, configured to classify the sample endoscopic image as a clean sample or a noise sample according to the difference between the first sample predicted value and the second sample predicted value, the clean sample being a sample endoscopic image whose polyp typing result is annotated correctly, and the noise sample being a sample endoscopic image whose polyp typing result is annotated incorrectly;
a fourth training module, configured to train the polyp typing model according to the clean samples and the noise samples.
In a fourth aspect, the present disclosure provides a polyp typing apparatus, the apparatus including:
an acquisition module, configured to acquire an endoscopic image, the endoscopic image including a polyp to be typed;
a first processing module, configured to determine a first typing predicted value for the polyp in the endoscopic image through the first recognition network in a polyp typing model, and determine a second typing predicted value for the polyp in the endoscopic image through the second recognition network in the polyp typing model, the polyp typing model being trained by the model training method of the first aspect;
a second processing module, configured to average the first typing predicted value and the second typing predicted value to obtain a target typing predicted value, and determine a target typing result for the polyp in the endoscopic image based on the target typing predicted value.
In a fifth aspect, the present disclosure provides a non-transitory computer-readable storage medium having a computer program stored thereon, where the program, when executed by a processing device, implements the steps of the method of the first or second aspect.
In a sixth aspect, the present disclosure provides an electronic device, including:
a storage device having a computer program stored thereon;
a processing device, configured to execute the computer program in the storage device to implement the steps of the method of the first or second aspect.
Through the above technical solution, the polyp typing model can include a first recognition network and a second recognition network, so that clean samples and noise samples can be distinguished by the difference between the sample predicted values the two recognition networks output for the same endoscopic image; training on both the clean samples and the noise samples then makes full use of the limited polyp sample dataset, improves data utilization, and reduces the impact of noise samples on the prediction accuracy of the model. In addition, because the clean and noise samples are obtained through the joint learning of the first and second recognition networks, misclassifying noisy data as clean samples can be reduced compared with setting a fixed sample-selection proportion, thereby improving the prediction accuracy of the polyp typing model.
Other features and advantages of the present disclosure will be described in detail in the detailed description that follows.
Brief Description of the Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent with reference to the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale. In the drawings:
Fig. 1 is a flowchart of a model training method according to an exemplary embodiment of the present disclosure;
Fig. 2 is a schematic diagram of the images input to the first recognition network and the second recognition network in a model training method according to an exemplary embodiment of the present disclosure;
Fig. 3 is a schematic diagram of a polyp typing model in a model training method according to an exemplary embodiment of the present disclosure;
Fig. 4 is a flowchart of a polyp typing method according to an exemplary embodiment of the present disclosure;
Fig. 5 is a block diagram of a model training apparatus according to an exemplary embodiment of the present disclosure;
Fig. 6 is a block diagram of a polyp typing apparatus according to another exemplary embodiment of the present disclosure;
Fig. 7 is a block diagram of an electronic device according to another exemplary embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the steps recited in the method embodiments of the present disclosure may be performed in a different order and/or in parallel. Furthermore, the method embodiments may include additional steps and/or omit performing the steps shown. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms are given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order or interdependence of the functions performed by these apparatuses, modules, or units. It should also be noted that the modifiers "a" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.
As stated in the background, in the field of polyp typing a single convolutional neural network model is typically used, and the label noise that may exist in the polyp training data is usually not considered during training, so the prediction accuracy of polyp typing models is strongly affected by label noise.
The inventors found that, in other fields, there exist approaches that reduce the impact of label noise on model prediction accuracy through sample selection. Specifically, a certain proportion of samples with smaller loss in early training is first selected as clean samples according to the loss function, and the model is then trained on these clean samples. However, such sample-selection approaches usually train the model on clean data only, ignore the noisy data, and cannot make full use of limited medical datasets. Moreover, such methods are usually based on mini-batches (dividing all samples into equally sized subsets to improve training efficiency; these subsets are the mini-batches), but on real-world data the proportion of noise samples in each mini-batch is likely to differ, so the noise-sample proportion is hard to choose. If the same noise-sample selection proportion is set for every mini-batch, some noise samples are easily misclassified as clean samples, which affects the accuracy of model training.
In view of this, the present disclosure proposes a model that is more robust to the label noise in polyp typing data. The model includes a first recognition network and a second recognition network, so that clean samples and noise samples can be distinguished by the difference between the outputs of the two networks. This reduces the mismatch between sample selection and real data noise found in sample-selection-based methods in the related art, making the model more robust on real polyp datasets that contain some noisy labels.
Fig. 1 is a flowchart of a model training method according to an exemplary embodiment of the present disclosure. The model training method can be applied to a polyp typing model that includes a first recognition network and a second recognition network. Referring to Fig. 1, the method may include the following steps.
Step 101: determine a plurality of sample endoscopic images, the sample endoscopic images being annotated with polyp typing labels.
Step 102: for each sample endoscopic image, determine a first sample predicted value for the polyp in the sample endoscopic image through the first recognition network, and determine a second sample predicted value for the polyp in the sample endoscopic image through the second recognition network.
Step 103: classify the sample endoscopic image as a clean sample or a noise sample according to the difference between the first sample predicted value and the second sample predicted value. A clean sample is a sample endoscopic image whose polyp typing label is annotated correctly, and a noise sample is a sample endoscopic image whose polyp typing label is annotated incorrectly.
Step 104: train the polyp typing model according to the clean samples and the noise samples.
In this way, the polyp typing model can include a first recognition network and a second recognition network, so that clean samples and noise samples can be distinguished by the difference between the sample predicted values the two recognition networks output for the same endoscopic image; training on both the clean and noise samples then makes full use of the limited polyp sample dataset, improves data utilization, and reduces the impact of noise samples on the prediction accuracy of the model. In addition, because the clean and noise samples are obtained through the joint learning of the first and second recognition networks, misclassifying noisy data as clean samples can be reduced compared with setting a fixed sample-selection proportion, thereby improving the prediction accuracy of the polyp typing model.
To help those skilled in the art better understand the model training method provided by the present disclosure, the above steps are illustrated in detail below.
For example, sample endoscopic images containing polyps can be collected from multiple patients; the collected sample endoscopic images may include white-light images and narrow-band images. For some patients' polyps both white-light and narrow-band images may be captured, while for others only white-light images may be captured. Therefore, the white-light images among the collected sample endoscopic images can be selected as the final plurality of sample endoscopic images. For each sample endoscopic image, a polyp typing label can be annotated in advance in a manner known in the related art, where the polyp typing label may include hyperplasia, tumor, or cancer.
After the plurality of sample endoscopic images are obtained, for each sample endoscopic image, a first sample predicted value for the polyp in the sample endoscopic image can be determined through the first recognition network, and a second sample predicted value for the polyp in the sample endoscopic image can be determined through the second recognition network.
In a possible approach, each sample endoscopic image can be converted into image features of a first dimension through the first recognition network, and the first sample predicted value for the polyp in the sample endoscopic image can be determined based on the image features of the first dimension; likewise, each sample endoscopic image can be converted into image features of a second dimension through the second recognition network, and the second sample predicted value for the polyp in the sample endoscopic image can be determined based on the image features of the second dimension. The first and second dimensions can be set according to the actual situation.
In the embodiment of the present disclosure, in order for the first recognition network and the second recognition network to learn different information, each sample endoscopic image can be converted into features of different dimensions for training. For example, referring to Fig. 2, a sample endoscopic image of size H×W×C can first be divided into 16 sub-images and into 64 sub-images, where H denotes the length (height) of the sample endoscopic image, W denotes its width, and C denotes its number of channels. A feature transformation (reshape) is then performed, converting the image of dimension $H \times W \times C$ into patch sequences of dimension
$$N_1 \times \left(\frac{H}{\sqrt{N_1}} \cdot \frac{W}{\sqrt{N_1}} \cdot C\right) \quad\text{and}\quad N_2 \times \left(\frac{H}{\sqrt{N_2}} \cdot \frac{W}{\sqrt{N_2}} \cdot C\right),$$
where N1 denotes the number of sub-images corresponding to the first-dimension image features (16 in this embodiment) and N2 denotes the number of sub-images corresponding to the second-dimension image features (64 in this embodiment).
Afterwards, the first sample predicted value for the polyp in the sample endoscopic image can be determined from the first-dimension image features, and the second sample predicted value from the second-dimension image features. It should be understood that, at the beginning of training, clean samples can be selected, for example in a certain proportion following the sample-selection approach of the related art, and these clean samples are then converted into image features of the two different dimensions to train the first recognition network and the second recognition network respectively. That is, the two recognition networks are initially trained with images of different resolutions, yielding a first recognition network and a second recognition network of different scales for extracting information at different scales. In the subsequent training process, the first and second recognition networks can extract image features of different scales from sample endoscopic images whose type has not yet been distinguished and make predictions on them. This improves the accuracy of distinguishing clean samples from noise samples through the two recognition networks, and thus the accuracy of the polyp typing model.
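By way of illustration, the sub-image splitting can be sketched in Python as follows (the 4×4 and 8×8 grid layout follows the standard vision-transformer patching convention and is an assumption here, as are the 224×224×3 image size and the variable names):

```python
import numpy as np

def to_patches(image: np.ndarray, num_patches: int) -> np.ndarray:
    """Split an H x W x C image into a square grid of num_patches sub-images
    and flatten each one, yielding shape (num_patches, H*W*C / num_patches)."""
    h, w, c = image.shape
    grid = int(num_patches ** 0.5)          # e.g. 16 -> 4x4 grid, 64 -> 8x8 grid
    ph, pw = h // grid, w // grid           # sub-image height and width
    return (
        image[: grid * ph, : grid * pw]     # crop so the grid divides evenly
        .reshape(grid, ph, grid, pw, c)
        .transpose(0, 2, 1, 3, 4)           # (grid, grid, ph, pw, c)
        .reshape(num_patches, ph * pw * c)  # flatten each sub-image
    )

image = np.random.rand(224, 224, 3)
first_dim_features = to_patches(image, 16)   # input to the first recognition network
second_dim_features = to_patches(image, 64)  # input to the second recognition network
print(first_dim_features.shape, second_dim_features.shape)  # (16, 9408) (64, 2352)
```

The same image thus yields two token sequences of different granularity, which is what lets the two recognition networks extract information at different scales.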
After the first and second recognition networks determine the sample predicted values for the polyp in the sample endoscopic image, i.e., after the first sample predicted value and the second sample predicted value are obtained, the sample endoscopic image can be classified as a clean sample or a noise sample according to the difference between the two values.
A clean sample is a sample endoscopic image whose polyp typing label is annotated correctly, i.e., the manually annotated polyp typing label is consistent with the actual typing result of the polyp in the image; a noise sample is a sample endoscopic image whose polyp typing label is annotated incorrectly, i.e., the manually annotated polyp typing label is inconsistent with the actual typing result of the polyp in the image.
The inventors' research shows that two recognition networks of different scales learn different features and tend to agree on clean samples while diverging on noise samples. Therefore, in the embodiment of the present disclosure, the difference between the two recognition networks' predicted values for the same sample endoscopic image can first be determined, and clean and noise samples can then be distinguished according to this difference.
In a possible approach, the JS divergence distance between the first sample predicted value and the second sample predicted value is determined first. If the numerical relationship between this JS divergence distance and a preset threshold satisfies a preset condition, the sample endoscopic image is classified as a clean sample; if the numerical relationship does not satisfy the preset condition, the sample endoscopic image is classified as a noise sample.
For example, the preset threshold can be set according to the actual situation, which is not limited in the embodiments of the present disclosure.
In a possible approach, an initial preset threshold can be determined first; then, during the training of the polyp typing model, if the number of training epochs of the polyp typing model reaches a preset number, the initial preset threshold is increased.
For example, at the beginning of training the preset threshold can be set small so that the polyp typing model can be trained more easily. As the capability of the polyp typing model grows, for example once its number of training epochs reaches the preset number, the preset threshold can be increased slowly. For instance, the preset threshold can be set according to a schedule of the following form:
[Equation image in the original: the preset threshold τ as a function of the training epoch, held low during warm-up and then increased toward a preset constant.]
where τ denotes the preset threshold, epoch denotes the number of training epochs of the polyp typing model, epoch_warm denotes a hyperparameter of the polyp typing model characterizing the warm-up number of epochs (which may be 10), τ_C denotes a hyperparameter of the polyp typing model (which may be 0.75), τ_m denotes a preset constant (which may be 0.95), and epoch_max denotes the total number of training epochs.
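By way of illustration, such a schedule can be sketched as follows. The linear ramp is an assumption of this sketch; the original gives the schedule only as an equation image, and only the endpoints τ_C and τ_m, the warm-up epoch count epoch_warm, and the total epoch count epoch_max are stated in the text.

```python
def preset_threshold(epoch: int,
                     epoch_warm: int = 10,
                     epoch_max: int = 100,
                     tau_c: float = 0.75,
                     tau_m: float = 0.95) -> float:
    """Hold the threshold at tau_c during warm-up, then raise it toward
    tau_m as training proceeds (linear ramp assumed for illustration)."""
    if epoch <= epoch_warm:
        return tau_c
    progress = (epoch - epoch_warm) / (epoch_max - epoch_warm)
    return min(tau_m, tau_c + (tau_m - tau_c) * progress)

print([round(preset_threshold(e), 3) for e in (0, 10, 50, 100)])
# [0.75, 0.75, 0.839, 0.95]
```

A low early threshold makes it easy for samples to count as clean while the networks are still weak; raising it later tightens the agreement required between the two networks.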
For example, the preset condition includes that the JS divergence distance between the first sample predicted value and the second sample predicted value is less than the preset threshold, or that a sample index is greater than the preset threshold, the sample index being the difference obtained by subtracting the JS divergence distance from 1.
For instance, the JS divergence distance between the first sample predicted value and the second sample predicted value can be determined according to the following formula:
$$\mathrm{JS} = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{2}\left[\mathrm{KL}\!\left(p_1(x_i)\,\middle\|\,\bar{p}(x_i)\right) + \mathrm{KL}\!\left(p_2(x_i)\,\middle\|\,\bar{p}(x_i)\right)\right],\qquad \bar{p}(x_i)=\frac{p_1(x_i)+p_2(x_i)}{2},$$
$$\mathrm{KL}\!\left(p\,\middle\|\,q\right)=\sum_{m=1}^{M}p_m\log\frac{p_m}{q_m},$$
where JS denotes the JS divergence distance between the first sample predicted value and the second sample predicted value, $p_1$ denotes the first sample predicted value, $p_2$ denotes the second sample predicted value, N denotes the number of sub-images corresponding to the sample endoscopic image, M denotes the number of polyp typing classes, $p_1^{(i,m)}$ denotes the first recognition network's sample predicted value that the i-th sub-image $x_i$ belongs to the m-th polyp type, and $p_2^{(i,m)}$ denotes the second recognition network's sample predicted value that the i-th sub-image $x_i$ belongs to the m-th polyp type.
If the JS divergence distance between the first sample predicted value and the second sample predicted value is less than the preset threshold, the numerical relationship between the JS divergence distance and the preset threshold satisfies the preset condition, and the sample endoscopic image can be classified as a clean sample; conversely, if the JS divergence distance is not less than (i.e., greater than or equal to) the preset threshold, the numerical relationship does not satisfy the preset condition, and the sample endoscopic image can be classified as a noise sample.
As another example, after the JS divergence distance between the first sample predicted value and the second sample predicted value is determined according to the above formula, the sample index can be determined as $P_{clean} = 1 - \mathrm{JS}$, where $P_{clean}$ denotes the sample index. If the sample index is greater than the preset threshold, the numerical relationship between the JS divergence distance and the preset threshold satisfies the preset condition, and the sample endoscopic image can be classified as a clean sample; conversely, if the sample index is not greater than (i.e., less than or equal to) the preset threshold, the numerical relationship does not satisfy the preset condition, and the sample endoscopic image can be classified as a noise sample.
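By way of illustration, the clean/noise split can be sketched as follows. It is assumed here that both networks' predictions have been aligned to the same number N of sub-images over M classes (the text leaves the aggregation across the two patch granularities to the implementation); the EPS guard and the example probabilities are likewise assumptions.

```python
import numpy as np

EPS = 1e-12  # guards the logarithm when a predicted probability is 0

def js_divergence(p1: np.ndarray, p2: np.ndarray) -> float:
    """JS divergence distance between two (N, M) prediction arrays:
    N sub-images, M polyp types, averaged over the sub-images."""
    p_bar = (p1 + p2) / 2
    kl1 = np.sum(p1 * np.log((p1 + EPS) / (p_bar + EPS)), axis=1)  # KL(p1 || p_bar)
    kl2 = np.sum(p2 * np.log((p2 + EPS) / (p_bar + EPS)), axis=1)  # KL(p2 || p_bar)
    return float(np.mean((kl1 + kl2) / 2))

def split_sample(p1: np.ndarray, p2: np.ndarray, threshold: float) -> str:
    """Classify one sample as clean or noisy via the sample index P_clean = 1 - JS."""
    p_clean = 1.0 - js_divergence(p1, p2)
    return "clean" if p_clean > threshold else "noisy"

# Two networks that agree -> clean; two that disagree -> noisy.
agree = np.array([[0.8, 0.1, 0.1]] * 4)
disagree = np.array([[0.1, 0.8, 0.1]] * 4)
print(split_sample(agree, agree, threshold=0.75))     # clean
print(split_sample(agree, disagree, threshold=0.75))  # noisy
```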
Of course, in other possible approaches, the difference between the first sample predicted value and the second sample predicted value can be determined in ways other than JS divergence, such as the Wasserstein distance, which is not limited in the embodiments of the present disclosure.
In this way, the difference between the sample predicted values output by the first and second recognition networks can be measured by, for example, JS divergence, so that clean samples and noise samples are distinguished more accurately. This avoids the problem in sample-selection-based approaches that the selection does not match the real data noise, making the polyp typing model more robust on real polyp datasets with noisy labels.
After the sample endoscopic image is classified as a noise sample, the predicted typing result of the polyp typing model for the noise sample can further be determined according to the first sample predicted value and the second sample predicted value corresponding to the noise sample, and a noise pseudo-label for the noise sample can be determined according to the predicted typing result of the noise sample, the polyp typing label annotated on the noise sample, and a hyperparameter of the polyp typing model. Accordingly, training the polyp typing model based on the clean samples and the noise samples can be: training the polyp typing model according to the predicted typing results of the polyp typing model for the clean samples, the polyp typing labels annotated on the clean samples, the predicted typing results of the polyp typing model for the noise samples, and the noise pseudo-labels of the noise samples.
That is to say, after a sample endoscopic image is classified as a noise sample, a noise pseudo-label can be generated from the noise sample, so that model training for the noise sample uses this noise pseudo-label rather than the manually annotated initial label. This further improves the prediction accuracy of the polyp typing model while reducing the cost of secondary manual annotation and improving the training efficiency of the polyp typing model.
For example, the polyp typing model in the embodiment of the present disclosure uses a first recognition network and a second recognition network, so the predicted typing result of the polyp typing model for a noise sample can be determined by combining the first sample predicted value output by the first recognition network and the second sample predicted value output by the second recognition network. The noise pseudo-label of the noise sample can then be determined according to the predicted typing result of the noise sample, the polyp typing label annotated on the noise sample, and the hyperparameter of the polyp typing model.
For example, the noise pseudo-label of a noise sample can be determined according to the following formula:
[Equation image in the original: the noise pseudo-label y′ as a combination of the annotated label y_n, the predicted typing result, and the hyperparameter λ.]
where y′ denotes the noise pseudo-label of the noise sample, y_n denotes the polyp typing label of the noise sample, the predicted typing result is determined from the first sample predicted value $p_1$ and the second sample predicted value $p_2$, and λ denotes the hyperparameter of the polyp typing model, with a value range of 0 to 1 (which may be 0.5).
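By way of illustration, this pseudo-labeling can be sketched as follows. The convex-combination form and the use of the averaged prediction as the predicted typing result are assumptions consistent with the stated range of λ; the original gives the formula only as an equation image.

```python
import numpy as np

def noise_pseudo_label(y_n: np.ndarray, p1: np.ndarray, p2: np.ndarray,
                       lam: float = 0.5) -> np.ndarray:
    """Blend the annotated (one-hot) label y_n with the typing result
    predicted jointly by the two networks (convex combination assumed)."""
    y_hat = (p1 + p2) / 2              # predicted typing result from both networks
    return lam * y_n + (1 - lam) * y_hat

y_n = np.array([1.0, 0.0, 0.0])        # annotated label: hyperplasia
p1 = np.array([0.1, 0.7, 0.2])         # first network leans toward tumor
p2 = np.array([0.2, 0.6, 0.2])         # second network agrees
print(noise_pseudo_label(y_n, p1, p2)) # [0.575 0.325 0.1  ]
```

The soft pseudo-label keeps part of the original annotation while letting the networks' agreement pull a mislabeled sample toward its likelier type.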
After the noise pseudo-labels are obtained, the polyp typing model is trained according to the noise pseudo-labels of the noise samples and the predicted typing results of the polyp typing model for the noise samples, together with the predicted typing results of the polyp typing model for the clean samples and the polyp typing labels annotated on the clean samples. In this way, model training can combine the clean samples and the noise samples, making full use of the limited polyp sample dataset, improving data utilization, and enhancing the robustness of the polyp typing model to noisy data.
In a possible approach, a first loss function is computed according to the first sample predicted values and the polyp typing labels corresponding to the clean samples, a second loss function is computed according to the second sample predicted values and the polyp typing labels corresponding to the clean samples, and the parameters of the polyp typing model are adjusted according to the computed values of the first and second loss functions. Alternatively, a third loss function is computed according to the first sample predicted values and the noise pseudo-labels corresponding to the noise samples, a fourth loss function is computed according to the second sample predicted values and the noise pseudo-labels corresponding to the noise samples, and the parameters of the polyp typing model are adjusted according to the computed values of the third and fourth loss functions.
For example, the overall training objective of the polyp typing model can be set as:
$$L = L_{CE}(y_c, y_{clean1}) + L_{CE}(y_c, y_{clean2}) + L_{CE}(y', y_{noisy1}) + L_{CE}(y', y_{noisy2}),$$
where L denotes the loss function of the polyp typing model, $L_{CE}$ denotes the cross-entropy loss, $y_c$ denotes the polyp typing label of a clean sample, $y_{clean1}$ denotes the first sample predicted value corresponding to the clean sample, $y_{clean2}$ denotes the second sample predicted value corresponding to the clean sample, $y'$ denotes the noise pseudo-label of a noise sample, $y_{noisy1}$ denotes the first sample predicted value corresponding to the noise sample, and $y_{noisy2}$ denotes the second sample predicted value corresponding to the noise sample.
It should be understood that if the loss function of the polyp typing model is computed according to the above formula, the first two terms are computed for clean samples and the last two terms for noise samples. That is, in the embodiment of the present disclosure, different loss computations can be used for clean samples and noise samples, improving the training effect of the polyp typing model.
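By way of illustration, this objective can be sketched as follows. A soft cross-entropy is used so that the same routine serves both the one-hot clean labels and the soft noise pseudo-labels; the classification heads producing logits, the batch size, the class count, and the placeholder pseudo-labels are assumptions of the sketch.

```python
import torch
import torch.nn.functional as F

def typing_loss(logits1, logits2, targets, is_clean, pseudo):
    """Cross-entropy of both networks against the annotated label for
    clean samples and against the noise pseudo-label for noise samples."""
    def soft_ce(logits, target):
        return -(target * F.log_softmax(logits, dim=1)).sum(dim=1)

    # Per-sample target: annotated label if clean, pseudo-label if noisy.
    target_dist = torch.where(is_clean.unsqueeze(1), targets, pseudo)
    return (soft_ce(logits1, target_dist) + soft_ce(logits2, target_dist)).mean()

logits1 = torch.randn(4, 3)          # first network, batch of 4, 3 polyp types
logits2 = torch.randn(4, 3)          # second network
targets = F.one_hot(torch.tensor([0, 1, 2, 1]), 3).float()  # annotated labels
pseudo = torch.full((4, 3), 1 / 3)   # placeholder pseudo-labels for the demo
is_clean = torch.tensor([True, True, False, False])
print(typing_loss(logits1, logits2, targets, is_clean, pseudo))
```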
The model training method provided by the present disclosure is described below with reference to the model structure shown in Fig. 3.
The polyp typing model includes a first recognition network and a second recognition network, each of which can be a vision transformer network. Referring to Fig. 3, the first and second recognition networks include a linear projection module (Linear Projection), a position encoder (Embed), a normalization module (Layer Normalization), a self-attention module (Multi-Head Attention), and a multilayer perceptron (MLP). The linear projection module maps the flattened image features to the dimension of the hidden layer, the position encoder captures the positional information of the image, and the self-attention module learns the key information in the image.
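By way of illustration, a recognition network built from these modules can be sketched as follows. The hidden dimension, number of heads, depth, and pre-norm layout are assumptions of the sketch; the original names the modules but not their hyperparameters.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """LayerNorm -> Multi-Head Attention -> residual,
    then LayerNorm -> MLP -> residual."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h)[0]
        return x + self.mlp(self.norm2(x))

class RecognitionNet(nn.Module):
    """Patch sequence -> Linear Projection -> +position embedding -> blocks."""
    def __init__(self, patch_dim: int, num_patches: int, dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(patch_dim, dim)                       # Linear Projection
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))   # position encoder
        self.blocks = nn.Sequential(EncoderBlock(dim), EncoderBlock(dim))

    def forward(self, patches):                                     # (B, N, patch_dim)
        return self.blocks(self.proj(patches) + self.pos)

net1 = RecognitionNet(patch_dim=9408, num_patches=16)  # 16 sub-images per image
print(net1(torch.randn(2, 16, 9408)).shape)            # torch.Size([2, 16, 256])
```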
After the first recognition network converts the sample endoscopic image into first-dimension image features, it determines the first sample predicted value based on these features; likewise, after the second recognition network converts the sample endoscopic image into second-dimension image features, it determines the second sample predicted value based on those features. Then, referring to Fig. 3, the first sample predicted value output by the first recognition network and the second sample predicted value output by the second recognition network can be input to a JS module. The JS module can determine the JS divergence distance between the first and second sample predicted values, thereby distinguishing clean samples from noise samples; for noise samples, noise pseudo-labels can be generated.
In addition, the first sample predicted value output by the first recognition network can be input to a classifier (Classifier) to obtain a predicted typing result. For a noise sample, a loss function can be computed from this predicted typing result and the noise pseudo-label, and the model parameters can be adjusted according to the computed value of this loss function. Similarly, the second sample predicted value output by the second recognition network can be input to a corresponding classifier (Classifier) to obtain a predicted typing result; for a noise sample, a loss function can be computed from this predicted typing result and the noise pseudo-label, and the model parameters can be adjusted according to the computed value of this loss function.
It should be understood that, for a clean sample, after the predicted typing result is obtained through the classifier, a loss function can be computed from this predicted typing result and the polyp typing label annotated on the clean sample, and the model parameters can be adjusted according to the computed value of this loss function.
Through the above solution, the polyp typing model can include a first recognition network and a second recognition network, so that clean samples and noise samples can be distinguished by the difference between the sample predicted values the two recognition networks output for the same endoscopic image; training on both the clean and noise samples then makes full use of the limited polyp sample dataset, improves data utilization, and reduces the impact of noise samples on the prediction accuracy of the polyp typing model. Moreover, because the clean and noise samples are obtained through the joint learning of the first and second recognition networks, misclassifying noisy data as clean samples can be reduced compared with setting a fixed sample-selection proportion, improving the prediction accuracy of the polyp typing model. In addition, noise pseudo-labels can be generated from the noise samples, so that training on noise samples uses the pseudo-labels rather than the manually annotated initial labels, further improving the prediction accuracy of the polyp typing model while reducing the cost of secondary manual annotation and improving training efficiency.
Based on the same concept, the present disclosure further provides a polyp typing method. Referring to Fig. 4, the method includes the following steps.
Step 401: acquire an endoscopic image, the endoscopic image including a polyp to be typed.
Step 402: determine a first typing predicted value for the polyp in the endoscopic image through the first recognition network in a polyp typing model, and determine a second typing predicted value for the polyp in the endoscopic image through the second recognition network in the polyp typing model, the polyp typing model being trained by any of the model training methods described above.
Step 403: average the first typing predicted value and the second typing predicted value to obtain a target typing predicted value, and determine a target typing result for the polyp in the endoscopic image based on the target typing predicted value.
For example, the endoscopic image can be acquired from an endoscope apparatus. In specific implementations, the polyp typing method provided by the present disclosure can be applied to the control unit of an endoscope apparatus: after obtaining the endoscopic image captured by the image acquisition unit of the endoscope apparatus, the control unit can execute the polyp typing method provided by the present disclosure, thereby determining the target typing result of the polyp in the endoscopic image through the trained polyp typing model. Alternatively, the polyp typing method provided by the present disclosure can be applied to a medical system including an endoscope apparatus: a control device in the medical system can communicate with the endoscope apparatus in a wired or wireless manner, obtain the endoscopic image from the endoscope apparatus, and execute the polyp typing method provided by the present disclosure, determining the target typing result of the polyp in the endoscopic image through the trained polyp typing model.
For example, after the endoscopic image is acquired, since the trained polyp typing model includes a first recognition network and a second recognition network, the first typing predicted value for the polyp in the endoscopic image can be determined through the first recognition network and the second typing predicted value through the second recognition network, and the target typing predicted value is then obtained according to the following formula:
$$y_{test} = \frac{p_1 + p_2}{2},$$
where $y_{test}$ denotes the target typing predicted value, $p_1$ denotes the first recognition network's first typing predicted value for the polyp in the endoscopic image, and $p_2$ denotes the second recognition network's second typing predicted value for the polyp in the endoscopic image.
Afterwards, the target typing result for the polyp in the endoscopic image can be determined through a classifier and the target typing predicted value. In this way, because the polyp typing model is obtained by first distinguishing clean samples from noise samples via the difference between the sample predicted values the two recognition networks output for the same endoscopic image, and then training on both the clean and noise samples, i.e., the clean and noise samples are fully used in training, the polyp typing model is highly robust, and typing based on this polyp typing model can improve the accuracy of the polyp typing results. Moreover, since the clean and noise samples are obtained through the joint learning of the two recognition networks, misclassifying noisy data as clean samples can be reduced compared with setting a fixed sample-selection proportion, further improving the accuracy of the polyp typing results.
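By way of illustration, this inference step can be sketched as follows. The softmax applied to logits and the argmax readout stand in for the classifier, and the class order is an assumption of the sketch.

```python
import torch
import torch.nn.functional as F

TYPES = ["hyperplasia", "tumor", "cancer"]  # polyp typing labels named in the text

@torch.no_grad()
def predict_typing(logits1: torch.Tensor, logits2: torch.Tensor) -> str:
    """Average the two networks' typing predictions and take the argmax."""
    p1 = F.softmax(logits1, dim=-1)   # first typing predicted value
    p2 = F.softmax(logits2, dim=-1)   # second typing predicted value
    y_test = (p1 + p2) / 2            # target typing predicted value
    return TYPES[int(torch.argmax(y_test))]

print(predict_typing(torch.tensor([2.0, 0.5, 0.1]),
                     torch.tensor([1.5, 0.8, 0.2])))  # hyperplasia
```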
Based on the same concept, the present disclosure further provides a model training apparatus, which can become part or all of an electronic device through software, hardware, or a combination of both. The apparatus is used to train a polyp typing model that includes a first recognition network and a second recognition network. Referring to Fig. 5, the model training apparatus 500 includes:
a first training module 501, configured to determine a plurality of sample endoscopic images, the sample endoscopic images being annotated with polyp typing labels;
a second training module 502, configured to determine, for each sample endoscopic image, a first sample predicted value for the polyp in the sample endoscopic image through the first recognition network, and determine a second sample predicted value for the polyp in the sample endoscopic image through the second recognition network;
a third training module 503, configured to classify the sample endoscopic image as a clean sample or a noise sample according to the difference between the first sample predicted value and the second sample predicted value, the clean sample being a sample endoscopic image whose polyp typing result is annotated correctly, and the noise sample being a sample endoscopic image whose polyp typing result is annotated incorrectly;
a fourth training module 504, configured to train the polyp typing model according to the clean samples and the noise samples.
Optionally, the apparatus 500 further includes:
a fifth training module, configured to, after the sample endoscopic image is classified as a noise sample, determine the predicted typing result of the polyp typing model for the noise sample according to the first sample predicted value and the second sample predicted value corresponding to the noise sample;
a sixth training module, configured to determine the noise pseudo-label of the noise sample according to the predicted typing result of the noise sample, the polyp typing label annotated on the noise sample, and the hyperparameter of the polyp typing model;
the fourth training module 504 being configured to:
train the polyp typing model according to the predicted typing results of the polyp typing model for the clean samples, the polyp typing labels annotated on the clean samples, the predicted typing results of the polyp typing model for the noise samples, and the noise pseudo-labels of the noise samples.
Optionally, the fourth training module 504 is configured to:
compute a first loss function according to the first sample predicted values and polyp typing labels corresponding to the clean samples, compute a second loss function according to the second sample predicted values and polyp typing labels corresponding to the clean samples, and adjust the parameters of the polyp typing model according to the computed values of the first loss function and the second loss function; or
compute a third loss function according to the first sample predicted values and the noise pseudo-labels corresponding to the noise samples, compute a fourth loss function according to the second sample predicted values and the noise pseudo-labels corresponding to the noise samples, and adjust the parameters of the polyp typing model according to the computed values of the third loss function and the fourth loss function.
Optionally, the second training module 502 is configured to:
convert each sample endoscopic image into image features of a first dimension through the first recognition network, and determine the first sample predicted value for the polyp in the sample endoscopic image based on the image features of the first dimension;
convert each sample endoscopic image into image features of a second dimension through the second recognition network, and determine the second sample predicted value for the polyp in the sample endoscopic image based on the image features of the second dimension.
Optionally, the third training module 503 is configured to:
determine the JS divergence distance between the first sample predicted value and the second sample predicted value;
classify the sample endoscopic image as a clean sample if the numerical relationship between the JS divergence distance and a preset threshold satisfies a preset condition, and classify the sample endoscopic image as a noise sample if the numerical relationship between the JS divergence distance and the preset threshold does not satisfy the preset condition.
Optionally, the preset condition includes that the JS divergence distance is less than the preset threshold, or that a sample index is greater than the preset threshold, the sample index being the difference obtained by subtracting the JS divergence distance from 1.
Optionally, the preset threshold is set by the following modules:
a determination module, configured to determine an initial preset threshold;
an adjustment module, configured to increase the initial preset threshold during the training of the polyp typing model if the number of training epochs of the polyp typing model reaches a preset number.
Based on the same inventive concept, the present disclosure further provides a polyp typing apparatus, which can become part or all of an electronic device through software, hardware, or a combination of both; the electronic device can be, for example, an endoscope device or a medical device including an endoscope device. Referring to Fig. 6, the polyp typing apparatus 600 includes:
an acquisition module 601, configured to acquire an endoscopic image, the endoscopic image including a polyp to be typed;
a first processing module 602, configured to determine a first typing predicted value for the polyp in the endoscopic image through the first recognition network in a polyp typing model, and determine a second typing predicted value for the polyp in the endoscopic image through the second recognition network in the polyp typing model, the polyp typing model being trained by any of the model training methods described above;
a second processing module 603, configured to average the first typing predicted value and the second typing predicted value to obtain a target typing predicted value, and determine a target typing result for the polyp in the endoscopic image based on the target typing predicted value.
Regarding the apparatuses in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments of the method and will not be elaborated here.
Based on the same concept, an embodiment of the present disclosure further provides a non-transitory computer-readable storage medium having a computer program stored thereon, where the program, when executed by a processing device, implements the steps of any of the model training methods or any of the polyp typing methods described above.
Based on the same concept, an embodiment of the present disclosure further provides an electronic device, including:
a storage device having a computer program stored thereon;
a processing device, configured to execute the computer program in the storage device to implement the steps of any of the model training methods or any of the polyp typing methods described above.
Referring now to Fig. 7, a schematic structural diagram of an electronic device 700 suitable for implementing embodiments of the present disclosure is shown. Terminal devices in embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 7 is only an example and should not impose any limitation on the functions and scope of use of embodiments of the present disclosure.
As shown in Fig. 7, the electronic device 700 may include a processing device (e.g., a central processing unit, a graphics processing unit, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the electronic device 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 707 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 708 including, for example, a magnetic tape and a hard disk; and a communication device 709. The communication device 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 7 shows the electronic device 700 with various devices, it should be understood that it is not required to implement or have all the devices shown; more or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network through the communication device 709, or installed from the storage device 708, or installed from the ROM 702. When the computer program is executed by the processing device 701, the above functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: electric wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
In some embodiments, communication may be performed using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and interconnection may be achieved with digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include local area networks ("LAN"), wide area networks ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The computer-readable medium may be included in the electronic device, or may exist separately without being assembled into the electronic device.
The computer-readable medium carries one or more programs that, when executed by the electronic device, cause the electronic device to: determine a plurality of sample endoscopic images, the sample endoscopic images being annotated with polyp typing labels; for each sample endoscopic image, determine a first sample predicted value for the polyp in the sample endoscopic image through the first recognition network, and determine a second sample predicted value for the polyp in the sample endoscopic image through the second recognition network; classify the sample endoscopic image as a clean sample or a noise sample according to the difference between the first sample predicted value and the second sample predicted value, the clean sample being a sample endoscopic image whose polyp typing label is annotated correctly, and the noise sample being a sample endoscopic image whose polyp typing label is annotated incorrectly; and train the polyp typing model according to the clean samples and the noise samples.
Alternatively, the computer-readable medium carries one or more programs that, when executed by the electronic device, cause the electronic device to: acquire an endoscopic image, the endoscopic image including a polyp to be typed; determine a first typing predicted value for the polyp in the endoscopic image through the first recognition network in a polyp typing model, and determine a second typing predicted value for the polyp in the endoscopic image through the second recognition network in the polyp typing model, the polyp typing model being trained by any of the model training methods described above; and average the first typing predicted value and the second typing predicted value to obtain a target typing predicted value, and determine a target typing result for the polyp in the endoscopic image based on the target typing predicted value.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, program segment, or part of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two successively represented blocks may actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or by hardware, and the name of a module does not in some cases constitute a limitation on the module itself.
The functions described herein above may be performed at least in part by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGA), application specific integrated circuits (ASIC), application specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium, and may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, Example 1 provides a model training method applied to a polyp typing model, the polyp typing model including a first recognition network and a second recognition network, the method including:
determining a plurality of sample endoscopic images, the sample endoscopic images being annotated with polyp typing labels;
for each sample endoscopic image, determining a first sample predicted value for the polyp in the sample endoscopic image through the first recognition network, and determining a second sample predicted value for the polyp in the sample endoscopic image through the second recognition network;
classifying the sample endoscopic image as a clean sample or a noise sample according to the difference between the first sample predicted value and the second sample predicted value, the clean sample being a sample endoscopic image whose polyp typing label is annotated correctly, and the noise sample being a sample endoscopic image whose polyp typing label is annotated incorrectly;
training the polyp typing model according to the clean samples and the noise samples.
According to one or more embodiments of the present disclosure, Example 2 provides the method of Example 1, where after the sample endoscopic image is classified as a noise sample, the method further includes:
determining the predicted typing result of the polyp typing model for the noise sample according to the first sample predicted value and the second sample predicted value corresponding to the noise sample;
determining the noise pseudo-label of the noise sample according to the predicted typing result of the noise sample, the polyp typing label annotated on the noise sample, and the hyperparameter of the polyp typing model;
where training the polyp typing model based on the clean samples and the noise samples includes:
training the polyp typing model according to the predicted typing results of the polyp typing model for the clean samples, the polyp typing labels annotated on the clean samples, the predicted typing results of the polyp typing model for the noise samples, and the noise pseudo-labels of the noise samples.
According to one or more embodiments of the present disclosure, Example 3 provides the method of Example 2, where training the polyp typing model according to the predicted typing results of the polyp typing model for the clean samples, the polyp typing labels annotated on the clean samples, the predicted typing results of the polyp typing model for the noise samples, and the noise pseudo-labels of the noise samples includes:
computing a first loss function according to the first sample predicted values and polyp typing labels corresponding to the clean samples, computing a second loss function according to the second sample predicted values and polyp typing labels corresponding to the clean samples, and adjusting the parameters of the polyp typing model according to the computed values of the first loss function and the second loss function; or
computing a third loss function according to the first sample predicted values and the noise pseudo-labels corresponding to the noise samples, computing a fourth loss function according to the second sample predicted values and the noise pseudo-labels corresponding to the noise samples, and adjusting the parameters of the polyp typing model according to the computed values of the third loss function and the fourth loss function.
According to one or more embodiments of the present disclosure, Example 4 provides the method of any one of Examples 1-3, where determining, for each sample endoscopic image, a first sample predicted value for the polyp in the sample endoscopic image through the first recognition network and a second sample predicted value for the polyp in the sample endoscopic image through the second recognition network includes:
converting each sample endoscopic image into image features of a first dimension through the first recognition network, and determining the first sample predicted value for the polyp in the sample endoscopic image based on the image features of the first dimension;
converting each sample endoscopic image into image features of a second dimension through the second recognition network, and determining the second sample predicted value for the polyp in the sample endoscopic image based on the image features of the second dimension.
According to one or more embodiments of the present disclosure, Example 5 provides the method of any one of Examples 1-3, where classifying the sample endoscopic image as a clean sample or a noise sample according to the difference between the first sample predicted value and the second sample predicted value includes:
determining the JS divergence distance between the first sample predicted value and the second sample predicted value;
classifying the sample endoscopic image as a clean sample if the numerical relationship between the JS divergence distance and a preset threshold satisfies a preset condition, and classifying the sample endoscopic image as a noise sample if the numerical relationship between the JS divergence distance and the preset threshold does not satisfy the preset condition.
According to one or more embodiments of the present disclosure, Example 6 provides the method of Example 5, where the preset condition includes that the JS divergence distance is less than the preset threshold, or that a sample index is greater than the preset threshold, the sample index being the difference obtained by subtracting the JS divergence distance from 1.
According to one or more embodiments of the present disclosure, Example 7 provides the method of Example 5, where the preset threshold is set in the following manner:
determining an initial preset threshold;
during the training of the polyp typing model, increasing the initial preset threshold if the number of training epochs of the polyp typing model reaches a preset number.
According to one or more embodiments of the present disclosure, Example 8 provides a polyp typing method, the method including:
acquiring an endoscopic image, the endoscopic image including a polyp to be typed;
determining a first typing predicted value for the polyp in the endoscopic image through the first recognition network in a polyp typing model, and determining a second typing predicted value for the polyp in the endoscopic image through the second recognition network in the polyp typing model, the polyp typing model being trained by the model training method of any one of Examples 1-7;
averaging the first typing predicted value and the second typing predicted value to obtain a target typing predicted value, and determining a target typing result for the polyp in the endoscopic image based on the target typing predicted value.
According to one or more embodiments of the present disclosure, Example 9 provides a model training apparatus applied to a polyp typing model, the polyp typing model including a first recognition network and a second recognition network, the apparatus including:
a first training module, configured to determine a plurality of sample endoscopic images, the sample endoscopic images being annotated with polyp typing labels;
a second training module, configured to determine, for each sample endoscopic image, a first sample predicted value for the polyp in the sample endoscopic image through the first recognition network, and determine a second sample predicted value for the polyp in the sample endoscopic image through the second recognition network;
a third training module, configured to classify the sample endoscopic image as a clean sample or a noise sample according to the difference between the first sample predicted value and the second sample predicted value, the clean sample being a sample endoscopic image whose polyp typing result is annotated correctly, and the noise sample being a sample endoscopic image whose polyp typing result is annotated incorrectly;
a fourth training module, configured to train the polyp typing model according to the clean samples and the noise samples.
According to one or more embodiments of the present disclosure, Example 10 provides a polyp typing apparatus, the apparatus including:
an acquisition module, configured to acquire an endoscopic image, the endoscopic image including a polyp to be typed;
a first processing module, configured to determine a first typing predicted value for the polyp in the endoscopic image through the first recognition network in a polyp typing model, and determine a second typing predicted value for the polyp in the endoscopic image through the second recognition network in the polyp typing model, the polyp typing model being trained by the model training method of any one of Examples 1-7;
a second processing module, configured to average the first typing predicted value and the second typing predicted value to obtain a target typing predicted value, and determine a target typing result for the polyp in the endoscopic image based on the target typing predicted value.
According to one or more embodiments of the present disclosure, Example 11 provides a non-transitory computer-readable storage medium having a computer program stored thereon, where the program, when executed by a processing device, implements the steps of the method of any one of Examples 1-8.
According to one or more embodiments of the present disclosure, Example 12 provides an electronic device, including:
a storage device having a computer program stored thereon;
a processing device, configured to execute the computer program in the storage device to implement the steps of the method of any one of Examples 1-8.
The above description is only a preferred embodiment of the present disclosure and an explanation of the technical principles employed. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by particular combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Furthermore, although the operations are depicted in a particular order, this should not be understood as requiring that these operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments may also be implemented in combination in a single embodiment; conversely, various features described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or method logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above; rather, the specific features and actions described above are merely example forms of implementing the claims. Regarding the apparatuses in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments of the method and will not be elaborated here.

Claims (12)

  1. A model training method applied to a polyp typing model, the polyp typing model comprising a first recognition network and a second recognition network, the method comprising:
    determining a plurality of sample endoscopic images, the sample endoscopic images being annotated with polyp typing labels;
    for each sample endoscopic image, determining a first sample predicted value for the polyp in the sample endoscopic image through the first recognition network, and determining a second sample predicted value for the polyp in the sample endoscopic image through the second recognition network;
    classifying the sample endoscopic image as a clean sample or a noise sample according to the difference between the first sample predicted value and the second sample predicted value, the clean sample being a sample endoscopic image whose polyp typing label is annotated correctly, and the noise sample being a sample endoscopic image whose polyp typing label is annotated incorrectly;
    training the polyp typing model according to the clean samples and the noise samples.
  2. The method according to claim 1, wherein after the sample endoscopic image is classified as a noise sample, the method further comprises:
    determining the predicted typing result of the polyp typing model for the noise sample according to the first sample predicted value and the second sample predicted value corresponding to the noise sample;
    determining the noise pseudo-label of the noise sample according to the predicted typing result of the noise sample, the polyp typing label annotated on the noise sample, and the hyperparameter of the polyp typing model;
    wherein training the polyp typing model according to the clean samples and the noise samples comprises:
    training the polyp typing model according to the predicted typing results of the polyp typing model for the clean samples, the polyp typing labels annotated on the clean samples, the predicted typing results of the polyp typing model for the noise samples, and the noise pseudo-labels of the noise samples.
  3. The method according to claim 2, wherein training the polyp typing model according to the predicted typing results of the polyp typing model for the clean samples, the polyp typing labels annotated on the clean samples, the predicted typing results of the polyp typing model for the noise samples, and the noise pseudo-labels of the noise samples comprises:
    computing a first loss function according to the first sample predicted value and polyp typing label corresponding to the clean sample, computing a second loss function according to the second sample predicted value and polyp typing label corresponding to the clean sample, and adjusting the parameters of the polyp typing model according to the computed values of the first loss function and the second loss function; or
    computing a third loss function according to the first sample predicted value and the noise pseudo-label corresponding to the noise sample, computing a fourth loss function according to the second sample predicted value and the noise pseudo-label corresponding to the noise sample, and adjusting the parameters of the polyp typing model according to the computed values of the third loss function and the fourth loss function.
  4. The method according to any one of claims 1-3, wherein determining, for each sample endoscopic image, a first sample predicted value for the polyp in the sample endoscopic image through the first recognition network and a second sample predicted value for the polyp in the sample endoscopic image through the second recognition network comprises:
    converting each sample endoscopic image into image features of a first dimension through the first recognition network, and determining the first sample predicted value for the polyp in the sample endoscopic image based on the image features of the first dimension;
    converting each sample endoscopic image into image features of a second dimension through the second recognition network, and determining the second sample predicted value for the polyp in the sample endoscopic image based on the image features of the second dimension.
  5. The method according to any one of claims 1-3, wherein classifying the sample endoscopic image as a clean sample or a noise sample according to the difference between the first sample predicted value and the second sample predicted value comprises:
    determining the JS divergence distance between the first sample predicted value and the second sample predicted value;
    classifying the sample endoscopic image as a clean sample if the numerical relationship between the JS divergence distance and a preset threshold satisfies a preset condition, and classifying the sample endoscopic image as a noise sample if the numerical relationship between the JS divergence distance and the preset threshold does not satisfy the preset condition.
  6. The method according to claim 5, wherein the preset condition comprises that the JS divergence distance is less than the preset threshold, or that a sample index is greater than the preset threshold, the sample index being the difference obtained by subtracting the JS divergence distance from 1.
  7. The method according to claim 5, wherein the preset threshold is set in the following manner:
    determining an initial preset threshold;
    during the training of the polyp typing model, increasing the initial preset threshold if the number of training epochs of the polyp typing model reaches a preset number.
  8. A polyp typing method, wherein the method comprises:
    acquiring an endoscopic image, the endoscopic image comprising a polyp to be typed;
    determining a first typing predicted value for the polyp in the endoscopic image through the first recognition network in a polyp typing model, and determining a second typing predicted value for the polyp in the endoscopic image through the second recognition network in the polyp typing model, the polyp typing model being trained by the model training method according to any one of claims 1-7;
    averaging the first typing predicted value and the second typing predicted value to obtain a target typing predicted value, and determining a target typing result for the polyp in the endoscopic image based on the target typing predicted value.
  9. A model training apparatus applied to a polyp typing model, the polyp typing model comprising a first recognition network and a second recognition network, the apparatus comprising:
    a first training module, configured to determine a plurality of sample endoscopic images, the sample endoscopic images being annotated with polyp typing labels;
    a second training module, configured to determine, for each sample endoscopic image, a first sample predicted value for the polyp in the sample endoscopic image through the first recognition network, and determine a second sample predicted value for the polyp in the sample endoscopic image through the second recognition network;
    a third training module, configured to classify the sample endoscopic image as a clean sample or a noise sample according to the difference between the first sample predicted value and the second sample predicted value, the clean sample being a sample endoscopic image whose polyp typing result is annotated correctly, and the noise sample being a sample endoscopic image whose polyp typing result is annotated incorrectly;
    a fourth training module, configured to train the polyp typing model according to the clean samples and the noise samples.
  10. A polyp typing apparatus, the apparatus comprising:
    an acquisition module, configured to acquire an endoscopic image, the endoscopic image comprising a polyp to be typed;
    a first processing module, configured to determine a first typing predicted value for the polyp in the endoscopic image through the first recognition network in a polyp typing model, and determine a second typing predicted value for the polyp in the endoscopic image through the second recognition network in the polyp typing model, the polyp typing model being trained by the model training method according to any one of claims 1-7;
    a second processing module, configured to average the first typing predicted value and the second typing predicted value to obtain a target typing predicted value, and determine a target typing result for the polyp in the endoscopic image based on the target typing predicted value.
  11. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processing device, implements the steps of the method according to any one of claims 1-8.
  12. An electronic device, comprising:
    a storage device having a computer program stored thereon;
    a processing device, configured to execute the computer program in the storage device to implement the steps of the method according to any one of claims 1-8.
PCT/CN2022/115758 2021-09-03 2022-08-30 Polyp typing method, model training method, and related apparatus WO2023030298A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111034220.2A CN113470031B (zh) 2021-09-03 2021-09-03 Polyp typing method, model training method, and related apparatus
CN202111034220.2 2021-09-03

Publications (1)

Publication Number Publication Date
WO2023030298A1 true WO2023030298A1 (zh) 2023-03-09

Family

ID=77868130

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/115758 WO2023030298A1 (zh) 2022-08-30 Polyp typing method, model training method, and related apparatus

Country Status (2)

Country Link
CN (1) CN113470031B (zh)
WO (1) WO2023030298A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117710763A (zh) * 2023-11-23 2024-03-15 Guangzhou Maritime University Image noise recognition model training method, image noise recognition method and apparatus

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470031B (zh) * 2021-09-03 2021-12-03 Beijing ByteDance Network Technology Co., Ltd. Polyp typing method, model training method, and related apparatus
CN114417987A (zh) * 2022-01-11 2022-04-29 Alipay (Hangzhou) Information Technology Co., Ltd. Model training method, data recognition method, apparatus, and device
CN114565586B (zh) * 2022-03-02 2023-05-30 Xiaohe Medical Instrument (Hainan) Co., Ltd. Training method for polyp segmentation model, polyp segmentation method, and related apparatus
CN114782390B (zh) * 2022-04-29 2023-08-11 Xiaohe Medical Instrument (Hainan) Co., Ltd. Detection model determination method, polyp detection method, apparatus, medium, and device
CN115511012B (zh) * 2022-11-22 2023-04-07 南京码极客科技有限公司 Category soft-label recognition training method with maximum entropy constraint
CN116051486A (zh) * 2022-12-29 2023-05-02 Douyin Vision Co., Ltd. Training method for endoscopic image recognition model, image recognition method and apparatus

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10482313B2 (en) * 2015-09-30 2019-11-19 Siemens Healthcare Gmbh Method and system for classification of endoscopic images using deep decision networks
CN114424210A (zh) * 2019-09-20 2022-04-29 Google LLC Robust training in the presence of label noise
CN111414946B (zh) * 2020-03-12 2022-09-23 Tencent Technology (Shenzhen) Co., Ltd. Artificial-intelligence-based noise data recognition method for medical images and related apparatus
CN112668698A (zh) * 2020-12-28 2021-04-16 Beijing Dilusense Technology Co., Ltd. Neural network training method and system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210248736A1 (en) * 2018-06-13 2021-08-12 Siemens Healthcare Gmbh Localization and classification of abnormalities in medical images
CN109753938A (zh) * 2019-01-10 2019-05-14 BOE Technology Group Co., Ltd. Image recognition method and device, application thereof, and neural network training method
CN110060247A (zh) * 2019-04-18 2019-07-26 深圳市深视创新科技有限公司 Robust deep neural network learning method for coping with sample labeling errors
CN110390674A (zh) * 2019-07-24 2019-10-29 Tencent Healthcare (Shenzhen) Co., Ltd. Image processing method, apparatus, storage medium, device, and system
CN110427994A (zh) * 2019-07-24 2019-11-08 Tencent Healthcare (Shenzhen) Co., Ltd. Digestive tract endoscopic image processing method, apparatus, storage medium, device, and system
CN113470031A (zh) * 2021-09-03 2021-10-01 Beijing ByteDance Network Technology Co., Ltd. Polyp typing method, model training method, and related apparatus

Also Published As

Publication number Publication date
CN113470031A (zh) 2021-10-01
CN113470031B (zh) 2021-12-03

Similar Documents

Publication Publication Date Title
WO2023030298A1 (zh) Polyp typing method, model training method, and related apparatus
US11610148B2 (en) Information processing device and information processing method
JP2022505775A (ja) Image classification model training method, image processing method and apparatus, and computer program
WO2023030370A1 (zh) Endoscopic image detection method and apparatus, storage medium, and electronic device
WO2023030523A1 (zh) Tissue cavity localization method and apparatus for endoscope, medium, and device
WO2022252881A1 (zh) Image processing method and apparatus, readable medium, and electronic device
WO2023030427A1 (zh) Generative model training method, polyp recognition method, apparatus, medium, and device
WO2023185516A1 (zh) Image recognition model training method, recognition method, apparatus, medium, and device
WO2023143178A1 (zh) Object segmentation method, apparatus, device, and storage medium
WO2023061080A1 (zh) Tissue image recognition method and apparatus, readable medium, and electronic device
WO2023030097A1 (zh) Method and apparatus for determining tissue cavity cleanliness, readable medium, and electronic device
CN110298850B (zh) Fundus image segmentation method and apparatus
WO2022095674A1 (zh) Method and apparatus for operating a mobile device
WO2022105622A1 (zh) Image segmentation method and apparatus, readable medium, and electronic device
Istasy et al. The impact of artificial intelligence on health equity in oncology: scoping review
WO2023078070A1 (zh) Character recognition method, apparatus, device, medium, and product
CN110472673B (zh) Parameter adjustment method, fundus image processing method, apparatus, medium, and device
WO2023030426A1 (zh) Polyp recognition method, apparatus, medium, and device
CN111797665B (zh) Method and apparatus for converting video
WO2023103887A1 (zh) Image segmentation label generation method and apparatus, electronic device, and storage medium
CN114937178B (zh) Multimodality-based image classification method and apparatus, readable medium, and electronic device
WO2022052889A1 (zh) Image recognition method and apparatus, electronic device, and computer-readable medium
CN114863124A (zh) Model training method, polyp detection method, corresponding apparatus, medium, and device
Zhang et al. Usable region estimate for assessing practical usability of medical image segmentation models
Abdel Magid et al. Channel embedding for informative protein identification from highly multiplexed images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22863433

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE