CN113298159A - Target detection method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN113298159A (application CN202110593978.3A)
- Authority
- CN
- China
- Prior art keywords
- detection model
- training
- data set
- detection
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F16/53 — Information retrieval of still image data; querying
- G06N3/02 — Computing arrangements based on biological models; neural networks
- G06N3/084 — Learning methods; backpropagation, e.g. using gradient descent
Abstract
The invention relates to artificial intelligence technology and discloses a target detection method comprising the following steps: acquiring a training data set; training an original detection model with the training data set to obtain training parameters, and initializing a teacher detection model and a student detection model with those parameters; training the student detection model and updating its parameters to obtain comparison parameters; updating the teacher detection model according to the comparison parameters; returning to the training and parameter-updating steps until the student detection model converges, yielding a target detection model; and identifying the target object contained in an image to be detected with the target detection model to obtain a detection result. Furthermore, the invention relates to blockchain technology: the training data set can be stored in a node of the blockchain. The invention also provides a target detection device, an electronic device, and a computer-readable storage medium. The method solves the problem of low detection accuracy when the amount of labeled data for a target detection model is small.
Description
Technical Field
The present invention relates to the field of artificial intelligence technologies, and in particular, to a target detection method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Object detection models are currently applied in many fields, such as face detection, vehicle detection, pedestrian counting, autonomous driving, security systems, and medical detection.
A target detection model often contains a large number of parameters and must be learned from a large amount of labeled data; when little labeled data is available, overfitting easily occurs, and labeling data at scale is time-consuming and labor-intensive. Meanwhile, existing target detection models trained with pseudo-labels amplify edge noise, so the model easily learns from erroneous data, lowering the accuracy of its detection results.
Disclosure of Invention
The invention provides a target detection method and apparatus, an electronic device, and a computer-readable storage medium, mainly aiming to solve the problem of low detection accuracy when the amount of labeled data for a target detection model is small.
In order to achieve the above object, the present invention provides a target detection method, including:
acquiring a training data set, wherein the training data set comprises labeled data and unlabeled data;
training a pre-constructed original detection model by using the labeled data in the training data set to obtain training parameters, and initializing the pre-constructed teacher detection model and the pre-constructed student detection model by using the training parameters;
training and updating parameters of the initialized student detection model by using the training data set and the initialized teacher detection model to obtain comparison parameters;
updating the initialized teacher detection model according to the comparison parameters;
returning to the steps of training and parameter updating for the initialized student detection model using the training data set and the initialized teacher detection model, until the student detection model converges, to obtain a target detection model;
and identifying a target object contained in the image to be detected by using the target detection model to obtain a detection result.
Optionally, the acquiring the training data set includes:
acquiring an image with a detection frame and a label from a first database to obtain annotation data;
obtaining an unprocessed image from a second database to obtain unmarked data;
and collecting the marked data and the unmarked data to obtain a training data set.
Optionally, the training of the pre-constructed original detection model by using the labeled data in the training data set to obtain the training parameters includes:
acquiring a pre-constructed original detection model;
performing target detection on the labeled data in the training data set by using the original detection model to obtain a detection result;
calculating a loss value of the detection result according to the label of the labeled data and the detection frame;
and updating the parameters of the original detection model according to the loss value to obtain training parameters.
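The final step above, updating parameters from a loss value, can be sketched as a single gradient-descent step. This is a minimal illustration with hypothetical scalar parameters; the actual optimizer, learning rate, and parameter layout are not specified in the text:

```python
def sgd_step(params, grads, lr=0.01):
    """Update each parameter against its loss gradient (one training step)."""
    return {name: value - lr * grads[name] for name, value in params.items()}

# Toy usage: one update of two scalar "parameters" given gradients of the loss
params = {"w": 1.0, "b": 0.5}
grads = {"w": 0.2, "b": -0.1}
params = sgd_step(params, grads, lr=0.1)
```

Repeating such steps over the labeled data yields the training parameters used to initialize the teacher and student models.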
Optionally, the training and parameter updating the initialized student detection model by using the training data set to obtain a comparison parameter includes:
carrying out amplification processing on the training data set to obtain an amplification data set;
carrying out disturbance processing on the amplification data set to obtain a disturbance data set;
inputting the disturbance data set into the student detection model respectively according to the data of the marked class and the data of the unmarked class for training to obtain a prediction detection result;
inputting the data of the unmarked classes in the disturbance data set into the teacher detection model for target detection to obtain a comparison detection result;
calculating supervision loss and consistency constraint of the prediction detection result according to a preset loss function and the comparison detection result to obtain a loss value;
and performing back propagation updating on the student detection model according to the loss value to obtain an updated student detection model, and acquiring parameters in the updated student detection model to obtain comparison parameters.
Optionally, the performing augmentation processing on the training data set to obtain an augmented data set includes:
carrying out geometric amplification, sequence amplification and intensity amplification on the labeled data in the training data set to obtain labeled amplification data;
performing sequence amplification on unlabeled data in the training data set to obtain unlabeled amplification data;
and collecting the amplification data of the labeled class and the unlabeled class to obtain an amplification data set.
Optionally, the calculating the supervised loss and the consistency constraint of the predicted detection result according to the preset loss function and the comparison detection result to obtain a loss value includes:
calculating supervision loss for the labeled data in the prediction detection result according to a preset loss function and the label of the training data set to obtain a supervision loss value;
calculating consistency constraint on the data of the unmarked class in the prediction detection result according to a preset loss function and the comparison detection result to obtain a consistency constraint value;
and merging the supervision loss value and the consistency constraint value to obtain a loss value.
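The merge of supervision loss and consistency constraint can be sketched as a weighted sum, with the consistency term computed as mean squared error (MSE) between student and teacher predictions on unlabeled data. The weighting scheme is an assumption; the patent does not fix it here:

```python
def mse(a, b):
    """Mean squared error between two equally sized prediction vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def total_loss(supervised_loss, student_out, teacher_out, weight=1.0):
    """Supervised loss on labeled data plus a weighted consistency constraint
    tying the student's predictions on unlabeled data to the teacher's."""
    consistency = mse(student_out, teacher_out)
    return supervised_loss + weight * consistency
```

For example, `total_loss(0.5, [0.2, 0.4], [0.2, 0.6], weight=0.5)` adds half the MSE between the two prediction vectors to the supervised loss.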
Optionally, the updating the teacher detection model according to the comparison parameter includes:
acquiring current model parameters of the teacher detection model;
calculating the current model parameter and the comparison parameter by using a preset parameter updating formula to obtain a new parameter;
replacing the current model parameters of the teacher detection model with the new parameters.
In order to solve the above problem, the present invention also provides an object detection apparatus, including:
the data acquisition module is used for acquiring a training data set, wherein the training data set comprises marked data and unmarked data;
the model initialization module is used for training a pre-constructed original detection model by using the labeled data in the training data set to obtain training parameters, and initializing the pre-constructed teacher detection model and the pre-constructed student detection model by using the training parameters;
the model training module is used for training and updating parameters of the initialized student detection model by utilizing the training data set and the initialized teacher detection model to obtain comparison parameters;
updating the initialized teacher detection model according to the comparison parameters;
returning to the steps of training and parameter updating for the initialized student detection model using the training data set and the initialized teacher detection model, until the student detection model converges, to obtain a target detection model;
and the target detection module is used for identifying a target object contained in the image to be detected by using the target detection model to obtain a detection result.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one instruction; and
- a processor that executes the instructions stored in the memory to implement the target detection method described above.
In order to solve the above problem, the present invention further provides a computer-readable storage medium, which stores at least one instruction, where the at least one instruction is executed by a processor in an electronic device to implement the object detection method described above.
In the embodiment of the invention, the initialized student detection model is trained and its parameters updated using the training data set and the initialized teacher detection model. During training, the loss over each point position's attributes and the heatmap is computed, acting as a soft-threshold mechanism, which effectively mitigates the amplification of noise across iterations; a consistency-constraint loss on the detection-frame size helps distinguish better frame lengths and widths, improving the accuracy of target detection. Meanwhile, the initialized teacher detection model is updated according to the comparison parameters under a semi-supervised training algorithm, so the model's learning efficiency improves when labeled data is scarce, effectively raising the accuracy of its detection results. Therefore, the target detection method and apparatus, electronic device, and computer-readable storage medium provided by the invention solve the problem of low detection accuracy when the amount of labeled data for a target detection model is small.
Drawings
Fig. 1 is a schematic flow chart of a target detection method according to an embodiment of the present invention;
FIG. 2 is a functional block diagram of an object detection apparatus according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device for implementing the target detection method according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the application provides a target detection method. The execution subject of the target detection method includes, but is not limited to, at least one of the electronic devices, such as a server or a terminal, that can be configured to execute the method provided by the embodiments of the present application. In other words, the target detection method may be performed by software or hardware installed in the terminal device or the server device, and the software may be a blockchain platform. The server includes but is not limited to a single server, a server cluster, a cloud server, a cloud server cluster, and the like.
Fig. 1 is a schematic flow chart of a target detection method according to an embodiment of the present invention.
In this embodiment, the target detection method includes:
and S1, acquiring a training data set, wherein the training data set comprises marked data and unmarked data.
In an embodiment of the invention, the training data set is a set of a plurality of images, such as medical images.
In detail, the acquiring of the training data set comprises:
acquiring an image with a detection frame and a label from a first database to obtain annotation data;
obtaining an unprocessed image from a second database to obtain unmarked data;
and collecting the marked data and the unmarked data to obtain a training data set.
The labeled data refers to an image set with detection frames and labels; the unlabeled data refers to an image set without detection frames or labels.
In this embodiment, the first database and the second database may be the same or different databases.
Optionally, labeled data makes up a small fraction of the training data set and unlabeled data a large fraction. Labeled data is usually scarce, and acquiring a large amount of it is time-consuming and labor-intensive; reducing the amount of labeled data needed therefore saves time and improves efficiency.
Optionally, the training data set may be a medical examination image, and in order to further emphasize the secrecy and security of the unlabeled data, the training data set may also be obtained from nodes of a block chain.
And S2, training the pre-constructed original detection model by using the labeled data in the training data set to obtain training parameters, and initializing the pre-constructed teacher detection model and the pre-constructed student detection model by using the training parameters.
The pre-constructed original detection model in the embodiment of the invention is a target detection model based on a neural network, such as a CenterNet model. The teacher detection model and the student detection model are constructed based on the Mean teacher algorithm and are copied from the original detection model.
In detail, the training of the pre-constructed original detection model by using the labeled data in the training data set to obtain the training parameters includes:
acquiring a pre-constructed original detection model;
performing target detection on the labeled data in the training data set by using the original detection model to obtain a detection result;
calculating a loss value of the detection result according to the label of the labeled data and the detection frame;
and updating the parameters of the original detection model according to the loss value to obtain training parameters.
In detail, the initializing a pre-constructed teacher detection model and a pre-constructed student detection model by using the training parameters includes: replacing original parameters in a pre-constructed teacher detection model with the training parameters; and replacing original parameters in the pre-constructed student detection model with the training parameters.
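This initialization by parameter replacement amounts to giving the teacher and student models independent copies of the pretrained parameters. A sketch with parameter dictionaries standing in for the real network objects:

```python
import copy

def initialize_teacher_student(training_params):
    """Replace the original parameters of the pre-constructed teacher and
    student detection models with independent copies of the training parameters."""
    teacher_params = copy.deepcopy(training_params)
    student_params = copy.deepcopy(training_params)
    return teacher_params, student_params

# Toy usage with a hypothetical single-layer parameter dictionary
teacher, student = initialize_teacher_student({"conv1": [0.1, 0.2]})
```

Deep copies matter here: the student's parameters will subsequently diverge through back-propagation while the teacher's are updated only by averaging, so the two must not share storage.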
And S3, training and updating parameters of the initialized student detection model by using the training data set and the initialized teacher detection model to obtain comparison parameters.
In detail, the training and parameter updating the initialized student detection model by using the training data set to obtain a comparison parameter includes:
carrying out amplification processing on the training data set to obtain an amplification data set;
carrying out disturbance processing on the amplification data set to obtain a disturbance data set;
inputting the disturbance data set into the student detection model respectively according to the data of the marked class and the data of the unmarked class for training to obtain a prediction detection result;
inputting the data of the unmarked classes in the disturbance data set into the teacher detection model for target detection to obtain a comparison detection result;
calculating supervision loss and consistency constraint of the prediction detection result according to a preset loss function and the comparison detection result to obtain a loss value;
and performing back propagation updating on the student detection model according to the loss value to obtain an updated student detection model, and acquiring parameters in the updated student detection model to obtain comparison parameters.
Further, the performing amplification processing on the training data set to obtain an amplification data set includes:
carrying out geometric amplification, sequence amplification and intensity amplification on the labeled data in the training data set to obtain labeled amplification data;
performing sequence amplification on unlabeled data in the training data set to obtain unlabeled amplification data;
and collecting the amplification data of the labeled class and the unlabeled class to obtain an amplification data set.
Here, sequence augmentation refers to randomly combining different sequences of the data. For example, if the data set comprises five sequences, a random non-empty subset of them is selected (at least one sequence, at most all five), giving 2^5 − 1 = 31 possible sequence combinations.
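The count of 31 follows from enumerating every non-empty subset of the five sequences, which can be sketched directly (the sequence names below are hypothetical placeholders):

```python
from itertools import combinations

def sequence_augmentations(sequences):
    """All non-empty combinations of the available sequences
    (at least one, at most all of them), as used for sequence augmentation."""
    combos = []
    for k in range(1, len(sequences) + 1):
        combos.extend(combinations(sequences, k))
    return combos

# Five hypothetical sequence names; any set of five gives the same count
combos = sequence_augmentations(["seq1", "seq2", "seq3", "seq4", "seq5"])
```

In general, n sequences yield 2^n − 1 non-empty combinations.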
The data is subjected to augmentation processing, so that the number of training data sets can be increased, the data sets are diversified as much as possible, the trained model has stronger generalization capability, and the accuracy of the model is improved.
Further, the perturbing the amplification data set to obtain a perturbed data set includes:
performing geometric transformation and intensity transformation on the amplification data of the unlabeled class in the amplification data set to obtain disturbance data of the unlabeled class;
and adding the disturbance data of the unlabeled class into the amplification data set to obtain a disturbance data set.
Intensity transformation refers to changing the brightness of an image: the result is the same picture at a different brightness.
Both intensity and geometric transformations are applied to the data in order to introduce perturbation (noise); since a perturbed image still depicts the original content, the prediction for it should remain consistent with the prediction for the original image.
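A minimal sketch of the two perturbation families on a toy grayscale image (nested lists of pixel values in [0, 1]). The concrete transforms used by the patent are not specified, so a brightness shift and a horizontal flip stand in for intensity and geometric transformation respectively:

```python
import random

def intensity_transform(image, delta=None, max_delta=0.2):
    """Brightness shift: same picture, different brightness, clipped to [0, 1]."""
    if delta is None:
        delta = random.uniform(-max_delta, max_delta)
    return [[min(1.0, max(0.0, px + delta)) for px in row] for row in image]

def geometric_transform(image):
    """A simple geometric perturbation: horizontal flip."""
    return [row[::-1] for row in image]

img = [[0.1, 0.5], [0.9, 0.3]]
brighter = intensity_transform(img, delta=0.2)  # fixed delta for reproducibility
flipped = geometric_transform(img)
```

A geometric transform like the flip is its own inverse, which makes it easy to map the teacher's predictions on the perturbed image back to the original coordinates when computing the consistency constraint.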
Further, the calculating the supervision loss and the consistency constraint of the predicted detection result according to the preset loss function and the comparison detection result to obtain a loss value includes:
calculating supervision loss for the labeled data in the prediction detection result according to a preset loss function and the label of the training data set to obtain a supervision loss value;
calculating consistency constraint on the data of the unmarked class in the prediction detection result according to a preset loss function and the comparison detection result to obtain a consistency constraint value;
and merging the supervision loss value and the consistency constraint value to obtain a loss value.
In the embodiment of the present invention, the loss function is as follows:

$L = L_{sup} + \lambda_{c}\left(L_{cons}^{heatmap} + L_{cons}^{size}\right)$

The first term is the supervision loss value, obtained by applying the loss function to the labeled-class data in the prediction detection result output by the student detection model and the labels of the training data set. The second term is the consistency constraint; it comprises two MSE losses, on the pixel-center prediction heatmap and on the attribute regression of each point position, computed for the unlabeled-class data; $\lambda_{c}$ is a weight coefficient balancing the two parts.

Further, the supervision loss is specifically:

$L_{sup} = L_{heatmap} + \lambda_{size} L_{size} + \lambda_{offset} L_{offset}$

wherein $L_{heatmap}$ is the loss of the pixel-center prediction heatmap output by the student detection model for the labeled data, $L_{size}$ is the attribute regression loss of each point position in the labeled data, $L_{offset}$ is the position offset loss of each point in the labeled data, and $\lambda_{size}$ and $\lambda_{offset}$ are weight coefficients.

$L_{heatmap} = -\frac{1}{N}\sum_{xyc}\begin{cases}\left(1-\hat{Y}_{xyc}\right)^{\alpha}\log\left(\hat{Y}_{xyc}\right) & \text{if } Y_{xyc}=1\\\left(1-Y_{xyc}\right)^{\beta}\left(\hat{Y}_{xyc}\right)^{\alpha}\log\left(1-\hat{Y}_{xyc}\right) & \text{otherwise}\end{cases}$

wherein $\hat{Y}_{xyc}$ represents the predicted heatmap, with 1 representing a detected keypoint and 0 the background; $Y_{xyc}$ represents the ground-truth heatmap, a Gaussian distribution map centered on each keypoint; $\alpha$ and $\beta$ are hyper-parameters; and $N$ is the number of keypoints.

$L_{size} = \frac{1}{N}\sum_{k=1}^{N}\left|\hat{s}_{k}-s_{k}\right|$

wherein $s_{k}$ represents the length and width of an actual detection frame of the labeled data, and $\hat{s}_{k}$ the length and width of the prediction frame output by the student detection model for the labeled data.

$L_{offset} = \frac{1}{N}\sum_{\tilde{p}}\left|\hat{O}_{\tilde{p}}-\left(\frac{p}{R}-\tilde{p}\right)\right|$

wherein $\hat{O}_{\tilde{p}}$ represents the offset predicted by the student detection model; $\tilde{p}$ is the position of an actual keypoint of the labeled data on the output map; $p$ is the corresponding keypoint position at input resolution; and $R$ represents the output stride.
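A pure-Python sketch of these supervised loss terms over flattened heatmaps. The focal-loss form follows the standard CenterNet formulation and is an assumption about the patent's exact definition:

```python
import math

def heatmap_focal_loss(pred, gt, alpha=2.0, beta=4.0):
    """Penalty-reduced pixel-wise focal loss: keypoint cells (gt == 1) and
    background cells are weighted differently; normalized by keypoint count."""
    loss, num_keypoints = 0.0, 0
    for p, y in zip(pred, gt):
        p = min(max(p, 1e-6), 1.0 - 1e-6)  # numerical safety for log()
        if y == 1.0:
            num_keypoints += 1
            loss += ((1.0 - p) ** alpha) * math.log(p)
        else:
            loss += ((1.0 - y) ** beta) * (p ** alpha) * math.log(1.0 - p)
    return -loss / max(num_keypoints, 1)

def size_loss(pred_sizes, true_sizes):
    """L1 regression loss on predicted box widths/heights."""
    n = max(len(true_sizes), 1)
    return sum(abs(p - t) for p, t in zip(pred_sizes, true_sizes)) / n

# A confident correct heatmap scores a much smaller loss than an inverted one
good = heatmap_focal_loss([0.99, 0.01], [1.0, 0.0])
bad = heatmap_focal_loss([0.01, 0.99], [1.0, 0.0])
```

The offset loss has the same L1 shape as `size_loss`, applied to the predicted sub-pixel offsets instead of box dimensions.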
And S4, updating the initialized teacher detection model according to the comparison parameters.
In detail, the updating the teacher detection model according to the comparison parameters includes:
acquiring current model parameters of the teacher detection model;
calculating the current model parameter and the comparison parameter by using a preset parameter updating formula to obtain a new parameter;
replacing the current model parameters of the teacher detection model with the new parameters.
In the embodiment of the present invention, the parameter update formula includes:

$\theta^{t}_{k} = \alpha\,\theta^{t}_{k-1} + (1-\alpha)\,\theta^{s}_{k}$

wherein $\theta^{t}_{k}$ represents the parameters of the teacher detection model at the $k$-th update, $\theta^{s}_{k}$ represents the parameters of the student detection model at the $k$-th update, and $\alpha$ is a weight coefficient.
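This exponential-moving-average style update can be sketched directly, with parameters as flat dictionaries (the per-layer tensor case is analogous):

```python
def ema_update(teacher_params, student_params, alpha=0.99):
    """Move the teacher's parameters a small step toward the student's:
    theta_teacher_k = alpha * theta_teacher_(k-1) + (1 - alpha) * theta_student_k."""
    return {name: alpha * teacher_params[name] + (1.0 - alpha) * student_params[name]
            for name in teacher_params}

# One update step with a single scalar parameter and alpha = 0.9
new_teacher = ema_update({"w": 1.0}, {"w": 0.0}, alpha=0.9)
```

A weight coefficient close to 1 keeps the teacher stable across iterations, smoothing out the noise in any single student update.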
And S5, returning to the step S3 until the student detection model converges to obtain a target detection model.
In the embodiment of the present invention, steps S3 and S4 are repeated until the student detection model converges, that is, until the loss function of the student detection model no longer decreases; the student detection model at that point is taken as the final target detection model.
And S6, identifying the target object contained in the image to be detected by using the target detection model to obtain a detection result.
In the embodiment of the invention, the image to be detected may be a medical image; the target detection model can preliminarily detect a lesion in the medical image and obtain the lesion's position.
In detail, the identifying, by using the target detection model, the target object included in the image to be detected to obtain a detection result includes:
extracting features of the image to be detected through the feature-extraction network of the target detection model to obtain a feature map;
and classifying and locating on the feature map through the classification network of the target detection model to obtain information on whether a target object exists and, when one does, the target object's position information.
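A toy sketch of this classification-and-localization step for a center-point detector: thresholded local maxima of the output heatmap give presence and position. The real model also regresses sizes and offsets; this only illustrates the decoding idea:

```python
def decode_centers(heatmap, threshold=0.5):
    """Return (row, col, score) for every cell that exceeds the threshold
    and is a local maximum in its 3x3 neighborhood."""
    rows, cols = len(heatmap), len(heatmap[0])
    detections = []
    for r in range(rows):
        for c in range(cols):
            score = heatmap[r][c]
            if score < threshold:
                continue
            neighbors = [heatmap[rr][cc]
                         for rr in range(max(0, r - 1), min(rows, r + 2))
                         for cc in range(max(0, c - 1), min(cols, c + 2))
                         if (rr, cc) != (r, c)]
            if all(score >= n for n in neighbors):
                detections.append((r, c, score))
    return detections

# One confident center in a 3x3 heatmap yields one detection
hm = [[0.1, 0.2, 0.1],
      [0.2, 0.9, 0.2],
      [0.1, 0.2, 0.1]]
found = decode_centers(hm)
```

An empty result means no target object was found; each tuple otherwise gives the detected object's position on the feature map together with its confidence score.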
In the embodiment of the invention, the initialized student detection model is trained and its parameters updated using the training data set and the initialized teacher detection model. During training, the loss over each point position's attributes and the heatmap is computed, acting as a soft-threshold mechanism, which effectively mitigates the amplification of noise across iterations; a consistency-constraint loss on the detection-frame size helps distinguish better frame lengths and widths, improving the accuracy of target detection. Meanwhile, the initialized teacher detection model is updated according to the comparison parameters under a semi-supervised training algorithm, so the model's learning efficiency improves when labeled data is scarce, effectively raising the accuracy of its detection results. Therefore, the target detection method provided by the invention solves the problem of low detection accuracy when the amount of labeled data for a target detection model is small.
Fig. 2 is a functional block diagram of an object detection apparatus according to an embodiment of the present invention.
The object detecting device 100 of the present invention can be installed in an electronic device. According to the implemented functions, the object detection apparatus 100 may include a data acquisition module 101, a model initialization module 102, a model training module 103, and an object detection module 104. The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the data obtaining module 101 is configured to obtain a training data set, where the training data set includes labeled data and unlabeled data.
In an embodiment of the invention, the training data set is a set of a plurality of images, such as medical images.
In detail, the data obtaining module 101 is specifically configured to:
acquiring an image with a detection frame and a label from a first database to obtain annotation data;
obtaining an unprocessed image from a second database to obtain unmarked data;
and collecting the marked data and the unmarked data to obtain a training data set.
The labeled data refers to an image set with detection frames and labels; the unlabeled data refers to an image set without detection frames or labels.
Optionally, the training data set may be a medical examination image, and in order to further emphasize the secrecy and security of the unlabeled data, the training data set may also be obtained from nodes of a block chain.
The model initialization module 102 is configured to train the pre-constructed original detection model by using the labeled data in the training data set to obtain training parameters, and initialize the pre-constructed teacher detection model and the pre-constructed student detection model by using the training parameters.
The pre-constructed original detection model in the embodiment of the invention is a target detection model based on a neural network, such as a CenterNet model. The teacher detection model and the student detection model are constructed based on the Mean teacher algorithm and are copied from the original detection model.
In detail, when the pre-constructed original detection model is trained by using the labeled data in the training data set to obtain the training parameters, the model initialization module 102 specifically executes the following operations:
acquiring a pre-constructed original detection model;
performing target detection on the labeled data in the training data set by using the original detection model to obtain a detection result;
calculating a loss value of the detection result according to the label of the labeled data and the detection frame;
and updating the parameters of the original detection model according to the loss value to obtain training parameters.
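The pretraining loop above can be illustrated with a deliberately tiny stand-in, where the "model" is a single scalar parameter, "detection" is multiplication, and the loss is the squared error against the annotation; the function name, learning rate, and epoch count are assumptions for illustration only:

```python
def pretrain(theta, labeled_pairs, lr=0.1, epochs=50):
    """Toy supervised pretraining: update the parameter from the loss value."""
    for _ in range(epochs):
        for x, y in labeled_pairs:      # labeled data: input and annotation
            pred = theta * x            # "target detection" on the input
            grad = 2 * (pred - y) * x   # gradient of the loss (pred - y)**2
            theta -= lr * grad          # update parameters by the loss value
    return theta                        # the resulting training parameters
```

The returned value plays the role of the "training parameters" that the teacher and student models are initialized with.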
In detail, the initializing a pre-constructed teacher detection model and a pre-constructed student detection model by using the training parameters includes: replacing original parameters in a pre-constructed teacher detection model with the training parameters; and replacing original parameters in the pre-constructed student detection model with the training parameters.
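Initializing both models from the training parameters amounts to copying the parameter set twice; independent copies matter because the student and teacher diverge during later training. A minimal sketch, with the parameter container treated as a plain Python structure:

```python
import copy

def init_teacher_student(training_params):
    """Replace the original parameters of both models with the training
    parameters, as independent deep copies."""
    teacher = copy.deepcopy(training_params)
    student = copy.deepcopy(training_params)
    return teacher, student
```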
The model training module 103 is configured to train and update parameters of the initialized student detection model by using the training data set and the initialized teacher detection model to obtain a comparison parameter;
updating the initialized teacher detection model according to the comparison parameters;
and repeating the operations until the student detection model is converged to obtain the target detection model.
In detail, when the initialized student detection model is trained and parameter-updated by using the training data set to obtain a comparison parameter, the model training module 103 specifically executes the following operations:
carrying out amplification processing on the training data set to obtain an amplification data set;
carrying out disturbance processing on the amplification data set to obtain a disturbance data set;
inputting the disturbance data set into the student detection model respectively according to the data of the marked class and the data of the unmarked class for training to obtain a prediction detection result;
inputting the data of the unmarked classes in the disturbance data set into the teacher detection model for target detection to obtain a comparison detection result;
calculating supervision loss and consistency constraint of the prediction detection result according to a preset loss function and the comparison detection result to obtain a loss value;
and performing back propagation updating on the student detection model according to the loss value to obtain an updated student detection model, and acquiring parameters in the updated student detection model to obtain comparison parameters.
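The data flow of one student-update round above can be sketched as follows; every callable (augmentation, perturbation, the two models, the two losses, and the update step) is a placeholder for the corresponding component described in this section, so only the wiring between the steps is shown:

```python
def student_step(train_set, augment, perturb, student, teacher,
                 supervised_loss, consistency_loss, update):
    augmented = augment(train_set)                 # amplification data set
    disturbed = perturb(augmented)                 # disturbance data set
    labeled = [d for d in disturbed if d["labeled"]]
    unlabeled = [d for d in disturbed if not d["labeled"]]
    pred_labeled = [student(d) for d in labeled]     # prediction result
    pred_unlabeled = [student(d) for d in unlabeled]
    ref_unlabeled = [teacher(d) for d in unlabeled]  # comparison result
    loss = (supervised_loss(pred_labeled, labeled)
            + consistency_loss(pred_unlabeled, ref_unlabeled))
    return update(loss)   # back-propagation step; yields comparison params
```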
Further, the performing amplification processing on the training data set to obtain an amplification data set includes:
carrying out geometric amplification, sequence amplification and intensity amplification on the labeled data in the training data set to obtain labeled amplification data;
performing sequence amplification on unlabeled data in the training data set to obtain unlabeled amplification data;
and collecting the amplification data of the labeled class and the unlabeled class to obtain an amplification data set.
Wherein, the sequence augmentation refers to randomly combining different sequences of the data. For example, if the data set comprises five sequences, a random subset of at least one and at most five sequences is selected, so the combined data covers 2^5 - 1 = 31 possible sequence combinations.
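The subset count above can be checked directly; the sequence names below are hypothetical MRI sequences used only for illustration:

```python
from itertools import combinations

def sequence_combinations(sequences):
    """Enumerate every subset of at least one and at most all sequences."""
    combos = []
    for k in range(1, len(sequences) + 1):
        combos.extend(combinations(sequences, k))
    return combos
```

For five sequences this yields 5 + 10 + 10 + 5 + 1 = 31 combinations, matching the count stated above.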
Augmenting the data increases the size of the training data set and makes it as diverse as possible, so that the trained model has stronger generalization capability and the accuracy of the model is improved.
Further, the perturbing the amplification data set to obtain a perturbed data set includes:
performing geometric transformation and intensity transformation on the amplification data of the unlabeled class in the amplification data set to obtain disturbance data of the unlabeled class;
and adding the disturbance data of the unlabeled class into the amplification data set to obtain a disturbance data set.
The intensity transformation refers to performing brightness transformation processing on an image; that is, the result is the same picture at a different brightness.
Both the intensity transformation and the geometric transformation are performed on the data in order to add disturbance (noise); an image with noise is still essentially the original image, so the prediction result obtained on it should be consistent with that obtained on the original image.
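A minimal example of an intensity (brightness) perturbation consistent with the description above; representing the image as a flat pixel list and the 0.8-1.2 gain range are both assumptions for illustration:

```python
import random

def intensity_perturb(pixels, rng, low=0.8, high=1.2):
    """Scale all pixel intensities by one random gain, clamped to 8-bit range.

    The image content is unchanged, so a consistency-trained detector should
    produce the same prediction on the perturbed image as on the original.
    """
    gain = rng.uniform(low, high)
    return [min(255, int(p * gain)) for p in pixels]
```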
Further, the calculating the supervision loss and the consistency constraint of the predicted detection result according to the preset loss function and the comparison detection result to obtain a loss value includes:
calculating supervision loss for the labeled data in the prediction detection result according to a preset loss function and the label of the training data set to obtain a supervision loss value;
calculating consistency constraint on the data of the unmarked class in the prediction detection result according to a preset loss function and the comparison detection result to obtain a consistency constraint value;
and merging the supervision loss value and the consistency constraint value to obtain a loss value.
In the embodiment of the present invention, the loss function is as follows:

$$L = L_{sup} + L_{cons}$$

The first term $L_{sup}$ is the supervised loss value, obtained by evaluating the loss function on the labeled-class data in the prediction detection result output by the student detection model against the labels of the training data set. The second term $L_{cons}$ is the consistency constraint; it comprises two MSE losses, namely the MSE of the pixel-point center prediction thermodynamic diagram (heatmap) and the MSE of the attribute regression at each point position, both computed on the unlabeled-class data between the student's and the teacher's outputs.
Further, the supervised loss is specifically:

$$L_{sup} = L_{heatmap} + \lambda_{size} L_{size} + \lambda_{offset} L_{offset}$$

where $L_{heatmap}$ is the pixel-point center prediction thermodynamic diagram (heatmap) loss of the student detection model on the labeled data, $L_{size}$ is the attribute regression loss of each point position in the labeled data, $L_{offset}$ is the offset loss of each point position in the labeled data, and $\lambda_{size}$ and $\lambda_{offset}$ are weight coefficients.
$$L_{heatmap} = \frac{-1}{N}\sum_{xyc}\begin{cases}(1-\hat{Y}_{xyc})^{\alpha}\log(\hat{Y}_{xyc}) & \text{if } Y_{xyc}=1\\(1-Y_{xyc})^{\beta}(\hat{Y}_{xyc})^{\alpha}\log(1-\hat{Y}_{xyc}) & \text{otherwise}\end{cases}$$

where $\hat{Y}_{xyc}$ represents the predicted thermodynamic diagram (heatmap), in which 1 represents a detected keypoint and 0 represents the background; $Y_{xyc}$ is the ground-truth heatmap, a Gaussian distribution map centered on a keypoint; $\alpha$ and $\beta$ are hyper-parameters; and $N$ is the number of keypoints.
$$L_{size} = \frac{1}{N}\sum_{k=1}^{N}\left|\hat{S}_{p_k} - s_k\right|$$

where $s_k$ represents the length and width of the actual detection frame of the labeled data, and $\hat{S}_{p_k}$ is the length and width of the prediction frame of the labeled data output by the student detection model.
$$L_{offset} = \frac{1}{N}\sum_{\tilde{p}}\left|\hat{O}_{\tilde{p}} - \left(\frac{p}{R} - \tilde{p}\right)\right|$$

where $\hat{O}_{\tilde{p}}$ represents the offset predicted by the student detection model; $p$ is the position of an actual keypoint in the labeled data; $\tilde{p}$ is the position of the corresponding predicted keypoint of the labeled data output by the student detection model; and $R$ represents the output stride.
In detail, when the teacher detection model is updated according to the comparison parameters, the model training module 103 specifically performs the following operations:
acquiring current model parameters of the teacher detection model;
calculating the current model parameter and the comparison parameter by using a preset parameter updating formula to obtain a new parameter;
replacing current model parameters of the teacher detected model with the new parameters.
In the embodiment of the present invention, the parameter update formula is:

$$\theta'_{k} = \alpha\,\theta'_{k-1} + (1-\alpha)\,\theta_{k}$$

where $\theta'_{k}$ represents the parameters of the teacher detection model at the $k$-th update, $\theta_{k}$ represents the parameters of the student detection model at the $k$-th update, and $\alpha$ is a weight coefficient.
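The teacher update described above is an exponential moving average (EMA) of the student's parameters, as in Mean Teacher. A minimal sketch, treating each model's parameters as a flat list of floats (an assumption for illustration):

```python
def ema_update(teacher_params, student_params, alpha=0.99):
    """New teacher parameters: alpha * old teacher + (1 - alpha) * student."""
    return [alpha * t + (1 - alpha) * s
            for t, s in zip(teacher_params, student_params)]
```

With alpha close to 1, the teacher changes slowly and smooths out noise in the student's updates.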
Training and parameter updating of the initialized student detection model with the training data set and the initialized teacher detection model are then repeated until the student detection model converges, that is, until the loss function of the student detection model no longer decreases; the student detection model at that point is taken as the final target detection model.
The target detection module 104 is configured to identify a target object included in the image to be detected by using the target detection model, so as to obtain a detection result.
In the embodiment of the invention, the image to be detected can be a medical image, the focus in the medical image can be preliminarily detected by using the target detection model, and the position of the focus can be obtained.
In detail, the object detection module 104 is specifically configured to:
extracting the characteristics of the image to be detected through a characteristic extraction network of the target detection model to obtain a characteristic diagram;
and classifying and locating on the feature map through a classification network of the target detection model to obtain result information on whether a target object exists in the image to be detected and, when it exists, position information of the target object.
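The two-stage inference above can be sketched as follows; the feature extractor and classification head are placeholder callables, and the 0.5 score threshold and the result dictionary layout are assumptions:

```python
def detect(image, extract_features, classify_and_locate, threshold=0.5):
    """Run feature extraction, then classification and localization."""
    feature_map = extract_features(image)          # feature extraction network
    score, box = classify_and_locate(feature_map)  # classification network
    if score >= threshold:
        return {"found": True, "box": box, "score": score}
    return {"found": False, "box": None, "score": score}
```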
In the embodiment of the invention, the initialized student detection model is trained and parameter-updated by using the training data set and the initialized teacher detection model. During training, the loss on the attribute of each point position and on the thermodynamic diagram (heatmap) is calculated, which can be regarded as a soft-threshold approach, so the problem of noise being amplified over iterations can be effectively mitigated; the consistency constraint loss is also calculated on the size of the detection frame, so better detection-frame lengths and widths can be distinguished, improving the accuracy of target detection. Meanwhile, the initialized teacher detection model is updated according to the comparison parameters, and a semi-supervised training algorithm is adopted, so the learning efficiency of the model can be improved even with little labeled data, effectively improving the accuracy of the model's detection results. Therefore, the target detection apparatus provided by the invention can address the problem of low detection accuracy when the amount of labeled data for a target detection model is small.
Fig. 3 is a schematic structural diagram of an electronic device for implementing a target detection method according to an embodiment of the present invention.
The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program, such as an object detection program 12, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory), magnetic memory, magnetic disk, optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1. The memory 11 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, or a Flash memory Card (Flash Card) provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed in the electronic device 1 and various types of data, such as the code of the object detection program 12, but also to temporarily store data that has been output or is to be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device 1 by running or executing programs or modules (e.g., object detection programs, etc.) stored in the memory 11 and calling data stored in the memory 11.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
Fig. 3 shows only an electronic device with components, and it will be understood by those skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than those shown, or some components may be combined, or a different arrangement of components.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device 1 and for displaying a visualized user interface, among other things.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The object detection program 12 stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed in the processor 10, enable:
acquiring a training data set, wherein the training data set comprises labeled data and unlabeled data;
training a pre-constructed original detection model by using the labeled data in the training data set to obtain training parameters, and initializing the pre-constructed teacher detection model and the pre-constructed student detection model by using the training parameters;
training and updating parameters of the initialized student detection model by using the training data set and the initialized teacher detection model to obtain comparison parameters;
updating the initialized teacher detection model according to the comparison parameters;
training and parameter updating steps are carried out on the initialized student detection model by using the training data set and the initialized teacher detection model until the student detection model is converged to obtain a target detection model;
and identifying a target object contained in the image to be detected by using the target detection model to obtain a detection result.
Specifically, the specific implementation method of the processor 10 for the instruction may refer to the description of the relevant steps in the embodiments corresponding to fig. 1 to fig. 3, which is not repeated herein.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. The computer readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
acquiring a training data set, wherein the training data set comprises labeled data and unlabeled data;
training a pre-constructed original detection model by using the labeled data in the training data set to obtain training parameters, and initializing the pre-constructed teacher detection model and the pre-constructed student detection model by using the training parameters;
training and updating parameters of the initialized student detection model by using the training data set and the initialized teacher detection model to obtain comparison parameters;
updating the initialized teacher detection model according to the comparison parameters;
training and parameter updating steps are carried out on the initialized student detection model by using the training data set and the initialized teacher detection model until the student detection model is converged to obtain a target detection model;
and identifying a target object contained in the image to be detected by using the target detection model to obtain a detection result.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.
Claims (10)
1. A method of object detection, the method comprising:
acquiring a training data set, wherein the training data set comprises labeled data and unlabeled data;
training a pre-constructed original detection model by using the labeled data in the training data set to obtain training parameters, and initializing the pre-constructed teacher detection model and the pre-constructed student detection model by using the training parameters;
training and updating parameters of the initialized student detection model by using the training data set and the initialized teacher detection model to obtain comparison parameters;
updating the initialized teacher detection model according to the comparison parameters;
training and parameter updating steps are carried out on the initialized student detection model by using the training data set and the initialized teacher detection model until the student detection model is converged to obtain a target detection model;
and identifying a target object contained in the image to be detected by using the target detection model to obtain a detection result.
2. The target detection method of claim 1, wherein said obtaining a training data set comprises:
acquiring an image with a detection frame and a label from a first database to obtain annotation data;
obtaining an unprocessed image from a second database to obtain unmarked data;
and collecting the marked data and the unmarked data to obtain a training data set.
3. The method of claim 1, wherein the training the pre-constructed original detection model using the labeled data in the training dataset to obtain the training parameters comprises:
acquiring a pre-constructed original detection model;
performing target detection on the labeled data in the training data set by using the original detection model to obtain a detection result;
calculating a loss value of the detection result according to the label of the labeled data and the detection frame;
and updating the parameters of the original detection model according to the loss value to obtain training parameters.
4. The method for detecting the target of claim 1, wherein the training and updating the initialized student detection model by using the training data set to obtain the comparison parameter comprises:
carrying out amplification processing on the training data set to obtain an amplification data set;
carrying out disturbance processing on the amplification data set to obtain a disturbance data set;
inputting the disturbance data set into the student detection model respectively according to the data of the marked class and the data of the unmarked class for training to obtain a prediction detection result;
inputting the data of the unmarked classes in the disturbance data set into the teacher detection model for target detection to obtain a comparison detection result;
calculating supervision loss and consistency constraint of the prediction detection result according to a preset loss function and the comparison detection result to obtain a loss value;
and performing back propagation updating on the student detection model according to the loss value to obtain an updated student detection model, and acquiring parameters in the updated student detection model to obtain comparison parameters.
5. The method for detecting an object according to claim 4, wherein the amplifying the training data set to obtain an amplified data set comprises:
carrying out geometric amplification, sequence amplification and intensity amplification on the labeled data in the training data set to obtain labeled amplification data;
performing sequence amplification on unlabeled data in the training data set to obtain unlabeled amplification data;
and collecting the amplification data of the labeled class and the unlabeled class to obtain an amplification data set.
6. The method of claim 4, wherein the calculating supervision loss and consistency constraint of the prediction detection result according to a preset loss function and the comparison detection result to obtain a loss value comprises:
calculating supervision loss for the labeled data in the prediction detection result according to a preset loss function and the label of the training data set to obtain a supervision loss value;
calculating consistency constraint on the data of the unmarked class in the prediction detection result according to a preset loss function and the comparison detection result to obtain a consistency constraint value;
and merging the supervision loss value and the consistency constraint value to obtain a loss value.
7. The object detection method of claim 1, wherein said updating the teacher detection model based on the control parameters comprises:
acquiring current model parameters of the teacher detection model;
calculating the current model parameter and the comparison parameter by using a preset parameter updating formula to obtain a new parameter;
replacing current model parameters of the teacher detected model with the new parameters.
8. An object detection apparatus, characterized in that the apparatus comprises:
the data acquisition module is used for acquiring a training data set, wherein the training data set comprises marked data and unmarked data;
the model initialization module is used for training a pre-constructed original detection model by using the labeled data in the training data set to obtain training parameters, and initializing the pre-constructed teacher detection model and the pre-constructed student detection model by using the training parameters;
the model training module is used for training and updating parameters of the initialized student detection model by utilizing the training data set and the initialized teacher detection model to obtain comparison parameters;
updating the initialized teacher detection model according to the comparison parameters;
training and parameter updating steps are carried out on the initialized student detection model by using the training data set and the initialized teacher detection model until the student detection model is converged to obtain a target detection model;
and the target detection module is used for identifying a target object contained in the image to be detected by using the target detection model to obtain a detection result.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method of object detection as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the object detection method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110593978.3A CN113298159B (en) | 2021-05-28 | Target detection method, target detection device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113298159A true CN113298159A (en) | 2021-08-24 |
CN113298159B CN113298159B (en) | 2024-06-28 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020024400A1 (en) * | 2018-08-02 | 2020-02-06 | 平安科技(深圳)有限公司 | Class monitoring method and apparatus, computer device, and storage medium |
CN112183577A (en) * | 2020-08-31 | 2021-01-05 | 华为技术有限公司 | Training method of semi-supervised learning model, image processing method and equipment |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113963322A (en) * | 2021-10-29 | 2022-01-21 | 北京百度网讯科技有限公司 | Detection model training method and device and electronic equipment |
CN113963322B (en) * | 2021-10-29 | 2023-08-25 | 北京百度网讯科技有限公司 | Detection model training method and device and electronic equipment |
CN114511743A (en) * | 2022-01-29 | 2022-05-17 | 北京百度网讯科技有限公司 | Detection model training method, target detection method, device, equipment, medium and product |
CN114519850A (en) * | 2022-04-20 | 2022-05-20 | 宁波博登智能科技有限公司 | Marking system and method for automatic target detection of two-dimensional image |
CN115018852A (en) * | 2022-08-10 | 2022-09-06 | 四川大学 | Abdominal lymph node detection method and device based on semi-supervised learning |
CN115018852B (en) * | 2022-08-10 | 2022-12-06 | 四川大学 | Abdominal lymph node detection method and device based on semi-supervised learning |
CN112580505B (en) | Method and device for identifying network point switch door state, electronic equipment and storage medium | |
CN113298159B (en) | Target detection method, target detection device, electronic equipment and storage medium | |
CN112215336B (en) | Data labeling method, device, equipment and storage medium based on user behaviors | |
CN113298159A (en) | Target detection method and device, electronic equipment and storage medium | |
CN115147660A (en) | Image classification method, device and equipment based on incremental learning and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |