CN113705362A - Training method and device of image detection model, electronic equipment and storage medium - Google Patents

Training method and device of image detection model, electronic equipment and storage medium

Info

Publication number
CN113705362A
CN113705362A (application CN202110888202.4A)
Authority
CN
China
Prior art keywords
detection model
image detection
label
sample
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110888202.4A
Other languages
Chinese (zh)
Other versions
CN113705362B (en)
Inventor
黄泽斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110888202.4A priority Critical patent/CN113705362B/en
Publication of CN113705362A publication Critical patent/CN113705362A/en
Application granted granted Critical
Publication of CN113705362B publication Critical patent/CN113705362B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The disclosure provides a training method and apparatus for an image detection model, an electronic device, and a storage medium, relating to the field of artificial intelligence, in particular to computer vision and deep learning, and applicable to scenes such as face recognition and living body detection. The specific implementation scheme is as follows: acquiring training data, wherein the training data includes a sample image and a label of the sample image; determining a teacher image detection model corresponding to the label of the sample image, the teacher image detection model being obtained by training with training data carrying that label; determining a feature vector of the sample image according to the teacher image detection model; and adjusting coefficients of an initial student image detection model according to the sample image, the label of the sample image, and the feature vector, thereby training the model.

Description

Training method and device of image detection model, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, in particular to computer vision and deep learning technologies applicable to scenes such as face recognition and living body detection, and specifically to a training method and apparatus for an image detection model, an electronic device, and a storage medium.
Background
With the development of image detection technology, image detection models can be applied to living body detection, which automatically judges whether the face in a given image or video comes from a real person on site or from a face-spoofing attack. Living body detection is an important technical means for preventing face attacks and fraud, and is widely used in industries and occasions involving remote identity authentication, such as banking, insurance, internet finance, and electronic commerce.
Disclosure of Invention
The disclosure provides a training method and device for an image detection model, an electronic device and a storage medium.
According to an aspect of the present disclosure, there is provided a training method of an image detection model, including: obtaining training data, wherein the training data comprises: a sample image and a label for the sample image; determining a teacher image detection model corresponding to a label of the sample image; the teacher image detection model is obtained by training with training data with the labels; determining a feature vector of the sample image according to the teacher image detection model; and adjusting coefficients of an initial student image detection model according to the sample image, the label of the sample image and the feature vector to realize training.
According to another aspect of the present disclosure, there is provided a training apparatus for an image detection model, including: an obtaining module, configured to obtain training data, where the training data includes: a sample image and a label for the sample image; the determining module is used for determining a teacher image detection model corresponding to the label of the sample image; the teacher image detection model is obtained by training with training data with the labels; the determining module is further used for determining a feature vector of the sample image according to the teacher image detection model; and the training module is used for adjusting the coefficient of the initial student image detection model according to the sample image, the label of the sample image and the characteristic vector so as to realize training.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of the first aspect of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of an embodiment of the first aspect of the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a training method of an image detection model according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 5 is a schematic diagram according to a fourth embodiment of the present disclosure;
FIG. 6 is a schematic diagram according to a fifth embodiment of the present disclosure;
FIG. 7 is a schematic illustration according to a sixth embodiment of the present disclosure;
FIG. 8 is a block diagram of an electronic device for implementing a method of training an image detection model according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
With the development of image detection technology, image detection models can be applied to living body detection, which automatically judges whether the face in a given image or video comes from a real person on site or from a face-spoofing attack. Living body detection is an important technical means for preventing face attacks and fraud, and is widely used in industries and occasions involving remote identity authentication, such as banking, insurance, internet finance, and electronic commerce.
In the related art, an image detection model for living body detection is trained using only images corresponding to two labels, a real person and an attack. Because the number of images and the learning difficulty are unevenly distributed across labels, it is difficult to achieve high detection accuracy on the images corresponding to each label.
In order to solve the above problems, the present disclosure provides a training method and apparatus for an image detection model, an electronic device, and a storage medium.
Fig. 1 is a schematic diagram according to a first embodiment of the present disclosure. It should be noted that the training method of the image detection model according to the embodiment of the present disclosure may be applied to a training apparatus of the image detection model according to the embodiment of the present disclosure, and the apparatus may be configured in an electronic device. The electronic device may be a mobile terminal, for example, a mobile phone, a tablet computer, a personal digital assistant, and other hardware devices with various operating systems.
As shown in fig. 1, the training method of the image detection model may include the following steps:
step 101, obtaining training data, wherein the training data comprises: a sample image and a label for the sample image.
In the embodiment of the disclosure, the sample image may be acquired by an image acquisition device, and the label of the sample image may be determined according to the characteristics of the sample image; for example, whether the sample image is a real-object image may be used as the label. If the sample image is a real-object image, its label is a positive sample label; if it is a non-real-object image, its label is a negative sample label. The sample image and its label are then used as training data.
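The following is a minimal sketch of how such training data might be organized, assuming a PyTorch-style dataset; the LivenessDataset class, the file-path handling, and the concrete label names are illustrative assumptions rather than part of the disclosure.

```python
# Hypothetical organization of the training data of step 101: each sample
# image is paired with a label, where "live" stands for the positive sample
# label and the remaining names for types of negative sample labels.
from typing import List, Tuple

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

LABELS = {"live": 0, "photo_attack": 1, "video_attack": 2, "mask_attack": 3}

class LivenessDataset(Dataset):
    def __init__(self, items: List[Tuple[str, str]]):
        # items: (image_path, label_name) pairs collected by an image
        # acquisition device and annotated as described above.
        self.items = items
        self.transform = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

    def __len__(self) -> int:
        return len(self.items)

    def __getitem__(self, idx: int):
        path, label_name = self.items[idx]
        image = self.transform(Image.open(path).convert("RGB"))
        return image, LABELS[label_name]
```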
Step 102, determining a teacher image detection model corresponding to the label of the sample image; the teacher image detection model is obtained by training with training data with labels.
In the embodiment of the present disclosure, a teacher image detection model set corresponding to the label of the sample image may be obtained, and a plurality of teacher image detection models in the set may be used as the teacher image detection models corresponding to the label of the sample image. Each teacher image detection model can be obtained by training with the corresponding labeled training data.
And 103, determining the characteristic vector of the sample image according to the teacher image detection model.
In the disclosed embodiment, for each sample image, the sample image may be input into a teacher image detection model, and the feature vector of the sample image may be determined from the output of the teacher image detection model.
And 104, adjusting coefficients of the initial student image detection model according to the sample image, the label of the sample image and the characteristic vector to realize training.
In the embodiment of the disclosure, the sample image may be input into the initial student image detection model, which outputs a predicted feature vector and a predicted label. The predicted feature vector and the predicted label may then be compared with the feature vector and the label of the sample image, respectively, to adjust the coefficients of the initial student image detection model and thereby train it. It should be noted that the teacher image detection model and the student image detection model may be living body detection models, and the trained student image detection model may be applied to scenes such as face recognition and living body detection.
In summary, training data comprising a sample image and its label is obtained; a teacher image detection model corresponding to the label of the sample image is determined, the teacher image detection model having been trained with training data carrying that label; a feature vector of the sample image is determined according to the teacher image detection model; and the coefficients of an initial student image detection model are adjusted according to the sample image, the label of the sample image, and the feature vector. By training a teacher image detection model for the sample images of each label, a plurality of teacher image detection models are obtained, and the initial student image detection model is trained from the feature vectors produced by those teacher models together with the sample images and their labels, which can improve the detection accuracy of the student image detection model on the images corresponding to every label.
In order to accurately determine the teacher image detection model corresponding to the label of the sample image, as shown in fig. 2, fig. 2 is a schematic diagram according to a second embodiment of the present disclosure. In the embodiment of the present disclosure, the teacher image detection model may be trained using training data corresponding to different labels of the sample images to determine the teacher image detection model corresponding to the labels of the sample images. The embodiment shown in fig. 2 may include the following steps:
step 201, obtaining training data, wherein the training data includes: a sample image and a label for the sample image.
Step 202, acquiring a teacher image detection model set; wherein the teacher image detection model set comprises a first teacher image detection model and a plurality of second teacher image detection models; the first teacher image detection model is obtained by training with the training data of the positive sample label and the training data of multiple types of negative sample labels; each second teacher image detection model is obtained by training with the training data of the positive sample label and the training data of one type of negative sample label.
In an embodiment of the present disclosure, the label of the sample image includes a positive sample label and a negative sample label. The training data of the positive sample label may include the positive sample label and the sample images corresponding to it, and the training data of a negative sample label may include that negative sample label and the sample images corresponding to it. The first teacher image detection model may then be trained with the positive sample label and its sample images together with the multiple types of negative sample labels and their sample images, while each second teacher image detection model is trained with the positive sample label and its sample images together with one type of negative sample label and its sample images; the trained first and second teacher image detection models output the label corresponding to an input image. It should be noted that because a second teacher image detection model is trained with the training data of the positive sample label and only one type of negative sample label, it is more prone to overfitting if it has the same number of parameters and/or network layers as the first teacher image detection model. Therefore, to improve the detection accuracy of the second teacher image detection model on images of its single label type, the number of parameters of the first teacher image detection model is greater than that of the second teacher image detection model, and/or the number of network layers of the first teacher image detection model is greater than that of the second teacher image detection model.
It should be noted that the positive sample label indicates that the sample image is a real-object image, and a negative sample label indicates that the sample image is a non-real-object image, where the negative sample labels include at least one of the following: a photo attack label, a video attack label, and a mask attack label. A photo attack label indicates that the sample image was obtained by photographing a printed photo, a video attack label indicates that the sample image was obtained by photographing a replayed video, and a mask attack label indicates that the sample image was obtained by photographing a mask.
Step 203, the first teacher image detection model and the second teacher image detection model whose training data includes the label are used as the teacher image detection models corresponding to the label.
It can be understood that the first teacher image detection model, trained with the training data of the positive sample label and of multiple types of negative sample labels, performs well overall on the images corresponding to the various negative sample labels. However, because the number of images and the learning difficulty differ across negative sample label types during its training, it is difficult for the first teacher image detection model alone to achieve a good detection result on the images of every label type. Therefore, to achieve a good detection result on each label type, the first teacher image detection model and the second teacher image detection model whose training data includes the label may together serve as the teacher image detection models corresponding to that label.
It should be noted that different labels of the sample image correspond to different teacher image detection models. For example, if the label of the sample image is the photo attack label among the negative sample labels, the corresponding teacher image detection models are the first teacher image detection model and the second teacher image detection model corresponding to the photo attack label; if the label is the video attack label, they are the first teacher image detection model and the second teacher image detection model corresponding to the video attack label; and if the label is the mask attack label, they are the first teacher image detection model and the second teacher image detection model corresponding to the mask attack label.
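As a rough illustration of steps 202-203, the sketch below builds such a teacher model set and selects the teachers for a given label; the choice of torchvision ResNet-50 for the larger first teacher and ResNet-18 for the smaller second teachers, and the build_teacher_set/teachers_for_label helpers, are illustrative assumptions rather than part of the disclosure.

```python
# Hypothetical construction of the teacher image detection model set of
# step 202 and the per-label teacher selection of step 203.
from torchvision import models

NEGATIVE_LABELS = ["photo_attack", "video_attack", "mask_attack"]

def build_teacher_set():
    # First teacher: larger backbone, trained on the positive label plus
    # every type of negative label (here a 4-way classifier).
    first_teacher = models.resnet50(num_classes=1 + len(NEGATIVE_LABELS))
    # Second teachers: smaller backbones, one per negative label type, each
    # trained on the positive label plus that single negative label type.
    second_teachers = {
        label: models.resnet18(num_classes=2) for label in NEGATIVE_LABELS
    }
    return first_teacher, second_teachers

def teachers_for_label(label, first_teacher, second_teachers):
    # Step 203: for a negative label, the teachers are the first teacher and
    # the second teacher whose training data contained that label.
    if label in second_teachers:
        return [first_teacher, second_teachers[label]]
    # For positive ("live") samples every second teacher saw the label during
    # training; using only the first teacher here is one reasonable reading.
    return [first_teacher]
```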
For example, as shown in fig. 3, living body model 1 may be the first teacher image detection model, and living body models 2 to n may be different second teacher image detection models. Combining living body model 1 with a different second teacher image detection model distills a new living body model: distilling with living body model 1 and living body model 2 improves the detection accuracy of the new model on images corresponding to the photo attack label; distilling with living body model 1 and living body model 3 improves its accuracy on images corresponding to the video attack label; and, similarly, distilling with living body model 1 and living body model 4 improves its accuracy on images corresponding to the mask attack label.
And step 204, determining the characteristic vector of the sample image according to the teacher image detection model.
Step 205, performing coefficient adjustment on the initial student image detection model according to the sample image, the label of the sample image and the feature vector to realize training.
In the embodiment of the present disclosure, steps 201, 204, and 205 may be implemented by any of the methods in the embodiments of the present disclosure; they are not limited here and are not described again.
In summary, a teacher image detection model set is obtained, comprising a first teacher image detection model and a plurality of second teacher image detection models, where the first teacher image detection model is trained with the training data of the positive sample label and of multiple types of negative sample labels, and each second teacher image detection model is trained with the training data of the positive sample label and of one type of negative sample label. Using the first teacher image detection model together with the second teacher image detection model whose training data includes the label as the teacher image detection models for that label improves the detection accuracy of the teacher models on the images corresponding to each label, and thereby improves the detection accuracy of the student image detection model on the images corresponding to each label.
In order to enable the student image detection model to better learn the features of the teacher image detection model, as shown in fig. 4 (a schematic diagram according to a third embodiment of the present disclosure), the sample image may be input into the teacher image detection model and the feature vector output by a network layer other than the fully connected layer may be obtained, so that the feature vector does not lose its knowledge characteristics by passing through the fully connected layer, which outputs the label corresponding to the image. The embodiment illustrated in fig. 4 may include the following steps:
step 401, obtaining training data, wherein the training data includes: a sample image and a label for the sample image.
Step 402, determining a teacher image detection model corresponding to the label of the sample image; the teacher image detection model is obtained by training with training data with labels.
Step 403, inputting the sample image into the teacher image detection model, and obtaining an output vector of a target network layer in the teacher image detection model; the target network layer is a network layer in the teacher image detection model other than the fully connected layer.
In the embodiment of the disclosure, the sample image is input into the teacher image detection model, which outputs the label of the sample image through the target network layer followed by the fully connected layer. The output vector of the target network layer becomes the label of the sample image only after being processed by the fully connected layer, and that processing discards the corresponding knowledge characteristics. Therefore, in order for the student image detection model to better learn the features of the teacher image detection model, the output vector of the target network layer in the teacher image detection model is obtained. It should be noted that the target network layer may be any network layer of the teacher image detection model other than the fully connected layer.
And step 404, taking the output vector as a feature vector.
The output vector output by the target network layer may then be taken as a feature vector. It should be noted that the number of feature vectors of the sample image is at least one.
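A minimal sketch of steps 403-404 is shown below, assuming a torchvision ResNet as the teacher model; the use of the avgpool layer as the target network layer and the extract_feature_vector helper are illustrative assumptions, since the disclosure only requires a network layer other than the fully connected layer.

```python
# Hypothetical extraction of a feature vector from a teacher model's target
# network layer (a layer other than the fully connected layer), using a
# forward hook on the global-pooling layer of a torchvision ResNet.
import torch

def extract_feature_vector(teacher, image_batch):
    captured = {}

    def hook(module, inputs, output):
        # Flatten the pooled feature map into one feature vector per image.
        captured["features"] = torch.flatten(output, start_dim=1)

    handle = teacher.avgpool.register_forward_hook(hook)  # target network layer
    teacher.eval()
    with torch.no_grad():
        teacher(image_batch)   # the label output of the FC head is ignored here
    handle.remove()
    return captured["features"]  # shape: (batch_size, feature_dim)
```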
And 405, performing coefficient adjustment on the initial student image detection model according to the sample image, the label of the sample image and the feature vector to realize training.
In the embodiment of the present disclosure, steps 401, 402, and 405 may be implemented by any of the methods in the embodiments of the present disclosure; they are not limited here and are not described again.
In summary, the sample image is input into the teacher image detection model, the output vector of the target network layer (a network layer other than the fully connected layer) is obtained, and this output vector is taken as a feature vector, the number of feature vectors of the sample image being at least one. In this way, the student image detection model can better learn the features of the teacher image detection model, which improves its detection accuracy on the images corresponding to each label.
In order to improve the detection accuracy and ease of deployment of the student image detection model, the student image detection model may learn the features of the teacher image detection model. As shown in fig. 5, a schematic diagram according to a fourth embodiment of the present disclosure, as one example the coefficients of the initial student image detection model may be adjusted according to the sample image, the label of the sample image, and each feature vector. The embodiment shown in fig. 5 may include the following steps:
step 501, obtaining training data, wherein the training data includes: a sample image and a label for the sample image.
Step 502, determining a teacher image detection model corresponding to the label of the sample image; the teacher image detection model is obtained by training with training data with labels.
And step 503, determining the feature vector of the sample image according to the teacher image detection model.
In the embodiment of the present disclosure, the steps 501-503 may be implemented by any method in various embodiments of the present disclosure, which is not limited by the embodiment of the present disclosure and will not be described again.
Step 504, aiming at each feature vector of the sample image, performing coefficient adjustment on the initial student image detection model according to the sample image, the feature vector and the label of the sample image.
Optionally, inputting the sample image into a student image detection model, and obtaining a prediction characteristic vector and a prediction label of the sample image; determining a first sub-loss function value according to the predicted feature vector, the feature vector and the first sub-loss function; determining a second sub-loss function value according to the prediction label, the label of the sample image and the second sub-loss function; determining a loss function value according to the first sub-loss function value, the second sub-loss function value, the weight of the first sub-loss function and the weight of the second sub-loss function; and adjusting the coefficient of the initial student image detection model according to the loss function value.
That is, the sample image has at least one feature vector. For each feature vector of the sample image, the corresponding sample image may be input into the student image detection model, which outputs a predicted feature vector and a predicted label of the sample image. A first sub-loss function value is then determined from the predicted feature vector, the feature vector of the sample image, and a preset first sub-loss function; a second sub-loss function value is determined from the predicted label, the label of the sample image, and a preset second sub-loss function; and the first and second sub-loss function values are combined with their respective weights to determine the loss function value. The coefficients of the initial student image detection model are adjusted according to the loss function value; for example, when the loss function value is minimized, the corresponding coefficients of the student image detection model are taken as the coefficients of the trained student image detection model.
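The sketch below shows one way such a training step could look, assuming mean squared error as the first sub-loss function, cross-entropy as the second sub-loss function, and a student model that returns both a feature vector and label logits; these concrete choices and the distillation_step helper are illustrative assumptions rather than part of the disclosure.

```python
# Hypothetical single training step implementing the loss described above:
# an MSE first sub-loss between the student's predicted feature vector and
# the teacher feature vector, a cross-entropy second sub-loss between the
# predicted label and the sample label, and a weighted sum of the two.
import torch.nn.functional as F

def distillation_step(student, optimizer, images, labels, teacher_features,
                      feature_weight=1.0, label_weight=1.0):
    student.train()
    # The student is assumed to return both a feature vector and label logits.
    pred_features, pred_logits = student(images)

    first_sub_loss = F.mse_loss(pred_features, teacher_features)  # feature distillation
    second_sub_loss = F.cross_entropy(pred_logits, labels)        # label supervision
    loss = feature_weight * first_sub_loss + label_weight * second_sub_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # coefficient adjustment of the student image detection model
    return loss.item()
```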
In summary, by adjusting the coefficients of the initial student image detection model according to the sample image, each feature vector of the sample image, and the label of the sample image, the student image detection model can learn the features of the teacher image detection model, improving its detection accuracy and ease of deployment.
In order to improve the detection accuracy and ease of deployment of the student image detection model, the student image detection model may learn the features of the teacher image detection model. As shown in fig. 6, a schematic diagram according to a fifth embodiment of the present disclosure, as another example the at least one feature vector of the sample image may be spliced or weighted-summed to obtain a processed feature vector, and the coefficients of the initial student image detection model are adjusted according to the sample image, the label of the sample image, and the processed feature vector. The embodiment shown in fig. 6 includes the following steps:
step 601, acquiring training data, wherein the training data comprises: a sample image and a label for the sample image.
Step 602, determining a teacher image detection model corresponding to the label of the sample image; the teacher image detection model is obtained by training with training data with labels.
Step 603, determining the feature vector of the sample image according to the teacher image detection model.
In the embodiment of the present disclosure, steps 601-603 may be implemented by any of the methods in the embodiments of the present disclosure; they are not limited here and are not described again.
Step 604, performing splicing or weighted summation processing on at least one feature vector of the sample image to obtain a processed feature vector; and adjusting coefficients of the initial student image detection model according to the sample image, the label of the sample image and the processed characteristic vector.
Optionally, inputting the sample image into a student image detection model, and obtaining a prediction characteristic vector and a prediction label of the sample image; determining a first sub-loss function value according to the predicted feature vector, the processed feature vector and the first sub-loss function; determining a second sub-loss function value according to the prediction label, the label of the sample image and the second sub-loss function; determining a loss function value according to the first sub-loss function value, the second sub-loss function value, the weight of the first sub-loss function and the weight of the second sub-loss function; and adjusting the coefficient of the initial student image detection model according to the loss function value.
That is, the at least one feature vector of the sample image is spliced or weighted-summed to obtain a processed feature vector. The sample image is then input into the student image detection model, which outputs a predicted feature vector and a predicted label of the sample image. A first sub-loss function value is determined from the predicted feature vector, the processed feature vector, and a preset first sub-loss function; a second sub-loss function value is determined from the predicted label, the label of the sample image, and a preset second sub-loss function; and the first and second sub-loss function values are combined with their respective weights to determine the loss function value. The coefficients of the initial student image detection model are adjusted according to the loss function value; for example, when the loss function value is minimized, the corresponding coefficients of the student image detection model are taken as the coefficients of the trained student image detection model.
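A minimal sketch of the two aggregation options (splicing, i.e. concatenation, or weighted summation) is shown below; the aggregate_teacher_features helper and the assumption that the teacher feature vectors share the same dimensionality for the weighted sum are illustrative, not part of the disclosure.

```python
# Hypothetical aggregation of at least one teacher feature vector into the
# processed feature vector used for the first sub-loss.
from typing import List, Optional
import torch

def aggregate_teacher_features(feature_vectors: List[torch.Tensor],
                               mode: str = "concat",
                               weights: Optional[List[float]] = None) -> torch.Tensor:
    if mode == "concat":
        # Splicing: concatenate along the feature dimension.
        return torch.cat(feature_vectors, dim=1)
    if mode == "weighted_sum":
        # Weighted summation: assumes equal feature dimensions across teachers.
        if weights is None:
            weights = [1.0 / len(feature_vectors)] * len(feature_vectors)
        return torch.stack([w * f for w, f in zip(weights, feature_vectors)]).sum(dim=0)
    raise ValueError(f"unknown aggregation mode: {mode}")
```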
In summary, by splicing or weighted-summing the at least one feature vector of the sample image to obtain a processed feature vector, and adjusting the coefficients of the initial student image detection model according to the sample image, the label of the sample image, and the processed feature vector, the student image detection model can learn the features of the teacher image detection model, improving its detection accuracy on the images corresponding to each label and its ease of deployment.
The training method of the image detection model according to the embodiment of the present disclosure obtains training data comprising a sample image and its label; determines a teacher image detection model corresponding to the label of the sample image, the teacher image detection model having been trained with training data carrying that label; determines a feature vector of the sample image according to the teacher image detection model; and adjusts the coefficients of an initial student image detection model according to the sample image, the label of the sample image, and the feature vector. By training a teacher image detection model for the sample images of each label, a plurality of teacher image detection models are obtained, and the initial student image detection model is trained from the feature vectors produced by those teacher models together with the sample images and their labels, which can improve the detection accuracy of the student image detection model on the images corresponding to every label.
In order to implement the above embodiments, the embodiments of the present disclosure further provide a training device for an image detection model.
Fig. 7 is a schematic diagram of a sixth embodiment of the present disclosure, and as shown in fig. 7, an apparatus 700 for training an image detection model includes: an acquisition module 710, a determination module 720, and a training module 730.
The obtaining module 710 is configured to obtain training data, where the training data includes: a sample image and a label for the sample image; a determining module 720, configured to determine a teacher image detection model corresponding to the label of the sample image; the teacher image detection model is obtained by training with training data with labels; the determining module 720 is further configured to determine a feature vector of the sample image according to the teacher image detection model; and the training module 730 is configured to perform coefficient adjustment on the initial student image detection model according to the sample image, the label of the sample image, and the feature vector, so as to implement training.
As a possible implementation of the embodiments of the present disclosure, the label includes a positive sample label and multiple types of negative sample labels; the determining module 720 is specifically configured to: acquire a teacher image detection model set, wherein the teacher image detection model set comprises a first teacher image detection model and a plurality of second teacher image detection models, the first teacher image detection model being obtained by training with the training data of the positive sample label and the training data of multiple types of negative sample labels, and each second teacher image detection model being obtained by training with the training data of the positive sample label and the training data of one type of negative sample label; and take the first teacher image detection model and the second teacher image detection model whose training data includes the label as the teacher image detection models corresponding to the label.
As a possible implementation manner of the embodiment of the present disclosure, the parameter amount of the first teacher image detection model is greater than the parameter amount of the second teacher image detection model; and/or the number of network layers of the first teacher image detection model is greater than that of the second teacher image detection model.
As a possible implementation manner of the embodiment of the present disclosure, the determining module 720 is further configured to: inputting the sample image into a teacher image detection model, and acquiring an output vector of a target network layer in the teacher image detection model; the target network layer is a network layer except the full connection layer in the teacher image detection model; the output vector is taken as a feature vector.
As a possible implementation manner of the embodiment of the present disclosure, the number of feature vectors of a sample image is at least one; the training module 730 is specifically configured to: for each feature vector of the sample image, performing coefficient adjustment on an initial student image detection model according to the sample image, the feature vector and a label of the sample image; or, performing splicing or weighted summation processing on at least one feature vector of the sample image to obtain a processed feature vector; and adjusting coefficients of the initial student image detection model according to the sample image, the label of the sample image and the processed characteristic vector.
As a possible implementation manner of the embodiment of the present disclosure, the training module 730 is further configured to: inputting a sample image into a student image detection model, and obtaining a prediction characteristic vector and a prediction label of the sample image; determining a first sub-loss function value according to the predicted feature vector, the feature vector and the first sub-loss function; determining a second sub-loss function value according to the prediction label, the label of the sample image and the second sub-loss function; determining a loss function value according to the first sub-loss function value, the second sub-loss function value, the weight of the first sub-loss function and the weight of the second sub-loss function; and adjusting the coefficient of the initial student image detection model according to the loss function value.
As a possible implementation manner of the embodiment of the present disclosure, the teacher image detection model and the student image detection model are living body detection models; the positive sample label represents that the sample image is a real object image; a negative sample label represents that the sample image is a non-real object image; and the negative sample labels include at least one of the following: a photo attack label, a video attack label, a mask attack label.
With the training apparatus for the image detection model of the embodiment of the present disclosure, training data comprising a sample image and its label is obtained; a teacher image detection model corresponding to the label of the sample image is determined, the teacher image detection model having been trained with training data carrying that label; a feature vector of the sample image is determined according to the teacher image detection model; and the coefficients of an initial student image detection model are adjusted according to the sample image, the label of the sample image, and the feature vector. The apparatus can train a teacher image detection model for the sample images of each label to obtain a plurality of teacher image detection models, and train the initial student image detection model from the feature vectors produced by those teacher models together with the sample images and their labels, which can improve the detection accuracy of the student image detection model on the images corresponding to each label.
In the technical scheme of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the personal information of the users involved are carried out with the consent of the users, comply with relevant laws and regulations, and do not violate public order and good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 801 executes the respective methods and processes described above, such as the training method of the image detection model. For example, in some embodiments, the training method of the image detection model may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program can be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the method of training an image detection model described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the training method of the image detection model in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described here may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on chip (SoCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein. The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (17)

1. A training method of an image detection model comprises the following steps:
obtaining training data, wherein the training data comprises: a sample image and a label for the sample image;
determining a teacher image detection model corresponding to a label of the sample image; the teacher image detection model is obtained by training with training data with the labels;
determining a feature vector of the sample image according to the teacher image detection model;
and adjusting coefficients of an initial student image detection model according to the sample image, the label of the sample image and the feature vector to realize training.
2. The method of claim 1, wherein the label comprises: a positive sample label and multiple types of negative sample labels;
the determining a teacher image detection model corresponding to a label of the sample image includes:
acquiring a teacher image detection model set; wherein, the teacher image detection model set comprises: a first teacher image detection model and a plurality of second teacher image detection models; the first teacher image detection model is obtained by training the training data of the positive sample label and the training data of the multiple types of negative sample labels; the second teacher image detection model is obtained by training the training data of the positive sample label and the training data of the negative sample label of one type;
and taking the first teacher image detection model and a second teacher image detection model with the corresponding training data including the label as teacher image detection models corresponding to the label.
3. The method of claim 2, wherein the first teacher image detection model has a parameter quantity greater than a parameter quantity of the second teacher image detection model; and/or the number of network layers of the first teacher image detection model is greater than that of the second teacher image detection model.
4. The method of claim 1, wherein the determining feature vectors for the sample image from the teacher image detection model comprises:
inputting the sample image into the teacher image detection model, and acquiring an output vector of a target network layer in the teacher image detection model; the target network layer is a network layer except the full connection layer in the teacher image detection model;
and taking the output vector as the feature vector.
5. The method of claim 1, wherein the number of feature vectors of the sample image is at least one;
the adjusting the coefficient of the initial student image detection model according to the sample image, the label of the sample image and the feature vector comprises:
for each feature vector of the sample image, performing coefficient adjustment on an initial student image detection model according to the sample image, the feature vector and a label of the sample image;
or,
splicing or weighting and summing at least one feature vector of the sample image to obtain a processed feature vector; and adjusting coefficients of an initial student image detection model according to the sample image, the label of the sample image and the processed feature vector.
6. The method of claim 1, wherein the coefficient adjustment of an initial student image detection model based on the sample image, the label of the sample image, and the feature vector comprises:
inputting the sample image into the student image detection model, and obtaining a prediction characteristic vector and a prediction label of the sample image;
determining a first sub-loss function value according to the predicted feature vector, the feature vector and a first sub-loss function;
determining a second sub-loss function value according to the prediction label, the label of the sample image and a second sub-loss function;
determining a loss function value according to the first sub-loss function value, the second sub-loss function value, the weight of the first sub-loss function and the weight of the second sub-loss function;
and adjusting the coefficient of the initial student image detection model according to the loss function value.
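Claim 6 amounts to a weighted sum of a feature term and a label term; the sketch below picks mean-squared error and cross-entropy as plausible sub-loss functions, which the patent text does not specify:

```python
# Sketch of the claim-6 loss: weighted combination of two sub-loss values.
import torch.nn.functional as F

def distillation_loss(pred_feat, teacher_feat, pred_logits, label, w_feat=0.5, w_label=0.5):
    first_sub_loss = F.mse_loss(pred_feat, teacher_feat)    # predicted vs. teacher feature vector
    second_sub_loss = F.cross_entropy(pred_logits, label)   # predicted label vs. sample label
    return w_feat * first_sub_loss + w_label * second_sub_loss
```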
7. The method of claim 2, wherein the teacher image detection model and the student image detection model are liveness detection models;
the positive sample label represents that the sample image is a real object image;
the negative sample label represents that the sample image is a non-real object image; the negative examples label includes at least one of the following labels: photo attack tags, video attack tags, mask attack tags.
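For the liveness-detection case of claim 7, a label scheme could look like the following; the numeric ids are arbitrary assumptions and only the label names come from the claim:

```python
# Illustrative label mapping for liveness detection (ids are assumptions).
LIVENESS_LABELS = {
    0: "real",           # positive sample: image of a real, live subject
    1: "photo_attack",   # negative sample: presented photo
    2: "video_attack",   # negative sample: replayed video
    3: "mask_attack",    # negative sample: mask
}
```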
8. An apparatus for training an image detection model, comprising:
an obtaining module, configured to obtain training data, where the training data includes: a sample image and a label for the sample image;
a determining module, configured to determine a teacher image detection model corresponding to the label of the sample image; wherein the teacher image detection model is obtained by training with training data carrying the label;
the determining module is further configured to determine a feature vector of the sample image according to the teacher image detection model;
and a training module, configured to adjust coefficients of an initial student image detection model according to the sample image, the label of the sample image and the feature vector, so as to realize training.
9. The apparatus of claim 8, wherein the label comprises: a positive sample label and multiple types of negative sample labels;
the determining module is specifically configured to:
acquiring a teacher image detection model set; wherein, the teacher image detection model set comprises: a first teacher image detection model and a plurality of second teacher image detection models; the first teacher image detection model is obtained by training the training data of the positive sample label and the training data of the multiple types of negative sample labels; the second teacher image detection model is obtained by training the training data of the positive sample label and the training data of the negative sample label of one type;
and taking the first teacher image detection model, and the second teacher image detection model whose corresponding training data includes the label, as the teacher image detection models corresponding to the label.
10. The apparatus of claim 9, wherein the parameter quantity of the first teacher image detection model is greater than the parameter quantity of the second teacher image detection model; and/or the number of network layers of the first teacher image detection model is greater than that of the second teacher image detection model.
11. The apparatus of claim 8, wherein the means for determining is further configured to:
inputting the sample image into the teacher image detection model, and acquiring an output vector of a target network layer in the teacher image detection model; wherein the target network layer is a network layer other than the fully connected layer in the teacher image detection model;
and taking the output vector as the feature vector.
12. The apparatus of claim 8, wherein the number of feature vectors of the sample image is at least one;
the training module is specifically configured to:
for each feature vector of the sample image, performing coefficient adjustment on an initial student image detection model according to the sample image, the feature vector and a label of the sample image;
alternatively,
concatenating or weighted-summing the at least one feature vector of the sample image to obtain a processed feature vector; and adjusting coefficients of an initial student image detection model according to the sample image, the label of the sample image and the processed feature vector.
13. The apparatus of claim 8, wherein the training module is further configured to:
inputting the sample image into the initial student image detection model, and obtaining a predicted feature vector and a predicted label of the sample image;
determining a first sub-loss function value according to the predicted feature vector, the feature vector and a first sub-loss function;
determining a second sub-loss function value according to the predicted label, the label of the sample image and a second sub-loss function;
determining a loss function value according to the first sub-loss function value, the second sub-loss function value, the weight of the first sub-loss function and the weight of the second sub-loss function;
and adjusting the coefficient of the initial student image detection model according to the loss function value.
14. The apparatus according to claim 9, wherein the teacher image detection model and the student image detection model are liveness detection models;
the positive sample label represents that the sample image is a real object image;
the negative sample label represents that the sample image is a non-real object image; the negative examples label includes at least one of the following labels: photo attack tags, video attack tags, mask attack tags.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method of any one of claims 1-7.
CN202110888202.4A 2021-08-03 2021-08-03 Training method and device of image detection model, electronic equipment and storage medium Active CN113705362B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110888202.4A CN113705362B (en) 2021-08-03 2021-08-03 Training method and device of image detection model, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110888202.4A CN113705362B (en) 2021-08-03 2021-08-03 Training method and device of image detection model, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113705362A true CN113705362A (en) 2021-11-26
CN113705362B CN113705362B (en) 2023-10-20

Family

ID=78651365

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110888202.4A Active CN113705362B (en) 2021-08-03 2021-08-03 Training method and device of image detection model, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113705362B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3125156A1 (en) * 2015-07-31 2017-02-01 Xiaomi Inc. Method, apparatus and server for image scene determination
WO2019114580A1 (en) * 2017-12-13 2019-06-20 深圳励飞科技有限公司 Living body detection method, computer apparatus and computer-readable storage medium
US20200380300A1 (en) * 2019-05-30 2020-12-03 Baidu Usa Llc Systems and methods for adversarially robust object detection
WO2021114974A1 (en) * 2019-12-14 2021-06-17 支付宝(杭州)信息技术有限公司 User risk assessment method and apparatus, electronic device, and storage medium
CN111027060A (en) * 2019-12-17 2020-04-17 电子科技大学 Knowledge distillation-based neural network black box attack type defense method
CN111639710A (en) * 2020-05-29 2020-09-08 北京百度网讯科技有限公司 Image recognition model training method, device, equipment and storage medium
CN112036331A (en) * 2020-09-03 2020-12-04 腾讯科技(深圳)有限公司 Training method, device and equipment of living body detection model and storage medium
CN112418268A (en) * 2020-10-22 2021-02-26 北京迈格威科技有限公司 Target detection method and device and electronic equipment
CN112598643A (en) * 2020-12-22 2021-04-02 百度在线网络技术(北京)有限公司 Depth counterfeit image detection and model training method, device, equipment and medium
CN113033465A (en) * 2021-04-13 2021-06-25 北京百度网讯科技有限公司 Living body detection model training method, device, equipment and storage medium
CN113052144A (en) * 2021-04-30 2021-06-29 平安科技(深圳)有限公司 Training method, device and equipment of living human face detection model and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
胡永健 (HU Yongjian) et al.: "Recent Advances and Typical Methods in Face Spoofing Detection", Signal Processing (《信号处理》), vol. 37, no. 12, pages 2261-2277 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023169334A1 (en) * 2022-03-09 2023-09-14 北京字跳网络技术有限公司 Semantic segmentation method and apparatus for image, and electronic device and storage medium
WO2024036847A1 (en) * 2022-08-16 2024-02-22 北京百度网讯科技有限公司 Image processing method and apparatus, and electronic device and storage medium
CN115578614A (en) * 2022-10-21 2023-01-06 北京百度网讯科技有限公司 Training method of image processing model, image processing method and device
CN115578614B (en) * 2022-10-21 2024-03-12 北京百度网讯科技有限公司 Training method of image processing model, image processing method and device

Also Published As

Publication number Publication date
CN113705362B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
CN112561077B (en) Training method and device of multi-task model and electronic equipment
CN113705362B (en) Training method and device of image detection model, electronic equipment and storage medium
CN114186564B (en) Pre-training method and device for semantic representation model and electronic equipment
CN113065614B (en) Training method of classification model and method for classifying target object
CN113971751A (en) Training feature extraction model, and method and device for detecting similar images
CN112580733B (en) Classification model training method, device, equipment and storage medium
CN114494784A (en) Deep learning model training method, image processing method and object recognition method
CN114187459A (en) Training method and device of target detection model, electronic equipment and storage medium
CN112949767A (en) Sample image increment, image detection model training and image detection method
CN112861885A (en) Image recognition method and device, electronic equipment and storage medium
CN113344862A (en) Defect detection method, defect detection device, electronic equipment and storage medium
CN113627536A (en) Model training method, video classification method, device, equipment and storage medium
CN113436100A (en) Method, apparatus, device, medium and product for repairing video
CN115359308B (en) Model training method, device, equipment, storage medium and program for identifying difficult cases
CN113205041A (en) Structured information extraction method, device, equipment and storage medium
CN112580666A (en) Image feature extraction method, training method, device, electronic equipment and medium
CN113627361B (en) Training method and device for face recognition model and computer program product
CN114120454A (en) Training method and device of living body detection model, electronic equipment and storage medium
CN114494747A (en) Model training method, image processing method, device, electronic device and medium
CN114494782B (en) Image processing method, model training method, related device and electronic equipment
CN116052288A (en) Living body detection model training method, living body detection device and electronic equipment
CN114093006A (en) Training method, device and equipment of living human face detection model and storage medium
CN113850072A (en) Text emotion analysis method, emotion analysis model training method, device, equipment and medium
CN112749978A (en) Detection method, apparatus, device, storage medium, and program product
CN114844889B (en) Video processing model updating method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant