WO2020098074A1 - Method, apparatus, computer device and storage medium for labeling face sample pictures - Google Patents

Method, apparatus, computer device and storage medium for labeling face sample pictures

Info

Publication number
WO2020098074A1
Authority
WO
WIPO (PCT)
Prior art keywords
picture
marked
preset
emotion
error
Prior art date
Application number
PCT/CN2018/122728
Other languages
English (en)
French (fr)
Inventor
盛建达 (SHENG, Jianda)
Original Assignee
平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology (Shenzhen) Co., Ltd. (平安科技(深圳)有限公司)
Publication of WO2020098074A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present application relates to the field of biometric recognition technology, and in particular to a method, apparatus, computer device and storage medium for labeling face sample pictures.
  • Facial expression recognition is an important research direction in the field of artificial intelligence.
  • research on facial emotion recognition requires a large number of facial emotion samples to support the training of emotion recognition models; performing deep learning on a large number of facial emotion samples helps to improve the accuracy and robustness of the emotion recognition models.
  • embodiments of the present application provide a method, apparatus, computer device and storage medium for labeling face sample pictures, to solve the problem of low efficiency in labeling facial emotion sample pictures.
  • a method for labeling face sample pictures, including:
  • obtaining face pictures in a preset data set to be marked as pictures to be marked;
  • recognizing the pictures to be marked using N preset emotion recognition models to obtain recognition results of the pictures to be marked, where N is a positive integer and a recognition result includes the emotional states predicted by the N emotion recognition models and the predicted scores corresponding to the N emotional states;
  • for the recognition result of each picture to be marked, if at least two different emotional states exist among the emotional states predicted by the N emotion recognition models, identifying the picture to be marked as an error picture and outputting an error data set containing the error pictures to a client;
  • for the recognition result of each picture to be marked, if the emotional states predicted by the N emotion recognition models are the same and the predicted scores corresponding to the N emotional states are all greater than a preset sample threshold, using the emotional state and the mean of the N predicted scores as the annotation information of the picture to be marked, and marking the annotation information into the corresponding picture as a first standard sample;
  • receiving the annotated error data set sent by the client, using the error pictures in the annotated error data set as second standard samples, and saving the first standard samples and the second standard samples into a preset standard sample library;
  • training the N preset emotion recognition models respectively with the first standard samples and the second standard samples, to update the N preset emotion recognition models;
  • using the face pictures in the data set to be marked other than the first standard samples and the second standard samples as new pictures to be marked, and continuing to execute the step of recognizing the pictures to be marked using the N preset emotion recognition models to obtain their recognition results, until the error data set is empty.
  • an apparatus for labeling face sample pictures, including:
  • a picture acquisition module, used to obtain face pictures in a preset data set to be marked as pictures to be marked;
  • a picture recognition module, used to recognize the pictures to be marked using N preset emotion recognition models to obtain recognition results of the pictures to be marked, where N is a positive integer and a recognition result includes the emotional states predicted by the N emotion recognition models and the predicted scores corresponding to the N emotional states;
  • a data output module, used to, for the recognition result of each picture to be marked, identify the picture to be marked as an error picture if at least two different emotional states exist among the emotional states predicted by the N emotion recognition models, and output an error data set containing the error pictures to a client;
  • a picture annotation module, used to, for the recognition result of each picture to be marked, if the emotional states predicted by the N emotion recognition models are the same and the predicted scores corresponding to the N emotional states are all greater than a preset sample threshold, use the emotional state and the mean of the N predicted scores as the annotation information of the picture to be marked, and mark the annotation information into the corresponding picture as a first standard sample;
  • a sample storage module, configured to receive the annotated error data set sent by the client, use the error pictures in the annotated error data set as second standard samples, and save the first standard samples and the second standard samples into a preset standard sample library;
  • a model update module, configured to train the N preset emotion recognition models respectively with the first standard samples and the second standard samples, to update the N preset emotion recognition models;
  • a loop execution module, configured to use the face pictures in the data set to be marked other than the first standard samples and the second standard samples as new pictures to be marked, and continue to execute the step of recognizing the pictures to be marked using the N preset emotion recognition models to obtain their recognition results, until the error data set is empty.
  • a computer device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor; when executing the computer-readable instructions, the processor implements the steps of the above method for labeling face sample pictures.
  • one or more non-volatile readable storage media storing computer-readable instructions, such that when the computer-readable instructions are executed by one or more processors, the one or more processors implement the steps of the above method for labeling face sample pictures.
  • FIG. 1 is a schematic diagram of an application environment of the method for labeling face sample pictures in an embodiment of the present application;
  • FIG. 2 is a flowchart of the method for labeling face sample pictures in an embodiment of the present application;
  • FIG. 3 is a specific flowchart of generating the data set to be marked in the method for labeling face sample pictures in an embodiment of the present application;
  • FIG. 4 is a specific flowchart of constructing the emotion recognition models in the method for labeling face sample pictures in an embodiment of the present application;
  • FIG. 5 is a specific flowchart of step S20 in FIG. 2;
  • FIG. 6 is a specific flowchart of step S30 in FIG. 2;
  • FIG. 7 is a schematic block diagram of the apparatus for labeling face sample pictures in an embodiment of the present application;
  • FIG. 8 is a schematic diagram of a computer device in an embodiment of the present application.
  • the method for labeling face sample pictures provided in the embodiments of the present application can be applied in the application environment shown in FIG. 1, which includes a server and a client connected through a network; the server recognizes and annotates face pictures and outputs incorrectly recognized pictures to the client, the user annotates those pictures on the client, and the server stores the annotated data obtained from the client, together with the correctly recognized data, in a standard sample library.
  • the client may specifically be, but not limited to, various personal computers, notebook computers, smart phones, tablets, and portable wearable devices.
  • the server may be implemented by an independent server or a server cluster composed of multiple servers.
  • the embodiment of the present application provides a method for labeling face sample pictures, which is applied to the server.
  • FIG. 2 shows a flowchart of a method for labeling a face sample picture in this embodiment.
  • the method is applied to the server in FIG. 1 and is used to recognize and mark face pictures.
  • the face sample image labeling method includes steps S10 to S70, which are described in detail as follows:
  • S10 Acquire a face image in a preset data set to be marked as a picture to be marked.
  • the preset data set to be marked is a storage space set in advance for storing collected face pictures; the face pictures can be crawled from public data sets on the network or captured as frames containing a face from public videos.
  • the specific way of obtaining face pictures can be set according to the actual situation and is not restricted here.
  • the server acquires the face image from the preset data set to be annotated as the image to be annotated, and the image to be annotated needs to be annotated in order to be used for training and testing of the machine learning model.
  • S20: Use the N preset emotion recognition models to recognize the picture to be marked to obtain the recognition result of the picture to be marked, where N is a positive integer and the recognition result includes the emotional states predicted by the N emotion recognition models and the predicted scores corresponding to the N emotional states.
  • the preset emotion recognition model is a pre-trained model for identifying the emotional state of the face in the face picture to be recognized.
  • there are N preset emotion recognition models, where N is a positive integer; N may be 1 or 2, for example, and can be set according to the needs of the actual application, without limitation here.
  • after each of the N models recognizes and predicts the picture to be marked, the emotional state of the picture and the predicted score of that state under each model are obtained, giving in total the emotional states predicted by the N emotion recognition models and the N corresponding predicted scores.
  • the emotional states include but are not limited to emotions such as happy, sad, fear, angry, surprised, disgusted and calm.
  • the predicted score expresses the probability of the emotional state of the face in the face picture: the larger the predicted score, the greater the probability that the face belongs to that emotional state.
  • the server checks the recognition result of each picture to be marked. If at least two different emotional states exist among the states predicted by the N emotion recognition models (for example, a preset first model predicts that the emotional state of the picture is "happy" while a preset second model predicts "surprised"), the recognition result of that picture is in error.
  • the picture to be marked is identified as an error picture, and the error data set containing the error pictures is output to the client through the network, so that the user can annotate the error pictures on the client, enter the correct emotional state information for each error picture, and update the incorrect recognition results of the error pictures in the error data set.
  • the preset sample threshold is a threshold set in advance for selecting and identifying correct pictures to be marked. If the predicted score obtained by the recognition is greater than the preset sample threshold, it means that the recognition result of the picture to be marked is correct.
  • the sample threshold can be set to 0.9 or 0.95.
  • the specific sample threshold can be set according to the actual situation, without limitation here.
  • the server checks the recognition result of each picture to be marked. If the emotional states predicted by the N emotion recognition models are the same and the N corresponding predicted scores are all greater than the preset sample threshold, the recognition result of the picture is confirmed to be correct;
  • the shared emotional state and the mean of the N predicted scores are used as the annotation information of the picture to be marked, and the annotation information is marked into the corresponding picture as a first standard sample, where the mean of the N predicted scores is their arithmetic average and the first standard sample contains the annotation information of the emotional state of the face picture.
  • it should be noted that there is no necessary execution order between step S30 and step S40; they may also be executed in parallel, which is not limited here.
  • S50 Receive the marked error data set sent by the client, and use the error picture in the marked error data set as the second standard sample, and save the first standard sample and the second standard sample to the preset standard sample library .
  • the client sends the annotated error data set to the server through the network;
  • the error data set carries identification information indicating that annotation has been completed, which marks the sent data as an annotated error data set.
  • the server receives the sent data; if it detects that the data contains this identification information, the received data is the annotated error data set sent by the client, and the face pictures in the annotated error data set are used as second standard samples.
  • a second standard sample contains the annotation information of the emotional state of the face picture.
  • the server stores the first standard samples and the second standard samples in a preset standard sample library, where the preset standard sample library is a database for storing standard samples and a standard sample is a face sample picture containing annotation information, so that a machine learning model can learn the correspondence between face sample pictures and their emotional states from the annotation information.
  • S60 Use the first standard sample and the second standard sample to respectively train N preset emotion recognition models to update N preset emotion recognition models.
  • the server uses the first standard sample and the second standard sample to perform incremental training on each preset emotion recognition model, thereby updating N preset emotion recognition models.
  • incremental training is model training that optimizes the existing model parameters of a preset emotion recognition model; it makes full use of the model's historical training results, reduces the time needed for subsequent training, and avoids reprocessing previously trained samples.
  • it can be understood that the more training samples there are, the higher the accuracy and robustness of the trained models; incrementally training the preset emotion recognition models with standard samples containing correct annotation information lets
  • the models learn new knowledge from the newly added standard samples while preserving the knowledge already learned from earlier training samples, yielding more accurate model parameters and improving the recognition accuracy of the models.
  • S70: Use the face pictures in the data set to be marked other than those corresponding to the first and second standard samples as new pictures to be marked, and continue to execute the step of recognizing the pictures to be marked with the N preset emotion recognition models to obtain their recognition results, until the error data set is empty.
  • the server excludes from the data set to be marked the face pictures corresponding to the first standard samples, removes those corresponding to the second standard samples, and uses the remaining face pictures as new pictures to be marked;
  • the remaining face pictures may include both incorrectly and correctly recognized pictures, which need to be further distinguished using emotion recognition models with higher recognition accuracy.
  • the recognition step continues to execute until the error data set is empty, which indicates that no incorrectly recognized pictures remain in the recognition results of the N preset emotion recognition models on the data set to be marked; recognition then stops, and the labeled standard samples are stored in the preset standard sample library for training and testing machine learning models.
  • in this embodiment, error pictures and correctly recognized sample pictures are obtained from the recognition results; the error pictures form an error data set that is output to the client so that the user can annotate it, and the annotated error data set and the correctly recognized sample pictures are stored as standard samples in the standard sample library.
  • the standard samples are then used to incrementally train the multiple emotion recognition models, updating each model and improving the accuracy with which it recognizes the annotation information of pictures to be marked; the method then returns to the step of recognizing the pictures to be marked with the multiple preset emotion recognition models and continues until the error data set is empty.
  • in an embodiment, before step S10, that is, before obtaining face pictures in the preset data set to be marked as pictures to be marked, the method for labeling face sample pictures further includes:
  • S01: Obtain first face pictures using a preset crawler tool.
  • specifically, a preset crawler tool is used to crawl face pictures from public data sets on the network; a crawler tool is a tool for obtaining face pictures, for example the Bazhuayu ("Octopus"), Pashanhu ("Parthenocissus") or GooSeeker crawler tools.
  • the tool browses the contents of public addresses where picture data is stored, crawls the picture data corresponding to preset keywords, and identifies the crawled picture data as first face pictures,
  • where the preset keywords are keywords related to emotions or faces.
  • S02: Augment the first face pictures using a preset augmentation method to obtain second face pictures.
  • for each first face picture, a preset augmentation method is applied;
  • the preset augmentation method is a picture processing method set in advance to increase the number of face pictures.
  • the augmentation method may be cropping the first face picture, for example randomly cropping a 256*256 first face picture to obtain a 248*248 second face picture as the augmented picture;
  • the specific augmentation method can be set according to the needs of the actual application, without limitation here.
  • S03: Save the first face pictures and the second face pictures into the preset data set to be marked.
  • the purpose of augmenting the first face pictures is to increase the number of face pictures; the augmented pictures are used as second face pictures, and the first and second face pictures are saved into the preset data set to be marked, so that
  • the preset emotion recognition models can recognize and annotate the face pictures in the data set to be marked, in order to obtain more face sample pictures to support training of the emotion recognition models.
  • in this embodiment, first face pictures are obtained with a preset crawler tool and augmented with a preset augmentation method to obtain second face pictures, and the first and second face pictures are saved into the preset data set to be marked, which improves the efficiency of obtaining face pictures and greatly increases the number of face picture samples, so that more face pictures are collected to support training of the emotion recognition models.
  • in an embodiment, before step S20, the method for labeling face sample pictures further includes:
  • S11: Obtain face sample pictures from a preset standard sample library.
  • specifically, the server can obtain face sample pictures from the preset standard sample library for training the emotion recognition models, where the preset standard sample library is a database for storing standard samples; a standard sample is a face sample picture containing annotation information, and each sample picture corresponds to one piece of annotation information.
  • the annotation information describes the emotional state of the face in the face sample picture; the emotional states include but are not limited to emotions such as happy, sad, fear, angry, surprised, disgusted and calm.
  • S12: Preprocess the face sample pictures.
  • picture preprocessing transforms the size, color and shape of pictures to form training samples of uniform specification, so that the subsequent model training process can handle the pictures more efficiently and the recognition accuracy of the machine learning models is improved.
  • specifically, the face sample pictures can first be converted into training samples of a preset uniform size, and the training samples then subjected to preprocessing such as denoising, grayscale conversion and binarization, to eliminate noise information in the pictures, enhance the detectability of face-related information and simplify the image data.
  • for example, the training sample size can be preset to a 224*224 face picture.
  • for a face sample picture of size [1280, 720], an existing face detection algorithm detects the region of the face, the region is cropped out of the picture, the cropped picture is scaled to a [224, 224] training sample, and the training sample is then denoised, grayscaled and binarized, completing the preprocessing of the face sample picture.
  • S13: Train the residual neural network model, the dense convolutional neural network model and the Google convolutional neural network model respectively with the preprocessed face sample pictures, and use the trained residual, dense convolutional and Google convolutional neural network models as the preset emotion recognition models.
  • specifically, the preprocessed face sample pictures are used to train the three models respectively, so that each model performs machine learning on the training samples and obtains its own model parameters, yielding N preset emotion recognition models for recognizing and predicting new sample data.
  • the residual neural network model is the ResNet (Residual Network) model, which introduces a deep residual learning framework into the network structure to solve the degradation problem. It is worth mentioning that deeper networks can perform better than shallower ones, but residuals vanish in deep networks, which causes the degradation problem; ResNet solves the degradation problem and allows deeper networks to be trained well.
  • in mathematical statistics, a residual is the difference between an actually observed value and an estimated value.
  • the dense convolutional neural network model is the DenseNet (Dense Convolutional Network) model, which adopts feature reuse:
  • the input of each layer of the network includes the outputs of all preceding layers,
  • which improves the transmission efficiency of information and gradients in the network and makes it possible to train deeper networks.
  • the Google convolutional neural network model is the GoogLeNet model,
  • a machine learning model that exploits the computing resources within the network to reduce the computational cost of deep neural networks and to increase the width and depth of the network without increasing the computational load.
  • in this embodiment, preprocessing improves the quality of the face sample pictures, so that the subsequent model training process handles the pictures more efficiently, which improves
  • the training speed and recognition accuracy of the machine learning models; the preprocessed face sample pictures are then used to train the residual, dense convolutional and Google convolutional neural network models, yielding multiple trained
  • emotion recognition models that can classify and predict new face pictures, whose recognition results can be combined for analysis and judgment to improve the accuracy of annotating face pictures.
  • this embodiment describes in detail the specific implementation method of using N preset emotion recognition models mentioned in step S20 to recognize the image to be marked to obtain the recognition result of the image to be marked.
  • FIG. 5 shows a specific flowchart of step S20, which is described in detail as follows:
  • S201: For each picture to be marked, use the N preset emotion recognition models to separately extract feature values from the picture, to obtain the feature data corresponding to each preset emotion recognition model.
  • feature value extraction is a method of using an emotion recognition model to extract the characteristic information belonging to the face in a picture to be marked, so as to highlight the representative features of the picture.
  • specifically, the server uses the N preset emotion recognition models to extract feature values from the picture to be marked, obtaining the feature data corresponding to each model; the important features that are needed are retained and irrelevant information is discarded, yielding feature data usable for subsequent emotional state prediction.
  • S202: In each preset emotion recognition model, use the m trained classifiers to compute the similarity of the feature data, to obtain the probability values of the m emotional states of the picture to be marked, where m is a positive integer and each classifier corresponds to one emotional state.
  • each preset emotion recognition model contains m trained classifiers; each classifier corresponds to one emotional state and the feature data of that state, where the emotional state of a classifier can be trained according to actual needs
  • and the number of classifiers m can also be set as needed.
  • for example, m can be set to 7, covering 7 emotional states;
  • the emotional states can be set to the 7 emotions happy, sad, fear, angry, surprised, disgusted and calm.
  • specifically, based on the feature data of the picture to be marked, in each preset emotion recognition model the m trained classifiers compute the similarity of the feature data, giving the probability that the feature values of the picture belong to
  • the emotional state of each classifier; each emotion recognition model predicts the picture separately, producing the probability that the picture belongs to each emotional state, for a total of m probability values per model.
  • S203: From the m probability values, take the emotional state with the largest probability value as the emotional state predicted by the emotion recognition model, and take that largest probability value as the predicted score of the state; in total, the emotional states predicted by the N emotion recognition models and the N corresponding predicted scores are obtained.
  • specifically, in the recognition result of each preset emotion recognition model, the emotional state with the largest probability value is taken as the emotional state of the picture to be marked, representing the emotional state of the picture, and the largest probability value is taken as the predicted score of that state; in total, the emotional states predicted by the N models and the N corresponding predicted scores are obtained.
  • for example, Table 1 shows the recognition results obtained after a picture to be marked is recognized and predicted by three preset emotion recognition models (a first model, a second model and a third model), where categories 1 to 6 respectively represent
  • the emotional states happy, sad, fear, angry, disgusted and calm of the face picture; the probability under each category is the probability, predicted by each preset model, that the picture belongs to that category. For example, the 95% under category 1 is the probability, obtained by the first model through recognition and prediction, that the face in the picture belongs to the emotional state "happy".
  • since 95% is the largest probability among the predicted categories of the picture,
  • the largest probability value of 95% is taken as the predicted score of the emotional state predicted by the first model, i.e. the predicted score is 0.95; thus the first model's
  • predicted emotional state is "happy" with a predicted score of 0.95,
  • the second model's predicted emotional state is "happy" with a predicted score of 0.90,
  • and the third model's predicted emotional state is "happy" with a predicted score of 0.90.
  • in this embodiment, the N preset emotion recognition models each extract feature values from the picture to be marked, giving the feature data corresponding to each model;
  • in each preset emotion recognition model, the trained classifiers compute the similarity of the feature data to obtain the probability values of the picture's emotional states, and from these probability values the emotional state with the largest value
  • is taken as the state predicted by the model, with the largest probability value as the corresponding predicted score.
  • in this way the emotional state predicted by each model and its predicted score are obtained; annotating the pictures with multiple
  • emotion recognition models and analyzing the combined recognition results improves the recognition accuracy of the pictures to be marked, and thus the annotation accuracy of the face sample pictures.
  • in an embodiment, the specific implementation of step S30 (for the recognition result of each picture to be marked, if at least two different emotional states exist among the states predicted by the N emotion recognition models, identifying the picture as an error picture and outputting an error data set containing the error pictures to the client) is described in detail.
  • FIG. 6 shows a specific flowchart of step S30, which is described in detail as follows:
  • S301 Detect the recognition result of each picture to be marked. If there are at least two different emotion states among the emotion states predicted by the N emotion recognition models, the picture to be marked is identified as the first error picture.
  • the server detects the recognition result of each picture to be marked. If there are at least two different emotion states in the emotion states predicted by the N emotion recognition models, it means that the recognition result of the picture to be marked is wrong.
  • the picture to be marked is identified as the first error picture.
  • S302: If the emotional states predicted by the N emotion recognition models are the same and the predicted scores corresponding to the N emotional states are all less than a preset error threshold, identify the picture to be marked as a second error picture.
  • the preset error threshold is a threshold set in advance for deciding whether the recognized emotional state of a picture to be marked is wrong; if the states predicted by the N models are the same but the N corresponding predicted scores are all less than the preset error threshold, the recognition of the face picture is in error and the picture is identified as a second error picture.
  • the error threshold can be set to 0.5 or 0.6; the specific value can be set according to the actual situation, without limitation here.
  • S303 Use the first error picture and the second error picture as an error data set, and output the error data set to the client.
  • specifically, the server uses the first error pictures and the second error pictures as the error data set and outputs it to the client, so that the user can annotate the error pictures on the client and enter the correct emotional state information for each error picture;
  • the user confirms the emotional state of the face in each face picture of the error picture set, applies the correct annotation information accordingly, and the incorrect recognition results of the error pictures in the error data set are updated.
  • in this embodiment, the recognition result of each picture to be marked is checked: if at least two different emotional states exist among the predicted states, the picture is identified as a first error picture; if the predicted states are all the same and every corresponding predicted score is less than the preset error threshold, the picture is identified as a second error picture. The first and second
  • error pictures are output to the client as the error data set, so that incorrectly recognized pictures can be annotated manually, yielding correctly annotated face sample pictures for incremental training of the emotion recognition models; this improves the recognition accuracy of the models, so that the server can recognize and annotate the pictures to be marked with more accurate emotion recognition models, thereby improving the annotation accuracy of the face pictures.
  • an apparatus for labeling face sample pictures is provided, corresponding one-to-one to the method for labeling face sample pictures in the above embodiments.
  • the face sample picture tagging device includes: a picture acquisition module 71, a picture recognition module 72, a data output module 73, a picture tagging module 74, a sample storage module 75, a model update module 76, and a loop execution module 77.
  • the detailed description of each functional module is as follows:
  • the picture obtaining module 71 is used to obtain a preset face image in the data set to be marked as a picture to be marked;
  • the picture recognition module 72 is used to recognize the pictures to be marked using N preset emotion recognition models to obtain recognition results of the pictures to be marked, where N is a positive integer and a recognition result includes the emotional states predicted by the N emotion recognition models and the predicted scores corresponding to the N emotional states;
  • the data output module 73 is used to, for the recognition result of each picture to be marked, identify the picture as an error picture if at least two different emotional states exist among the emotional states predicted by the N emotion recognition models, and output an error data set containing the error pictures to the client;
  • the picture annotation module 74 is used to, for the recognition result of each picture to be marked, if the emotional states predicted by the N emotion recognition models are the same and the predicted scores corresponding to the N emotional states are all greater than the preset sample threshold, use the emotional state and the mean of the N predicted scores as the annotation information of the picture and mark the annotation information into the corresponding picture as a first standard sample;
  • the sample storage module 75 is used to receive the annotated error data set sent by the client, use the error pictures in the annotated error data set as second standard samples, and save the first standard samples and the second standard samples into the preset standard sample library;
  • the model update module 76 is configured to train the N preset emotion recognition models respectively with the first standard samples and the second standard samples, to update the N preset emotion recognition models;
  • the loop execution module 77 is configured to use the face pictures in the data set to be marked other than the first and second standard samples as new pictures to be marked, and continue to execute the step of recognizing the pictures to be marked with the N preset emotion recognition models to obtain their recognition results, until the error data set is empty.
  • in addition, the apparatus for labeling face sample pictures further includes:
  • the picture crawling module 701 is used to obtain a first face image using a preset crawler tool
  • the picture augmentation module 702 is used to augment the first face image by using a preset augmentation method to obtain the second face image;
  • the picture saving module 703 is used to save the first face picture and the second face picture to a preset data set to be marked.
  • in addition, the apparatus for labeling face sample pictures further includes:
  • the sample obtaining module 711 is used to obtain a face sample picture from a preset standard sample library
  • the first processing module 712 is used to pre-process the face sample pictures
  • the model training module 713 is used to train the residual neural network model, the dense convolutional neural network model and the Google convolutional neural network model with the preprocessed face sample pictures, and to use the trained models as the preset emotion recognition models.
  • the picture recognition module 72 includes:
  • the feature extraction sub-module 7201 is used to extract feature values of the to-be-marked pictures using N preset emotion recognition models for each to-be-marked picture, to obtain feature data corresponding to each preset emotion recognition model;
  • the data calculation submodule 7202 is used to calculate the similarity of the feature data using the trained m classifiers in each preset emotion recognition model to obtain the probability values of m emotional states of the image to be marked, where, m is a positive integer, and each classifier corresponds to an emotional state;
  • the data selection submodule 7203 is used to obtain, from the m probability values, the emotional state with the largest probability value as the emotional state predicted by the emotion recognition model, and to use that largest probability value as the predicted score of the state; in total, the emotional states predicted by the N emotion recognition models and the N corresponding predicted scores are obtained.
  • the data output module 73 includes:
  • the first identification submodule 7301 is used to check the recognition result of each picture to be marked and, if at least two different emotional states exist among the emotional states predicted by the N emotion recognition models, identify the picture as a first error picture;
  • the second identification submodule 7302 is configured to identify the picture to be marked as a second error picture if the emotional states predicted by the N emotion recognition models are the same and the predicted scores corresponding to the N emotional states are all less than the preset error threshold;
  • the data output submodule 7303 is configured to use the first error pictures and the second error pictures as the error data set and output the error data set to the client.
  • Each module in the above-mentioned face sample picture labeling device may be implemented in whole or in part by software, hardware, or a combination thereof.
  • the above modules may be embedded in the hardware or independent of the processor in the computer device, or may be stored in the memory in the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure may be as shown in FIG. 8.
  • the computer device includes a processor, memory, network interface, and database connected by a system bus. Among them, the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer-readable instructions, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer-readable instructions in the non-volatile storage medium.
  • the network interface of the computer device is used to communicate with external terminals through a network connection. When the computer-readable instructions are executed by the processor, a method for tagging a face sample image is realized.
  • a non-volatile readable storage medium is provided, including a memory, a processor and computer-readable instructions stored in the memory and executable on the processor.
  • when the processor executes the computer-readable instructions,
  • the steps of the method for labeling face sample pictures in the above embodiments are implemented, for example steps S10 to S70 shown in FIG. 2; or, the functions of the modules of the apparatus for labeling face sample pictures in the above embodiments
  • are implemented, for example the functions of modules 71 to 77 shown in FIG. 7. To avoid repetition, details are not repeated here.
  • a computer-readable storage medium is provided, on which computer-readable instructions are stored.
  • when the computer-readable instructions are executed by a processor, the steps of the method for labeling face sample pictures in the above embodiments are implemented, for example steps S10 to S70 shown in FIG. 2; or, the functions of the modules of the apparatus for labeling face sample pictures in the above embodiments are implemented, for example the functions of modules 71 to 77 shown in FIG. 7. To avoid repetition, details are not repeated here.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present application discloses a method, apparatus, computer device and storage medium for labeling face sample pictures. The method includes: recognizing pictures to be marked with multiple preset emotion recognition models; obtaining, from the recognition results, error pictures that were recognized incorrectly and pictures that were recognized correctly; outputting an error data set containing the error pictures to a client for annotation; storing the annotated error data set and the correctly recognized pictures as standard samples in a standard sample library; training the multiple emotion recognition models with the standard samples to update them; and returning to the step of recognizing the pictures to be marked with the multiple preset emotion recognition models, until the error data set is empty. The technical solution of the present application can automatically generate annotation information for face pictures, improving the efficiency and accuracy of annotating face pictures and thus the efficiency of generating a standard sample library for model training and testing.

Description

Method, apparatus, computer device and storage medium for labeling face sample pictures
This application is based on, and claims priority from, the Chinese invention patent application No. 201811339683.8, filed on November 12, 2018 and entitled "Method, apparatus, computer device and storage medium for labeling face sample pictures".
TECHNICAL FIELD
The present application relates to the field of biometric recognition technology, and in particular to a method, apparatus, computer device and storage medium for labeling face sample pictures.
BACKGROUND
Facial expression recognition is an important research direction in the field of artificial intelligence. Research on facial emotion recognition requires a large number of facial emotion samples to support the training of emotion recognition models; performing deep learning with a large number of facial emotion samples helps to improve the accuracy and robustness of those models.
However, public data sets for facial emotion classification are currently relatively scarce, so face pictures must be annotated manually or specific facial emotion samples collected by hand. Manual annotation of face pictures is time-consuming and requires substantial human resources, so collecting facial emotion samples manually involves a heavy workload; as a result, facial emotion sample data sets are collected inefficiently, and the limited number of manually collected samples cannot adequately support the training of emotion recognition models.
SUMMARY
Embodiments of the present application provide a method, apparatus, computer device and storage medium for labeling face sample pictures, to solve the problem of low efficiency in labeling facial emotion sample pictures.
A method for labeling face sample pictures includes:
obtaining face pictures in a preset data set to be marked as pictures to be marked;
recognizing the pictures to be marked using N preset emotion recognition models to obtain recognition results of the pictures to be marked, where N is a positive integer and a recognition result includes the emotional states predicted by the N emotion recognition models and the predicted scores corresponding to the N emotional states;
for the recognition result of each picture to be marked, if at least two different emotional states exist among the emotional states predicted by the N emotion recognition models, identifying the picture to be marked as an error picture, and outputting an error data set containing the error pictures to a client;
for the recognition result of each picture to be marked, if the emotional states predicted by the N emotion recognition models are the same and the predicted scores corresponding to the N emotional states are all greater than a preset sample threshold, using the emotional state and the mean of the N predicted scores as the annotation information of the picture to be marked, and marking the annotation information into the corresponding picture to be marked as a first standard sample;
receiving the annotated error data set sent by the client, using the error pictures in the annotated error data set as second standard samples, and saving the first standard samples and the second standard samples into a preset standard sample library;
training the N preset emotion recognition models respectively with the first standard samples and the second standard samples, to update the N preset emotion recognition models;
using the face pictures in the data set to be marked other than the first standard samples and the second standard samples as new pictures to be marked, and continuing to execute the step of recognizing the pictures to be marked using the N preset emotion recognition models to obtain their recognition results, until the error data set is empty.
An apparatus for labeling face sample pictures includes:
a picture acquisition module, used to obtain face pictures in a preset data set to be marked as pictures to be marked;
a picture recognition module, used to recognize the pictures to be marked using N preset emotion recognition models to obtain recognition results of the pictures to be marked, where N is a positive integer and a recognition result includes the emotional states predicted by the N emotion recognition models and the predicted scores corresponding to the N emotional states;
a data output module, used to, for the recognition result of each picture to be marked, identify the picture to be marked as an error picture if at least two different emotional states exist among the emotional states predicted by the N emotion recognition models, and output an error data set containing the error pictures to a client;
a picture annotation module, used to, for the recognition result of each picture to be marked, if the emotional states predicted by the N emotion recognition models are the same and the predicted scores corresponding to the N emotional states are all greater than a preset sample threshold, use the emotional state and the mean of the N predicted scores as the annotation information of the picture to be marked, and mark the annotation information into the corresponding picture as a first standard sample;
a sample storage module, used to receive the annotated error data set sent by the client, use the error pictures in the annotated error data set as second standard samples, and save the first standard samples and the second standard samples into a preset standard sample library;
a model update module, used to train the N preset emotion recognition models respectively with the first standard samples and the second standard samples, to update the N preset emotion recognition models;
a loop execution module, used to use the face pictures in the data set to be marked other than the first standard samples and the second standard samples as new pictures to be marked, and continue to execute the step of recognizing the pictures to be marked using the N preset emotion recognition models to obtain their recognition results, until the error data set is empty.
A computer device includes a memory, a processor and computer-readable instructions stored in the memory and executable on the processor; when executing the computer-readable instructions, the processor implements the steps of the above method for labeling face sample pictures.
One or more non-volatile readable storage media storing computer-readable instructions, such that when the computer-readable instructions are executed by one or more processors, the one or more processors implement the steps of the above method for labeling face sample pictures.
Details of one or more embodiments of the present application are set forth in the drawings and the description below; other features and advantages of the present application will become apparent from the specification, the drawings and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application environment of the method for labeling face sample pictures in an embodiment of the present application;
FIG. 2 is a flowchart of the method for labeling face sample pictures in an embodiment of the present application;
FIG. 3 is a specific flowchart of generating the data set to be marked in the method for labeling face sample pictures in an embodiment of the present application;
FIG. 4 is a specific flowchart of constructing the emotion recognition models in the method for labeling face sample pictures in an embodiment of the present application;
FIG. 5 is a specific flowchart of step S20 in FIG. 2;
FIG. 6 is a specific flowchart of step S30 in FIG. 2;
FIG. 7 is a schematic block diagram of the apparatus for labeling face sample pictures in an embodiment of the present application;
FIG. 8 is a schematic diagram of a computer device in an embodiment of the present application.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
The method for labeling face sample pictures provided in the embodiments of the present application can be applied in the application environment shown in FIG. 1. The application environment includes a server and a client connected through a network. The server recognizes and annotates face pictures and outputs incorrectly recognized pictures to the client; the user annotates those pictures on the client; and the server stores the annotated data obtained from the client, together with the correctly recognized data, in a standard sample library. The client may specifically be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet or a portable wearable device; the server may be implemented as an independent server or as a server cluster composed of multiple servers. The method provided in the embodiments of the present application is applied to the server.
In an embodiment, FIG. 2 shows a flowchart of the method for labeling face sample pictures in this embodiment. The method is applied to the server in FIG. 1 and is used to recognize and annotate face pictures. As shown in FIG. 2, the method includes steps S10 to S70, detailed as follows:
S10: Obtain face pictures in the preset data set to be marked as pictures to be marked.
Here, the preset data set to be marked is a storage space set in advance for storing collected face pictures. The face pictures can be crawled from public data sets on the network, or captured as frames containing a face from public videos; the specific way of obtaining face pictures can be set according to the actual situation and is not limited here.
Specifically, the server obtains face pictures from the preset data set to be marked as pictures to be marked; the pictures to be marked need to be annotated so that they can be used for training and testing machine learning models.
S20: Recognize the pictures to be marked using the N preset emotion recognition models to obtain recognition results of the pictures to be marked, where N is a positive integer and a recognition result includes the emotional states predicted by the N emotion recognition models and the predicted scores corresponding to the N emotional states.
Here, a preset emotion recognition model is a pre-trained model used to identify the emotional state of the face in a face picture to be recognized. There are N preset emotion recognition models, where N is a positive integer; N may be 1 or 2, for example, and can be set according to the needs of the actual application, without limitation here.
Specifically, after each of the N preset emotion recognition models recognizes and predicts a picture to be marked, the emotional state of the picture and the predicted score of that state under each model are obtained, giving in total the emotional states predicted by the N models and the N corresponding predicted scores. The emotional states include, but are not limited to, emotions such as happy, sad, fear, angry, surprised, disgusted and calm; the predicted score expresses the probability of the emotional state of the face in the picture, and the larger the predicted score, the greater the probability that the face belongs to that emotional state.
S30: For the recognition result of each picture to be marked, if at least two different emotional states exist among the emotional states predicted by the N emotion recognition models, identify the picture as an error picture and output an error data set containing the error pictures to the client.
Specifically, the server checks the recognition result of each picture to be marked. If at least two different emotional states exist among the states predicted by the N models (for example, a preset first model predicts that the emotional state of the picture is "happy" while a preset second model predicts "surprised"), the recognition result of the picture is in error. The picture is identified as an error picture, and the error data set containing the error pictures is output to the client through the network, so that the user can annotate the error pictures on the client, enter the correct emotional state information for each error picture, and update the incorrect recognition results of the error pictures in the error data set.
S40: For the recognition result of each picture to be marked, if the emotional states predicted by the N emotion recognition models are the same and the predicted scores corresponding to the N emotional states are all greater than the preset sample threshold, use the emotional state and the mean of the N predicted scores as the annotation information of the picture, and mark the annotation information into the corresponding picture as a first standard sample.
Here, the preset sample threshold is a threshold set in advance for selecting correctly recognized pictures to be marked; if a predicted score is greater than the preset sample threshold, the recognition result of the picture is considered correct. The sample threshold can be set to 0.9 or 0.95; the specific value can be set according to the actual situation and is not limited here.
Specifically, the server checks the recognition result of each picture to be marked. If the emotional states predicted by the N models are the same and the N corresponding predicted scores are all greater than the preset sample threshold, the recognition result is confirmed to be correct; the shared emotional state and the mean of the N predicted scores are used as the annotation information of the picture, and the annotation information is marked into the corresponding picture as a first standard sample. The mean of the N predicted scores is their arithmetic average, and the first standard sample contains the annotation information of the emotional state of the face picture.
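As an illustration, the consensus check in steps S30 and S40 can be sketched in a few lines of Python. This is a minimal sketch under assumed interfaces (each model contributes one (emotional state, predicted score) pair); the 0.9 threshold is one of the example values given above, not a fixed part of the method.

```python
from statistics import mean

SAMPLE_THRESHOLD = 0.9  # example preset sample threshold (0.9 or 0.95 above)

def triage_picture(predictions):
    """predictions: one (emotional_state, predicted_score) pair per model."""
    states = [state for state, _ in predictions]
    scores = [score for _, score in predictions]
    if len(set(states)) >= 2:           # S30: at least two different states
        return "error", None
    if all(s > SAMPLE_THRESHOLD for s in scores):  # S40: same state, high scores
        return "first_standard", (states[0], mean(scores))
    return "undecided", None            # re-examined in a later round (see S70)
```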
It should be noted that there is no necessary execution order between step S30 and step S40; they may also be executed in parallel, which is not limited here.
S50: Receive the annotated error data set sent by the client, use the error pictures in the annotated error data set as second standard samples, and save the first standard samples and the second standard samples into the preset standard sample library.
Specifically, the client sends the annotated error data set to the server through the network. The error data set carries identification information indicating that annotation has been completed, which marks the sent data as an annotated error data set. The server receives the data sent by the client; if it detects that the data contains this identification information, the received data is the annotated error data set sent by the client, and the face pictures in the annotated error data set are used as second standard samples, each containing the annotation information of the emotional state of the face picture.
The server stores the first standard samples and the second standard samples in the preset standard sample library, where the preset standard sample library is a database for storing standard samples and a standard sample is a face sample picture containing annotation information. After annotation information is marked onto a face picture, a face sample picture is obtained, so that a machine learning model can learn the correspondence between face sample pictures and their emotional states from the annotation information.
S60: Train the N preset emotion recognition models respectively with the first standard samples and the second standard samples, to update the N preset emotion recognition models.
Specifically, the server uses the first and second standard samples to incrementally train each preset emotion recognition model, thereby updating the N preset models. Incremental training is model training that optimizes the existing model parameters of a preset emotion recognition model; it makes full use of the model's historical training results, reduces the time needed for subsequent training, and avoids reprocessing previously trained sample data.
It can be understood that the more training samples there are, the higher the accuracy and robustness of the trained emotion recognition models. Incrementally training the preset emotion recognition models with standard samples containing correct annotation information lets the models learn new knowledge from the newly added standard samples while preserving the knowledge already learned from earlier training samples, yielding more accurate model parameters and improving recognition accuracy.
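A hedged sketch of the incremental training in step S60 follows: each preset model continues from its existing parameters and is trained only on the newly added standard samples. The PyTorch training loop, learning rate and epoch count are illustrative assumptions; the patent does not prescribe a framework.

```python
import torch

def incremental_update(model, new_sample_loader, epochs=2, lr=1e-4):
    # continue from the model's historical parameters instead of
    # re-initializing, so previously learned knowledge is preserved
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in new_sample_loader:  # only the new standard samples
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```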
S70: Use the face pictures in the data set to be marked other than the first standard samples and the second standard samples as new pictures to be marked, and continue to execute the step of recognizing the pictures to be marked with the N preset emotion recognition models to obtain their recognition results, until the error data set is empty.
Specifically, the server excludes from the data set to be marked the face pictures corresponding to the first standard samples, removes the face pictures corresponding to the second standard samples, and uses the remaining face pictures as new pictures to be marked. The remaining pictures may include both incorrectly recognized and correctly recognized pictures, which need to be further distinguished using emotion recognition models with higher recognition accuracy.
Further, the step of recognizing the pictures to be marked with the N preset emotion recognition models to obtain their recognition results continues to execute until the error data set is empty. An empty error data set means that no incorrectly recognized pictures remain in the recognition results of the N models on the data set to be marked; recognition then stops, and the labeled standard samples are stored in the preset standard sample library for training and testing machine learning models.
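Putting the steps together, the overall S10 to S70 loop might look like the following sketch. `triage_picture` and `incremental_update` are the illustrative helpers above; `annotate_on_client` (assumed to return (picture, annotation) pairs) and `make_loader` are stand-ins for the client round trip and data loading, not APIs from the source.

```python
def label_dataset(to_mark, models, annotate_on_client, standard_library, make_loader):
    while to_mark:
        errors, standards, undecided = [], [], []
        for picture in to_mark:
            preds = [m.predict(picture) for m in models]   # S20
            verdict, info = triage_picture(preds)          # S30/S40
            if verdict == "first_standard":
                standards.append((picture, info))
            elif verdict == "error":
                errors.append(picture)
            else:
                undecided.append(picture)
        if not errors:                                     # error data set is empty
            break
        second = annotate_on_client(errors)                # S50: manual annotation
        standard_library.extend(standards + second)
        loader = make_loader(standards + second)
        for m in models:                                   # S60: incremental update
            incremental_update(m, loader)
        to_mark = undecided                                # S70: remaining pictures
    return standard_library
```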
In the embodiment corresponding to FIG. 2, multiple preset emotion recognition models recognize the pictures to be marked; error pictures and correctly recognized sample pictures are obtained from the recognition results; the error pictures form an error data set that is output to the client so that the user can annotate it; the annotated error data set and the correctly recognized sample pictures are stored as standard samples in the standard sample library; and the standard samples are used to incrementally train the multiple emotion recognition models, updating each model and improving the accuracy with which it recognizes the annotation information of pictures to be marked, after which the method returns to the step of recognizing the pictures to be marked with the multiple preset emotion recognition models, until the error data set is empty. In this way, annotation information is generated automatically for face pictures, which saves labor costs, improves the efficiency of annotating face pictures and thus the efficiency of generating a standard sample library for model training and testing; at the same time, recognizing each face picture with multiple emotion recognition models and deriving its annotation information by comparing and analyzing multiple recognition results improves the annotation accuracy of the face pictures.
In an embodiment, as shown in FIG. 3, before step S10, that is, before obtaining face pictures in the preset data set to be marked as pictures to be marked, the method for labeling face sample pictures further includes:
S01: Obtain first face pictures using a preset crawler tool.
Specifically, a preset crawler tool is used to crawl face pictures from public data sets on the network. A crawler tool is a tool for obtaining face pictures, for example the Bazhuayu ("Octopus"), Pashanhu ("Parthenocissus") or GooSeeker crawler tools. The tool browses the contents of public addresses where picture data is stored, crawls the picture data corresponding to preset keywords, and identifies the crawled picture data as first face pictures, where the preset keywords are keywords related to emotions or faces.
For example, a crawler tool can be used to crawl picture data corresponding to the preset keyword "face" from Baidu Images, naming the face pictures face_1.jpg, face_2.jpg, ..., face_X.jpg in the order in which they are obtained.
S02: Augment the first face pictures using a preset augmentation method to obtain second face pictures.
Specifically, for each first face picture, a preset augmentation method is applied; the preset augmentation method is a picture processing method set in advance to increase the number of face pictures.
The augmentation method may be cropping the first face picture, for example randomly cropping a 256*256 first face picture to obtain a 248*248 second face picture as the augmented picture. The first face picture may also be processed by grayscale conversion or global illumination correction, or several picture processing methods may be combined into a preset augmentation method, for example flipping the first face picture and then applying local side-light correction to the flipped picture. The method is not limited to these; the specific augmentation method can be set according to the needs of the actual application and is not limited here.
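For example, the random 256*256 to 248*248 crop described above can be written as follows with Pillow; the file names are placeholders, and flipping or lighting correction could be chained onto the same function.

```python
import random
from PIL import Image

def random_crop(img, out_size=248):
    width, height = img.size  # e.g. a 256*256 first face picture
    left = random.randint(0, width - out_size)
    top = random.randint(0, height - out_size)
    return img.crop((left, top, left + out_size, top + out_size))

second = random_crop(Image.open("face_1.jpg"))  # augmented second face picture
second.save("face_1_crop.jpg")
```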
S03: Save the first face pictures and the second face pictures into the preset data set to be marked.
Specifically, the purpose of augmenting the first face pictures is to increase the number of face pictures. The augmented pictures are used as second face pictures, and saving the first and second face pictures into the preset data set to be marked allows the preset emotion recognition models to recognize and annotate the face pictures in that data set, so that more face sample pictures are obtained to support training of the emotion recognition models.
In the embodiment corresponding to FIG. 3, first face pictures are obtained with a preset crawler tool and augmented with a preset augmentation method to obtain second face pictures, and the first and second face pictures are saved into the preset data set to be marked. This improves the efficiency of obtaining face pictures and greatly increases the number of face picture samples, so that more face pictures are collected to support training of the emotion recognition models.
In an embodiment, as shown in FIG. 4, before step S20, that is, before recognizing the pictures to be marked with the N preset emotion recognition models to obtain their recognition results, the method for labeling face sample pictures further includes:
S11: Obtain face sample pictures from a preset standard sample library.
Specifically, the server can obtain face sample pictures from the preset standard sample library for training the emotion recognition models, where the preset standard sample library is a database for storing standard samples. A standard sample is a face sample picture containing annotation information; each face sample picture corresponds to one piece of annotation information, which describes the emotional state of the face in the picture. The emotional states of face pictures include, but are not limited to, emotions such as happy, sad, fear, angry, surprised, disgusted and calm.
S12: Preprocess the face sample pictures.
Here, picture preprocessing transforms the size, color and shape of pictures to form training samples of uniform specification, so that the subsequent model training process can handle the pictures more efficiently and the recognition accuracy of the machine learning models is improved.
Specifically, the face sample pictures can first be converted into training samples of a preset uniform size, and the training samples then subjected to preprocessing such as denoising, grayscale conversion and binarization, to eliminate noise information in the face sample pictures, enhance the detectability of face-related information and simplify the image data.
For example, the training sample size can be preset to a 224*224 face picture. For a face sample picture of size [1280, 720], an existing face detection algorithm detects the region of the face, the region is cropped out of the picture, the cropped picture is scaled to a [224, 224] training sample, and the training sample is then denoised, grayscaled and binarized, completing the preprocessing of the face sample picture.
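A sketch of this preprocessing with OpenCV is shown below. The Haar cascade merely stands in for "an existing face detection algorithm", and Gaussian blur plus Otsu thresholding stand in for the denoising and binarization steps; none of these specific choices come from the source, and the sketch assumes at least one face is detected.

```python
import cv2

def preprocess(path, size=224):
    img = cv2.imread(path)                                  # e.g. a [1280, 720] picture
    gray_full = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    x, y, w, h = cascade.detectMultiScale(gray_full)[0]     # take the first face found
    face = cv2.resize(img[y:y + h, x:x + w], (size, size))  # crop and scale to 224*224
    face = cv2.GaussianBlur(face, (3, 3), 0)                # denoise
    gray = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)           # grayscale
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```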
S13: Train the residual neural network model, the dense convolutional neural network model and the Google convolutional neural network model respectively with the preprocessed face sample pictures, and use the trained residual, dense convolutional and Google convolutional neural network models as the preset emotion recognition models.
Specifically, the preprocessed face sample pictures obtained in step S12 are used to train the residual neural network model, the dense convolutional neural network model and the Google convolutional neural network model respectively, so that each model performs machine learning on the training samples and obtains its own model parameters, yielding N preset emotion recognition models for recognizing and predicting new sample data.
Here, the residual neural network model is the ResNet (Residual Network) model, which introduces a deep residual learning framework into the network structure to solve the degradation problem. It is worth mentioning that deeper networks can perform better than shallower ones, but residuals vanish in deep networks, which causes the degradation problem; ResNet solves the degradation problem and allows deeper networks to be trained well. In mathematical statistics, a residual is the difference between an actually observed value and an estimated value.
The dense convolutional neural network model is the DenseNet (Dense Convolutional Network) model, which adopts feature reuse: the input of each layer of the network includes the outputs of all preceding layers, which improves the transmission efficiency of information and gradients in the network and makes it possible to train deeper networks.
The Google convolutional neural network model is the GoogLeNet model, a machine learning model that exploits the computing resources within the network to reduce the computational cost of deep neural networks and to increase the width and depth of the network without increasing the computational load.
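The three architectures named here are all available in torchvision, so instantiating the N = 3 models could look like the sketch below. The specific depths (ResNet-50, DenseNet-121) are assumptions; the patent names only the model families.

```python
import torch.nn as nn
from torchvision import models

NUM_EMOTIONS = 7  # happy, sad, fear, angry, surprised, disgusted, calm

def build_recognizers():
    resnet = models.resnet50(weights=None)
    resnet.fc = nn.Linear(resnet.fc.in_features, NUM_EMOTIONS)

    densenet = models.densenet121(weights=None)
    densenet.classifier = nn.Linear(densenet.classifier.in_features, NUM_EMOTIONS)

    googlenet = models.googlenet(weights=None, aux_logits=False, init_weights=True)
    googlenet.fc = nn.Linear(googlenet.fc.in_features, NUM_EMOTIONS)

    return [resnet, densenet, googlenet]  # the N = 3 preset emotion recognition models
```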
In the embodiment corresponding to FIG. 4, preprocessing the face sample pictures in the standard sample library improves their quality, so that the subsequent model training process handles the pictures more efficiently, which improves the training speed and recognition accuracy of the machine learning models. The preprocessed face sample pictures are then used to train the residual neural network model, the dense convolutional neural network model and the Google convolutional neural network model respectively, yielding multiple trained emotion recognition models that can classify and predict new face pictures; combining the recognition results of multiple emotion recognition models for analysis and judgment improves the accuracy of annotating face pictures.
In an embodiment, the specific implementation of recognizing the pictures to be marked with the N preset emotion recognition models to obtain their recognition results, mentioned in step S20, is described in detail.
Referring to FIG. 5, FIG. 5 shows a specific flowchart of step S20, detailed as follows:
S201: For each picture to be marked, use the N preset emotion recognition models to separately extract feature values from the picture, to obtain the feature data corresponding to each preset emotion recognition model.
Here, feature value extraction is a method of using an emotion recognition model to extract the characteristic information belonging to the face in a picture to be marked, so as to highlight the representative features of the picture.
Specifically, for each picture to be marked, the server uses the N preset emotion recognition models to extract feature values from the picture, obtaining the feature data corresponding to each model; the important features that are needed are retained and irrelevant information is discarded, yielding feature data usable for subsequent emotional state prediction.
S202: In each preset emotion recognition model, use the m trained classifiers to compute the similarity of the feature data, to obtain the probability values of the m emotional states of the picture to be marked, where m is a positive integer and each classifier corresponds to one emotional state.
Here, each preset emotion recognition model contains m trained classifiers; each classifier corresponds to one emotional state and the feature data of that state. The emotional state of a classifier can be trained according to actual needs, and the number of classifiers m can also be set as needed, without specific limitation here; for example, m can be set to 7, covering the 7 emotional states happy, sad, fear, angry, surprised, disgusted and calm.
Specifically, based on the feature data of the picture to be marked, in each preset emotion recognition model the m trained classifiers compute the similarity of the feature data, giving the probability that the feature values of the picture belong to the emotional state of each classifier; each emotion recognition model predicts the picture separately, producing the probability that the picture belongs to each emotional state, for a total of m probability values per model.
S203: From the m probability values, take the emotional state with the largest probability value as the emotional state predicted by the emotion recognition model, and take that largest probability value as the predicted score of the state; in total, the emotional states predicted by the N emotion recognition models and the N corresponding predicted scores are obtained.
Specifically, in the recognition result of each preset emotion recognition model, the emotional state with the largest of the m probability values is taken as the emotional state of the picture to be marked, representing the emotional state of the picture, and the largest probability value is taken as the predicted score of that state; in total, the emotional states predicted by the N models and the N corresponding predicted scores are obtained.
For example, Table 1 shows the recognition results obtained after a picture to be marked is recognized and predicted by three preset emotion recognition models (a first model, a second model and a third model). Categories 1 to 6 respectively represent the emotional states happy, sad, fear, angry, disgusted and calm of the face picture; the probability under each category is the probability, predicted by each preset model, that the picture belongs to that category. For example, the 95% under category 1 is the probability, obtained by the first model through recognition and prediction, that the face in the picture belongs to the emotional state "happy". Since 95% is the largest probability among the first model's predicted categories for the picture, "happy" is taken as the emotional state predicted by the first model, and the largest probability value of 95% becomes the corresponding predicted score, i.e. 0.95. Thus the first model predicts "happy" with a score of 0.95, the second model predicts "happy" with a score of 0.90, and the third model predicts "happy" with a score of 0.90.
Table 1. Recognition results of a picture to be marked

Picture to be marked  Category 1  Category 2  Category 3  Category 4  Category 5  Category 6
First model           95%         3%          1%          1%          0%          0%
Second model          90%         5%          5%          0%          0%          0%
Third model           90%         5%          2%          1%          1%          1%
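For a single model, steps S201 to S203 amount to a forward pass followed by an argmax over the class probabilities, as in the sketch below. The softmax over the classifier outputs is an assumption, and note that Table 1 lists six categories while the running example uses seven emotional states.

```python
import torch
import torch.nn.functional as F

EMOTIONS = ["happy", "sad", "fear", "angry", "surprised", "disgusted", "calm"]

def predict_one(model, image_tensor):
    model.eval()
    with torch.no_grad():
        logits = model(image_tensor.unsqueeze(0))  # S201/S202: m classifier outputs
        probs = F.softmax(logits, dim=1)[0]        # m probability values
    score, index = probs.max(dim=0)                # S203: largest probability wins
    return EMOTIONS[int(index)], float(score)      # e.g. ("happy", 0.95)
```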
In the embodiment corresponding to FIG. 5, the N preset emotion recognition models each extract feature values from the picture to be marked, giving the feature data corresponding to each model; in each preset emotion recognition model, the trained classifiers compute the similarity of the feature data to obtain the probability values of the picture's emotional states, and from these probability values the emotional state with the largest value is taken as the state predicted by the model, with the largest probability value as the corresponding predicted score. In this way the emotional state predicted by each model and its predicted score are obtained. Annotating the pictures to be marked with multiple emotion recognition models and analyzing the combined recognition results improves the recognition accuracy of the pictures to be marked, and thus the annotation accuracy of the face sample pictures.
In an embodiment, the specific implementation of step S30 (for the recognition result of each picture to be marked, if at least two different emotional states exist among the emotional states predicted by the N emotion recognition models, identifying the picture as an error picture and outputting an error data set containing the error pictures to the client) is described in detail.
Referring to FIG. 6, FIG. 6 shows a specific flowchart of step S30, detailed as follows:
S301: Check the recognition result of each picture to be marked; if at least two different emotional states exist among the emotional states predicted by the N emotion recognition models, identify the picture as a first error picture.
Specifically, the server checks the recognition result of each picture to be marked. If at least two different emotional states exist among the states predicted by the N models, the recognition result of the picture is in error, and the picture is identified as a first error picture.
S302: If the emotional states predicted by the N emotion recognition models are the same and the predicted scores corresponding to the N emotional states are all less than a preset error threshold, identify the picture as a second error picture.
Specifically, the preset error threshold is a threshold set in advance for deciding whether the recognized emotional state of a picture to be marked is wrong. If the states predicted by the N models are the same but the N corresponding predicted scores are all less than the preset error threshold, the recognition of the face picture is in error, and the picture is identified as a second error picture. The error threshold can be set to 0.5 or 0.6; the specific value can be set according to the actual situation and is not limited here.
S303:将第一误差图片和第二误差图片作为误差数据集,并将误差数据集输出到客户端。
具体地,服务端将第一误差图片和第二误差图片作为误差数据集,并将误差数据集输出到客户端,以使用户在客户端对误差数据集中误差图片进行标注,输入每个误差图片对应的情绪状态的正确信息,由用户进行确认标注误差图片集中的人脸图片中的人脸所属的情绪状态,并对应的标注上正确标注信息,更新误差数据集中误差图片对应的错误的识别结果。
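A compact sketch of the S301-S303 partition, assuming predictions maps each picture id to its list of (emotion, score) pairs, one pair per model; the names build_error_dataset and ERROR_THRESHOLD are illustrative.

```python
ERROR_THRESHOLD = 0.5   # preset error threshold, adjustable as needed

def build_error_dataset(predictions):
    first_errors, second_errors = [], []
    for pic_id, preds in predictions.items():
        emotions = {emotion for emotion, _ in preds}
        if len(emotions) >= 2:                       # S301: models disagree
            first_errors.append(pic_id)
        elif all(score < ERROR_THRESHOLD for _, score in preds):
            second_errors.append(pic_id)             # S302: agree, low scores
    return first_errors + second_errors              # S303: the error dataset
```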
In the embodiment corresponding to FIG. 6, the recognition result of each picture to be labeled is checked: if at least two different emotion states exist among the predicted emotion states, the picture is marked as a first error picture; if the predicted emotion states are all the same but every prediction score is smaller than the preset error threshold, the picture is marked as a second error picture. The first and second error pictures are then output to the client as the error dataset so that wrongly recognized pictures can be labeled manually, yielding correctly labeled face sample pictures for incremental training of the emotion recognition models. This improves the recognition accuracy of the emotion recognition models, allowing the server to recognize and label pictures with more accurate models, thereby improving the labeling accuracy of face pictures.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
In one embodiment, a face sample picture labeling device is provided, corresponding one-to-one to the face sample picture labeling method in the above embodiments. As shown in FIG. 7, the face sample picture labeling device includes: a picture acquisition module 71, a picture recognition module 72, a data output module 73, a picture labeling module 74, a sample storage module 75, a model update module 76 and a loop execution module 77. The functional modules are described in detail as follows (an overall sketch of the loop they drive follows this list):
the picture acquisition module 71 is configured to obtain face pictures from a preset dataset to be labeled as pictures to be labeled;
the picture recognition module 72 is configured to use N preset emotion recognition models to recognize the pictures to be labeled and obtain their recognition results, where N is a positive integer and a recognition result includes the emotion states predicted by the N emotion recognition models and the prediction scores corresponding to the N emotion states;
the data output module 73 is configured to, for the recognition result of each picture to be labeled, mark the picture as an error picture if at least two different emotion states exist among the emotion states predicted by the N emotion recognition models, and output the error dataset containing the error pictures to the client;
the picture labeling module 74 is configured to, for the recognition result of each picture to be labeled, if the emotion states predicted by the N emotion recognition models are the same and the prediction scores of the N emotion states are all greater than the preset sample threshold, take the emotion state and the mean of the N prediction scores as the labeling information of the picture and attach the labeling information to the corresponding picture to be labeled as a first standard sample;
the sample storage module 75 is configured to receive the labeled error dataset sent by the client, take the error pictures in the labeled error dataset as second standard samples, and save the first standard samples and the second standard samples into the preset standard sample library;
the model update module 76 is configured to use the first standard samples and the second standard samples to train the N preset emotion recognition models respectively, so as to update the N preset emotion recognition models;
the loop execution module 77 is configured to take the face pictures in the dataset to be labeled other than the first standard samples and the second standard samples as new pictures to be labeled, and continue the step of using the N preset emotion recognition models to recognize the pictures to be labeled and obtain their recognition results, until the error dataset is empty.
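Putting the modules together, a hedged sketch of the overall iterative loop that the loop execution module 77 drives; predict, request_manual_labels and retrain are hypothetical helpers, build_error_dataset is the sketch shown earlier, and SAMPLE_THRESHOLD is an assumed value for the preset sample threshold.

```python
SAMPLE_THRESHOLD = 0.9   # preset sample threshold for auto-labeling

def label_dataset(unlabeled, models, standard_library):
    while unlabeled:
        recognized = {pic: [predict(m, pic) for m in models]  # (emotion, score)
                      for pic in unlabeled}
        errors = build_error_dataset(recognized)      # modules 73 / S30
        for pic, preds in recognized.items():
            if pic in errors:
                continue
            if all(score > SAMPLE_THRESHOLD for _, score in preds):
                emotion = preds[0][0]                 # models agree here
                mean_score = sum(s for _, s in preds) / len(preds)
                standard_library[pic] = (emotion, mean_score)  # first standard sample
        standard_library.update(request_manual_labels(errors))  # second standard samples
        models = [retrain(m, standard_library) for m in models]  # module 76
        unlabeled = [p for p in unlabeled if p not in standard_library]
        if not errors:            # stop once the error dataset is empty
            break
    return standard_library
```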
Further, the face sample picture labeling device also includes:
a picture crawling module 701, configured to use a preset crawler tool to obtain first face pictures;
a picture augmentation module 702, configured to augment the first face pictures in a preset augmentation manner to obtain second face pictures;
a picture saving module 703, configured to save the first face pictures and the second face pictures into the preset dataset to be labeled.
Further, the face sample picture labeling device also includes:
a sample acquisition module 711, configured to obtain face sample pictures from the preset standard sample library;
a first processing module 712, configured to preprocess the face sample pictures;
a model training module 713, configured to use the preprocessed face sample pictures to train a residual neural network model, a dense convolutional neural network model and a Google convolutional neural network model, and take the trained residual neural network model, dense convolutional neural network model and Google convolutional neural network model as the preset emotion recognition models.
Further, the picture recognition module 72 includes:
a feature extraction submodule 7201, configured to, for each picture to be labeled, use the N preset emotion recognition models to perform feature value extraction on that picture, obtaining the feature data corresponding to each preset emotion recognition model;
a data computation submodule 7202, configured to, in each preset emotion recognition model, use the m trained classifiers to perform similarity computation on the feature data, obtaining probability values for the m emotion states of the picture to be labeled, where m is a positive integer and each classifier corresponds to one emotion state;
a data selection submodule 7203, configured to take, from the m probability values, the emotion state with the largest probability value as the emotion state predicted by that emotion recognition model and that largest probability value as the prediction score of the emotion state, obtaining in total the emotion states predicted by the N emotion recognition models and the prediction scores corresponding to the N emotion states.
Further, the data output module 73 includes:
a first marking submodule 7301, configured to check the recognition result of each picture to be labeled and, if at least two different emotion states exist among the emotion states predicted by the N emotion recognition models, mark the picture as a first error picture;
a second marking submodule 7302, configured to mark the picture as a second error picture if the emotion states predicted by the N emotion recognition models are the same but the prediction scores of the N emotion states are all smaller than the preset error threshold;
a data output submodule 7303, configured to take the first error pictures and the second error pictures as the error dataset and output the error dataset to the client.
For specific limitations on the face sample picture labeling device, reference may be made to the limitations on the face sample picture labeling method above, which are not repeated here. The modules in the above face sample picture labeling device may be implemented wholly or partly in software, hardware or a combination thereof. The modules may be embedded in or independent of a processor in computer equipment in hardware form, or stored in a memory of the computer equipment in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, computer equipment is provided; the computer equipment may be a server, and its internal structure may be as shown in FIG. 8. The computer equipment includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer equipment provides computing and control capabilities. The memory of the computer equipment includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer-readable instructions and a database. The internal memory provides an environment for running the operating system and the computer-readable instructions in the non-volatile storage medium. The network interface of the computer equipment communicates with external terminals via a network connection. The computer-readable instructions, when executed by the processor, implement a face sample picture labeling method.
In one embodiment, a non-volatile readable storage medium is provided, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor. When executing the computer-readable instructions, the processor implements the steps of the face sample picture labeling method in the above embodiments, such as steps S10 to S70 shown in FIG. 2; alternatively, when executing the computer-readable instructions, the processor implements the functions of the modules of the face sample picture labeling device in the above embodiments, such as modules 71 to 77 shown in FIG. 7. To avoid repetition, details are not given again here.
In one embodiment, a computer-readable storage medium is provided, storing computer-readable instructions. When executed by a processor, the computer-readable instructions implement the steps of the face sample picture labeling method in the above embodiments, such as steps S10 to S70 shown in FIG. 2; alternatively, when executed by a processor, they implement the functions of the modules of the face sample picture labeling device in the above embodiments, such as modules 71 to 77 shown in FIG. 7. To avoid repetition, details are not given again here.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by computer-readable instructions instructing the relevant hardware. The computer-readable instructions may be stored in a non-volatile computer-readable storage medium, and when executed may include the processes of the embodiments of the above methods. Any reference to memory, storage, database or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the above division of functional units and modules is used as an example; in practical applications, the above functions may be assigned to different functional units or modules as needed, that is, the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included in the protection scope of the present application.

Claims (20)

  1. A face sample picture labeling method, characterized in that the face sample picture labeling method comprises:
    obtaining face pictures from a preset dataset to be labeled as pictures to be labeled;
    using N preset emotion recognition models to recognize the pictures to be labeled and obtain recognition results of the pictures to be labeled, wherein N is a positive integer, and the recognition results comprise the emotion states predicted by the N emotion recognition models and the prediction scores corresponding to the N emotion states;
    for the recognition result of each picture to be labeled, if at least two different emotion states exist among the emotion states predicted by the N emotion recognition models, marking the picture as an error picture, and outputting an error dataset containing the error pictures to a client;
    for the recognition result of each picture to be labeled, if the emotion states predicted by the N emotion recognition models are the same and the prediction scores corresponding to the N emotion states are all greater than a preset sample threshold, taking the emotion state and the mean of the N prediction scores as the labeling information of the picture, and attaching the labeling information to the corresponding picture to be labeled as a first standard sample;
    receiving the labeled error dataset sent by the client, taking the error pictures in the labeled error dataset as second standard samples, and saving the first standard samples and the second standard samples into a preset standard sample library;
    using the first standard samples and the second standard samples to train the N preset emotion recognition models respectively, so as to update the N preset emotion recognition models; and
    taking the face pictures in the dataset to be labeled other than the first standard samples and the second standard samples as new pictures to be labeled, and continuing the step of using the N preset emotion recognition models to recognize the pictures to be labeled and obtain their recognition results, until the error dataset is empty.
  2. The face sample picture labeling method according to claim 1, characterized in that before obtaining the face pictures from the preset dataset to be labeled as pictures to be labeled, the face sample picture labeling method further comprises:
    using a preset crawler tool to obtain first face pictures;
    augmenting the first face pictures in a preset augmentation manner to obtain second face pictures; and
    saving the first face pictures and the second face pictures into the preset dataset to be labeled.
  3. The face sample picture labeling method according to claim 1, characterized in that before using the N preset emotion recognition models to recognize the pictures to be labeled and obtain their recognition results, the face sample picture labeling method further comprises:
    obtaining face sample pictures from the preset standard sample library;
    preprocessing the face sample pictures; and
    using the preprocessed face sample pictures to train a residual neural network model, a dense convolutional neural network model and a Google convolutional neural network model, and taking the trained residual neural network model, dense convolutional neural network model and Google convolutional neural network model as the preset emotion recognition models.
  4. The face sample picture labeling method according to claim 1, characterized in that using the N preset emotion recognition models to recognize the pictures to be labeled and obtain their recognition results comprises:
    for each picture to be labeled, using the N preset emotion recognition models to perform feature value extraction on that picture, obtaining the feature data corresponding to each preset emotion recognition model;
    in each preset emotion recognition model, using m trained classifiers to perform similarity computation on the feature data, obtaining probability values for m emotion states of the picture to be labeled, wherein m is a positive integer and each classifier corresponds to one emotion state; and
    from the m probability values, taking the emotion state with the largest probability value as the emotion state predicted by that emotion recognition model, and taking that largest probability value as the prediction score corresponding to the emotion state, obtaining in total the emotion states predicted by the N emotion recognition models and the prediction scores corresponding to the N emotion states.
  5. The face sample picture labeling method according to any one of claims 1 to 4, characterized in that, for the recognition result of each picture to be labeled, marking the picture as an error picture if at least two different emotion states exist among the emotion states predicted by the N emotion recognition models, and outputting the error dataset containing the error pictures to the client comprises:
    checking the recognition result of each picture to be labeled and, if at least two different emotion states exist among the emotion states predicted by the N emotion recognition models, marking the picture as a first error picture;
    if the emotion states predicted by the N emotion recognition models are the same and the prediction scores corresponding to the N emotion states are all smaller than a preset error threshold, marking the picture as a second error picture; and
    taking the first error pictures and the second error pictures as the error dataset, and outputting the error dataset to the client.
  6. A face sample picture labeling device, characterized in that the face sample picture labeling device comprises:
    a picture acquisition module, configured to obtain face pictures from a preset dataset to be labeled as pictures to be labeled;
    a picture recognition module, configured to use N preset emotion recognition models to recognize the pictures to be labeled and obtain recognition results of the pictures to be labeled, wherein N is a positive integer, and the recognition results comprise the emotion states predicted by the N emotion recognition models and the prediction scores corresponding to the N emotion states;
    a data output module, configured to, for the recognition result of each picture to be labeled, mark the picture as an error picture if at least two different emotion states exist among the emotion states predicted by the N emotion recognition models, and output an error dataset containing the error pictures to a client;
    a picture labeling module, configured to, for the recognition result of each picture to be labeled, if the emotion states predicted by the N emotion recognition models are the same and the prediction scores corresponding to the N emotion states are all greater than a preset sample threshold, take the emotion state and the mean of the N prediction scores as the labeling information of the picture and attach the labeling information to the corresponding picture to be labeled as a first standard sample;
    a sample storage module, configured to receive the labeled error dataset sent by the client, take the error pictures in the labeled error dataset as second standard samples, and save the first standard samples and the second standard samples into a preset standard sample library;
    a model update module, configured to use the first standard samples and the second standard samples to train the N preset emotion recognition models respectively, so as to update the N preset emotion recognition models; and
    a loop execution module, configured to take the face pictures in the dataset to be labeled other than the first standard samples and the second standard samples as new pictures to be labeled, and continue the step of using the N preset emotion recognition models to recognize the pictures to be labeled and obtain their recognition results, until the error dataset is empty.
  7. The face sample picture labeling device according to claim 6, characterized in that the face sample picture labeling device further comprises:
    a picture crawling module, configured to use a preset crawler tool to obtain first face pictures;
    a picture augmentation module, configured to augment the first face pictures in a preset augmentation manner to obtain second face pictures; and
    a picture saving module, configured to save the first face pictures and the second face pictures into the preset dataset to be labeled.
  8. The face sample picture labeling device according to claim 6, characterized in that the face sample picture labeling device further comprises:
    a sample acquisition module, configured to obtain face sample pictures from the preset standard sample library;
    a sample processing module, configured to preprocess the face sample pictures; and
    a model training module, configured to use the preprocessed face sample pictures to train a residual neural network model, a dense convolutional neural network model and a Google convolutional neural network model, and take the trained residual neural network model, dense convolutional neural network model and Google convolutional neural network model as the preset emotion recognition models.
  9. The face sample picture labeling device according to claim 6, characterized in that the picture recognition module comprises:
    a feature extraction submodule, configured to, for each picture to be labeled, use the N preset emotion recognition models to perform feature value extraction on that picture, obtaining the feature data corresponding to each preset emotion recognition model;
    a data computation submodule, configured to, in each preset emotion recognition model, use m trained classifiers to perform similarity computation on the feature data, obtaining probability values for m emotion states of the picture to be labeled, wherein m is a positive integer and each classifier corresponds to one emotion state; and
    a data selection submodule, configured to take, from the m probability values, the emotion state with the largest probability value as the emotion state predicted by that emotion recognition model and that largest probability value as the prediction score corresponding to the emotion state, obtaining in total the emotion states predicted by the N emotion recognition models and the prediction scores corresponding to the N emotion states.
  10. The face sample picture labeling device according to any one of claims 6 to 9, characterized in that the data output module comprises:
    a first marking submodule, configured to check the recognition result of each picture to be labeled and, if at least two different emotion states exist among the emotion states predicted by the N emotion recognition models, mark the picture as a first error picture;
    a second marking submodule, configured to mark the picture as a second error picture if the emotion states predicted by the N emotion recognition models are the same and the prediction scores corresponding to the N emotion states are all smaller than a preset error threshold; and
    a data output submodule, configured to take the first error pictures and the second error pictures as the error dataset and output the error dataset to the client.
  11. Computer equipment, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, characterized in that the processor implements the following steps when executing the computer-readable instructions:
    obtaining face pictures from a preset dataset to be labeled as pictures to be labeled;
    using N preset emotion recognition models to recognize the pictures to be labeled and obtain recognition results of the pictures to be labeled, wherein N is a positive integer, and the recognition results comprise the emotion states predicted by the N emotion recognition models and the prediction scores corresponding to the N emotion states;
    for the recognition result of each picture to be labeled, if at least two different emotion states exist among the emotion states predicted by the N emotion recognition models, marking the picture as an error picture, and outputting an error dataset containing the error pictures to a client;
    for the recognition result of each picture to be labeled, if the emotion states predicted by the N emotion recognition models are the same and the prediction scores corresponding to the N emotion states are all greater than a preset sample threshold, taking the emotion state and the mean of the N prediction scores as the labeling information of the picture, and attaching the labeling information to the corresponding picture to be labeled as a first standard sample;
    receiving the labeled error dataset sent by the client, taking the error pictures in the labeled error dataset as second standard samples, and saving the first standard samples and the second standard samples into a preset standard sample library;
    using the first standard samples and the second standard samples to train the N preset emotion recognition models respectively, so as to update the N preset emotion recognition models; and
    taking the face pictures in the dataset to be labeled other than the first standard samples and the second standard samples as new pictures to be labeled, and continuing the step of using the N preset emotion recognition models to recognize the pictures to be labeled and obtain their recognition results, until the error dataset is empty.
  12. The computer equipment according to claim 11, characterized in that before obtaining the face pictures from the preset dataset to be labeled as pictures to be labeled, the processor further implements the following steps when executing the computer-readable instructions:
    using a preset crawler tool to obtain first face pictures;
    augmenting the first face pictures in a preset augmentation manner to obtain second face pictures; and
    saving the first face pictures and the second face pictures into the preset dataset to be labeled.
  13. The computer equipment according to claim 11, characterized in that before using the N preset emotion recognition models to recognize the pictures to be labeled and obtain their recognition results, the processor further implements the following steps when executing the computer-readable instructions:
    obtaining face sample pictures from the preset standard sample library;
    preprocessing the face sample pictures; and
    using the preprocessed face sample pictures to train a residual neural network model, a dense convolutional neural network model and a Google convolutional neural network model, and taking the trained residual neural network model, dense convolutional neural network model and Google convolutional neural network model as the preset emotion recognition models.
  14. The computer equipment according to claim 11, characterized in that using the N preset emotion recognition models to recognize the pictures to be labeled and obtain their recognition results comprises:
    for each picture to be labeled, using the N preset emotion recognition models to perform feature value extraction on that picture, obtaining the feature data corresponding to each preset emotion recognition model;
    in each preset emotion recognition model, using m trained classifiers to perform similarity computation on the feature data, obtaining probability values for m emotion states of the picture to be labeled, wherein m is a positive integer and each classifier corresponds to one emotion state; and
    from the m probability values, taking the emotion state with the largest probability value as the emotion state predicted by that emotion recognition model, and taking that largest probability value as the prediction score corresponding to the emotion state, obtaining in total the emotion states predicted by the N emotion recognition models and the prediction scores corresponding to the N emotion states.
  15. The computer equipment according to any one of claims 11 to 14, characterized in that, for the recognition result of each picture to be labeled, marking the picture as an error picture if at least two different emotion states exist among the emotion states predicted by the N emotion recognition models, and outputting the error dataset containing the error pictures to the client comprises:
    checking the recognition result of each picture to be labeled and, if at least two different emotion states exist among the emotion states predicted by the N emotion recognition models, marking the picture as a first error picture;
    if the emotion states predicted by the N emotion recognition models are the same and the prediction scores corresponding to the N emotion states are all smaller than a preset error threshold, marking the picture as a second error picture; and
    taking the first error pictures and the second error pictures as the error dataset, and outputting the error dataset to the client.
  16. One or more non-volatile readable storage media storing computer-readable instructions, characterized in that when the computer-readable instructions are executed by one or more processors, the one or more processors are caused to perform the following steps:
    obtaining face pictures from a preset dataset to be labeled as pictures to be labeled;
    using N preset emotion recognition models to recognize the pictures to be labeled and obtain recognition results of the pictures to be labeled, wherein N is a positive integer, and the recognition results comprise the emotion states predicted by the N emotion recognition models and the prediction scores corresponding to the N emotion states;
    for the recognition result of each picture to be labeled, if at least two different emotion states exist among the emotion states predicted by the N emotion recognition models, marking the picture as an error picture, and outputting an error dataset containing the error pictures to a client;
    for the recognition result of each picture to be labeled, if the emotion states predicted by the N emotion recognition models are the same and the prediction scores corresponding to the N emotion states are all greater than a preset sample threshold, taking the emotion state and the mean of the N prediction scores as the labeling information of the picture, and attaching the labeling information to the corresponding picture to be labeled as a first standard sample;
    receiving the labeled error dataset sent by the client, taking the error pictures in the labeled error dataset as second standard samples, and saving the first standard samples and the second standard samples into a preset standard sample library;
    using the first standard samples and the second standard samples to train the N preset emotion recognition models respectively, so as to update the N preset emotion recognition models; and
    taking the face pictures in the dataset to be labeled other than the first standard samples and the second standard samples as new pictures to be labeled, and continuing the step of using the N preset emotion recognition models to recognize the pictures to be labeled and obtain their recognition results, until the error dataset is empty.
  17. The non-volatile readable storage media according to claim 16, characterized in that before obtaining the face pictures from the preset dataset to be labeled as pictures to be labeled, the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following steps:
    using a preset crawler tool to obtain first face pictures;
    augmenting the first face pictures in a preset augmentation manner to obtain second face pictures; and
    saving the first face pictures and the second face pictures into the preset dataset to be labeled.
  18. The non-volatile readable storage media according to claim 16, characterized in that before using the N preset emotion recognition models to recognize the pictures to be labeled and obtain their recognition results, the computer-readable instructions, when executed by the one or more processors, further cause the one or more processors to perform the following steps:
    obtaining face sample pictures from the preset standard sample library;
    preprocessing the face sample pictures; and
    using the preprocessed face sample pictures to train a residual neural network model, a dense convolutional neural network model and a Google convolutional neural network model, and taking the trained residual neural network model, dense convolutional neural network model and Google convolutional neural network model as the preset emotion recognition models.
  19. The non-volatile readable storage media according to claim 16, characterized in that using the N preset emotion recognition models to recognize the pictures to be labeled and obtain their recognition results comprises:
    for each picture to be labeled, using the N preset emotion recognition models to perform feature value extraction on that picture, obtaining the feature data corresponding to each preset emotion recognition model;
    in each preset emotion recognition model, using m trained classifiers to perform similarity computation on the feature data, obtaining probability values for m emotion states of the picture to be labeled, wherein m is a positive integer and each classifier corresponds to one emotion state; and
    from the m probability values, taking the emotion state with the largest probability value as the emotion state predicted by that emotion recognition model, and taking that largest probability value as the prediction score corresponding to the emotion state, obtaining in total the emotion states predicted by the N emotion recognition models and the prediction scores corresponding to the N emotion states.
  20. The non-volatile readable storage media according to any one of claims 16 to 19, characterized in that, for the recognition result of each picture to be labeled, marking the picture as an error picture if at least two different emotion states exist among the emotion states predicted by the N emotion recognition models, and outputting the error dataset containing the error pictures to the client comprises:
    checking the recognition result of each picture to be labeled and, if at least two different emotion states exist among the emotion states predicted by the N emotion recognition models, marking the picture as a first error picture;
    if the emotion states predicted by the N emotion recognition models are the same and the prediction scores corresponding to the N emotion states are all smaller than a preset error threshold, marking the picture as a second error picture; and
    taking the first error pictures and the second error pictures as the error dataset, and outputting the error dataset to the client.
PCT/CN2018/122728 2018-11-12 2018-12-21 Face sample picture labeling method and device, computer equipment and storage medium WO2020098074A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811339683.8 2018-11-12
CN201811339683.8A CN109583325B (zh) 2018-11-12 2018-11-12 Face sample picture labeling method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2020098074A1 true WO2020098074A1 (zh) 2020-05-22

Family

ID=65922238

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/122728 WO2020098074A1 (zh) 2018-11-12 2018-12-21 Face sample picture labeling method and device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN109583325B (zh)
WO (1) WO2020098074A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110060247B (zh) * 2019-04-18 2022-11-25 深圳市深视创新科技有限公司 Robust deep neural network learning method for handling sample labeling errors
CN110059828A (zh) * 2019-04-23 2019-07-26 杭州智趣智能信息技术有限公司 Training sample labeling method, device, equipment and medium
CN112347774A (zh) * 2019-08-06 2021-02-09 北京搜狗科技发展有限公司 Model determination method and device for user emotion recognition
CN110659625A (zh) * 2019-09-29 2020-01-07 深圳市商汤科技有限公司 Training method and device for object recognition network, electronic equipment and storage medium
CN111104846B (zh) * 2019-10-16 2022-08-30 平安科技(深圳)有限公司 Data detection method and device, computer equipment and storage medium
CN112805725A (zh) * 2020-01-06 2021-05-14 深圳市微蓝智能科技有限公司 Data processing method and device, and computer-readable storage medium
CN111913934A (zh) * 2020-07-08 2020-11-10 珠海大横琴科技发展有限公司 Target sample database construction method and device, and computer equipment
CN112132218B (zh) * 2020-09-23 2024-04-16 平安国际智慧城市科技股份有限公司 Image processing method and device, electronic equipment and storage medium
CN112022065A (zh) * 2020-09-24 2020-12-04 电子科技大学 Method and system for quickly locating the time point at which a capsule enters the duodenum
CN113221627B (zh) * 2021-03-08 2022-05-10 广州大学 Method, system, device and medium for constructing a facial genetic feature classification dataset
CN113763348A (zh) * 2021-09-02 2021-12-07 北京格灵深瞳信息技术股份有限公司 Image quality determination method and device, electronic equipment and storage medium
CN115114916A (zh) * 2022-05-27 2022-09-27 中国人民财产保险股份有限公司 Analysis method and device for user feedback data, and computer equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103605667A (zh) * 2013-10-28 2014-02-26 中国计量学院 Automatic image annotation algorithm
US20180027307A1 (en) * 2016-07-25 2018-01-25 Yahoo!, Inc. Emotional reaction sharing
CN107633203A (zh) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 Facial emotion recognition method, device and storage medium
EP3367296A1 (en) * 2017-02-28 2018-08-29 Fujitsu Limited A computer-implemented method of identifying a perforated face in a geometrical three-dimensional model

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103793697B (zh) * 2014-02-17 2018-05-01 北京旷视科技有限公司 Identity labeling method for face images and face identity recognition method
CN103824053B (zh) * 2014-02-17 2018-02-02 北京旷视科技有限公司 Gender labeling method for face images and face gender detection method
WO2018060993A1 (en) * 2016-09-27 2018-04-05 Faception Ltd. Method and system for personality-weighted emotion analysis

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111768228A (zh) * 2020-06-19 2020-10-13 京东数字科技控股有限公司 Method, device, equipment and storage medium for verifying recognition accuracy of advertising signs
CN111985298A (zh) * 2020-06-28 2020-11-24 百度在线网络技术(北京)有限公司 Face recognition sample collection method and device
CN111882034A (zh) * 2020-07-20 2020-11-03 北京市商汤科技开发有限公司 Neural network processing and face recognition method, device, equipment and storage medium
CN113971773A (zh) * 2020-07-24 2022-01-25 阿里巴巴集团控股有限公司 Data processing method and system
CN112183197A (zh) * 2020-08-21 2021-01-05 深圳追一科技有限公司 Digital-human-based working state determination method, device and storage medium
CN112381059A (zh) * 2020-12-02 2021-02-19 武汉光庭信息技术股份有限公司 Labeling method and device for target detection
CN112633392A (zh) * 2020-12-29 2021-04-09 博微太赫兹信息科技有限公司 Training data augmentation method for terahertz human-body security-check image target detection models
CN112700880A (zh) * 2020-12-31 2021-04-23 杭州依图医疗技术有限公司 Optimization method, training method, model, processing device and storage medium
CN112989934A (zh) * 2021-02-05 2021-06-18 方战领 Video analysis method, device and system
CN112989934B (zh) * 2021-02-05 2024-05-24 方战领 Video analysis method, device and system
WO2023097639A1 (zh) * 2021-12-03 2023-06-08 宁德时代新能源科技股份有限公司 Data labeling method and system for image segmentation, and image segmentation device
CN114898418A (zh) * 2022-03-24 2022-08-12 合肥工业大学 Complex emotion detection method and system based on a ring model
CN117542106A (zh) * 2024-01-10 2024-02-09 成都同步新创科技股份有限公司 Static face detection and data exclusion method, device and storage medium
CN117542106B (zh) * 2024-01-10 2024-04-05 成都同步新创科技股份有限公司 Static face detection and data exclusion method, device and storage medium

Also Published As

Publication number Publication date
CN109583325A (zh) 2019-04-05
CN109583325B (zh) 2023-06-27

Similar Documents

Publication Publication Date Title
WO2020098074A1 (zh) Face sample picture labeling method and device, computer equipment and storage medium
CN109635838B (zh) Face sample picture labeling method and device, computer equipment and storage medium
CN110909803B (zh) Image recognition model training method and device, and computer-readable storage medium
US11954139B2 (en) Deep document processing with self-supervised learning
WO2019033525A1 (zh) AU feature recognition method, device and storage medium
WO2018153265A1 (zh) Keyword extraction method, computer equipment and storage medium
WO2020147395A1 (zh) Emotion-based text classification processing method, device and computer equipment
WO2019232843A1 (zh) Handwriting model training and handwritten image recognition method, device, equipment and medium
US8873840B2 (en) Reducing false detection rate using local pattern based post-filter
WO2020024395A1 (zh) Fatigue driving detection method and device, computer equipment and storage medium
WO2019033571A1 (zh) Facial feature point detection method, device and storage medium
WO2020164278A1 (zh) Image processing method and device, electronic equipment and readable storage medium
US20170185913A1 (en) System and method for comparing training data with test data
US20210390370A1 (en) Data processing method and apparatus, storage medium and electronic device
US20190311194A1 (en) Character recognition using hierarchical classification
WO2022035942A1 (en) Systems and methods for machine learning-based document classification
US11600088B2 (en) Utilizing machine learning and image filtering techniques to detect and analyze handwritten text
CN112966088B (zh) Unknown intent recognition method, device, equipment and storage medium
KR102370910B1 (ko) Apparatus and method for few-shot image classification based on deep learning
CN107330387B (zh) Pedestrian detection method based on image data
US20210174104A1 (en) Finger vein comparison method, computer equipment, and storage medium
JP2019153293A (ja) Processing text images with line-recognition max-min pooling for OCR systems employing artificial neural networks
CN111126347A (zh) Human eye state recognition method, device, terminal and readable storage medium
WO2020019457A1 (zh) User instruction matching method and device, computer equipment and storage medium
CN116484224A (zh) Training method, device, medium and equipment for multimodal pretrained models

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18940463

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 20.08.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18940463

Country of ref document: EP

Kind code of ref document: A1