CN109635838B - Face sample picture labeling method and device, computer equipment and storage medium - Google Patents

Face sample picture labeling method and device, computer equipment and storage medium

Info

Publication number: CN109635838B
Application number: CN201811339105.4A
Authority: CN (China)
Legal status: Active (granted)
Inventor: 盛建达
Current assignee: Ping An Technology Shenzhen Co Ltd
Other versions: CN109635838A (Chinese)

Classifications

    • G06F18/214 — Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/24 — Pattern recognition; classification techniques
    • G06V40/172 — Human faces; classification, e.g. identification
    • G06V40/174 — Human faces; facial expression recognition
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a face sample picture labeling method, an apparatus, a computer device and a storage medium. The method comprises: recognizing pictures to be labeled with a preset emotion recognition model and writing each recognition result into its picture as labeling information; obtaining the error data set among the pictures to be labeled and outputting it to a client; storing the corrected error data set returned by the client, together with the correctly recognized data set among the pictures to be labeled, as standard samples in a standard sample library; training the emotion recognition model with the standard samples to update it; and returning to the recognition and labeling step until the error data set among the pictures to be labeled is empty. The technical scheme of the invention automatically generates labeling information for face pictures and improves the labeling efficiency of face pictures, thereby improving the generation efficiency of a standard sample library for model training and testing.

Description

Face sample picture labeling method and device, computer equipment and storage medium
Technical Field
The present invention relates to the field of biometric recognition technologies, and in particular to a face sample picture labeling method and apparatus, a computer device, and a storage medium.
Background
Facial expression recognition is an important research direction in the field of artificial intelligence. Research on facial emotion recognition requires a large number of face emotion samples to support model training of an emotion recognition model; deep learning on such a large sample set improves the accuracy and robustness of the emotion recognition model.
At present, however, there are relatively few public data sets for facial emotion classification, so face pictures must be labeled manually, or specific face emotion samples must be collected by hand. Manual labeling of face pictures is time-consuming and requires substantial human resources, and manual collection of face emotion samples involves a heavy workload, so the collection efficiency of face emotion sample data sets is low. The number of manually collected samples is also limited and cannot adequately support model training of an emotion recognition model.
Disclosure of Invention
The embodiments of the present invention provide a face sample picture labeling method and apparatus, a computer device, and a storage medium, so as to solve the problem of low labeling efficiency of face emotion sample pictures.
A face sample picture labeling method comprises the following steps:
acquiring face pictures from a preset face picture set as pictures to be labeled;
recognizing the pictures to be labeled by using a preset emotion recognition model, and writing the recognition result of each picture to be labeled into that picture as labeling information, so as to obtain the labeling information corresponding to each face picture, wherein the recognition result comprises the emotional state of the picture to be labeled and the prediction score of that emotional state;
acquiring the face pictures whose prediction scores are smaller than a preset error threshold, forming an error data set from the acquired face pictures, and outputting the error data set to a client so that a user corrects the labeling information of the face pictures in the error data set at the client;
receiving the corrected error data set sent by the client, and taking the face pictures in the corrected error data set as a first standard sample;
taking the face pictures whose prediction scores are larger than a preset sample threshold as a second standard sample, and storing the first standard sample and the second standard sample into a preset standard sample library;
training the preset emotion recognition model by using the first standard sample and the second standard sample to update the preset emotion recognition model; and
taking the face pictures in the face picture set other than the first standard sample and the second standard sample as new pictures to be labeled, and returning to the step of recognizing the pictures to be labeled by using the preset emotion recognition model and writing the recognition results into the corresponding pictures as labeling information, until the error data set is empty.
A face sample picture labeling apparatus comprises:
a sample picture acquisition module, configured to acquire face pictures from a preset face picture set as pictures to be labeled;
a sample picture labeling module, configured to recognize the pictures to be labeled by using a preset emotion recognition model, and to write the recognition result of each picture to be labeled into that picture as labeling information, so as to obtain the labeling information corresponding to each face picture, wherein the recognition result comprises the emotional state of the picture to be labeled and the prediction score of that emotional state;
an error data output module, configured to acquire the face pictures whose prediction scores are smaller than a preset error threshold, form an error data set from the acquired face pictures, and output the error data set to a client so that a user corrects the labeling information of the face pictures in the error data set at the client;
a correction data receiving module, configured to receive the corrected error data set sent by the client and take the face pictures in the corrected error data set as a first standard sample;
a standard sample storage module, configured to take the face pictures whose prediction scores are larger than a preset sample threshold as a second standard sample, and store the first standard sample and the second standard sample into a preset standard sample library;
a model updating module, configured to train the preset emotion recognition model by using the first standard sample and the second standard sample, so as to update the preset emotion recognition model; and
a loop execution module, configured to take the face pictures in the face picture set other than the first standard sample and the second standard sample as new pictures to be labeled, and to return to recognizing the pictures to be labeled by using the preset emotion recognition model and writing the recognition results into the corresponding pictures as labeling information, until the error data set is empty.
A computer device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above face sample picture labeling method when executing the computer program.
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the above face sample picture labeling method.
In the face sample picture labeling method and apparatus, computer device and storage medium described above, the pictures to be labeled are recognized with a preset emotion recognition model and each recognition result is written into its picture as labeling information; the error data set among the pictures to be labeled is obtained and output to the client so that the user corrects its labeling information; the corrected error data set and the correctly recognized data set among the pictures to be labeled are stored as standard samples in a standard sample library; the emotion recognition model is incrementally trained with the standard samples in the standard sample library, which updates the preset emotion recognition model and improves its recognition accuracy on the labeling information of pictures to be labeled; and the method returns to the step of recognizing and labeling the pictures to be labeled with the preset emotion recognition model until the error data set among the face pictures is empty. By using the emotion recognition model to recognize and label face pictures, the corresponding labeling information is generated automatically for face sample pictures, which saves labor cost, improves the labeling efficiency of face sample pictures, and thereby improves the generation efficiency of a standard sample library for model training and testing.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of an application environment of a face sample image labeling method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a face sample image labeling method according to an embodiment of the present invention;
FIG. 3 is a flowchart of constructing the emotion recognition model in the face sample picture labeling method according to an embodiment of the present invention;
FIG. 4 is a flowchart showing step S2 in FIG. 2;
FIG. 5 is a flowchart of augmenting face pictures in the face sample picture labeling method according to an embodiment of the present invention;
FIG. 6 is a flowchart showing step S3 in FIG. 2;
FIG. 7 is a schematic block diagram of a face sample picture labeling apparatus according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a computer device in accordance with an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The face sample picture labeling method provided by the embodiments of this application can be applied to the application environment shown in fig. 1. The application environment comprises a server side and a client side connected through a network: the server side recognizes and labels face pictures and outputs the incorrectly recognized data to the client side; a user corrects the incorrectly recognized data at the client side; and the server side stores the corrected data obtained from the client side, together with the correctly recognized data, into a standard sample library. The client may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, or a portable wearable device, and the server side may be implemented by an independent server or by a server cluster formed by a plurality of servers. The embodiment of the present invention provides a face sample picture labeling method, which is applied to the server side.
In an embodiment, fig. 2 shows a flowchart of the face sample picture labeling method in this embodiment; the method is applied to the server side in fig. 1 and is used for recognizing and labeling face pictures. As shown in fig. 2, the face sample picture labeling method includes steps S1 to S7, which are described in detail as follows:
S1: acquiring face pictures from a preset face picture set as pictures to be labeled.
The preset face picture set is a preset storage space for collecting and storing face pictures. The face pictures may be crawled from public data sets on the network, or may be captured as face-containing frames from public videos; the specific acquisition mode may be set according to the actual situation and is not limited here.
Optionally, a crawler tool may be used to crawl face pictures from public data sets on the network. A crawler tool is a tool for acquiring face pictures, for example an Octopus crawler tool, a Tiger crawler tool, or a search crawler tool; it browses the contents at addresses where picture data is publicly stored, crawls the picture data corresponding to a preset keyword, and stores the crawled picture data in the preset face picture set, where the preset keyword is a keyword related to emotion or to faces.
For example, a crawler tool may crawl the picture data corresponding to the preset keyword "face" from Baidu Images, name the face pictures face_1.jpg, face_2.jpg, ..., face_x.jpg in the order of acquisition, and store them in the preset face picture set.
Specifically, the server side acquires face pictures from the preset face picture set as pictures to be labeled; these pictures need to be labeled before they can be used for training and testing a machine learning model.
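As an illustration of this collection step, the sketch below downloads candidate face pictures into a picture-set directory and names them face_1.jpg, face_2.jpg, ... in the order of acquisition, as described above. The requests-based download, the directory name and the image_urls input are illustrative assumptions; the patent does not prescribe a particular crawler implementation.

```python
import os
import requests

PICTURE_SET_DIR = "face_picture_set"  # assumed storage location of the picture set

def collect_face_pictures(image_urls):
    """Download candidate face pictures and store them as face_1.jpg, face_2.jpg, ..."""
    os.makedirs(PICTURE_SET_DIR, exist_ok=True)
    for i, url in enumerate(image_urls, start=1):
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        with open(os.path.join(PICTURE_SET_DIR, f"face_{i}.jpg"), "wb") as f:
            f.write(resp.content)
```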
S2: and identifying the picture to be marked by using a preset emotion identification model, marking the identification result of the picture to be marked as marking information in the corresponding picture to be marked, and obtaining marking information corresponding to each face picture, wherein the identification result comprises the emotion state of the picture to be marked and the predictive value of the emotion state.
Specifically, the preset emotion recognition model is a pre-trained model, and is used for recognizing an emotion state corresponding to a face in a face picture to be recognized, after the preset emotion recognition model is used for recognizing the picture to be marked, the emotion state of each picture to be marked and a predictive value of the emotion state can be obtained, the emotion states include but are not limited to happy, sad, fear, gas, surprise, aversion, calm and the like, the predictive value is used for representing the probability of the emotion state corresponding to the face in the face picture, and if the predictive value is larger, the probability that the face belongs to the emotion state in the face picture is larger.
Based on a preset emotion recognition model, recognizing each picture to be marked to obtain a recognition result of the picture to be marked, namely, the emotion state of each picture to be marked and the predictive value of the emotion state, and marking the recognition result of the picture to be marked as marking information in the corresponding picture to be marked, so that marking information corresponding to each face picture is obtained.
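As a minimal sketch of this step, the snippet below assumes the preset emotion recognition model exposes a scikit-learn-style predict_proba interface and that the seven emotional states are ordered as listed above; the names EMOTIONS and annotate, and the feature-vector input, are illustrative assumptions, not taken from the patent.

```python
import numpy as np

# Assumed ordering of the seven emotional states (illustrative, not from the patent).
EMOTIONS = ["happiness", "sadness", "fear", "anger", "surprise", "disgust", "calm"]

def annotate(model, feature_vector: np.ndarray) -> dict:
    """Recognize one picture to be labeled and build its labeling information:
    the emotional state with the largest probability and its prediction score."""
    probs = model.predict_proba(feature_vector.reshape(1, -1))[0]
    k = int(probs.argmax())
    return {"emotional_state": EMOTIONS[k], "prediction_score": float(probs[k])}
```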
S3: and acquiring face pictures with predictive values smaller than a preset error threshold, forming an error data set from the acquired face pictures, and outputting the error data set to a client so that a user corrects labeling information of the face pictures in the error data set at the client.
The preset error threshold is a threshold preset to distinguish whether the emotion state of the identified face picture is wrong, if the predictive value obtained by identification is smaller than the preset error threshold, the error threshold can be set to 0.5 or 0.6, and the specific error threshold can be set according to the actual situation, so that the method is not limited.
Specifically, the server detects labeling information of the face pictures, if the predictive value in the labeling information is smaller than a preset error threshold, the fact that errors exist in the identification of the face pictures is confirmed, the face pictures with the predictive value smaller than the preset error threshold are obtained to form an error data set, the error data set is sent to the client through a network, so that a user corrects the labeling information of the face pictures in the error data set at the client, correct information of emotion states corresponding to each face picture is input, and the error labeling information corresponding to the face pictures in the error data set is updated.
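A minimal sketch of this filtering, assuming the labeling information produced by the annotate helper above; the 0.5 threshold is one of the example values the text mentions.

```python
ERROR_THRESHOLD = 0.5  # example value from the text; 0.6 is equally plausible

def build_error_data_set(labeled_pictures):
    """labeled_pictures: list of (picture_name, labeling_info) pairs.
    Keep the pictures whose prediction score falls below the error threshold."""
    return [(name, info) for name, info in labeled_pictures
            if info["prediction_score"] < ERROR_THRESHOLD]
```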
S4: and receiving the corrected error data set sent by the client, and taking the face picture in the corrected error data set as a first standard sample.
Specifically, the client sends a corrected error data set to the server through the network, the error data set carries identification information for correcting data, the identification information is used for identifying that the sent data is the corrected error data set, the server receives the data sent by the client, if the data is detected to contain the identification information for correcting the data, the received data is the corrected error data set sent by the client, the face picture in the corrected error data set is used as a first standard sample, and the first standard sample contains labeling information of an emotion state corresponding to the face picture.
S5: and taking the face picture with the predictive value larger than the preset sample threshold value as a second standard sample, and storing the first standard sample and the second standard sample into a preset standard sample library.
The preset sample threshold is a threshold set in advance for selecting a face picture with correct recognition, if the predictive value obtained by recognition is greater than the preset sample threshold, the result of recognition of the face picture is correct, the sample threshold may be set to 0.9 or 0.95, and the specific sample threshold may be set according to the actual situation, which is not limited herein.
Specifically, the server detects labeling information of the face picture, if the prediction value in the labeling information is larger than a preset sample threshold, the face picture is confirmed to be correctly identified, the face picture with the prediction value larger than the preset sample threshold is obtained to serve as a second standard sample, the second standard sample contains labeling information of an emotion state corresponding to the face picture, the first standard sample and the second standard sample are stored in a preset standard sample library, the preset standard sample library is a database for storing standard samples, the standard samples are face sample pictures containing the labeling information, the face sample pictures are obtained after the labeling information is labeled in the face picture, and the machine learning model can learn the face sample pictures and the emotion states corresponding to the face sample pictures according to the labeling information in the face sample pictures.
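The sketch below illustrates steps S4 and S5 under the same assumptions as before: labeling information is a small dict, and the standard sample library is modeled as a directory of pictures plus JSON sidecar files. The threshold, directory name and helper names are illustrative.

```python
import json
import os
import shutil

SAMPLE_THRESHOLD = 0.9  # example value from the text; 0.95 is equally plausible
LIBRARY_DIR = "standard_sample_library"  # assumed on-disk layout of the library

def select_second_standard(labeled_pictures):
    """Correctly recognized pictures: prediction score above the sample threshold."""
    return [(path, info) for path, info in labeled_pictures
            if info["prediction_score"] > SAMPLE_THRESHOLD]

def store_standard_samples(first_standard, second_standard):
    """Persist each standard sample as the picture file plus its labeling info."""
    os.makedirs(LIBRARY_DIR, exist_ok=True)
    for path, info in first_standard + second_standard:
        shutil.copy(path, LIBRARY_DIR)
        sidecar = os.path.join(LIBRARY_DIR, os.path.basename(path) + ".json")
        with open(sidecar, "w", encoding="utf-8") as f:
            json.dump(info, f, ensure_ascii=False)
```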
S6: training the preset emotion recognition model by using the first standard sample and the second standard sample to update the preset emotion recognition model.
Specifically, the server side incrementally trains the preset emotion recognition model with the first standard sample and the second standard sample, thereby updating the model. Incremental training means model training that optimizes the existing model parameters of the preset emotion recognition model; it makes full use of the model's historical training results, reduces the time of subsequent model training, and avoids reprocessing the sample data trained before.
It can be understood that the more training samples there are, the higher the accuracy and robustness of the trained emotion recognition model. Incrementally training the preset emotion recognition model with correctly recognized standard samples lets the model learn new knowledge from the newly added standard samples while retaining the knowledge learned from earlier training samples, so more accurate model parameters are obtained and the recognition accuracy of the model is improved.
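A minimal sketch of incremental training, with one substitution worth flagging: scikit-learn's SVC (the SVM used elsewhere in this description) does not support incremental updates, so this sketch assumes a linear SVM trained with SGDClassifier, whose partial_fit updates the existing parameters with the new standard samples only. All names and the placeholder data are illustrative.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# SGDClassifier with hinge loss is a linear SVM that supports partial_fit,
# used here as an assumed stand-in so the model can be updated incrementally
# instead of being retrained from scratch.
ALL_CLASSES = np.arange(7)  # the seven assumed emotional states

def incremental_train(model: SGDClassifier, X_new, y_new) -> SGDClassifier:
    """Optimize the existing parameters with the new standard samples only."""
    model.partial_fit(X_new, y_new, classes=ALL_CLASSES)
    return model

# Usage sketch with placeholder data:
model = SGDClassifier(loss="hinge")
X_new = np.random.rand(20, 224 * 224)    # flattened preprocessed samples
y_new = np.random.randint(0, 7, size=20)
model = incremental_train(model, X_new, y_new)
```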
S7: and taking the face pictures except the first standard sample and the second standard sample in the face picture set as new pictures to be marked, continuously executing the identification of the pictures to be marked by using a preset emotion identification model, taking the identification result of the pictures to be marked as marking information, marking the marking information into the corresponding pictures to be marked, and obtaining marking information corresponding to each face picture until the error data set is empty.
Specifically, the server excludes the face picture corresponding to the first standard sample from the face picture set, deletes the face picture corresponding to the second standard sample, takes the remaining face picture in the face picture set as a new picture to be marked, and the remaining face picture may have a face picture with incorrect recognition or a face picture with correct recognition, so that the emotion recognition model with higher recognition accuracy is needed to be used for further distinguishing.
Further, the method includes the steps of returning to the preset emotion recognition model to recognize the pictures to be marked, marking the recognition results of the pictures to be marked as marking information in the corresponding pictures to be marked, and continuing to execute the steps of obtaining the marking information corresponding to each face picture until an error data set is empty, wherein the fact that the emotion recognition model does not recognize the face picture with the wrong recognition in the recognition results of the face picture set is indicated, and stopping the continuous recognition of the face picture by the emotion recognition model, so that marked standard samples are obtained and stored in a preset standard sample library for training and testing of the machine learning model.
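Putting steps S2 to S7 together, the loop below is a sketch of the whole procedure; it reuses the helpers from the earlier sketches. extract_features, send_to_client_for_correction and to_matrix are hypothetical helpers standing in for parts of the system the patent describes but this sketch does not implement (feature extraction, the client round-trip, and conversion of samples to a training matrix).

```python
def run_labeling_loop(model, picture_set):
    """Recognize -> collect errors -> correct -> store -> retrain, until the
    error data set is empty (steps S2 to S7)."""
    remaining = list(picture_set)
    while remaining:
        labeled = [(pic, annotate(model, extract_features(pic)))  # hypothetical feature extractor
                   for pic in remaining]
        errors = build_error_data_set(labeled)
        if not errors:
            break  # the error data set is empty: stop recognizing
        first = send_to_client_for_correction(errors)   # hypothetical client round-trip
        second = select_second_standard(labeled)
        store_standard_samples(first, second)
        X_new, y_new = to_matrix(first + second)        # hypothetical conversion helper
        model = incremental_train(model, X_new, y_new)
        done = {pic for pic, _ in first + second}
        remaining = [pic for pic in remaining if pic not in done]
    return model
```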
In the embodiment corresponding to fig. 2, the pictures to be labeled are recognized with a preset emotion recognition model and each recognition result is written into its picture as labeling information; the error data set among the pictures to be labeled is obtained and output to the client so that the user corrects its labeling information; the corrected error data set and the correctly recognized data set are stored as standard samples in a standard sample library; the emotion recognition model is incrementally trained with the standard samples in the standard sample library, which updates the preset emotion recognition model and improves its recognition accuracy on the labeling information of pictures to be labeled; and the method returns to the step of recognizing and labeling the pictures to be labeled with the preset emotion recognition model until the error data set among the face pictures is empty. By using the emotion recognition model to recognize and label face pictures, the corresponding labeling information is generated automatically, which saves labor cost, improves the labeling efficiency of face pictures, and thereby improves the generation efficiency of a standard sample library for model training and testing.
In an embodiment, as shown in fig. 3, before step S2, that is, before recognizing the pictures to be labeled with the preset emotion recognition model and writing the recognition results into the corresponding pictures as labeling information, the face sample picture labeling method further includes:
S11: obtaining face sample pictures from a preset standard sample library.
Specifically, the server side obtains face sample pictures from the preset standard sample library, which is a database for storing standard samples. A standard sample is a face sample picture containing labeling information; each face sample picture corresponds to one piece of labeling information, which describes the emotional state of the face in that picture. The emotional states of the face pictures include, but are not limited to, happiness, sadness, fear, anger, surprise, disgust and calm.
S12: and preprocessing the face sample picture.
Picture preprocessing transforms the size, color, shape and so on of a picture to form training samples of uniform specification, so that the subsequent model training can process the pictures more efficiently and the recognition accuracy of the machine learning model is improved.
Specifically, each face sample picture can be converted into a training sample of a preset uniform size, and the training sample is then denoised, grayed, binarized and so on, which removes noise information from the face sample picture, enhances the detectability of face-related information, and simplifies the image data.
For example, the training sample size may be preset to 224×224. For a face sample picture of size [1280, 720], the region of the face is detected with an existing face detection algorithm, the region where the face is located is cut out of the picture, the cut-out is scaled to a [224, 224] training sample, and the training sample is then denoised, grayed and binarized, thereby completing the preprocessing of the face sample picture.
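A minimal sketch of this preprocessing under stated assumptions: OpenCV's bundled Haar cascade stands in for the unnamed "existing face detection algorithm", and the denoising and binarization use fastNlMeansDenoising and Otsu thresholding as plausible concrete choices.

```python
import cv2

# OpenCV's bundled Haar cascade as a stand-in for the unnamed face detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess(picture_path: str):
    """Crop the face region, scale to 224x224, then gray/denoise/binarize."""
    img = cv2.imread(picture_path)                    # e.g. a [1280, 720] sample
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # graying
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    x, y, w, h = faces[0]                             # assume one face per sample
    face = cv2.resize(gray[y:y + h, x:x + w], (224, 224))
    face = cv2.fastNlMeansDenoising(face)             # denoising
    _, binary = cv2.threshold(face, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # binarization
    return binary
```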
S13: training the support vector machine model by using the preprocessed face sample picture to obtain a preset emotion recognition model.
Specifically, the preprocessed face sample pictures obtained in step S12 are used to train a support vector machine model: the preprocessed face sample pictures are input into the support vector machine model as training samples, so that the model performs picture recognition, classification and regression analysis on them, yielding a preset emotion recognition model that can classify and predict new sample data. The support vector machine (SVM) model is a linear classifier that performs well in classifying and recognizing text and images.
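A minimal sketch of step S13, assuming the preprocessed 224×224 samples are flattened into feature vectors; the placeholder data and the label encoding (0–6 for the seven emotional states) are illustrative, and probability=True is one way to obtain the prediction scores the later steps rely on.

```python
import numpy as np
from sklearn import svm

# Placeholder training data: 70 flattened 224x224 samples, 10 per emotional state.
X = np.random.rand(70, 224 * 224)
y = np.repeat(np.arange(7), 10)  # labels 0..6 encode the seven emotional states

# probability=True makes predict_proba available, which the labeling step uses
# to obtain a prediction score for each emotional state.
emotion_model = svm.SVC(probability=True)
emotion_model.fit(X, y)

scores = emotion_model.predict_proba(X[:1])  # one probability per emotional state
```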
In the embodiment corresponding to fig. 3, preprocessing the face sample pictures in the standard sample library improves their quality, so the subsequent model training can process the pictures more efficiently, which raises both the training speed and the recognition accuracy of the machine learning model; the support vector machine model is then trained with the preprocessed face sample pictures to obtain the preset emotion recognition model, which can classify and predict new face pictures.
In an embodiment, this part describes in detail a specific implementation of step S2, namely recognizing the pictures to be labeled with the preset emotion recognition model and writing the recognition result of each picture into that picture as labeling information, so as to obtain the labeling information corresponding to each face picture.
Referring to fig. 4, fig. 4 shows a specific flowchart of step S2, which is described in detail below:
S201: extracting the feature values of the pictures to be labeled by using the preset emotion recognition model.
Specifically, the preset emotion recognition model extracts the feature values of each picture to be labeled, retaining the important features that are needed and discarding insignificant information, so as to obtain feature values usable for the subsequent emotional-state prediction. Feature value extraction here means using the emotion recognition model to extract the information belonging to facial features from the picture to be labeled, so as to highlight the representative characteristics of that picture.
S202: matching the feature values of each picture to be labeled against the n trained classifiers in the preset emotion recognition model to obtain the probability values of n emotional states of the picture, where n is a positive integer and each classifier corresponds to one emotional state.
The preset emotion recognition model contains n trained classifiers; each classifier corresponds to one emotional state and to the feature data of that emotional state. The emotional states covered by the classifiers can be trained according to actual needs, and the number n of classifiers can be set as required and is not specifically limited here. For example, n may be set to 7, covering seven emotional states such as happiness, sadness, fear, anger, surprise, disgust and calm.
Specifically, the feature values of each picture to be labeled are matched against the n trained classifiers of the preset emotion recognition model to obtain the probability that those feature values belong to the emotional state of each classifier, yielding n probability values in total; in other words, the emotion recognition model predicts, for each picture to be labeled, the probability that the picture belongs to each emotional state.
S203: and acquiring an emotion state corresponding to the maximum probability value from the probability values of the n emotion states as the emotion state of the picture to be annotated, and taking the maximum probability value as a predictive value of the emotion state of the picture to be annotated.
Specifically, according to the probability values of the n emotion states of the picture to be annotated obtained in step S202, the emotion state corresponding to the largest probability value is obtained from the probability values of the n emotion states to be used as the emotion state of the picture to be annotated, so as to represent the emotion state corresponding to the picture to be annotated, and the largest probability value is used as the predictive value of the emotion state of the picture to be annotated.
For example, Table 1 shows the prediction result obtained after a picture to be labeled is classified by the preset emotion recognition model. Classes 1–7 respectively represent the emotional states happiness, sadness, fear, anger, surprise, disgust and calm, and the probability under each class is the probability, predicted by the model, that the picture belongs to that class; for instance, the 55% under Class 1 is the probability, obtained by classification prediction, that the face in the picture is in the emotional state "happiness". From the prediction result, the largest probability among the classes is 55%, so "happiness" is taken as the emotional state of the picture to be labeled, and the largest probability value 55% is recorded as its prediction score, i.e. the prediction score is 0.55.
Table 1: Prediction result of a picture to be labeled

Picture to be labeled | Class 1 | Class 2 | Class 3 | Class 4 | Class 5 | Class 6 | Class 7
face_1.jpg            | 55%     | 15%     | 10%     | 10%     | 5%      | 5%      | 0%
S204: and obtaining the emotion states of the pictures to be marked and the predictive scores of the emotion states as marking information, marking the marking information into the corresponding pictures to be marked, and obtaining marking information corresponding to each face picture.
Specifically, the emotion states of the pictures to be marked and the predictive values of the emotion states are obtained to serve as marking information, the marking information is marked in the corresponding pictures to be marked, marking information corresponding to each face picture is obtained, and the face sample pictures are obtained after the marking information is marked in the face pictures, so that the machine learning model can perform machine learning on the face sample pictures and the emotion states corresponding to the face sample pictures according to the marking information in the face sample pictures.
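A minimal sketch of steps S201–S204 with n separate binary classifiers, one per emotional state, as the text describes; each classifier is assumed to be a scikit-learn-style binary model whose predict_proba returns the probability of its own emotional state in column 1. Run on the Table 1 probabilities, it would return "happiness" with a prediction score of 0.55.

```python
import numpy as np

EMOTIONS = ["happiness", "sadness", "fear", "anger", "surprise", "disgust", "calm"]

def label_with_n_classifiers(classifiers, feature_vector: np.ndarray) -> dict:
    """Match the feature values against n trained classifiers (one per emotional
    state) and keep the state with the largest probability (steps S202-S203)."""
    probs = np.array([clf.predict_proba(feature_vector.reshape(1, -1))[0, 1]
                      for clf in classifiers])
    k = int(probs.argmax())
    # Step S204: the emotional state and its prediction score become the labeling info.
    return {"emotional_state": EMOTIONS[k], "prediction_score": float(probs[k])}
```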
In the embodiment corresponding to fig. 4, the feature values of each picture to be labeled are extracted with the preset emotion recognition model; the emotional state of the picture and the prediction score of that state are obtained from the feature values; and the emotional state and prediction score are taken as labeling information and written into the corresponding picture, so as to obtain the labeling information corresponding to each face picture. Recognizing and predicting the pictures to be labeled automatically through the emotion recognition model yields their labeling information quickly and efficiently, which improves the labeling efficiency of face sample pictures and saves labor cost.
In an embodiment, as shown in fig. 5, after step S1 and before step S2, that is, after the face pictures in the preset face picture set are acquired as pictures to be labeled and before they are recognized and labeled with the preset emotion recognition model, the face sample picture labeling method further includes:
S14: augmenting the face pictures in a preset augmentation mode to obtain the augmented pictures corresponding to the face pictures.
Specifically, each face picture in the face picture set is augmented in a preset augmentation mode, which is a picture processing mode set in advance to increase the number of face pictures.
The augmentation mode may be cutting the face picture, for example randomly cropping a 256×256 face picture to obtain a 248×248 picture as an augmented picture; it may be processing the face picture by graying or global illumination correction; or several picture processing modes may be combined into one preset augmentation mode, for example first flipping the face picture and then applying local side-light-source correction to the flipped picture. The specific augmentation mode may be set according to the needs of the actual application and is not limited here.
Further, each augmented picture is named after the face picture it was generated from, in a preset naming mode, which may specifically be: the name of each augmented picture is "<face picture name>_<augmentation mode identification number>.jpg", so that the source face picture and the augmentation mode of an augmented picture can both be determined from its name.
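A minimal sketch of step S14 under stated assumptions: a 256×256 input, three example augmentation modes (random crop to 248×248, horizontal flip, graying), and the naming mode described above; the mode identification numbers 1–3 are illustrative.

```python
import os
import random
import cv2

def augment(picture_path: str) -> dict:
    """Generate augmented pictures for one face picture, each named
    '<face picture name>_<augmentation mode identification number>.jpg'."""
    img = cv2.imread(picture_path)          # assumed to be 256 x 256
    stem = os.path.splitext(os.path.basename(picture_path))[0]
    h, w = img.shape[:2]
    top, left = random.randint(0, h - 248), random.randint(0, w - 248)
    return {
        f"{stem}_1.jpg": img[top:top + 248, left:left + 248],    # mode 1: random crop
        f"{stem}_2.jpg": cv2.flip(img, 1),                       # mode 2: horizontal flip
        f"{stem}_3.jpg": cv2.cvtColor(img, cv2.COLOR_BGR2GRAY),  # mode 3: graying
    }
```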
S15: and adding the augmented picture serving as a picture to be marked into a face picture set.
It can be understood that augmenting the face pictures in the face picture set increases the number of face pictures in the set; the augmented pictures are added to the face picture set as pictures to be labeled, so that the preset emotion recognition model can recognize and label the face pictures and the augmented pictures together, and more face sample pictures are obtained to support model training of the emotion recognition model.
In the embodiment corresponding to fig. 5, augmenting the face pictures in a preset augmentation mode yields the augmented pictures corresponding to the face pictures and improves the efficiency of acquiring face pictures; adding the augmented pictures to the face picture set as pictures to be labeled greatly increases the number of face picture samples, so more face pictures are gathered to support model training of the emotion recognition model.
In an embodiment, this part describes in detail a specific implementation of step S3, namely acquiring the face pictures whose prediction scores are smaller than the preset error threshold, forming an error data set from them, and outputting the error data set to the client.
Referring to fig. 6, fig. 6 shows a specific flowchart of step S3, which is described in detail below:
S301: acquiring the face pictures whose prediction scores are smaller than a preset error threshold as first error pictures.
In this embodiment, the preset error threshold is a threshold set in advance to distinguish whether the recognized emotional state of a face picture is wrong: if the prediction score obtained by recognition is smaller than the preset error threshold, the recognition of that face picture is erroneous, and the picture is marked as a first error picture. The error threshold may be set to 0.5 or 0.6, and the specific value may be set according to the actual situation, which is not limited here.
S302: and if the emotional state of the augmented picture is different from the emotional state of the face picture corresponding to the augmented picture, acquiring the augmented picture mark as a second error picture.
Specifically, an augmentation picture and a face picture corresponding to the augmentation picture can be determined according to the name of the picture to be marked, if the emotional state of the augmentation picture is different from the emotional state of the face picture corresponding to the augmentation picture, that is, the face picture and the augmentation picture corresponding to the face picture are predicted to be different emotional states, the inaccuracy of the prediction of the augmentation picture is confirmed, and the augmentation picture is identified as a second error picture.
For example, in this embodiment, the file of the enhanced picture is named as "face picture name_enhanced mode identification number. Jpg", the file of the face picture is named as "face picture_number. Jpg", and the picture to be marked can be distinguished according to the file name, so as to determine whether the picture to be marked is the face picture or the enhanced picture.
It should be noted that, the face picture is augmented by a preset augmentation mode, and only the face picture is transformed in size, color, shape, etc. to increase the number of face pictures, so that the generated augmentation picture corresponding to the face picture does not change the emotional state in the original face picture.
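A minimal sketch of steps S301–S302, assuming labeling information as in the earlier sketches and a source_of mapping recovered from the file-naming convention just described (augmented picture name -> source face picture name); the function name and threshold value are illustrative.

```python
ERROR_THRESHOLD = 0.5  # example value from the text

def collect_error_pictures(face_info, aug_info, source_of):
    """face_info / aug_info map picture names to labeling information;
    source_of maps an augmented picture name to its source face picture name."""
    # First error pictures: prediction score below the error threshold.
    first = [name for name, info in {**face_info, **aug_info}.items()
             if info["prediction_score"] < ERROR_THRESHOLD]
    # Second error pictures: augmented pictures whose emotional state differs
    # from that of their source face picture.
    second = [name for name, info in aug_info.items()
              if info["emotional_state"]
              != face_info[source_of[name]]["emotional_state"]]
    return sorted(set(first) | set(second))  # the error data set
```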
S303: and taking the first error picture and the second error picture as error data sets, and outputting the error data sets to the client.
Specifically, the server takes the first error picture and the second error picture as error data sets, outputs the error data sets to the client, confirms and marks the error data sets by relevant staff, confirms the emotional state of the face in the face picture in the marked error picture set by the user, and marks the correct marking information correspondingly.
In the embodiment corresponding to fig. 6, a face picture with a predictive value smaller than a preset error threshold is obtained as a first error picture, if the emotional state of an augmented picture is different from that of a face picture corresponding to the augmented picture, the augmented picture is obtained as a second error picture, and meanwhile, the first error picture and the second error picture are output to a client as an error data set so as to manually correct the face picture with the incorrect identification, obtain a face sample picture with the correct identification, and be used for performing incremental training on an emotion recognition model, so that the identification accuracy of the emotion recognition model is improved, and a server can use the emotion recognition model with higher accuracy to identify and label the picture to be marked, thereby improving the labeling accuracy of the face picture.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In an embodiment, a face sample picture labeling apparatus is provided, and the apparatus corresponds one-to-one to the face sample picture labeling method of the above embodiments. As shown in fig. 7, the face sample picture labeling apparatus includes: a sample picture acquisition module 71, a sample picture labeling module 72, an error data output module 73, a correction data receiving module 74, a standard sample storage module 75, a model updating module 76 and a loop execution module 77. The functional modules are described in detail as follows:
a sample picture acquisition module 71, configured to acquire face pictures from a preset face picture set as pictures to be labeled;
a sample picture labeling module 72, configured to recognize the pictures to be labeled by using a preset emotion recognition model, and to write the recognition result of each picture to be labeled into that picture as labeling information, so as to obtain the labeling information corresponding to each face picture, wherein the recognition result comprises the emotional state of the picture to be labeled and the prediction score of that emotional state;
an error data output module 73, configured to acquire the face pictures whose prediction scores are smaller than a preset error threshold, form an error data set from the acquired face pictures, and output the error data set to the client so that the user corrects the labeling information of the face pictures in the error data set at the client;
a correction data receiving module 74, configured to receive the corrected error data set sent by the client and take the face pictures in the corrected error data set as a first standard sample;
a standard sample storage module 75, configured to take the face pictures whose prediction scores are larger than a preset sample threshold as a second standard sample, and store the first standard sample and the second standard sample into a preset standard sample library;
a model updating module 76, configured to train the preset emotion recognition model by using the first standard sample and the second standard sample, so as to update the preset emotion recognition model;
a loop execution module 77, configured to take the face pictures in the face picture set other than the first standard sample and the second standard sample as new pictures to be labeled, and to return to recognizing the pictures to be labeled by using the preset emotion recognition model and writing the recognition results into the corresponding pictures as labeling information, until the error data set is empty.
Further, the face sample picture labeling apparatus includes:
a training sample acquisition module 711, configured to obtain face sample pictures from a preset standard sample library;
a training sample processing module 712, configured to preprocess the face sample pictures;
a model training module 713, configured to train the support vector machine model with the preprocessed face sample pictures to obtain the preset emotion recognition model.
Further, the sample picture labeling module 72 includes:
a feature extraction submodule 7201, configured to extract the feature values of the pictures to be labeled by using the preset emotion recognition model;
a feature matching submodule 7202, configured to match the feature values of each picture to be labeled against the n trained classifiers in the preset emotion recognition model to obtain the probability values of n emotional states of the picture, where n is a positive integer and each classifier corresponds to one emotional state;
a result output submodule 7203, configured to take the emotional state with the largest of the n probability values as the emotional state of the picture to be labeled, and to take that largest probability value as the prediction score of the emotional state;
an information labeling submodule 7204, configured to take the emotional state of each picture to be labeled and the prediction score of that emotional state as labeling information, and to write the labeling information into the corresponding picture, so as to obtain the labeling information corresponding to each face picture.
Further, the face sample picture labeling apparatus includes:
a picture augmentation module 714, configured to augment the face pictures in a preset augmentation mode to obtain the augmented pictures corresponding to the face pictures;
a picture storage module 715, configured to add the augmented pictures to the face picture set as pictures to be labeled.
Further, the error data output module 73 includes:
a first data acquisition submodule 7301, configured to acquire the face pictures whose prediction scores are smaller than a preset error threshold as first error pictures;
a second data acquisition submodule 7302, configured to mark an augmented picture as a second error picture if the emotional state of the augmented picture differs from the emotional state of the face picture corresponding to it;
an error data output submodule 7303, configured to take the first error pictures and the second error pictures as the error data set and output the error data set to the client.
For specific limitations of the face sample picture labeling apparatus, reference may be made to the above limitations of the face sample picture labeling method, which are not repeated here. Each module of the face sample picture labeling apparatus may be implemented wholly or partly by software, by hardware, or by a combination of the two. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 8. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program when executed by the processor is used for realizing a face sample picture labeling method.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements steps in the face sample picture labeling method of the foregoing embodiment, such as steps S1 to S7 shown in fig. 2, when executing the computer program, or implements functions of each module of the face sample picture labeling apparatus of the foregoing embodiment, such as functions of modules 71 to 77 shown in fig. 7, when executing the computer program. In order to avoid repetition, a description thereof is omitted.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored, where the computer program when executed by a processor implements the steps in the face sample picture labeling method of the above embodiment, for example, steps S1 to S7 shown in fig. 2, or where the processor executes the computer program to implement the functions of each module of the face sample picture labeling device of the above embodiment, for example, the functions of modules 71 to 77 shown in fig. 7. In order to avoid repetition, a description thereof is omitted.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (8)

1. A face sample picture labeling method, characterized by comprising the following steps:
acquiring a face picture in a preset face picture set as a picture to be labeled;
identifying the picture to be labeled by using a preset emotion recognition model, and labeling the recognition result of the picture to be labeled, as labeling information, into the corresponding picture to be labeled to obtain the labeling information corresponding to each face picture, wherein the recognition result comprises an emotion state of the picture to be labeled and a prediction score of the emotion state;
wherein the identifying of the picture to be labeled by using the preset emotion recognition model and the labeling of the recognition result, as labeling information, into the corresponding picture to be labeled to obtain the labeling information corresponding to each face picture comprises:
extracting a feature value of the picture to be labeled by using the preset emotion recognition model;
matching the feature value of the picture to be labeled against the n trained classifiers in the preset emotion recognition model to obtain probability values of n emotion states of the picture to be labeled, wherein n is a positive integer and each classifier corresponds to one emotion state;
taking, among the n probability values, the emotion state corresponding to the maximum probability value as the emotion state of the picture to be labeled, and taking the maximum probability value as the prediction score of the emotion state of the picture to be labeled;
taking the emotion state of the picture to be labeled and the prediction score of the emotion state as the labeling information, and labeling the labeling information into the corresponding picture to be labeled to obtain the labeling information corresponding to each face picture;
acquiring the face pictures whose prediction scores are smaller than a preset error threshold, forming an error data set from the acquired face pictures, and outputting the error data set to a client so that a user corrects the labeling information of the face pictures in the error data set at the client;
receiving the corrected error data set sent by the client, and taking the face pictures in the corrected error data set as a first standard sample;
taking the face pictures whose prediction scores are larger than a preset sample threshold as a second standard sample, and storing the first standard sample and the second standard sample into a preset standard sample library;
training the preset emotion recognition model with the first standard sample and the second standard sample to update the preset emotion recognition model; and
taking the face pictures in the face picture set other than the first standard sample and the second standard sample as new pictures to be labeled, and repeating the step of identifying the pictures to be labeled by using the preset emotion recognition model and labeling the recognition results, as labeling information, into the corresponding pictures to be labeled to obtain the labeling information corresponding to each face picture, until the error data set is empty.
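By way of illustration and not limitation, the following Python sketch walks through the loop of claim 1: label, route low-score pictures to a client for correction, grow the standard sample library, retrain, and repeat until the error data set is empty. The library choice (scikit-learn), the thresholds, the feature dimensionality, and the simulated client (a ground-truth lookup) are assumptions, not part of the claim.

```python
# Illustrative sketch of the iterative labeling loop in claim 1.
# Thresholds, feature dimensionality, and the simulated client are assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))             # stand-in for extracted feature values
y_true = rng.integers(0, 3, size=200)      # ground truth for 3 emotion states

ERROR_THRESHOLD, SAMPLE_THRESHOLD = 0.5, 0.8    # preset thresholds (assumed values)

seed = rng.choice(200, size=30, replace=False)  # small initial standard sample library
lib_X, lib_y = X[seed].tolist(), y_true[seed].tolist()
unlabeled = [i for i in range(200) if i not in set(seed)]

model = SVC(probability=True)              # one probability value per emotion state
while unlabeled:
    model.fit(np.array(lib_X), np.array(lib_y))   # (re)train on the sample library
    probs = model.predict_proba(X[unlabeled])
    states, scores = probs.argmax(axis=1), probs.max(axis=1)

    # Pictures whose prediction score falls below the error threshold form the
    # error data set; the client's correction (simulated here by ground truth)
    # yields the first standard sample.
    errors = [k for k, sc in enumerate(scores) if sc < ERROR_THRESHOLD]
    if not errors:
        break                              # error data set is empty: stop
    first = [(unlabeled[k], int(y_true[unlabeled[k]])) for k in errors]
    # High-score pictures are accepted as labeled: the second standard sample.
    second = [(unlabeled[k], int(model.classes_[states[k]]))
              for k, sc in enumerate(scores) if sc > SAMPLE_THRESHOLD]

    for idx, label in first + second:      # store both samples in the library
        lib_X.append(X[idx])
        lib_y.append(label)
    done = {idx for idx, _ in first + second}
    unlabeled = [i for i in unlabeled if i not in done]   # remaining pictures
```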
2. The face sample picture labeling method according to claim 1, wherein before the identifying of the picture to be labeled using the preset emotion recognition model and the labeling of the recognition result of the picture to be labeled, as labeling information, into the corresponding picture to be labeled, the method further comprises:
acquiring face sample pictures from the preset standard sample library;
preprocessing the face sample pictures; and
training a support vector machine model with the preprocessed face sample pictures to obtain the preset emotion recognition model.
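A minimal sketch of the training path in claim 2, assuming scikit-learn for the support vector machine and Pillow for preprocessing; the directory layout (`<library_dir>/<emotion_state>/*.jpg`), image size, and normalization are illustrative choices the claim leaves open.

```python
# Sketch of claim 2: preprocess face sample pictures from the standard sample
# library and train a support vector machine as the emotion recognition model.
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.svm import SVC

def preprocess(path: Path) -> np.ndarray:
    # Grayscale, fixed 48x48 size, [0, 1] scaling: one plausible choice for the
    # preprocessing the claim leaves unspecified.
    img = Image.open(path).convert("L").resize((48, 48))
    return np.asarray(img, dtype=np.float32).ravel() / 255.0

def train_emotion_model(library_dir: str) -> SVC:
    X, y = [], []
    # Assumed layout: <library_dir>/<emotion_state>/<picture>.jpg
    for label_dir in sorted(Path(library_dir).iterdir()):
        if not label_dir.is_dir():
            continue
        for pic in label_dir.glob("*.jpg"):
            X.append(preprocess(pic))
            y.append(label_dir.name)
    model = SVC(probability=True)   # probability=True yields per-state scores
    model.fit(np.array(X), np.array(y))
    return model
```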
3. The face sample picture labeling method according to claim 1 or 2, wherein after the acquiring of the face picture in the preset face picture set as the picture to be labeled, and before the identifying of the picture to be labeled using the preset emotion recognition model and the labeling of the recognition result into the corresponding picture to be labeled to obtain the labeling information corresponding to each face picture, the method further comprises:
augmenting the face picture in a preset augmentation manner to obtain an augmented picture corresponding to the face picture; and
taking the augmented picture as a picture to be labeled and adding it to the face picture set.
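The "preset augmentation manner" is not fixed by claim 3; the sketch below shows three common transforms (mirror, small rotation, brightness shift) using Pillow, as one possible instantiation.

```python
# Sketch of the augmentation step in claim 3, using Pillow. The particular
# transforms are assumed examples of a "preset augmentation manner".
from PIL import Image, ImageEnhance

def augment(picture: Image.Image) -> list[Image.Image]:
    return [
        picture.transpose(Image.Transpose.FLIP_LEFT_RIGHT),  # horizontal mirror
        picture.rotate(10),                                  # small rotation
        ImageEnhance.Brightness(picture).enhance(1.3),       # brightness shift
    ]

# The augmented pictures join the face picture set as new pictures to be labeled:
#   picture_set.extend(augment(face_picture))
```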
4. The face sample picture labeling method according to claim 3, wherein the acquiring of the face pictures whose prediction scores are smaller than the preset error threshold, the forming of the error data set from the acquired face pictures, and the outputting of the error data set to the client comprise:
acquiring each face picture whose prediction score is smaller than the preset error threshold as a first error picture;
if the emotion state of the augmented picture differs from the emotion state of the face picture corresponding to the augmented picture, marking the augmented picture as a second error picture; and
taking the first error pictures and the second error pictures as the error data set, and outputting the error data set to the client.
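A sketch of how the error data set of claim 4 could be assembled from the two sources the claim names: low-score pictures, and augmented pictures whose predicted emotion state disagrees with that of their source picture. The `Labeled` record and its fields are hypothetical.

```python
# Sketch of claim 4: build the error data set from low-score pictures (first
# error pictures) and augmented pictures whose predicted emotion state differs
# from that of their source picture (second error pictures). The Labeled record
# and its fields are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Labeled:
    picture_id: str
    state: int                       # predicted emotion state
    score: float                     # prediction score of that state
    source_id: Optional[str] = None  # set when the picture is an augmentation

def build_error_set(labeled: list[Labeled], error_threshold: float) -> list[Labeled]:
    by_id = {item.picture_id: item for item in labeled}
    first = [i for i in labeled if i.score < error_threshold]
    second = [i for i in labeled
              if i.source_id is not None
              and i.source_id in by_id                # source picture still present
              and i.state != by_id[i.source_id].state]
    seen, error_set = set(), []
    for item in first + second:      # deduplicate, preserving order
        if item.picture_id not in seen:
            seen.add(item.picture_id)
            error_set.append(item)
    return error_set                 # to be output to the client
```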
5. A face sample picture labeling device for implementing the face sample picture labeling method according to any one of claims 1 to 4, characterized in that the face sample picture labeling device comprises:
a sample picture acquisition module, configured to acquire a face picture in a preset face picture set as a picture to be labeled;
a sample picture labeling module, configured to identify the picture to be labeled by using a preset emotion recognition model, and label the recognition result of the picture to be labeled, as labeling information, into the corresponding picture to be labeled to obtain the labeling information corresponding to each face picture, wherein the recognition result comprises an emotion state of the picture to be labeled and a prediction score of the emotion state;
wherein the sample picture labeling module comprises:
a feature extraction sub-module, configured to extract a feature value of the picture to be labeled by using the preset emotion recognition model;
a feature matching sub-module, configured to match the feature value of the picture to be labeled against the n trained classifiers in the preset emotion recognition model to obtain probability values of n emotion states of the picture to be labeled, wherein n is a positive integer and each classifier corresponds to one emotion state;
a result output sub-module, configured to take, among the n probability values, the emotion state corresponding to the maximum probability value as the emotion state of the picture to be labeled, and take the maximum probability value as the prediction score of the emotion state of the picture to be labeled;
an information labeling sub-module, configured to take the emotion state of the picture to be labeled and the prediction score of the emotion state as the labeling information, and label the labeling information into the corresponding picture to be labeled to obtain the labeling information corresponding to each face picture;
an error data output module, configured to acquire the face pictures whose prediction scores are smaller than a preset error threshold, form an error data set from the acquired face pictures, and output the error data set to a client so that a user corrects the labeling information of the face pictures in the error data set at the client;
a correction data receiving module, configured to receive the corrected error data set sent by the client and take the face pictures in the corrected error data set as a first standard sample;
a standard sample storage module, configured to take the face pictures whose prediction scores are larger than a preset sample threshold as a second standard sample, and store the first standard sample and the second standard sample into a preset standard sample library;
a model updating module, configured to train the preset emotion recognition model with the first standard sample and the second standard sample to update the preset emotion recognition model; and
a loop execution module, configured to take the face pictures in the face picture set other than the first standard sample and the second standard sample as new pictures to be labeled, and repeat the step of identifying the pictures to be labeled by using the preset emotion recognition model and labeling the recognition results, as labeling information, into the corresponding pictures to be labeled to obtain the labeling information corresponding to each face picture, until the error data set is empty.
6. The face sample picture labeling device according to claim 5, further comprising:
a training sample acquisition module, configured to acquire face sample pictures from the preset standard sample library;
a training sample processing module, configured to preprocess the face sample pictures; and
a model training module, configured to train a support vector machine model with the preprocessed face sample pictures to obtain the preset emotion recognition model.
7. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the face sample picture labeling method according to any one of claims 1 to 4.
8. A computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the face sample picture labeling method according to any one of claims 1 to 4.
CN201811339105.4A 2018-11-12 2018-11-12 Face sample picture labeling method and device, computer equipment and storage medium Active CN109635838B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811339105.4A CN109635838B (en) 2018-11-12 2018-11-12 Face sample picture labeling method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109635838A CN109635838A (en) 2019-04-16
CN109635838B true CN109635838B (en) 2023-07-11

Family

ID=66067695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811339105.4A Active CN109635838B (en) 2018-11-12 2018-11-12 Face sample picture labeling method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109635838B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210294A (en) * 2019-04-23 2019-09-06 平安科技(深圳)有限公司 Evaluation method, device, storage medium and the computer equipment of Optimized model
CN110188197B (en) * 2019-05-13 2021-09-28 北京一览群智数据科技有限责任公司 Active learning method and device for labeling platform
CN110298541B (en) * 2019-05-23 2024-04-09 中国平安人寿保险股份有限公司 Data processing method, device, computer equipment and storage medium
CN110263934B (en) * 2019-05-31 2021-08-06 中国信息通信研究院 Artificial intelligence data labeling method and device
CN110288007B (en) * 2019-06-05 2021-02-02 北京三快在线科技有限公司 Data labeling method and device and electronic equipment
CN110443270B (en) * 2019-06-18 2024-05-31 平安科技(深圳)有限公司 Chart positioning method, apparatus, computer device and computer readable storage medium
CN110363222B (en) * 2019-06-18 2024-05-31 中国平安财产保险股份有限公司 Picture labeling method and device for model training, computer equipment and storage medium
CN110245716B (en) * 2019-06-20 2021-05-14 杭州睿琪软件有限公司 Sample labeling auditing method and device
CN110378396A (en) * 2019-06-26 2019-10-25 北京百度网讯科技有限公司 Sample data mask method, device, computer equipment and storage medium
CN110610169B (en) * 2019-09-20 2023-12-15 腾讯科技(深圳)有限公司 Picture marking method and device, storage medium and electronic device
CN110826470A (en) * 2019-11-01 2020-02-21 复旦大学 Eye fundus image left and right eye identification method based on depth active learning
CN111061933A (en) * 2019-11-21 2020-04-24 深圳壹账通智能科技有限公司 Picture sample library construction method and device, readable storage medium and terminal equipment
CN112884158A (en) * 2019-11-29 2021-06-01 杭州海康威视数字技术股份有限公司 Training method, device and equipment for machine learning program
CN111178442B (en) * 2019-12-31 2023-05-12 北京容联易通信息技术有限公司 Service realization method for improving algorithm precision
CN111401158B (en) * 2020-03-03 2023-09-01 平安科技(深圳)有限公司 Difficult sample discovery method and device and computer equipment
CN112817839B (en) * 2020-09-08 2024-03-12 腾讯科技(深圳)有限公司 Artificial intelligence engine testing method, platform, terminal, computing device and storage medium
CN113221627B (en) * 2021-03-08 2022-05-10 广州大学 Method, system, device and medium for constructing face genetic feature classification data set
CN113704504B (en) * 2021-08-30 2023-09-19 平安银行股份有限公司 Emotion recognition method, device, equipment and storage medium based on chat record

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107633203A (en) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 Facial emotions recognition methods, device and storage medium
CN107808149A (en) * 2017-11-17 2018-03-16 腾讯数码(天津)有限公司 A kind of face information mask method, device and storage medium
CN107862292A (en) * 2017-11-15 2018-03-30 平安科技(深圳)有限公司 Personage's mood analysis method, device and storage medium
CN108510194A (en) * 2018-03-30 2018-09-07 平安科技(深圳)有限公司 Air control model training method, Risk Identification Method, device, equipment and medium
CN108537160A (en) * 2018-03-30 2018-09-14 平安科技(深圳)有限公司 Risk Identification Method, device, equipment based on micro- expression and medium
CN108563978A (en) * 2017-12-18 2018-09-21 深圳英飞拓科技股份有限公司 A kind of mood detection method and device


Similar Documents

Publication Publication Date Title
CN109635838B (en) Face sample picture labeling method and device, computer equipment and storage medium
CN109583325B (en) Face sample picture labeling method and device, computer equipment and storage medium
CN110909803B (en) Image recognition model training method and device and computer readable storage medium
CN108563722B (en) Industry classification method, system, computer device and storage medium for text information
CN109446302B (en) Question-answer data processing method and device based on machine learning and computer equipment
CN108536800B (en) Text classification method, system, computer device and storage medium
CN110909137A (en) Information pushing method and device based on man-machine interaction and computer equipment
EP3882814A1 (en) Utilizing machine learning models, position-based extraction, and automated data labeling to process image-based documents
CN110033018B (en) Graph similarity judging method and device and computer readable storage medium
CN113157863B (en) Question-answer data processing method, device, computer equipment and storage medium
US11636936B2 (en) Method and apparatus for verifying medical fact
US20210390370A1 (en) Data processing method and apparatus, storage medium and electronic device
CN106611015B (en) Label processing method and device
US20190311194A1 (en) Character recognition using hierarchical classification
CN110750523A (en) Data annotation method, system, computer equipment and storage medium
CN113536735A (en) Text marking method, system and storage medium based on keywords
US20220358658A1 (en) Semi Supervised Training from Coarse Labels of Image Segmentation
CN112580329B (en) Text noise data identification method, device, computer equipment and storage medium
CN110909768B (en) Method and device for acquiring marked data
CN113868419B (en) Text classification method, device, equipment and medium based on artificial intelligence
CN116796758A (en) Dialogue interaction method, dialogue interaction device, equipment and storage medium
CN116681961A (en) Weak supervision target detection method based on semi-supervision method and noise processing
CN116484224A (en) Training method, device, medium and equipment for multi-mode pre-training model
CN114743204A (en) Automatic question answering method, system, equipment and storage medium for table
CN113255368B (en) Method and device for emotion analysis of text data and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant