WO2021068325A1 - Facial action recognition model training method, facial action recognition method, apparatus, computer device and storage medium - Google Patents

Facial action recognition model training method, facial action recognition method, apparatus, computer device and storage medium

Info

Publication number
WO2021068325A1
WO2021068325A1 (PCT/CN2019/117027)
Authority
WO
WIPO (PCT)
Prior art keywords
facial
trained
neural network
image
facial motion
Prior art date
Application number
PCT/CN2019/117027
Other languages
English (en)
French (fr)
Inventor
罗琳耀
徐国强
邱寒
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2021068325A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Definitions

  • This application relates to a facial motion recognition model training method, facial motion recognition method, device, computer equipment and storage medium.
  • Face recognition is also known as facial recognition; facial action recognition refers to the ability to recognize the specific actions and expressions of a human face.
  • In the prior art, in order to obtain better recognition results, a trained neural network model is usually used as the facial action recognition model for facial action recognition.
  • However, the inventor realized that because the training data for traditional facial action recognition models is obtained from open sources, the amount of data is limited and most of the samples share the same, relatively uniform features, which reduces the recognition accuracy of the model.
  • According to various embodiments disclosed in this application, a facial action recognition model training method, a facial action recognition method, an apparatus, a computer device, and a storage medium are provided.
  • A facial action recognition model training method includes: acquiring a facial action recognition data set, the data set including a variety of facial action images; inputting each facial action image in the data set into a preset multi-task convolutional neural network, so as to perform face detection on the facial action images and obtain a variety of corresponding facial feature images; adding black blocks to the facial feature images based on preset rules, the obtained images being used as a training image set; and inputting the training image set into a preset neural network to be trained, so as to train the neural network, the trained neural network being used as a facial action recognition model.
  • A facial action recognition method includes: acquiring a facial action image to be recognized; and performing facial action recognition on the image by using a facial action recognition model trained by any one of the training methods described above, to obtain a recognition result.
  • A facial action recognition model training apparatus includes:
  • an acquisition module, used to acquire a facial action recognition data set, the data set including a variety of facial action images;
  • an annotation module, used to input each facial action image in the data set into a preset multi-task convolutional neural network, so as to perform face detection on the facial action images and obtain a variety of corresponding facial feature images;
  • an adding module, used to add black blocks to the facial feature images based on preset rules, the obtained images being used as a training image set; and
  • a training module, used to input the training image set into a preset neural network to be trained, so as to train the neural network, the trained neural network being used as a facial action recognition model.
  • A facial action recognition apparatus includes:
  • an image acquisition module, used to acquire a facial action image to be recognized; and
  • a recognition module, configured to perform facial action recognition on the image by using a facial action recognition model trained by any one of the training methods described above, to obtain a recognition result.
  • A computer device includes a memory and one or more processors. The memory stores computer-readable instructions which, when executed by the processors, implement the steps of the facial action recognition model training method and the steps of the facial action recognition method provided in any one of the embodiments of the present application.
  • One or more non-volatile storage media store computer-readable instructions which, when executed by one or more processors, cause the one or more processors to implement the steps of the facial action recognition model training method and the steps of the facial action recognition method provided in any one of the embodiments of the present application.
  • Fig. 1 is an application scenario diagram of a facial action recognition model training method according to one or more embodiments.
  • Fig. 2 is a schematic flowchart of a method for training a facial action recognition model according to one or more embodiments.
  • Fig. 3 is a schematic flowchart of a method for training a facial action recognition model in another embodiment.
  • Fig. 4 is a schematic flowchart of the steps of obtaining a facial feature image according to one or more embodiments.
  • Fig. 5 is a block diagram of a facial action recognition model training device according to one or more embodiments.
  • Fig. 6 is a block diagram of a computer device according to one or more embodiments.
  • the facial motion recognition model training method provided in this application can be applied to the application environment as shown in FIG. 1.
  • the terminal 102 communicates with the server 104 through the network.
  • the server 104 receives a model training instruction sent by the terminal 102, and the server 104 obtains a facial motion recognition data set in response to the model training instruction.
  • the facial motion image recognition data set includes a variety of facial motion images.
  • the server 104 inputs each facial motion image in the facial motion recognition data set to a preset multi-task convolutional neural network to use the multi-task convolutional neural network to perform face detection on the facial motion image to obtain multiple corresponding facial feature images.
  • the server 104 separately adds black blocks to the facial feature images based on preset rules, and the obtained images are used as a training image set.
  • the server 104 inputs the training image set into a preset neural network to be trained to train the neural network to be trained, and uses the trained neural network to be trained as a facial motion recognition model.
  • the terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
  • the server 104 may be implemented by an independent server or a server cluster composed of multiple servers.
  • a method for training a facial action recognition model is provided. Taking the method applied to the server in FIG. 1 as an example for description, the method includes the following steps:
  • Step S202: a facial action recognition data set is acquired, the data set including a variety of facial action images.
  • The facial action recognition data set is a collection of multiple facial action images. It can be understood that the images in the data set are of many different types, covering, for example, different expressions and actions, different genders, different ages, different appearances, different colors, and so on.
  • the facial motion images in the facial motion recognition data set can be manually pre-collected and stored in the database, or they can be obtained from an open source database using crawlers.
  • Specifically, when a user needs to train a facial action recognition model, a model training instruction is issued to the server through an operating terminal.
  • When the server receives the model training instruction, it responds by obtaining the pre-stored facial action recognition data set from the database.
  • Alternatively, the facial action recognition data set can be crawled from open sources by using the URL (Uniform Resource Locator) link carried in the model training instruction.
  • Step S204: input each facial action image in the facial action recognition data set into a preset multi-task convolutional neural network, so as to perform face detection on the facial action images by means of the multi-task convolutional neural network and obtain a variety of corresponding facial feature images.
  • A multi-task convolutional neural network (Mtcnn) is a neural network used for face detection.
  • Mtcnn can be divided into three parts, namely a three-layer network structure consisting of P-Net (Proposal Network), R-Net (Refine Network), and O-Net (Output Network).
  • The basic structure of P-Net is a fully connected neural network.
  • The basic structure of R-Net is a convolutional neural network. Compared with P-Net, R-Net adds a fully connected layer, so its filtering of input data is stricter.
  • O-Net is a more complex convolutional neural network, with one more convolutional layer than R-Net. O-Net differs from R-Net in that this layer of the structure recognizes the facial region with more supervision and regresses the facial feature points, finally outputting a facial feature image that includes the facial feature points.
  • After the server obtains the facial action recognition data set, it calls the preset multi-task convolutional neural network.
  • Each facial action image in the data set is input into the multi-task convolutional neural network and detected in turn by its P-Net, R-Net, and O-Net to obtain the corresponding facial feature image. That is, the image output by P-Net serves as the input of R-Net, and the image output by R-Net serves as the input of O-Net. A minimal sketch of this detection step is given below.
  • Because the facial action recognition data set includes many different types of facial action images, and each facial action image yields a corresponding facial feature image, the resulting facial feature images are likewise of many different types, and each facial feature image has a corresponding facial action image.
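  • As a purely illustrative sketch of this detection step (not the applicant's code), an off-the-shelf Mtcnn implementation such as the facenet-pytorch package can return the calibrated bounding box and the five facial landmarks; the package choice, file name, and API usage here are assumptions:

```python
# Hypothetical sketch: MTCNN face detection with five-point landmarks,
# assuming the facenet-pytorch package; not the patent's own implementation.
from facenet_pytorch import MTCNN
from PIL import Image

mtcnn = MTCNN(keep_all=False)  # P-Net -> R-Net -> O-Net pipeline
img = Image.open("facial_action.jpg")  # placeholder image path
# boxes: calibrated bounding boxes; landmarks: left eye, right eye, nose,
# left mouth corner, right mouth corner
boxes, probs, landmarks = mtcnn.detect(img, landmarks=True)
face = img.crop(tuple(boxes[0]))  # crop out the facial feature image region
```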
  • Step S206: black blocks are respectively added to the facial feature images based on preset rules, and the obtained images are used as a training image set.
  • A preset rule refers to a file storing rules that indicate how black blocks are to be added.
  • A black block is an occluding pattern whose color is black or gray, that is, whose gray value is between 0 and 50; when the gray value is 0, the black block is completely black.
  • Black blocks may take various shapes. For example, a black block may be triangular, circular, square, or irregular.
  • The training data set refers to the collection of facial feature images to which black blocks have been added, that is, it includes multiple facial feature images with added black blocks.
  • Alternatively, to increase data diversity, the training data set may include both facial feature images to which black blocks have been added and facial feature images for which it was determined not to add a black block, that is, both types of facial feature images, with and without black blocks.
  • In some embodiments, adding black blocks to the facial feature images based on preset rules to obtain the training image set includes: generating a corresponding random number for each facial feature image, and determining according to the random number whether a black block is to be added to the corresponding facial feature image; when it is determined according to the random number that a black block is to be added, determining black block information based on the random number and the corresponding facial feature image; and adding a black block to the corresponding facial feature image according to the black block information, the obtained images being used as the training image set.
  • the random number refers to a randomly generated value.
  • the range of the random number is 0 to 1.
  • the random number is used to determine whether to add a black block.
  • the black block information includes black block coverage position, coverage angle, and color.
  • the server randomly generates a random number ranging from 0 to 1.
  • the generated random number is compared with the preset random number, and when the generated random number is greater than or equal to the preset random number, it is determined to add a black block to the facial feature image, otherwise, no black block is added.
  • For example, if the facial feature images include image 1, image 2, and image 3, a random number 1 is generated to decide whether a black block is added to image 1.
  • After a black block has been added to image 1, or after it has been determined not to add one, a random number 2 is generated to decide whether a black block is added to image 2.
  • Similarly, the random number for image 3 is generated after image 2 has been given a black block or it has been determined that none is added.
  • When it is determined that a black block is to be added, the pixel dimensions of the facial feature image, a preset angle, and a preset gray value are acquired.
  • The random number is multiplied by the pixel dimensions of the facial feature image, by the preset angle, and by the preset gray value; the three resulting values determine the black block information, that is, the position, angle, and color of the black block.
  • The corresponding black block is then generated according to the black block information and overlaid on the facial feature image.
  • the preset random number is 0.7, that is, when the generated random number is greater than or equal to 0.7, it means adding black blocks, and when it is less than 0.7, it means not adding black blocks.
  • Because the facial feature images with black blocks have some of their features occluded, the diversity of the training data is guaranteed; training the neural network on both unoccluded and occluded images improves the robustness of the network and the accuracy of the model. A sketch of this occlusion scheme is given below.
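  • The following is a minimal sketch of this random occlusion scheme, assuming square black blocks, the 0.7 threshold, and numpy/Pillow; the block size, helper name, and drawing details are illustrative assumptions rather than the patent's implementation:

```python
# Hypothetical sketch of the random black-block occlusion described above.
import numpy as np
from PIL import Image

MAX_ANGLE, MAX_GRAY, THRESHOLD = 90, 50, 0.7  # assumed preset angle, gray value, threshold

def add_black_block(img: Image.Image) -> Image.Image:
    r = np.random.rand()  # random number in the range 0 to 1
    if r < THRESHOLD:
        return img  # below the preset random number: no black block is added
    w, h = img.size
    # position, angle, and color are all derived from the same random number
    x, y = int(r * w) % w, int(r * h) % h
    gray = int(r * MAX_GRAY)
    block = Image.new("RGB", (w // 4, h // 4), (gray, gray, gray))
    block = block.rotate(r * MAX_ANGLE, expand=True)  # coverage angle
    out = img.copy()
    out.paste(block, (x, y))
    return out
```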
  • Step S208: input the training image set into a preset neural network to be trained, so as to train the neural network, and use the trained neural network as a facial action recognition model.
  • the acquired training image set is input into a preset neural network in batches, so that the neural network learns the characteristics of each facial feature image in the training image set, thereby completing the training.
  • the neural network trained based on the training image set is used as the facial action recognition model.
  • the preset neural network model in this embodiment is the ResNet50 network structure.
  • With the above facial action recognition model training method, apparatus, computer device, and storage medium, after the facial action recognition data set is acquired, face detection is performed on the facial action images by the multi-task convolutional neural network to obtain facial feature images, thereby determining the image features of each facial image and realizing automatic annotation of image features. Black blocks are then added to the facial feature images based on preset rules, and the obtained images are used as the training image set, which ensures the diversity of the training samples. The training image set is input into the preset neural network to be trained to obtain the facial action recognition model, which improves the robustness of the neural network and the recognition accuracy of the model.
  • In another embodiment, before step S206, in which black blocks are added to the facial feature images based on preset rules and the obtained images are used as the training image set, the method further includes step S205: performing data augmentation on the facial feature images to obtain augmented facial feature images.
  • Specifically, before black blocks are added based on the preset rules, data augmentation is performed on the facial feature images.
  • Data augmentation refers to common basic methods, including but not limited to rotation to change the orientation of an image, flipping along the horizontal or vertical direction, proportional scaling up or down, contrast transformation, and so on. After a facial feature image is augmented, both the original facial feature image and its augmented counterpart are obtained. Corresponding random numbers are then generated for the original and the augmented facial feature images to determine whether black blocks need to be added, ensuring the diversity of the training data.
  • Because the amount of data available for facial action recognition models is small, data augmentation increases the amount of data for training the facial action recognition model, as in the sketch below.
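  • As an illustrative sketch only, the augmentations named above map naturally onto torchvision transforms; the specific parameter values are assumptions:

```python
# Hypothetical sketch of the data augmentations named above (torchvision).
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=30),                 # rotate to change orientation
    transforms.RandomHorizontalFlip(p=0.5),                # flip along the horizontal direction
    transforms.RandomVerticalFlip(p=0.5),                  # flip along the vertical direction
    transforms.RandomAffine(degrees=0, scale=(0.9, 1.1)),  # scale up or down proportionally
    transforms.ColorJitter(contrast=0.4),                  # contrast transformation
])
```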
  • In some embodiments, inputting each facial action image in the facial action recognition data set into a preset multi-task convolutional neural network, so as to perform face detection on the facial action images by means of the multi-task convolutional neural network and obtain a variety of corresponding facial feature images, includes the following steps:
  • Step S402: scale the facial action images in the facial action recognition data set and construct image pyramids.
  • An image pyramid is a pyramid constructed from images of different sizes. It can be understood that the image at the bottom layer is the largest and the image at the top layer is the smallest; that is, each image is larger than the image in the layer above it and smaller than the image in the layer below it, thereby forming an image pyramid.
  • Specifically, a facial action image is scaled, that is, reduced or enlarged, to obtain images of different sizes corresponding to that facial action image.
  • The images of different sizes are stacked and sorted from large to small to obtain the corresponding image pyramid.
  • Every facial action image in the facial action recognition data set is scaled to obtain a corresponding image pyramid; it can be understood that each facial action image has its own image pyramid (see the sketch below).
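  • A minimal sketch of pyramid construction, assuming OpenCV; the scale factor (the value conventionally used with Mtcnn) and minimum size are assumptions, not values stated in the patent:

```python
# Hypothetical sketch: build an image pyramid by repeated rescaling.
import cv2

def build_pyramid(img, factor=0.709, min_size=12):
    """Return the image at successively smaller scales, largest first."""
    pyramid = [img]
    while True:
        h, w = pyramid[-1].shape[:2]
        nh, nw = int(h * factor), int(w * factor)
        if min(nh, nw) < min_size:  # stop once a face would be undetectable
            break
        pyramid.append(cv2.resize(pyramid[-1], (nw, nh)))
    return pyramid
```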
  • Step S404: perform feature extraction and bounding box calibration on the image pyramid by using the multi-task convolutional neural network to obtain a first feature map.
  • Specifically, the P-Net in the multi-task convolutional neural network performs preliminary feature extraction and bounding box calibration on the image pyramid to obtain a feature map that includes multiple calibrated bounding boxes.
  • Bounding-box regression is applied to this feature map to adjust the boxes, and NMS (non-maximum suppression) is used to filter out most of them, that is, to merge overlapping boxes, thereby obtaining the first feature map.
  • The purpose of bounding-box regression is to fine-tune the bounding boxes predicted by the network so that they approach the true values.
  • NMS suppresses elements that are not maxima; with this method, boxes that overlap heavily and are calibrated relatively inaccurately can be removed quickly, as in the sketch below.
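  • For illustration, the textbook IoU-based NMS routine is sketched below under the usual conventions (boxes as [x1, y1, x2, y2] with confidence scores); this is the standard algorithm, not code from the patent:

```python
# Hypothetical sketch of standard non-maximum suppression (NMS).
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5):
    """Keep the highest-scoring boxes and drop boxes that overlap them heavily."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # intersection of the best box with every remaining box
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]  # suppress heavily overlapping boxes
    return keep
```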
  • Step S406: filter the calibrated bounding boxes in the first feature map to obtain a second feature map, and obtain a variety of corresponding facial feature images according to the second feature map.
  • Specifically, after a facial feature image passes through P-Net, the output first feature map still contains many prediction windows. The first feature map is therefore input into R-Net, which filters out most of its bounding boxes and determines candidate boxes. Likewise, bounding-box regression is further applied to the candidate boxes to adjust them, and NMS (non-maximum suppression) is used, thereby obtaining a second feature map that includes only one bounding box. In other words, R-Net is used to further optimize the prediction result. Finally, the second feature map output by R-Net is input into O-Net, which performs further feature extraction on it and finally outputs a facial feature image that includes the five calibrated facial feature points.
  • the five feature points are left eye, right eye, nose, left corner of mouth and right corner of mouth.
  • the feature image of the face including the feature points is obtained through detection by the multi-task convolutional neural network, and there is no need to manually label the feature points.
  • In some embodiments, inputting the training image set into the preset neural network to be trained, so as to train it and use the trained network as a facial action recognition model, specifically includes: initializing the network parameters of the neural network to be trained; inputting the training image set into the neural network in batches, the network being trained based on a preset first learning rate, to obtain the gradient values of its network parameters; updating the network parameters according to the gradient values to obtain a neural network with updated parameters; and using the updated network as the neural network to be trained and returning to the step of inputting the training image set in batches, until the loss function of the neural network converges, the network whose loss function has converged being used as the facial action recognition model.
  • The Xavier method is used to initialize the network parameters of each layer in the neural network to be trained.
  • Xavier is a method of neural network initialization.
  • The training image set is input into the neural network to be trained in batches; that is, the facial feature images in the training image set are fed into the network batch by batch.
  • In this embodiment, the batch size is 128. It can be understood that 128 facial feature images from the training image set are input as one batch into the neural network whose parameters have been initialized, and the feature layers and classification layer of the network forward-propagate the input facial feature images based on the preset first learning rate to obtain the corresponding output values.
  • The first learning rate is preset and fixed at 0.001; it can be understood that both the feature layers and the classification layer of the neural network to be trained use the first learning rate.
  • The neural network computes the loss value of this training round according to the preset loss function and the corresponding output values, performs back-propagation based on the loss value to obtain the gradient value of each network parameter, and updates the network parameters according to the obtained gradient values. The next batch of facial feature images is then input into the neural network with updated parameters, which is again trained based on the first learning rate.
  • That is, the network forward-propagates the newly input facial feature images based on the first learning rate, again obtains the corresponding output values, computes the loss value, and then performs back-propagation to update the network parameters once more.
  • These steps are repeated until the loss function converges. It can be understood that while the loss function has not converged, the network parameters have not reached their optimal values and training must continue; when the loss function converges, the neural network has reached its optimum and can be put into use as the facial action recognition model.
  • That is, if the loss function has not converged after the second batch, the third batch of facial feature images is input after the second parameter update, and so on until the loss function converges.
  • Convergence of the loss function can be understood as the computed loss value tending toward 0; the closer to 0, the closer the predicted values of the neural network are to the expected values, indicating that training is complete.
  • The preset network structure of the neural network to be trained is an optimized ResNet50 model.
  • It differs from the traditional ResNet50 model in that the last fully connected layer is replaced with a fully connected layer with 12 output channels.
  • The output values include the predicted values and the true labels, and the loss value is calculated from the predicted values, the true labels, and the loss function.
  • In one embodiment, the loss function is the binary cross-entropy loss function, and the optimizer used for training is the adam optimizer. A sketch of this training setup follows.
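  • The following PyTorch sketch illustrates this first training embodiment (ResNet50 with a 12-channel final layer, Xavier initialization, batches of 128, a fixed 0.001 learning rate, adam, and binary cross-entropy); the data loader and the 12-dimensional binary label format are assumptions, not the applicant's code:

```python
# Hypothetical sketch of the training setup described above (PyTorch).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50()
model.fc = nn.Linear(model.fc.in_features, 12)  # FC layer with 12 output channels

def xavier_init(m):  # Xavier initialization of each layer's weights
    if isinstance(m, (nn.Linear, nn.Conv2d)):
        nn.init.xavier_uniform_(m.weight)

model.apply(xavier_init)

criterion = nn.BCEWithLogitsLoss()  # binary cross-entropy over the 12 outputs
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # fixed first learning rate

# train_loader is assumed to yield batches of 128 facial feature images
# together with 12-dimensional binary action labels.
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels.float())
    loss.backward()   # back-propagate to obtain the gradient values
    optimizer.step()  # update the network parameters
```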
  • In some embodiments, inputting the training image set into the preset neural network to be trained, so as to train it and use the trained network as a facial action recognition model, specifically includes: initializing the network parameters of the neural network to be trained; and inputting the training image set into the network in batches, the network being trained based on a preset first learning rate and a preset second learning rate, the trained network being used as the facial action recognition model.
  • InsightFace is a face recognition model. In this embodiment, the network parameters of the feature layers of the neural network to be trained are initialized with the parameters of the InsightFace pre-trained model, and the network parameters of the classification layer are initialized using the Xavier initialization method.
  • It can be understood that the fully connected layer of the neural network uses Xavier initialization, while the network parameters of the other layers are initialized to the parameters of the InsightFace pre-trained model; that is, the parameters of the InsightFace pre-trained model are migrated into the neural network to be trained (see the sketch below).
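  • As an illustrative sketch of this parameter migration, pretrained weights can be loaded for every matching layer while the new classification layer is Xavier-initialized; the checkpoint file name and key layout are assumptions:

```python
# Hypothetical sketch: migrate pretrained face-recognition parameters into the
# feature layers and Xavier-initialize the new classification layer.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50()
model.fc = nn.Linear(model.fc.in_features, 12)

state = torch.load("insightface_resnet50.pth", map_location="cpu")  # assumed checkpoint
model.load_state_dict(state, strict=False)  # load matching keys, skip the new fc layer
nn.init.xavier_uniform_(model.fc.weight)    # Xavier initialization for the classification layer
nn.init.zeros_(model.fc.bias)
```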
  • After the network parameters are initialized, the training image set is input into the neural network to be trained in batches; that is, the facial feature images in the training image set are fed into the network batch by batch.
  • In this embodiment, the batch size is 128. It can be understood that 128 facial feature images from the training image set are input as one batch into the initialized neural network.
  • The network to be trained is trained in stages based on the preset first learning rate and second learning rate.
  • the first learning rate is 0.001
  • the second learning rate is 0.0001.
  • In this embodiment, the network structure of the neural network to be trained is likewise an optimized ResNet50 model; that is, the last fully connected layer of the traditional ResNet50 model is replaced with a fully connected layer with 12 output channels.
  • The optimizer is again the adam optimizer, and the loss function is the binary cross-entropy loss function.
  • In some embodiments, inputting the training image set into the neural network to be trained in batches, training the network based on a preset first learning rate and a preset second learning rate, and using the trained network as a facial action recognition model includes: inputting the training image set into the network in batches, the network performing a first training stage based on the first learning rate and the second learning rate, and using the network trained in the first stage as an initial facial action recognition model; and inputting the training image set into the initial facial action recognition model in batches, the initial model performing a second training stage based on the preset second learning rate, and using the initial model trained in the second stage as the facial action recognition model.
  • the first learning rate is the learning rate of the classification layer in the neural network to be trained, that is, the learning rate of the fully connected layer.
  • the second learning rate is the learning rate of the feature layer in the neural network to be trained, that is, the learning rate of other layers except the fully connected layer.
  • Specifically, the facial feature images in the training image set are input in batches to the neural network to be trained.
  • First, a first batch of facial feature images is selected from the training image set and input into the network.
  • The feature layers of the network forward-propagate the facial feature images based on the second learning rate, and the classification layer does so based on the first learning rate, yielding the corresponding output values.
  • The neural network computes the loss value of this training round according to the preset loss function and the corresponding output values, performs back-propagation based on the loss value to obtain the gradient value of each network parameter, and updates the network parameters according to the obtained gradient values. The next batch of facial feature images is then input into the network with updated parameters.
  • The classification layer again uses the first learning rate and the feature layers again use the second learning rate, and training is repeated; that is, the second batch of facial feature images is input into the network with updated parameters.
  • The feature layers, based on the second learning rate, and the classification layer, based on the first learning rate, forward-propagate the input facial feature images again; the corresponding output values and the loss value are obtained, and back-propagation updates the network parameters once more. These steps are repeated iteratively until the loss function converges, and the neural network obtained after convergence is used as the initial facial action recognition model.
  • After the initial facial action recognition model is obtained, the second training stage is carried out; that is, the facial feature images in the training image set are again input in batches to the initial facial action recognition model.
  • In this stage, both the feature layers and the fully connected layer of the initial model forward-propagate the facial feature images based on the second learning rate to obtain the corresponding output values.
  • The initial facial action recognition model computes the loss value of this training round according to the preset loss function and the corresponding output values, performs back-propagation based on the loss value to obtain the gradient value of each network parameter, and updates the network parameters of the initial model according to the obtained gradient values; this is repeated until its loss function converges, and the resulting model is used as the final facial action recognition model.
  • Because the network parameters of the feature layers are migrated from InsightFace and the feature layers and classification layer use different learning rates, the parameters of the model's feature extraction layers are biased toward face recognition parameters while the convergence of the classification layer is accelerated; a sketch of the two-rate setup follows.
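  • A minimal sketch of the staged, two-learning-rate optimization, assuming the PyTorch model from the earlier sketches (with fc as the classification layer); the parameter grouping is an assumption:

```python
# Hypothetical sketch: per-layer learning rates via optimizer parameter groups.
import torch

feature_params = [p for n, p in model.named_parameters() if not n.startswith("fc.")]

# Stage one: feature layers at the second learning rate (0.0001),
# classification layer at the first learning rate (0.001).
optimizer_stage1 = torch.optim.Adam([
    {"params": feature_params, "lr": 0.0001},
    {"params": model.fc.parameters(), "lr": 0.001},
])

# Stage two: every layer trains at the second learning rate (0.0001).
optimizer_stage2 = torch.optim.Adam(model.parameters(), lr=0.0001)
```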
  • a facial motion recognition method is provided. After the facial motion recognition model is trained by the facial motion recognition model training method, the facial motion recognition model can be used for facial motion recognition.
  • the facial motion image to be recognized is acquired, and the facial motion image to be recognized is input to the facial motion recognition model.
  • The facial action recognition model extracts features from the facial action image to be recognized and classifies them to determine the facial actions in the image, such as various facial expressions, opening the mouth, closing the eyes, and so on. A sketch of this inference step follows.
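  • For completeness, an illustrative inference sketch under the same assumptions (12 sigmoid outputs, one per facial action, with an assumed 0.5 decision threshold):

```python
# Hypothetical sketch: recognize facial actions with the trained model.
import torch

model.eval()
with torch.no_grad():
    logits = model(image_tensor)       # image_tensor: a preprocessed face image batch (assumed)
    probs = torch.sigmoid(logits)      # one probability per action channel
    actions = (probs > 0.5).nonzero()  # indices of the detected facial actions
```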
  • In some embodiments, a facial action recognition model training apparatus is provided, including an acquisition module 502, an annotation module 504, an adding module 506, and a training module 508. Specifically,
  • the acquiring module 502 is configured to acquire a facial motion recognition data set, and the facial motion image recognition data set includes a variety of facial motion images.
  • the annotation module 504 is used to input each facial action image in the facial action recognition data set into a preset multi-task convolutional neural network, so as to perform face detection on the facial action images by means of the multi-task convolutional neural network and obtain a variety of corresponding facial feature images.
  • the adding module 506 is configured to add black blocks to the facial feature images based on preset rules, and the obtained images are used as a training image set.
  • the training module 508 is configured to input the training image set into a preset neural network to be trained to train the neural network to be trained, and use the trained neural network to be trained as a facial action recognition model.
  • the adding module 506 is further configured to generate corresponding random numbers for the facial feature images, and determine whether to add black blocks to the corresponding facial feature images according to the random numbers; when determining to add black blocks according to the random numbers, based on the random numbers Determine the black block information with the corresponding facial feature image; add black blocks to the corresponding facial feature image according to the black block information, and the obtained image is used as the training image set.
  • In some embodiments, the facial action recognition model training apparatus further includes a data augmentation module, which is used to perform data augmentation on the facial feature images to obtain augmented facial feature images.
  • the labeling module 504 is also used to perform scaling processing on the facial motion images in the facial motion recognition data set, and construct an image pyramid; use a multi-task convolutional neural network to perform feature extraction and frame calibration on the image pyramid to obtain The first feature map; filtering the calibrated frame in the first feature map to obtain a second feature map, and obtain a variety of corresponding facial feature images according to the second feature map.
  • In some embodiments, the training module 508 is also used to initialize the network parameters of the neural network to be trained; input the training image set in batches to the network, the network being trained based on a preset first learning rate, to obtain the gradient values of the network parameters; update the network parameters according to the gradient values to obtain a neural network with updated parameters; and use the updated network as the neural network to be trained, returning to the step of inputting the training image set in batches until the loss function of the network converges, the network whose loss function has converged being used as the facial action recognition model.
  • In some embodiments, the training module 508 is also used to initialize the network parameters of the neural network to be trained; and input the training image set in batches to the network, the network being trained based on a preset first learning rate and a preset second learning rate, the trained network being used as the facial action recognition model.
  • In some embodiments, the training module 508 is also used to input the training image set in batches to the neural network to be trained, the network performing a first training stage based on the first learning rate and the second learning rate, the network trained in the first stage being used as the initial facial action recognition model; and to input the training image set in batches to the initial facial action recognition model, the initial model performing a second training stage based on the preset second learning rate, the initial model trained in the second stage being used as the facial action recognition model.
  • a facial motion recognition device which includes an image acquisition module and a recognition module. Specifically,
  • An image acquisition module for acquiring facial motion images to be recognized
  • the recognition module is configured to use the facial motion recognition model trained by the facial motion recognition model training method provided in any of the above embodiments to perform facial motion recognition on the facial motion image to be recognized to obtain a recognition result.
  • Each module in the facial motion recognition model training device and the facial motion recognition device can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 6.
  • the computer equipment includes a processor, a memory, a network interface, and a database connected through a system bus.
  • the processor of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, computer readable instructions, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer-readable instructions in the non-volatile storage medium.
  • the database of the computer equipment is used to store training data.
  • the network interface of the computer device is used to communicate with an external terminal through a network connection.
  • the computer-readable instructions are executed by the processor to realize a facial motion recognition model training method and a facial motion recognition method.
  • Fig. 6 is only a block diagram of part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution is applied.
  • A specific computer device may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
  • A computer device includes a memory and one or more processors.
  • The memory stores computer-readable instructions.
  • When the computer-readable instructions are executed by the processors, the steps of the facial action recognition model training method and the steps of the facial action recognition method provided in any one of the embodiments of the present application are implemented.
  • One or more non-volatile storage media store computer-readable instructions.
  • When the computer-readable instructions are executed by one or more processors, the one or more processors implement the steps of the facial action recognition model training method and the steps of the facial action recognition method provided in any one of the embodiments of the present application.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A facial action recognition model training method, comprising: acquiring a facial action recognition data set, the facial action recognition data set including a variety of facial action images; inputting each facial action image in the facial action recognition data set into a preset multi-task convolutional neural network, so as to perform face detection on the facial action images by means of the multi-task convolutional neural network to obtain a variety of corresponding facial feature images; adding black blocks to the facial feature images respectively based on preset rules, the obtained images being used as a training image set; and inputting the training image set into a preset neural network to be trained, so as to train the neural network to be trained, the trained neural network being used as a facial action recognition model.

Description

Facial action recognition model training method, facial action recognition method, apparatus, computer device, and storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to the Chinese patent application No. 2019109695494, filed with the Chinese Patent Office on October 12, 2019 and entitled "Facial action recognition model training method and facial action recognition method", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
This application relates to a facial action recognition model training method, a facial action recognition method, an apparatus, a computer device, and a storage medium.
BACKGROUND
Face recognition is also known as facial recognition; facial action recognition refers to recognizing the specific actions and expressions of a human face. In the prior art, in order to obtain better recognition results, a trained neural network model is usually used as a facial action recognition model to perform facial action recognition.
However, the inventor realized that because the training data for traditional facial action recognition models is obtained from open sources, the amount of data is limited and most of the samples share the same, relatively uniform features, which reduces the recognition accuracy of the model.
SUMMARY
According to various embodiments disclosed in this application, a facial action recognition model training method, a facial action recognition method, an apparatus, a computer device, and a storage medium are provided.
A facial action recognition model training method includes:
acquiring a facial action recognition data set, the facial action recognition data set including a variety of facial action images;
inputting each facial action image in the facial action recognition data set into a preset multi-task convolutional neural network, so as to perform face detection on the facial action images by means of the multi-task convolutional neural network to obtain a variety of corresponding facial feature images;
adding black blocks to the facial feature images respectively based on preset rules, the obtained images being used as a training image set; and
inputting the training image set into a preset neural network to be trained, so as to train the neural network to be trained, the trained neural network being used as a facial action recognition model.
A facial action recognition method includes:
acquiring a facial action image to be recognized; and
performing facial action recognition on the facial action image to be recognized by using a facial action recognition model trained by any one of the facial action recognition model training methods described above, to obtain a recognition result.
A facial action recognition model training apparatus includes:
an acquisition module, configured to acquire a facial action recognition data set, the facial action recognition data set including a variety of facial action images;
an annotation module, configured to input each facial action image in the facial action recognition data set into a preset multi-task convolutional neural network, so as to perform face detection on the facial action images by means of the multi-task convolutional neural network to obtain a variety of corresponding facial feature images;
an adding module, configured to add black blocks to the facial feature images respectively based on preset rules, the obtained images being used as a training image set; and
a training module, configured to input the training image set into a preset neural network to be trained, so as to train the neural network to be trained, the trained neural network being used as a facial action recognition model.
A facial action recognition apparatus includes:
an image acquisition module, configured to acquire a facial action image to be recognized; and
a recognition module, configured to perform facial action recognition on the facial action image to be recognized by using a facial action recognition model trained by any one of the facial action recognition model training methods described above, to obtain a recognition result.
A computer device includes a memory and one or more processors. The memory stores computer-readable instructions which, when executed by the processors, implement the steps of the facial action recognition model training method and the steps of the facial action recognition method provided in any one of the embodiments of this application.
One or more non-volatile storage media store computer-readable instructions which, when executed by one or more processors, cause the one or more processors to implement the steps of the facial action recognition model training method and the steps of the facial action recognition method provided in any one of the embodiments of this application.
The details of one or more embodiments of this application are set forth in the following drawings and description. Other features and advantages of this application will become apparent from the specification, the drawings, and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of this application more clearly, the drawings required in the embodiments are briefly introduced below. Obviously, the drawings in the following description are merely some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is an application scenario diagram of a facial action recognition model training method according to one or more embodiments.
Fig. 2 is a schematic flowchart of a facial action recognition model training method according to one or more embodiments.
Fig. 3 is a schematic flowchart of a facial action recognition model training method in another embodiment.
Fig. 4 is a schematic flowchart of the steps of obtaining facial feature images according to one or more embodiments.
Fig. 5 is a block diagram of a facial action recognition model training apparatus according to one or more embodiments.
Fig. 6 is a block diagram of a computer device according to one or more embodiments.
DETAILED DESCRIPTION
To make the technical solutions and advantages of this application clearer, this application is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain this application and are not intended to limit it.
The facial action recognition model training method provided in this application can be applied to the application environment shown in Fig. 1. The terminal 102 communicates with the server 104 through a network. The server 104 receives a model training instruction sent by the terminal 102 and, in response to the instruction, acquires a facial action recognition data set, which includes a variety of facial action images. The server 104 inputs each facial action image in the data set into a preset multi-task convolutional neural network, so as to perform face detection on the facial action images by means of the multi-task convolutional neural network and obtain a variety of corresponding facial feature images. The server 104 adds black blocks to the facial feature images respectively based on preset rules, and uses the obtained images as a training image set. The server 104 inputs the training image set into a preset neural network to be trained, trains the neural network, and uses the trained neural network as a facial action recognition model. The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smartphones, tablet computers, and portable wearable devices; the server 104 may be implemented by an independent server or by a server cluster composed of multiple servers.
In some embodiments, as shown in Fig. 2, a facial action recognition model training method is provided. Taking the application of the method to the server in Fig. 1 as an example, the method includes the following steps:
Step S202: acquire a facial action recognition data set, the facial action recognition data set including a variety of facial action images.
The facial action recognition data set is a collection of multiple facial action images. It can be understood that the facial action images in the data set are images of many different types, for example covering different expressions and actions, different genders, different ages, different appearances, different colors, and so on. The facial action images in the data set may be collected manually in advance and stored in a database, or they may be obtained from open-source databases by using a crawler.
Specifically, when a user needs to train a facial action recognition model, a model training instruction is issued to the server through an operating terminal. After receiving the model training instruction, the server responds to it by acquiring the pre-stored facial action recognition data set from the database, or by crawling the facial action recognition data set from open sources using the URL (Uniform Resource Locator) link carried in the model training instruction.
Step S204: input each facial action image in the facial action recognition data set into a preset multi-task convolutional neural network, so as to perform face detection on the facial action images by means of the multi-task convolutional neural network and obtain a variety of corresponding facial feature images.
A multi-task convolutional neural network (Mtcnn) is a neural network used for face detection. Mtcnn can be divided into three parts, namely a three-layer network structure consisting of P-Net (Proposal Network), R-Net (Refine Network), and O-Net (Output Network). The basic structure of P-Net is a fully connected neural network; the basic structure of R-Net is a convolutional neural network. Compared with P-Net, R-Net adds a fully connected layer, so its filtering of input data is stricter. O-Net is a more complex convolutional neural network with one more convolutional layer than R-Net. O-Net differs from R-Net in that this layer of the structure recognizes the facial region with more supervision and regresses the facial feature points, finally outputting a facial feature image that includes the facial feature points.
Specifically, after the server obtains the facial action recognition data set, it calls the preset multi-task convolutional neural network. Each facial action image in the data set is input into the multi-task convolutional neural network and detected in turn by its P-Net, R-Net, and O-Net to obtain the corresponding facial feature image; that is, the image output by P-Net serves as the input of R-Net, and the image output by R-Net serves as the input of O-Net. It can be understood that, because the facial action recognition data set includes many different types of facial action images and every facial action image yields a corresponding facial feature image, the resulting facial feature images are likewise of many different types, and each facial feature image has a corresponding facial action image.
Step S206: add black blocks to the facial feature images respectively based on preset rules, and use the obtained images as a training image set.
A preset rule refers to a file storing rules that indicate how black blocks are to be added. A black block is an occluding pattern whose color is black or gray, that is, whose gray value is between 0 and 50; when the gray value is 0, the black block is completely black. Black blocks may take various shapes; for example, a black block may be triangular, circular, square, or irregular. The training data set refers to the collection of facial feature images to which black blocks have been added, that is, it includes multiple facial feature images with added black blocks. Alternatively, to increase data diversity, the training data set may include both facial feature images with added black blocks and facial feature images for which it was determined not to add black blocks, that is, both types of facial feature images, with and without black blocks.
In some embodiments, adding black blocks to the facial feature images based on preset rules, the obtained images being used as the training image set, includes: generating a corresponding random number for each facial feature image and determining, according to the random number, whether a black block is to be added to the corresponding facial feature image; when it is determined according to the random number that a black block is to be added to the corresponding facial feature image, determining black block information based on the random number and the corresponding facial feature image; and adding a black block to the corresponding facial feature image according to the black block information, the obtained images being used as the training image set.
A random number is a randomly generated value in the range 0 to 1 and is used to decide whether a black block is added. The black block information includes the coverage position, coverage angle, and color of the black block.
Specifically, after obtaining a facial feature image, the server randomly generates a random number between 0 and 1 and compares it with a preset random number. When the generated random number is greater than or equal to the preset random number, it is determined that a black block is to be added to the facial feature image; otherwise, no black block is added. For example, if the facial feature images include image 1, image 2, and image 3, a random number 1 is generated to decide whether a black block is added to image 1. After a black block has been added to image 1, or after it has been determined not to add one, a random number 2 is generated to decide whether a black block is added to image 2; similarly, the random number for image 3 is generated after image 2 has been given a black block or it has been determined that none is added. Once it is determined that a black block is to be added to a facial feature image, the pixel dimensions of the image, a preset angle, and a preset gray value are obtained. The random number is multiplied by the pixel dimensions of the facial feature image, by the preset angle, and by the preset gray value, and the three resulting values determine the black block information, that is, the position, angle, and color of the black block. After the black block information is determined, the corresponding black block is generated according to it and overlaid on the facial feature image. The preset random number is 0.7; that is, when the generated random number is greater than or equal to 0.7 a black block is added, and when it is less than 0.7 no black block is added.
In this embodiment, the amount of data traditionally used to train facial actions is too small, and most data sets consist of unoccluded faces. In real application scenarios, however, faces are often partially occluded, for example by masks, hats, or hand movements, so a facial action recognition model trained only on unoccluded data differs from real application scenarios and its accuracy in practice is low. By randomly adding different black blocks to the various acquired facial feature images so that parts of their features are occluded, diverse training data is guaranteed; training the neural network with both unoccluded and occluded images improves the robustness of the network and the accuracy of the model.
Step S208: input the training image set into a preset neural network to be trained, so as to train the neural network to be trained, and use the trained neural network as a facial action recognition model.
Specifically, the acquired training image set is input into the preset neural network in batches, so that the neural network learns the features of each facial feature image in the training image set, thereby completing the training. The neural network trained on the training image set is used as the facial action recognition model. In this embodiment, the preset neural network model has the ResNet50 network structure.
With the above facial action recognition model training method, apparatus, computer device, and storage medium, after the facial action recognition data set is acquired, face detection is performed on the facial action images in the data set by the multi-task convolutional neural network to obtain facial feature images, thereby determining the image features of each facial image and realizing automatic annotation of image features. Black blocks are then added to the facial feature images based on preset rules, and the obtained images are used as the training image set, ensuring the diversity of the training samples. The training image set is input into the preset neural network to be trained to train it and obtain the facial action recognition model, which improves the robustness of the neural network and the recognition accuracy of the facial action recognition model.
In some embodiments, as shown in Fig. 3, another facial action recognition model training method is provided. Before step S206, in which black blocks are added to the facial feature images based on preset rules and the obtained images are used as the training image set, the method further includes step S205: performing data augmentation on the facial feature images to obtain augmented facial feature images.
Specifically, before black blocks are added to the facial feature images based on the preset rules, data augmentation is performed on the facial feature images. Data augmentation refers to common basic methods, including but not limited to rotation to change the orientation of an image, flipping along the horizontal or vertical direction, proportional scaling up or down, contrast transformation, and so on. That is, after a facial feature image is augmented, both the original facial feature image and its augmented counterpart are obtained. Corresponding random numbers are then generated for the original facial feature image and the augmented facial feature image to determine whether black blocks need to be added, ensuring the diversity of the training data. In this embodiment, because face recognition models have traditionally been trained and used far more often, the amount of data available for facial action recognition models is small, and data augmentation increases the amount of data for training the facial action recognition model.
In some embodiments, as shown in Fig. 4, inputting each facial action image in the facial action recognition data set into the preset multi-task convolutional neural network, so as to perform face detection on the facial action images by means of the multi-task convolutional neural network to obtain a variety of corresponding facial feature images, includes the following steps:
Step S402: scale the facial action images in the facial action recognition data set and construct image pyramids.
An image pyramid is a pyramid constructed from images of different sizes. It can be understood that the image at the bottom layer is the largest and the image at the top layer is the smallest; that is, each image is larger than the image in the layer above it and smaller than the image in the layer below it, thereby forming an image pyramid.
Specifically, a facial action image is scaled, that is, reduced or enlarged, to obtain images of different sizes corresponding to that facial action image. The images of different sizes are stacked and sorted from large to small to obtain the corresponding image pyramid. Every facial action image in the facial action recognition data set is scaled to obtain a corresponding image pyramid; it can be understood that each facial action image has its own image pyramid.
Step S404: perform feature extraction and bounding box calibration on the image pyramid by using the multi-task convolutional neural network to obtain a first feature map.
Specifically, the P-Net in the multi-task convolutional neural network performs preliminary feature extraction and bounding box calibration on the image pyramid to obtain a feature map that includes multiple calibrated bounding boxes. Bounding-box regression is applied to this feature map to adjust the boxes, and NMS (non-maximum suppression) is used to filter out most of them, that is, to merge overlapping boxes, thereby obtaining the first feature map. The purpose of bounding-box regression is to fine-tune the bounding boxes predicted by the network so that they approach the true values, while NMS suppresses elements that are not maxima and can quickly remove boxes that overlap heavily and are calibrated relatively inaccurately.
Step S406: filter the calibrated bounding boxes in the first feature map to obtain a second feature map, and obtain a variety of corresponding facial feature images according to the second feature map.
Specifically, after a facial feature image passes through P-Net, the output first feature map still contains many prediction windows. The first feature map is therefore input into R-Net, which filters out most of its bounding boxes and determines candidate boxes. Likewise, bounding-box regression is further applied to the candidate boxes to adjust them, and NMS (non-maximum suppression) is used, thereby obtaining a second feature map that includes only one bounding box; in other words, R-Net further optimizes the prediction result. Finally, the second feature map output by R-Net is input into O-Net, which performs further feature extraction on it and finally outputs a facial feature image that includes the five calibrated facial feature points. The five feature points are the left eye, the right eye, the nose, the left corner of the mouth, and the right corner of the mouth. In this embodiment, facial feature images including feature points are obtained through detection by the multi-task convolutional neural network, so there is no need to label the feature points manually.
In some embodiments, inputting the training image set into the preset neural network to be trained, training the neural network, and using the trained neural network as the facial action recognition model specifically includes: initializing the network parameters of the neural network to be trained; inputting the training image set into the neural network in batches, the neural network being trained based on a preset first learning rate, to obtain the gradient values of the network parameters of the neural network; updating the network parameters according to the gradient values to obtain a neural network with updated parameters; and using the neural network with updated parameters as the neural network to be trained and returning to the step of inputting the training image set into the neural network in batches, until the loss function of the neural network converges, the neural network whose loss function has converged being used as the facial action recognition model.
Specifically, the Xavier method is used to initialize the network parameters of each layer in the neural network to be trained; Xavier is a neural network initialization method. After the initial network parameters are determined, the training image set is input into the neural network in batches, that is, the facial feature images in the training image set are fed into the network batch by batch. In this embodiment the batch size is 128; it can be understood that 128 facial feature images from the training image set are input as one batch into the neural network whose parameters have been initialized, and the feature layers and classification layer of the network forward-propagate the input facial feature images based on the preset first learning rate to obtain the corresponding output values. The first learning rate is preset and fixed at 0.001; it can be understood that both the feature layers and the classification layer use the first learning rate. The neural network computes the loss value of this training round according to the preset loss function and the corresponding output values, performs back-propagation based on the loss value to obtain the gradient value of each network parameter, and updates the network parameters according to the obtained gradient values. The next batch of facial feature images is then input into the neural network with updated parameters, which is again trained based on the first learning rate: the second batch of facial feature images is forward-propagated, the corresponding output values are obtained, the loss value is computed, and back-propagation updates the network parameters again. The above steps are repeated iteratively until the loss function converges. It can be understood that while the loss function has not converged, the network parameters have not reached their optimal values and training must continue; once the loss function converges, the neural network has reached its optimum and can be put into use as the facial action recognition model. In other words, if the loss function has not yet converged after the second batch has been trained, the third batch of facial feature images is input after the second parameter update, and so on until the loss function converges. Convergence of the loss function can be understood as the loss value tending toward 0; the closer to 0, the closer the predicted values of the neural network are to the expected values, indicating that training is complete. The preset network structure of the neural network to be trained is an optimized ResNet50 model, which differs from the traditional ResNet50 model in that the last fully connected layer is replaced with a fully connected layer with 12 output channels. The output values include the predicted values and the true labels, and the loss value is computed from the predicted values, the true labels, and the loss function. In one embodiment, the loss function is the binary cross-entropy loss function, and the optimizer used for training is the adam optimizer.
In some embodiments, inputting the training image set into the preset neural network to be trained, training the neural network, and using the trained neural network as the facial action recognition model specifically includes: initializing the network parameters of the neural network to be trained; and inputting the training image set into the neural network in batches, the neural network being trained based on a preset first learning rate and a preset second learning rate, the trained neural network being used as the facial action recognition model.
Specifically, the network parameters of the neural network to be trained are initialized using InsightFace and the Xavier initialization method; InsightFace is a face recognition model. That is, the network parameters of the feature layers of the neural network in this embodiment are initialized with the parameters of the InsightFace pre-trained model, and the network parameters of the classification layer are initialized using the Xavier initialization method. It can be understood that the fully connected layer of the neural network uses Xavier initialization, while the parameters of the other layers are initialized to the parameters of the InsightFace pre-trained model, that is, the parameters of the InsightFace pre-trained model are migrated into the neural network to be trained. After the network parameters are initialized, the training image set is input into the neural network in batches; that is, the facial feature images in the training image set are fed into the network batch by batch. In this embodiment the batch size is 128; it can be understood that 128 facial feature images from the training image set are input as one batch into the initialized neural network. The network is trained in stages based on the preset first and second learning rates: the first learning rate is 0.001 and the second learning rate is 0.0001. In this embodiment, the network structure of the neural network to be trained is likewise an optimized ResNet50 model, that is, the last fully connected layer of the traditional ResNet50 model is replaced with a fully connected layer with 12 output channels. In one embodiment, the optimizer is again the adam optimizer, and the loss function is the binary cross-entropy loss function.
In some embodiments, inputting the training image set into the neural network to be trained in batches, training the neural network based on the preset first and second learning rates, and using the trained neural network as the facial action recognition model includes: inputting the training image set into the neural network in batches, the neural network performing a first training stage based on the first learning rate and the second learning rate, and using the neural network trained in the first stage as an initial facial action recognition model; and inputting the training image set into the initial facial action recognition model in batches, the initial model performing a second training stage based on the preset second learning rate, and using the initial model trained in the second stage as the facial action recognition model.
The first learning rate is the learning rate of the classification layer of the neural network to be trained, that is, of the fully connected layer. The second learning rate is the learning rate of the feature layers, that is, of all layers other than the fully connected layer.
Specifically, the facial feature images in the training image set are input into the neural network in batches. First, a first batch of facial feature images is selected from the training image set and input into the neural network; the feature layers forward-propagate the images based on the second learning rate and the classification layer based on the first learning rate, yielding the corresponding output values. The neural network computes the loss value of this training round according to the preset loss function and the corresponding output values, performs back-propagation based on the loss value to obtain the gradient value of each network parameter, and updates the network parameters accordingly. The next batch of facial feature images is then input into the network with updated parameters and training is repeated, with the classification layer again using the first learning rate and the feature layers the second learning rate. That is, the second batch of facial feature images is input into the updated network and forward-propagated in the same way, the corresponding output values and the loss value are obtained, and back-propagation updates the parameters again. These steps are repeated iteratively until the loss function converges, and the neural network obtained after convergence is used as the initial facial action recognition model.
After the initial facial action recognition model is obtained, the second training stage is performed. The facial feature images in the training image set are again input into the initial model in batches; both the feature layers and the fully connected layer of the initial model forward-propagate the images based on the second learning rate, yielding the corresponding output values. The initial model computes the loss value of this training round according to the preset loss function and the corresponding output values, performs back-propagation based on the loss value to obtain the gradient value of each network parameter, and updates the parameters of the initial model accordingly. Likewise, the next batch of facial feature images is input into the updated initial model, whose feature layers and classification layer are again trained based on the second learning rate: the second batch of images is input, forward-propagated based on the second learning rate, the corresponding output values and the loss value are obtained, and back-propagation updates the parameters of the initial model again. These steps are repeated iteratively until the loss function of the initial facial action recognition model converges, and the initial model obtained after convergence is used as the final facial action recognition model. In this embodiment, because the training data traditionally available for facial actions is scarce, model training is usually prone to overfitting and slow convergence. In this example the network parameters of the feature layers are migrated from InsightFace, and the feature layers and the classification layer use different learning rates, which not only biases the parameters of the model's feature extraction layers toward face recognition parameters but also accelerates the convergence of the classification layer.
In some embodiments, a facial action recognition method is provided. After the facial action recognition model has been trained by the facial action recognition model training method, it can be used to perform facial action recognition.
Specifically, a facial action image to be recognized is acquired and input into the facial action recognition model. The model extracts features from the image to be recognized and classifies them to determine the facial actions in the image, such as various facial expressions, opening the mouth, closing the eyes, and so on.
It should be understood that although the steps in the flowcharts of Figs. 2-4 are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 2-4 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and whose execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In some embodiments, as shown in Fig. 5, a facial action recognition model training apparatus is provided, including an acquisition module 502, an annotation module 504, an adding module 506, and a training module 508. Specifically,
the acquisition module 502 is configured to acquire a facial action recognition data set, the facial action recognition data set including a variety of facial action images;
the annotation module 504 is configured to input each facial action image in the facial action recognition data set into a preset multi-task convolutional neural network, so as to perform face detection on the facial action images by means of the multi-task convolutional neural network to obtain a variety of corresponding facial feature images;
the adding module 506 is configured to add black blocks to the facial feature images respectively based on preset rules, the obtained images being used as a training image set; and
the training module 508 is configured to input the training image set into a preset neural network to be trained, so as to train the neural network to be trained, the trained neural network being used as a facial action recognition model.
In some embodiments, the adding module 506 is further configured to generate a corresponding random number for each facial feature image and determine, according to the random number, whether a black block is to be added to the corresponding facial feature image; when it is determined according to the random number that a black block is to be added, determine black block information based on the random number and the corresponding facial feature image; and add a black block to the corresponding facial feature image according to the black block information, the obtained images being used as the training image set.
In some embodiments, the facial action recognition model training apparatus further includes a data augmentation module configured to perform data augmentation on the facial feature images to obtain augmented facial feature images.
In some embodiments, the annotation module 504 is further configured to scale the facial action images in the facial action recognition data set and construct image pyramids; perform feature extraction and bounding box calibration on the image pyramids by using the multi-task convolutional neural network to obtain a first feature map; and filter the calibrated bounding boxes in the first feature map to obtain a second feature map, obtaining a variety of corresponding facial feature images according to the second feature map.
In some embodiments, the training module 508 is further configured to initialize the network parameters of the neural network to be trained; input the training image set into the neural network in batches, the neural network being trained based on a preset first learning rate, to obtain the gradient values of the network parameters; update the network parameters according to the gradient values to obtain a neural network with updated parameters; and use the neural network with updated parameters as the neural network to be trained and return to the step of inputting the training image set into the neural network in batches, until the loss function of the neural network converges, the neural network whose loss function has converged being used as the facial action recognition model.
In some embodiments, the training module 508 is further configured to initialize the network parameters of the neural network to be trained; and input the training image set into the neural network in batches, the neural network being trained based on a preset first learning rate and a preset second learning rate, the trained neural network being used as the facial action recognition model.
In some embodiments, the training module 508 is further configured to input the training image set into the neural network in batches, the neural network performing a first training stage based on the first learning rate and the second learning rate, the neural network trained in the first stage being used as an initial facial action recognition model; and input the training image set into the initial facial action recognition model in batches, the initial model performing a second training stage based on the preset second learning rate, the initial model trained in the second stage being used as the facial action recognition model.
In some embodiments, a facial action recognition apparatus is provided, including an image acquisition module and a recognition module. Specifically,
the image acquisition module is configured to acquire a facial action image to be recognized; and
the recognition module is configured to perform facial action recognition on the facial action image to be recognized by using a facial action recognition model trained by the facial action recognition model training method provided in any one of the above embodiments, to obtain a recognition result.
For the specific limitations of the facial action recognition model training apparatus and the facial action recognition apparatus, refer to the limitations of the facial action recognition model training method and the facial action recognition method above, which are not repeated here. Each module in the above apparatuses may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in or independent of the processor of a computer device in hardware form, or stored in the memory of the computer device in software form, so that the processor can call and execute the operations corresponding to each module.
In some embodiments, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in Fig. 6. The computer device includes a processor, a memory, a network interface, and a database connected through a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer-readable instructions, and a database. The internal memory provides an environment for the operation of the operating system and the computer-readable instructions in the non-volatile storage medium. The database of the computer device is used to store training data. The network interface of the computer device communicates with external terminals through a network connection. When executed by the processor, the computer-readable instructions implement a facial action recognition model training method and a facial action recognition method.
A person skilled in the art can understand that the structure shown in Fig. 6 is only a block diagram of part of the structure related to the solution of this application and does not constitute a limitation on the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
A computer device includes a memory and one or more processors. The memory stores computer-readable instructions which, when executed by the processors, implement the steps of the facial action recognition model training method and the steps of the facial action recognition method provided in any one of the embodiments of this application.
One or more non-volatile storage media store computer-readable instructions which, when executed by one or more processors, cause the one or more processors to implement the steps of the facial action recognition model training method and the steps of the facial action recognition method provided in any one of the embodiments of this application.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments may be completed by computer-readable instructions instructing the relevant hardware. The computer-readable instructions may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in the combination of these technical features, they should all be considered within the scope of this specification.
The above embodiments only express several implementations of this application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of this application, all of which fall within the protection scope of this application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (20)

  1. A facial action recognition model training method, comprising:
    acquiring a facial action recognition data set, the facial action recognition data set including a variety of facial action images;
    inputting each facial action image in the facial action recognition data set into a preset multi-task convolutional neural network, so as to perform face detection on the facial action images by means of the multi-task convolutional neural network to obtain a variety of corresponding facial feature images;
    adding black blocks to the facial feature images respectively based on preset rules, the obtained images being used as a training image set; and
    inputting the training image set into a preset neural network to be trained, so as to train the neural network to be trained, the trained neural network being used as a facial action recognition model.
  2. 根据权利要求1所述的方法,其特征在于,所述基于预设规则分别对所述面部特征图像添加黑块,得到的图像作为训练图像集,包括:
    分别为所述面部特征图像生成对应的随机数,根据所述随机数确定对应的面部特征图像是否添加黑块;
    当根据所述随机数确定对应的面部特征图像添加黑块时,基于所述随机数与对应的面部特征图像,确定黑块信息;及
    根据所述黑块信息,在对应的面部特征图像上添加黑块,得到的图像作为训练图像集。
  3. The method according to claim 1 or 2, wherein before the adding black blocks to the facial feature images respectively based on preset rules, and using the obtained images as a training image set, the method comprises:
    performing data enhancement on the facial feature images to obtain data-enhanced facial feature images.
  4. The method according to claim 1, wherein the inputting each facial motion image in the facial motion recognition data set into a preset multi-task convolutional neural network, so as to use the multi-task convolutional neural network to perform face detection on the facial motion images to obtain a variety of corresponding facial feature images comprises:
    scaling the facial motion images in the facial motion recognition data set to construct an image pyramid;
    using the multi-task convolutional neural network to perform feature extraction and bounding-box calibration on the image pyramid to obtain a first feature map; and
    filtering the bounding boxes calibrated in the first feature map to obtain a second feature map, and obtaining a variety of corresponding facial feature images according to the second feature map.
  5. The method according to claim 1, wherein the inputting the training image set into a preset neural network to be trained, so as to train the neural network to be trained, and using the trained neural network as a facial motion recognition model comprises:
    initializing network parameters of the neural network to be trained;
    inputting the training image set into the neural network to be trained in batches, the neural network to be trained being trained based on a preset first learning rate, to obtain gradient values of the network parameters of the neural network to be trained;
    updating the network parameters of the neural network to be trained according to the gradient values to obtain a neural network with updated network parameters; and
    using the neural network with updated network parameters as the neural network to be trained, and returning to the step of inputting the training image set into the neural network to be trained in batches, until the loss function of the neural network to be trained converges, and using the neural network whose loss function has converged as the facial motion recognition model.
  6. The method according to claim 1, wherein the inputting the training image set into a preset neural network to be trained, so as to train the neural network to be trained, and using the trained neural network as a facial motion recognition model comprises:
    initializing network parameters of the neural network to be trained; and
    inputting the training image set into the neural network to be trained in batches, the neural network to be trained being trained based on a preset first learning rate and a preset second learning rate, and using the trained neural network as the facial motion recognition model.
  7. The method according to claim 6, wherein the inputting the training image set into the neural network to be trained in batches, the neural network to be trained being trained based on a preset first learning rate and a preset second learning rate, and using the trained neural network as the facial motion recognition model comprises:
    inputting the training image set into the neural network to be trained in batches, the neural network to be trained performing first-stage training based on the first learning rate and the second learning rate, and using the neural network trained in the first stage as an initial facial motion recognition model; and
    inputting the training image set into the initial facial motion recognition model in batches, the initial facial motion recognition model performing second-stage training based on the preset second learning rate, and using the initial facial motion recognition model trained in the second stage as the facial motion recognition model.
  8. A facial motion recognition method, comprising:
    acquiring a facial motion image to be recognized; and
    using a facial motion recognition model trained by the facial motion recognition model training method according to any one of claims 1-7 to perform facial motion recognition on the facial motion image to be recognized, to obtain a recognition result.
  9. A facial motion recognition model training apparatus, comprising:
    an acquisition module configured to acquire a facial motion recognition data set, wherein the facial motion recognition data set includes a variety of facial motion images;
    an annotation module configured to input each facial motion image in the facial motion recognition data set into a preset multi-task convolutional neural network, so as to use the multi-task convolutional neural network to perform face detection on the facial motion images to obtain a variety of corresponding facial feature images;
    an adding module configured to add black blocks to the facial feature images respectively based on preset rules, with the obtained images used as a training image set; and
    a training module configured to input the training image set into a preset neural network to be trained, so as to train the neural network to be trained, and use the trained neural network as a facial motion recognition model.
  10. A facial motion recognition apparatus, comprising:
    an image acquisition module configured to acquire a facial motion image to be recognized; and
    a recognition module configured to use a facial motion recognition model trained by the facial motion recognition model training method according to any one of the above to perform facial motion recognition on the facial motion image to be recognized, to obtain a recognition result.
  11. A computer device, comprising a memory and one or more processors, the memory storing computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to perform the following steps:
    acquiring a facial motion recognition data set, wherein the facial motion recognition data set includes a variety of facial motion images;
    inputting each facial motion image in the facial motion recognition data set into a preset multi-task convolutional neural network, so as to use the multi-task convolutional neural network to perform face detection on the facial motion images to obtain a variety of corresponding facial feature images;
    adding black blocks to the facial feature images respectively based on preset rules, and using the obtained images as a training image set; and
    inputting the training image set into a preset neural network to be trained, so as to train the neural network to be trained, and using the trained neural network as a facial motion recognition model.
  12. The computer device according to claim 11, wherein the processor, when executing the computer-readable instructions, further performs the following steps:
    generating a corresponding random number for each facial feature image respectively, and determining, according to the random number, whether a black block is to be added to the corresponding facial feature image;
    when it is determined according to the random number that a black block is to be added to the corresponding facial feature image, determining black block information based on the random number and the corresponding facial feature image; and
    adding a black block to the corresponding facial feature image according to the black block information, and using the obtained images as the training image set.
  13. The computer device according to claim 11 or 12, wherein the processor, when executing the computer-readable instructions, further performs the following step:
    performing data enhancement on the facial feature images to obtain data-enhanced facial feature images.
  14. The computer device according to claim 11, wherein the processor, when executing the computer-readable instructions, further performs the following steps:
    scaling the facial motion images in the facial motion recognition data set to construct an image pyramid;
    using the multi-task convolutional neural network to perform feature extraction and bounding-box calibration on the image pyramid to obtain a first feature map; and
    filtering the bounding boxes calibrated in the first feature map to obtain a second feature map, and obtaining a variety of corresponding facial feature images according to the second feature map.
  15. One or more non-volatile computer-readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
    acquiring a facial motion recognition data set, wherein the facial motion recognition data set includes a variety of facial motion images;
    inputting each facial motion image in the facial motion recognition data set into a preset multi-task convolutional neural network, so as to use the multi-task convolutional neural network to perform face detection on the facial motion images to obtain a variety of corresponding facial feature images;
    adding black blocks to the facial feature images respectively based on preset rules, and using the obtained images as a training image set; and
    inputting the training image set into a preset neural network to be trained, so as to train the neural network to be trained, and using the trained neural network as a facial motion recognition model.
  16. The storage media according to claim 15, wherein the computer-readable instructions, when executed by the processor, further cause the following steps to be performed:
    generating a corresponding random number for each facial feature image respectively, and determining, according to the random number, whether a black block is to be added to the corresponding facial feature image;
    when it is determined according to the random number that a black block is to be added to the corresponding facial feature image, determining black block information based on the random number and the corresponding facial feature image; and
    adding a black block to the corresponding facial feature image according to the black block information, and using the obtained images as the training image set.
  17. The storage media according to claim 15 or 16, wherein the computer-readable instructions, when executed by the processor, further cause the following step to be performed:
    performing data enhancement on the facial feature images to obtain data-enhanced facial feature images.
  18. The storage media according to claim 15, wherein the computer-readable instructions, when executed by the processor, further cause the following steps to be performed:
    scaling the facial motion images in the facial motion recognition data set to construct an image pyramid;
    using the multi-task convolutional neural network to perform feature extraction and bounding-box calibration on the image pyramid to obtain a first feature map; and
    filtering the bounding boxes calibrated in the first feature map to obtain a second feature map, and obtaining a variety of corresponding facial feature images according to the second feature map.
  19. A computer device, comprising a memory and one or more processors, the memory storing computer-readable instructions which, when executed by the one or more processors, cause the one or more processors to perform the following steps:
    acquiring a facial motion image to be recognized; and
    using a facial motion recognition model trained by the facial motion recognition model training method according to any one of the above to perform facial motion recognition on the facial motion image to be recognized, to obtain a recognition result.
  20. One or more non-volatile computer-readable storage media storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
    acquiring a facial motion image to be recognized; and
    using a facial motion recognition model trained by the facial motion recognition model training method according to any one of the above to perform facial motion recognition on the facial motion image to be recognized, to obtain a recognition result.
PCT/CN2019/117027 2019-10-12 2019-11-11 Facial motion recognition model training method, facial motion recognition method, device, computer equipment and storage medium WO2021068325A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910969549.4 2019-10-12
CN201910969549.4A CN110909595B (zh) 2019-10-12 2019-10-12 Facial motion recognition model training method and facial motion recognition method

Publications (1)

Publication Number Publication Date
WO2021068325A1 true WO2021068325A1 (zh) 2021-04-15

Family

ID=69815217

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/117027 WO2021068325A1 (zh) 2019-10-12 2019-11-11 Facial motion recognition model training method, facial motion recognition method, device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN110909595B (zh)
WO (1) WO2021068325A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111626193A (zh) * 2020-05-26 2020-09-04 北京嘀嘀无限科技发展有限公司 Facial recognition method, facial recognition apparatus and readable storage medium
CN114511893A (zh) * 2020-10-26 2022-05-17 京东方科技集团股份有限公司 Convolutional neural network training method, face recognition method and apparatus
CN113723185B (zh) * 2021-07-26 2024-01-26 深圳大学 Action behavior recognition method and apparatus, storage medium and terminal device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107967466A (zh) * 2018-01-03 2018-04-27 深圳市句点志能电子有限公司 An image processing algorithm for highlighting blood vessels
CN109711297A (zh) * 2018-12-14 2019-05-03 深圳壹账通智能科技有限公司 Risk identification method and apparatus based on facial images, computer device and storage medium
CN109840512A (zh) * 2019-02-28 2019-06-04 北京科技大学 Facial action unit recognition method and recognition apparatus

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030009078A1 (en) * 1999-10-29 2003-01-09 Elena A. Fedorovskaya Management of physiological and psychological state of an individual using images cognitive analyzer
CN107016370A (zh) * 2017-04-10 2017-08-04 电子科技大学 Partially occluded face recognition method based on data augmentation
CN107967456A (zh) * 2017-11-27 2018-04-27 电子科技大学 Multi-neural-network cascade face recognition method based on facial landmarks
CN109840477A (zh) * 2019-01-04 2019-06-04 苏州飞搜科技有限公司 Occluded face recognition method and apparatus based on feature transformation

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113077535A (zh) * 2021-04-16 2021-07-06 深圳追一科技有限公司 Model training and mouth motion parameter acquisition method, apparatus, device and medium
CN113077535B (zh) * 2021-04-16 2023-06-06 深圳追一科技有限公司 Model training and mouth motion parameter acquisition method, apparatus, device and medium
CN113192530A (zh) * 2021-04-26 2021-07-30 深圳追一科技有限公司 Model training and mouth motion parameter acquisition method, apparatus, device and medium
CN113192530B (zh) * 2021-04-26 2023-08-22 深圳追一科技有限公司 Model training and mouth motion parameter acquisition method, apparatus, device and medium
CN115396831A (zh) * 2021-05-08 2022-11-25 中国移动通信集团浙江有限公司 Interaction model generation method, apparatus, device and storage medium
CN113469358A (zh) * 2021-07-05 2021-10-01 北京市商汤科技开发有限公司 Neural network training method and apparatus, computer device and storage medium
CN116912923A (zh) * 2023-09-12 2023-10-20 深圳须弥云图空间科技有限公司 Image recognition model training method and apparatus
CN116912923B (zh) * 2023-09-12 2024-01-05 深圳须弥云图空间科技有限公司 Image recognition model training method and apparatus

Also Published As

Publication number Publication date
CN110909595A (zh) 2020-03-24
CN110909595B (zh) 2023-04-18

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19948556

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.08.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19948556

Country of ref document: EP

Kind code of ref document: A1