GB2588747A - Facial behaviour analysis - Google Patents

Facial behaviour analysis

Info

Publication number
GB2588747A
Authority
GB
United Kingdom
Prior art keywords
facial images
predicted
facial
action unit
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1909300.4A
Other versions
GB201909300D0 (en)
GB2588747B (en)
Inventor
Zafeiriou Stefanos
Kollias Dimitrios
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to GB1909300.4A priority Critical patent/GB2588747B/en
Publication of GB201909300D0 publication Critical patent/GB201909300D0/en
Priority to PCT/GB2020/051503 priority patent/WO2020260862A1/en
Priority to CN202080044948.0A priority patent/CN113994341A/en
Publication of GB2588747A publication Critical patent/GB2588747A/en
Application granted granted Critical
Publication of GB2588747B publication Critical patent/GB2588747B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • G06V40/175Static expression
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

This invention relates to methods for analysing facial behaviour in facial images using neural networks (NN). A first aspect covers a method of training a NN 212, the method comprising: inputting to the NN a plurality of facial images, the images comprising: one or more images from a first dataset 202-1 wherein each image comprises a known emotion label 204 (e.g. happy, sad); and one or more images from a second dataset 202-2 wherein each image comprises a known action unit activation 206 (e.g. wrinkled nose, raised eyebrow). The method further comprises: generating, for each of the images using the NN, a predicted emotion label 214 and predicted action unit activation 216; then updating parameters of the NN in dependence on a comparison of the predicted and known emotion labels, and the predicted and known action unit activations. The comparison could be performed using a multi-task function 222-226 for calculating losses, e.g. cross entropy loss. Further images 202-3 having known arousal or valence values 208 could also be used. A second aspect covers using the trained NN to classify facial images.

Description

Facial Behaviour Analysis
Field
This specification relates to methods for analysing facial behaviour in facial images using neural networks, and methods of training neural networks for facial behaviour analysis by processing of facial images.
Background
Automatic facial behaviour analysis relates to determining human affective states (e.g. emotion) from facial visual data. Facial behaviour analysis has wide ranging applications such as, for example, improved human-computer/human-robot interaction. Moreover, facial behaviour analysis systems may be used to automatically annotate data (for example, visual/audio data), e.g. by determining a human reaction to the data instead of requiring a manual annotation. Therefore, improving system(s) and methods for facial behaviour analysis may improve systems directed at these and other applications.
Developing methods and systems for facial behaviour analysis of facial images recorded under unconstrained conditions (e.g. 'in-the-wild') is a difficult task. The labels required to annotate in-the-wild datasets may be costly to acquire as they may require skilled annotators to provide these manual annotations, and so datasets with these labels may have fewer examples than is desired for developing facial behaviour analysis systems with a desired level of performance. Additionally, facial behaviour analysis comprises various different tasks which reflect different, yet inter-connected, aspects of human affective states.
Summary
According to a first aspect of this disclosure, there is described a method of training a neural network for facial behaviour analysis, the method comprising: inputting, to the neural network, a plurality of facial images, the plurality of facial images comprising: one or more first facial images from a first dataset, the first training dataset comprising facial images each with a known emotion label; and one or more second facial images from a second dataset, the second training dataset comprising facial images each with known action unit activations, generating, for each of the plurality of facial images and using the neural network, a predicted emotion label and predicted action unit activations; and updating parameters of the neural network in dependence on a comparison of: predicted emotion labels of the one or more first facial images to the known emotion labels of the one or more first facial images; and predicted action unit activations of the one or more second facial images to the known action unit activations of the one or more second facial images.
The comparison may be performed by a multi-task objective function, the multitask objective function comprising: an emotion loss comparing predicted emotion labels to known emotion labels; and an activation loss comparing predicted action unit activations to known action unit activations. The emotion loss and/or activation loss may comprise a cross entropy loss.
The plurality of facial images may further comprise one or more third facial images from a third dataset, the third training dataset comprising facial images each with a known valence and/or arousal value. Updating the parameters of the neural network may be further in dependence on comparison of predicted valence and/or arousal values of the one or more third facial images to the known valence and/or arousal values of the one or more third facial images. The comparison may be performed by a multi-task objective function, the multitask objective function comprising a continuous loss comparing predicted valence and/or arousal values to known valence and/or arousal values. The continuous loss may comprise a measure of a concordance correlation coefficient between the predicted valence and/or arousal values and the known valence and/or arousal values.
The facial images of the first dataset may each be associated with derived action unit activations, the derived action unit activations determined based on the known emotion label of said image. The parameters of the neural network may be updated in dependence on a comparison of the predicted action unit activations of the one or more first facial images to the derived action unit activations of the one or more first facial images. One or more facial images of the second dataset may each be associated with a derived emotion label, each derived emotion label determined based on the known action unit activations of said facial image. The parameters of the neural network may be updated in dependence on a comparison of the predicted emotion labels of one or more second facial images to the corresponding derived emotion labels of the one or more second facial images. The derived action unit activations and the derived emotion labels may be determined based on a set of prototypical action unit activations for each emotion label and a set of weighted action unit activations for each emotional label. The derived emotion labels may be a distribution over a set of possible emotion labels. The predicted emotion labels may comprise a probability measure across the set of possible emotions. The parameters of the neural network may be updated in dependence on a comparison of derived emotional labels to the predicted emotion labels.
The parameters of the neural network may be updated in dependence on a comparison of a distribution of the predicted action unit activations to an expected distribution of action unit activations, the expected distribution of action unit activations being determined based on the predicted emotional labels of the plurality of facial images.
The expected distribution of action unit activations may further be determined based on a modelled relationship between emotion labels and action unit activations.
The method may be iterated until a threshold condition is met.
According to a further aspect of this disclosure, there is described a method of facial behaviour analysis, the method comprising: inputting a facial image into a neural network; processing the image using the neural network; and outputting, from the neural network, a predicted emotion label for the facial image, predicted action unit activations for the facial image and/or a predicted valence and/or arousal value for the facial image, wherein the neural network comprises a plurality of parameters determined using the training methods described herein.
According to a further aspect of this disclosure, there is described a system comprising: one or more processors; and a memory, the memory comprising computer readable instructions that, when executed by the one or more processors, cause the system to perform one or more of the methods described herein.
According to a further aspect of this disclosure, there is described a computer program product comprising computer readable instructions that, when executed by a computing device, cause the computing device to perform one or more of the methods described herein.
Brief Description of the Drawings
Embodiments will now be described by way of non-limiting examples with reference to the accompanying drawings, in which:
Figures 1a and 1b show an overview of example methods of facial behaviour analysis of facial images using a trained neural network;
Figure 2 shows an overview of an example method of training a neural network for facial behaviour analysis of facial images;
Figure 3 shows an overview of an example structure of a neural network for facial behaviour analysis of facial images;
Figure 4 shows a flow diagram of an example method of training a neural network for facial behaviour analysis of facial images;
Figure 5 shows a flow diagram of an example method of facial behaviour analysis of facial images using a trained neural network; and
Figure 6 shows a schematic example of a system/apparatus for performing any of the methods described herein.
Detailed Description
Example implementations provide system(s) and methods for facial behaviour analysis of facial images.
Improved facial behaviour analysis may be achieved by various example implementations comprising a facial behaviour analysis method utilising a neural network trained with a multi-task objective function. Use of such a multi-task loss function in training a neural network can result in a higher performance for facial behaviour analysis by the neural network when compared to other facial behaviour analysis methods. For example, the neural network may have a lower error rate in recognising facial emotions/behaviour.
Methods and systems disclosed herein enable the implementation of a single multi-task, multi-domain and multi-label network that can be trained jointly end-to-end on various tasks relating to facial behaviour analysis. The methods of jointly training a neural network end-to-end for face behaviour analysis disclosed herein may obviate the need to utilise pre-trained neural networks, which may require fine-tuning to perform well on the new task(s) and/or domain(s). Therefore, the trained neural network may generalise better to unseen facial images captured in-the-wild than previous approaches. The trained neural network simultaneously predicts different aspects of facial behaviour analysis, and may outperform single-task neural networks as a result of enhanced emotion recognition capabilities. Additionally, the enhanced emotion recognition capabilities may allow the neural network to generate useful feature representations of input facial images, and so the neural network may be successfully applied to perform tasks beyond the ones it has been trained for.
The multiple tasks may include, for example, automatic recognition of expressions, estimation of continuous emotions (e.g. valence and/or arousal), and detection of facial unit activations (activations of e.g. upper/inner eyebrows, nose wrinkles; facial unit activations are also referred to as action unit activations herein). Facial images used to train the neural network may be from multiple domains. For example, facial images may be captured from a user operating a camera of a mobile device, they may be extracted from video frames, and they may be captured in a controlled, lab-based, recording environment (while still allowing for spontaneous expressions of the subjects). A facial image used to train the neural network may have multiple labels; one or more of these labels may be derived from known labels as will be described below in relation to Figure 2.
Figures 1a and 1b show an overview of example methods of facial behaviour analysis of facial images using a trained neural network. The method 100 takes a facial image 102 as input and outputs a predicted emotion label 106, predicted action unit (AU) activations 108, and optionally, predicted valence and/or arousal values 110 using a trained neural network 104.
The facial image 102, x, is an image comprising one or more faces. For example, in a colour image, x ∈ ℝ^(H×W×3), where H is the height of the image in pixels, W is the width of the image in pixels and the image has three colour channels (e.g. RGB or CIELAB). The facial images 102, 108 may, in some embodiments, be in black-and-white/greyscale. Additionally or alternatively, any visual data relating to faces may be input to the systems and methods described herein, such as, for example, 3D facial scans and UV representations of 3D facial visual data.
The predicted emotion label 106 is a label describing the predicted emotion/expression of the face present in the facial image 102. The emotion label may be a discrete variable, which can take on one value out of a possible set of values. For example, the set of possible emotions may include: angry, disgust, fear, happy, sad, surprise, and neutral. The predicted emotion label 106 may be represented by a one-hot vector with dimension equal to the number of possible emotions, with zero entries apart from the index corresponding to the predicted emotion. Additionally or alternatively, the predicted emotion label 106 may take on more than one value, which may represent compound emotions. Additionally or alternatively, the predicted emotion label 106 may represent a probability distribution over the set of possible emotions representing how confident the trained neural network 104 is in its predictions. For example, the predicted emotion label 106 may comprise, for each emotion in the set of emotions used, a probability that said emotion is present in the input image 102, x, i.e. p(y_emo | x) for each emotion label y_emo.
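By way of illustration, the following minimal sketch shows these two label representations, assuming the 7-emotion set above; the emotion names, scores and array sizes are illustrative choices rather than values prescribed by the specification:

```python
import numpy as np

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

# One-hot representation of the emotion label "happy" (zero everywhere except its index).
one_hot = np.zeros(len(EMOTIONS))
one_hot[EMOTIONS.index("happy")] = 1.0

# Alternatively, a probability distribution over the emotion set, e.g. obtained by
# applying a softmax to raw network scores (logits); the values here are arbitrary.
logits = np.array([0.1, -1.2, -0.3, 2.5, 0.0, 0.4, 1.1])
probs = np.exp(logits - logits.max())
probs /= probs.sum()  # p(y_emo | x) for each emotion label
```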
The predicted action unit activations 108 represent the predicted activation of facial muscles of the face in the facial image 102. Each action unit activation represents a part of a facial expression. For example, one action unit may be activated when a person wrinkles their nose, another may be activated when the lip corner is pulled. The action unit (AU) coding system is a way of coding facial motion with respect to facial muscles, adopted as a common standard towards systematically categorising physical manifestation of complex facial expressions. Acquiring labels for action unit activations may be costly as it may require skilled annotators with expert knowledge of the AU coding system to manually label facial images with the action unit activations. The activation of each action unit may be modelled as a binary variable.
The predicted valence and/or arousal values 110 are predictions of continuous emotions depicted in the facial image 102. Generally, valence values may measure how negative/positive a person is, and arousal values may measure how active/passive a person is. These values may be configured to lie in a standardised range, for example, they may lie in a continuous range from -1 to 1.
The trained neural network 104 is a neural network configured to process the input facial image 102 to output a predicted emotion label 106, predicted action unit (AU) activations 108, and optionally, predicted valence and/or arousal values 110. The trained neural network may be configured to output any label/target output that relates to facial behaviour analysis. Examples of neural network architectures are described below in relation to Figure 3.
The trained neural network 104 comprises a plurality of layers of nodes, each node associated with one or more parameters. The parameters of each node of the neural network may comprise one or more weights and/or biases. The nodes take as input one or more outputs of nodes in the previous layer. The one or more outputs of nodes in the previous layer are used by the node to generate an activation value using an activation function and the parameters of the neural network. One or more of the layers of the trained neural network 104 may be convolutional layers.
Figure 2 shows an overview of an example method of training a neural network, such as the neural network in Figures 1a and 1b, for facial behaviour analysis of facial images. The method 200 comprises training a neural network 212 jointly to perform multiple facial behaviour analysis tasks; the neural network may be trained using a plurality of datasets which may be from different domains and have different labels/target outputs.
The objective of the neural network is to generate predictions for facial visual data which are similar to the labels/target outputs in the training dataset, while generalising well (e.g. by accurately capturing various aspects of emotion recognition) on unseen examples of facial visual data.
The neural network 212 is trained by processing a training batch of facial images 210, generating predictions for the training batch 210, and updating the parameters of the neural network 212 based on a multi-task objective function 220 comprising a comparison of: (i) the generated predictions and (ii) the corresponding labels/target outputs of the examples in the training batch 210. Example structures of the neural network 212 are discussed below in relation to Figure 3.
The training batch of facial images 210 includes a plurality of batches of labelled training data. The plurality of batches of labelled training data comprises a first batch 202-1 of facial images, a second batch 202-2 of facial images, and optionally, a third batch 202-3 of facial images. Each batch is associated with different types of facial emotion labels, though facial images may have more than one type of label and be present in more than one of the batches. The batches may be taken from a plurality of different datasets. The size of each batch may be different. For example, the first batch 202-1 may contain a higher number of facial images than the second batch 202-2. The second batch 202-2 may contain a higher number of facial images than the third batch 202-3.
The first batch 202-1 includes one or more facial images with corresponding known emotion labels y_emo 204. The first batch 202-1 may comprise a plurality of facial images. Generally, the emotion label may be a discrete variable, which can take on one value out of a possible set of values. For example, the set of possible emotions may include: angry, disgust, fear, happy, sad, surprise, and neutral, and so y_emo ∈ {1, 2, ..., 7} for these 7 emotions. It will be appreciated that there may be more or fewer than 7 emotions in the set of possible emotions: some of these emotions may be omitted and other emotions may be included in the set of possible emotions.
The second batch 202-2 includes one or more facial images with corresponding known action unit activations y_au 206. The second batch 202-2 may comprise a plurality of facial images. The action unit activations represent the activation of different facial muscles, and follow a coding system for coding facial motion with respect to facial muscles.
The action unit activation may be represented as y_au ∈ {0, 1}^17, where the activations of 17 action units are modelled as binary variables. It will be appreciated that there may be more or fewer action unit activations in the batch 202-2.
The third batch 202-3, which may optionally be included in the training batch 210, includes one or more facial images with corresponding known valence and/or arousal values 208. The third batch 202-3 may comprise a plurality of facial images. Valence and arousal values are continuous emotions. Generally, valence values y_v may measure how negative/positive a person is, and arousal values y_a may measure how active/passive a person is. These values may be configured to lie in a standardised range, for example, they may lie in a continuous range from -1 to 1. Where both valence and arousal values are included, this may be represented as y_va ∈ [-1, 1]^2. Additionally or alternatively, other continuous emotion values may be included in the batch 202-3.
The training images 210 used in each batch 202 may be extracted from sets of images containing faces with one or more known labels. ResNet may be used to extract bounding boxes of facial images and facial landmarks. The facial landmarks may be used to align the extracted facial images. The extracted facial images may be resized to a fixed image size (e.g. n-by-n-by-3 for a colour image, where n is the width/height of the image). The intensity of the extracted facial images may, in some embodiments, be normalised to the range [-1, 1].

During training, at each iteration the plurality of batches is concatenated and input into the neural network 212 (or input sequentially into the neural network 212). A plurality of sets of output data are obtained, and compared to corresponding known labels of the facial images in the training batch 210. The parameters of the neural network are updated in dependence on this comparison.
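A minimal sketch of this preparation step is shown below, assuming that face detection and landmark alignment have already been carried out, that the input is a uint8 RGB crop, and that the target size is n = 96; the helper names, the target size and the simple index-based resize are illustrative assumptions (a library resize would normally be used):

```python
import numpy as np

def preprocess(image, n=96):
    """Resize a cropped, aligned face image to n x n x 3 and normalise it to [-1, 1].

    `image` is assumed to be a uint8 array of shape (H, W, 3); the choice of n and the
    nearest-neighbour index sampling are illustrative.
    """
    h, w, _ = image.shape
    rows = np.arange(n) * h // n
    cols = np.arange(n) * w // n
    resized = image[rows][:, cols].astype(np.float32)
    return resized / 127.5 - 1.0  # map pixel intensities from [0, 255] to [-1, 1]

def build_training_batch(emotion_images, au_images, va_images):
    """Concatenate per-task batches 202-1, 202-2 and 202-3 into one training batch 210."""
    batch = [preprocess(x) for x in emotion_images + au_images + va_images]
    return np.stack(batch)
```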
By inputting a training batch 210 that comprises facial images from each of the plurality of batches 202-1, 202-2, 202-3, the network processes images from each category of labels, and thus the parameters of the neural network 212 are updated based on all the types of label present in the training data. In examples where each batch 202-1, 202-2, 202-3 comprises a plurality of facial images, the problem of "noisy gradients" in the parameter updates is avoided, leading to better convergence behaviour. Furthermore, some types of cost/objective function (such as the CCC cost function described below) use a sequence of predictions to perform the comparison of predicted labels and ground truth labels.
The parameters of the neural network 212 may be updated using a multi-task objective function 220, which may comprise an emotion loss 222 comparing predicted emotion labels 214 to known emotion labels 204; and an action unit activation loss 224 comparing predicted action unit activations 216 to known action unit activations 206. Optionally, the multi-task objective function 220 may further comprise a continuous loss 226 comparing predicted valence and/or arousal values 218 to known valence and/or arousal values 208. Additionally or alternatively, the multi-task objective function 220 may compare predictions with labels derived from known labels in order to update the parameters of the neural network 212 as will be described in more detail below.
In some embodiments, the multi-task objective function 220 may comprise an emotion loss 222. An emotion loss 222 compares predictions of emotions 214, p(y_emo | x), to known emotion labels 204. Additionally or alternatively, the emotion loss 222 may compare predicted emotion labels 214 with derived emotion labels as described below. The emotion loss 222 may comprise a cross-entropy loss. In embodiments where the output of the neural network is a probability distribution over the set of possible emotions, the emotion loss 222 may be given by an expectation value over the set of emotions present in the input facial images:

L_Emo = E_{x, y_emo}[ -log p(y_emo | x) ]

However, it will be appreciated that any type of classification loss may alternatively be utilised, for example, a hinge loss, a square loss, or an exponential loss.
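A minimal sketch of such a cross-entropy emotion loss 222, assuming PyTorch, a batch of raw scores (logits) over the 7 emotion categories and integer class labels; the function and argument names are illustrative rather than prescribed:

```python
import torch
import torch.nn.functional as F

def emotion_loss(emotion_logits, emotion_labels):
    """Cross-entropy emotion loss 222.

    emotion_logits: (B, 7) raw network outputs for the 7 emotion categories.
    emotion_labels: (B,) integer class indices y_emo.
    F.cross_entropy applies log-softmax internally, so this evaluates
    the batch average of -log p(y_emo | x).
    """
    return F.cross_entropy(emotion_logits, emotion_labels)
```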
In some embodiments, the multi-task objective function 220 may comprise an action unit activation loss 224. An action unit activation loss 224 compares predictions of action unit activations 216, p(y_au | x), to known action unit activations 206. Additionally or alternatively, the action unit activation loss 224 may compare predicted action unit activations 216 with derived action unit activations as described below. The action unit activation loss 224 may comprise a binary cross-entropy loss. An action unit activation loss 224 may, in some embodiments, be given by:

L_AU = E_{x, y_au}[ -log p(y_au | x) ]

where the negative log likelihood may be given by:

-log p(y_au | x) := -[ Σ_i s_i ]^{-1} · Σ_i s_i · [ y_au^i log p(y_au^i | x) + (1 - y_au^i) log(1 - p(y_au^i | x)) ]

The s_i ∈ {0, 1} in the equation above denotes whether the image x contains an annotation for the i-th action unit AU_i. However, it will be appreciated that any type of classification loss may be utilised, for example, a hinge loss, a square loss, or an exponential loss.
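A minimal sketch of this masked binary cross-entropy, assuming PyTorch, 17 action units and the annotation indicator s_i described above; the tensor shapes and names are illustrative:

```python
import torch

def action_unit_loss(au_probs, au_labels, au_mask, eps=1e-7):
    """Masked binary cross-entropy action unit loss 224.

    au_probs:  (B, 17) predicted activation probabilities p(y_au^i | x) (after a sigmoid).
    au_labels: (B, 17) known binary activations y_au^i.
    au_mask:   (B, 17) s_i in {0, 1}, indicating whether image x carries an annotation
               for the i-th action unit.
    """
    log_p = torch.log(au_probs.clamp(min=eps))
    log_not_p = torch.log((1.0 - au_probs).clamp(min=eps))
    per_au = au_labels * log_p + (1.0 - au_labels) * log_not_p
    # Average only over the annotated action units of each image, then negate.
    per_image = (au_mask * per_au).sum(dim=1) / au_mask.sum(dim=1).clamp(min=1.0)
    return -per_image.mean()
```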
In some embodiments, a multi-task objective function 220 that may be used to train the neural network may be given by:

L_MT = L_Emo + λ_1 L_AU

The real number λ_1 is typically non-negative, and controls the relative contribution of the action unit activation loss 224 to the emotion loss 222.
In some embodiments, the multi-task objective function 220 may further comprise a continuous loss 226. The continuous loss 226 compares predicted valence/arousal values with known valence/arousal values. The continuous loss 226 may measure the agreement between the predicted valence/arousal values and the ground truth valence/arousal values. This may be measured by a concordance correlation coefficient.
For example, the continuous loss 226 may be given by:

L_VA = 1 - (ρ_v + ρ_a) / 2

where, for t ∈ {v, a}, ρ_t = CCC(y_t, ŷ_t) is the concordance correlation coefficient between the known values y_t and the predicted values ŷ_t:

CCC(y_t, ŷ_t) = 2 E[ (y_t - E[y_t]) (ŷ_t - E[ŷ_t]) ] / ( E[ (y_t - E[y_t])^2 ] + E[ (ŷ_t - E[ŷ_t])^2 ] + (E[y_t] - E[ŷ_t])^2 )

An example of a multi-task objective function 220 that may be used to train the neural network when valence and arousal labels are present may be given by:

L_MT = L_Emo + λ_1 L_AU + λ_2 L_VA

One or more of the elements of the multi-task objective function 220 may be omitted. The real numbers λ_1, λ_2 are typically non-negative and control the relative contributions of the individual loss functions to the multi-task objective function 220.
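A minimal sketch of the concordance correlation coefficient and the resulting continuous loss 226, assuming PyTorch and a batch of (valence, arousal) pairs; names and shapes are illustrative:

```python
import torch

def ccc(pred, target, eps=1e-8):
    """Concordance correlation coefficient between two 1-D sequences of values."""
    pred_mean, target_mean = pred.mean(), target.mean()
    covariance = ((pred - pred_mean) * (target - target_mean)).mean()
    pred_var = ((pred - pred_mean) ** 2).mean()
    target_var = ((target - target_mean) ** 2).mean()
    return 2.0 * covariance / (pred_var + target_var + (pred_mean - target_mean) ** 2 + eps)

def valence_arousal_loss(pred_va, target_va):
    """Continuous loss 226: L_VA = 1 - (CCC_v + CCC_a) / 2.

    pred_va, target_va: (B, 2) tensors holding valence (column 0) and arousal (column 1).
    """
    ccc_v = ccc(pred_va[:, 0], target_va[:, 0])
    ccc_a = ccc(pred_va[:, 1], target_va[:, 1])
    return 1.0 - (ccc_v + ccc_a) / 2.0
```

Because the CCC is computed over a sequence of predictions, this loss is evaluated per batch rather than per image, which is one reason the training batches described above contain a plurality of images.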
Additional losses may be included in the multi-task objective function, such as a distribution matching loss, as will be discussed in further detail below.
The different tasks that the neural network 212 is trained to perform may be interconnected in relation to facial behaviour analysis. For example, a facial image depicting a certain expression may also result in certain action units being activated in substantially all examples of facial images depicting the certain expression. These action units may be referred to as prototypical action units. Action units activated in a significant proportion of examples of facial images depicting a certain expression may be referred to as observational action units. Prototypical and observational action units may be derived from an empirical model. For example, sets of images with known emotion labels may be annotated with action unit activations. Observational action unit activations, and associated weights, may be determined from action unit activations that a fraction of annotators observe. Table 1 below shows examples of emotions and their corresponding prototypical and observational action units.
Emotion      Prototypical AUs     Observational AUs (weight)
Happy        12, 25               6 (0.51)
Sad          4, 15                1 (0.6), 6 (0.5), 11 (0.26), 17 (0.67)
Fearful      1, 4, 20, 25         2 (0.57), 5 (0.63), 26 (0.33)
Angry        4, 7, 24             10 (0.26), 17 (0.52), 23 (0.29)
Surprised    1, 2, 25, 26         5 (0.66)
Disgusted    9, 10, 17            4 (0.31), 24 (0.26)

Table 1: Examples of emotions and their prototypical and observational action units (AUs). The weight is the fraction of annotators that observed the activation of the AU.
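For illustration in the sketches that follow, Table 1 can be encoded directly as a mapping from emotion to its prototypical and weighted observational AUs; this particular encoding is an editorial choice rather than a structure prescribed by the specification:

```python
# Table 1 as a Python mapping. AU indices follow the FACS numbering used in the table;
# observational AUs carry their empirical weights, prototypical AUs an implicit weight of 1.
EMOTION_AUS = {
    "happy":     {"prototypical": [12, 25],       "observational": {6: 0.51}},
    "sad":       {"prototypical": [4, 15],        "observational": {1: 0.6, 6: 0.5, 11: 0.26, 17: 0.67}},
    "fearful":   {"prototypical": [1, 4, 20, 25], "observational": {2: 0.57, 5: 0.63, 26: 0.33}},
    "angry":     {"prototypical": [4, 7, 24],     "observational": {10: 0.26, 17: 0.52, 23: 0.29}},
    "surprised": {"prototypical": [1, 2, 25, 26], "observational": {5: 0.66}},
    "disgusted": {"prototypical": [9, 10, 17],    "observational": {4: 0.31, 24: 0.26}},
}
```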
As a result of the different tasks being interconnected, it may be beneficial to couple together two or more of these tasks. This may lead to a trained neural network with enhanced emotion recognition capabilities as a result of generating feature representations of facial images which better capture the different aspects of facial behaviour analysis. Training the neural network 212 with co-annotated images which have labels derived from known labels is one way to couple together different tasks.
Another way to couple the tasks together is to use a distribution matching loss, which aligns the predictions of the emotions and action units tasks during training.
For facial images in the batch 202-1 with known emotion labels 204, additional action unit activations can be derived using associations between the emotion label and action units. Given an image x with the ground truth annotation of emotion y_emo, the prototypical and observational AUs of this emotion may be provided as an additional label. The facial image may be co-annotated with derived action unit activations y_au that contain only the prototypical and observational AUs. The activation of observational action units may be weighted using an empirically derived weight, for example, by using the weights from Table 1. The weights relate to the probability of an action unit being present given a particular emotion label. Additionally, or alternatively, observational action units may have equal weighting to the activations of prototypical action units. The co-annotated facial image may be included in the training batch 210 twice, once with the emotion label and once with the derived action unit activations.
Similarly, for facial images in the batch 202-2 with known action unit activations 206, additional emotion labels can be derived using associations between the emotion label and action units. For an image x with the ground truth annotation of the action units y_au, it can be determined whether it can be co-annotated with an emotion label. For example, an emotion may be present when all the associated prototypical and observational AUs are present in the ground truth annotation of action units. In cases when more than one emotion is possible, the derived emotion label y_emo may be assigned to the emotion with the largest requirement of prototypical and observational AUs. The co-annotated facial image may be included in the training batch 210 twice, once with the known action unit activations and once with the derived emotion label.
Additionally or alternatively, derived emotion labels may be soft labels, forming a distribution over a set of possible emotion labels. More specifically, for each emotion, a score can be computed over its prototypical and observational AUs being present, e.g. a distribution over the emotion labels can be determined based on a comparison of the action unit labels present to the prototypical and/or observational action units for each emotion label. For example, for the emotion happy, the score (y_au(AU12) + y_au(AU25) + 0.51·y_au(AU6)) / (1 + 1 + 0.51) can be computed. Additionally or alternatively, all weights may be set equal to 1 if no reweighting is used. The scores over emotion categories may be normalised to form a probability distribution over emotion labels. The normalisation may, for example, be performed by a softmax operation over the scores for each emotion label.
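A minimal sketch of deriving such a soft emotion label from known action unit activations is given below, using only the happy and surprised rows of Table 1 for brevity; the function names, the use of a softmax for normalisation and the example input are illustrative assumptions:

```python
import math

# Subset of Table 1, with prototypical AUs given weight 1 and observational AUs their weights.
EMOTION_AUS = {
    "happy":     {12: 1.0, 25: 1.0, 6: 0.51},
    "surprised": {1: 1.0, 2: 1.0, 25: 1.0, 26: 1.0, 5: 0.66},
}

def soft_emotion_label(au_activations):
    """au_activations maps an AU index to its binary activation, e.g. {12: 1, 25: 1}."""
    scores = {}
    for emotion, weighted_aus in EMOTION_AUS.items():
        numerator = sum(w * au_activations.get(au, 0) for au, w in weighted_aus.items())
        scores[emotion] = numerator / sum(weighted_aus.values())
    # Normalise the per-emotion scores into a probability distribution via a softmax.
    exps = {e: math.exp(s) for e, s in scores.items()}
    total = sum(exps.values())
    return {e: v / total for e, v in exps.items()}

# Example: an image annotated with AU12 and AU25 active scores highest for "happy".
print(soft_emotion_label({12: 1, 25: 1}))
```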
In various embodiments, a distribution matching loss may also be included in the multi-task objective function 220. A distribution matching loss aligns the predictions of the emotions and action units tasks during training. This may be performed by a comparison between the probability distribution of predictions and an expected probability distribution. The expected distribution of action unit activations may be determined based on the predicted emotional labels of the plurality of facial images. This may be determined based on a modelled relationship between emotion labels and action unit activations, such as in Table 1.
For example, the action unit activations may be modelled as a mixture over the emotion categories. An expected action unit activation distribution may be given as:

q(y_au^i | x) = [ Σ_{y_emo ∈ [1, ..., 7]} p(y_emo | x) · p(y_au^i | y_emo) ] / [ Σ_{y_emo ∈ [1, ..., 7]} p(y_au^i | y_emo) ]

The conditional probability p(y_au^i | y_emo) may be defined deterministically from an empirical model, such as the one provided in Table 1. For example, p(y_au^i | y_emo) = 1 for prototypical and observational action units, and zero otherwise. For example, if AU2 is prototypical for emotion surprised and observational for emotion fearful, then q(AU2 | x) = ½ (p(surprised | x) + p(fearful | x)). Additionally, or alternatively, the conditional probability for observational action units may be weighted such that p(y_au^i | y_emo) = w, with weights w which may be taken from Table 1. A distribution matching loss for action units may be given as:

L_DM = E_x[ -Σ_{i=1}^{17} p(y_au^i | x) log q(y_au^i | x) ]

Similarly, a distribution matching loss for emotion categories may be given, using the derived soft emotion labels described above as q(y_emo | x).
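A minimal sketch of the expected distribution q(y_au^i | x) and the distribution matching loss, assuming PyTorch, a (7 x 17) matrix encoding p(y_au^i | y_emo) built from Table 1, and the normalised mixture and cross-entropy forms given above; the names, shapes and exact loss form are assumptions of this sketch:

```python
import torch

def expected_au_distribution(emotion_probs, au_given_emotion):
    """q(y_au^i | x): per-emotion AU probabilities mixed by the predicted emotion distribution.

    emotion_probs:    (B, 7) predicted emotion distribution p(y_emo | x).
    au_given_emotion: (7, 17) matrix of p(y_au^i | y_emo), e.g. 1 for the prototypical and
                      observational AUs of each emotion (or an empirical weight) and 0 otherwise.
    """
    numerator = emotion_probs @ au_given_emotion                # (B, 17)
    denominator = au_given_emotion.sum(dim=0).clamp(min=1e-7)   # (17,)
    return numerator / denominator

def distribution_matching_loss(au_probs, emotion_probs, au_given_emotion, eps=1e-7):
    """Compare the predicted AU activations p(y_au^i | x) with the expected q(y_au^i | x)."""
    q = expected_au_distribution(emotion_probs, au_given_emotion).clamp(min=eps)
    return -(au_probs * torch.log(q)).sum(dim=1).mean()
```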
When distribution matching is used, an example multi-task objective function 220 may be given as:

L_MT = L_Emo + λ_1 L_AU + λ_2 L_VA + λ_3 L_DM

One or more of L_AU and L_VA may be omitted.
The parameters of the neural network 212 may be updated using an optimisation procedure in order to determine a setting of the parameters that substantially optimise (e.g. minimise) the multi-task objective function 220. The optimisation procedure may be stochastic gradient descent for example.
One or more of the datasets used to populate batches 202-1, 202-2, 202-3 may be optional, and other datasets relating to facial behaviour analysis tasks may be included in the training batch 210 when training the neural network 212. In some embodiments, the training batch 210 may comprise a number of training examples with sufficiently varied labels/output target types such that all of the components of the multi-task objective function 220 contribute to the objective function. In this way, the weight updates of the neural network 212 may be based on gradients which are not noisy, thus allowing better and/or faster convergence of the neural network 212 during training.
Faster convergence of the neural network 212 may reduce the computational/network resources required to train the neural network 212 to an appropriate level of performance, e.g. by reducing the number of calculations performed by a processor, or by reducing the amount of data transmitted over a network, for example, if the training datasets are stored on a remote data storage server.
Figure 3 shows an overview of an example structure of a neural network for facial behaviour analysis of facial images.
In this example, the neural network 104 is in the form of a convolutional neural network, comprising a plurality of convolutional layers 302 and a plurality of subsampling layers 304.
Each convolutional layer 302 is operable to apply one or more convolutional filters to the input of said convolutional layer 302. For example, one or more of the convolutional layers 302 may apply a two-dimensional convolutional block with kernel size three, a stride of one, and a padding size of one. However, other kernel sizes, strides and padding sizes may alternatively or additionally be used. In the example shown, there are a total of thirteen convolutional layers 302 in the neural network 104.
Other numbers of convolutional layers 302 may alternatively be used.
Interlaced with the convolutional layers 302 are a plurality of subsampling layers 304 (also referred to herein as down-sampling layers). One or more convolutional layers 302 may be located between each subsampling layer 304. In the example shown, either two or three convolutional layers 302 are applied between each application of a subsampling layer 304. Each subsampling layer 304 is operable to reduce the dimension of the input to that subsampling layer. For example, one or more of the subsampling layers may apply an average two-dimensional pooling with kernel and stride sizes of two. Other subsampling methods and/or subsampling parameters may alternatively or additionally be used.
One or more fully connected layers 306 may also be present in the neural network, for example three are shown in Figure 3. The fully connected layers 306 may be directly after the last subsampling layer 304, as shown in Figure 3. Each fully connected layer may have a dimension of 4096, although other dimension sizes are possible. The last fully connected layer may be a layer with no activation function. All of the predictions generated by the neural network 104 may be generated from this output layer. In this way, the predictions for all tasks are pooled from the same feature space.
A classification layer 310 may follow the last fully connected layer, in order to generate the predicted emotion labels 312. This may be a softmax layer.
A plurality of sigmoid units may be applied to the last fully connected layer in order to generate predictions for the action unit activations. In this example, there are 17 sigmoid units in order to generate predictions for 17 action units.
The direct output of the last fully connected layer may be used in order to generate predictions for valence/arousal values, which are continuous variables.
One or more activation functions are used in the layers of the neural network 104. For example, the ReLU activation function may be used. Alternatively or additionally, an ELU activation function may be used in one or more of the layers. Other activation functions may alternatively or additionally be used.
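A minimal sketch of a network along these lines is given below, assuming PyTorch, a 96 x 96 input resolution and a last fully connected layer of 7 + 17 + 2 units from which all three sets of predictions are read; the layer sizes and head wiring are illustrative assumptions rather than a definitive implementation of the network of Figure 3:

```python
import torch
import torch.nn as nn

class FaceBehaviourNet(nn.Module):
    """Sketch: thirteen 3x3 convolutional layers (stride 1, padding 1) interleaved with five
    2x2 average-pooling subsampling layers, followed by fully connected layers and per-task
    outputs pooled from the same final layer."""

    def __init__(self, num_emotions=7, num_aus=17):
        super().__init__()
        cfg = [64, 64, "P", 128, 128, "P", 256, 256, 256, "P", 512, 512, 512, "P", 512, 512, 512, "P"]
        layers, in_ch = [], 3
        for v in cfg:
            if v == "P":
                layers.append(nn.AvgPool2d(kernel_size=2, stride=2))   # subsampling layer 304
            else:
                layers += [nn.Conv2d(in_ch, v, kernel_size=3, stride=1, padding=1), nn.ReLU(inplace=True)]
                in_ch = v
        self.features = nn.Sequential(*layers)
        # Fully connected layers 306; the last one has no activation and all task
        # predictions are read from its output (assumes a 96x96 input, giving 3x3x512 features).
        self.fc = nn.Sequential(
            nn.Linear(512 * 3 * 3, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_emotions + num_aus + 2),
        )
        self.num_emotions, self.num_aus = num_emotions, num_aus

    def forward(self, x):
        out = self.fc(torch.flatten(self.features(x), 1))
        emotion_logits = out[:, : self.num_emotions]          # softmax applied at the loss (layer 310)
        au_probs = torch.sigmoid(out[:, self.num_emotions : self.num_emotions + self.num_aus])
        valence_arousal = out[:, -2:]                         # direct continuous outputs
        return emotion_logits, au_probs, valence_arousal
```

For example, FaceBehaviourNet()(torch.randn(4, 3, 96, 96)) returns emotion logits of shape (4, 7), action unit probabilities of shape (4, 17) and valence/arousal values of shape (4, 2).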
Figure 4 shows a flow diagram of an example method of training a neural network for facial behaviour analysis of facial images. The flow diagram corresponds to the methods described above in relation to Figure 2.
At operation 4.1, a plurality of facial images is input into a neural network. The neural network is described by a set of neural network parameters (e.g. the weights and biases of various layers of the neural network).
The plurality of facial images comprises one or more first facial images from a first dataset, the first training dataset comprising facial images each with a known emotion label; and one or more second facial images from a second dataset, the second training dataset comprising facial images each with known action unit activations. The facial images of the first dataset may each be associated with derived action unit activations. The derived action unit activations may be determined based on the known emotion label of said image. One or more facial images of the second dataset may each be associated with a derived emotion label. Each derived emotion label may be determined based on the known action unit activations of said facial image. The derived action unit activations and the derived emotion labels may be determined based on a set of prototypical action unit activations for each emotion label and a set of weighted action unit activations for each emotional label. The derived emotion labels may be a distribution over a set of possible emotion labels.
The plurality of facial images may additionally comprise one or more third facial images from a third dataset, the third training dataset comprising facial images each with a known valence and/or arousal value.
At operation 4.2, a predicted emotion label and predicted action unit activations are generated for each of the plurality of facial images using the neural network. The predicted emotion labels may comprise a probability measure across the set of possible emotions. In some embodiments, predicted valence and/or arousal values are additionally generated.
The neural network processes an input facial image through a plurality of neural network layers to output the predicted emotion label and predicted action unit activations (and optionally the predicted valence and arousal values).
At operation 4.3, the parameters of the neural network are updated. The updating is in dependence on a comparison of predicted emotion labels of the one or more first facial images to the known emotion labels of said one or more first facial images; and predicted action unit activations of the one or more second facial images to the known action unit activations of said one or more second facial images. Updating the parameters of the neural network may further be in dependence on comparison of predicted valence and/or arousal values of the one or more third facial images to the known valence and/or arousal values of said one or more third facial images.
In some embodiments, the parameters of the neural network may be updated in dependence on a comparison of the predicted action unit activations of the one or more first facial images to derived action unit activations of the one or more first facial images, as described above in relation to Figure 2. The parameters of the neural network may be updated in dependence on a comparison of the predicted emotion labels of one or more second facial images to corresponding derived emotion labels of the one or more second facial images, as described above in relation to Figure 2. The parameters of the neural network may be updated in dependence on a comparison of derived emotional labels to the predicted emotion labels. The parameters of the neural network may be updated in dependence on a comparison of a distribution of the predicted action unit activations to an expected distribution of action unit activations, as described above in relation to Figure 2. The expected distribution of action unit activations may be determined based on the predicted emotional labels of the plurality of facial images. The expected distribution of action unit activations may further be determined based on a modelled relationship between emotion labels and action unit activations.
The comparison may be performed by a multi-task objective function. The multitask objective function may comprise: an emotion loss comparing predicted emotion labels to known emotion labels; and an activation loss comparing predicted action unit activations to known action unit activations. The emotion loss and/or activation loss may each comprise a cross entropy loss. The multi-task objective function may further comprise a continuous loss comparing predicted valence and/or arousal values to known valence and/or arousal values. The continuous loss may comprise a measure of a concordance correlation coefficient between the predicted valence and/or arousal values and the known valence and/or arousal values.
An optimisation procedure may be used to update the parameters of the neural network. An example of such an optimisation procedure is a gradient descent algorithm, though other methods may alternatively be used.
Operations 4.1 to 4.3 may be iterated until a threshold condition is met. The threshold condition may be a predetermined number of iterations or training epochs. Alternatively, the threshold condition may be that a change in the value of the multitask loss function between iterations falls below a predetermined threshold value. Other examples of threshold conditions for terminating the training procedure may alternatively be used.
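A minimal training-loop sketch tying operations 4.1 to 4.3 together is given below. It assumes the model and loss sketches given earlier in this description are in scope, that three per-task data loaders yield the batches described above, and that training stops on a change-in-loss threshold; all names, loss weights and hyperparameters are illustrative assumptions:

```python
import torch

def train(model, emotion_loader, au_loader, va_loader,
          lambda_au=1.0, lambda_va=1.0, max_epochs=50, tol=1e-4):
    optimiser = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    previous_loss = float("inf")
    for epoch in range(max_epochs):
        epoch_loss = 0.0
        for (x_e, y_emo), (x_a, y_au, au_mask), (x_v, y_va) in zip(emotion_loader, au_loader, va_loader):
            # Operation 4.1: concatenate the per-task batches into a single training batch 210.
            x = torch.cat([x_e, x_a, x_v], dim=0)
            # Operation 4.2: generate predictions for every image in the batch.
            emotion_logits, au_probs, va_pred = model(x)
            n_e, n_a = x_e.size(0), x_a.size(0)
            # Operation 4.3: compare predictions with the known labels of each sub-batch
            # using the loss sketches given earlier, and update the parameters.
            loss = emotion_loss(emotion_logits[:n_e], y_emo)
            loss = loss + lambda_au * action_unit_loss(au_probs[n_e:n_e + n_a], y_au, au_mask)
            loss = loss + lambda_va * valence_arousal_loss(va_pred[n_e + n_a:], y_va)
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
            epoch_loss += loss.item()
        # Threshold condition: stop when the change in the objective falls below `tol`.
        if abs(previous_loss - epoch_loss) < tol:
            break
        previous_loss = epoch_loss
```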
Figure 5 shows a flow diagram of an example method of facial behaviour analysis of facial images using a trained neural network. The flow diagram corresponds to the methods described above in relation to Figures 1a and 1b. The parameters of the neural network may be determined using any of the training methods described herein (i.e. the neural network is trained using any of the training methods described herein).
At operation 5.1, a facial image is input into a neural network.
-19 -At operation 5.2, the image is processed using a neural network. The facial image is processed through a plurality of neural network layers.
At operation 5.3, a predicted emotion label for the facial image, predicted action unit activations for the facial image and/or a predicted valence and/or arousal value for the facial image is output from the neural network.
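A minimal inference sketch for operations 5.1 to 5.3 is shown below, assuming the FaceBehaviourNet sketch above, a preprocessed 96 x 96 facial image and an illustrative 0.5 activation threshold; in practice trained parameters would be loaded and a real image preprocessed as described earlier:

```python
import torch

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

model = FaceBehaviourNet()
model.eval()                                 # trained parameters would normally be loaded here

face = torch.randn(1, 3, 96, 96)             # placeholder for a preprocessed facial image 102
with torch.no_grad():
    emotion_logits, au_probs, va = model(face)

predicted_emotion = EMOTIONS[emotion_logits.argmax(dim=1).item()]   # predicted emotion label 106
predicted_aus = (au_probs > 0.5).squeeze(0).tolist()                # predicted AU activations 108
valence, arousal = va.squeeze(0).tolist()                           # predicted valence/arousal 110
```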
Figure 6 shows a schematic example of a system/apparatus for performing any of the methods described herein. The system/apparatus shown is an example of a computing device. It will be appreciated by the skilled person that other types of computing devices/systems may alternatively be used to implement the methods described herein, such as a distributed computing system.
The apparatus (or system) 600 comprises one or more processors 602. The one or more processors control operation of other components of the system/apparatus 600. The one or more processors 602 may, for example, comprise a general purpose processor. The one or more processors 602 may be a single core device or a multiple core device. The one or more processors 602 may comprise a central processing unit (CPU) or a graphical processing unit (GPU). Alternatively, the one or more processors 602 may comprise specialised processing hardware, for instance a RISC processor or programmable hardware with embedded firmware. Multiple processors may be included.
The system/apparatus comprises a working or volatile memory 604. The one or more processors may access the volatile memory 604 in order to process data and may control the storage of data in memory. The volatile memory 604 may comprise RAM of any type, for example Static RAM (SRAM), Dynamic RAM (DRAM), or it may comprise Flash memory, such as an SD-Card.
The system/apparatus comprises a non-volatile memory 606. The non-volatile memory 606 stores a set of operation instructions 608 for controlling the operation of the processors 602 in the form of computer readable instructions. The non-volatile memory 606 may be a memory of any kind such as a Read Only Memory (ROM), a Flash memory or a magnetic drive memory.
The one or more processors 602 are configured to execute operating instructions 608 to cause the system/apparatus to perform any of the methods described herein. The operating instructions 608 may comprise code (i.e. drivers) relating to the hardware components of the system/apparatus 600, as well as code relating to the basic operation of the system/apparatus 600. Generally speaking, the one or more processors 602 execute one or more instructions of the operating instructions 608, which are stored permanently or semi-permanently in the non-volatile memory 606, using the volatile memory 604 to temporarily store data generated during execution of said operating instructions 608.
Implementations of the methods described herein may be realised in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These may include computer program products (such as software stored on e.g. magnetic discs, optical disks, memory, Programmable Logic Devices) comprising computer readable instructions that, when executed by a computer, such as that described in relation to Figure 6, cause the computer to perform one or more of the methods described herein.
Any system feature as described herein may also be provided as a method feature, and vice versa. As used herein, means plus function features may be expressed alternatively in terms of their corresponding structure. In particular, method aspects may be applied to system aspects, and vice versa.
Furthermore, any, some and/or all features in one aspect can be applied to any, some and/or all features in any other aspect, in any appropriate combination. It should also be appreciated that particular combinations of the various features described and defined in any aspects of the invention can be implemented and/or supplied and/or used independently.
Although several embodiments have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles of this disclosure, the scope of which is defined in the claims.

Claims (18)

  1. A computer implemented method of training a neural network for facial behaviour analysis, the method comprising: inputting, to the neural network, a plurality of facial images, the plurality of facial images comprising: one or more first facial images from a first dataset, the first training dataset comprising facial images each with a known emotion label; and one or more second facial images from a second dataset, the second training dataset comprising facial images each with known action unit activations, generating, for each of the plurality of facial images and using the neural network, a predicted emotion label and predicted action unit activations; and updating parameters of the neural network in dependence on a comparison of: predicted emotion labels of the one or more first facial images to the known emotion labels of the one or more first facial images; and predicted action unit activations of the one or more second facial images to the known action unit activations of the one or more second facial images.
  2. The method of claim 1, wherein the comparison is performed by a multi-task objective function, the multitask objective function comprising: an emotion loss comparing predicted emotion labels to known emotion labels; and an activation loss comparing predicted action unit activations to known action unit activations.
  3. The method of claim 2, wherein the emotion loss and/or activation loss comprise a cross entropy loss.
  4. The method of any preceding claim, wherein the plurality of facial images further comprises one or more third facial images from a third dataset, the third training dataset comprising facial images each with a known valence and/or arousal value, wherein the method further comprises generating a predicted valence and/or arousal value, and wherein updating the parameters of the neural network is further in dependence on comparison of predicted valence and/or arousal values of the one or more third facial images to the known valence and/or arousal values of the one or more third facial images.
  5. The method of claim 4, wherein the comparison is performed by a multi-task objective function, the multitask objective function comprising a continuous loss comparing predicted valence and/or arousal values to known valence and/or arousal values.
  6. The method of claim 5, wherein the continuous loss comprises a measure of a concordance correlation coefficient between the predicted valence and/or arousal values and the known valence and/or arousal values.
  7. The method of any preceding claim: wherein the facial images of the first dataset are each associated with derived action unit activations, the derived action unit activations determined based on the known emotion label of said image; and wherein the parameters of the neural network are updated in dependence on a comparison of the predicted action unit activations of the one or more first facial images to the derived action unit activations of the one or more first facial images.
  8. The method of claim 7: wherein one or more facial images of the second dataset are each associated with a derived emotion label, each derived emotion label determined based on the known action unit activations of said facial image; and wherein the parameters of the neural network are updated in dependence on a comparison of the predicted emotion labels of one or more second facial images to the corresponding derived emotion labels of the one or more second facial images.
  9. The method of claim 8, wherein the derived action unit activations and the derived emotion labels are determined based on a set of prototypical action unit activations for each emotion label and a set of weighted action unit activations for each emotional label.
  10. The method of any of claims 8 or 9, wherein the derived emotion labels are a distribution over a set of possible emotion labels.
11. The method of claim 10: wherein the predicted emotion labels comprise a probability measure across the set of possible emotions; and wherein the parameters of the neural network are updated in dependence on a comparison of the derived emotion labels to the predicted emotion labels.
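Conversely, for claims 8, 10 and 11 a derived emotion label can be built as a distribution over the possible emotions by scoring how well the known action unit activations match each emotion's prototype set. The sketch below reuses the hypothetical `EMOTION_TO_AUS` table from the previous sketch and is only one plausible scoring rule.

```python
def derive_emotion_distribution(au_activations):
    """Distribution over emotion labels derived from known AU activations.

    au_activations maps AU index -> 0/1 (or a confidence in [0, 1]).
    """
    scores = {}
    for emotion, prototype in EMOTION_TO_AUS.items():
        matched = sum(weight * au_activations.get(au, 0.0) for au, weight in prototype.items())
        total = sum(prototype.values())
        scores[emotion] = matched / total if total else 0.0
    normaliser = sum(scores.values()) or 1.0
    return {emotion: score / normaliser for emotion, score in scores.items()}
```

Such a distribution can then be compared with the predicted probability measure of claim 11, for example with a cross entropy or divergence term.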
12. The method of any preceding claim, wherein the parameters of the neural network are updated in dependence on a comparison of a distribution of the predicted action unit activations to an expected distribution of action unit activations, the expected distribution of action unit activations being determined based on the predicted emotion labels of the plurality of facial images.
13. The method of claim 12, wherein the expected distribution of action unit activations is further determined based on a modelled relationship between emotion labels and action unit activations.
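For claims 12 and 13, one plausible reading is a batch-level term that compares the empirical distribution of predicted action unit activations with the distribution expected from the predicted emotions under a modelled P(AU | emotion) table. The divergence used below (mean squared error) and all tensor names are assumptions for the sketch; the claims do not specify a particular measure.

```python
import torch
import torch.nn.functional as F

def distribution_matching_loss(au_probs, emotion_probs, au_given_emotion):
    """Compare observed and expected action unit activation rates over a batch.

    au_probs:         (batch, num_aus) sigmoid outputs of the action unit head
    emotion_probs:    (batch, num_emotions) softmax outputs of the emotion head
    au_given_emotion: (num_emotions, num_aus) modelled P(AU active | emotion)
    """
    observed = au_probs.mean(dim=0)                            # empirical AU rates
    expected = (emotion_probs @ au_given_emotion).mean(dim=0)  # rates implied by emotions
    return F.mse_loss(observed, expected)
```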
14. The method of any preceding claim, wherein: the one or more first facial images from a first dataset comprises a plurality of images from the first dataset; and the one or more second facial images from a second dataset comprises a plurality of images from the second dataset.
15. The method of any preceding claim, wherein the method is iterated until a threshold condition is met.
16. A computer implemented method of facial behaviour analysis, the method comprising: inputting a facial image into a neural network; processing the image using the neural network; and outputting, from the neural network, a predicted emotion label for the facial image, predicted action unit activations for the facial image and/or a predicted valence and/or arousal value for the facial image, wherein the neural network comprises a plurality of parameters determined using the training method of any preceding claim.
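A minimal sketch of the kind of multi-head network claim 16 runs at inference time is shown below, assuming a shared feature extractor with three output heads; the backbone, layer sizes and output dimensions are placeholders rather than the architecture disclosed in the patent.

```python
import torch
import torch.nn as nn

class FacialBehaviourNet(nn.Module):
    """Shared backbone with emotion, action unit and valence/arousal heads (sketch)."""

    def __init__(self, num_emotions=7, num_aus=30, feature_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(      # placeholder feature extractor
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feature_dim), nn.ReLU(),
        )
        self.emotion_head = nn.Linear(feature_dim, num_emotions)  # predicted emotion label
        self.au_head = nn.Linear(feature_dim, num_aus)            # predicted AU activations
        self.va_head = nn.Linear(feature_dim, 2)                  # valence and arousal

    def forward(self, image):
        features = self.backbone(image)
        return self.emotion_head(features), self.au_head(features), self.va_head(features)

# Usage: one 224x224 RGB facial image.
model = FacialBehaviourNet()
emotion_logits, au_logits, valence_arousal = model(torch.randn(1, 3, 224, 224))
```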
17. A system comprising: one or more processors; and a memory, the memory comprising computer readable instructions that, when executed by the one or more processors, cause the system to perform the method of any preceding claim.
18. A computer program product comprising computer readable instructions that, when executed by a computing device, cause the computing device to perform the method of any of claims 1-16.
GB1909300.4A 2019-06-28 2019-06-28 Facial behaviour analysis Active GB2588747B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB1909300.4A GB2588747B (en) 2019-06-28 2019-06-28 Facial behaviour analysis
PCT/GB2020/051503 WO2020260862A1 (en) 2019-06-28 2020-06-22 Facial behaviour analysis
CN202080044948.0A CN113994341A (en) 2019-06-28 2020-06-22 Facial behavior analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1909300.4A GB2588747B (en) 2019-06-28 2019-06-28 Facial behaviour analysis

Publications (3)

Publication Number Publication Date
GB201909300D0 GB201909300D0 (en) 2019-08-14
GB2588747A true GB2588747A (en) 2021-05-12
GB2588747B GB2588747B (en) 2021-12-08

Family

ID=67540031

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1909300.4A Active GB2588747B (en) 2019-06-28 2019-06-28 Facial behaviour analysis

Country Status (3)

Country Link
CN (1) CN113994341A (en)
GB (1) GB2588747B (en)
WO (1) WO2020260862A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022116771A1 (en) * 2020-12-02 2022-06-09 Zhejiang Dahua Technology Co., Ltd. Method for analyzing emotion shown in image and related devices
US20230282028A1 (en) * 2022-03-04 2023-09-07 Opsis Pte., Ltd. Method of augmenting a dataset used in facial expression analysis

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112395922A (en) * 2019-08-16 2021-02-23 杭州海康威视数字技术股份有限公司 Face action detection method, device and system
TWI811605B (en) * 2020-12-31 2023-08-11 宏碁股份有限公司 Method and system for mental index prediction
CN112949708B (en) * 2021-02-26 2023-10-24 平安科技(深圳)有限公司 Emotion recognition method, emotion recognition device, computer equipment and storage medium
CN113822183B (en) * 2021-09-08 2024-02-27 北京科技大学 Zero sample expression recognition method and system based on AU-EMO association and graph neural network
US20230153377A1 (en) * 2021-11-12 2023-05-18 Covera Health Re-weighted self-influence for labeling noise removal in medical imaging data
CN115497146B (en) * 2022-10-18 2023-04-07 支付宝(杭州)信息技术有限公司 Model training method and device and identity verification method and device
CN116721457B (en) * 2023-08-09 2023-10-24 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Multi-task facial expression recognition method guided by emotion priori topological graph

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090285456A1 (en) * 2008-05-19 2009-11-19 Hankyu Moon Method and system for measuring human response to visual stimulus based on changes in facial expression
US20140316881A1 (en) * 2013-02-13 2014-10-23 Emotient Estimation of affective valence and arousal with automatic facial expression measurement
CN109344760A (en) * 2018-09-26 2019-02-15 江西师范大学 A kind of construction method of natural scene human face expression data collection
CN109508654A (en) * 2018-10-26 2019-03-22 中国地质大学(武汉) Merge the human face analysis method and system of multitask and multiple dimensioned convolutional neural networks
CN109919047A (en) * 2019-02-18 2019-06-21 山东科技大学 A kind of mood detection method based on multitask, the residual error neural network of multi-tag

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107463888B (en) * 2017-07-21 2020-05-19 竹间智能科技(上海)有限公司 Face emotion analysis method and system based on multi-task learning and deep learning
CN108229298A (en) * 2017-09-30 2018-06-29 北京市商汤科技开发有限公司 The training of neural network and face identification method and device, equipment, storage medium
CN108388890A (en) * 2018-03-26 2018-08-10 南京邮电大学 A kind of neonatal pain degree assessment method and system based on human facial expression recognition
CN108764207B (en) * 2018-06-07 2021-10-19 厦门大学 Face expression recognition method based on multitask convolutional neural network
CN109800734A (en) * 2019-01-30 2019-05-24 北京津发科技股份有限公司 Human facial expression recognition method and device

Also Published As

Publication number Publication date
CN113994341A (en) 2022-01-28
GB201909300D0 (en) 2019-08-14
GB2588747B (en) 2021-12-08
WO2020260862A1 (en) 2020-12-30

Similar Documents

Publication Publication Date Title
GB2588747A (en) Facial behaviour analysis
US10628705B2 (en) Combining convolution and deconvolution for object detection
EP4145353A1 (en) Neural network construction method and apparatus
CN109840530A (en) The method and apparatus of training multi-tag disaggregated model
Glauner Deep convolutional neural networks for smile recognition
CN110991380A (en) Human body attribute identification method and device, electronic equipment and storage medium
CN111832592B (en) RGBD significance detection method and related device
EP4006777A1 (en) Image classification method and device
US20190114532A1 (en) Apparatus and method for convolution operation of convolution neural network
US20220405528A1 (en) Task-based image masking
EP3996035A1 (en) Methods and systems for training convolutional neural networks
Güçlü et al. End-to-end semantic face segmentation with conditional random fields as convolutional, recurrent and adversarial networks
KR20230133854A (en) Cross-domain adaptive learning
CN112232355A (en) Image segmentation network processing method, image segmentation device and computer equipment
CN111783996A (en) Data processing method, device and equipment
CN115018039A (en) Neural network distillation method, target detection method and device
CN116844032A (en) Target detection and identification method, device, equipment and medium in marine environment
Rosenfeld et al. Fast on-device adaptation for spiking neural networks via online-within-online meta-learning
CN113409329B (en) Image processing method, image processing device, terminal and readable storage medium
Yifei et al. Flower image classification based on improved convolutional neural network
CN110163049B (en) Face attribute prediction method, device and storage medium
CN113705307A (en) Image processing method, device, equipment and storage medium
Uddin et al. A convolutional neural network for real-time face detection and emotion & gender classification
Shukla et al. Deep Learning Model to Identify Hide Images using CNN Algorithm
Sopov et al. Design efficient technologies for context image analysis in dialog HCI using self-configuring novelty search genetic algorithm

Legal Events

Date Code Title Description
COOA Change in applicant's name or ownership of the application

Owner name: HUAWEI TECHNOLOGIES CO., LTD

Free format text: FORMER OWNER: FACESOFT LTD.