CN113723287A - Micro-expression identification method, device and medium based on bidirectional cyclic neural network - Google Patents


Info

Publication number
CN113723287A
CN113723287A (publication) · CN202111007435.5A (application)
Authority
CN
China
Prior art keywords
micro
expression
neural network
recurrent neural
bidirectional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111007435.5A
Other languages
Chinese (zh)
Inventor
孔德松 (Kong Desong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202111007435.5A priority Critical patent/CN113723287A/en
Publication of CN113723287A publication Critical patent/CN113723287A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/044: Recurrent networks, e.g. Hopfield networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to the field of expression recognition technologies, and more particularly, to a method, an apparatus, and a medium for recognizing micro-expressions based on a bidirectional recurrent neural network. The method comprises the following steps: preprocessing raw micro-expression data to obtain a facial behavior coding sequence; performing facial behavior coding on the sequence to obtain micro-expression coding feature vectors; inputting the feature vectors into a trained bidirectional recurrent neural network; extracting micro-expression time sequence features from the feature vectors output by the network based on a temporal attention mechanism; and identifying the expression category of the micro-expression from the time sequence features through a Softmax function. The method and apparatus can fully exploit the features contained in micro-expressions and improve recognition accuracy.

Description

Micro-expression identification method, device and medium based on bidirectional cyclic neural network
Technical Field
The present application relates to the field of expression recognition technologies, and more particularly, to a method, an apparatus, and a medium for recognizing micro expressions based on a bidirectional recurrent neural network.
Background
Expression is one of the important ways humans convey emotion; facial expression recognition can reflect human behavior and is a foundation of human-computer interaction. Facial expressions can be divided into micro-expressions and macro-expressions according to their duration and intensity. As the external manifestation of a person's true inner emotion, the micro-expression plays a role that linguistic cues cannot replace in fields such as psychotherapy, judicial interrogation, and claim-settlement risk control, and has therefore become a research hotspot in academia and industry.
Compared with macro-expressions, micro-expressions involve small facial movements and short durations, which makes feature extraction for mathematical modeling very difficult and leads to poor classification results. There are two main approaches to micro-expression recognition. One is based on the temporal characteristics of micro-expression data: it performs modeling analysis with an optical-flow algorithm, represents the changes between micro-expressions by dynamic optical-flow features between consecutive frames, and builds a whole-process optical-flow model by analyzing the trend of change between frames. The other uses a texture-feature extraction algorithm to extract texture-change features within the same frame, based on the spatial characteristics of the micro-expression data. However, both methods make insufficient use of the features present in micro-expressions, resulting in low detection accuracy, and both require a large amount of prior knowledge; this not only consumes considerable manpower and material resources but also tends to lose important micro-expression features.
Neural networks were originally inspired by, and designed to simulate, biological nervous systems; they are formed by a large number of interconnected nodes, or neurons. A neural network adjusts its weights according to changes in the input, improving system behavior and automatically learning a model capable of solving the problem. The Long Short-Term Memory network (LSTM) is a special form of recurrent neural network (RNN). As a nonlinear model, the LSTM can serve as a complex nonlinear unit for constructing larger deep neural networks, and effectively alleviates the vanishing- and exploding-gradient problems of multilayer neural networks as well as the long-term dependency problem of RNNs.
Disclosure of Invention
In view of the above technical defects, the invention aims to fuse a GRU neural network and an LSTM neural network to construct a bidirectional recurrent neural network consisting of an input layer, hidden layers, and an output layer. In the first hidden layer, the LSTM acts as the forward propagation and the GRU as the backward propagation; the second hidden layer has the opposite structure, with the GRU as the forward propagation and the LSTM as the backward propagation, so that the continuous correlation between frames of micro-expression data is fully exploited. Meanwhile, to better extract the spatio-temporal features of micro-expressions, a spatial attention mechanism and a temporal attention mechanism are added on top of the bidirectional recurrent neural network, so that consecutive frames of the micro-expression video are weighted by importance and local features within the same frame receive different weights. The embodiments of the application provide a micro-expression recognition method, apparatus, device, and medium based on a bidirectional recurrent neural network. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview; it is intended neither to identify key or critical elements nor to delineate the scope of the embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that follows.
The application provides a micro-expression recognition method based on a bidirectional cyclic neural network, which comprises the following steps:
preprocessing original micro-expression data to obtain a facial behavior coding sequence;
carrying out facial behavior coding on the facial behavior coding sequence to obtain micro expression coding feature vectors;
inputting the micro expression coding feature vector into a trained bidirectional recurrent neural network;
extracting micro-expression time sequence features from the feature vectors output by the bidirectional recurrent neural network based on a temporal attention mechanism;
and identifying the expression category of the micro expression by the micro expression time sequence characteristic through a Softmax function.
Further, before inputting the micro expression coding feature vector into the trained bidirectional recurrent neural network, the method further includes: and extracting the micro expression space features of the micro expression coding feature vectors through a space attention mechanism.
Specifically, the formula for extracting the micro expression spatial features from the micro expression coding feature vector through a spatial attention mechanism is as follows:
S_{t,k} = w σ(f_{t,k} w_x + c_{t-1} w_{c,k} + b)
where S_{t,k} denotes the micro-expression spatial feature of the k-th action in the t-th frame of the micro-expression video, σ denotes the mapping function, f_{t,k} denotes the micro-expression coding feature vector of the k-th action in the t-th frame, c_{t-1} denotes the memory feature at time t-1, w, w_x, and w_{c,k} denote parameter matrices, and b is the bias.
Preferably, the bidirectional recurrent neural network is constructed based on a GRU neural network and an LSTM neural network, and has an input layer, a hidden layer, and an output layer.
Further, the hidden layers comprise a first hidden layer and a second hidden layer, the first hidden layer takes an LSTM neural network as forward propagation and a GRU neural network as backward propagation; the second hidden layer takes GRU neural network as forward propagation and LSTM neural network as backward propagation.
Specifically, the formula for extracting the micro-expression time sequence features from the feature vectors output by the bidirectional recurrent neural network through a time attention mechanism is as follows:
V_t = f(w_t(h_t w_{ht} + h′ w_{h′t}) + b)
where V_t denotes the micro-expression time sequence feature at time t, f denotes an activation function, w_t, w_{ht}, and w_{h′t} denote trainable parameter matrices, h_t denotes the implicit feature at time t, h′ denotes the implicit feature at the previous time, and b is the bias.
In the above embodiment, the bidirectional recurrent neural network is trained. Likewise, the spatial attention mechanism that extracts micro-expression spatial features from the coding feature vectors before they are input into the network, and the temporal attention mechanism that extracts micro-expression time sequence features from the feature vectors output by the network, are both trained. The training of the bidirectional recurrent neural network therefore comprises the following steps:
selecting a training data set;
initializing iteration times, batch processing size and learning rate;
the loss function is configured as a cross entropy loss function;
inputting the training data set into the bidirectional recurrent neural network for training;
when training reaches a first preset number of times, extracting micro expression spatial features from a training data set input into the bidirectional cyclic neural network by adding a spatial attention mechanism, and extracting micro expression time sequence features from an output result of the bidirectional cyclic neural network based on a time attention mechanism;
and when the iteration times reach a second preset time, terminating the training.
A second aspect of the invention provides a micro-expression recognition device based on a bidirectional recurrent neural network, comprising:
the acquisition module is used for preprocessing the original micro-expression data to obtain a facial behavior coding sequence;
the coding module is used for carrying out facial behavior coding on the facial behavior coding sequence to obtain micro expression coding feature vectors;
the bidirectional circulation module is used for inputting the micro expression coding feature vector into a trained bidirectional circulation neural network;
the time mechanism module is used for extracting the micro-expression time sequence characteristics from the characteristic vectors output by the bidirectional recurrent neural network based on a time attention mechanism;
and the expression identification module is used for identifying the expression category of the micro expression by the micro expression time sequence characteristics through a Softmax function.
Preferably, the device further comprises a space mechanism module, wherein the space mechanism module is used for extracting the micro expression space features from the micro expression coding feature vectors through a space attention mechanism before inputting the micro expression coding feature vectors into the trained bidirectional recurrent neural network.
A third aspect of the invention provides a computer device comprising a memory and a processor, the memory having stored therein computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of:
preprocessing original micro-expression data to obtain a facial behavior coding sequence;
carrying out facial behavior coding on the facial behavior coding sequence to obtain micro expression coding feature vectors;
inputting the micro expression coding feature vector into a trained bidirectional recurrent neural network;
extracting micro-expression time sequence features from the feature vectors output by the bidirectional recurrent neural network based on a temporal attention mechanism;
and identifying the expression category of the micro expression by the micro expression time sequence characteristic through a Softmax function.
A fourth aspect of the present invention provides a computer storage medium having stored thereon a plurality of instructions adapted to be loaded by a processor and to carry out the steps of:
preprocessing original micro-expression data to obtain a facial behavior coding sequence;
carrying out facial behavior coding on the facial behavior coding sequence to obtain micro expression coding feature vectors;
inputting the micro expression coding feature vector into a trained bidirectional recurrent neural network;
extracting micro-expression time sequence features from the feature vectors output by the bidirectional recurrent neural network based on a temporal attention mechanism;
and identifying the expression category of the micro expression by the micro expression time sequence characteristic through a Softmax function.
Beneficial effects of the application: the micro-expression recognition method based on the bidirectional recurrent neural network constructs the network from LSTM and GRU deep-learning units and fuses a spatio-temporal attention strategy, so that consecutive frames of the micro-expression video are weighted by importance and local features within the same frame receive different weights; the features in micro-expressions can thus be fully recognized, and recognition accuracy improved. Moreover, the method does not require a large amount of prior knowledge, saving considerable manpower and material resources, and can be applied to intelligent vehicle-damage assessment, improving vehicle-insurance processing efficiency and reducing the risk of fraudulent video claims.
Drawings
FIG. 1 shows a schematic flow chart of the method of embodiment 1 of the present application;
FIG. 2 is a schematic diagram showing the operation process of the bidirectional recurrent neural network in embodiment 2 of the present application;
FIG. 3 is a schematic diagram showing the structure of the apparatus according to embodiment 3 of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 5 is a schematic diagram of a storage medium provided in an embodiment of the present application.
Detailed Description
Hereinafter, embodiments of the present application will be described with reference to the accompanying drawings. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present application. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present application. It will be apparent to one skilled in the art that the present application may be practiced without one or more of these details. In other instances, well-known features of the art have not been described in order to avoid obscuring the present application.
It should be noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments in accordance with the application. As used herein, the singular is intended to include the plural unless the context clearly dictates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Exemplary embodiments according to the present application will now be described in more detail with reference to the accompanying drawings. These exemplary embodiments may, however, be embodied in many different forms and should not be construed as limited to only the embodiments set forth herein. The figures are not drawn to scale, wherein certain details may be exaggerated and omitted for clarity. The shapes of various regions, layers, and relative sizes and positional relationships therebetween shown in the drawings are merely exemplary, and deviations may occur in practice due to manufacturing tolerances or technical limitations, and a person skilled in the art may additionally design regions/layers having different shapes, sizes, relative positions, as actually required.
Example 1:
in this embodiment, a micro expression recognition method based on a bidirectional recurrent neural network is implemented, as shown in fig. 1, including the following steps:
S1, preprocessing the original micro-expression data to obtain a facial behavior coding sequence;
S2, performing facial behavior coding on the facial behavior coding sequence to obtain micro-expression coding feature vectors;
S3, inputting the micro-expression coding feature vectors into a trained bidirectional recurrent neural network;
S4, extracting micro-expression time sequence features from the feature vectors output by the bidirectional recurrent neural network based on a temporal attention mechanism;
and S5, identifying the expression category of the micro-expression from the micro-expression time sequence features through a Softmax function.
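Steps S1 through S5 can be sketched end to end as a single pipeline. The sketch below is illustrative only: the function names, shapes, and toy stand-in components are assumptions, not the patent's implementation.

```python
import numpy as np

def recognize_micro_expression(frames, encode_aus, birnn, temporal_pool, w_cls, b_cls):
    """Illustrative pipeline for steps S1-S5; every component is a stand-in."""
    coded = np.stack([encode_aus(f) for f in frames])   # S1/S2: (T, d_in) coding feature vectors
    hidden = birnn(coded)                               # S3: (T, d_h) per-frame features
    pooled = temporal_pool(hidden)                      # S4: (d_h,) time sequence feature
    logits = pooled @ w_cls + b_cls                     # S5: Softmax classification
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(np.argmax(probs)), probs

# Toy stand-ins to exercise the pipeline
rng = np.random.default_rng(0)
T, d_raw, d_in, d_h, n_cls = 6, 16, 8, 5, 3
frames = [rng.normal(size=d_raw) for _ in range(T)]
enc_w = rng.normal(size=(d_raw, d_in))
rnn_w = rng.normal(size=(d_in, d_h))

label, probs = recognize_micro_expression(
    frames,
    encode_aus=lambda f: f @ enc_w,              # stand-in for FACS coding
    birnn=lambda x: np.tanh(x @ rnn_w),          # stand-in for the trained BiRNN
    temporal_pool=lambda h: h.mean(axis=0),      # stand-in for temporal attention
    w_cls=rng.normal(size=(d_h, n_cls)),
    b_cls=np.zeros(n_cls),
)
print(label)
```

Each lambda would be replaced by the corresponding trained component in a real system; only the data flow between the five steps is being demonstrated here.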
Further, before inputting the micro expression coding feature vector into the trained bidirectional recurrent neural network, the method further includes: and extracting the micro expression space features of the micro expression coding feature vectors through a space attention mechanism.
Specifically, the formula for extracting the micro expression spatial features from the micro expression coding feature vector through a spatial attention mechanism is as follows:
S_{t,k} = w σ(f_{t,k} w_x + c_{t-1} w_{c,k} + b)
where S_{t,k} denotes the micro-expression spatial feature of the k-th action in the t-th frame of the micro-expression video, σ denotes the mapping function, f_{t,k} denotes the micro-expression coding feature vector of the k-th action in the t-th frame, c_{t-1} denotes the memory feature at time t-1, w, w_x, and w_{c,k} denote parameter matrices, and b is the bias.
Preferably, the bidirectional recurrent neural network is constructed based on a GRU neural network and an LSTM neural network, and has an input layer, a hidden layer, and an output layer.
Further, the hidden layers comprise a first hidden layer and a second hidden layer, the first hidden layer takes an LSTM neural network as forward propagation and a GRU neural network as backward propagation; the second hidden layer takes GRU neural network as forward propagation and LSTM neural network as backward propagation.
Specifically, the formula for extracting the micro-expression time sequence features from the feature vectors output by the bidirectional recurrent neural network through a time attention mechanism is as follows:
V_t = f(w_t(h_t w_{ht} + h′ w_{h′t}) + b)
where V_t denotes the micro-expression time sequence feature at time t, f denotes an activation function, w_t, w_{ht}, and w_{h′t} denote trainable parameter matrices, h_t denotes the implicit feature at time t, h′ denotes the implicit feature at the previous time, and b is the bias.
In the above embodiment, the bidirectional recurrent neural network is trained. Likewise, the spatial attention mechanism that extracts micro-expression spatial features from the coding feature vectors before they are input into the network, and the temporal attention mechanism that extracts micro-expression time sequence features from the feature vectors output by the network, are both trained. The training of the bidirectional recurrent neural network therefore comprises the following steps:
selecting a training data set;
initializing iteration times, batch processing size and learning rate;
the loss function is configured as a cross entropy loss function;
inputting the training data set into the bidirectional recurrent neural network for training;
when training reaches a first preset number of times, extracting micro expression spatial features from a training data set input into the bidirectional cyclic neural network by adding a spatial attention mechanism, and extracting micro expression time sequence features from an output result of the bidirectional cyclic neural network based on a time attention mechanism;
and when the iteration times reach a second preset time, terminating the training.
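The training schedule above can be sketched as follows. The `ToyBiRNN` class and its `train_epoch` method are hypothetical stand-ins; only the schedule logic (cross-entropy training, attention enabled after a first preset count, stopping at a second preset count) comes from the text.

```python
class ToyBiRNN:
    """Stand-in model whose train_epoch just returns a shrinking loss."""
    def __init__(self):
        self.epochs_done = 0

    def train_epoch(self, dataset, batch_size, lr, use_attention):
        loss = 1.0 / (self.epochs_done + 1)   # pretend cross-entropy loss
        if use_attention:
            loss *= 0.5                        # attention assumed to help
        self.epochs_done += 1
        return loss

def train_birnn(model, dataset, epochs, batch_size, lr, attention_start):
    """Train plainly at first, switch on the spatial and temporal attention
    mechanisms after a first preset number of iterations, and terminate
    when a second preset count is reached."""
    history = []
    for epoch in range(epochs):                    # second preset count
        use_attention = epoch >= attention_start   # first preset count
        history.append(model.train_epoch(dataset, batch_size, lr, use_attention))
    return history

hist = train_birnn(ToyBiRNN(), dataset=None, epochs=5, batch_size=32,
                   lr=1e-3, attention_start=3)
print(hist)
```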
Example 2:
the embodiment implements a micro-expression recognition method based on a bidirectional recurrent neural network, and the specific steps are described in detail as follows.
First, the raw micro-expression data is preprocessed to obtain a facial behavior coding sequence. Specifically, the raw micro-expression data is a picture or a video; in this embodiment a micro-expression video is preferred. Preprocessing the raw micro-expression data includes dividing it by time granularity. A moment in the real world corresponds to a real number on the absolute time axis; since micro-expressions involve very compact temporal changes, a fine time granularity is required.
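Fine time-granularity division can be illustrated as sampling the frame nearest each granularity tick. The function, frame rate, and tick size below are illustrative assumptions; the patent does not specify concrete values.

```python
import numpy as np

def slice_fine_grained(video_frames, fps, granularity_ms=10):
    """Illustrative fine time-granularity sampling: pick the frame nearest
    each granularity tick on the absolute time axis (parameters assumed)."""
    duration_s = len(video_frames) / fps
    ticks = np.arange(0.0, duration_s, granularity_ms / 1000.0)
    idx = np.clip(np.round(ticks * fps).astype(int), 0, len(video_frames) - 1)
    return [video_frames[i] for i in idx]

frames = list(range(200))    # 200 frames at 200 fps -> 1 s of high-speed video
sampled = slice_fine_grained(frames, fps=200, granularity_ms=10)
print(len(sampled))
```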
Second, facial behavior coding is performed on the facial behavior coding sequence to obtain micro-expression coding feature vectors. The human face has 42 muscles, controlled by different regions of the brain. Some are directly controlled by consciousness and are called voluntary muscles; others are not consciously controlled, or are even driven subconsciously, and are called involuntary muscles, which is why micro-expressions can reflect a person's psychological state. For example, a smile without wrinkles at the corners of the eyes suggests a feigned smile, and a questioner raising the eyebrows suggests that he already knows the answer to the question. The Facial Action Coding System (FACS) classifies a large number of human expressions and defines reference standards for facial muscle movements: AU1 denotes the inner frontalis (inner brow raiser), AU2 the outer frontalis (outer brow raiser), AU4 the brow lowerer, and so on. The facial behavior coding sequence is coded with FACS to obtain the micro-expression coding feature vectors.
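One simple way to turn a frame's detected action units into a fixed-length coding feature vector is a multi-hot (or intensity-valued) encoding. This is a hedged sketch: the AU vocabulary and the encoding scheme are illustrative assumptions, not the patent's actual coding.

```python
import numpy as np

# Hypothetical AU vocabulary; the patent names AU1, AU2, and AU4 among others.
AU_VOCAB = ["AU1", "AU2", "AU4", "AU6", "AU12"]

def encode_frame_aus(active_aus, intensities=None):
    """Encode one frame's detected action units as a fixed-length vector
    (a multi-hot sketch, not the patent's actual FACS coding)."""
    vec = np.zeros(len(AU_VOCAB))
    for i, au in enumerate(active_aus):
        if au in AU_VOCAB:
            k = AU_VOCAB.index(au)
            vec[k] = intensities[i] if intensities is not None else 1.0
    return vec

# A feigned smile: AU12 (lip corner puller) without AU6 (cheek raiser,
# the muscle that produces crow's-feet wrinkles at the eye corners)
v = encode_frame_aus(["AU12"])
print(v)
```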
And thirdly, extracting the micro expression space features from the micro expression coding feature vector through a space attention mechanism.
When a person expresses an inner state, the facial muscles perform corresponding actions and produce texture features. In this embodiment, a spatial attention mechanism assigns different degrees of importance, and hence different weights, to face regions in order to represent the spatial features of the micro-expression. Specifically, the micro-expression coding feature vector is passed through the spatial attention mechanism, and the weights of the facial texture features are determined from the features in the memory cell of the bidirectional recurrent neural network at the previous time t-1 together with the feature vector at the current time t. The formula for extracting the micro-expression spatial features is as follows:
S_{t,k} = w σ(f_{t,k} w_x + c_{t-1} w_{c,k} + b)
where S_{t,k} denotes the micro-expression spatial feature of the k-th action in the t-th frame of the micro-expression video, σ denotes the mapping function, f_{t,k} denotes the micro-expression coding feature vector of the k-th action in the t-th frame, c_{t-1} denotes the memory feature at time t-1, w, w_x, and w_{c,k} denote parameter matrices, and b is the bias.
Further, a normalization operation is performed on the obtained weights of the action elements. The normalization function is
p_{t,k} = exp(S_{t,k}) / Σ_j exp(S_{t,j})
where p represents the normalized weight and reflects the relative importance of the facial action. The normalized weight is therefore multiplied by the corresponding feature to estimate the final micro-expression spatial feature:
f̃_{t,k} = p_{t,k} · f_{t,k}
where p is the normalized weight and f̃_{t,k} denotes the spatial feature obtained from the spatial attention mechanism.
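The scoring, normalization, and re-weighting steps of the spatial attention mechanism can be sketched together as below. The logistic choice for σ and the tensor shapes are assumptions; the patent only calls σ a mapping function.

```python
import numpy as np

def spatial_attention(f_t, c_prev, w, w_x, w_ck, b):
    """Spatial attention per the formulas above:
    S_{t,k} = w * sigma(f_{t,k} w_x + c_{t-1} w_{c,k} + b),
    followed by a softmax-style normalization and re-weighting.
    sigma is taken to be the logistic function (an assumption)."""
    sigma = lambda x: 1.0 / (1.0 + np.exp(-x))
    # scores per action k; f_t: (K, d), w_x: (d,), w_ck: (K,)
    s = w * sigma(f_t @ w_x + c_prev * w_ck + b)   # S_{t,k}, shape (K,)
    p = np.exp(s - s.max())
    p /= p.sum()                                    # normalized weights p_{t,k}
    return p[:, None] * f_t                         # weighted spatial features

rng = np.random.default_rng(1)
K, d = 4, 6
feat = spatial_attention(rng.normal(size=(K, d)), c_prev=0.3, w=1.0,
                         w_x=rng.normal(size=d), w_ck=rng.normal(size=K), b=0.0)
print(feat.shape)
```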
And fourthly, inputting the micro-expression spatial characteristics into a trained bidirectional circulation neural network.
Preferably, the bidirectional recurrent neural network is constructed based on a GRU neural network and an LSTM neural network, and has an input layer, a hidden layer, and an output layer. Further, the hidden layers comprise a first hidden layer and a second hidden layer; the first hidden layer takes the LSTM neural network as the forward propagation and the GRU neural network as the backward propagation, while the second hidden layer takes the GRU neural network as the forward propagation and the LSTM neural network as the backward propagation.
When the LSTM neural network serves as the forward propagation, the first hidden layer comprises an input gate, a forget gate, and an output gate. The forget gate discards information: it outputs a value in [0, 1] that acts on the cell value of the previous time, expressed mathematically as
f_t = σ(w_f · [h_{t-1}, x_t] + b_f)
where σ denotes the mapping function, w_f denotes the weight of the forget gate, and b_f denotes the bias of the forget gate. The input gate loads new feature information and updates the cell state; the output of the tanh layer gives the current cell candidate:
i_t = σ(w_i · [h_{t-1}, x_t] + b_i)
c_t = tanh(w_c · [h_{t-1}, x_t] + b_c)
The output gate provides the output of the network: the σ-mapped output is multiplied by that of the tanh layer,
o_t = σ(w_o · [h_{t-1}, x_t] + b_o)
h_t = o_t * tanh(c_t)
where σ denotes the mapping function, h_t denotes the implicit feature at time t, h_{t-1} denotes the implicit feature at time t-1, and b_o denotes the bias of the output gate.
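A single LSTM step matching the gate equations above can be written as follows. Mapping the concatenated [h_{t-1}, x_t] to all four gates with one weight matrix is a common layout and an assumption here; the patent does not fix a parameterization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step: forget, input, and output gates plus cell update."""
    z = np.concatenate([h_prev, x_t]) @ W + b   # all gate pre-activations
    H = h_prev.shape[0]
    f_t = sigmoid(z[:H])                        # forget gate
    i_t = sigmoid(z[H:2*H])                     # input gate
    c_tilde = np.tanh(z[2*H:3*H])               # candidate cell state
    o_t = sigmoid(z[3*H:])                      # output gate
    c_t = f_t * c_prev + i_t * c_tilde          # new cell state
    h_t = o_t * np.tanh(c_t)                    # new implicit feature
    return h_t, c_t

rng = np.random.default_rng(2)
Hs, D = 3, 5
h, c = np.zeros(Hs), np.zeros(Hs)
h, c = lstm_step(rng.normal(size=D), h, c,
                 rng.normal(size=(Hs + D, 4 * Hs)), np.zeros(4 * Hs))
print(h.shape, c.shape)
```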
Further, the GRU is constructed as the backward propagation of the hidden layer; its structure consists of a reset gate and an update gate. The update gate loads new feature information and filters redundant information:
z_t = σ(w_z · [h_{t-1}, x_t])
The reset gate filters the feature information of the previous time:
r_t = σ(w_r · [h_{t-1}, x_t])
In the two expressions above, σ denotes the mapping function, w_z and w_r denote weights, t denotes time, h_{t-1} denotes the output of the previous time, h_t denotes the output of the current time, and x_t denotes the input at the current time.
The reset gate and the update gate act, respectively, on the candidate output and the output state of the GRU hidden layer:
h̃_t = tanh(w_h̃ · [r_t * h_{t-1}, x_t])
h_t = (1 - z_t) * h_{t-1} + z_t * h̃_t
where h̃_t denotes the candidate output of the GRU hidden layer, tanh denotes the activation function, w_h̃ denotes a weight, r_t and z_t denote the reset gate and the update gate respectively, h_{t-1} denotes the output of the previous time, h_t denotes the output of the current time, and x_t denotes the input at the current time. The feature data is acted on by the LSTM's input, forget, and output gates to obtain the forward-propagation output; this output is then taken as the input of the GRU and acted on by the update and reset gates as the backward propagation, so that the network parameters of the first hidden layer are continuously adjusted.
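A single GRU step matching the update/reset-gate equations above can be sketched as follows; the weight layout is an assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W_z, W_r, W_h):
    """One GRU step: update gate z_t, reset gate r_t, candidate h~_t,
    and the convex combination h_t = (1 - z_t) h_{t-1} + z_t h~_t."""
    hx = np.concatenate([h_prev, x_t])
    z_t = sigmoid(hx @ W_z)                        # update gate
    r_t = sigmoid(hx @ W_r)                        # reset gate
    h_tilde = np.tanh(np.concatenate([r_t * h_prev, x_t]) @ W_h)
    return (1.0 - z_t) * h_prev + z_t * h_tilde    # new hidden state

rng = np.random.default_rng(3)
Hs, D = 3, 5
h = gru_step(rng.normal(size=D), np.zeros(Hs),
             rng.normal(size=(Hs + D, Hs)),
             rng.normal(size=(Hs + D, Hs)),
             rng.normal(size=(Hs + D, Hs)))
print(h.shape)
```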
The second hidden layer is likewise composed of an LSTM and a GRU; the difference is that the GRU serves as the forward propagation of the hidden layer, acting through its update gate and reset gate, while the LSTM serves as the backward propagation, continuously adjusting the network parameters of the second hidden layer; its input is the backward-propagation output of the first hidden layer. On this basis, a multi-layer bidirectional recurrent neural network is realized under the combined action of multiple LSTMs and GRUs.
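The GRU equations above can be sketched as a single time step in NumPy. This is a minimal illustration only, not the patented implementation; the weight shapes and the choice of the logistic sigmoid for σ are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h_prev, x_t, w_z, w_r, w_h):
    """One GRU step: update gate z_t, reset gate r_t, candidate state, output h_t."""
    concat = np.concatenate([h_prev, x_t])           # [h_{t-1}, x_t]
    z_t = sigmoid(w_z @ concat)                      # update gate: admit new information
    r_t = sigmoid(w_r @ concat)                      # reset gate: filter previous state
    reset_concat = np.concatenate([r_t * h_prev, x_t])
    h_tilde = np.tanh(w_h @ reset_concat)            # candidate hidden state
    return (1.0 - z_t) * h_prev + z_t * h_tilde      # h_t = (1 - z_t) h_{t-1} + z_t h̃_t

# toy dimensions: hidden size 4, input size 3 (assumed for illustration)
rng = np.random.default_rng(0)
h_dim, x_dim = 4, 3
w_z = rng.standard_normal((h_dim, h_dim + x_dim))
w_r = rng.standard_normal((h_dim, h_dim + x_dim))
w_h = rng.standard_normal((h_dim, h_dim + x_dim))
h_t = gru_step(np.zeros(h_dim), rng.standard_normal(x_dim), w_z, w_r, w_h)
print(h_t.shape)  # (4,)
```

Because the gates are bounded in (0, 1) and the candidate state in (-1, 1), the output remains bounded; in the bidirectional network such a step runs over the reversed time order for the backward pass.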
Fifthly, the micro-expression time-sequence features are extracted from the feature vectors output by the bidirectional recurrent neural network based on a temporal attention mechanism.
Specifically, the formula for extracting the micro-expression time sequence features from the feature vectors output by the bidirectional recurrent neural network through a time attention mechanism is as follows:
V_t = f(w_t * (h_t * w_ht + h′ * w_h′t) + b)

where V_t represents the micro-expression time-sequence feature at time t, f represents an activation function, w_t, w_ht and w_h′t represent trainable parameter matrices, h_t represents the implicit feature at time t, h′ represents the implicit feature at the previous time step, and b is an offset.
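A minimal NumPy sketch of this temporal-attention formula follows; treating the trainable parameters as square matrices and taking the activation f as tanh are assumptions, since the patent does not fix either.

```python
import numpy as np

def temporal_attention(h_t, h_prev, w_t, w_ht, w_hpt, b):
    """V_t = f(w_t (h_t w_ht + h' w_h't) + b), with f assumed to be tanh."""
    combined = w_ht @ h_t + w_hpt @ h_prev   # weighted current and previous implicit features
    return np.tanh(w_t @ combined + b)       # micro-expression time-sequence feature V_t

rng = np.random.default_rng(1)
d = 4  # assumed feature dimension
h_t, h_prev = rng.standard_normal(d), rng.standard_normal(d)
w_t, w_ht, w_hpt = (rng.standard_normal((d, d)) for _ in range(3))
v_t = temporal_attention(h_t, h_prev, w_t, w_ht, w_hpt, rng.standard_normal(d))
print(v_t.shape)  # (4,)
```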
Sixthly, the expression category of the micro-expression is identified from the micro-expression time-sequence features through a Softmax function. Specifically, the bidirectional recurrent neural network performs the classification: the micro-expression time-sequence features are passed through a Softmax function to identify the expression category of the micro-expression.
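The final Softmax classification step can be illustrated as follows; the seven-way emotion label set and the toy logits are assumptions for illustration, not specified by the patent.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over class logits."""
    shifted = logits - np.max(logits)   # subtract max to avoid overflow
    exp = np.exp(shifted)
    return exp / exp.sum()

# assumed label set and logits, e.g. from the timing-feature classifier head
labels = ["happiness", "surprise", "anger", "disgust", "fear", "sadness", "neutral"]
logits = np.array([0.2, 2.5, 0.1, -0.3, 0.0, 0.4, 1.0])
probs = softmax(logits)
print(labels[int(np.argmax(probs))])  # prints "surprise"
```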
In the above embodiment, the bidirectional recurrent neural network is trained. That is, before the micro-expression coding feature vectors are input into the trained network, the extraction of the micro-expression spatial features through the spatial attention mechanism and the extraction of the micro-expression time-sequence features from the network's output feature vectors through the temporal attention mechanism are both covered by the training. The training steps of the bidirectional recurrent neural network therefore include:
selecting a training data set;
initializing the number of iterations, the batch size and the learning rate;
configuring the loss function as a cross-entropy loss function;
inputting the training data set into the bidirectional recurrent neural network for training;
when training reaches a first preset number of iterations, extracting micro-expression spatial features by adding a spatial attention mechanism to the training data input into the bidirectional recurrent neural network, and extracting micro-expression time-sequence features from the output of the bidirectional recurrent neural network based on a temporal attention mechanism;
and terminating training when the number of iterations reaches a second preset number.
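The configuration steps above (iteration counts, batch size, learning rate, cross-entropy loss) can be sketched as follows; every hyperparameter value here is an illustrative assumption, and the loss is the per-sample cross-entropy form.

```python
import numpy as np

# assumed hyperparameters for illustration (the patent leaves them unspecified)
config = {
    "iterations": 2000,          # second preset number: stop training here
    "batch_size": 32,
    "learning_rate": 1e-3,
    "add_attention_after": 500,  # first preset number: enable the attention mechanisms
}

def cross_entropy(probs, label):
    """Cross-entropy loss for one sample given its softmax probabilities."""
    return -np.log(probs[label] + 1e-12)  # small epsilon guards against log(0)

loss = cross_entropy(np.array([0.1, 0.7, 0.2]), label=1)
print(round(loss, 4))  # -log(0.7) ≈ 0.3567
```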
In summary, this embodiment constructs a bidirectional recurrent neural network based on deep-learning networks such as LSTM and GRU and integrates a spatio-temporal attention strategy, realizing the selection of different importance degrees for consecutive frames of the micro-expression video and of different weights for local features within the same frame, which solves the problem of low detection precision caused by insufficient feature utilization in micro-expression recognition.
It should be noted that the steps described above represent the overall technical logic; in a specific implementation, each step may be realized as several sub-steps, steps may be executed in parallel, the order of steps may be adjusted, and so on.
Example 3:
At present, micro-expression recognition mainly uses two methods. The first is based on the time-sequence characteristics of the micro-expression data and performs modeling analysis in combination with an optical-flow algorithm: the differential changes between micro-expressions are represented by the dynamic optical-flow features between consecutive frames, and an optical-flow model of the whole process is established by analyzing the trend of change between frames. Although micro-expression recognition based on the optical-flow algorithm has low feature-computation complexity and is easy to implement, it is difficult to improve its classification precision because it requires artificial prior knowledge of the samples. The second method adopts a texture-feature extraction algorithm to extract texture-change features within the same frame according to the spatial characteristics of the micro-expression data: Gabor filtering is applied to the micro-expression image to extract the texture features of the facial expression, the phase information of the micro-expression is analyzed through the Riesz spectrum, and the micro-expression is then classified. Texture features can effectively capture the spatial characteristics of facial expressions, but micro-expressions carry strong time-sequence information, so recognition based on a single texture feature is at a disadvantage.
To address these technical defects, this embodiment provides a micro-expression recognition method based on a bidirectional recurrent neural network. The method fuses a GRU neural network and an LSTM neural network to construct a bidirectional recurrent neural network composed of an input layer, hidden layers and an output layer: in the first hidden layer the LSTM acts as the forward propagation and the GRU as the backward propagation, while the second hidden layer has the opposite structure, with the GRU as the forward propagation and the LSTM as the backward propagation, so that the continuous correlation between frames of micro-expression data is fully utilized. Meanwhile, to better extract the spatio-temporal features of micro-expressions, a spatial attention mechanism and a temporal attention mechanism are added on top of the bidirectional recurrent neural network, realizing the selection of different importance degrees for consecutive frames of the micro-expression video and of different weights for local features within the same frame.
Fig. 2 is a schematic diagram of the operation process of the bidirectional recurrent neural network. As shown in fig. 2, the GRU is a Gated Recurrent Unit network and the LSTM is a Long Short-Term Memory network. A plain RNN (recurrent neural network) is limited by the defects of its own structure and cannot guarantee the long-term retention of information; the GRU and LSTM, which improve on that structure, can effectively overcome the long-term dependence problem of the RNN. As fig. 2 also shows, the original micro-expression data is first preprocessed, then processed through the spatial attention mechanism, the bidirectional recurrent neural network and the temporal attention mechanism, and finally the expression category of the micro-expression is recognized by a Softmax function. The logical steps are specifically as follows. Step 101: preprocess the original micro-expression data to obtain a facial behavior coding sequence; perform facial behavior coding on the sequence to obtain micro-expression coding feature vectors; extract the micro-expression spatial features from the coding feature vectors through a spatial attention mechanism; input the micro-expression spatial features into the trained bidirectional recurrent neural network; extract the micro-expression time-sequence features from the feature vectors output by the network through a temporal attention mechanism; and identify the expression category of the micro-expression from the time-sequence features through a Softmax function.
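The six logical steps above can be sketched as a pipeline of stub functions. This is a toy illustration only: every function body is a placeholder assumption standing in for the processing stage described in the text, not the patented implementation.

```python
import numpy as np

def preprocess(raw_frames):            # step 1: raw data -> facial behavior coding sequence
    return raw_frames

def encode_facial_behavior(seq):       # step 2: coding sequence -> coding feature vectors
    return seq.reshape(seq.shape[0], -1)

def spatial_attention(feats):          # step 3: spatial attention weighting (placeholder)
    return feats * 1.0

def bidirectional_rnn(feats):          # step 4: LSTM/GRU bidirectional pass (placeholder)
    return feats

def temporal_attention(feats):         # step 5: temporal attention pooling over frames
    return feats.mean(axis=0)

def classify(vec, n_classes=7):        # step 6: Softmax over expression categories
    logits = vec[:n_classes]
    exp = np.exp(logits - logits.max())
    return int(np.argmax(exp / exp.sum()))

frames = np.random.default_rng(2).standard_normal((16, 8, 8))  # 16 toy frames
category = classify(temporal_attention(bidirectional_rnn(
    spatial_attention(encode_facial_behavior(preprocess(frames))))))
print(category)
```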
Further, before the micro-expression coding feature vectors are input into the trained bidirectional recurrent neural network, the method further includes: extracting the micro-expression spatial features from the micro-expression coding feature vectors through a spatial attention mechanism.
Specifically, the formula for extracting the micro expression spatial features from the micro expression coding feature vector through a spatial attention mechanism is as follows:
S_t,k = w * σ(f_t,k * w_x + c_{t-1} * w_c,k + b)

where S_t,k represents the micro-expression spatial feature of the k-th action in the t-th frame image of the micro-expression video, σ represents the mapping function, f_t,k represents the micro-expression coding feature vector of the k-th action in the t-th frame image, c_{t-1} represents the memory feature at time t-1, w_x and w_c,k represent parameter matrices, and b represents the offset.
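A minimal NumPy sketch of the spatial-attention formula follows; taking σ as the logistic sigmoid and w as a scalar gain are assumptions, since the patent does not fix either.

```python
import numpy as np

def spatial_attention_score(f_tk, c_prev, w, w_x, w_ck, b):
    """S_{t,k} = w * sigma(f_{t,k} w_x + c_{t-1} w_{c,k} + b); sigma assumed logistic."""
    z = f_tk @ w_x + c_prev @ w_ck + b       # weighted coding feature plus memory feature
    return w * (1.0 / (1.0 + np.exp(-z)))    # spatial feature of action k in frame t

rng = np.random.default_rng(3)
d = 5  # assumed feature dimension
s = spatial_attention_score(rng.standard_normal(d), rng.standard_normal(d),
                            w=0.8, w_x=rng.standard_normal((d, d)),
                            w_ck=rng.standard_normal((d, d)), b=0.1)
print(s.shape)  # (5,)
```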
Preferably, the bidirectional recurrent neural network is constructed from GRU and LSTM neural networks and has an input layer, a hidden layer and an output layer.
Further, the hidden layers comprise a first hidden layer and a second hidden layer, the first hidden layer takes an LSTM neural network as forward propagation and a GRU neural network as backward propagation; the second hidden layer takes GRU neural network as forward propagation and LSTM neural network as backward propagation.
Specifically, the formula for extracting the micro-expression time sequence features from the feature vectors output by the bidirectional recurrent neural network through a time attention mechanism is as follows:
V_t = f(w_t * (h_t * w_ht + h′ * w_h′t) + b)

where V_t represents the micro-expression time-sequence feature at time t, f represents an activation function, w_t, w_ht and w_h′t represent trainable parameter matrices, h_t represents the implicit feature at time t, h′ represents the implicit feature at the previous time step, and b is an offset.
Generally, a micro-expression is a piece of continuous data in which different frames are strongly correlated, and predicting the current and future output from historical information alone cannot achieve a good classification effect. Therefore, the bidirectional recurrent neural network is constructed on the basis of GRU and LSTM, fusing historical information and future information. It consists of an input layer, hidden layers and an output layer: in the first hidden layer the LSTM acts as the forward propagation and the GRU as the backward propagation; the second hidden layer has the opposite structure, with the GRU as the forward propagation and the LSTM as the backward propagation, making full use of the continuous correlation between frames of micro-expression data. The backward propagation is consistent with the forward propagation in network structure, except that the forward input is processed in reverse along the time dimension. The input layer takes the micro-expression feature vectors obtained by the spatial attention mechanism as input. The LSTM comprises three modules: the first is the forget gate, which filters useless information out of the memory unit at time t; the second is the input gate, which determines the new feature information admitted at time t; the third is the output gate, which determines the response at time t. Compared with the LSTM, the GRU has a simpler internal structure and is easier to compute for large-magnitude data. Its input comprises two parts: the hidden state h_{t-1} of the previous time step and the input x_t of the current time step t. The GRU layer comprises two modules; one is the update gate, which controls the updating of feature information: the larger its value, the more information from the current time t is retained and the more information from time t-1 is discarded.
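A single LSTM time step with the forget, input and output gates described above can be sketched as follows in NumPy. The gate weight shapes, the zero biases and the textbook gate formulas are assumptions rather than the patent's specification.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(h_prev, c_prev, x_t, w_f, w_i, w_c, w_o, b_f, b_i, b_c, b_o):
    """One LSTM step: forget gate, input gate, memory update, output gate."""
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(w_f @ z + b_f)          # forget gate: filter useless memory
    i_t = sigmoid(w_i @ z + b_i)          # input gate: admit new feature information
    c_tilde = np.tanh(w_c @ z + b_c)      # candidate memory content
    c_t = f_t * c_prev + i_t * c_tilde    # updated memory cell
    o_t = sigmoid(w_o @ z + b_o)          # output gate: determine the response
    h_t = o_t * np.tanh(c_t)              # h_t = o_t * tanh(c_t)
    return h_t, c_t

rng = np.random.default_rng(4)
h_dim, x_dim = 4, 3  # assumed toy dimensions
ws = [rng.standard_normal((h_dim, h_dim + x_dim)) for _ in range(4)]
bs = [np.zeros(h_dim) for _ in range(4)]
h_t, c_t = lstm_step(np.zeros(h_dim), np.zeros(h_dim),
                     rng.standard_normal(x_dim), *ws, *bs)
print(h_t.shape, c_t.shape)  # (4,) (4,)
```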
Further, the hidden layer function is:
F_I(t) = f(w_a * x_t + w_b * F_I(t-1))

F_O(t) = f(w_c * x_t + w_d * F_O(t+1))

y_t = g(w_e * F_I(t) + w_s * F_O(t))

where w_a, w_b, w_c, w_d, w_e and w_s are the trainable connection weights of the bidirectional RNN, F_I is the forward implicit vector, F_O is the reverse implicit vector, and f and g are activation functions.
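The hidden-layer function can be illustrated with a simple bidirectional scan; using plain tanh recurrences in place of the LSTM and GRU cells, and taking f = g = tanh, are simplifying assumptions for this sketch.

```python
import numpy as np

def bidirectional_layer(xs, w_a, w_b, w_c, w_d, w_e, w_s):
    """y_t = g(w_e F_I(t) + w_s F_O(t)), with f = g = tanh (assumed)."""
    T, d = xs.shape
    f_i = np.zeros((T, d))                        # forward implicit vectors F_I
    f_o = np.zeros((T, d))                        # reverse implicit vectors F_O
    for t in range(T):                            # forward pass over time
        prev = f_i[t - 1] if t > 0 else np.zeros(d)
        f_i[t] = np.tanh(w_a @ xs[t] + w_b @ prev)
    for t in range(T - 1, -1, -1):                # reverse pass over time
        nxt = f_o[t + 1] if t < T - 1 else np.zeros(d)
        f_o[t] = np.tanh(w_c @ xs[t] + w_d @ nxt)
    return np.tanh(f_i @ w_e.T + f_o @ w_s.T)     # combined outputs y_t

rng = np.random.default_rng(5)
T, d = 6, 4  # assumed toy sequence length and feature dimension
ws = [rng.standard_normal((d, d)) for _ in range(6)]
ys = bidirectional_layer(rng.standard_normal((T, d)), *ws)
print(ys.shape)  # (6, 4)
```

Each output y_t thus depends on both historical frames (through F_I) and future frames (through F_O), which is the property the embodiment relies on.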
In the above embodiment, the bidirectional recurrent neural network is trained. That is, before the micro-expression coding feature vectors are input into the trained network, the extraction of the micro-expression spatial features through the spatial attention mechanism and the extraction of the micro-expression time-sequence features from the network's output feature vectors through the temporal attention mechanism are both covered by the training. The training steps of the bidirectional recurrent neural network therefore include:
selecting a training data set;
initializing the number of iterations, the batch size and the learning rate;
configuring the loss function as a cross-entropy loss function;
inputting the training data set into the bidirectional recurrent neural network for training;
when training reaches a first preset number of iterations, extracting micro-expression spatial features by adding a spatial attention mechanism to the training data input into the bidirectional recurrent neural network, and extracting micro-expression time-sequence features from the output of the bidirectional recurrent neural network based on a temporal attention mechanism;
and terminating training when the number of iterations reaches a second preset number.
Example 4:
the present embodiment provides a micro expression recognition apparatus based on a bidirectional recurrent neural network, as shown in fig. 3, the apparatus includes:
the acquisition module 301 is configured to pre-process the original micro-expression data to obtain a facial behavior coding sequence;
the coding module 302 is configured to perform facial behavior coding on the facial behavior coding sequence to obtain micro expression coding feature vectors;
a bidirectional circulation module 303, configured to input the micro expression coding feature vector into a trained bidirectional circulation neural network;
the time mechanism module 304 is used for extracting the micro-expression time sequence features from the feature vectors output by the bidirectional recurrent neural network based on a time attention mechanism;
and the expression identification module 305 is configured to identify the expression category of the micro expression through the Softmax function according to the micro expression timing characteristic.
Preferably, the device further comprises a space mechanism module, wherein the space mechanism module is used for extracting the micro expression space features from the micro expression coding feature vectors through a space attention mechanism before inputting the micro expression coding feature vectors into the trained bidirectional recurrent neural network.
Referring next to fig. 4, a schematic diagram of an electronic device provided in some embodiments of the present application is shown. As shown in fig. 4, the electronic device 2 includes: the system comprises a processor 200, a memory 201, a bus 202 and a communication interface 203, wherein the processor 200, the communication interface 203 and the memory 201 are connected through the bus 202; the memory 201 stores a computer program that can be executed on the processor 200, and the processor 200 executes the computer program to execute the bi-directional recurrent neural network-based micro expression recognition method provided by any of the foregoing embodiments of the present application.
The memory 201 may include a high-speed Random Access Memory (RAM) and may further include non-volatile memory, such as at least one disk memory. The communication connection between this system's network element and at least one other network element is realized through at least one communication interface 203 (wired or wireless), over the Internet, a wide area network, a local area network, a metropolitan area network, or the like.
Bus 202 can be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The memory 201 is configured to store a program, and the processor 200 executes the program after receiving an execution instruction, and the bidirectional recurrent neural network-based micro expression recognition method disclosed in any embodiment of the present application may be applied to the processor 200, or implemented by the processor 200.
The processor 200 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 200. The Processor 200 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in the memory 201, and the processor 200 reads the information in the memory 201 and completes the steps of the method in combination with the hardware thereof.
The electronic equipment provided by the embodiment of the application and the micro expression identification method based on the bidirectional recurrent neural network provided by the embodiment of the application have the same inventive concept and have the same beneficial effects as the method adopted, operated or realized by the electronic equipment.
Referring to fig. 5, the computer readable storage medium is an optical disc 30, on which a computer program (i.e., a computer program product) is stored, and when the computer program is executed by a processor, the computer program executes the bidirectional recurrent neural network-based micro expression recognition method according to any of the foregoing embodiments.
Examples of the computer-readable storage medium may also include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory, or other optical and magnetic storage media, which are not described in detail herein.
It should be noted that: the algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose devices may be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. In addition, this application is not directed to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present application as described herein, and any descriptions of specific languages are provided above to disclose the best modes of the present application. In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description. Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the application and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: this application is intended to cover such departures from the present disclosure as come within known or customary practice in the art to which this invention pertains.
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A micro-expression recognition method based on a bidirectional cyclic neural network is characterized by comprising the following steps:
preprocessing original micro-expression data to obtain a facial behavior coding sequence;
carrying out facial behavior coding on the facial behavior coding sequence to obtain micro expression coding feature vectors;
inputting the micro expression coding feature vector into a trained bidirectional recurrent neural network;
extracting the micro-expression time sequence characteristics of the characteristic vector output by the bidirectional recurrent neural network based on a time attention mechanism;
and identifying the expression category of the micro expression by the micro expression time sequence characteristic through a Softmax function.
2. The micro-expression recognition method based on the bidirectional recurrent neural network of claim 1, wherein before inputting the micro-expression coding feature vectors into the trained bidirectional recurrent neural network, the method further comprises: extracting the micro-expression spatial features from the micro-expression coding feature vectors through a spatial attention mechanism.
3. The micro expression recognition method based on the bidirectional recurrent neural network as claimed in claim 2, wherein the formula for extracting the micro expression spatial features from the micro expression coded feature vector through the spatial attention mechanism is as follows:
S_t,k = w * σ(f_t,k * w_x + c_{t-1} * w_c,k + b)

wherein S_t,k represents the micro-expression spatial feature of the k-th action in the t-th frame image of the micro-expression video, σ represents a mapping function, f_t,k represents the micro-expression coding feature vector of the k-th action in the t-th frame image, c_{t-1} represents the memory feature at time t-1, w_x and w_c,k represent parameter matrices, and b represents the offset.
4. The micro-expression recognition method based on the bidirectional recurrent neural network of claim 1, wherein the bidirectional recurrent neural network is constructed from GRU and LSTM neural networks and has an input layer, a hidden layer, and an output layer.
5. The micro expression recognition method based on the bidirectional cyclic neural network of claim 1, wherein the hidden layers comprise a first hidden layer and a second hidden layer, the first hidden layer uses an LSTM neural network as forward propagation and a GRU neural network as backward propagation; the second hidden layer takes GRU neural network as forward propagation and LSTM neural network as backward propagation.
6. The micro-expression recognition method based on the bidirectional recurrent neural network as claimed in claim 5, wherein the formula for extracting the micro-expression time-sequence features from the feature vectors output by the bidirectional recurrent neural network through the temporal attention mechanism is as follows:
V_t = f(w_t * (h_t * w_ht + h′ * w_h′t) + b)

wherein V_t represents the micro-expression time-sequence feature at time t, f represents an activation function, w_t, w_ht and w_h′t represent trainable parameter matrices, h_t represents the implicit feature at time t, h′ represents the implicit feature at the previous time step, and b is an offset.
7. The micro expression recognition method based on the bidirectional recurrent neural network as claimed in claim 2, wherein the training step of the bidirectional recurrent neural network comprises:
selecting a training data set;
initializing the number of iterations, the batch size and the learning rate;
configuring the loss function as a cross-entropy loss function;
inputting the training data set into the bidirectional recurrent neural network for training;
when training reaches a first preset number of iterations, extracting micro-expression spatial features by adding a spatial attention mechanism to the training data input into the bidirectional recurrent neural network, and extracting micro-expression time-sequence features from the output of the bidirectional recurrent neural network based on a temporal attention mechanism;
and terminating training when the number of iterations reaches a second preset number.
8. A micro expression recognition device based on a bidirectional recurrent neural network, the device comprising:
the acquisition module is used for preprocessing the original micro-expression data to obtain a facial behavior coding sequence;
the coding module is used for carrying out facial behavior coding on the facial behavior coding sequence to obtain micro expression coding feature vectors;
the bidirectional circulation module is used for inputting the micro expression coding feature vector into a trained bidirectional circulation neural network;
the time mechanism module is used for extracting the micro-expression time sequence characteristics from the characteristic vectors output by the bidirectional recurrent neural network based on a time attention mechanism;
and the expression identification module is used for identifying the expression category of the micro expression by the micro expression time sequence characteristics through a Softmax function.
9. A computer device comprising a memory and a processor, wherein computer readable instructions are stored in the memory, which computer readable instructions, when executed by the processor, cause the processor to perform the steps of the method according to any one of claims 1 to 7.
10. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to carry out the steps of the method according to any one of claims 1 to 7.
CN202111007435.5A 2021-08-30 2021-08-30 Micro-expression identification method, device and medium based on bidirectional cyclic neural network Pending CN113723287A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111007435.5A CN113723287A (en) 2021-08-30 2021-08-30 Micro-expression identification method, device and medium based on bidirectional cyclic neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111007435.5A CN113723287A (en) 2021-08-30 2021-08-30 Micro-expression identification method, device and medium based on bidirectional cyclic neural network

Publications (1)

Publication Number Publication Date
CN113723287A true CN113723287A (en) 2021-11-30

Family

ID=78679319

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111007435.5A Pending CN113723287A (en) 2021-08-30 2021-08-30 Micro-expression identification method, device and medium based on bidirectional cyclic neural network

Country Status (1)

Country Link
CN (1) CN113723287A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091956A (en) * 2022-09-08 2023-05-09 北京中关村科金技术有限公司 Video-based micro-expression recognition method, device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287801A (en) * 2019-05-29 2019-09-27 中国电子科技集团公司电子科学研究院 A kind of micro- Expression Recognition algorithm
CN110348271A (en) * 2018-04-04 2019-10-18 山东大学 A kind of micro- expression recognition method based on long memory network in short-term
CN110516571A (en) * 2019-08-16 2019-11-29 东南大学 Inter-library micro- expression recognition method and device based on light stream attention neural network
CN111310672A (en) * 2020-02-19 2020-06-19 广州数锐智能科技有限公司 Video emotion recognition method, device and medium based on time sequence multi-model fusion modeling
CN112307958A (en) * 2020-10-30 2021-02-02 河北工业大学 Micro-expression identification method based on spatiotemporal appearance movement attention network
CN112527966A (en) * 2020-12-18 2021-03-19 重庆邮电大学 Network text emotion analysis method based on Bi-GRU neural network and self-attention mechanism
CN112800891A (en) * 2021-01-18 2021-05-14 南京邮电大学 Discriminative feature learning method and system for micro-expression recognition



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination