CN109271930B - Micro-expression recognition method, device and storage medium - Google Patents


Info

Publication number
CN109271930B
CN109271930B
Authority
CN
China
Prior art keywords
image
feature
micro
face
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811075329.9A
Other languages
Chinese (zh)
Other versions
CN109271930A (en)
Inventor
杜翠凤
温云龙
蒋仕宝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Jiesai Communication Planning And Design Institute Co ltd
GCI Science and Technology Co Ltd
Original Assignee
Guangzhou Jiesai Communication Planning And Design Institute Co ltd
GCI Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Jiesai Communication Planning And Design Institute Co ltd, GCI Science and Technology Co Ltd filed Critical Guangzhou Jiesai Communication Planning And Design Institute Co ltd
Priority to CN201811075329.9A priority Critical patent/CN109271930B/en
Publication of CN109271930A publication Critical patent/CN109271930A/en
Application granted granted Critical
Publication of CN109271930B publication Critical patent/CN109271930B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • G06V40/176Dynamic expression
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a micro-expression recognition method, a device and a storage medium. The method comprises the following steps: performing face feature detection on a pre-collected target face image, and acquiring at least five face feature points of the target face image; blocking the target face image according to the face feature points of the target face image and a preset image blocking rule to obtain a plurality of image blocks; and obtaining a micro-expression classification result from the plurality of image blocks through a pre-established convolutional neural network model. By blocking the target face image according to the acquired face feature points and the preset image blocking rule, and by using a convolutional neural network to recognize and classify the resulting image blocks, the method effectively improves the speed and accuracy of micro-expression recognition and thereby greatly improves the efficiency of micro-expression recognition work.

Description

Micro-expression recognition method, device and storage medium
Technical Field
The invention relates to the technical field of micro expression recognition, in particular to a micro expression recognition method, a device and a storage medium.
Background
Micro-expressions are very brief, involuntary and uncontrollable facial expressions that are revealed when a person attempts to suppress or hide a genuine emotion. They differ from ordinary expressions in that their duration is extremely short, only 1/25 to 1/5 of a second, so most people tend to be unaware of their presence. These fleeting, barely perceptible facial expressions are thought to be associated with self-defense mechanisms and to express suppressed emotions. However, the psychological and neural mechanisms underlying the generation and recognition of micro-expressions are still under study, micro-expressions occur infrequently, ordinary people are not good at recognizing them, and practitioners would like to benefit from micro-expressions in their work. Developing a micro-expression recognition system is therefore very necessary for the development and study of micro-expressions.
At present, the company Affectiva studies people's "micro-expressions" with deep learning algorithms. Here "micro-expressions" are understood in contrast to ordinary, dominant expressions: they are latent, recessive expressions. With the help of deep learning, the differences between a genuine smile, an ordinary smile and other micro-expressions are ultimately summarized by observing the texture, wrinkles and shape changes of the whole face. In general, the company identifies signs of smiling, laughing, delight, confusion and the like by locating 42 key points on a person's face and then tracking the differences between these key points over 0.2 seconds (approximately 30 frames per second). The specific method is as follows: when the user's expression changes, geometric features (point-to-point distances) between the other points and an anchor point at the nose are calculated, so as to determine whether the current user's micro-expression is, for example, a smile or surprise. This is a classical global computation method, since the distances between different combinations of points must be computed to determine what the micro-expression is. For example, smiling is the linkage of points at the corners of the mouth and points on the facial muscles, while surprise may involve a slight change at the mouth and small changes in the points around the eyes. However, the global computation method is slow and its micro-expression recognition accuracy is low, which results in low micro-expression recognition efficiency.
Disclosure of Invention
Based on the above, the invention provides a micro expression recognition method, a device and a storage medium, which can effectively improve the speed and the precision of micro expression recognition, thereby greatly improving the working efficiency of micro expression recognition.
In order to achieve the above object, an aspect of the embodiments of the present invention provides a micro expression recognition method, including:
detecting the face features of a pre-collected target face image, and acquiring at least five face feature points of the target face image;
according to the face characteristic points of the target face image and a preset image blocking rule, carrying out blocking processing on the target face image to obtain a plurality of image blocks;
and obtaining a micro-expression classification result through a pre-established convolutional neural network model according to the plurality of image blocks.
Preferably, the block-cutting processing is performed on the target face image according to the face feature points of the target face image and a preset image blocking rule to obtain a plurality of image blocks, and the method specifically includes:
establishing a plurality of first image segmentation lines on the target face image according to the face feature points;
extracting N first image segmentation lines from the plurality of first image segmentation lines according to a preset image segmentation rule, and performing segmentation processing on the target face image by adopting the extracted N first image segmentation lines; wherein N < 3;
and obtaining a plurality of image blocks through the plurality of first image segmentation lines.
Preferably, the method further comprises:
identifying non-deformation feature points and deformation feature points in the human face feature points through the convolutional neural network model according to the plurality of image blocks; wherein the non-deformed feature points are points of the face feature points that are not activated by neurons in the convolutional neural network model, and the deformed feature points are points of the face feature points that are activated by neurons in the convolutional neural network model;
establishing a plurality of second image segmentation lines on the target face image according to deformation feature points in the face feature points;
extracting M second image segmentation lines from the plurality of second image segmentation lines according to the image segmentation rule, and carrying out segmentation processing on the target face image by adopting the extracted M second image segmentation lines; wherein M < 3;
obtaining a plurality of deformation feature point image blocks in total through the plurality of second image segmentation lines;
and obtaining a micro-expression classification adjustment result through the convolutional neural network model according to the deformation feature point image blocks, and updating the current micro-expression classification result into the micro-expression classification adjustment result.
Preferably, the obtaining of the micro-expression classification result according to the plurality of image blocks through a pre-established convolutional neural network model specifically includes:
carrying out graying processing on the plurality of image blocks to obtain a plurality of grayed image blocks;
horizontally flipping the plurality of image blocks to obtain a plurality of horizontally flipped image blocks;
inputting the image blocks, the grayed image blocks and the horizontally flipped image blocks into the convolutional neural network model for convolution calculation to obtain a feature vector corresponding to the target face image, and performing PCA (principal component analysis) dimension reduction processing on the feature vector;
and obtaining a micro-expression classification result corresponding to the target face image through a multilayer classifier in the convolutional neural network model according to the feature vector after dimension reduction.
Preferably, before the target face image is subjected to block segmentation processing according to the face feature points of the target face image and a preset image blocking rule to obtain a plurality of image blocks, the method further includes:
and according to the face characteristic points, carrying out alignment processing on the target face image.
Preferably, the plurality of tiles includes: a first tile comprising a binocular feature, a second tile comprising a binocular feature and a nose feature, a third tile comprising a left eye feature and a left alar feature, a fourth tile comprising a right eye feature and a right alar feature, a fifth tile comprising a nose feature and a mouth feature, a sixth tile comprising a left alar feature and a left corner of mouth feature, a seventh tile comprising a right alar feature and a right corner of mouth feature, an eighth tile comprising an eyebrow feature, a ninth tile comprising a mouth feature, and a tenth tile comprising a full-face feature.
Another aspect of an embodiment of the present invention provides a micro expression recognition apparatus, including:
the system comprises a human face feature detection module, a face feature detection module and a face feature detection module, wherein the human face feature detection module is used for detecting human face features of a target human face image acquired in advance and acquiring at least five human face feature points of the target human face image;
the image blocking module is used for carrying out blocking processing on the target face image according to the face characteristic points of the target face image and a preset image blocking rule to obtain a plurality of image blocks;
and the micro expression identification module is used for obtaining a micro expression classification result through a pre-established convolutional neural network model according to the plurality of image blocks.
Preferably, the image blocking module includes:
the first image tangent line establishing unit is used for establishing a plurality of first image tangent lines on the target face image according to the face characteristic points;
the first segmentation unit is used for extracting N first image segmentation lines from the plurality of first image segmentation lines according to a preset image segmentation rule, and segmenting the target face image by adopting the extracted N first image segmentation lines; wherein N < 3;
and the first image block acquisition unit is used for obtaining a plurality of image blocks through the plurality of first image splitting lines.
An aspect of the embodiments of the present invention provides a micro expression recognition apparatus, including a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, where the processor executes the computer program to implement the micro expression recognition method as described above.
An aspect of the embodiments of the present invention provides a computer-readable storage medium, where the computer-readable storage medium includes a stored computer program, where when the computer program runs, the apparatus where the computer-readable storage medium is located is controlled to execute the method for recognizing a micro-expression as described above.
Compared with the prior art, the embodiments of the invention have the following beneficial effects. The micro-expression recognition method comprises: performing face feature detection on a pre-collected target face image, and acquiring at least five face feature points of the target face image; blocking the target face image according to the face feature points of the target face image and a preset image blocking rule to obtain a plurality of image blocks; and obtaining a micro-expression classification result from the plurality of image blocks through a pre-established convolutional neural network model. By blocking the target face image according to the acquired face feature points and the preset image blocking rule, and by using a convolutional neural network to recognize and classify the resulting image blocks, the method effectively improves the speed and accuracy of micro-expression recognition and thereby greatly improves the efficiency of micro-expression recognition work.
Drawings
Fig. 1 is a schematic flow chart of a micro expression recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic block diagram of a target face image;
FIG. 3 is a schematic diagram of several tiles obtained by the blocking process;
fig. 4 is a schematic block diagram of a micro expression recognition apparatus according to a second embodiment of the present invention;
fig. 5 is a schematic block diagram of a micro expression recognition apparatus according to a third embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Please refer to fig. 1, which is a flowchart illustrating a micro expression recognition method according to an embodiment of the present invention. The method can be executed by a micro expression recognition device, and specifically comprises the following steps:
s100: detecting the face features of a pre-collected target face image, and acquiring at least five face feature points of the target face image;
in the embodiment of the invention, the micro expression recognition device can be a computer, a mobile phone, a tablet computer, an access control device, a notebook computer or a server and other computing devices, and the micro expression recognition method can be integrated with the micro expression recognition device as one of the functional modules and executed by the micro expression recognition device.
In an embodiment of the present invention, the micro-expression recognition device receives a target face image. It should be noted that the invention does not limit the manner in which the target face image is acquired: for example, it may be captured by a camera carried by the micro-expression recognition device, or received from a network or from other devices in a wired or wireless manner. After the target face image is received, the micro-expression recognition device performs face detection on it to obtain at least five face feature points of the target face image. The target face image may comprise a plurality of image samples, for example consecutive frame samples extracted from a video, and for each frame sample at least five face feature points are acquired, for example the eyes (2 points), the nose, and the mouth corners (2 points). Further, more face feature points can be obtained by exploiting the symmetry of the face about the centers of the two eyes and the midpoint of the mouth boundary. The micro-expression recognition device can also directly receive a video and extract frame samples from it, the extracted target face image then comprising several consecutive or equally spaced frame samples, which are subjected to the following micro-expression recognition process.
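The embodiment does not prescribe a particular landmark detector. Purely as an illustration, the five feature points (two eyes, nose tip, two mouth corners) could be obtained with an off-the-shelf detector such as dlib; the model file name and landmark indices below are assumptions, not part of the invention:

```python
import cv2
import dlib

# Assumed, illustrative choices: dlib's frontal face detector and the public
# 68-point shape predictor; the patent itself does not specify a detector.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_five_feature_points(image_bgr):
    """Return five face feature points: two eye corners, nose tip, two mouth corners."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    # Indices follow the common 68-point annotation; eye centers could be used instead.
    indices = {"left_eye": 36, "right_eye": 45, "nose_tip": 30,
               "left_mouth_corner": 48, "right_mouth_corner": 54}
    return {name: (shape.part(i).x, shape.part(i).y) for name, i in indices.items()}
```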
S200: according to the face characteristic points of the target face image and a preset image blocking rule, carrying out blocking processing on the target face image to obtain a plurality of image blocks;
In the embodiment of the present invention, for example, according to the extracted five face feature points of the eyes (2 points), nose and mouth corners (2 points) and a preset image blocking rule, regions of the target face image are combined, such as a binocular region, a left eye + left alar region, a right eye + right alar region, a mouth region, a right alar + right mouth corner region, and the like, and the target face image is blocked according to these region combinations to obtain a plurality of image blocks. Blocking the target face image, on the one hand, expands the data samples and, on the other hand, reduces the selection of key points and improves the micro-expression recognition accuracy through the different combinations of face feature point regions.
S300: and obtaining a micro-expression classification result through a pre-established convolutional neural network model according to the plurality of image blocks.
In the embodiment of the present invention, the micro-expression recognition device uses the plurality of image blocks as input values of a pre-established convolutional neural network model, calculates a 160-dimensional Haar feature vector for each block, and finally combines the feature vectors of all blocks (i.e. concatenates the vectors), forming 160 × Q features in total. The feature vector is then classified with a softmax classifier in the convolutional neural network model. Specifically, the softmax classifier judges the input feature vector against the preset micro-expression feature classes (generally more than 10, such as happiness, pain, sadness, surprise, anger, confusion, disgust, apathy, sleepiness, contempt, lack of confidence and the like). Softmax is a multi-class classifier; since the same input data yields a probability for each class, the class with the maximum probability is taken as the micro-expression classification result in this embodiment. According to the invention, blocking the target face image by means of the extracted face feature points reduces the selection of key points, and performing convolution calculation on the extracted blocks with a convolutional neural network effectively improves the speed and accuracy of micro-expression recognition, greatly improving the efficiency of micro-expression recognition work.
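As a minimal sketch of this classification step, assuming a trained per-block feature extractor is already available (represented here by a placeholder function extract_features returning a 160-dimensional vector) and a trained softmax layer with weights W and bias b, the concatenation and softmax classification could look as follows; none of these names come from the patent:

```python
import numpy as np

def classify_micro_expression(blocks, extract_features, W, b):
    """Concatenate per-block feature vectors and classify them with softmax.
    blocks: list of Q image blocks; extract_features: trained CNN wrapper
    returning a 160-dim vector per block; W, b: trained softmax parameters."""
    features = [extract_features(block) for block in blocks]  # Q vectors of length 160
    x = np.concatenate(features)                              # 160 * Q combined features
    logits = W @ x + b                                        # one logit per expression class
    probs = np.exp(logits - logits.max())                     # numerically stable softmax
    probs /= probs.sum()
    return int(np.argmax(probs)), probs                       # class with maximum probability
```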
In an alternative embodiment, S200: according to the face feature points of the target face image and a preset image blocking rule, carrying out blocking processing on the target face image to obtain a plurality of image blocks, wherein the method specifically comprises the following steps:
establishing a plurality of first image tangent lines on the target face image according to the face characteristic points;
as shown in fig. 2, a plurality of first image segmentation lines are established according to the extracted 5 characteristic points of the face, such as the eyes (2 eyes), the nose, and the mouth corner (2 mouth corners), for example: first image tangent line L connecting left eye corner inner side point and left nose wing point1(ii) a First image tangent line L formed by connecting nose bridge point and nose tip point2(ii) a A first image tangent line L formed by connecting the right canthus inner side point and the right nose wing point3(ii) a First image segmentation line L formed by connecting key points at the most upper part of the eye key points4(ii) a First image segmentation line L formed by connecting lowest points of eye key points5(ii) a At the lowest point of the nose, parallel to the first image dividing line L4To form a first image dividing line L6
Extracting N first image segmentation lines from the plurality of first image segmentation lines according to a preset image segmentation rule, and carrying out segmentation processing on the target face image by adopting the extracted N first image segmentation lines; wherein N < 3;
in this embodiment, 1 or 2 first image segmentation lines of the plurality of first image segmentation lines may be arbitrarily selected to perform the segmentation processing on the target face image.
And obtaining a plurality of image blocks through the plurality of first image splitting lines.
In an alternative embodiment, the plurality of tiles includes: a first tile comprising a binocular feature, a second tile comprising a binocular feature and a nose feature, a third tile comprising a left eye feature and a left alar feature, a fourth tile comprising a right eye feature and a right alar feature, a fifth tile comprising a nose feature and a mouth feature, a sixth tile comprising a left alar feature and a left corner of mouth feature, a seventh tile comprising a right alar feature and a right corner of mouth feature, an eighth tile comprising an eyebrow feature, a ninth tile comprising a mouth feature, and a tenth tile comprising a full-face feature.
As shown in fig. 3, the first tile (patch) is cut off by the first image segmentation line L5 and contains the binocular feature; the tiles can be understood as resulting from a preset face feature combination rule (e.g. binocular + nose features, left eye + left alar features, right eye + right alar features, nose + mouth features, left alar + left mouth corner features, right alar + right mouth corner features, eyebrow features, mouth features, and full-face features). The second tile is cut off by the first image segmentation line L6 and contains the binocular feature and the nose feature; the third tile is obtained by combining the first image segmentation lines L3 and L6 and contains the left eye feature and the left alar feature; the fourth tile is obtained by combining the first image segmentation lines L1 and L6 and contains the right eye feature and the right alar feature; the fifth tile is cut off by the first image segmentation line L5 and contains the nose feature and the mouth feature; the sixth tile is obtained by combining the first image segmentation lines L5 and L3 and contains the left alar feature and the left mouth corner feature; the seventh tile is obtained by combining the first image segmentation lines L5 and L6 and contains the right alar feature and the right mouth corner feature; the eighth tile is cut off by the first image segmentation line L4 and contains the eyebrow feature; the ninth tile is cut off by the first image segmentation line L4 and contains the mouth feature; and the tenth tile is the full image, containing the whole face region. By extracting tiles from the target image according to the above rule, the convolutional neural network can better learn the characteristics of different micro-expressions, for example which combinations of face feature points move when a certain micro-expression occurs. Face feature points with no neuron activation in the convolutional neural network can be regarded as non-deformed feature points (key points that do not move when the micro-expression occurs), and face feature points activated by neurons can be regarded as deformed feature points (key points that move when the micro-expression occurs). That is, the image is cut according to organ features relevant to micro-expressions, for example the position of the mouth (including its points), the combination of eye corner and nose, the combination of the eyes, and the combination of forehead and eyes, and different face feature point combinations are then computed jointly.
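The blocking itself amounts to cropping the aligned face along the selected segmentation lines. The following sketch uses only horizontal cuts derived from the five feature points (the oblique lines L1-L3 are omitted) and therefore only approximates a few of the ten tiles; the crop boundaries are assumptions chosen for illustration:

```python
def crop_tiles(face_image, pts):
    """Simplified blocking sketch: horizontal cuts at eye and nose level.
    pts: dict with 'left_eye', 'right_eye', 'nose_tip' pixel coordinates
    on the aligned face image (as returned by the landmark sketch above)."""
    h = face_image.shape[0]
    eye_y = int(max(pts["left_eye"][1], pts["right_eye"][1]))   # roughly line L5
    nose_y = int(pts["nose_tip"][1])                             # roughly line L6
    return {
        "eyes": face_image[0:eye_y, :],        # ~ first tile (both eyes)
        "eyes_nose": face_image[0:nose_y, :],  # ~ second tile (eyes + nose)
        "nose_mouth": face_image[eye_y:h, :],  # ~ fifth tile (nose + mouth)
        "mouth": face_image[nose_y:h, :],      # ~ ninth tile (mouth region)
        "full_face": face_image,               # tenth tile (whole face)
    }
```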
In an optional embodiment, the method further comprises:
identifying non-deformation feature points and deformation feature points in the human face feature points through the convolutional neural network model according to the plurality of image blocks; wherein the non-deformed feature points are points of the face feature points that are not activated by neurons in the convolutional neural network model, and the deformed feature points are points of the face feature points that are activated by neurons in the convolutional neural network model;
establishing a plurality of second image tangent lines on the target face image according to deformation feature points in the face feature points;
extracting M second image segmentation lines from the plurality of second image segmentation lines according to the image segmentation rule, and carrying out segmentation processing on the target face image by adopting the extracted M second image segmentation lines; wherein M < 3;
In this embodiment, 1 or 2 second image segmentation lines of the plurality of second image segmentation lines may be arbitrarily selected to perform the segmentation processing on the target face image.
Obtaining a plurality of deformation characteristic point image blocks in total through the plurality of second image tangent lines;
and obtaining a micro-expression classification adjustment result through the convolutional neural network model according to the deformation feature point image blocks, and updating the current micro-expression classification result into the micro-expression classification adjustment result.
It can be understood that the target image comprises a plurality of consecutive or equally spaced frame samples. If, in the smiling expression currently identified, the eyebrow feature is a non-deformed feature point and the other features are deformed feature points, then for the next frame of the target face image the eyebrow feature is removed from the image blocking rule and the blocking step is repeated on the remaining deformed feature points to extract blocks. Based on the identification of deformed and non-deformed feature points, the selection of image blocks and face feature points can thus be guided in reverse, which effectively reduces the selection of face feature points that are of no interest, further simplifies the convolution calculation, and greatly improves the speed and accuracy of micro-expression recognition.
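A minimal sketch of this feedback step, assuming a per-feature-point activation mask has already been derived from the convolutional neural network model (how that mask is obtained is not detailed here):

```python
def select_deformed_feature_points(feature_points, activated):
    """Keep only deformed feature points for the next frame's blocking step.
    feature_points: list of (name, (x, y)) tuples; activated: parallel list of
    booleans indicating whether the corresponding point was activated by neurons."""
    return [p for p, is_active in zip(feature_points, activated) if is_active]

# Example: drop the eyebrow point when a smile leaves it non-deformed.
# remaining_points = select_deformed_feature_points(points, activation_mask)
```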
In an optional embodiment, the obtaining, according to the plurality of blocks, of a micro-expression classification result through a pre-established convolutional neural network model specifically includes:
carrying out graying processing on the plurality of image blocks to obtain a plurality of grayed image blocks;
horizontally flipping the plurality of image blocks to obtain a plurality of horizontally flipped image blocks;
inputting the image blocks, the grayed image blocks and the horizontally flipped image blocks into the convolutional neural network model for convolution calculation to obtain a feature vector corresponding to the target face image, and performing PCA (principal component analysis) dimension reduction processing on the feature vector;
and obtaining a micro-expression classification result corresponding to the target face image through a multilayer classifier in the convolutional neural network model according to the feature vector after dimension reduction.
In the embodiment of the present invention, for example, a weighted average algorithm is used to perform gray processing on the plurality of blocks. The image blocks extracted from the target face image are color images, and the image blocks are specifically composed of a plurality of pixel points, and each pixel point is represented by three values of RGB; the grey processing is carried out on each image block, the textural feature information of the expression image cannot be influenced, and each pixel point can be represented by only one grey value, so that the expression image processing efficiency is greatly improved. Specifically, each tile is grayed by the following grayed processing weighted average algorithm formula:
f(i,j)=0.30R(i,j)+0.59G(i,j)+0.11B(i,j)
wherein, i, j represent the position of a pixel point in the two-dimensional space vector, namely: row i, column j.
And calculating the gray value of each pixel point in each image block according to the formula, wherein the value range is 0-255, so that the expression image is in a black, white and gray state.
Graying and horizontally flipping the plurality of image blocks further expands the data samples and improves the micro-expression recognition accuracy.
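A sketch of this augmentation step, applying the weighted-average graying formula above and a horizontal flip to each block (the array layout is an assumption):

```python
import numpy as np

def augment_blocks(blocks):
    """Produce, for each block, the original, a grayscale copy and a flipped copy."""
    augmented = []
    for block in blocks:                     # block: H x W x 3 array, channels in R, G, B order
        r, g, b = block[..., 0], block[..., 1], block[..., 2]
        gray = (0.30 * r + 0.59 * g + 0.11 * b).astype(np.uint8)   # f(i,j) formula above
        gray_block = np.stack([gray, gray, gray], axis=-1)          # keep 3 channels for the CNN
        flipped = block[:, ::-1, :]                                 # horizontal flip
        augmented.extend([block, gray_block, flipped])
    return augmented                         # three times as many blocks (e.g. 10 -> 30)
```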
In the embodiment of the invention, starting from 10 blocks, 30 blocks are obtained after the graying and horizontal flipping processing. The micro-expression recognition device uses these 30 blocks as input values of the pre-established convolutional neural network model, calculates a 160-dimensional Haar feature vector for each block, and finally combines the vectors of all patches (i.e. concatenates the vectors), forming 160 × 30 = 4800 features. The 4800-dimensional feature vector is then reduced to a 150-dimensional feature vector by PCA dimension reduction processing and input to the softmax classifier for classification.
The PCA dimension reduction processing comprises the following steps:
converting the feature vector of the target face image into an n-m matrix;
carrying out zero equalization processing on each row in the matrix;
calculating a covariance matrix according to the matrix subjected to zero-mean processing, and calculating an eigenvector of the covariance matrix and a corresponding eigenvalue of the eigenvector;
arranging the eigenvectors of the covariance matrix from top to bottom according to the eigenvalue size to obtain a variation matrix;
extracting front k rows from the change matrix to form a dimensionality reduction matrix, and obtaining a feature vector of the PCA of the target face image after dimensionality reduction; and determining the value of k according to the compressed error of the feature vector of the target face image.
Specifically, according to formula (1), determining the value of k;
[Formula (1) is reproduced in the original publication only as an embedded image.]
where m is the number of eigenvectors in the first k rows. A value of k is selected such that error < Φ, where Φ is a preset threshold (for example, 0.01); the dimension-reduction matrix formed by the first k rows extracted from the change matrix is then considered to meet the dimension-reduction requirement.
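Since formula (1) is only available as an image, its exact expression cannot be shown here. A commonly used criterion that matches the surrounding description (an error ratio compared with the threshold Φ, e.g. 0.01) is the proportion of variance discarded by keeping only the first k principal components; the following is offered only as a plausible reading, not as the patent's formula (1):

```latex
\mathrm{error}
  = \frac{\sum_{i=1}^{m} \lVert x^{(i)} - x^{(i)}_{\mathrm{approx}} \rVert^{2}}
         {\sum_{i=1}^{m} \lVert x^{(i)} \rVert^{2}}
  = 1 - \frac{\sum_{j=1}^{k} \lambda_{j}}{\sum_{j=1}^{n} \lambda_{j}} < \Phi
```

Here λ_j denote the eigenvalues of the covariance matrix in descending order, and x_approx is the reconstruction of x from the first k principal components.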
The feature vector corresponding to the target face image obtained after face detection and alignment is a matrix of rather high dimension. Such a high-dimensional matrix easily causes insufficient memory during calculation and is prone to over-fitting. Therefore, based on PCA processing, the high-dimensional feature vectors corresponding to the face feature points are reduced to feature data in a low-dimensional space. For example, based on the above method, a k is chosen such that error < Φ; that k is then considered acceptable, otherwise another k is tried. Through the PCA dimension-reduction transformation, the feature vector corresponding to the target face image is reduced from its original more than 4800 dimensions to 150 dimensions, the subsequent classification problem becomes a partition problem in a 150-dimensional space, and the calculation process is greatly simplified while the integrity of the main information is preserved.
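A numpy sketch of the PCA steps listed above (the matrix orientation and the returned error value are illustrative assumptions):

```python
import numpy as np

def pca_reduce(features, k):
    """Reduce a (samples x dimensions) feature matrix to k dimensions.
    Mirrors the steps above: zero-mean, covariance, eigen-decomposition,
    sorting eigenvectors by eigenvalue, and projecting onto the first k."""
    x = features - features.mean(axis=0)             # zero-mean each dimension
    cov = np.cov(x, rowvar=False)                     # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)            # symmetric eigen-decomposition (ascending)
    order = np.argsort(eigvals)[::-1]                 # sort descending by eigenvalue
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    error = 1.0 - eigvals[:k].sum() / eigvals.sum()   # discarded-variance ratio, cf. formula (1)
    reduced = x @ eigvecs[:, :k]                      # e.g. 4800-dim -> 150-dim vectors
    return reduced, error                             # k is accepted when error < the threshold
```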
In an optional embodiment, before the block-cutting processing is performed on the target face image according to the face feature points of the target face image and a preset image blocking rule to obtain a plurality of blocks, the method further includes:
and according to the face characteristic points, carrying out alignment processing on the target face image.
For example, in the embodiment of the present invention, the open-source library OpenCV is used for the alignment processing: after the face in each image sample of the target face image is detected, the target face image is converted to the same scale (for example, 160 × 160 pixels). Specifically, the detectMultiScale algorithm of OpenCV is used to detect the face in each image sample and frame it. The face key points are then standardized: the leftmost and topmost points of the face are taken as the edges of the image, the other points are translated with respect to these two edges, and the standardized points are finally divided by (rightmost − leftmost) and (bottommost − topmost) so that the standardized face points are uniformly distributed within the framed region, reducing as much as possible the computational burden of redundant pixels. Finally, affine transformation is performed with the getAffineTransform algorithm of OpenCV, and the aligned target face image is output.
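As an illustration only, the detection and affine alignment described above could be realized with OpenCV roughly as follows; the cascade file, the template landmark positions and the 160 × 160 output size are assumptions based on the text:

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")  # illustrative detector

def align_face(image, src_points, size=160):
    """Warp three detected landmarks (left eye, right eye, nose tip) onto fixed
    template positions so every sample is mapped to the same 160 x 160 scale."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    if len(face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)) == 0:
        return None                                        # no face found in this sample
    dst_points = np.float32([[0.30 * size, 0.35 * size],   # assumed template positions
                             [0.70 * size, 0.35 * size],
                             [0.50 * size, 0.60 * size]])
    m = cv2.getAffineTransform(np.float32(src_points[:3]), dst_points)
    return cv2.warpAffine(image, m, (size, size))
```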
The target face image may further be used as the input of a pre-established candidate region network model, so as to obtain a target face region image and at least two face organ region images, for example region images of the eyes (2), nose and mouth corners (2). In the candidate region network model, a series of candidate region frames of the target face image satisfying set proportions and region specifications is obtained; during the selection of the region frames, features are selected with convolutional layers, candidate frames are obtained from the series of region frames through non-maximum suppression, and the parameters of the candidate frames are then fine-tuned through a fully connected layer, thereby obtaining the target face region image and the at least two face organ region images. Through the candidate region network model, proposal regions can be generated directly by the convolutional neural network, the region generation network and the classification network share weights, and both the detection performance and the speed are greatly improved.
Compared with the prior art, the micro-expression recognition method provided by the embodiment of the invention has the beneficial effects that:
(1) according to the invention, the target face image is subjected to block processing according to the acquired face characteristic points and the preset image blocking rule, and a plurality of block blocks obtained after block cutting are identified and classified by adopting a convolutional neural network, so that the micro-expression identification speed and precision can be effectively improved, and the micro-expression identification working efficiency is greatly improved;
(2) the invention cuts the image according to organ features relevant to micro-expressions, for example the position of the mouth (including its points), the combination of eye corner and nose, the combination of the eyes, and the combination of forehead and eyes, and then uses a convolutional neural network to perform joint calculation on different face feature point combinations; this combined calculation reveals to what degree different micro-expressions affect the muscle movement of different organs, and whether the feature points of each image block in the current image sample show signs of movement relative to the feature points of the corresponding image block in the previous image sample can be identified more accurately, so that micro-expressions can be judged from face feature point combinations with high recognition accuracy;
(3) based on the deformed and non-deformed feature points identified by the convolutional neural network model, the method can guide the selection of image blocks and face feature points in reverse, effectively reducing the selection of face feature points that are of no interest and further improving the speed and accuracy of micro-expression recognition.
Example two
Please refer to fig. 4, which is a schematic block diagram of a micro expression recognition apparatus according to an embodiment of the present invention, the apparatus includes:
the system comprises a human face feature detection module 1, a face feature detection module and a face feature detection module, wherein the human face feature detection module is used for detecting human face features of a target human face image acquired in advance and acquiring at least five human face feature points of the target human face image;
the image block cutting module 2 is used for carrying out block cutting processing on the target face image according to the face characteristic points of the target face image and a preset image block dividing rule to obtain a plurality of image blocks;
and the micro-expression recognition module 3 is used for obtaining micro-expression classification results through a pre-established convolutional neural network model according to the plurality of image blocks.
In an alternative embodiment, the image blocking module 2 includes:
the first image tangent line establishing unit is used for establishing a plurality of first image tangent lines on the target face image according to the face characteristic points;
the first segmentation unit is used for extracting N first image segmentation lines from the plurality of first image segmentation lines according to a preset image segmentation rule, and segmenting the target face image by adopting the extracted N first image segmentation lines; wherein N < 3;
and the first image block acquisition unit is used for obtaining a plurality of image blocks through the plurality of first image splitting lines.
In an alternative embodiment, the plurality of tiles includes: a first tile comprising a binocular feature, a second tile comprising a binocular feature and a nose feature, a third tile comprising a left eye feature and a left alar feature, a fourth tile comprising a right eye feature and a right alar feature, a fifth tile comprising a nose feature and a mouth feature, a sixth tile comprising a left alar feature and a left corner of mouth feature, a seventh tile comprising a right alar feature and a right corner of mouth feature, an eighth tile comprising an eyebrow feature, a ninth tile comprising a mouth feature, and a tenth tile comprising a full-face feature.
In an alternative embodiment, the apparatus further comprises:
the feature point identification module is used for identifying the non-deformation feature points and the deformation feature points in the human face feature points through the convolutional neural network model according to the plurality of image blocks; wherein the non-deformed feature points are points of the face feature points that are not activated by neurons in the convolutional neural network model, and the deformed feature points are points of the face feature points that are activated by neurons in the convolutional neural network model;
a second image tangent line establishing unit, configured to establish a plurality of second image tangent lines on the target face image according to deformed feature points in the face feature points;
the second segmentation unit is used for extracting M second image segmentation lines from the plurality of second image segmentation lines according to the image segmentation rule and performing segmentation processing on the target face image by adopting the extracted M second image segmentation lines; wherein M < 3;
a second image block obtaining unit, configured to obtain a plurality of deformation feature point image blocks in total through the plurality of second image segmentation lines;
and the micro-expression classification adjusting unit is used for obtaining a micro-expression classification adjusting result through the convolutional neural network model according to the deformation feature point image blocks and updating the current micro-expression classification result into the micro-expression classification adjusting result.
In an alternative embodiment, the micro expression recognition module 3 comprises:
the gray processing unit is used for carrying out gray processing on the plurality of image blocks to obtain a plurality of gray image blocks;
the horizontal flipping processing unit is used for horizontally flipping the plurality of image blocks to obtain a plurality of horizontally flipped image blocks;
the PCA dimension reduction processing unit is used for inputting the image blocks, the grayed image blocks and the horizontally flipped image blocks into the convolutional neural network model for convolution calculation to obtain a feature vector corresponding to the target face image and performing PCA dimension reduction processing on the feature vector;
and the micro-expression classification unit is used for obtaining a micro-expression classification result corresponding to the target face image through a multilayer classifier in the convolutional neural network model according to the feature vector after dimension reduction.
In an alternative embodiment, the apparatus further comprises:
and the alignment processing module is used for performing alignment processing on the target face image according to the face characteristic points.
EXAMPLE III
Fig. 5 is a schematic view of a micro expression recognition device according to a third embodiment of the present invention. The micro expression recognition device includes: at least one processor 11, such as a CPU, at least one network interface 14 or other user interface 13, a memory 15, at least one communication bus 12, the communication bus 12 being used to enable connectivity communications between these components. The user interface 13 may optionally include a USB interface, and other standard interfaces, wired interfaces. The network interface 14 may optionally include a Wi-Fi interface as well as other wireless interfaces. The memory 15 may comprise a high-speed RAM memory, and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory. The memory 15 may optionally comprise at least one memory device located remotely from the aforementioned processor 11.
In some embodiments, memory 15 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof:
an operating system 151, which contains various system programs for implementing various basic services and for processing hardware-based tasks;
and (5) a procedure 152.
Specifically, the processor 11 is configured to call the program 152 stored in the memory 15 and execute the micro-expression recognition method according to the above embodiment, for example step S100 shown in fig. 1. Alternatively, when executing the computer program, the processor implements the functions of the modules/units in the above apparatus embodiments, such as the face feature detection module.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory and executed by the processor to implement the invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used for describing the execution process of the computer program in the micro expression recognition device.
The micro expression recognition device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing equipment. The micro-expression recognition device may include, but is not limited to, a processor, a memory. It will be understood by those skilled in the art that the schematic diagram is merely an example of the micro expression recognition apparatus, and does not constitute a limitation on the micro expression recognition apparatus, and may include more or less components than those shown, or combine some components, or different components, for example, the micro expression recognition apparatus may further include an input/output device, a network access device, a bus, and the like.
The Processor may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. The general purpose processor may be a microprocessor or the processor may be any conventional processor or the like, the processor is a control center of the micro expression recognition apparatus, and various interfaces and lines are used to connect various parts of the entire micro expression recognition apparatus.
The memory may be used to store the computer programs and/or modules, and the processor may implement various functions of the micro expression recognition device by executing or executing the computer programs and/or modules stored in the memory and calling data stored in the memory. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. In addition, the memory may include high speed random access memory, and may also include non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other volatile solid state storage device.
Wherein, the integrated module/unit of the micro expression recognition device can be stored in a computer readable storage medium if the module/unit is implemented in the form of a software functional unit and sold or used as an independent product. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
Example four
The embodiment of the invention provides a computer-readable storage medium, which includes a stored computer program, wherein when the computer program runs, a device where the computer-readable storage medium is located is controlled to execute the above-mentioned micro-expression recognition method.
It should be noted that the above-described device embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.

Claims (10)

1. A micro-expression recognition method is characterized by comprising the following steps:
detecting the face features of a pre-collected target face image, and acquiring at least five face feature points of the target face image;
according to the face characteristic points of the target face image and a preset image blocking rule, carrying out blocking processing on the target face image to obtain a plurality of image blocks;
obtaining a micro-expression classification result through a pre-established convolutional neural network model according to the plurality of image blocks;
identifying non-deformation feature points and deformation feature points in the human face feature points through the convolutional neural network model according to the plurality of image blocks; wherein the non-deformed feature points are points of the face feature points that are not activated by neurons in the convolutional neural network model, and the deformed feature points are points of the face feature points that are activated by neurons in the convolutional neural network model;
according to deformation feature points in the face feature points and the image blocking rules, carrying out blocking processing on the target face image to obtain a plurality of deformation feature point image blocks;
and obtaining a micro-expression classification adjustment result through the convolutional neural network model according to the deformation feature point image blocks, and updating the current micro-expression classification result into the micro-expression classification adjustment result.
2. The micro-expression recognition method of claim 1, wherein the obtaining of the plurality of blocks by performing block segmentation on the target face image according to the face feature points of the target face image and a preset image blocking rule comprises:
establishing a plurality of first image tangent lines on the target face image according to the face characteristic points;
extracting N first image segmentation lines from the plurality of first image segmentation lines according to a preset image segmentation rule, and carrying out segmentation processing on the target face image by adopting the extracted N first image segmentation lines; wherein N < 3;
and obtaining a plurality of image blocks through the plurality of first image splitting lines.
3. The micro expression recognition method of claim 2, wherein the obtaining a plurality of deformed feature point blocks by performing block segmentation on the target face image according to the deformed feature points in the face feature points and the image blocking rules comprises:
establishing a plurality of second image tangent lines on the target face image according to deformation feature points in the face feature points;
extracting M second image segmentation lines from the plurality of second image segmentation lines according to the image segmentation rule, and carrying out segmentation processing on the target face image by adopting the extracted M second image segmentation lines; wherein M < 3;
and obtaining a plurality of deformation characteristic point image blocks in total through the plurality of second image tangent lines.
4. The micro-expression recognition method of claim 1, wherein the obtaining a micro-expression classification result through a pre-established convolutional neural network model according to the plurality of image blocks specifically comprises:
carrying out grayscale processing on the plurality of image blocks to obtain a plurality of grayscale image blocks;
horizontally flipping the plurality of image blocks to obtain a plurality of horizontally flipped image blocks;
inputting the image blocks, the grayscale image blocks and the horizontally flipped image blocks into the convolutional neural network model for convolution calculation to obtain a feature vector corresponding to the target face image, and performing PCA (principal component analysis) dimension reduction processing on the feature vector;
and obtaining a micro-expression classification result corresponding to the target face image through a multilayer classifier in the convolutional neural network model according to the dimension-reduced feature vector.
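A compact sketch of the augmentation, feature extraction, PCA reduction, and classification chain in claim 4. The cnn_features callable, the pre-fitted pca object (for example a scikit-learn PCA fitted offline on training features), and the classifier are assumptions; only the graying, horizontal flipping, and reduce-then-classify order follow the claim.

import numpy as np

def classify_blocks(blocks, cnn_features, pca, classifier):
    # blocks: list of HxWx3 image arrays cut from the target face image.
    # Graying: average the colour channels, keeping three channels for the network input.
    gray = [b.mean(axis=2, keepdims=True).repeat(3, axis=2) for b in blocks]
    # Horizontal flipping: mirror each block left-to-right.
    flipped = [b[:, ::-1] for b in blocks]
    # One feature vector per original, grayscale, and flipped block, stacked row-wise.
    feats = np.stack([cnn_features(b) for b in blocks + gray + flipped])
    # PCA dimension reduction with a transform learned beforehand on training data.
    reduced = pca.transform(feats)
    # Concatenate the reduced per-block features into one face-level vector and classify.
    face_vec = reduced.reshape(1, -1)
    return classifier.predict(face_vec)[0]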
5. The micro-expression recognition method of claim 1, wherein before the target face image is subjected to the blocking processing according to the face feature points of the target face image and the preset image blocking rule to obtain a plurality of image blocks, the method further comprises:
carrying out alignment processing on the target face image according to the face feature points.
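Claim 5 only states that the face is aligned using the feature points. One common realisation, shown below with OpenCV as an assumed dependency, rotates the image so the line joining the two eye centres becomes horizontal; the claim does not prescribe this particular transform.

import numpy as np
import cv2  # assumed dependency

def align_face(img, left_eye, right_eye):
    # Angle of the line joining the eye centres, in degrees.
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))
    # Rotate about the midpoint between the eyes so the eye line becomes horizontal.
    centre = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    rotation = cv2.getRotationMatrix2D(centre, angle, 1.0)
    return cv2.warpAffine(img, rotation, (img.shape[1], img.shape[0]))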
6. The micro-expression recognition method of claim 1 or 2, wherein the plurality of image blocks comprise: a first image block comprising a both-eye feature, a second image block comprising a both-eye feature and a nose feature, a third image block comprising a left eye feature and a left nose wing feature, a fourth image block comprising a right eye feature and a right nose wing feature, a fifth image block comprising a nose feature and a mouth feature, a sixth image block comprising a left nose wing feature and a left mouth corner feature, a seventh image block comprising a right nose wing feature and a right mouth corner feature, an eighth image block comprising an eyebrow feature, a ninth image block comprising a mouth feature, and a tenth image block comprising a full-face feature.
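One plausible way to cut the ten image blocks listed in claim 6 from a face image, given named landmark positions. The landmark names, the pixel margin, and the bounding-box construction are all assumptions made for illustration; the claim only specifies which facial features each block contains.

import numpy as np

def crop_around(img, points, margin=20):
    # Bounding box around a group of (x, y) landmarks, padded by a pixel margin.
    pts = np.asarray(points, dtype=float)
    x0, y0 = np.maximum(pts.min(axis=0) - margin, 0).astype(int)
    x1, y1 = (pts.max(axis=0) + margin).astype(int)
    return img[y0:y1, x0:x1]

def ten_blocks(img, lm):
    # lm maps hypothetical landmark names to (x, y) coordinates; lm["brows"] is a list of eyebrow points.
    return [
        crop_around(img, [lm["left_eye"], lm["right_eye"]]),                   # 1: both eyes
        crop_around(img, [lm["left_eye"], lm["right_eye"], lm["nose"]]),       # 2: both eyes + nose
        crop_around(img, [lm["left_eye"], lm["left_nose_wing"]]),              # 3: left eye + left nose wing
        crop_around(img, [lm["right_eye"], lm["right_nose_wing"]]),            # 4: right eye + right nose wing
        crop_around(img, [lm["nose"], lm["mouth_left"], lm["mouth_right"]]),   # 5: nose + mouth
        crop_around(img, [lm["left_nose_wing"], lm["mouth_left"]]),            # 6: left nose wing + left mouth corner
        crop_around(img, [lm["right_nose_wing"], lm["mouth_right"]]),          # 7: right nose wing + right mouth corner
        crop_around(img, lm["brows"]),                                         # 8: eyebrows
        crop_around(img, [lm["mouth_left"], lm["mouth_right"]]),               # 9: mouth
        img,                                                                   # 10: full face
    ]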
7. A micro-expression recognition device, comprising:
a face feature detection module, used for detecting face features of a pre-acquired target face image and acquiring at least five face feature points of the target face image;
an image blocking module, used for carrying out blocking processing on the target face image according to the face feature points of the target face image and a preset image blocking rule to obtain a plurality of image blocks;
a micro-expression recognition module, used for obtaining a micro-expression classification result through a pre-established convolutional neural network model according to the plurality of image blocks;
a feature point identification module, used for identifying non-deformation feature points and deformation feature points in the face feature points through the convolutional neural network model according to the plurality of image blocks; wherein the non-deformation feature points are points of the face feature points that are not activated by neurons in the convolutional neural network model, and the deformation feature points are points of the face feature points that are activated by neurons in the convolutional neural network model;
and a second image blocking module, used for carrying out blocking processing on the target face image according to the deformation feature points in the face feature points and the image blocking rule to obtain a plurality of deformation feature point image blocks;
wherein the feature point identification module is further used for obtaining a micro-expression classification adjustment result through the convolutional neural network model according to the plurality of deformation feature point image blocks, and updating the current micro-expression classification result into the micro-expression classification adjustment result.
8. The micro-expression recognition device of claim 7, wherein the image blocking module comprises:
a first image segmentation line establishing unit, used for establishing a plurality of first image segmentation lines on the target face image according to the face feature points;
a first segmentation unit, used for extracting N first image segmentation lines from the plurality of first image segmentation lines according to a preset image segmentation rule, and carrying out segmentation processing on the target face image by using the extracted N first image segmentation lines; wherein N < 3;
and a first image block acquisition unit, used for obtaining a plurality of image blocks through the plurality of first image segmentation lines.
9. A micro-expression recognition apparatus, comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor implements the micro-expression recognition method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, comprising a stored computer program, wherein the computer program, when executed, controls an apparatus in which the computer-readable storage medium is located to perform the micro-expression recognition method according to any one of claims 1 to 6.
CN201811075329.9A 2018-09-14 2018-09-14 Micro-expression recognition method, device and storage medium Active CN109271930B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811075329.9A CN109271930B (en) 2018-09-14 2018-09-14 Micro-expression recognition method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811075329.9A CN109271930B (en) 2018-09-14 2018-09-14 Micro-expression recognition method, device and storage medium

Publications (2)

Publication Number Publication Date
CN109271930A (en) 2019-01-25
CN109271930B (en) 2020-11-13

Family

ID=65189116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811075329.9A Active CN109271930B (en) 2018-09-14 2018-09-14 Micro-expression recognition method, device and storage medium

Country Status (1)

Country Link
CN (1) CN109271930B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110046955A (en) * 2019-03-12 2019-07-23 平安科技(深圳)有限公司 Marketing method, device, computer equipment and storage medium based on recognition of face
CN110147729A (en) * 2019-04-16 2019-08-20 深圳壹账通智能科技有限公司 User emotion recognition methods, device, computer equipment and storage medium
CN110941992B (en) * 2019-10-29 2023-09-05 平安科技(深圳)有限公司 Smile expression detection method and device, computer equipment and storage medium
CN111178151A (en) * 2019-12-09 2020-05-19 量子云未来(北京)信息科技有限公司 Method and device for realizing human face micro-expression change recognition based on AI technology
CN112668384B (en) * 2020-08-07 2024-05-31 深圳市唯特视科技有限公司 Knowledge graph construction method, system, electronic equipment and storage medium
CN112329663B (en) * 2020-11-10 2023-04-07 西南大学 Micro-expression time detection method and device based on face image sequence
CN112511748A (en) * 2020-11-30 2021-03-16 努比亚技术有限公司 Lens target intensified display method and device, mobile terminal and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10076250B2 (en) * 2015-06-14 2018-09-18 Facense Ltd. Detecting physiological responses based on multispectral data from head-mounted cameras

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8848068B2 (en) * 2012-05-08 2014-09-30 Oulun Yliopisto Automated recognition algorithm for detecting facial expressions
CN106295566A (en) * 2016-08-10 2017-01-04 北京小米移动软件有限公司 Facial expression recognizing method and device
CN106570474A (en) * 2016-10-27 2017-04-19 南京邮电大学 Micro expression recognition method based on 3D convolution neural network
CN106599800A (en) * 2016-11-25 2017-04-26 哈尔滨工程大学 Face micro-expression recognition method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Image Based Facial Micro-Expression Recognition Using Deep Learning on Small Datasets; M. A. Takalkar et al.; 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA); 2017-12-01; pp. 1-7 *
Research on Facial Expression Recognition Algorithm Based on Combined Features; Wang Xun (王勋); China Master's Theses Full-text Database, Information Science and Technology; 2017-02-15 (Issue 2); p. 38, Fig. 4-7 *

Also Published As

Publication number Publication date
CN109271930A (en) 2019-01-25

Similar Documents

Publication Publication Date Title
CN109271930B (en) Micro-expression recognition method, device and storage medium
US11836853B2 (en) Generation and presentation of predicted personalized three-dimensional body models
WO2022134337A1 (en) Face occlusion detection method and system, device, and storage medium
CN110084135B (en) Face recognition method, device, computer equipment and storage medium
CN109657554B (en) Image identification method and device based on micro expression and related equipment
AU2014368997B2 (en) System and method for identifying faces in unconstrained media
CN109145871B (en) Psychological behavior recognition method, device and storage medium
CN111767900B (en) Face living body detection method, device, computer equipment and storage medium
WO2017088432A1 (en) Image recognition method and device
JP5361524B2 (en) Pattern recognition system and pattern recognition method
CN109685713B (en) Cosmetic simulation control method, device, computer equipment and storage medium
US20230081982A1 (en) Image processing method and apparatus, computer device, storage medium, and computer program product
WO2021196721A1 (en) Cabin interior environment adjustment method and apparatus
CN107832740B (en) Teaching quality assessment method and system for remote teaching
CN111814620A (en) Face image quality evaluation model establishing method, optimization method, medium and device
US10210424B2 (en) Method and system for preprocessing images
US20230095182A1 (en) Method and apparatus for extracting biological features, device, medium, and program product
CN110598638A (en) Model training method, face gender prediction method, device and storage medium
US11861860B2 (en) Body dimensions from two-dimensional body images
Gudipati et al. Efficient facial expression recognition using adaboost and haar cascade classifiers
CN112149732A (en) Image protection method and device, electronic equipment and storage medium
CN115035581A (en) Facial expression recognition method, terminal device and storage medium
Travieso et al. Using a discrete Hidden Markov Model Kernel for lip-based biometric identification
RU2768797C1 (en) Method and system for determining synthetically modified face images on video
Tandon et al. An efficient age-invariant face recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant