CN111967359A - Human face expression recognition method based on attention mechanism module - Google Patents


Info

Publication number
CN111967359A
CN111967359A (application CN202010783432.XA)
Authority
CN
China
Prior art keywords
pixel, unit, layer, size, convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010783432.XA
Other languages
Chinese (zh)
Inventor
Li Jing (李菁)
Jin Kan (金侃)
Chen Zejin (陈则金)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang University
Original Assignee
Nanchang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang University filed Critical Nanchang University
Priority to CN202010783432.XA priority Critical patent/CN111967359A/en
Publication of CN111967359A publication Critical patent/CN111967359A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; face representation
    • G06V 40/172: Classification, e.g. identification
    • G06V 40/174: Facial expression recognition
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/047: Probabilistic or stochastic networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent


Abstract

The invention relates to a facial expression recognition method, in particular to a facial expression recognition method based on an attention mechanism module. The face region is first cropped from the input picture to obtain an original picture. The cropped picture is then fed into a network model built around attention mechanism modules to extract the attention features of the face image. Two further convolution layers then perform feature extraction to obtain the extracted features of the face image, a global average pooling layer performs feature dimension reduction to obtain the dimension-reduced features, and finally a Softmax classifier classifies the dimension-reduced features. By focusing on the features most useful for expression classification, the invention avoids interference from irrelevant features and thereby improves recognition accuracy.

Description

Human face expression recognition method based on attention mechanism module
Technical Field
The invention relates to a facial expression recognition method, in particular to a facial expression recognition method based on an attention mechanism module.
Background
Facial expression recognition means that a computer automatically, reliably and efficiently "recognizes" or "partially recognizes" the information conveyed by facial expressions by observing the face shape, or changes in the face shape, of the person under observation. Research by psychologists shows that facial expressions carry up to 55% of the total information exchanged in daily human communication. In the intelligent era, facial expression recognition is an important component of artificial intelligence: it provides an interface for human-computer interaction and conveniently supports communication and the transmission of emotion between people and computers.
Existing facial-image-based expression recognition methods fall mainly into three categories: traditional hand-crafted-feature methods, shallow-learning methods and deep-learning methods. All of them have limitations and disadvantages in practical facial expression recognition: they adapt poorly to interference factors irrelevant to the expression, such as illumination changes, varying head poses and facial occlusion, and their recognition accuracy is low.
Disclosure of Invention
To overcome these shortcomings of the prior art, the invention provides a simple, convenient and high-accuracy facial expression recognition method based on an attention mechanism module. The technical problem is solved by the following technical scheme:
a facial expression recognition method based on an attention mechanism module comprises the following steps:
(1) extraction of original pictures
Detecting the face region with a constrained local neural field (CLNF) model, cropping it out as the original picture, and scaling all cropped pictures to the same size;
(2) preliminarily extracting features of the cropped picture with two convolution layers, so that the network achieves better generalization;
The two convolution layers are the first convolution layer and the second convolution layer; both use a convolution kernel of size 3 × 3, a stride of 1 and a padding of 1 (all in pixels);
(3) feeding the preliminarily extracted features into the designed network model based on attention mechanism modules for an extraction operation, to obtain the attention features of the face image;
(4) performing further feature extraction with two convolution layers to obtain the extracted features of the face image;
The two convolution layers are the third convolution layer and the fourth convolution layer; the third convolution layer uses a convolution kernel of size 2 × 2, a stride of 1 and a padding of 1, and the fourth convolution layer uses a convolution kernel of size 3 × 3, a stride of 1 and a padding of 1 (all in pixels);
(5) performing a feature dimension reduction operation with a global average pooling layer to obtain the dimension-reduced features of the face image; using a global average pooling layer in place of a fully connected layer greatly reduces the number of parameters without weakening performance;
(6) classifying the dimension-reduced features with a Softmax classifier and outputting the facial expression recognition result.
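As a sanity check on the kernel/stride/padding settings in steps (2) and (4), the standard convolution output-size formula can be applied (an illustrative sketch, not part of the patent; the 48 × 48 input resolution is an assumption):

```python
def conv_out_size(in_size, kernel, stride, padding):
    """Standard convolution output-size formula: floor((n + 2p - k) / s) + 1."""
    return (in_size + 2 * padding - kernel) // stride + 1

# A 3x3 kernel with stride 1 and padding 1 preserves the spatial size, so the
# first and second convolution layers keep a 48x48 input at 48x48.
print(conv_out_size(48, kernel=3, stride=1, padding=1))  # 48

# The 2x2 kernel of the third convolution layer with stride 1 and padding 1
# does NOT preserve size: it grows the map by one pixel, 48 -> 49.
print(conv_out_size(48, kernel=2, stride=1, padding=1))  # 49
```

Note that the 2 × 2 kernel of the third layer slightly changes the spatial size under these settings, which is worth keeping in mind when reproducing the architecture.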
Specifically, the network model in step (3) consists of 4 attention mechanism modules, and each attention mechanism module consists of two channels. The upper layer is the trunk channel, composed of one convolution layer with a kernel of size 3 × 3, a stride of 1 and a padding of 1 (in pixels); this channel mainly extracts features. The lower layer is the mask channel, a bottom-up/top-down hourglass network composed of two separable convolution layers, a deconvolution layer and a pooling layer; this channel mainly provides attention weights that make the next attention module focus on features at key positions such as the eyes, mouth and eyebrows. The trunk channel maps the input picture X to the face image feature F(X); the mask channel, passing the input in turn through the first separable convolution layer, the pooling layer, the deconvolution layer and the second separable convolution layer, outputs the face image feature G(X). G(X) is then used as a weight and multiplied element-wise with F(X) to obtain the face image attention feature M(X), computed as:
M_c(X) = F_c(X) + F_c(X) · G_c(X)
in the formula: fc(X) denotes the c channel of F (X), Gc(X) denotes the c-th channel of G (X), and the symbol-represents the dot product of the matrix. By the method of overlapping the first multiplication and the second multiplication, the characteristics of the key positions can be emphasized, and the whole information of one picture cannot be lost.
Specifically, the first separable convolution layer and the second separable convolution layer both use a convolution kernel of size 3 × 3 with a stride of 1 and a padding of 1; the deconvolution layer uses a convolution kernel of size 3 × 3 with a stride of 2 and a padding of 1; and the pooling layer uses a kernel of size 3 × 3 with a stride of 2 and a padding of 1 (all in pixels).
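With these settings, the mask channel's down/up path can be checked arithmetically (a sketch under assumptions: a 48 × 48 input, and an output padding of 1 on the deconvolution, as frameworks such as PyTorch allow; without output padding a stride-2 deconvolution would yield 47 rather than 48):

```python
def conv_out(n, k, s, p):
    """Convolution / pooling output size: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

def deconv_out(n, k, s, p, out_pad=0):
    """Transposed-convolution output size: (n - 1) * s - 2p + k + out_pad."""
    return (n - 1) * s - 2 * p + k + out_pad

n = 48                                       # assumed input resolution
n = conv_out(n, k=3, s=1, p=1)               # first separable conv: 48
n = conv_out(n, k=3, s=2, p=1)               # stride-2 pooling halves: 24
n = deconv_out(n, k=3, s=2, p=1, out_pad=1)  # stride-2 deconv restores: 48
n = conv_out(n, k=3, s=1, p=1)               # second separable conv: 48
print(n)  # 48, so G(X) matches F(X) in spatial size
```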
The invention has the following beneficial effects. By adopting a convolutional neural network built from attention mechanism modules, the features extracted by the network model focus on those most useful for expression classification, such as the mouth, nose and eyes, while irrelevant features are ignored, thereby improving accuracy. The method achieves good results on common datasets, with accuracies of 99.2% on CK+ and 68% on FER2013, a large improvement in facial expression recognition accuracy.
Drawings
FIG. 1 is a process flow diagram of the present invention;
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, a facial expression recognition method based on an attention mechanism module includes the following steps:
(1) extraction of original pictures
Detecting the face region with a constrained local neural field (CLNF) model, cropping it out as the original picture, and scaling all cropped pictures to the same size;
(2) preliminarily extracting features of the cropped picture with two convolution layers, so that the network achieves better generalization;
The two convolution layers are the first convolution layer and the second convolution layer; both use a convolution kernel of size 3 × 3, a stride of 1 and a padding of 1 (all in pixels);
(3) feeding the preliminarily extracted features into the designed network model based on attention mechanism modules for an extraction operation, to obtain the attention features of the face image;
(4) performing further feature extraction with two convolution layers to obtain the extracted features of the face image;
The two convolution layers are the third convolution layer and the fourth convolution layer; the third convolution layer uses a convolution kernel of size 2 × 2, a stride of 1 and a padding of 1, and the fourth convolution layer uses a convolution kernel of size 3 × 3, a stride of 1 and a padding of 1 (all in pixels);
(5) performing a feature dimension reduction operation with a global average pooling layer to obtain the dimension-reduced features of the face image; using a global average pooling layer in place of a fully connected layer greatly reduces the number of parameters without weakening performance;
(6) classifying the dimension-reduced features with a Softmax classifier and outputting the facial expression recognition result.
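The parameter saving claimed in step (5) is easy to see in a sketch (the channel count and feature-map size here are assumptions for illustration): global average pooling collapses each C × H × W feature map to a C-dimensional vector with zero learnable weights, whereas a fully connected layer over the flattened map would need C·H·W weights per output unit:

```python
def global_average_pool(feature_maps):
    """Collapse each HxW channel of a C x H x W feature map to its mean,
    yielding a C-dimensional vector with no learnable parameters."""
    return [sum(sum(row) for row in channel) / (len(channel) * len(channel[0]))
            for channel in feature_maps]

# Two 2x2 channels collapse to a 2-dimensional vector.
feats = [[[1.0, 2.0], [3.0, 4.0]],
         [[0.0, 0.0], [0.0, 8.0]]]
print(global_average_pool(feats))  # [2.5, 2.0]
```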
Specifically, the network model in step (3) consists of 4 attention mechanism modules, and each attention mechanism module consists of two channels. The upper layer is the trunk channel, composed of one convolution layer with a kernel of size 3 × 3, a stride of 1 and a padding of 1 (in pixels); this channel mainly extracts features. The lower layer is the mask channel, a bottom-up/top-down hourglass network composed of two separable convolution layers, a deconvolution layer and a pooling layer; this channel mainly provides attention weights that make the next attention module focus on features at key positions such as the eyes, mouth and eyebrows. The trunk channel maps the input picture X to the face image feature F(X); the mask channel, passing the input in turn through the first separable convolution layer, the pooling layer, the deconvolution layer and the second separable convolution layer, outputs the face image feature G(X). G(X) is then used as a weight and multiplied element-wise with F(X) to obtain the face image attention feature M(X), computed as:
M_c(X) = F_c(X) + F_c(X) · G_c(X)
where F_c(X) denotes the c-th channel of F(X), G_c(X) denotes the c-th channel of G(X), and the symbol · denotes the element-wise (Hadamard) product of the feature maps. Because the weighted term is added back onto the original features, the features at key positions are emphasized while the overall information of the picture is not lost.
Specifically, the first separable convolution layer and the second separable convolution layer both use a convolution kernel of size 3 × 3 with a stride of 1 and a padding of 1; the deconvolution layer uses a convolution kernel of size 3 × 3 with a stride of 2 and a padding of 1; and the pooling layer uses a kernel of size 3 × 3 with a stride of 2 and a padding of 1 (all in pixels).
In summary, the invention adopts a convolutional neural network built from attention mechanism modules, so that the features extracted by the network model focus on those most useful for expression classification, such as the mouth, nose and eyes, while irrelevant features are ignored, thereby improving accuracy.
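The Softmax classification of step (6) can be sketched as follows (illustrative only; the seven-class expression set and the example scores are assumptions, not values from the patent):

```python
import math

EXPRESSIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def softmax(scores):
    """Numerically stable softmax: shift by the max, exponentiate, normalize."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# One score per expression class, as would come from the pooled features.
scores = [0.5, 0.1, 0.2, 2.8, 0.3, 1.1, 0.9]
probs = softmax(scores)
print(EXPRESSIONS[probs.index(max(probs))])  # happy
```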
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (3)

1. A facial expression recognition method based on an attention mechanism module is characterized by comprising the following steps:
(1) extraction of original pictures
Detecting the face region with a constrained local neural field (CLNF) model, cropping it out as the original picture, and scaling all cropped pictures to the same size;
(2) performing preliminary feature extraction on the cropped picture with two convolution layers
The two convolution layers are the first convolution layer and the second convolution layer; both use a convolution kernel of size 3 × 3, a stride of 1 and a padding of 1 (all in pixels);
(3) feeding the preliminarily extracted features into the designed network model based on attention mechanism modules for an extraction operation, to obtain the attention features of the face image;
(4) performing a feature extraction operation with two convolution layers to obtain the extracted features of the face image
The two convolution layers are the third convolution layer and the fourth convolution layer; the third convolution layer uses a convolution kernel of size 2 × 2, a stride of 1 and a padding of 1, and the fourth convolution layer uses a convolution kernel of size 3 × 3, a stride of 1 and a padding of 1 (all in pixels);
(5) performing feature dimension reduction operation by using a global average pooling layer to obtain dimension reduction features of the face image;
(6) classifying the dimension-reduced features with a Softmax classifier and outputting the facial expression recognition result.
2. The facial expression recognition method based on the attention mechanism module as claimed in claim 1, wherein: the network model in step (3) consists of 4 attention mechanism modules, each composed of two channels; the upper layer is the trunk channel, composed of one convolution layer with a kernel of size 3 × 3, a stride of 1 and a padding of 1 (in pixels); the lower layer is the mask channel, a bottom-up/top-down hourglass network composed of two separable convolution layers, a deconvolution layer and a pooling layer, the two separable convolution layers being the first separable convolution layer and the second separable convolution layer; the trunk channel maps the input picture X to the face image feature F(X); the mask channel, passing the input in turn through the first separable convolution layer, the pooling layer, the deconvolution layer and the second separable convolution layer, outputs the face image feature G(X); G(X) is then used as a weight and multiplied element-wise with F(X) to obtain the face image attention feature M(X), computed as:
M_c(X) = F_c(X) + F_c(X) · G_c(X)
in the formula, F_c(X) denotes the c-th channel of F(X), G_c(X) denotes the c-th channel of G(X), and the symbol · denotes the element-wise (Hadamard) product of the matrices.
3. The facial expression recognition method based on the attention mechanism module as claimed in claim 1 or 2, wherein: the first separable convolution layer and the second separable convolution layer both use a convolution kernel of size 3 × 3 with a stride of 1 and a padding of 1; the deconvolution layer uses a convolution kernel of size 3 × 3 with a stride of 2 and a padding of 1; and the pooling layer uses a kernel of size 3 × 3 with a stride of 2 and a padding of 1 (all in pixels).
CN202010783432.XA 2020-08-06 2020-08-06 Human face expression recognition method based on attention mechanism module Pending CN111967359A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010783432.XA CN111967359A (en) 2020-08-06 2020-08-06 Human face expression recognition method based on attention mechanism module


Publications (1)

Publication Number Publication Date
CN111967359A 2020-11-20

Family

ID=73364990

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010783432.XA Pending CN111967359A (en) 2020-08-06 2020-08-06 Human face expression recognition method based on attention mechanism module

Country Status (1)

Country Link
CN (1) CN111967359A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112418095A (en) * 2020-11-24 2021-02-26 华中师范大学 Facial expression recognition method and system combined with attention mechanism
CN113076890A (en) * 2021-04-09 2021-07-06 南京邮电大学 Facial expression recognition method and system based on improved channel attention mechanism

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256426A (en) * 2017-12-15 2018-07-06 安徽四创电子股份有限公司 A kind of facial expression recognizing method based on convolutional neural networks


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jing Li et al., "Attention mechanism-based CNN for facial expression recognition", Neurocomputing, pages 340-350 *
Zhang Feng et al., "A survey of deep learning methods for single-person pose estimation", Journal of Chinese Computer Systems, pages 1501-1507 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112418095A (en) * 2020-11-24 2021-02-26 华中师范大学 Facial expression recognition method and system combined with attention mechanism
CN112418095B (en) * 2020-11-24 2023-06-30 华中师范大学 Facial expression recognition method and system combined with attention mechanism
CN113076890A (en) * 2021-04-09 2021-07-06 南京邮电大学 Facial expression recognition method and system based on improved channel attention mechanism
CN113076890B (en) * 2021-04-09 2022-07-29 南京邮电大学 Facial expression recognition method and system based on improved channel attention mechanism


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20201120