CN111680536A - Lightweight face recognition method based on a case-management scene - Google Patents

Lightweight face recognition method based on a case-management scene Download PDF

Info

Publication number
CN111680536A
CN111680536A CN201911044979.1A
Authority
CN
China
Prior art keywords
network
lightweight
face recognition
face
method based
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911044979.1A
Other languages
Chinese (zh)
Other versions
CN111680536B (en)
Inventor
毛亮
王祥雪
王秋子
林焕凯
许丹丹
谭焕新
刘双广
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gosuncn Technology Group Co Ltd
Original Assignee
Gosuncn Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gosuncn Technology Group Co Ltd filed Critical Gosuncn Technology Group Co Ltd
Priority to CN201911044979.1A priority Critical patent/CN111680536B/en
Publication of CN111680536A publication Critical patent/CN111680536A/en
Application granted granted Critical
Publication of CN111680536B publication Critical patent/CN111680536B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of face recognition, and particularly relates to a lightweight face recognition method based on a case-management scene. The method analyzes the network structures of the lightweight networks MobileNetV2 and MobileFaceNet, partially adjusts the network structure on the basis of the MobileFaceNet backbone, and increases the depth and width of the network; the adjusted network is then supervised with softmax and its variant loss functions, and the face feature vector is extracted. The method achieves a lightweight algorithm, faster recognition and a better user experience.

Description

Lightweight face recognition method based on a case-management scene
Technical Field
The invention belongs to the technical field of face recognition, and particularly relates to a lightweight face recognition method based on a case-management scene.
Background
With the rapid development of computer technology, face recognition has become one of the most active research topics in pattern recognition and image processing over the last 30 years. The purpose of face recognition is to extract personalized features from a face image and thereby identify the person. A simple automatic face recognition system comprises face detection, face key-point localization and alignment, and face feature extraction and comparison. At present, image-based face feature extraction methods fall mainly into the following categories:
Face recognition based on geometric features is one of the earliest approaches. The geometric features commonly used are the local shape features of facial organs such as the eyes, nose and mouth, together with the geometric layout of these organs on the face, and prior knowledge of facial structure is often exploited when extracting them. The features used for recognition are vectors built from the shapes and geometric relations of facial organs — typically Euclidean distances, curvatures and angles between specified landmark points — and recognition is essentially a matching between such vectors. Geometric-feature methods are simple and easy to understand, but no uniform feature extraction standard has emerged; it is difficult to extract stable features from an image, especially when parts of the face are occluded; and robustness to large expression or pose changes is poor.
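The distance-and-angle feature vector described above can be sketched in a few lines. The landmark coordinates below are purely illustrative (they are not from the patent), and the choice of distances and angles is just one possible hand-crafted design:

```python
import math

# Hypothetical 2-D landmark coordinates (left eye, right eye, nose tip,
# mouth centre) -- illustrative values only.
landmarks = {
    "left_eye":  (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "nose":      (50.0, 60.0),
    "mouth":     (50.0, 80.0),
}

def dist(p, q):
    """Euclidean distance between two landmark points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle(p, q, r):
    """Angle in degrees at vertex q formed by points p and r."""
    v1 = (p[0] - q[0], p[1] - q[1])
    v2 = (r[0] - q[0], r[1] - q[1])
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

# A toy geometric feature vector: inter-pupil distance, eye-to-mouth
# distances, and the angle at the nose subtended by the two eyes.
feature_vector = [
    dist(landmarks["left_eye"], landmarks["right_eye"]),
    dist(landmarks["left_eye"], landmarks["mouth"]),
    dist(landmarks["right_eye"], landmarks["mouth"]),
    angle(landmarks["left_eye"], landmarks["nose"], landmarks["right_eye"]),
]
```

Two faces would then be compared by matching such vectors, which is exactly where the method's fragility lies: occlusion of a single landmark corrupts several components at once.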
In subspace-based approaches, the commonly used linear subspaces include the eigen-subspace, difference subspace and independent-component subspace; local feature analysis and factor analysis are also used. These methods have been extended to mixtures of linear subspaces and to nonlinear subspaces. Because the image form of each eigenvector resembles a human face, the eigenvectors are called eigenfaces. Building separate eigen-subspaces for the eyes, nose, mouth and so on, and combining them with the eigenface subspace, yields good recognition results.
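The core of the eigenface idea — the dominant eigenvector of the covariance of flattened face images — can be sketched with plain power iteration. The 2×2 "images" below are toy data, not from any face dataset, and a real system would use full-resolution images and a linear-algebra library:

```python
# Toy "eigenface": power iteration on the covariance matrix of a handful
# of flattened 2x2 grayscale "face images". Illustrative only.

def mean_vec(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def covariance(rows):
    mu = mean_vec(rows)
    d = len(mu)
    centered = [[r[i] - mu[i] for i in range(d)] for r in rows]
    return [[sum(c[i] * c[j] for c in centered) / len(rows)
             for j in range(d)] for i in range(d)]

def power_iteration(mat, steps=200):
    """Dominant eigenvector of a symmetric matrix (the 'eigenface')."""
    v = [1.0] * len(mat)
    for _ in range(steps):
        w = [sum(mat[i][j] * v[j] for j in range(len(v)))
             for i in range(len(mat))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

faces = [  # four flattened 2x2 "images" differing only in brightness
    [1.0, 2.0, 3.0, 4.0],
    [2.0, 3.0, 4.0, 5.0],
    [1.5, 2.5, 3.5, 4.5],
    [0.5, 1.5, 2.5, 3.5],
]
eigenface = power_iteration(covariance(faces))
```

Because these toy images differ only by a uniform brightness shift, the dominant eigenvector comes out proportional to an all-ones image — the single direction in which the samples vary.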
Statistics-based methods include the KL (Karhunen-Loève) transform, singular value decomposition (SVD) and hidden Markov models (HMM). The KL transform treats the high-dimensional vector obtained by unrolling the face image by rows and columns as a random vector and derives an orthogonal K-L basis; the basis vectors corresponding to the larger eigenvalues have face-like shapes. The HMM approach trains a model on the spatial sequence of several sample images, exploiting the top-to-bottom, left-to-right structure of the face; 1-D and 2-D HMMs were the first applied to face recognition, and good results have been obtained using low-frequency DCT coefficients as observation vectors.
Correlation-matching methods include template matching and iso-intensity line matching. Template matching outperforms geometric-feature methods in recognition rate. The iso-intensity approach matches two face images using the iso-intensity contours of multi-level gray values as features. These methods depend on hand-crafted feature descriptors, whose descriptive power is very limited and ill-suited to complex, changeable practical scenes.
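The correlation at the heart of template matching is usually the normalized cross-correlation, which is invariant to uniform brightness and contrast changes. A minimal sketch on flattened patches (the pixel values are illustrative, not from the patent):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

template = [10, 20, 30, 40]                 # flattened template patch
same_face_brighter = [110, 120, 130, 140]   # same pattern, brighter lighting
different_face = [40, 10, 35, 5]            # unrelated pattern
```

A brightness shift leaves the score at 1.0, while an unrelated patch scores low or negative — but the score says nothing about occlusion or pose, which is exactly the limitation noted above.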
With the explosive growth of deep learning research and the ever-increasing computing power of related hardware such as GPUs, more and more schemes based on convolutional neural networks have appeared — from Facebook's early DeepFace and Google's FaceNet and FaceID series, to the softmax-loss variants proposed for face recognition in recent years (A-softmax, CosFace and ArcFace) — further compressing the intra-class distance of face features, expanding the inter-class distance, and extracting more discriminative face features. However, the high performance of deep-learning face recognition comes with a large number of parameters and heavy hardware consumption, which hinders productization and mobile deployment. Although MobileFaceNet, a finely designed lightweight face feature extraction network, appeared in 2018, its performance still falls short of practical industrial application.
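The margin-based softmax variants named above all modify the target-class logit computed from the cosine between the normalized feature and the class-centre weight vector. A minimal sketch of the ArcFace-style additive angular margin; the margin and scale values are common illustrative defaults, not values specified by the patent:

```python
import math

def arcface_logit(cos_theta, margin=0.5, scale=64.0):
    """ArcFace-style target logit: s * cos(theta + m), where theta is the
    angle between the normalized feature and the class-centre weight."""
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))
    return scale * math.cos(theta + margin)

def plain_logit(cos_theta, scale=64.0):
    """Ordinary softmax logit on normalized features: s * cos(theta)."""
    return scale * cos_theta

# For theta + m within [0, pi], adding the angular margin always lowers
# the target-class logit, forcing tighter intra-class clustering.
cos_theta = math.cos(math.radians(30))
```

CosFace instead subtracts the margin from the cosine itself (`scale * (cos_theta - m)`), and A-softmax multiplies the angle; all three share the same goal of widening the angular gap between classes.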
With the appearance of deep network structures such as ResNet and DenseNet, and with the continuous improvement of face feature extraction loss functions and the open-sourcing of large-scale face training and test sets, face recognition technology has gradually matured. However, research on lightweight models for the mobile terminal remains relatively scarce: although the lightweight network structures are finely designed, using general-purpose classification networks directly for face feature extraction gives poor results. To address this, MobileFaceNet redesigned a CNN (convolutional neural network) suited to face feature extraction; in the ArcFace open-source project this lightweight network is only 4 MB, but its performance still cannot meet the actual needs of industrial production.
In view of these problems, the invention studies a lightweight face feature extraction algorithm for the case-handling management scene. In such practical application scenes the environment is complex and changeable, and illumination, blur, occlusion and deflection angles at capture time degrade face recognition performance; moreover, since a lightweight face feature extraction model must be developed for the RK3288 Android platform, the designed CNN must simultaneously satisfy requirements on recognition performance, inference time, memory footprint and so on.
Disclosure of Invention
To remedy the technical defects of the prior art, the invention provides a lightweight face recognition method based on a case-management scene.
The invention concerns the face feature extraction module in a case-handling management scene — a face recognition module, mainly for the video interview system of a case-handling scene, that builds on network structures such as MobileNetV2 and MobileFaceNet and their variants, and offers a lightweight algorithm, faster recognition and a better user experience.
The invention is realized by the following technical scheme:
In the lightweight face recognition method based on a case-management scene, the network structures of the lightweight networks MobileNetV2 and MobileFaceNet are analyzed, the network structure is partially adjusted on the basis of the MobileFaceNet backbone, and the depth and width of the network are increased; the method comprises the following steps:
S1, acquiring a training sample set, and carrying out standardized naming and labeling of the training sample set;
S2, inputting the training set samples processed in step S1 into the lightweight network for off-line training, and obtaining a face feature extractor through softmax and its variant loss functions;
The structure of the lightweight network includes: 1) the second-layer depthwise separable convolution of MobileFaceNet is replaced by an ordinary convolution, reducing the feature loss in the initial computation; 2) the input size is changed: the input is set to 112 × 96 according to the face feature size of the case-handling area; 3) the network is deepened by increasing the repetition count n of the third and fifth bottlenecks; the number of channels of the input feature map of the convolutional layers inside a bottleneck equals the number of channels of the bottleneck's input feature map multiplied by t, where t denotes the channel expansion factor; the first layer of each bottleneck has stride s, and the remaining layers all use stride 1.
S3, establishing a standard test library.
Further, the variant loss functions include A-softmax, ArcFace or CosFace.
Further, the standard test library comprises face sample images of various scenes and various poses at various angles.
Further, the method also comprises: testing the face feature extractor obtained in step S2 with the standard test library, taking cosine similarity as the evaluation index, and analyzing the test results.
A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the steps of the lightweight face recognition method based on a case-management scene.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the lightweight face recognition method based on a case-management scene.
Although the MobileFaceNet structure mentioned in the background is a finely designed lightweight face feature extraction network, its pursuit of a lightweight effect reduces the feature map dimensionality too quickly at the head of the network, so the basic facial features are insufficiently extracted and the representational power of the extracted face features suffers. Compared with the prior art, the invention has at least the following beneficial effects or advantages:
First, the current mainstream lightweight networks are analyzed, and on the basis of the MobileFaceNet backbone the network structure is partially adjusted and re-topologized; the network is supervised and trained with the current mainstream face loss functions, improving the representational power of the face features while remaining lightweight, and supporting end-to-end training and testing. In practical use, under certain limiting conditions, the method achieves a high recognition rate, a low false recognition rate and good real-time performance, and can meet the intelligence requirements of the case-handling area.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
To meet the intelligent analysis requirements of case-handling areas, the invention provides a lightweight face recognition network based on the case-management scene. The current mainstream lightweight networks MobileNetV2 and MobileFaceNet are first analyzed; partial structural adjustments are made on the basis of the MobileFaceNet backbone, and the depth and width of the network are increased. The network is supervised with the current mainstream softmax and its variant loss functions, so that the extracted face feature vector has good recognition performance under certain limiting conditions (complete and clearly captured facial features, interpupillary distance greater than 40, face age span under five years, etc.), while the forward inference time also meets product requirements.
The lightweight face recognition network technology based on the case-management scene mainly comprises the following steps:
S1, analyzing the network structures of MobileNetV2 and MobileFaceNet;
S2, taking the MobileFaceNet structure as the backbone and deepening the network while modifying some network branches; Tables 1 and 2 show MobileFaceNet and the lightweight network of the invention, respectively;
Table 1 MobileFaceNet network architecture
[Table 1 is reproduced only as an image in the original: Figure BDA0002253894550000061]
The major improvements of the lightweight network are: 1) the second-layer depthwise separable convolution of MobileFaceNet (shown in Table 1) is changed to an ordinary convolution to reduce the feature loss in the initial computation; 2) the input size is changed to 112 × 96 to match the face feature size of the case-handling area; 3) the network is deepened by increasing the repetition count of the third and fifth bottlenecks (as shown in Table 3). Here t denotes the channel expansion factor, i.e. the number of channels of the input feature map of the convolutional layers inside a bottleneck equals the number of channels of the bottleneck's input feature map multiplied by t; c denotes the number of output channels of a bottleneck, n its repetition count, and s the stride; the first layer of each bottleneck has stride s, and the remaining layers use stride 1.
Table 2 Improved lightweight network
[Table 2 is reproduced only as an image in the original: Figure BDA0002253894550000071]
Table 3 Bottleneck structure
[Table 3 is reproduced only as an image in the original: Figure BDA0002253894550000072]
The bottleneck structure consists of three convolution layers, where h, w and k are the height, width and channel count of the feature map and s is the convolution stride. The first layer expands the channels with a 1 × 1 convolution (t is the expansion coefficient), the second layer extracts features with a 3 × 3 convolution, and the third layer reduces the channels back to k with a 1 × 1 convolution.
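Since Tables 1–3 survive only as images, the channel arithmetic of the bottleneck can be sketched numerically. The sketch below assumes the 3 × 3 middle layer is depthwise, as in MobileNetV2/MobileFaceNet; the channel counts are illustrative, not values from the patent's tables:

```python
def bottleneck_params(k_in, k_out, t):
    """Approximate weight count of an inverted-residual bottleneck:
    1x1 expansion -> 3x3 depthwise -> 1x1 projection (biases/BN ignored)."""
    inner = k_in * t                    # channel expansion by factor t
    expand = 1 * 1 * k_in * inner      # pointwise expansion conv
    depthwise = 3 * 3 * inner          # one 3x3 filter per inner channel
    project = 1 * 1 * inner * k_out    # pointwise projection conv
    return expand + depthwise + project

def ordinary_conv_params(k_in, k_out):
    """A plain 3x3 convolution over the same channels, for comparison."""
    return 3 * 3 * k_in * k_out
```

For example, with 64 input and output channels and t = 2, the bottleneck needs roughly half the weights of a plain 3 × 3 convolution — which is why repeating bottlenecks (increasing n) deepens the network at modest parameter cost.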
S3, carrying out standardized naming and labeling on the training sample set;
S4, inputting the training set samples processed in S3 into the lightweight network for off-line training, adopting the current mainstream softmax and its variant losses (A-softmax, ArcFace, CosFace, etc.) as the face feature extraction loss function, and obtaining a face feature extractor;
S5, establishing a standard test library that contains, as far as possible, face sample images of various scenes and various poses at various angles;
S6, testing the face feature extractor obtained in S4 with the test library, taking cosine similarity as the evaluation index, and analyzing the test results.
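The cosine-similarity evaluation of step S6 can be sketched as follows. The feature vectors and the 0.5 decision threshold are illustrative stand-ins; the patent specifies neither the feature dimension nor a threshold value:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two face feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def same_person(feat_a, feat_b, threshold=0.5):
    """Declare a match when similarity exceeds a tuned threshold
    (0.5 here is illustrative, not a value from the patent)."""
    return cosine_similarity(feat_a, feat_b) >= threshold

probe   = [0.1, 0.9, 0.3, 0.2]
gallery = [0.2, 1.8, 0.6, 0.4]   # same direction as probe -> similarity 1.0
other   = [0.9, -0.1, 0.1, 0.0]  # nearly orthogonal -> low similarity
```

Because cosine similarity depends only on direction, the feature extractor's output need not be normalized before comparison, though L2-normalizing once up front is a common optimization.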
The present invention also provides a computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, performs the steps of the face recognition method.
The invention also provides computer equipment comprising a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor realizes the steps of the face recognition method when executing the program.
The above-mentioned embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, and it should be understood that the above-mentioned embodiments are only examples of the present invention and are not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement and the like made without departing from the spirit and scope of the invention are also within the protection scope of the invention.

Claims (6)

1. A lightweight face recognition method based on a case-management scene, characterized in that the network structures of the lightweight networks MobileNetV2 and MobileFaceNet are analyzed, the network structure is partially adjusted on the basis of the MobileFaceNet backbone, and the depth and width of the network are increased; the method comprises the following steps:
S1, acquiring a training sample set, and carrying out standardized naming and labeling of the training sample set;
S2, inputting the training set samples processed in step S1 into the lightweight network for off-line training, and obtaining a face feature extractor through softmax and its variant loss functions;
The structure of the lightweight network includes: 1) the second-layer depthwise separable convolution of MobileFaceNet is replaced by an ordinary convolution, reducing the feature loss in the initial computation; 2) the input size is changed: the input is set to 112 × 96 according to the face feature size of the case-handling area; 3) the network is deepened by increasing the repetition count n of the third and fifth bottlenecks; the number of channels of the input feature map of the convolutional layers inside a bottleneck equals the number of channels of the bottleneck's input feature map multiplied by t, where t denotes the channel expansion factor; the first layer of each bottleneck has stride s, and the remaining layers all use stride 1.
S3, establishing a standard test library.
2. The lightweight face recognition method based on a case-management scene as claimed in claim 1, wherein the variant loss functions include A-softmax, ArcFace or CosFace.
3. The lightweight face recognition method based on a case-management scene as claimed in claim 1, wherein the standard test library comprises face sample images of various scenes and various poses at various angles.
4. The lightweight face recognition method based on a case-management scene as claimed in claim 1, further comprising: testing the face feature extractor obtained in step S2 with the standard test library, taking cosine similarity as the evaluation index, and analyzing the test results.
5. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the steps of the lightweight face recognition method based on a case-management scene of any one of claims 1 to 4.
6. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the lightweight face recognition method based on a case-management scene of any one of claims 1 to 4.
CN201911044979.1A 2019-10-30 2019-10-30 Light-weight face recognition method based on case management scene Active CN111680536B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911044979.1A CN111680536B (en) 2019-10-30 2019-10-30 Light-weight face recognition method based on case management scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911044979.1A CN111680536B (en) 2019-10-30 2019-10-30 Light-weight face recognition method based on case management scene

Publications (2)

Publication Number Publication Date
CN111680536A true CN111680536A (en) 2020-09-18
CN111680536B CN111680536B (en) 2023-06-30

Family

ID=72451265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911044979.1A Active CN111680536B (en) 2019-10-30 2019-10-30 Light-weight face recognition method based on case management scene

Country Status (1)

Country Link
CN (1) CN111680536B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112733665A (en) * 2020-12-31 2021-04-30 中科院微电子研究所南京智能技术研究院 Face recognition method and system based on lightweight network structure design
CN113255576A (en) * 2021-06-18 2021-08-13 第六镜科技(北京)有限公司 Face recognition method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107886064A (en) * 2017-11-06 2018-04-06 安徽大学 A kind of method that recognition of face scene based on convolutional neural networks adapts to
CN108304788A (en) * 2018-01-18 2018-07-20 陕西炬云信息科技有限公司 Face identification method based on deep neural network
CN108985236A (en) * 2018-07-20 2018-12-11 南京开为网络科技有限公司 A kind of face identification method separating convolution model based on depthization
CN109165566A (en) * 2018-08-01 2019-01-08 中国计量大学 A kind of recognition of face convolutional neural networks training method based on novel loss function
CN109214360A (en) * 2018-10-15 2019-01-15 北京亮亮视野科技有限公司 A kind of construction method of the human face recognition model based on ParaSoftMax loss function and application
CN109858362A (en) * 2018-12-28 2019-06-07 浙江工业大学 A kind of mobile terminal method for detecting human face based on inversion residual error structure and angle associated losses function

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107886064A (en) * 2017-11-06 2018-04-06 安徽大学 A kind of method that recognition of face scene based on convolutional neural networks adapts to
CN108304788A (en) * 2018-01-18 2018-07-20 陕西炬云信息科技有限公司 Face identification method based on deep neural network
CN108985236A (en) * 2018-07-20 2018-12-11 南京开为网络科技有限公司 A kind of face identification method separating convolution model based on depthization
CN109165566A (en) * 2018-08-01 2019-01-08 中国计量大学 A kind of recognition of face convolutional neural networks training method based on novel loss function
CN109214360A (en) * 2018-10-15 2019-01-15 北京亮亮视野科技有限公司 A kind of construction method of the human face recognition model based on ParaSoftMax loss function and application
CN109858362A (en) * 2018-12-28 2019-06-07 浙江工业大学 A kind of mobile terminal method for detecting human face based on inversion residual error structure and angle associated losses function

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
毛亮, 李立琛: "A Brief Discussion of Face Recognition Technology", The 3rd Shenzhen International Intelligent Transportation and Satellite Navigation Location Service Exhibition *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112733665A (en) * 2020-12-31 2021-04-30 中科院微电子研究所南京智能技术研究院 Face recognition method and system based on lightweight network structure design
CN112733665B (en) * 2020-12-31 2024-05-28 中科南京智能技术研究院 Face recognition method and system based on lightweight network structure design
CN113255576A (en) * 2021-06-18 2021-08-13 第六镜科技(北京)有限公司 Face recognition method and device

Also Published As

Publication number Publication date
CN111680536B (en) 2023-06-30

Similar Documents

Publication Publication Date Title
Lee et al. Srm: A style-based recalibration module for convolutional neural networks
CN108615010B (en) Facial expression recognition method based on parallel convolution neural network feature map fusion
Cheng et al. Exploiting effective facial patches for robust gender recognition
Cai et al. Facial expression recognition method based on sparse batch normalization CNN
CN108108677A (en) One kind is based on improved CNN facial expression recognizing methods
CN109472198A (en) A kind of video smiling face's recognition methods of attitude robust
CN109376787B (en) Manifold learning network and computer vision image set classification method based on manifold learning network
CN113705290A (en) Image processing method, image processing device, computer equipment and storage medium
CN114299542A (en) Video pedestrian re-identification method based on multi-scale feature fusion
CN111680536B (en) Light-weight face recognition method based on case management scene
CN110458235A (en) Movement posture similarity comparison method in a kind of video
CN108564061A (en) A kind of image-recognizing method and system based on two-dimensional principal component analysis
Zhang et al. FCHP: Exploring the discriminative feature and feature correlation of feature maps for hierarchical DNN pruning and compression
Gilani et al. Towards large-scale 3D face recognition
Bao et al. Optimized faster-RCNN in real-time facial expression classification
Huang et al. Incremental kernel null foley-sammon transform for person re-identification
Okada et al. Online incremental clustering with distance metric learning for high dimensional data
Li et al. A face recognition algorithm based on LBP-EHMM
Su et al. Early facial expression recognition using early rankboost
Chun-man et al. Face expression recognition based on improved MobileNeXt
CN112381176B (en) Image classification method based on binocular feature fusion network
Dwivedi et al. A new hybrid approach on face detection and recognition
Haque et al. Object localization and detection using SALNet with deformable convolutional network
Ku et al. Person re-identification method based on CNN and manually-selected feature fusion
KR100457928B1 (en) Hand signal recognition method by subgroup based classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant