CN108875623A - Face recognition method based on a multi-feature fusion comparison technique - Google Patents
- Publication number: CN108875623A
- Application number: CN201810593767.8A
- Authority: CN (China)
- Prior art keywords: image, eyes, face, correlation technique, original image
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
The present invention provides a face recognition method based on a multi-feature fusion comparison technique, comprising: Step 1, acquiring real-time image samples with electronic equipment; Step 2, computing the pixel gray values of the image sample to obtain a grayscale image, performing threshold segmentation on the grayscale image, then applying histogram equalization, and finally filtering out isolated noise with a filtering method to obtain a preprocessed image sample; Step 3, performing portrait analysis and feature extraction on the image sample, computing the portrait area in the image, obtaining the eye-to-face ratio of the portrait, and correcting the eye vector; Step 4, comparing the similarity between the target person's original image and the comparison portrait to identify the target person. By extracting facial features and correcting the important eye features, the invention improves image quality and achieves higher matching accuracy.
Description
Technical field
The present invention relates to the technical field of face recognition, and in particular to a face recognition method based on a multi-feature fusion comparison technique.
Background art
Face recognition is a biometric identification technology that identifies a person based on facial feature information. It plays a significant role in public security, particularly in apprehending criminals and finding missing persons. However, current face recognition technology covers a large acquisition range and many candidate identities, which makes apprehending criminals difficult.
Summary of the invention
The present invention provides a face recognition method based on a multi-feature fusion comparison technique that extracts facial features, corrects the important eye features, improves image quality, and achieves higher matching accuracy.
The present invention provides a face recognition method based on a multi-feature fusion comparison technique, comprising:
Step 1: acquiring real-time image samples using electronic equipment;
Step 2: computing the pixel gray values of the image sample to obtain a grayscale image, performing threshold segmentation on the grayscale image, then applying histogram equalization, and finally filtering out isolated noise with a filtering method to obtain a preprocessed image sample;
Step 3: performing portrait analysis and feature extraction on the image sample, computing the portrait area in the image, obtaining the eye-to-face ratio of the portrait, and correcting the eye vector;
Step 4: comparing the similarity between the target person's original image and the comparison portrait to identify the target person.
Preferably, the image sample is video or picture information.
Preferably, in Step 2 the pixel gray value is calculated as a weighted sum of the three color components of the image, where R is the red component contained in the image, G is the green component, and B is the blue component.
Preferably, in Step 2 the binary image after threshold segmentation is:
g(x, y) = 255 if f(x, y) ≥ t, otherwise g(x, y) = 0
where f(x, y) is the original grayscale image, g(x, y) is the binary image after threshold segmentation, and t is the gray value serving as the segmentation threshold.
Preferably, the histogram equalization process comprises:
Step a: listing the gray levels f_k (k = 0, 1, 2, …, L-1) of the original image and of the transformed image, where L is the total number of gray levels;
Step b: calculating the frequency of each gray level of the histogram
P_f(f_k) = n_k / n, k = 0, 1, 2, …, L-1
where n_k is the number of pixels of each gray level in the original image, n is the total number of pixels of the original image, L is the total number of gray levels, and P_f(f_k) denotes the frequency with which that gray level occurs;
Step c: calculating the cumulative distribution function
C(f_k) = Σ_{j=0}^{k} P_f(f_j)
with n_k, n, and L defined as above;
Step d: calculating the gray level g_i of the image after histogram equalization
g_i = INT[(g_max − g_min)C(f) + g_min + 0.5]
where g_i is the gray level of the image after histogram equalization, i = 0, 1, 2, …, 255; INT is the rounding operation; g_max is the maximum gray level and g_min is the minimum gray level;
Step e: calculating the gray levels of the output image, where n_i is the number of pixels of each gray level, i = 0, 1, 2, …, 255. Histogram equalization of the original image uses the mapping relation between g_i and f_k; applying this mapping yields the equalized image.
Preferably, the filtering method uses a median filtering algorithm.
Preferably, Step 3 comprises:
Step A: constructing a mathematical model using the principal component analysis (PCA) algorithm, and obtaining the feature set of each part of the face using the Karhunen-Loeve (K-L) transform; these features form a coordinate system in which each coordinate axis is a characteristic image, and the feature set includes at least: eyes, nose, mouth, eye distance, and eyebrows;
Step B: extracting the area corresponding to the eye features and calculating the ratio of the eyes to the face;
Step C: obtaining the face angle from the area ratio of the two eyes together with the other features in the feature set, and correcting the eye feature vector.
Preferably, the correction formula in Step C computes ω_i(i, m), the corrected eye feature vector, from e_i, the area ratio of the two eyes; D_i, the eye distance; β, the eye angle; S, the area of the larger of the two eyes; π, the circular constant; and the characteristic proportion coefficient, where n is the number of facial features in the feature set, z_j is the face feature vector, f_j is the feature vector corresponding to the eyes, and λ_j is the equalization coefficient.
Preferably, the similarity judgment between the original image and the comparison portrait in Step 4 comprises:
calculating the Euclidean distance between the original image and the comparison image:
Φ(Y, D) = sqrt(Σ_{i=1}^{n} (y_i − d_i)²)
where Y is the feature vector set of the original image, D is the feature vector set of the comparison image, y_i is a single feature vector of the original image, d_i is the corresponding single feature vector of the comparison image, and n is the number of facial features in the feature set;
the match is considered successful and identification is complete when Φ(Y, D) ≤ σ, where σ is the set characteristic threshold.
Beneficial effects of the present invention: the face recognition method based on a multi-feature fusion comparison technique provided by the invention extracts facial features, corrects the important eye features, improves image quality, and achieves higher matching accuracy.
Brief description of the drawings
Fig. 1 is a flow chart of the face recognition method based on a multi-feature fusion comparison technique according to the present invention.
Specific embodiments
The present invention is described in further detail below with reference to the accompanying drawings, so that those skilled in the art can implement it with reference to the specification.
As shown in Fig. 1, the present invention provides a face recognition method based on a multi-feature fusion comparison technique, implemented according to the following steps:
Step S110: image acquisition. Electronic-eye cameras installed at particular locations first acquire images within a certain range; the acquired images include video or picture information.
Step S120: image preprocessing. The collected original image is processed as follows:
Step S121: image grayscale conversion. The acquired image data are input to obtain the R, G, and B component values of the original image; the pixel gray value is then calculated by formula, and the grayscale image is obtained from the pixel gray values, where R is the red component contained in the image, G is the green component, and B is the blue component.
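The grayscale step above can be sketched as follows. The patent's formula image is omitted from the source text, so the 0.299/0.587/0.114 weights used here are an assumption (the standard luminance weights), not the patent's confirmed coefficients:

```python
import numpy as np

def rgb_to_gray(rgb):
    # Weighted sum of the R, G, B components; the 0.299/0.587/0.114
    # luminance weights are assumed, as the source formula is omitted.
    rgb = np.asarray(rgb, dtype=np.float64)
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # Round to the nearest integer gray level
    return np.floor(gray + 0.5).astype(np.uint8)
```

For example, a pure-red pixel (255, 0, 0) maps to gray level 76 under these weights.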
Step S122: binarization. A dynamic thresholding method converts the grayscale image obtained in step S121 into a black-and-white image whose gray values are only 0 and 255:
g(x, y) = 255 if f(x, y) ≥ t, otherwise g(x, y) = 0
where f(x, y) is the original grayscale image, g(x, y) is the binary image after threshold segmentation, and t is the gray value serving as the segmentation threshold.
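A minimal sketch of this binarization step. The patent does not name the dynamic thresholding method, so Otsu's between-class-variance criterion is used here purely as an assumed example of one:

```python
import numpy as np

def otsu_threshold(gray):
    # One common dynamic-threshold choice (an assumption; the patent does
    # not specify the method): pick t maximizing between-class variance.
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    cum_p = np.cumsum(p)                       # cumulative class weight
    cum_mean = np.cumsum(p * np.arange(256))   # cumulative class mean mass
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum_p[t - 1], 1.0 - cum_p[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t - 1] / w0
        mu1 = (cum_mean[-1] - cum_mean[t - 1]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def threshold_binarize(gray, t):
    # g(x, y) = 255 where f(x, y) >= t, else 0
    return np.where(gray >= t, 255, 0).astype(np.uint8)
```

The output image contains only the two gray values 0 and 255, as required by step S122.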
Step S123: histogram equalization, comprising:
Step a: list the gray levels f_k (k = 0, 1, 2, …, L-1) of the original and transformed images, where L is the total number of gray levels.
Step b: calculate the frequency of each gray level of the histogram
P_f(f_k) = n_k / n, k = 0, 1, 2, …, L-1
where n_k is the number of pixels of each gray level in the original image, n is the total number of pixels of the original image, L is the total number of gray levels, and P_f(f_k) denotes the frequency with which that gray level occurs.
Step c: calculate the cumulative distribution function
C(f_k) = Σ_{j=0}^{k} P_f(f_j)
with n_k, n, and L defined as above.
Step d: calculate the gray level g_i of the image after histogram equalization:
g_i = INT[(g_max − g_min)C(f) + g_min + 0.5]
where g_i is the gray level of the equalized image, i = 0, 1, 2, …, 255; INT is the rounding operation; g_max and g_min are the maximum and minimum gray levels.
Step e: calculate the gray levels of the output image, where n_i is the number of pixels of each gray level, i = 0, 1, 2, …, 255. Equalization of the original image uses the mapping relation between g_i and f_k; applying this mapping yields the equalized image.
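Steps a through e can be sketched directly from the stated formulas, taking g_min = 0 and g_max = 255 for an 8-bit image:

```python
import numpy as np

def equalize_hist(gray):
    # Steps a-e: P_f(f_k) = n_k / n, C(f_k) = sum of frequencies up to k,
    # then g_i = INT[(g_max - g_min) * C(f) + g_min + 0.5].
    gray = np.asarray(gray, dtype=np.uint8)
    n = gray.size
    hist = np.bincount(gray.ravel(), minlength=256)  # n_k
    cdf = np.cumsum(hist) / n                        # C(f_k)
    g_min, g_max = 0, 255
    mapping = np.floor((g_max - g_min) * cdf + g_min + 0.5).astype(np.uint8)
    return mapping[gray]                             # apply the g_i <- f_k mapping
```

A constant image maps entirely to 255 (its CDF reaches 1 at the single occupied level), while an already-uniform gray ramp is left nearly unchanged.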
Step S124: median filtering, which removes isolated noise from the images obtained in step S123. Implementation: first, the template is positioned over the image obtained in step S123 so that the template center coincides with a pixel location in the image; next, the gray values of the pixels under the template are read and sorted in ascending order, the middle value is found, and that value is assigned to the pixel at the template center.
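A minimal sketch of this step with a 3x3 template (the template size is an assumption; the patent does not specify it). Border pixels are left unchanged here for simplicity:

```python
import numpy as np

def median_filter3(gray):
    # 3x3 median filter: sort the nine neighborhood gray values and assign
    # the middle one to the center pixel; border pixels are kept as-is.
    gray = np.asarray(gray)
    out = gray.copy()
    for y in range(1, gray.shape[0] - 1):
        for x in range(1, gray.shape[1] - 1):
            out[y, x] = np.median(gray[y - 1:y + 2, x - 1:x + 2])
    return out
```

An isolated bright pixel surrounded by dark neighbors is replaced by the neighborhood median, which is exactly the "independent noise" removal the step describes.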
Step S130: facial feature extraction.
Step S131: feature extraction from the preprocessed image using principal component analysis (PCA). The K-L transform yields the principal components of each part of the face; these principal components form a coordinate system in which each coordinate axis is an eigenface image. At recognition time, the image to be identified is projected into this space to obtain a set of projection vectors, which are then matched against the images in the face database to perform identification. The feature set includes at least: eyes, nose, mouth, eye distance, and eyebrows.
Assume Y is an n-dimensional random vector; then Y can be expressed as
Y = Σ_{i=1}^{n} a_i φ_i
where a_i is a weighting coefficient and φ_i is a basis vector. In matrix form,
Y = Φa, with Φ = (φ_1, φ_2, …, φ_n) and a = (a_1, a_2, …, a_n)^T.
The basis vectors are orthogonal, so Φ is an orthogonal matrix:
Φ^T Φ = I.
Multiplying both sides by Φ^T gives
a = Φ^T Y, a_i = φ_i^T Y.
For the components of a to be mutually uncorrelated, consider the autocorrelation matrix of the random vector:
R = E[Y Y^T]
which gives
R = Φ E[a a^T] Φ^T.
Requiring the components of a to be uncorrelated means E[a a^T] must be the diagonal matrix Γ = diag(λ_1, …, λ_n); written in matrix form and transformed, this yields
R Φ = Φ Γ
R φ_j = λ_j φ_j (j = 1, 2, …, n)
where λ_j is an eigenvalue of the autocorrelation matrix R and φ_j is the corresponding eigenvector.
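The derivation above can be sketched numerically: estimate R from mean-removed samples, solve the eigenproblem R φ_j = λ_j φ_j, and project a = Φ^T Y onto the leading eigenvectors. The sample-based estimation of R and the mean removal are conventional PCA practice assumed here, not spelled out in the patent:

```python
import numpy as np

def kl_transform(samples, k):
    # K-L (PCA) sketch: estimate R = E[Y Y^T] from mean-removed samples,
    # solve R phi_j = lambda_j phi_j, keep the top-k eigenvectors, and
    # project a = Phi^T Y to obtain the feature coefficients.
    Y = np.asarray(samples, dtype=np.float64)  # one sample per row
    Yc = Y - Y.mean(axis=0)                    # remove the mean face
    R = Yc.T @ Yc / len(Yc)                    # sample autocorrelation matrix
    eigvals, eigvecs = np.linalg.eigh(R)       # ascending eigenvalues
    Phi = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
    return Yc @ Phi, Phi                       # coefficients a, basis Phi
```

The columns of Phi are orthonormal (Φ^T Φ = I), matching the orthogonality condition in the derivation.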
Step S132: extract the area corresponding to the eye features and calculate the ratio of the eyes to the face.
Step S133: obtain the face angle from the area ratio of the two eyes together with the other features in the feature set, and correct the eye feature vector. The correction formula computes ω_i(i, m), the corrected eye feature vector, from e_i, the area ratio of the two eyes; D_i, the eye distance; β, the eye angle; S, the area of the larger of the two eyes; π, the circular constant; and the characteristic proportion coefficient, where n is the number of facial features in the feature set, Φ_j is the face feature vector, f_j is the feature vector corresponding to the eyes, and λ_j is the equalization coefficient, with value 0.813.
Step S140: face recognition. Calculate the Euclidean distance between the original image and the comparison image:
Φ(Y, D) = sqrt(Σ_{i=1}^{n} (y_i − d_i)²)
where Y is the feature vector set of the original image, D is the feature vector set of the comparison image, y_i is a single feature vector of the original image, d_i is the corresponding single feature vector of the comparison image, and n is the number of facial features in the feature set. The match is considered successful and identification is complete when Φ(Y, D) ≤ σ, where σ is the set characteristic threshold, whose value is determined by the screening requirements; it is generally taken as the mean of the Euclidean distances computed over the entire comparison image library.
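The distance test above can be sketched directly from its definition; `euclidean_match` is an illustrative helper name, not from the patent:

```python
import math

def euclidean_match(y, d, sigma):
    # Phi(Y, D) = sqrt(sum_i (y_i - d_i)^2); the match succeeds when the
    # distance does not exceed the characteristic threshold sigma.
    dist = math.sqrt(sum((yi - di) ** 2 for yi, di in zip(y, d)))
    return dist, dist <= sigma
```

For instance, feature vectors (0, 3) and (4, 0) are distance 5 apart, so they match under σ = 5 but not under σ = 4.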
When a target person disappears near a particular location, the camera at location L acquires the target person's image and information S, including the name of the location. Images within that range are then acquired continuously; the acquired information includes the image, the image location, and the location name. When the target person re-enters the range of a particular camera location, the newly collected images are compared with the target person's image by facial features, thereby identifying the target person.
Although embodiments of the present invention have been disclosed above, they are not limited to the applications listed in the description and the embodiments; the invention can be applied in all fields for which it is suitable. Additional modifications can readily be realized by those skilled in the art, so without departing from the general concept defined by the claims and their equivalent scope, the invention is not limited to the specific details and illustrations shown and described herein.
Claims (9)
1. A face recognition method based on a multi-feature fusion comparison technique, characterized by comprising:
Step 1: acquiring real-time image samples using electronic equipment;
Step 2: computing the pixel gray values of the image sample to obtain a grayscale image, performing threshold segmentation on the grayscale image, then applying histogram equalization, and finally filtering out isolated noise with a filtering method to obtain a preprocessed image sample;
Step 3: performing portrait analysis and feature extraction on the image sample, computing the portrait area in the image, obtaining the eye-to-face ratio of the portrait, and correcting the eye vector;
Step 4: comparing the similarity between the target person's original image and the comparison portrait to identify the target person.
2. The face recognition method based on a multi-feature fusion comparison technique according to claim 1, characterized in that the image sample is video or picture information.
3. The face recognition method based on a multi-feature fusion comparison technique according to claim 1, characterized in that in Step 2 the pixel gray value is calculated as a weighted sum of the three color components, where R is the red component contained in the image, G is the green component, and B is the blue component.
4. The face recognition method based on a multi-feature fusion comparison technique according to claim 2, characterized in that in Step 2 the binary image after threshold segmentation is:
g(x, y) = 255 if f(x, y) ≥ t, otherwise g(x, y) = 0
where f(x, y) is the original grayscale image, g(x, y) is the binary image after threshold segmentation, and t is the gray value serving as the segmentation threshold.
5. The face recognition method based on a multi-feature fusion comparison technique according to claim 2, characterized in that the histogram equalization process comprises:
Step a: listing the gray levels f_k, k = 0, 1, 2, …, L-1, of the original and transformed images, where L is the total number of gray levels;
Step b: calculating the frequency of each gray level of the histogram
P_f(f_k) = n_k / n
where n_k is the number of pixels of each gray level in the original image, k = 0, 1, 2, …, L-1, n is the total number of pixels of the original image, L is the total number of gray levels, and P_f(f_k) denotes the frequency with which that gray level occurs;
Step c: calculating the cumulative distribution function
C(f_k) = Σ_{j=0}^{k} P_f(f_j)
with n_k, n, and L defined as above;
Step d: calculating the gray level g_i of the image after histogram equalization:
g_i = INT[(g_max − g_min)C(f) + g_min + 0.5];
where g_i is the gray level of the equalized image, i = 0, 1, 2, …, 255; INT is the rounding operation; g_max is the maximum gray level and g_min is the minimum gray level;
Step e: calculating the gray levels of the output image, where n_i is the number of pixels of each gray level, i = 0, 1, 2, …, 255.
6. The face recognition method based on a multi-feature fusion comparison technique according to claim 1, characterized in that the filtering method uses a median filtering algorithm.
7. The face recognition method based on a multi-feature fusion comparison technique according to claim 1, characterized in that Step 3 comprises:
Step A: constructing a mathematical model using the principal component analysis (PCA) algorithm and obtaining the feature set of each part of the face using the Karhunen-Loeve transform; these features form a coordinate system in which each coordinate axis is a characteristic image, and the feature set includes at least: eyes, nose, mouth, eye distance, and eyebrows;
Step B: extracting the area corresponding to the eye features and calculating the ratio of the eyes to the face;
Step C: obtaining the face angle from the area ratio of the two eyes together with the other features in the feature set, and correcting the eye feature vector.
8. The face recognition method based on a multi-feature fusion comparison technique according to claim 7, characterized in that the correction formula in Step C computes ω_i(i, m), the corrected eye feature vector, from e_i, the area ratio of the two eyes; D_i, the eye distance; β, the eye angle; S, the area of the larger of the two eyes; π, the circular constant; and the characteristic proportion coefficient, where n is the number of facial features in the feature set, z_j is the face feature vector, f_j is the feature vector corresponding to the eyes, and λ_j is the equalization coefficient.
9. The face recognition method based on a multi-feature fusion comparison technique according to claim 1, characterized in that the similarity judgment between the original image and the comparison portrait in Step 4 comprises:
calculating the Euclidean distance between the original image and the comparison image:
Φ(Y, D) = sqrt(Σ_{i=1}^{n} (y_i − d_i)²)
where Y is the feature vector set of the original image, D is the feature vector set of the comparison image, y_i is a single feature vector of the original image, d_i is the corresponding single feature vector of the comparison image, and n is the number of facial features in the feature set;
the match is considered successful and identification is complete when Φ(Y, D) ≤ σ, where σ is the set characteristic threshold.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810593767.8A CN108875623B (en) | 2018-06-11 | 2018-06-11 | Face recognition method based on image feature fusion contrast technology |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108875623A true CN108875623A (en) | 2018-11-23 |
CN108875623B CN108875623B (en) | 2020-11-10 |
Family
ID=64337944
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810593767.8A Expired - Fee Related CN108875623B (en) | 2018-06-11 | 2018-06-11 | Face recognition method based on image feature fusion contrast technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108875623B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110533609A (en) * | 2019-08-16 | 2019-12-03 | 域鑫科技(惠州)有限公司 | Image enchancing method, device and storage medium suitable for endoscope |
CN111914632A (en) * | 2020-06-19 | 2020-11-10 | 广州杰赛科技股份有限公司 | Face recognition method, face recognition device and storage medium |
CN113052497A (en) * | 2021-02-02 | 2021-06-29 | 浙江工业大学 | Criminal worker risk prediction method based on dynamic and static feature fusion learning |
CN114155480A (en) * | 2022-02-10 | 2022-03-08 | 北京智视数策科技发展有限公司 | Vulgar action recognition method |
CN114821712A (en) * | 2022-04-07 | 2022-07-29 | 上海应用技术大学 | Face recognition image fusion method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070031041A1 (en) * | 2005-08-02 | 2007-02-08 | Samsung Electronics Co., Ltd. | Apparatus and method for detecting a face |
CN102750526A (en) * | 2012-06-25 | 2012-10-24 | 黑龙江科技学院 | Identity verification and recognition method based on face image |
CN107742094A (en) * | 2017-09-22 | 2018-02-27 | 江苏航天大为科技股份有限公司 | Improve the image processing method of testimony of a witness comparison result |
- 2018-06-11: CN application CN201810593767.8A granted as patent CN108875623B (status: not active, expired due to non-payment of fees)
Also Published As
Publication number | Publication date |
---|---|
CN108875623B (en) | 2020-11-10 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20201110; Termination date: 20210611 |