CN108710823A - Face similarity comparison method - Google Patents
Face similarity comparison method
- Publication number
- CN108710823A (application CN201810311000.1A)
- Authority
- CN
- China
- Prior art keywords
- characteristic
- block
- point
- characteristic point
- matching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24147—Distances to closest patterns, e.g. nearest neighbour classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
Abstract
The present invention relates to a face similarity comparison method. Feature points are extracted from facial images and grouped into feature blocks; the feature blocks are then subdivided layer by layer and the feature points within them are extracted further, while similar blocks and the proportion of matched feature points are computed, so as to compare the similarity of two facial images. The method is highly practical, compares the features of facial images in finer detail, and applies a more rigorous comparison calculation.
Description
Technical field
The invention belongs to the field of artificial intelligence and in particular relates to a novel face similarity comparison method.
Background art
With the rapid development of computer networks and multimedia technology, image-based face detection, recognition, and retrieval have become particularly active research areas. One important research topic is face similarity measurement, which is a key foundation and an important part of face detection, recognition, and retrieval; research on face similarity therefore has significant practical value and research significance.
Summary of the invention
In view of this, the present invention provides a novel face similarity comparison method that solves, or partially solves, the problem of face similarity evaluation.
Specifically, the present invention adopts the following technical scheme:
A face similarity comparison method, characterized in that the method comprises:
1) Setting feature points: two facial images are captured, and feature points are set on each facial image, wherein a feature point is a point with a distinctive feature on the facial image.
2) Dividing feature blocks: each of the two facial images is divided into multiple feature blocks, each feature block containing at least two feature points; the shape of the feature blocks on each face is not fixed, but the feature blocks on the two faces correspond one to one, and corresponding feature blocks contain the same feature points.
3) Comparing feature blocks: a similarity comparison is carried out on each pair of corresponding feature blocks on the two faces. A pair of corresponding feature blocks is enlarged at the same expansion rate, i.e. by the same magnification factor; after enlargement, the corresponding feature points within them are matched and connected by lines. Two corresponding feature points whose connecting line is horizontal are matched feature points, and the number of matched feature points is denoted m. If all corresponding feature points in a pair of feature blocks are matched feature points, that pair of feature blocks is a similar block; the number of similar blocks is denoted n, and the proportion of similar blocks among all feature blocks is calculated. If the proportion exceeds 50%, the matched feature points in the remaining feature blocks other than the similar blocks are counted further, and the ratio of matched feature points to all feature points in each such block is taken as that block's weight. The blocks are sorted by weight in descending order, and each block in the top 50% of the ranking that contains more than one feature point is divided further, so that each divided feature block contains only 50% of the feature points of the block before division. Feature point matching is then carried out again: if all feature points in a pair of divided feature blocks are matched feature points, the divided feature block is a second-level similar block, and the proportion of second-level similar blocks among the divided feature blocks is recorded. The number of matched feature points in the other divided feature blocks is then counted, and the face similarity measurement is carried out by a formula in which:
m is the number of matched feature points in a feature block, n is the number of similar blocks, j is the number of matched feature points among the feature points in a divided feature block, c is the number of second-level similar blocks, N is the number of divided feature blocks, the adjustment coefficients of the feature blocks and of the divided feature blocks are arbitrary real numbers, and w is the value of the face similarity measurement.
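As an illustrative aid only (not part of the original disclosure), a minimal Python sketch of the bookkeeping described in step 3) might look like the following; the `FeatureBlock` structure, the point names, and the horizontal-line tolerance `y_tol` are assumptions made here for readability.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

Point = Tuple[float, float]  # (x, y) image coordinates

@dataclass
class FeatureBlock:
    # Named feature points contained in the block, e.g. {"nose_tip": (x, y)}
    points: Dict[str, Point]

def matched_points(block_a: FeatureBlock, block_b: FeatureBlock,
                   y_tol: float = 2.0) -> List[str]:
    """Return names of corresponding points whose connecting line is (nearly) horizontal."""
    matches = []
    for name, (xa, ya) in block_a.points.items():
        if name in block_b.points:
            xb, yb = block_b.points[name]
            if abs(ya - yb) <= y_tol:  # horizontal connecting line => matched feature point
                matches.append(name)
    return matches

def compare_blocks(pairs: List[Tuple[FeatureBlock, FeatureBlock]]):
    """Count matched points (m) per block pair and flag similar blocks (all points matched)."""
    results = []
    for a, b in pairs:
        m = len(matched_points(a, b))
        similar = m == len(a.points) and len(a.points) == len(b.points)
        results.append({"m": m, "similar": similar})
    n = sum(r["similar"] for r in results)            # number of similar blocks
    proportion = n / len(results) if results else 0.0  # share of similar blocks
    return results, n, proportion
```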
Preferably, the points with distinctive features on the face that serve as feature points include points related to the facial features. Further, the feature points include the two edge points of an eyebrow, the middle point of the eyebrow, the eyeball point of an eye, the tip point of the nose, the two edge points of the mouth, and the middle point of the mouth.
In addition, when feature points are matched, if a matched feature point is not clear enough, it is locally enlarged; secondary feature points are then taken within the enlarged feature point and matched further, and the matched secondary feature points are taken as matched feature points.
The beneficial effects of the present invention are as follows: the novel face similarity comparison method provided by the invention compares similar feature points of facial images, groups the feature points into feature blocks, and subdivides the feature blocks layer by layer, so as to compare the similarity of two facial images. The method is highly practical, compares the features of facial images in finer detail, and applies a more rigorous comparison calculation.
Detailed description of the embodiments
Face recognition is a research hotspot in computer vision and machine learning and has broad application prospects. How to obtain effective facial feature representations and design powerful classifiers has become a key research question, and uncontrollable factors in real environments increase the difficulty of acquiring such representations. With the proposal and development of compressive sensing theory, research on face recognition methods based on sparse coding models has attracted wide attention and great interest from researchers. A face recognition method based on Sparse Representation Based Classification (SRC) was first proposed; it shows good performance in robust face recognition and handles illumination changes, noise, and occlusion well. Its basic idea is as follows: if training samples with known class labels but belonging to different classes are vectorized in the spatial domain or a feature domain to form a representation dictionary, then a test image belonging to one of those classes can, after the same vectorization, be sparsely coded over that dictionary, and the resulting nonzero coefficients concentrate mainly on the representation coefficients of the training samples of the same class as the test image. The test image is therefore reconstructed with minimum error by a linear combination of the training samples of the corresponding class, which determines the correct class of the tested image.
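For readers unfamiliar with SRC, the following is a minimal sketch of the idea summarized above: a test vector is sparsely coded over a dictionary of vectorized training samples and assigned to the class whose atoms give the smallest reconstruction residual. The use of scikit-learn's Lasso as the l1 solver and the parameter `alpha` are choices made here for illustration, not something specified in this document.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D: np.ndarray, labels: np.ndarray, y: np.ndarray, alpha: float = 0.01):
    """Sparse Representation based Classification (SRC) sketch.

    D      : (d, N) dictionary whose columns are vectorized training faces
    labels : (N,) class label of each dictionary column
    y      : (d,) vectorized test face
    """
    # l1-regularized coding of the test sample over the dictionary
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    coder.fit(D, y)
    x = coder.coef_  # sparse coefficient vector of length N

    # assign the class whose atoms reconstruct y with minimum residual
    best_class, best_residual = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        residual = np.linalg.norm(y - D[:, mask] @ x[mask])
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class
```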
Many experts and scholars at home and abroad have carried out extensive research on face recognition methods within the SRC framework. To address the excessively high dimensionality caused by the occlusion dictionary, Yang et al. proposed an occlusion dictionary based on the Gabor transform to reduce the computational complexity of the system. Considering that SRC solves a regularized l1-norm coding problem with high computational complexity, Zhang et al. proposed replacing the regularized l1-norm coding with regularized l2-norm coding, leading to the concept of Collaborative Representation Based Classification (CRC). SRC can be regarded as a generalization of nearest-neighbor classification and nearest feature subspace classification. Although SRC-based face recognition is largely insensitive to the choice of feature representation when the feature dimensionality is sufficiently high, the degrees of freedom of the sparse representation increase when the selected feature dimensionality is low, which significantly reduces the classification and recognition performance of sparse-representation-based methods. Wang et al. proposed Locality-Constrained Linear Coding (LLC), which uses the locality constraints between samples to regularize the coding coefficients so that they have a sparsity similar to that of sparse coding coefficients; LLC can also be used effectively for image classification.
Chao et al. proposed an SRC face recognition method based on locality constraints and group sparsity constraints. Similarly, Lu et al. and Guo et al. proposed locality (similarity) Weighted SRC (WSRC) methods, and Timofte et al. and Waqas et al. proposed weighted CRC methods for face recognition. It can be seen that embedding the locality (or similarity) information between the test sample and the training samples into linear/sparse coding and classification helps to effectively improve the discriminative power of the coding coefficients and thereby enhance classification performance. In practical face recognition applications, however, facial images acquired in uncontrolled scenes may contain expression changes, deliberate partial occlusion, and disguise, so similarity measures based on the whole image can hardly reflect the true positional relationships between images, which degrades the face classification performance of weighted coding representations based on global image similarity. Under uncontrolled conditions, especially when expression changes, partial occlusion, or disguise exist between images, how to represent position effectively has become a question worth exploring in research on weighted-coding face recognition methods within the locality-constraint framework. To address expression changes, partial occlusion, and disguise in uncontrolled facial images, face recognition based on image-block maximum-similarity embedded sparse representation is investigated: training and test images are divided into non-overlapping blocks, the similarity between each pair of corresponding blocks is computed, the maximum of these values is used to measure the similarity between the images, and the extracted maximum-block similarity information is embedded into sparse coding and classification, thereby effectively improving the stability of sparse coding under low-dimensional features and the recognition performance of the system.
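The block-wise maximum-similarity measure outlined in the paragraph above can be sketched as follows; the block size of 16 pixels and the use of cosine similarity per block are assumptions made here for illustration, not values taken from this document.

```python
import numpy as np

def block_max_similarity(img_a: np.ndarray, img_b: np.ndarray, block: int = 16) -> float:
    """Divide two equally sized grayscale images into non-overlapping blocks,
    compute cosine similarity per corresponding block pair, return the maximum."""
    assert img_a.shape == img_b.shape
    h, w = img_a.shape
    best = -1.0
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            a = img_a[i:i + block, j:j + block].ravel().astype(float)
            b = img_b[i:i + block, j:j + block].ravel().astype(float)
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            if denom > 0:
                best = max(best, float(a @ b / denom))
    return best
```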
In order to make the technical problems to be solved, the technical solutions, and the advantages clearer, the present invention is described in detail below with reference to embodiments. It should be noted that the specific embodiments described here only explain the present invention and are not intended to limit it; any product that can realize the described functions is an equivalent replacement or improvement and falls within the protection scope of the present invention. The specific method is as follows:
Two facial images are captured by an image acquisition device, and feature points are set on each facial image. A feature point is a point with a distinctive feature on the facial image, including the two edge points of an eyebrow, the middle point of the eyebrow, the eyeball point of an eye, the tip point of the nose, the two edge points of the mouth, and the middle point of the mouth. The two facial images are then divided into blocks, each image into multiple feature blocks. The shape of each feature block is not fixed, and each contains at least two feature points. The feature blocks are divided correspondingly on the two facial images, that is, they occur in pairs; the two blocks of a pair lie on the two facial images respectively and contain the same feature points. The same feature points means any combination of several of the same points among the two edge points of the eyebrow, the middle point of the eyebrow, the eyeball point of the eye, the tip point of the nose, the two edge points of the mouth, and the middle point of the mouth.
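The patent does not specify how these feature points are located; one common way (assumed here) is a standard facial landmark detector. The sketch below uses dlib's 68-point shape predictor together with OpenCV; the model file path, the landmark indices, and the point names are assumptions chosen to approximate the points listed above.

```python
import cv2
import dlib

# Assumed model file; downloaded separately from the dlib model zoo.
PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

def facial_feature_points(image_path: str):
    """Return named landmark coordinates roughly matching the points listed above
    (eyebrow edges/middle, eye centers, nose tip, mouth corners/middle)."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if not faces:
        return {}
    shape = predictor(gray, faces[0])
    pt = lambda i: (shape.part(i).x, shape.part(i).y)
    return {
        "left_brow_edges": (pt(17), pt(21)), "left_brow_mid": pt(19),
        "right_brow_edges": (pt(22), pt(26)), "right_brow_mid": pt(24),
        "left_eye_point": pt(37), "right_eye_point": pt(44),  # approximate eyeball points
        "nose_tip": pt(30),
        "mouth_corners": (pt(48), pt(54)), "mouth_mid": pt(62),
    }
```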
A pair of feature blocks is matched to compare their similarity; the detailed process is as follows:
The pair of feature blocks is expanded at the same expansion rate, where the expansion rate is the expansion speed of the block and the expansion speed equals the magnification speed. After expansion, the feature points contained in the expanded feature blocks are matched, and corresponding feature points are connected; corresponding feature points are, for example, both edge points of the eyebrow, or both the middle point of the eyebrow. Two connected corresponding feature points whose connecting line is horizontal are matched feature points. The number of matched feature points is m, where m is a positive integer greater than 0. If a matched feature point is not clear enough, it must be locally enlarged; after local enlargement, secondary feature points are taken within the enlarged matched feature points, where a secondary feature point is a point chosen on a locally enlarged matched feature point for further matching. The corresponding further-matched points are connected, and if all connecting lines are horizontal, the feature block containing the matched feature points is a similar block. The number of similar blocks is counted; if this number is n, where n is a positive integer, dividing n by the number of feature blocks gives the proportion of similar blocks. If the proportion exceeds 50%, the matched feature points in the other feature blocks, excluding the similar blocks, are counted further, and the ratio of the number of matched feature points to the number of feature points in each block is taken as that block's weight. The blocks are sorted by weight in descending order, and each block in the top 50% of the ranking that contains more than one feature point is further divided into feature blocks. The feature points in a divided feature block account for only 50% of the feature points in the block before division, and feature point matching is carried out again to check which points are matched feature points. A divided feature block with a complete count of matched feature points, that is, one in which all feature points are matched feature points, is a second-level similar block. The proportion of second-level similar blocks among the divided feature blocks is also recorded. For the divided feature blocks other than the second-level similar blocks, the number of matched feature points among all feature points is counted, and all data from the whole process are recorded in a single table. Finally, the face similarity measurement is carried out according to these statistics, using a formula in which:
m is the number of matched feature points in a pair of feature blocks, j is the number of matched feature points among the feature points in a divided feature block, c is the number of second-level similar blocks, N is the number of divided feature blocks, the adjustment coefficients of the feature blocks and of the divided feature blocks are arbitrary real numbers, k is a real number, and all data from the whole process are recorded in a single table; w is the value of the face similarity measurement, and a higher value indicates a higher similarity between the two facial images.
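The similarity formula itself is not reproduced in this text, so the exact expression for w is unavailable here. Purely as a hypothetical illustration of how the quantities defined above could be combined, a weighted score over the block-level and divided-block-level statistics might be written as below; the functional form and the coefficient names `alpha` and `beta` are assumptions, not the patent's formula.

```python
def similarity_score(m_per_block, n, j_per_divided_block, c, N, alpha=1.0, beta=1.0):
    """Hypothetical combination of the quantities defined above (NOT the patent's formula):
    m_per_block         : matched-point counts per original feature block
    n                   : number of similar blocks
    j_per_divided_block : matched-point counts per divided feature block
    c                   : number of second-level similar blocks
    N                   : number of divided feature blocks
    alpha, beta         : adjustment coefficients for the two levels
    """
    total_blocks = len(m_per_block)
    level1 = n / total_blocks if total_blocks else 0.0
    level2 = c / N if N else 0.0
    # fold in the raw matched-point counts as a secondary, normalized term
    point_term = (sum(m_per_block) + sum(j_per_divided_block)) / max(1, total_blocks + N)
    w = alpha * level1 + beta * level2 + point_term
    return w
```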
Although preferred embodiments of the present invention have been described, additional changes and modifications may be made to these embodiments once those skilled in the art learn of the basic inventive concept. The appended claims are therefore intended to be interpreted as including the preferred embodiments and all changes and modifications that fall within the scope of the present invention. Obviously, those skilled in the art can make various modifications and variations to the embodiments of the present invention without departing from the spirit and scope of the embodiments of the present invention; if these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them. The embodiments of the present invention have been described in detail above with reference to specific implementations, but the present invention is not limited to the above embodiments, and those of ordinary skill in the technical field can also make various changes within the scope of their knowledge without departing from the purpose of the present invention.
Claims (4)
1. A face similarity comparison method, characterized in that the method comprises: 1) setting feature points: two facial images are captured, and feature points are set on each facial image, wherein a feature point is a point with a distinctive feature on the facial image; 2) dividing feature blocks: each of the two facial images is divided into multiple feature blocks, each feature block containing at least two feature points, wherein the shape of the feature blocks on each face is not fixed, but the feature blocks on the two faces correspond one to one and corresponding feature blocks contain the same feature points; 3) comparing feature blocks: a similarity comparison is carried out on each pair of corresponding feature blocks on the two faces, wherein a pair of corresponding feature blocks is enlarged at the same expansion rate, i.e. by the same magnification factor; after enlargement, the corresponding feature points within them are matched and connected by lines; two corresponding feature points whose connecting line is horizontal are matched feature points, and the number of matched feature points is denoted m; if all corresponding feature points in a pair of feature blocks are matched feature points, that pair of feature blocks is a similar block; the number of similar blocks is denoted n, and the proportion of similar blocks among all feature blocks is calculated; if the proportion exceeds 50%, the matched feature points in the other feature blocks excluding the similar blocks are counted further, and the ratio of matched feature points to all feature points in each such block is taken as that block's weight; the blocks are sorted by weight in descending order, and each block in the top 50% of the ranking that contains more than one feature point is divided further, so that each divided feature block contains only 50% of the feature points of the block before division; feature point matching is then carried out again, and if all feature points in a pair of divided feature blocks are matched feature points, the divided feature block is a second-level similar block and the proportion of second-level similar blocks among the divided feature blocks is recorded; the number of matched feature points in the other divided feature blocks is then counted, and the face similarity measurement is carried out by a formula in which m is the number of matched feature points in a feature block, n is the number of similar blocks, j is the number of matched feature points among the feature points in a divided feature block, c is the number of second-level similar blocks, N is the number of divided feature blocks, the adjustment coefficients of the feature blocks and of the divided feature blocks are arbitrary real numbers, and w is the value of the face similarity measurement.
2. The face similarity comparison method according to claim 1, characterized in that the points with distinctive features on the face that serve as feature points include points related to the facial features.
3. The face similarity comparison method according to claim 2, characterized in that the feature points include the two edge points of an eyebrow, the middle point of the eyebrow, the eyeball point of an eye, the tip point of the nose, the two edge points of the mouth, and the middle point of the mouth.
4. The face similarity comparison method according to claim 1, characterized in that, when feature points are matched, if a matched feature point is not clear enough, it is locally enlarged, secondary feature points are then taken within the locally enlarged feature point and matched further, and the matched secondary feature points are taken as matched feature points.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810311000.1A CN108710823B (en) | 2018-04-09 | 2018-04-09 | Face similarity comparison method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810311000.1A CN108710823B (en) | 2018-04-09 | 2018-04-09 | Face similarity comparison method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108710823A true CN108710823A (en) | 2018-10-26 |
CN108710823B CN108710823B (en) | 2022-04-19 |
Family
ID=63866534
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810311000.1A Active CN108710823B (en) | 2018-04-09 | 2018-04-09 | Face similarity comparison method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108710823B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070183653A1 (en) * | 2006-01-31 | 2007-08-09 | Gerard Medioni | 3D Face Reconstruction from 2D Images |
CN101833672A (en) * | 2010-04-02 | 2010-09-15 | 清华大学 | Sparse representation face identification method based on constrained sampling and shape feature |
CN105740808A (en) * | 2016-01-28 | 2016-07-06 | 北京旷视科技有限公司 | Human face identification method and device |
CN106980819A (en) * | 2017-03-03 | 2017-07-25 | 竹间智能科技(上海)有限公司 | Similarity judgement system based on human face five-sense-organ |
CN107729855A (en) * | 2017-10-25 | 2018-02-23 | 成都尽知致远科技有限公司 | Mass data processing method |
Non-Patent Citations (2)
Title |
---|
JIAN LAI et al.: "DISCRIMINATIVE SPARSITY PRESERVING EMBEDDING FOR FACE RECOGNITION", IEEE *
PENGYUE ZHANG: "Sparse discriminative multi-manifold embedding for one-sample", IEEE *
Also Published As
Publication number | Publication date |
---|---|
CN108710823B (en) | 2022-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Song et al. | Region-based quality estimation network for large-scale person re-identification | |
Song et al. | Recognizing spontaneous micro-expression using a three-stream convolutional neural network | |
Li et al. | Insulator defect recognition based on global detection and local segmentation | |
CN107330364B (en) | People counting method and system based on a cGAN network | |
WO2019134327A1 (en) | Facial expression recognition feature extraction method employing edge detection and sift | |
CN104143079B (en) | Method and system for facial attribute recognition | |
Peng et al. | Towards facial expression recognition in the wild: A new database and deep recognition system | |
CN109325443A (en) | Facial attribute recognition method based on multi-instance multi-label deep transfer learning | |
CN109063565A (en) | Low-resolution face recognition method and device | |
CN106687989A (en) | Method and system of facial expression recognition using linear relationships within landmark subsets | |
Wang et al. | Improving human action recognition by non-action classification | |
CN107122712A (en) | Palmprint image recognition method based on convolutional neural networks and bidirectional local feature aggregated description vectors | |
CN112541421B (en) | Clothes-changing pedestrian re-identification method for open spaces | |
CN102609693A (en) | Human face recognition method based on fuzzy two-dimensional kernel principal component analysis | |
CN106855883A (en) | Face image retrieval based on the visual bag-of-words model | |
Kindiroglu et al. | Temporal accumulative features for sign language recognition | |
Liu et al. | Boosting-POOF: boosting part based one vs one feature for facial expression recognition in the wild | |
Vats et al. | Multi-task learning for jersey number recognition in ice hockey | |
Yang et al. | Combining YOLOV3-tiny model with dropblock for tiny-face detection | |
Cao et al. | Bayesian correlation filter learning with Gaussian scale mixture model for visual tracking | |
CN105550642B (en) | Gender recognition method and system based on multi-scale linear differential feature low-rank representation | |
CN114663766A (en) | Plant leaf identification system and method based on multi-image cooperative attention mechanism | |
Jiang et al. | Orientation-guided similarity learning for person re-identification | |
Xu et al. | Robust seed localization and growing with deep convolutional features for scene text detection | |
Cai et al. | Performance analysis of distance teaching classroom based on machine learning and virtual reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||