CN110516626A - Facial symmetry assessment method based on face recognition technology - Google Patents
Facial symmetry assessment method based on face recognition technology
- Publication number
- CN110516626A (application number CN201910810551.7A)
- Authority
- CN
- China
- Prior art keywords
- face
- cornea
- symmetry
- facial
- circle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
Abstract
The invention discloses a facial symmetry assessment method based on face recognition technology. Frontal face photographs of facial paralysis patients are used as training samples for a feature point recognition model, and the cornea is selected as a reference object. The recognized facial feature point coordinates are transformed by translation and rotation into a new face coordinate system, on which a segmented, multi-item comprehensive evaluation system with quantified scores is established. The recognized facial feature points can be manually adjusted according to accuracy requirements, and the corrections are collected in real time to optimize the system. The recognition results of the invention are more accurate and informative; the cornea is identified by a deep-learning model, adding an objective reference scale that makes the results more reliable.
Description
Technical field
The present invention relates to the field of face recognition, and in particular to a facial symmetry assessment method based on face recognition technology.
Background technique
Existing facial-paralysis symmetry assessment systems generally use a 68-point facial landmark model: 68 key points are detected on the face in each input photograph, and the system then quantifies the patient's facial symmetry from the resulting coordinates of the eyes, eyebrows, lips, and so on. The comparison mainly uses the absolute distances between coordinates, producing absolute pixel distances or ratios from which a quantified symmetry measure is calculated.
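A minimal sketch of this existing pixel-distance comparison may help; the landmark coordinates and the vertical midline are assumed inputs for illustration, not part of any specific prior scheme:

```python
def pixel_symmetry_ratio(left_pts, right_pts, midline_x):
    """Naive symmetry measure of the kind used by existing schemes:
    compare the absolute pixel distance of each left/right landmark
    pair to an assumed vertical facial midline and return the mean
    shorter-to-longer distance ratio (1.0 = perfectly symmetric)."""
    ratios = []
    for (lx, ly), (rx, ry) in zip(left_pts, right_pts):
        d_left = abs(midline_x - lx)
        d_right = abs(rx - midline_x)
        ratios.append(min(d_left, d_right) / max(d_left, d_right))
    return sum(ratios) / len(ratios)

# Mouth corners equidistant from the midline score 1.0:
print(pixel_symmetry_ratio([(40, 120)], [(80, 120)], midline_x=60))  # 1.0
```

Because such a measure works purely in image pixels, it depends on head pose and image scale, which is exactly the lack of an objective reference that the invention addresses.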
However, the 68-point facial landmark model is designed for normal faces. For facial paralysis patients, especially those with severely asymmetric faces, individual facial regions often cannot be detected accurately, so the accuracy of the symmetry assessment cannot be guaranteed. In addition, existing schemes consider only the pixel positions of the key points and use no objective reference object during symmetry assessment, which reduces the credibility of the result; and because the detection model no longer changes after training, inaccurately detected regions cannot be corrected.
Therefore, those skilled in the art are working to develop a facial symmetry assessment method based on face recognition technology that builds a facial landmark model from the asymmetric facial features of facial paralysis patients, collects a sufficient amount of key-point data, adds an objective reference object to the symmetry assessment, and constructs an evaluation system ensuring the credibility and accuracy of the results, while allowing the landmark model to be corrected with newly acquired data in real time so that recognition accuracy improves continuously.
Summary of the invention
In view of the above drawbacks of the prior art, the technical problems to be solved by the present invention are that existing facial symmetry assessment systems lack an objective reference object, cannot train models on the facial characteristics of facial paralysis patients, and cannot correct themselves.
To achieve the above object, the present invention provides a facial symmetry assessment method based on face recognition technology, characterized by comprising the following steps:
Step S1: establish a feature point recognition model for the facial characteristics of facial paralysis patients:
collect frontal head photographs of facial paralysis patients from partner hospitals and doctors through an online cloud service platform, and manually annotate facial key points and cornea identification circles;
train each facial region separately using machine learning methods;
train the cornea identification circles using deep learning methods.
Step S2: the user detects the facial key points and cornea identification circles of a target picture with the trained feature point recognition model, and the detection results are drawn onto the target picture.
Step S3: according to their own accuracy requirements, the user may manually drag and adjust the detected facial key points and cornea identification circles to meet higher accuracy requirements.
Step S4: a face coordinate system is established from the detection results of the cornea identification circles; the coordinate system of the original target picture is translated and rotated into this face-specific coordinate system, and the coordinates of all facial key points in the face coordinate system are calculated.
Step S5: from the new coordinates of all facial key points, a quantified symmetry result is computed for each facial region, and a facial symmetry evaluation system is established to obtain the symmetry assessment result of the target picture.
Further, in step S1, annotating the facial key points comprises marking all facial key points to be recognized and drawing the face location rectangle; annotating the cornea identification circles comprises marking the size and center position of the left and right corneas.
Further, in step S1, the region-by-region training comprises separately training models for the eyes, eyebrows, nose, mouth, and other regions; for the corneas, the left and right corneas are split into separate samples and trained.
Further, in step S2, when a cornea is partially occluded by the eyelid, the complete cornea identification circle is still recognized and extracted by the feature point recognition model.
Further, in step S3, the manually adjusted facial key points and cornea identification circles of the detected target picture are uploaded to the online cloud service platform when a network is available, for optimization of the feature point recognition model.
Further, in step S4, the diameter of the cornea identification circle is used as the reference length and defined as 10 mm; in the new coordinate system, the line connecting the centers of the left and right cornea identification circles is the X-axis, the midpoint of that line is the origin (0,0), and its perpendicular bisector is the Y-axis.
Further, in step S5, the facial symmetry evaluation system comprises three major score groups (static, dynamic, and linkage); each group is subdivided into sub-scores for the individual facial regions, and the group scores are finally combined in a weighted sum to obtain the final symmetry assessment result.
Further, the same algorithms are used during the training of the feature point recognition model and during the detection of target pictures.
Further, the method further comprises a companion graphical-interface client for displaying the detection result image of the facial key points and cornea identification circles; through the graphical-interface client, the user can modify the position of any detected facial key point and the extent of any cornea identification circle.
Further, in step S1, during the manual annotation of the facial key points and cornea identification circles, doctors score the symmetry of each facial region of the facial paralysis patients' photo samples; these scores are used in combined training with the facial symmetry evaluation system described in step S5, so that symmetry evaluations can be produced automatically.
With the facial symmetry assessment method based on face recognition technology used by the present invention, only a frontal photograph of the patient needs to be uploaded to obtain an accurate quantified facial symmetry assessment according to predefined rules, without a doctor's subjective judgment. In the present invention, machine learning models are built from facial photographs of facial paralysis patients to recognize facial key points, which is more accurate and informative than existing schemes; the cornea is recognized by a deep-learning model, adding an objective reference object that makes the results more reliable; at the same time, a segmented, multi-item comprehensive evaluation system with quantified scores assesses the state of facial symmetry more accurately; finally, a calibration function for the recognized key points is provided, optimizing application in high-accuracy usage scenarios.
The concept, specific structure, and resulting technical effects of the present invention are further described below with reference to the accompanying drawings, so that the purpose, features, and effects of the present invention can be fully understood.
Detailed description of the invention
Fig. 1 is a workflow diagram of a preferred embodiment of the present invention;
Fig. 2 is a schematic diagram of the manually annotated facial key points in the present invention;
Fig. 3 is a schematic diagram of establishing the face coordinate system from the reference object in the present invention.
Specific embodiment
Several preferred embodiments of the present invention are introduced below with reference to the accompanying drawings to make their technical content clearer and easier to understand. The present invention can be embodied in many different forms, and its scope of protection is not limited to the embodiments mentioned herein.
In the drawings, components with identical structure are denoted by the same numeral, and components with similar structure or function are denoted by similar numerals. The size and thickness of each component shown in the drawings are depicted arbitrarily; the present invention does not limit the size and thickness of any component. For clarity of illustration, the thickness of some components is suitably exaggerated in places.
As shown in Fig. 1, the workflow of a preferred embodiment of the present invention comprises the following steps.
Step S1: first, the facial feature point recognition model is trained. Frontal head photographs of facial paralysis patients are collected from partner hospitals and doctors through an online cloud service platform, and the facial key points and cornea identification circles are annotated manually. It should be noted that all photo samples used in training the feature point recognition model are frontal photographs of facial paralysis patients, which makes the detection of facial key points on facial paralysis patients more complete. As shown in Fig. 2, the manually annotated facial key points 1 include all key points to be recognized, such as those of the lips, eyebrows, nose, and eyes, and the face location rectangle 3 is drawn; the size and center position of the left and right corneas are also marked manually as the cornea identification circles 2. At the same time, doctors score the symmetry of each facial region of the patients' frontal photo samples. Next, the facial key points and the cornea identification circles are trained separately. In this embodiment, the facial key points are trained region by region using the open-source machine learning library dlib, and the corneas are trained with the object detection algorithm Faster-RCNN in TensorFlow, a symbolic mathematics system based on dataflow programming, with the left and right cornea identification circles trained separately. It should be noted that in this embodiment the cornea is selected as the reference object and annotated with cornea identification circles, but the choice of reference object is not limited to the cornea; any facial feature that can be regarded as having a fixed size may be selected.
Step S2: the user opens the target picture to be analyzed in the companion graphical-interface client, detects its facial key points and cornea identification circles with the trained feature point recognition model, and the detection results are drawn onto the target picture. If the target picture contains an incomplete cornea (for example, one occluded by the eyelid), the deep-learning method still recognizes the complete cornea and marks its cornea identification circle. In this step, the detection and recognition algorithms used are consistent with the training algorithms of the feature point recognition model in step S1. The development languages of the graphical-interface client include, but are not limited to, Python and C++.
Step S3: according to their own accuracy requirements, the user may drag and adjust the detected facial key points and cornea identification circles in the graphical-interface client to meet higher accuracy requirements; the adjustable content includes the positions of all facial key points and the extent and center positions of the cornea identification circles. In this step, all users' manual adjustments are uploaded to the online cloud service platform whenever a network is available, for optimization of the feature point recognition model. That is, as the number of uses increases, the recognition quality keeps improving.
Step S4: the face coordinate system is established from the detection results of the cornea identification circles. The diameter of the cornea identification circle is used as the reference length and defined as 10 mm; the line connecting the centers of the left and right cornea identification circles is taken as the X-axis, its midpoint as the origin (0,0), and its perpendicular bisector as the Y-axis. As shown in Fig. 3, the coordinate system of the original target picture is translated and rotated into this face-specific coordinate system, and the coordinates of all facial key points in the face coordinate system are calculated.
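A minimal sketch of such a coordinate transform, under the stated definitions (cornea center line as X-axis, its midpoint as origin, circle diameter scaled to 10 mm); the function name and example pixel values are hypothetical, and this is an illustration rather than the patented computation:

```python
import math

def make_face_transform(left_center, right_center, circle_diameter_px):
    """Return a function mapping image-pixel coordinates into the face
    coordinate system: origin at the midpoint between the two cornea
    circle centers, X-axis along their connecting line, units scaled
    so that the cornea identification circle's diameter equals 10 mm."""
    (lx, ly), (rx, ry) = left_center, right_center
    ox, oy = (lx + rx) / 2.0, (ly + ry) / 2.0   # origin: midpoint
    angle = math.atan2(ry - ly, rx - lx)         # tilt of cornea line
    scale = 10.0 / circle_diameter_px            # pixels -> millimeters
    cos_a, sin_a = math.cos(angle), math.sin(angle)

    def to_face(p):
        dx, dy = p[0] - ox, p[1] - oy
        # rotate by -angle so the cornea line becomes the X-axis
        x = (dx * cos_a + dy * sin_a) * scale
        y = (-dx * sin_a + dy * cos_a) * scale
        return (x, y)

    return to_face

# The cornea centers themselves land symmetrically on the X-axis:
to_face = make_face_transform((100, 210), (180, 190), circle_diameter_px=40)
print(to_face((100, 210)))  # left cornea center: negative X, y close to 0
```

Because the output coordinates are in millimeters relative to an anatomical reference, left/right key-point comparisons become independent of head tilt and image scale, which is the point of using the cornea as an objective reference.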
Step S5: from the new coordinates of all facial key points, a quantified symmetry result is computed for each facial region, and a facial symmetry evaluation system is established to obtain the symmetry assessment result of the target picture. The facial symmetry evaluation system comprises three major score groups (static, dynamic, and linkage), each subdivided into sub-scores for the individual facial regions; the group scores are finally combined in a weighted sum to obtain the final symmetry assessment result. In actual implementation, any quantified, itemized symmetry assessment system may be used to carry out the facial symmetry assessment.
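The grouped weighted summation described in step S5 can be sketched as follows; the group names match the static/dynamic/linkage grouping above, but the specific sub-scores and weights are hypothetical illustration values, not figures from the patent:

```python
def overall_symmetry_score(group_scores, group_weights):
    """Combine the static / dynamic / linkage score groups of step S5
    into one result: each group score is the mean of its per-region
    sub-scores, and the groups are then combined by weighted sum."""
    total = 0.0
    for group, sub_scores in group_scores.items():
        group_mean = sum(sub_scores.values()) / len(sub_scores)
        total += group_weights[group] * group_mean
    return total

# Hypothetical per-region sub-scores (0-100) and weights:
scores = {
    "static":  {"eyes": 90, "brows": 80, "mouth": 70},
    "dynamic": {"eyes": 60, "mouth": 50},
    "linkage": {"eye_mouth": 40},
}
weights = {"static": 0.4, "dynamic": 0.4, "linkage": 0.2}
print(overall_symmetry_score(scores, weights))  # 0.4*80 + 0.4*55 + 0.2*40
```

The weights would in practice be tuned against the doctors' manual symmetry scores collected in step S1, which is consistent with the combined training described for claim 10.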
The preferred embodiments of the present invention have been described in detail above. It should be appreciated that, based on the concept of the present invention, a person of ordinary skill in the art can make many modifications and variations without creative labor. Therefore, all technical solutions that persons skilled in the art can obtain on the basis of the prior art, through logical analysis, reasoning, or limited experimentation under the concept of the present invention, shall fall within the scope of protection determined by the claims.
Claims (10)
1. A facial symmetry assessment method based on face recognition technology, characterized by comprising the following steps:
step S1, establishing a feature point recognition model for the facial characteristics of facial paralysis patients:
collecting frontal head photographs of facial paralysis patients from partner hospitals and doctors through an online cloud service platform, and manually annotating facial key points and cornea identification circles;
training each facial region separately using machine learning methods;
training the cornea identification circles using deep learning methods;
step S2, detecting the facial key points and cornea identification circles of a target picture with the trained feature point recognition model, and drawing the detection results onto the target picture;
step S3, manually dragging and adjusting the detected facial key points and cornea identification circles according to the user's own accuracy requirements, so as to meet higher accuracy requirements;
step S4, establishing a face coordinate system according to the detection results of the cornea identification circles, translating and rotating the coordinate system of the original target picture to form the face-specific coordinate system, and calculating the coordinates of all facial key points in the face coordinate system;
step S5, computing, from the new coordinates of all facial key points, a quantified symmetry result for each facial region, and establishing a facial symmetry evaluation system to obtain the symmetry assessment result of the target picture.
2. The facial symmetry assessment method based on face recognition technology according to claim 1, characterized in that, in step S1, annotating the facial key points comprises marking all facial key points to be recognized and drawing the face location rectangle; annotating the cornea identification circles comprises marking the size and center position of the left and right corneas.
3. The facial symmetry assessment method based on face recognition technology according to claim 1, characterized in that, in step S1, the region-by-region training comprises separately training models for the eyes, eyebrows, nose, mouth, and other regions; for the corneas, the left and right corneas are split into separate samples and trained separately.
4. The facial symmetry assessment method based on face recognition technology according to claim 1, characterized in that, in step S2, when a cornea is partially occluded by the eyelid, the complete cornea identification circle is still recognized and extracted by the feature point recognition model.
5. The facial symmetry assessment method based on face recognition technology according to claim 1, characterized in that, in step S3, the manually adjusted facial key points and cornea identification circles of the detected target picture are uploaded to the online cloud service platform when a network is available, for optimization of the feature point recognition model.
6. The facial symmetry assessment method based on face recognition technology according to claim 1, characterized in that, in step S4, the diameter of the cornea identification circle is used as the reference length and defined as 10 mm; in the new coordinate system, the line connecting the centers of the left and right cornea identification circles is the X-axis, the midpoint of that line is the origin (0,0), and its perpendicular bisector is the Y-axis.
7. The facial symmetry assessment method based on face recognition technology according to claim 1, characterized in that, in step S5, the facial symmetry evaluation system comprises three major score groups (static, dynamic, and linkage); each group is subdivided into sub-scores for the individual facial regions, and the group scores are finally combined in a weighted sum to obtain the final symmetry assessment result.
8. The facial symmetry assessment method based on face recognition technology according to claim 1, characterized in that the same algorithms are used during the training of the feature point recognition model and during the detection of target pictures.
9. The facial symmetry assessment method based on face recognition technology according to claim 1, characterized by further comprising a companion graphical-interface client for displaying the detection result image of the facial key points and cornea identification circles, through which the user can modify the position of any detected facial key point and the extent of any cornea identification circle.
10. The facial symmetry assessment method based on face recognition technology according to claim 1, characterized in that, in step S1, during the manual annotation of the facial key points and cornea identification circles, doctors score the symmetry of each facial region of the facial paralysis patients' photo samples, and these scores are used in combined training with the facial symmetry evaluation system described in step S5, so that symmetry evaluations can be produced automatically.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910810551.7A CN110516626A (en) | 2019-08-29 | 2019-08-29 | A kind of Facial symmetry appraisal procedure based on face recognition technology |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110516626A true CN110516626A (en) | 2019-11-29 |
Family
ID=68629211
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910810551.7A Pending CN110516626A (en) | 2019-08-29 | 2019-08-29 | A kind of Facial symmetry appraisal procedure based on face recognition technology |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110516626A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103824049A (en) * | 2014-02-17 | 2014-05-28 | 北京旷视科技有限公司 | Cascaded neural network-based face key point detection method |
CN106980815A (en) * | 2017-02-07 | 2017-07-25 | 王俊 | Facial paralysis objective evaluation method under being supervised based on H B rank scores |
CN107007257A (en) * | 2017-03-17 | 2017-08-04 | 深圳大学 | The automatic measure grading method and apparatus of the unnatural degree of face |
CN107146251A (en) * | 2017-03-28 | 2017-09-08 | 浙江大学 | A kind of symmetrical analysis method of three-dimensional face model |
CN107713984A (en) * | 2017-02-07 | 2018-02-23 | 王俊 | Facial paralysis objective evaluation method and its system |
WO2019033571A1 (en) * | 2017-08-17 | 2019-02-21 | 平安科技(深圳)有限公司 | Facial feature point detection method, apparatus and storage medium |
CN110084259A (en) * | 2019-01-10 | 2019-08-02 | 谢飞 | A kind of facial paralysis hierarchical synthesis assessment system of combination face texture and Optical-flow Feature |
Non-Patent Citations (2)
Title |
---|
ZEQING JIANG et al.: "A Cloud-Based Training And Evaluation System For Facial Paralysis Rehabilitation", IEEE *
Xiang Jianghuai (向江怀): "Development of a facial nerve function rehabilitation training system under augmented reality", China Masters' Theses Full-text Database, Medicine and Health Sciences *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111062936A (en) * | 2019-12-27 | 2020-04-24 | 中国科学院上海生命科学研究院 | Quantitative index evaluation method for facial deformation diagnosis and treatment effect |
CN111062936B (en) * | 2019-12-27 | 2023-11-03 | 中国科学院上海营养与健康研究所 | Quantitative index evaluation method for facial deformation diagnosis and treatment effect |
CN111553250A (en) * | 2020-04-25 | 2020-08-18 | 深圳德技创新实业有限公司 | Accurate facial paralysis degree evaluation method and device based on face characteristic points |
CN113053517A (en) * | 2021-03-29 | 2021-06-29 | 深圳大学 | Facial paralysis grade evaluation method based on dynamic region quantitative indexes |
CN113053517B (en) * | 2021-03-29 | 2023-03-07 | 深圳大学 | Facial paralysis grade evaluation method based on dynamic region quantitative indexes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20191129 |