CN110232307A - A multi-frame joint face recognition algorithm based on a UAV - Google Patents
A multi-frame joint face recognition algorithm based on a UAV
- Publication number: CN110232307A (application CN201910278641.6A)
- Authority
- CN
- China
- Prior art keywords
- feature
- face
- LBP
- dimension
- UAV
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/0202—Control of position or course in two dimensions specially adapted to aircraft
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/10—Simultaneous control of position or course in three dimensions
- G05D1/101—Simultaneous control of position or course in three dimensions specially adapted for aircraft
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
UAVs offer good flight capability and a wide flight range, can easily complete certain special tasks, and are widely applied in many fields; deep learning, meanwhile, is currently experiencing a new wave of interest. The present invention proposes a multi-frame joint face recognition algorithm based on a UAV. The main steps are: the ground receiver acquires the video stream captured by the UAV and splits it into frames; consecutive frames are compared; and all frames are aggregated to achieve face recognition and tracking. A densely connected convolutional neural network locates the key facial landmarks, a high-dimensional LBP algorithm extracts the features, and cosine distance is finally used to judge the similarity between faces. The algorithm can automatically locate a pedestrian from input image features while the UAV films the pedestrian, achieving a tracking effect; it extends existing still-picture recognition technology to the identification and tracking of video, and algorithm speed is significantly improved.
Description
Technical field
The present invention relates to deep learning, UAV flight control and the field of face recognition, and in particular to a multi-frame joint face recognition algorithm based on a UAV.
Background art
At present, UAVs offer good flight capability and a wide flight range; they can easily complete tasks such as aerial photography, search and rescue, terrain reconnaissance and surveillance, and are finding increasingly wide application in many fields. Face recognition technology is currently experiencing a new surge of interest both at home and abroad, but research on face recognition in UAV video imagery remains limited, while industries across today's market have an extensive demand for face recognition over consecutive frames.
Most current research identifies faces in two static pictures. Although the accuracy achieved there is now very high, dynamic face recognition remains an important research topic where progress is still insufficient; in particular, errors frequently occur under different illumination and different conditions. People sometimes move very fast in video, and the footage a UAV captures of a pedestrian varies with the shooting angle, which is another major unsolved problem. The present invention therefore focuses mainly on improving the processing of the images and on algorithm speed.
Traditional face detection algorithms are fast but have relatively low accuracy; approaches based on convolutional neural networks are more accurate but slow to run, and multi-frame joint face recognition still falls far short of the required performance.
Summary of the invention
To overcome the shortcomings and defects of the prior art, the invention proposes a multi-frame joint face recognition algorithm based on a UAV: the video stream captured by the UAV is acquired and split into frames, and each frame undergoes face detection, feature extraction and face recognition. A densely connected neural network extracts the full set of facial features, emphasizing the differences between distinct faces rather than variations in exposure, expression and the like. The Supervised Descent Method locates the key facial landmarks; since lighting in real images may be non-uniform, multi-scale high-dimensional LBP features are extracted around these key points. Principal component analysis then reduces the feature dimensionality. Finally, the similarity between faces is computed with cosine distance and compared with the similarity obtained between preceding and following frames; if it falls within a given threshold, the result is considered correct.
The technical solution of the present invention is as follows:
A. Acquire the video stream captured by the UAV. After the video stream reaches the ground station, the ground station decodes the stream data and splits it into frames, forming one image per frame.
B. Perform facial feature recognition on each frame. Face recognition is applied to each frame image in turn to obtain the corresponding facial features. A densely connected convolutional network locates the key facial landmarks; it has fewer parameters than an ordinary convolutional neural network, improving both accuracy and speed.
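The dense connectivity pattern step B relies on can be illustrated with a toy sketch. This is an assumption for illustration only — the patent specifies no concrete architecture — but it shows the key idea: each layer receives the concatenation of all earlier layers' outputs, which reuses features and keeps the parameter count below that of an ordinary network of the same depth.

```python
import numpy as np

def dense_block(x, weights):
    """Pass x through toy 'layers' that each see every earlier output.

    x       -- 1-D input feature vector
    weights -- list of matrices; weights[i] maps the concatenation of
               all previous feature vectors to a new feature vector
    """
    features = [x]
    for W in weights:
        inp = np.concatenate(features)      # dense (all-to-all) connection
        features.append(np.tanh(W @ inp))   # toy nonlinearity
    return np.concatenate(features)         # every feature map is reused
```

Because layer i sees all i previous outputs, its weight matrix can stay narrow, which is what the text means by "fewer parameters than an ordinary convolutional neural network".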
C. Design a back-propagation function based on the accuracy over preceding and following frames. When the result does not match the desired effect, the parameters are updated in reverse through the back-propagation function and the face recognition accuracy is compared again, bringing the result closer to the desired effect.
D. Control the UAV according to the position of the recognized face, adjusting the angle at which the UAV films the pedestrian to achieve a better shot.
Beneficial effects of the present invention:
(1) By using a densely connected convolutional neural network, this method avoids problems such as gradient vanishing that traditional convolutional neural networks incur as network depth increases; the densely connected network raises accuracy and speed, making it well suited to processing multiple consecutive frames;
(2) This method uses principal component analysis for dimensionality reduction, greatly cutting the number of operations and yielding results faster on UAV video streams;
(3) The back-propagation function gradually updates the parameters to find the optimal solution that meets the expected results.
Brief description of the drawings
To explain the embodiments of the invention or the prior-art technical solutions more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a structural diagram of the multi-frame joint face recognition algorithm based on a UAV according to the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
As shown in Fig. 1, the detailed process of the multi-frame joint face recognition algorithm based on a UAV is described below.
A. Design based on the high-dimensional LBP algorithm
Before deep learning appeared in the image domain, the LBP (local binary pattern) algorithm held a dominant position. Because non-uniform lighting causes feature calculation errors in the basic LBP algorithm, the high-dimensional LBP algorithm is used instead: its results are more robust than basic LBP, and a very high accuracy rate is obtainable even under varying illumination. The concrete implementation is as follows:
A 3×3 window is chosen centered on a given pixel (whose value here is 50). The eight surrounding pixels are compared with the center pixel: a neighbor greater than the center is recorded as 1, otherwise as 0. The resulting 8-bit binary number, here 01001011, is converted to decimal, 75, which becomes the LBP value of the center pixel. The Supervised Descent Method is used to locate the key facial landmarks, and the dimension of the LBP feature is obtained with the following formula:
d = n * s * k * size² (1)
where d is the dimension of the resulting LBP feature; n is the number of chosen key points; size is the side length of the selection window, so size² is the number of pixels the window contains; s is the number of scales at which the original image is resized; and k is the number of bits of the LBP value.
After obtaining the LBP feature dimension, it must still be reduced, because the extracted features contain noise and other redundant information that degrade recognition accuracy. The present invention uses principal component analysis (PCA) for feature dimensionality reduction, mapping the original group of n-dimensional vectors down to fewer than n linearly uncorrelated dimensions without affecting the correctness of recognition.
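A minimal PCA reduction along the lines just described; the eigen-decomposition route and the choice of target dimension are illustrative, not the patent's prescribed implementation.

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X (m samples x n dims) onto the top-k principal
    components, producing linearly uncorrelated k-dim features."""
    Xc = X - X.mean(axis=0)           # center each dimension
    cov = np.cov(Xc, rowvar=False)    # n x n covariance matrix
    vals, vecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:k]  # indices of the k largest
    return Xc @ vecs[:, top]          # projection onto top-k directions
```

In practice k is chosen so that the retained components explain most of the variance; discarded components carry mostly the noise and redundancy mentioned above.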
B. Design based on multi-frame joint recognition
The present invention uses a densely connected neural network as the deep learning backbone. The first layer takes the image as input, estimates a group of regression vectors and selects some candidate regions; these regions are merged with non-maximum suppression to obtain the values of highest probability. After three passes through the network, the fourth network layer is entered to locate the key points, finally yielding an n-dimensional vector. The first layer then reads the picture of the next frame, and after four rounds of screening another set of key points is obtained. The two sets of key points are compared: a threshold is set, and the two groups of features are compared using cosine distance, i.e. the following operation is applied to the two n-dimensional vectors:
t = (x1 · x2) / (||x1|| ||x2||)
where t is the computed similarity between the two feature vectors: the closer t is to 1, the more similar the two features; conversely, the two vectors are more independent. The initial threshold is set to 0.8; if the similarity reaches 0.8 or more, the faces are considered the same person and the feature is saved. Recognition then proceeds to the next frame image.
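The comparison in step B reduces to cosine similarity with a 0.8 threshold; a direct sketch (pure Python, function names are illustrative):

```python
import math

def cosine_similarity(a, b):
    """t = (a . b) / (||a|| * ||b||) for two n-dimensional vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def same_person(feat_a, feat_b, threshold=0.8):
    """Decision rule from the text: t >= 0.8 -> treat as the same person."""
    return cosine_similarity(feat_a, feat_b) >= threshold
```

Note that cosine similarity depends only on the angle between the vectors, not their magnitudes, which is what makes it tolerant of the exposure-type variations the feature design tries to suppress.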
C. Design of the back-propagation adjustment
Back-propagation corrects the recognized results: if the high-dimensional LBP algorithm and cosine distance yield an erroneous result, back-propagation is used to change the parameters. For each sample, the suggested modification to the weights and biases is recorded, and finally the average is taken. Training is then repeated to reach a high-accuracy result.
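Step C's rule — record each sample's suggested change to a weight, then apply the average — is essentially a batch-averaged gradient step; a minimal sketch (the learning rate is an illustrative assumption):

```python
def averaged_update(weight, per_sample_grads, lr=0.1):
    """Apply the mean of per-sample gradient suggestions to one weight.

    per_sample_grads -- one suggested gradient per training sample
    """
    mean_grad = sum(per_sample_grads) / len(per_sample_grads)
    return weight - lr * mean_grad  # step against the averaged gradient
```

Averaging before updating smooths out the noise of individual samples, which is the point of recording every sample's suggestion before touching the weights.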
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the invention; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the invention shall be included in the protection scope of the present invention.
Claims (1)
1. A multi-frame joint face recognition algorithm based on a UAV, which acquires the video stream captured by the UAV, splits it into frames, and performs face detection, feature extraction and face recognition on each frame. A densely connected neural network extracts the full set of facial features, emphasizing the differences between distinct faces rather than variations in exposure, expression and the like; the Supervised Descent Method locates the key facial landmarks; since lighting in real images may be non-uniform, multi-scale high-dimensional LBP features are extracted around these key points; principal component analysis reduces the feature dimensionality; finally, the similarity between faces is computed with cosine distance, completing multi-frame face recognition. The algorithm mainly comprises the following steps:
A. Design based on the high-dimensional LBP algorithm
Before deep learning appeared in the image domain, the LBP (local binary pattern) algorithm held a dominant position. Because non-uniform lighting causes feature calculation errors in the basic LBP algorithm, the high-dimensional LBP algorithm is used: its results are more robust than basic LBP, and a very high accuracy rate is obtainable even under varying illumination. The concrete implementation is as follows:
A 3×3 window is chosen centered on a given pixel (whose value here is 50). The eight surrounding pixels are compared with the center pixel: a neighbor greater than the center is recorded as 1, otherwise as 0. The resulting 8-bit binary number, here 01001011, is converted to decimal, 75, which becomes the LBP value of the center pixel. The Supervised Descent Method is used to locate the key facial landmarks, and the dimension of the LBP feature is obtained with the following formula:
d = n * s * k * size² (1)
where d is the dimension of the resulting LBP feature; n is the number of chosen key points; size is the side length of the selection window, so size² is the number of pixels the window contains; s is the number of scales at which the original image is resized; and k is the number of bits of the LBP value.
After obtaining the LBP feature dimension, it must still be reduced, because the extracted features contain noise and other redundant information that degrade recognition accuracy. The present invention uses principal component analysis (PCA) for feature dimensionality reduction, mapping the original group of n-dimensional vectors down to fewer than n linearly uncorrelated dimensions without affecting the correctness of recognition.
B. Design based on multi-frame joint recognition
The present invention uses a densely connected neural network as the deep learning backbone. The first layer takes the image as input, estimates a group of regression vectors and selects some candidate regions; these regions are merged with non-maximum suppression to obtain the values of highest probability. After three passes through the network, the fourth network layer is entered to locate the key points, finally yielding an n-dimensional vector. The first layer then reads the picture of the next frame, and after four rounds of screening another set of key points is obtained. The two sets of key points are compared: a threshold is set, and the two groups of features are compared using cosine distance, i.e. the following operation is applied to the two n-dimensional vectors:
t = (x1 · x2) / (||x1|| ||x2||)
where t is the computed similarity between the two feature vectors: the closer t is to 1, the more similar the two features; conversely, the two vectors are more independent. The initial threshold is set to 0.8; if the similarity reaches 0.8 or more, the faces are considered the same person and the feature is saved. Recognition then proceeds to the next frame image.
C. Design of the back-propagation adjustment
Back-propagation corrects the recognized results: if the high-dimensional LBP algorithm and cosine distance yield an erroneous result, back-propagation is used to change the parameters. For each sample, the suggested modification to the weights and biases is recorded, and finally the average is taken. Training is then repeated to reach a high-accuracy result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910278641.6A CN110232307A (en) | 2019-04-04 | 2019-04-04 | A multi-frame joint face recognition algorithm based on a UAV |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910278641.6A CN110232307A (en) | 2019-04-04 | 2019-04-04 | A multi-frame joint face recognition algorithm based on a UAV |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110232307A true CN110232307A (en) | 2019-09-13 |
Family
ID=67860746
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910278641.6A Pending CN110232307A (en) | 2019-04-04 | 2019-04-04 | A multi-frame joint face recognition algorithm based on a UAV |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110232307A (en) |
- 2019-04-04: Application CN201910278641.6A filed in China; publication CN110232307A; status Pending
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021082006A1 (en) * | 2019-11-01 | 2021-05-06 | 华为技术有限公司 | Monitoring device and control method |
CN110969646A (en) * | 2019-12-04 | 2020-04-07 | 电子科技大学 | Face tracking method adaptive to high frame rate |
CN111824406A (en) * | 2020-07-17 | 2020-10-27 | 南昌航空大学 | Public safety independently patrols four rotor unmanned aerial vehicle based on machine vision |
CN112800867A (en) * | 2021-01-13 | 2021-05-14 | 重庆英卡电子有限公司 | Pine wood nematode withered tree detection method based on two-stage high-altitude pan-tilt video |
CN112800867B (en) * | 2021-01-13 | 2023-05-12 | 重庆英卡电子有限公司 | Pine wood nematode disease dead tree detection method based on two-stage high-altitude tripod head video |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110232307A (en) | A multi-frame joint face recognition algorithm based on a UAV | |
US10719940B2 (en) | Target tracking method and device oriented to airborne-based monitoring scenarios | |
WO2020098158A1 (en) | Pedestrian re-recognition method and apparatus, and computer readable storage medium | |
US8780195B1 (en) | Fusion of multi-sensor information with operator-learned behavior for automatic and efficient recognition of objects and control of remote vehicles | |
CN110210276A (en) | A kind of motion track acquisition methods and its equipment, storage medium, terminal | |
RU2476825C2 (en) | Method of controlling moving object and apparatus for realising said method | |
CN108960404B (en) | Image-based crowd counting method and device | |
CN111444744A (en) | Living body detection method, living body detection device, and storage medium | |
CN108648216B (en) | Visual odometer implementation method and system based on optical flow and deep learning | |
CN109598225A (en) | Sharp attention network, neural network and pedestrian's recognition methods again | |
CN110084837B (en) | Target detection and tracking method based on unmanned aerial vehicle video | |
JP2020038666A (en) | Method for generating data set for learning for detection of obstacle in autonomous driving circumstances and computing device, learning method, and learning device using the same | |
CN115880784A (en) | Scenic spot multi-person action behavior monitoring method based on artificial intelligence | |
CN110110580B (en) | Wi-Fi signal-oriented sign language isolated word recognition network construction and classification method | |
CN111597978B (en) | Method for automatically generating pedestrian re-identification picture based on StarGAN network model | |
CN112633234A (en) | Method, device, equipment and medium for training and applying face glasses-removing model | |
CN112633417A (en) | Pedestrian depth feature fusion method for pedestrian re-identification and with neural network modularization | |
Bacca et al. | Compressive classification from single pixel measurements via deep learning | |
CN110825916A (en) | Person searching method based on body shape recognition technology | |
CN114266952A (en) | Real-time semantic segmentation method based on deep supervision | |
CN112418229A (en) | Unmanned ship marine scene image real-time segmentation method based on deep learning | |
Al-Obodi et al. | A Saudi Sign Language recognition system based on convolutional neural networks | |
US20230080120A1 (en) | Monocular depth estimation device and depth estimation method | |
CN115713546A (en) | Lightweight target tracking algorithm for mobile terminal equipment | |
CN116630369A (en) | Unmanned aerial vehicle target tracking method based on space-time memory network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 2019-09-13