CN108108010A - Novel static gesture detection and recognition system - Google Patents
Novel static gesture detection and recognition system
- Publication number
- CN108108010A (application CN201611044775.4A)
- Authority
- CN
- China
- Prior art keywords
- gesture
- hand
- detection
- module
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Social Psychology (AREA)
- Psychiatry (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The present invention discloses a novel static gesture detection and recognition system. The system comprises a gesture input module, a gesture detection module and a gesture recognition module. In the gesture detection module, the human hand is located by wave detection, the gesture is tracked with a simplified and improved CAMSHIFT algorithm, and the hand is segmented with a skin-color model obtained by on-site sampling. In the gesture recognition module, simple features are extracted and classified with pattern-recognition methods. By adopting a human-computer interaction scheme based on gesture recognition, the invention overcomes the limitations of existing interaction modes that depend on large amounts of sensing hardware, making human-computer interaction more natural and convenient.
Description
Technical field
The present invention relates to gesture detection and recognition systems, and more particularly to a novel static gesture detection and recognition system.
Background art
With the rapid development of human-computer interaction technology, gesture, as a natural and intuitive way of communication, has become an important component of human-computer interaction. As science and technology advance and computer vision becomes increasingly widespread, people demand ever more natural human-computer communication; the traditional mouse-and-keyboard interaction mode shows its limitations, and new interaction modes have become a research hot spot. Gesture is an efficient means of human-computer interaction and device control, and vision-based gesture recognition remains a challenging research topic in fields such as human-computer interaction and pattern recognition.
Current gesture recognition systems mostly adopt one of the following two approaches:
(1) Data gloves or other wearables: this reduces the complexity of the detection and recognition algorithms, but a worn device clearly fails to meet the needs of natural human-computer interaction;
(2) 3D depth cameras: 3D scanning devices are bulky, hardware costs are high, and the required computing power is considerable, so they are difficult to integrate into mainstream intelligent terminals.
Furthermore, visual localization techniques divide into color-based localization and motion-based localization. Most color-based localization techniques rely on histogram matching or on look-up tables built from skin training data; their main drawback is that skin color varies greatly under different illumination conditions and imaging devices, and such localization is also computationally expensive.
The patent application with publication number CN105138136A discloses a "gesture recognition device, gesture recognition method and gesture recognition system". The recognition system includes a gesture recognition device comprising at least one sensor arranged to correspond to finger positions, an input-mode judging unit and an input-content generating unit. That application achieves virtual human-computer interaction and simulates virtual content quickly and accurately, but its device needs sensors for detection and recognition, so it requires considerable hardware at a high cost.
The patent application with publication number CN105045398A discloses a "virtual reality interaction device based on gesture recognition". The device includes a 3D camera interface, a helmet-type virtual reality display, a signal processing component and a mobile device interface. That application can capture depth-bearing image sequences of the user's hand and realize virtual reality interaction through processing and recognition, but the user must wear a helmet, which hardly meets the needs of natural human-computer interaction.
Summary of the invention
An object of the present invention is to overcome the deficiencies of the prior art and provide a novel static gesture detection and recognition system that detects, tracks and recognizes static digital gestures, processes the user's gestures through a human-computer interaction program, and finally achieves intelligent machine-human interaction.
Another object of the present invention is to provide a gesture detection and recognition method using the above gesture-recognition-based human-computer interaction system, wherein, in the gesture detection module: the human hand is located by wave detection; the hand is segmented with a skin-color model obtained by on-site sampling; and the gesture is tracked with a simplified and improved CAMSHIFT algorithm (from the paper Computer Vision Face Tracking For Use In a Perceptual User Interface, Gary R. Bradski, Microcomputer Research Lab, Santa Clara, CA, Intel Corporation, 1998). In the gesture recognition module: simple features are extracted and classified with pattern-recognition methods. The invention thereby overcomes the limitations of existing interaction modes that require large amounts of sensing hardware, making human-computer interaction more natural and convenient.
The technical solution by which the present invention solves the above technical problems is as follows:
A novel static gesture detection and recognition system, characterized in that:
The system comprises a gesture input module, a gesture detection module and a gesture recognition module, wherein:
The gesture input module captures the user's gesture in images or video and supplies the captured gesture images as input to the gesture detection module and the gesture recognition module;
The gesture detection module detects the human hand region in a complex scene, tracks the gesture to obtain its specific region, and finally segments the gesture to obtain a binary gesture image, which is the object processed by the gesture recognition module;
The gesture recognition module recognizes the gesture the user makes in the input image at a particular moment and classifies and judges it.
The implementation of the gesture detection module in the novel static gesture detection and recognition system of the present invention comprises the following steps:
(1) Wave detection: detect and locate the position of the hand to find candidate hand regions;
(2) Hand skin-color modelling: perform skin-color modelling on the hand region found by wave detection, distinguish the hand from the background, and establish a skin-similarity model of the "current hand";
(3) Gesture tracking: track the gesture with the CAMSHIFT algorithm and frame the recognized object with a search window;
(4) Gesture region segmentation: after wave detection and tracking of the hand region, further segment the obtained hand region with the hand skin-color model to obtain the binary gesture image.
The gesture recognition module in the novel static gesture detection and recognition system of the present invention extracts gesture features and classifies them with a k-NN classifier to recognize digital gestures.
Compared with the prior art, the present invention has the following advantageous effects:
1. The input gesture images are detected and recognized step by step, achieving intelligent machine-human interaction and overcoming the prior art's drawbacks of requiring much hardware at high cost;
2. A CAMSHIFT tracker tracks the gesture and accurately locates the position and size of the hand at any moment, providing a good basis for the gesture recognition stage; moreover, the CAMSHIFT algorithm is improved by adding a heuristic rule that solves the "lost tracking object" problem common to gesture tracking systems.
Description of the drawings
Fig. 1 is a schematic flow diagram of the system framework of a specific embodiment of the novel static gesture detection and recognition system of the present invention.
Fig. 2 is a schematic diagram of the brightness variation of pixels while the hand waves, in a specific embodiment of the present invention.
Fig. 3 is a schematic diagram visualizing the definition of the image-frame sub-block value I(i, j, t).
Fig. 4 is a schematic diagram of the HSV colour model.
Fig. 5 is a schematic diagram of the binary image of the gesture region.
Fig. 6 is a schematic diagram of a gesture image after skeletonization of the human hand.
Specific embodiment
The present invention is described in further detail below with reference to the embodiments and the attached drawings, but the embodiments of the present invention are not limited thereto.
Referring to Fig. 1, the novel static gesture detection and recognition system of the present invention consists of three modules: the gesture input module, the gesture detection module and the gesture recognition module, wherein:
(1) Gesture input module: captures the user's gesture in the camera image or video and supplies the captured gesture images as input to the gesture detection module and the gesture recognition module;
(2) Gesture detection module: detects the human hand region in a complex scene, tracks the gesture to obtain its specific region, and finally segments the gesture to obtain a binary gesture image as the object of the gesture recognition module. The gesture detection module consists of the following parts: wave detection, hand skin-color modelling, gesture tracking, and gesture region segmentation, wherein:
(2.1) Wave detection: detect and locate the position of the hand to find candidate hand regions.
Considering the limited resources available in robot applications, in the present system the user first signals the robot by waving a hand (the method is adapted from the paper Detection of Waving Hands from Images Using Time Series of Intensity Values [C]. In: The 3rd China-Japan Symposium on Mechatronics (CJSM), 2002-09). Referring to Fig. 2, while the hand waves, the average brightness of the pixels in the region the hand passes through fluctuates continuously and sharply. No other region of the image exhibits this variation. Therefore, in a sequence of consecutive input frames, the system only needs to find the regions that change most strongly in order to obtain the approximate position of the waving hand.
The system first partitions each frame into sub-blocks of size m x n. For each sub-block (i, j) of frame t, it computes a degree of variation S(t) over 10 consecutive frames by accumulating the changes of the block's average brightness. Here I(i, j, t) denotes the average brightness of sub-block (i, j) in frame t; Fig. 3 visualizes this definition. To reflect the influence of time, when accumulating each block's variation over the 10 frames, each frame is assigned a weight w(n), n = 0, ..., 9, that increases with time. After this computation, the block with the maximum S(t) is the region that changed most violently within the last 10 frames.
The above procedure is based entirely on motion. A relatively simple skin-color decision rule is then used to confirm the candidate hand region: if a sufficient number of pixels around this block resemble the empirical skin color, the surroundings of the block are judged to be the hand region.
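A minimal sketch of the motion-based localization above. The accumulation formula is not reproduced in the text of this filing, so the code assumes S is a weighted sum, over the last 10 inter-frame differences, of each sub-block's average absolute brightness change, with weights increasing toward the most recent frame; the block size and frame count are hypothetical defaults.

```python
import numpy as np

def wave_detect(frames, block=(16, 16)):
    """Locate the waving hand by temporal brightness fluctuation.

    frames: list of 2-D grayscale arrays (the last 11 frames, giving
    10 inter-frame differences).  Assumed accumulation:
    S = sum_n w(n) * |I(t-n) - I(t-n-1)| averaged per sub-block,
    with weights w(n) growing toward the most recent frame.
    """
    m, n = block
    h, w = frames[0].shape
    bh, bw = h // m, w // n                      # blocks per axis
    acc = np.zeros((bh, bw))
    weights = np.arange(1, len(frames))          # w(n) = 1..10, recent counts most
    for k, wk in enumerate(weights, start=1):
        diff = np.abs(frames[k].astype(float) - frames[k - 1].astype(float))
        # average brightness change of each m x n sub-block
        blocks = diff[:bh * m, :bw * n].reshape(bh, m, bw, n).mean(axis=(1, 3))
        acc += wk * blocks
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return i, j                                  # block index of strongest motion
```

The returned block index gives the rough position of the waving hand; the skin-color check around that block then confirms the candidate region.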
(2.2) Hand skin-color modelling: perform skin-color modelling on the hand region found by wave detection, distinguish the hand from the background, and establish a skin-similarity model of the "current hand".
Referring to the HSV colour model of Fig. 4, a color can be characterized by hue (H), saturation (S) and brightness (V); hue and saturation together are called chromaticity. In general, chromaticity and brightness are mutually independent: under different illumination conditions the brightness of an object's color may differ greatly, but its chromaticity is largely invariant. Past research and statistics show that for all humans, and especially for yellow and white skin, the differences in hue are minimal; the average correlation coefficient of the skin hue values of any two people is as high as 0.92. Hue is therefore the most important feature for distinguishing skin color.
Given these characteristics of HSV chromaticity, and in order to reduce the influence of illumination and similar factors on skin detection and to better track and segment the gesture, the system converts the gesture image from the RGB color space to the HSV color space, completely discards the illumination-sensitive V component, and concentrates on the H and S components.
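The colour-space conversion this step relies on can be sketched in plain NumPy (in practice one would call `cv2.cvtColor(img, cv2.COLOR_BGR2HSV)`); only the H and S planes are returned, mirroring the decision to discard V:

```python
import numpy as np

def rgb_to_hs(img):
    """Convert an RGB image (uint8, HxWx3) to hue and saturation planes.

    Hue is returned in degrees [0, 360), saturation in [0, 1].  The
    illumination-sensitive V channel is computed only as an
    intermediate and not returned.
    """
    rgb = img.astype(float) / 255.0
    mx = rgb.max(axis=2)                      # V = max channel
    mn = rgb.min(axis=2)
    c = mx - mn                               # chroma
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    h = np.zeros_like(mx)
    mask = c > 0
    # piecewise hue formula, depending on which channel is the maximum
    rmax = mask & (mx == r)
    gmax = mask & (mx == g) & ~rmax
    bmax = mask & ~rmax & ~gmax
    h[rmax] = ((g - b)[rmax] / c[rmax]) % 6
    h[gmax] = (b - r)[gmax] / c[gmax] + 2
    h[bmax] = (r - g)[bmax] / c[bmax] + 4
    h *= 60.0
    s = np.where(mx > 0, c / np.maximum(mx, 1e-12), 0.0)
    return h, s
```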
Furthermore, if an empirical hue skin-color distribution model were used, environmental factors such as the imaging device and racial differences would exert varied influences that can hardly be predicted or avoided. Therefore, in the present system, after gesture detection yields the approximate position of the hand, no empirical skin-color model is used; instead, the system takes skin-color samples from the currently detected gesture region to establish a skin-similarity model of the "current hand". Under normal circumstances, the central part of the acquired palm region consists mostly of the current hand's skin color.
Different color statistical models can be used to distinguish skin pixels from non-skin pixels: a Gaussian model, a Gaussian mixture model or a histogram model can all serve as the skin colour model. The present system uses a skin-color model based on histogram statistics: it computes the chromaticity (H, S) histogram of the central area of the detected gesture. The histogram has 64 bins (16 x 4), and the histogram values are normalized. Because hue almost loses its ability to characterize chromaticity when saturation or brightness is especially large or small, pixels whose saturation or brightness is below 15% or above 85% are ignored, and their skin-color probability is set to 0. From the normalized hand skin-color histogram, a skin-probability map can be established for an image: each pixel of the skin-probability map gives the likelihood that the corresponding pixel of the original image belongs to the hand's skin color. This likelihood is computed from the chromaticity histogram above: assuming the chromaticity of pixel (x, y) falls into the i-th bin of the histogram, the value SPD(x, y) of pixel (x, y) on the skin-probability map is the proportion of training pixels that fell into the i-th bin.
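The 16 x 4 (H, S) histogram model and its back-projection into a skin-probability map can be sketched as follows. For brevity the 15%/85% cutoff is applied here to saturation only; extending it to brightness as described would additionally require the V plane:

```python
import numpy as np

H_BINS, S_BINS = 16, 4   # 64 bins total, as in the patent

def skin_histogram(h, s):
    """Normalized hue/saturation histogram of a sampled hand patch.

    h is assumed in [0, 360), s in [0, 1].  Pixels whose saturation
    falls below 15% or above 85% are discarded, since hue is
    unreliable there.
    """
    keep = (s >= 0.15) & (s <= 0.85)
    hi = np.minimum((h[keep] / 360.0 * H_BINS).astype(int), H_BINS - 1)
    si = np.minimum((s[keep] * S_BINS).astype(int), S_BINS - 1)
    hist = np.zeros((H_BINS, S_BINS))
    np.add.at(hist, (hi, si), 1)
    return hist / max(hist.sum(), 1)             # bins sum to 1

def skin_probability_map(h, s, hist):
    """Back-project the histogram: each pixel receives the fraction of
    training pixels that fell into its (H, S) bin -- the SPD value."""
    hi = np.minimum((h / 360.0 * H_BINS).astype(int), H_BINS - 1)
    si = np.minimum((s * S_BINS).astype(int), S_BINS - 1)
    spd = hist[hi, si]
    spd[(s < 0.15) | (s > 0.85)] = 0.0           # hue meaningless here
    return spd
```

In OpenCV, `cv2.calcHist` and `cv2.calcBackProject` perform the same two steps.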
(2.3) Gesture tracking: track the gesture with the CAMSHIFT algorithm and frame the recognized object with a search window. The details are as follows:
To track the gesture region located by wave detection, the system employs CAMSHIFT tracking. CAMSHIFT is a tracker based on a stochastic color-probability model; its greatest benefit is that it relies only on the color distribution and is independent of any concrete model of the tracked object. In gesture recognition applications, the skin color of the tracked hand has a very distinctive distribution over HSV chromaticity, so a CAMSHIFT tracker can be used to track the hand. When tracking the hand region with CAMSHIFT, the histogram-based hand skin-color model described above is used.
The CAMSHIFT algorithm proceeds as follows. An initial search window is given, namely the region located by the wave detector; then:
S1: Compute the skin-probability map of the search window.
S2: Compute the zeroth-order and first-order moments of the skin probability.
S3: Compute the position of the high-probability skin-color centroid within the search window.
S4: Compute the size of the high-probability skin-color area within the search window.
S5: Adjust the center and size of the search window according to the size of the high-probability skin-color area.
S6: Repeat steps S1-S5 until the change of the search window's center and size in an adjustment falls below a threshold. At that point, the position of the high-probability skin-color centroid is the position of the tracked object (the hand).
In subsequent frames, after the object (the hand) moves, the high-probability skin-color area is recomputed by the same procedure and the window is adjusted to the region where the hand appears, thereby tracking the hand.
Step S5 must determine, in every iteration, the size of the search window from the position of the high-probability skin-color centroid; this is the key problem of the CAMSHIFT algorithm. For face tracking, empirical formulas set the new window's width and height from the zeroth-order moment. Considering the difference between hand skin color and face skin color, the system adjusts this principle, and the width and height actually used for hand tracking were chosen by comparing experimental results. Once the CAMSHIFT tracker follows the moving hand, it can reasonably and accurately locate the position and size of the hand at any moment, providing a good basis for the subsequent recognition stage. The system then stops the wave-detection process unless the tracker loses the position of the hand.
No tracker can entirely avoid the "lost object" problem. When the moving object leaves the screen or is occluded by another object, the search window of the CAMSHIFT tracker shrinks smaller and smaller, i.e. M00 becomes very small, so even if the tracked object reappears on the screen, tracking cannot continue. A heuristic rule is therefore added to solve this problem: when the search window becomes sufficiently small (e.g. 5 x 5 pixels), the search window is reset to the whole image, so that when the object reappears, the gradually shrinking window can quickly frame it again. This treatment, however, inevitably brings a new problem: the tracker may occasionally mistake another similar object in the image for the original one.
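The centroid-and-resize loop of steps S1-S6 can be sketched as a simplified mean-shift iteration over the skin-probability map (in production, `cv2.CamShift` implements the full algorithm). The resize rule below, with the window side proportional to the square root of the zeroth moment, follows Bradski's empirical face-tracking rule; the hand-specific constants mentioned above are not reproduced here, so the factor 2.0 is an assumption. The 5 x 5 reset heuristic would be a simple size check wrapped around `track`:

```python
import numpy as np

def camshift_step(prob, win):
    """One search-window update of a simplified CAMSHIFT iteration.

    prob : 2-D skin-probability map (values in [0, 1]).
    win  : (row, col, height, width) of the current search window.
    Returns the recentred, resized window.
    """
    r, c, hgt, wid = win
    roi = prob[r:r + hgt, c:c + wid]
    ys, xs = np.mgrid[0:roi.shape[0], 0:roi.shape[1]]
    m00 = roi.sum()                              # zeroth-order moment
    if m00 <= 0:
        return win                               # nothing to track in window
    cy = r + (ys * roi).sum() / m00              # first-order moments
    cx = c + (xs * roi).sum() / m00              # give the centroid
    side = max(int(2.0 * np.sqrt(m00)), 4)       # empirical resize rule (assumed)
    r2 = max(int(cy - side / 2), 0)
    c2 = max(int(cx - side / 2), 0)
    return (r2, c2, side, side)

def track(prob, win, iters=10, tol=1.0):
    """Iterate camshift_step until the window centre moves less than tol."""
    for _ in range(iters):
        new = camshift_step(prob, win)
        if abs(new[0] - win[0]) < tol and abs(new[1] - win[1]) < tol:
            return new
        win = new
    return win
```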
(2.4) Gesture region segmentation: after wave detection and tracking of the hand region, the obtained hand region is further segmented with the hand skin-color model to obtain the binary gesture image. The details are as follows:
Within the hand region, pixels whose skin-color probability exceeds a certain threshold are labeled 1 and the other pixels are labeled 0. This yields a binary map of the hand region, in which the gray value of the hand region is set to 255 and that of the background regions to 0.
In the binary image, the connected region of the hand often contains holes caused by image noise, so the binary map must be refined step by step. In this system, a morphological closing operator with a small structuring element is applied to the segmented image. Finally, by region merging and labeling, the connected regions of the hand binary map are computed, and the region of maximum area is chosen as the hand region, yielding a smoothed binary map of the hand.
Referring to Fig. 5, which depicts the hand-region segmentation process: (a) is the original gesture-region image obtained by the wave detector or the tracker; (b) is the initial binary image obtained by binarizing the gesture region with the hand skin-color model; (c) is the simply connected binary image obtained after morphological processing removes the image noise.
(3) Gesture recognition module. Static gesture recognition means identifying which gesture the user makes in the input image at a particular moment; this is a classical pattern-recognition problem. Many classification algorithms for gesture recognition exist, such as syntactic pattern recognition, template matching with look-up tables, Bayes classifiers and combined neural networks. The crucial questions are which gesture model to use and what kind of features to extract. Common gesture-image features include gray level, binary images, regions, boundaries and contour points.
Considering the robot's limited processing capacity, only simple digital gestures rather than complex sign-language gestures are recognized here, so the extracted features should be simple, fast and effective. As shown in Fig. 6, after skeletonizing the hand (the original binary map is thinned with a pixel-deletion look-up table), the features of the gesture image become quite apparent: the number of branches after skeletonization can be computed simply and used as an overall feature N.
A row-projection count is then taken over the top 30 rows of the normalized 30 x 40 skeleton image, giving 30 parameters m1, m2, ..., m30. The two kinds of features form a 40-dimensional feature vector V = {N, N, ..., N, m1, m2, ..., m30}, in which N is deliberately repeated over 10 dimensions to emphasize the importance of the branch count. This feature vector is what the system uses during gesture recognition.
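Feature extraction from an already-skeletonized binary image can be sketched as below. The skeletonization itself (pixel-deletion look-up-table thinning) is omitted, and counting skeleton endpoints is only one plausible reading of the "number of branches"; the exact rule is not given in the text:

```python
import numpy as np

def gesture_features(skel):
    """40-D feature vector from a normalized 30x40 skeleton image.

    skel: binary array, 40 rows x 30 cols.  N is taken here as the
    number of skeleton endpoints (pixels with exactly one 8-neighbour)
    -- an assumed interpretation of "number of branches".  N is
    repeated 10 times, then the row projections of the top 30 rows
    are appended.
    """
    s = skel.astype(int)
    p = np.pad(s, 1)
    # count the 8-neighbours of every pixel
    nb = sum(p[1 + dy:1 + dy + s.shape[0], 1 + dx:1 + dx + s.shape[1]]
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)) - s
    n_branches = int(((s == 1) & (nb == 1)).sum())
    rows = s[:30].sum(axis=1)                    # m1..m30: row projections
    return np.concatenate([np.full(10, n_branches), rows])
```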
Other methods, such as PCA and Zernike moments, were also tried for feature extraction and compared with the features extracted after skeletonization. Considering speed and efficiency in a real-time system, the skeleton features were finally retained.
After labeling a certain amount of training data, both k-NN and SVM classifiers were tried, and their performance proved extremely close. Given the resource limitations, the quick and simple k-NN classifier was used in the final system.
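The final classification step with a k-NN classifier over the 40-dimensional gesture vectors can be sketched as follows; a library implementation such as `sklearn.neighbors.KNeighborsClassifier` would normally be used, this sketch only shows the principle:

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Classify feature vector x by majority vote of its k nearest
    training samples under Euclidean distance."""
    d = np.linalg.norm(train_X - x, axis=1)      # distances to all samples
    nearest = np.argsort(d)[:k]                  # indices of the k closest
    votes = train_y[nearest]
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]               # majority vote
```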
The above are preferred embodiments of the present invention, but the embodiments of the present invention are not limited thereto. Any other change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and shall be included within the protection scope of the present invention.
Claims (3)
1. A novel static gesture detection and recognition system, characterized in that the system comprises a gesture input module, a gesture detection module and a gesture recognition module, wherein:
the gesture input module captures the user's gesture in images or video and supplies the captured gesture images as input to the gesture detection module and the gesture recognition module;
the gesture detection module detects the human hand region in a complex scene, tracks the gesture to obtain its specific region, and finally segments the gesture to obtain a binary gesture image as the object of the gesture recognition module;
the gesture recognition module recognizes the gesture the user makes in the input image at a particular moment and classifies and judges it.
2. The novel static gesture detection and recognition system according to claim 1, characterized in that the implementation of the gesture detection module comprises the following steps:
(1) wave detection: detect and locate the position of the hand to find candidate hand regions;
(2) hand skin-color modelling: perform skin-color modelling on the hand region found by wave detection, distinguish the hand from the background, and establish a skin-similarity model of the "current hand";
(3) gesture tracking: track the gesture with the CAMSHIFT algorithm and frame the recognized object with a search window;
(4) gesture region segmentation: after wave detection and tracking of the hand region, further segment the obtained hand region with the hand skin-color model to obtain the binary gesture image.
3. The novel static gesture detection and recognition system according to claim 1, characterized in that the implementation of the gesture recognition module extracts gesture features and classifies them with a k-NN classifier to recognize digital gestures.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611044775.4A CN108108010A (en) | 2016-11-24 | 2016-11-24 | Novel static gesture detection and recognition system
Publications (1)
Publication Number | Publication Date |
---|---|
CN108108010A true CN108108010A (en) | 2018-06-01 |
Family
ID=62203814
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611044775.4A Pending CN108108010A (en) | Novel static gesture detection and recognition system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108108010A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111061369A (en) * | 2019-12-13 | 2020-04-24 | 腾讯科技(深圳)有限公司 | Interaction method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20180601 |