CN106503619A - Gesture recognition method based on BP neural network - Google Patents

Gesture recognition method based on BP neural network

Info

Publication number
CN106503619A
Authority
CN
China
Prior art keywords
gesture
image
profile
video
noise
Prior art date
Legal status
Granted
Application number
CN201610847276.2A
Other languages
Chinese (zh)
Other versions
CN106503619B (en)
Inventor
汪琦
秦凯伦
付潇聪
李胜
许鸣吉
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN201610847276.2A
Publication of CN106503619A
Application granted
Publication of CN106503619B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/107: Static hand or arm

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a gesture recognition method based on a BP (back-propagation) neural network, comprising the following steps. Video acquisition: a video containing a human hand is captured by a video acquisition device. Gesture segmentation: the RGB image obtained by the video acquisition device is converted to the YCrCb color space, adaptive thresholding is applied to the Cr channel to convert the image into a binary image, the gesture contour is extracted, and noise is removed. Gesture recognition: the Hu invariant moments of the denoised binary image are computed as gesture features, a three-layer BP neural network is constructed, and the Hu invariant moment values are used as the network input for training; the trained gestures are then recognized in the video. The present invention can effectively segment the palm region and accurately recognize specific gestures.

Description

Gesture recognition method based on BP neural network
Technical field
The invention belongs to the technical field of computer vision, and in particular relates to a gesture recognition method based on a BP neural network.
Background technology
With the widespread application of computers, human-computer interaction has become an important part of people's daily life. Human-computer interaction refers to the process of information exchange between a person and a computer in which a given task is accomplished using a certain dialogue language and a certain interaction mode. The ultimate purpose of human-computer interaction is natural communication between people and machines, free of any fixed form of interactive interface, with ever simpler and more flexible ways of entering information; with the help of artificial intelligence and big data, a machine can capture people's needs intuitively and comprehensively and assist them in handling their affairs. At present, human-computer interaction still depends mainly on traditional channels such as the keyboard, mouse, and touch screen, although many emerging interaction modes, such as motion-sensing interaction and voice interaction, are being tried. For now, however, existing interaction modes still have many deficiencies and remain far from the ultimate goal; the greatest difficulties are the limitation of the usable range and the difficulty of information recognition. Gesture recognition is an emerging interaction mode that uses the human hand as the input medium; compared with other input methods, it has the features of naturalness, conciseness, richness, and directness.
Gesture acquisition is the first step of gesture recognition. At present there are two main approaches to acquiring gestures: based on data gloves and based on vision. The advantage of the data-glove approach is that the input data volume is small and the real-time performance is good, but data gloves are relatively expensive, and wearing gloves degrades the user experience.
Gesture segmentation is the most important and also the most basic step in a gesture recognition system; the most common method realizes segmentation using color features. However, when the weather is warm, users often appear in short sleeves, and because the color features of the arm are close to those of the palm, the segmentation result can be adversely affected.
Summary of the invention
It is an object of the present invention to provide a gesture recognition method based on a BP neural network.
The technical scheme for realizing the object of the invention is as follows: a gesture recognition method based on a BP neural network, comprising the following steps:
Video acquisition: a video containing a human hand is obtained by a video acquisition device;
Gesture segmentation: the RGB image obtained by the video acquisition device is converted to the YCrCb color space, adaptive thresholding is applied to the Cr channel, the image is converted into a binary image, the gesture contour is obtained, and noise is removed;
Gesture recognition: the Hu invariant moments of the denoised binary image are computed as gesture features, a three-layer BP neural network is constructed, the Hu invariant moment values are used as the input of the neural network for training, and the trained gestures are recognized in the video.
Compared with the prior art, the remarkable advantages of the present invention are:
(1) the present invention proposes a method that combines the color space with geometric properties to realize gesture segmentation, eliminating the influence of the arm part on gesture recognition and improving the correctness of recognition;
(2) the present invention can be directly extended to all kinds of control applications: for example, in vehicle-mounted multimedia control it can simplify the driver's operation of the vehicle and reduce the accident rate, and in the control of unmanned aerial vehicles it can increase the enjoyment of operation.
Description of the drawings
Fig. 1 is the flow chart of the gesture recognition method based on a BP neural network of the present invention.
Fig. 2 is the arm and palm segmentation model diagram.
Fig. 3 (a) and Fig. 3 (b) are, respectively, an image containing a face and the binary image after the face is removed.
Fig. 4 (a) and Fig. 4 (b) are, respectively, an image containing an arm and the binary image after the arm is removed.
Fig. 5 (a) and Fig. 5 (b) are, respectively, an image containing background noise and the binary image after the noise is removed.
Fig. 6 (a)~Fig. 6 (h) are schematic diagrams of the eight gestures tested in the embodiment of the present invention.
Specific embodiment
With reference to Fig. 1, a gesture recognition method based on a BP neural network according to the present invention comprises the following steps:
Video acquisition: a video containing a human hand is obtained by a video acquisition device;
Gesture segmentation: the RGB image obtained by the video acquisition device is converted to the YCrCb color space, adaptive thresholding is applied to the Cr channel, the image is converted into a binary image, the gesture contour is obtained, and noise is removed;
Gesture recognition: the Hu invariant moments of the denoised binary image are computed as gesture features, a three-layer BP neural network is constructed, the Hu invariant moment values are used as the input of the neural network for training, and the trained gestures are recognized in the video.
Further, the detailed process of gesture segmentation is as follows (an illustrative code sketch follows step 23):
Step 21: the RGB image obtained by the video acquisition device is converted to the YCrCb color space;
Step 22: adaptive thresholding is applied to the Cr channel, and the image is converted into a binary image to obtain the gesture contour;
Step 23: noise is removed.
First, the face part is removed:
the face is detected using Haar-like features, and the face part is removed;
Second, the arm part is removed:
(1) the minimum enclosing rectangle containing the skin-color region is obtained;
(2) the inscribed circle is found: each point inside the contour is scanned, and its shortest distance to the contour boundary is obtained; after all points have been scanned, the maximum among these shortest distances is found; its corresponding point is the inscribed circle center, and that distance is the inscribed circle radius;
(3) assuming the input gesture points fingertip-up by default, the distance from the inscribed circle center to the upper edge of the enclosing rectangle is obtained; the sum of this distance and the inscribed circle radius is the hand length;
(4) according to the hand length, the arm part is removed, and the palm enclosing rectangle and its corresponding coordinates are obtained;
Third, background noise is removed:
by calculating the area of each contour and deleting contours whose area is below a set threshold, background noise is excluded.
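As an illustration of steps 21 through 23, the following is a minimal OpenCV sketch of the segmentation pipeline; the function names, the choice of Otsu's method as the adaptive threshold, and the default area threshold are illustrative assumptions rather than part of the patent text:

```python
import cv2
import numpy as np

def segment_gesture(frame_bgr, min_area=100):
    """Sketch of steps 21-23: YCrCb conversion, Cr-channel thresholding,
    contour extraction, and removal of small noise contours."""
    # Step 21: convert the captured frame (BGR in OpenCV) to YCrCb.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    cr = ycrcb[:, :, 1]  # the Cr channel carries the skin chrominance
    # Step 22: adaptive thresholding of the Cr channel; Otsu's method is
    # one possible adaptive choice. The result is a binary image.
    cr = cv2.GaussianBlur(cr, (5, 5), 0)
    _, binary = cv2.threshold(cr, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Step 23 (third sub-step): delete contours below the area threshold,
    # which excludes small background noise.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    kept = [c for c in contours if cv2.contourArea(c) >= min_area]
    mask = np.zeros_like(binary)
    cv2.drawContours(mask, kept, -1, 255, thickness=cv2.FILLED)
    return mask, kept
```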
Further, the detailed process of gesture recognition is:
Step 31: determine the Hu invariant moments of the denoised binary image.
For a digital image, the pixel at coordinates (x, y) is regarded as a two-dimensional random variable f(x, y); for an image with gray-level distribution f(x, y), its (p+q)-th order moment is defined as:

$$m_{pq} = \iint x^p y^q f(x, y)\,dx\,dy$$

where p, q = 0, 1, 2, ...;
The centroid $(x_0, y_0)$ is:

$$x_0 = \frac{m_{10}}{m_{00}}, \qquad y_0 = \frac{m_{01}}{m_{00}}$$

For a digital image the definition is discretized; in the discrete case, the (p+q)-th order central moment is:

$$\mu_{pq} = \sum_{x=1}^{M} \sum_{y=1}^{N} (x - x_0)^p (y - y_0)^q f(x, y)$$

where N and M are the height and width of the image, respectively;
When the image changes, $m_{pq}$ also changes, while $\mu_{pq}$ is translation-invariant but still sensitive to rotation. Representing features directly by ordinary moments or central moments therefore cannot make the features simultaneously invariant to translation, rotation, and scale; if normalized central moments are used instead, the features are not only translation-invariant but also scale-invariant.
The normalized central moment is defined as:

$$\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{r}}, \qquad r = \frac{p+q}{2} + 1$$

where p + q = 2, 3, ..., and $\mu_{00}$ is the value of the central moment when p = q = 0;
The expressions of the seven Hu invariant moments are obtained:

$$I_1 = \eta_{20} + \eta_{02}$$
$$I_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2$$
$$I_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2$$
$$I_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2$$
$$I_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$$
$$I_6 = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})$$
$$I_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$$
Step 32: the Hu invariant moment values are used as the input of the neural network for training, and the trained gestures are recognized in the video.
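As a sketch of step 31, OpenCV's moment routines compute the seven invariants directly from the binary image; the signed log scaling in the last line is a common practical normalization and is an assumption, not something stated in the text:

```python
import cv2
import numpy as np

def hu_features(binary_mask):
    """Step 31 (sketch): the 7 Hu invariant moments of a binary image."""
    m = cv2.moments(binary_mask, binaryImage=True)   # raw and central moments
    hu = cv2.HuMoments(m).flatten()                  # I1..I7 as a 7-vector
    # Hu moments span many orders of magnitude; a signed log transform keeps
    # them in a range a small network handles well (assumed normalization).
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```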
The invention will be further described below with reference to a specific embodiment.
Embodiment
The gesture recognition method based on a BP neural network of the present embodiment includes three parts: video acquisition, gesture segmentation, and gesture recognition:
(1) Video acquisition
A video containing a human hand is obtained by a camera.
(2) Gesture segmentation
The present invention realizes gesture segmentation using the color space combined with geometric properties. The RGB image obtained directly from the camera is converted to the YCrCb color space, adaptive thresholding is applied to the Cr channel, and the image is converted into a binary image; the gesture contour is found as the input for recognition, and finally noise is removed. The concrete method includes:
2.1 Color space conversion
The YCrCb color space separates chrominance from luminance; the skin color clusters well in this space and is little affected by changes in brightness, so it can distinguish human skin regions very well.
The conversion from RGB to the YCrCb color space can be expressed (in the standard 8-bit form) as:

$$\begin{aligned} Y &= 0.299R + 0.587G + 0.114B \\ Cr &= 0.5R - 0.4187G - 0.0813B + 128 \\ Cb &= -0.1687R - 0.3313G + 0.5B + 128 \end{aligned}$$
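As a quick sanity check, the formula above can be compared against OpenCV's converter; the sample pixel and the rounding tolerance are illustrative:

```python
import cv2
import numpy as np

# One sample pixel; OpenCV frames are BGR by default (B=30, G=80, R=200).
bgr = np.uint8([[[30, 80, 200]]])
y, cr, cb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)[0, 0]

b, g, r = 30.0, 80.0, 200.0
y_ref  = 0.299 * r + 0.587 * g + 0.114 * b                 # ~110
cr_ref = 0.5 * r - 0.4187 * g - 0.0813 * b + 128           # ~192
cb_ref = -0.1687 * r - 0.3313 * g + 0.5 * b + 128          # ~83
# The two results should agree to within +/-1 due to 8-bit rounding.
print((y, cr, cb), (round(y_ref), round(cr_ref), round(cb_ref)))
```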
2.2 Noise removal
The noise mainly includes a face that may be present, an arm that may be present, and other skin-color-like objects.
First, removal of the face part
Face removal needs to be carried out before the color space conversion of the image, and its precondition is that the hand and the face do not overlap. Under this condition, the present invention first detects the face using Haar-like features and then removes it.
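An illustrative sketch of this face-removal step using OpenCV's bundled Haar cascade; the cascade file, detection parameters, and the size of the elliptical mask are assumptions for illustration:

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def remove_faces(frame_bgr):
    """Detect faces with Haar-like features and blank them (plus a margin of
    surrounding skin) so they cannot enter the skin-color segmentation."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    out = frame_bgr.copy()
    for (x, y, w, h) in faces:
        # Elliptical region slightly larger than the detection, set to 0
        # (black), matching the embodiment's zeroing of the face region
        # and its surrounding skin pixels.
        center = (x + w // 2, y + h // 2)
        axes = (int(w * 0.75), int(h * 0.9))
        cv2.ellipse(out, center, axes, 0, 0, 360, (0, 0, 0), thickness=-1)
    return out
```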
Second, removal of the arm part
Human-computer interaction mainly uses the palm portion, so a superfluous arm part will affect the judgment of the gesture and needs to be removed; this situation occurs when the user wears short sleeves.
As shown in Fig. 2, ABCD is the minimum enclosing rectangle containing the skin-color region, the length of BC is the arm length $d_{arm}$, circle O is the maximum inscribed circle of the skin-color region, r is the inscribed circle radius, d is the distance from the center O to AB, and $d_{hand}$ is the length of the palm.
The minimum enclosing rectangle is obtained with the OpenCV function minAreaRect.
Finding the inscribed circle: each point inside the contour is scanned, and its shortest distance to the contour boundary is obtained. After all points have been scanned, the maximum among these shortest distances is found; its corresponding point is the inscribed circle center, and that distance is the inscribed circle radius. To speed up the program, a pixel can be sampled every N points during scanning; after many experiments the present invention chooses N = 10, which ensures both the speed and the accuracy of the calculation.
Let the coordinates of the points be A(x_a, y_a), B(x_b, y_b), C(x_c, y_c), D(x_d, y_d), O(x_o, y_o). From the geometric relations the palm length is obtained as:

$$d_{hand} = d + r$$

Rectangle ABNM is the region that remains after the arm part is excluded and contains only the palm portion; on the basis of $d_{hand}$, the coordinates of the points M(x_m, y_m) and N(x_n, y_n) are obtained as the points on AD and BC, respectively, at distance $d_{hand}$ from AB.
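The inscribed-circle search and the resulting palm rectangle can be sketched as follows; the grid sampling stride follows the text's N = 10, while the function names and the use of the upright bounding rectangle (rather than minAreaRect's rotated one) are simplifying assumptions:

```python
import cv2

def max_inscribed_circle(contour, step=10):
    """Find the maximum inscribed circle of a contour by scanning grid points
    every `step` pixels (the text's N = 10): the point whose shortest distance
    to the boundary is largest is the center, and that distance is the radius."""
    x, y, w, h = cv2.boundingRect(contour)
    best_r, best_c = -1.0, (x, y)
    for py in range(y, y + h, step):
        for px in range(x, x + w, step):
            # Signed distance: positive inside the contour, negative outside.
            dist = cv2.pointPolygonTest(contour, (float(px), float(py)), True)
            if dist > best_r:
                best_r, best_c = dist, (px, py)
    return best_c, best_r

def palm_rectangle(contour, step=10):
    """Arm removal (sketch): d_hand = d + r, where d is the distance from the
    circle center to the upper edge AB and r is the inscribed circle radius;
    the palm region ABNM is the top d_hand band of the bounding rectangle."""
    x, y, w, h = cv2.boundingRect(contour)
    (cx, cy), r = max_inscribed_circle(contour, step)
    d_hand = int((cy - y) + r)          # d + r, per the geometric relation
    return x, y, w, min(d_hand, h)      # (x, y, width, height) of ABNM
```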
Third, removal of other noise
The gesture is the detection target of the present invention, and the hand is assumed always to appear in the foreground; other skin-color-like objects that may appear are treated as background. By calculating the area of each contour and deleting contours whose area is below a given value, the other background noise can be excluded.
(3) Gesture recognition
On the basis of gesture segmentation, the Hu moments of the binary image are obtained as gesture features, and at the same time a three-layer BP neural network is constructed; the Hu moment values are used as the input of the neural network for training.
The present invention uses the 7 Hu moment features of the gesture image as the input of the neural network and chooses 10 hidden-layer nodes; the number of output-layer nodes is chosen as needed, here 8, comprising the finger counts 0-5 and two special gestures, in order to verify both the accuracy of the system's static gesture recognition and its extensibility to more gestures. A code sketch of such a network is given below.
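A minimal sketch of the 7-10-8 three-layer BP network using OpenCV's MLP, which trains by back-propagation; the activation function, termination criteria, and one-hot target encoding are illustrative assumptions:

```python
import cv2
import numpy as np

def build_bp_network():
    """Three-layer BP network: 7 Hu-moment inputs, 10 hidden nodes, 8 outputs."""
    net = cv2.ml.ANN_MLP_create()
    net.setLayerSizes(np.array([7, 10, 8], dtype=np.int32))
    net.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM)
    net.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP)
    net.setTermCriteria((cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS,
                         1000, 1e-4))
    return net

def train(net, features, labels):
    """`features`: (n, 7) float32 Hu vectors; `labels`: (n,) ints in 0..7."""
    targets = np.full((len(labels), 8), -1.0, dtype=np.float32)
    targets[np.arange(len(labels)), labels] = 1.0   # symmetric-sigmoid range
    net.train(features.astype(np.float32), cv2.ml.ROW_SAMPLE, targets)

def predict(net, feature_vec):
    """Return the index (0..7) of the recognized gesture."""
    _, out = net.predict(feature_vec.reshape(1, 7).astype(np.float32))
    return int(np.argmax(out))
```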
The present embodiment is further illustrated below with reference to the simulation figures.
The present embodiment uses the face detection program provided with OpenCV: the faces that may appear within the camera's range are detected first, and if a face is detected, the pixel values of the elliptical face region and its surrounding skin are set to 0, i.e., converted to the black part of the binary image; the segmentation effect is shown in Fig. 3 (a) and Fig. 3 (b).
The effect of segmenting the arm and hand is shown in Fig. 4 (a): the rectangle is the overall enclosing rectangle, the circle is the palm inscribed circle, and the middle line segment in the rectangle is the dividing line calculated from the enclosing rectangle and the inscribed circle. By setting the palm portion as the region of interest, the redundant part can be removed; the binary image after arm removal is shown in Fig. 4 (b).
When other skin-color-like objects appear in the background, the background noise can be removed by excluding the smaller-area parts; in the present embodiment, the threshold chosen for excluding noise areas is 100. The segmentation effect is shown in Fig. 5 (a) and Fig. 5 (b).
The present invention chooses 8 different gestures in total, including finger counts and two self-defined gestures, as shown in Fig. 6 (a)~Fig. 6 (h). For each gesture, 200 groups of data were collected as training data, and 5 experimenters each carried out 20 test groups per gesture; the final average accuracy was 96.25%.
The training of the neural network is completed when the program starts, so it does not affect recognition speed. When the experimenter wears long sleeves that cover the arm, the average recognition time is 33.3 ms, about 30 frames per second, which guarantees smooth video display; when the experimenter wears short sleeves, gesture segmentation takes extra time, mainly in finding the inscribed circle, and the final average time is 65.6 ms, about 15 frames per second, which basically guarantees smooth video display.

Claims (3)

1. A gesture recognition method based on a BP neural network, characterized by comprising the following steps:
Video acquisition: a video containing a human hand is obtained by a video acquisition device;
Gesture segmentation: the RGB image obtained by the video acquisition device is converted to the YCrCb color space, adaptive thresholding is applied to the Cr channel, the image is converted into a binary image, the gesture contour is obtained, and noise is removed;
Gesture recognition: the Hu invariant moments of the denoised binary image are computed as gesture features, a three-layer BP neural network is constructed, the Hu invariant moment values are used as the input of the neural network for training, and the trained gestures are recognized in the video.
2. The gesture recognition method based on a BP neural network according to claim 1, characterized in that the detailed process of gesture segmentation is:
Step 21: the RGB image obtained by the video acquisition device is converted to the YCrCb color space;
Step 22: adaptive thresholding is applied to the Cr channel, and the image is converted into a binary image to obtain the gesture contour;
Step 23: noise is removed.
First, the face part is removed:
the face is detected using Haar-like features, and the face part is removed;
Second, the arm part is removed:
(1) the minimum enclosing rectangle containing the skin-color region is obtained;
(2) the inscribed circle is found: each point inside the contour is scanned, and its shortest distance to the contour boundary is obtained; after all points have been scanned, the maximum among these shortest distances is found; its corresponding point is the inscribed circle center, and that distance is the inscribed circle radius;
(3) assuming the input gesture points fingertip-up by default, the distance from the inscribed circle center to the upper edge of the enclosing rectangle is obtained; the sum of this distance and the inscribed circle radius is the hand length;
(4) according to the hand length, the arm part is removed, and the palm enclosing rectangle and its corresponding coordinates are obtained;
Third, background noise is removed:
by calculating the area of each contour and deleting contours whose area is below a set threshold, background noise is removed.
3. The gesture recognition method based on a BP neural network according to claim 1, characterized in that the detailed process of gesture recognition is:
Step 31: determine the Hu invariant moments of the denoised binary image.
For a digital image, the pixel at coordinates (x, y) is regarded as a two-dimensional random variable f(x, y); for an image with gray-level distribution f(x, y), its (p+q)-th order moment is defined as:

$$m_{pq} = \iint x^p y^q f(x, y)\,dx\,dy$$

where p, q = 0, 1, 2, ...;
The centroid $(x_0, y_0)$ is:

$$x_0 = \frac{m_{10}}{m_{00}}, \qquad y_0 = \frac{m_{01}}{m_{00}}$$
For a digital image, in the discrete case, the (p+q)-th order central moment is:

$$\mu_{pq} = \sum_{x=1}^{M} \sum_{y=1}^{N} (x - x_0)^p (y - y_0)^q f(x, y)$$

where N and M are the height and width of the image, respectively;
The normalized central moment is defined as:

$$\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{r}}, \qquad r = \frac{p+q}{2} + 1$$

where p + q = 2, 3, ..., and $\mu_{00}$ is the value of the central moment when p = q = 0;
The expressions of the seven Hu invariant moments are obtained:

$$I_1 = \eta_{20} + \eta_{02}$$
$$I_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2$$
$$I_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2$$
$$I_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2$$
$$I_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$$
$$I_6 = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})$$
$$I_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]$$
Step 32: the Hu invariant moment values are used as the input of the neural network for training, and the trained gestures are recognized in the video.
CN201610847276.2A 2016-09-23 2016-09-23 Gesture recognition method based on BP neural network Active CN106503619B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610847276.2A CN106503619B (en) 2016-09-23 2016-09-23 Gesture recognition method based on BP neural network


Publications (2)

Publication Number Publication Date
CN106503619A true CN106503619A (en) 2017-03-15
CN106503619B CN106503619B (en) 2020-06-19

Family

ID=58291023

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610847276.2A Active CN106503619B (en) 2016-09-23 2016-09-23 Gesture recognition method based on BP neural network

Country Status (1)

Country Link
CN (1) CN106503619B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101853071A (en) * 2010-05-13 2010-10-06 重庆大学 Gesture identification method and system based on visual sense
CN102339379A (en) * 2011-04-28 2012-02-01 重庆邮电大学 Gesture recognition method and gesture recognition control-based intelligent wheelchair man-machine system
CN105825193A (en) * 2016-03-25 2016-08-03 乐视控股(北京)有限公司 Method and device for position location of center of palm, gesture recognition device and intelligent terminals

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
OZTURK O et al.: "Boosting real-time recognition of hand posture and gesture for virtual mouse operations with segmentation", Applied Intelligence *
姚争为等: "Wrist recognition and palm-center estimation based on a depth camera" ("基于深度相机的手腕识别与掌心估测"), 《中国图象图形学报》 *
成信宇: "Static digital gesture recognition based on arm removal" ("基于手臂去除的静态数字手势识别"), 《万方数据》 *
王先军等: "Gesture recognition method with BP neural network under complex background" ("复杂背景下BP神经网络的手势识别方法"), 《计算机应用与软件》 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109190496A (en) * 2018-08-09 2019-01-11 华南理工大学 A kind of monocular static gesture identification method based on multi-feature fusion
CN109299743A (en) * 2018-10-18 2019-02-01 京东方科技集团股份有限公司 Gesture identification method and device, terminal
CN109299743B (en) * 2018-10-18 2021-08-10 京东方科技集团股份有限公司 Gesture recognition method and device and terminal
CN109684959A (en) * 2018-12-14 2019-04-26 武汉大学 The recognition methods of video gesture based on Face Detection and deep learning and device
CN110309806A (en) * 2019-07-08 2019-10-08 哈尔滨理工大学 A kind of gesture recognition system and its method based on video image processing
CN110309806B (en) * 2019-07-08 2020-12-11 哈尔滨理工大学 Gesture recognition system and method based on video image processing
CN113485558A (en) * 2021-07-22 2021-10-08 华东师范大学 Gesture recognition method and system based on infrared sensing array

Also Published As

Publication number Publication date
CN106503619B (en) 2020-06-19

Similar Documents

Publication Publication Date Title
CN106503619A (en) Gesture identification method based on BP neural network
CN103941866B (en) Three-dimensional gesture recognizing method based on Kinect depth image
CN102402680B (en) Hand and indication point positioning method and gesture confirming method in man-machine interactive system
CN104063059B (en) A kind of real-time gesture recognition method based on finger segmentation
CN103294996B (en) A kind of 3D gesture identification method
CN103971102B (en) Static gesture recognition method based on finger contour and decision-making trees
CN108256421A (en) A kind of dynamic gesture sequence real-time identification method, system and device
CN103598870A (en) Optometry method based on depth-image gesture recognition
CN107578023A (en) Man-machine interaction gesture identification method, apparatus and system
CN111104820A (en) Gesture recognition method based on deep learning
CN105739702A (en) Multi-posture fingertip tracking method for natural man-machine interaction
CN105536205A (en) Upper limb training system based on monocular video human body action sensing
CN106326860A (en) Gesture recognition method based on vision
CN103984928A (en) Finger gesture recognition method based on field depth image
CN111414837A (en) Gesture recognition method and device, computer equipment and storage medium
CN107341811A (en) The method that hand region segmentation is carried out using MeanShift algorithms based on depth image
CN103034851B (en) The hand tracking means based on complexion model of self study and method
Vishwakarma et al. Simple and intelligent system to recognize the expression of speech-disabled person
CN103927555A (en) Static sign language letter recognition system and method based on Kinect sensor
Thongtawee et al. A novel feature extraction for American sign language recognition using webcam
CN103336967A (en) Hand motion trail detection method and apparatus
CN109359566A (en) The gesture identification method of hierarchical classification is carried out using finger characteristic
CN107329564B (en) Man-machine finger guessing method based on gesture intelligent perception and man-machine cooperation mechanism
CN112699857A (en) Living body verification method and device based on human face posture and electronic equipment
CN109766559B (en) Sign language recognition translation system and recognition method thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant