CN109101860B - Electronic equipment and gesture recognition method thereof - Google Patents

Electronic equipment and gesture recognition method thereof

Info

Publication number
CN109101860B
CN109101860B
Authority
CN
China
Prior art keywords
depth information
hand
block
image
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710475523.5A
Other languages
Chinese (zh)
Other versions
CN109101860A (en)
Inventor
杨荣浩
蔡东佐
庄志远
郭锦斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Futaihua Industry Shenzhen Co Ltd
Hon Hai Precision Industry Co Ltd
Original Assignee
Futaihua Industry Shenzhen Co Ltd
Hon Hai Precision Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Futaihua Industry Shenzhen Co Ltd, Hon Hai Precision Industry Co Ltd filed Critical Futaihua Industry Shenzhen Co Ltd
Priority to CN201710475523.5A priority Critical patent/CN109101860B/en
Publication of CN109101860A publication Critical patent/CN109101860A/en
Application granted granted Critical
Publication of CN109101860B publication Critical patent/CN109101860B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507 Summing image-intensity values; Histogram projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A gesture recognition method is applied to an electronic device and comprises the following steps: acquiring an image which comprises a hand and has depth information; filtering static objects contained in the image; acquiring the coordinates of the hand in the image, and establishing a first block containing the hand according to the coordinates; acquiring the depth information of each pixel point in the first block and counting the number of pixel points at each depth; acquiring the depth information of the hand according to the statistical result, and establishing a second block by using the depth information of the hand; and acquiring a moving track of the hand in the second block and recognizing the gesture according to the moving track. The invention also provides an electronic device. With the electronic device and the gesture recognition method thereof, the gesture detection area is accurately established, the accuracy of gesture operation on the electronic device is improved, and the user experience is enhanced.

Description

Electronic equipment and gesture recognition method thereof
Technical Field
The invention relates to the technical field of electronic communication, in particular to electronic equipment and a gesture recognition method thereof.
Background
Current image recognition technology can recognize a gesture in an image and locate where the gesture is, but the detected block usually contains more than just the gesture: it often covers objects other than the hand, such as a wall, furniture, the head, or the torso, so the acquired gesture depth information may contain errors. These errors can prevent an accurate gesture detection area from being established, making a user's gesture operation of the device inaccurate and degrading the user experience.
Disclosure of Invention
In view of the above, a gesture recognition method and an electronic device using the same are needed that can accurately establish a gesture detection area.
An embodiment of the present invention provides a gesture recognition method applied to an electronic device, including the steps of: acquiring an image which comprises a hand and has depth information; filtering static objects contained in the image; acquiring the coordinates of the hand in the image, and establishing a first block containing the hand according to the coordinates; acquiring depth information of each pixel point in the first block and counting the number of the pixel points of each depth information; acquiring depth information of the hand according to the statistical result, and establishing a second block by using the depth information of the hand; and acquiring a moving track of the hand in the second block and recognizing the hand gesture according to the moving track.
One embodiment of the present invention provides an electronic device comprising: a memory; at least one processor; and one or more modules stored in the memory and executed by the at least one processor, the one or more modules comprising: an image acquiring module for acquiring an image which comprises a hand and has depth information; a first filtering module for filtering static objects contained in the image; a first establishing module for acquiring the coordinates of the hand in the image and establishing a first block containing the hand according to the coordinates; a counting module for acquiring the depth information of each pixel point in the first block and counting the number of pixel points at each depth; a second establishing module for acquiring the depth information of the hand according to the statistical result of the counting module and establishing a second block by utilizing the depth information of the hand; and a recognition module for acquiring the moving track of the hand in the second block and recognizing the gesture according to the moving track.
Compared with the prior art, the electronic device and its gesture recognition method filter out non-hand objects when the depth information of the hand is acquired, so the acquired hand depth information has a small error, the gesture detection area is accurately established, the accuracy of gesture operation of the electronic device is improved, and the user experience is enhanced.
Drawings
Fig. 1 is a functional block diagram of an electronic device according to an embodiment of the present invention.
FIG. 2 is a functional block diagram of a gesture recognition system according to an embodiment of the present invention.
Fig. 3 is a diagram illustrating a first block established by a first establishing module according to an embodiment of the invention.
Fig. 4 is a histogram for counting the number of pixels included in different depth values according to an embodiment of the present invention.
Fig. 5 is a diagram illustrating a second block established by a second establishing module according to an embodiment of the invention.
FIG. 6 is a flowchart illustrating steps of a gesture recognition method according to an embodiment of the invention.
Description of the main elements
(The element reference list is provided as images BDA0001328162040000021 and BDA0001328162040000031 in the original publication.)
The following detailed description will further illustrate the invention in conjunction with the above-described figures.
Detailed Description
Referring to fig. 1, in an embodiment, an electronic device 100 includes a gesture recognition system 1, a processor 2 and a memory 3. The above elements are electrically connected with each other. The electronic device 100 may be a television, a mobile phone, a tablet computer, or the like.
The gesture recognition system 1 is used for detecting and recognizing a gesture of a hand 20 in real time, thereby controlling the electronic device 100 through the gesture. The memory 3 may be used for storing various data of the electronic device 100, such as program codes of the gesture recognition system 1. The gesture recognition system 1 includes one or more modules stored in the memory 3 and executed by the processor 2 to perform the functions provided by the present invention.
Referring to fig. 2-5, the gesture recognition system 1 includes an image obtaining module 11, a first filtering module 12, a first establishing module 13, a counting module 14, a second establishing module 15, and a recognition module 16. The modules referred to in the present invention may be program segments for performing a specific function.
The image acquiring module 11 is used for acquiring an image that includes a hand 20 and has depth information. For example, the image acquiring module 11 may activate the depth camera 4 to capture an RGB image of the hand 20 together with the depth information of each object in the RGB image. Each pixel point in each frame of the RGB image can be represented by a coordinate in an XY coordinate system, and the depth information can be represented by a Z coordinate, so each pixel point can be represented by a three-dimensional coordinate.
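The three-dimensional representation described above can be sketched as follows. This is an illustrative sketch, not code from the patent; it assumes the depth frame arrives as a NumPy array whose entries are the per-pixel depth values:

```python
import numpy as np

# Illustrative helper (not from the patent): represent each pixel of a
# depth frame as an (x, y, z) coordinate, where (x, y) is the pixel
# position in the XY coordinate system and z is its depth value.
def to_xyz(depth_map: np.ndarray) -> np.ndarray:
    h, w = depth_map.shape
    ys, xs = np.mgrid[0:h, 0:w]            # pixel grid (row, column indices)
    return np.dstack([xs, ys, depth_map])  # shape (h, w, 3): x, y, z per pixel

frame = np.array([[10, 20], [30, 40]], dtype=np.uint8)
xyz = to_xyz(frame)
# xyz[1, 0] is [0, 1, 30]: column 0, row 1, depth 30
```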
The first filtering module 12 is used for filtering static objects contained in the image acquired by the image acquiring module 11. In one embodiment, the first filtering module 12 may send the image acquired by the image acquiring module 11 to a Gaussian Mixture Model (GMM), and further filter out static objects (background objects during shooting, such as walls, seats, etc.) in the image by the GMM, so as to retain dynamic objects (such as human head, hands, body, etc.) in the image.
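A minimal stand-in for this filtering step is sketched below. The patent specifies a Gaussian Mixture Model (OpenCV's `createBackgroundSubtractorMOG2` is a practical implementation); here a simpler running-average background model, a hypothetical simplification, marks pixels that deviate from the learned background as dynamic:

```python
import numpy as np

# Hypothetical simplification of GMM background subtraction: learn a
# background from earlier frames with an exponential running average,
# then flag pixels in the latest frame that deviate from it as dynamic.
def foreground_mask(frames, threshold=25.0):
    background = frames[0].astype(float)
    for f in frames[1:-1]:                       # learn from earlier frames
        background = 0.9 * background + 0.1 * f  # exponential running average
    diff = np.abs(frames[-1].astype(float) - background)
    return diff > threshold                      # True where a pixel changed

static = np.full((4, 4), 100.0)                  # wall/seat: never moves
moving = static.copy()
moving[1, 1] = 200.0                             # one "hand" pixel moved
mask = foreground_mask([static, static, moving])
# mask is True only at (1, 1); static background pixels are filtered out
```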
The first establishing module 13 is configured to acquire the coordinates of the hand 20 in the image and establish a first block 200 including the hand 20 according to the acquired coordinates. In one embodiment, the first establishing module 13 may find the coordinates of the hand 20 in the GMM-filtered image through a deep learning algorithm. Specifically, the first establishing module 13 may learn and establish the feature values of the hand 20 through a deep learning algorithm, use those feature values to find the coordinates of the hand 20 in the GMM-filtered image, and establish the first block 200 (as shown in fig. 3) including the hand 20 according to those coordinates. The area ratio of the hand 20 in the first block 200 is preferably larger than a preset ratio, to avoid the low recognition speed caused by too small a ratio. The preset ratio can be adjusted according to the actual recognition accuracy requirements. In the present embodiment, the preset ratio is 40%, that is, the area of the hand 20 preferably occupies more than 40% of the first block 200. In the first block 200, each pixel point also has corresponding XY coordinates and depth information (a Z coordinate).
The counting module 14 is configured to acquire the depth information of each pixel point in the first block 200 and count the number of pixel points at each depth. Since each pixel point has corresponding XY coordinates and depth information, the counting module 14 can directly query the depth information of each pixel point through its XY coordinates. Further, the counting module 14 may use a histogram to count the number of pixel points at each depth value.
For example, as shown in fig. 4, the first block 200 is a 5 (row) × 5 (column) block, and the first block 200 includes 25 pixels, each of which has a depth value ranging from 0 to 255. In fig. 4, the X coordinate of the histogram is a numerical value (depth value) of 0 to 255, the Y coordinate is the number of pixels, and it can be counted by using the histogram that the number of pixels having a depth value of 50 is 10, the number of pixels having a depth value of 90 is 12, the number of pixels having a depth value of 240 is 2, and the number of pixels having a depth value of 8 is 1.
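The counting step above can be reproduced with a one-line histogram. The pixel values below recreate the example counts of fig. 4 (10 pixel points at depth 50, 12 at depth 90, 2 at depth 240, 1 at depth 8, in a 25-pixel block); the data layout is illustrative:

```python
import numpy as np

# Recreate the 25 pixel points of the fig. 4 example and count how many
# fall on each depth value (0-255), as the counting module does.
block = np.array([50] * 10 + [90] * 12 + [240] * 2 + [8] * 1, dtype=np.uint8)
hist = np.bincount(block.ravel(), minlength=256)  # hist[d] = pixel count at depth d
assert hist[50] == 10 and hist[90] == 12 and hist[240] == 2 and hist[8] == 1
```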
The second establishing module 15 is configured to acquire the depth information of the hand 20 according to the statistical result of the counting module 14 and establish a second block 300 using the depth information of the hand 20. In one embodiment, pixel points whose depth information is less than a preset depth value may be regarded as noise, and the second establishing module 15 may filter out the pixel points in the first block 200 whose depth information is less than the preset depth value. The preset depth value may be adjusted according to the actual recognition accuracy requirements; in this embodiment, the preset depth value is 10. That is, the second establishing module 15 filters out the pixel points in the first block 200 whose depth information is less than 10, so the pixel point with the depth value of 8 in fig. 4 is filtered out, and the pixel points with the depth values of 50, 90, and 240 are retained.
The second establishing module 15 extracts from the histogram the two groups of depth information containing the largest numbers of pixel points and selects the group with the lower depth value as the depth information of the hand 20. For example, in fig. 4, the numbers of pixel points with depth values of 50 and 90 are the largest (10 pixel points at depth 50 and 12 at depth 90), and depth value 50 is the lower of the two (50<90), so the second establishing module 15 selects the depth value 50 as the depth information of the position of the hand 20.
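The noise filter and the two-peak selection together can be sketched as follows; `hand_depth` is a hypothetical helper, using the noise threshold of 10 and the example counts of fig. 4. Taking the lower of the two largest depth groups reflects the assumption that the hand is nearer the camera than the head or torso behind it:

```python
import numpy as np

def hand_depth(hist, min_depth=10):
    # Discard depths below the preset noise threshold (10 in the embodiment).
    valid = hist.copy()
    valid[:min_depth] = 0
    # Take the two depth values with the most pixel points...
    top_two = np.argsort(valid)[-2:]
    # ...and keep the lower depth of the two: the hand is assumed to be
    # nearer the camera than the head or torso behind it.
    return int(top_two.min())

# Histogram from the fig. 4 example: depths 50, 90, 240, 8 with
# counts 10, 12, 2, 1 respectively.
hist = np.zeros(256, dtype=int)
hist[50], hist[90], hist[240], hist[8] = 10, 12, 2, 1
# hand_depth(hist) returns 50: the two largest groups are at depths
# 50 and 90, and 50 is the lower depth.
```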
In one embodiment, the second establishing module 15 is also used to filter out other objects (e.g., the head, the body, etc.) after obtaining the depth information of the hand 20, thereby keeping only the hand 20. Specifically, the second establishing module 15 may establish a depth information interval according to the depth information of the hand 20, and filter out the pixel points of the first block 200 that are not in that interval to establish a block plane containing the hand 20. For example, the second establishing module 15 establishes a depth information interval (48-52) with the depth value 50 as the median, and filters out the pixel points with depth values smaller than 48 or larger than 52, so as to create the block plane with the smallest area that still covers the hand 20. The second establishing module 15 then uses this block plane as a plane and the depth information of the hand 20 as a depth (denoted by "H" in fig. 5) to establish a second block 300 occupying a three-dimensional space region, as shown in fig. 5.
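The interval filtering and block-plane construction can be sketched as follows. The 5×5 block and its depth values are illustrative, not taken from the patent; the interval half-width of 2 matches the embodiment's 48-52 interval around depth 50:

```python
import numpy as np

# Illustrative sketch: keep only pixel points whose depth lies in an
# interval centred on the hand depth (50 ± 2 in the embodiment), then
# take the smallest axis-aligned plane that still covers them.
def hand_region(depth_block, hand_depth, tolerance=2):
    mask = np.abs(depth_block.astype(int) - hand_depth) <= tolerance
    ys, xs = np.nonzero(mask)
    # Bounds of the smallest block plane covering all in-interval pixels.
    return (ys.min(), ys.max(), xs.min(), xs.max()), mask

block = np.full((5, 5), 90, dtype=np.uint8)  # head/torso pixels at depth 90
block[1:4, 1:3] = 50                         # hand pixels at depth 50
bounds, mask = hand_region(block, 50)
# bounds is (1, 3, 1, 2): rows 1-3 and columns 1-2 cover the hand;
# all depth-90 pixels fall outside the 48-52 interval and are filtered out
```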
The recognition module 16 is configured to acquire a moving track of the hand 20 in the second block 300 and recognize a gesture of the hand 20 according to the detected moving track. In an embodiment, the memory 3 may pre-store the trajectories corresponding to different gestures, and the recognition module 16 may learn and improve the different trajectories corresponding to different gestures through a deep learning algorithm, so as to improve the accuracy of gesture recognition.
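The final matching step can be sketched as below, assuming trajectories are stored as point sequences and compared by a nearest-template distance. This is a hypothetical stand-in: the patent stores trajectories in memory and leaves the actual matching and refinement to a deep learning algorithm, which this simple comparison does not implement:

```python
import numpy as np

# Hypothetical gesture templates (point sequences), standing in for the
# trajectories the patent pre-stores in memory 3.
templates = {
    "swipe_right": np.array([[0, 0], [1, 0], [2, 0]]),
    "swipe_down":  np.array([[0, 0], [0, 1], [0, 2]]),
}

def classify(trajectory):
    # Pick the stored gesture whose template is closest point-by-point
    # (sum of squared distances) to the tracked trajectory.
    return min(templates, key=lambda g: np.sum((templates[g] - trajectory) ** 2))

track = np.array([[0, 0], [1, 0], [2, 1]])  # mostly rightward hand motion
# classify(track) returns "swipe_right"
```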
FIG. 6 is a flowchart illustrating a gesture recognition method according to an embodiment of the invention. The method may be used in the gesture recognition system 1 shown in fig. 2.
In step S600, the image acquiring module 11 acquires an image including the hand 20 and having depth information.
In step S602, the first filtering module 12 filters static objects included in the acquired image.
In step S604, the first establishing module 13 obtains the coordinates of the hand 20 in the filtered image and establishes the first block 200 including the hand 20 according to the obtained coordinates.
In step S606, the counting module 14 obtains depth information of each pixel in the first block 200 and counts the number of pixels in each depth information.
In step S608, the second creating module 15 obtains the depth information of the hand 20 according to the statistical result, and creates the second block 300 by using the depth information of the hand 20.
In step S610, the recognition module 16 obtains a moving track of the hand 20 in the second block 300 and recognizes a gesture of the hand 20 according to the moving track, so as to control the electronic device 100.
In one embodiment, the image acquiring module 11 may obtain the RGB image including the hand 20 and the depth information of each object in the RGB image by activating the depth camera 4.
In one embodiment, the first filtering module 12 may filter out static objects in the image by the GMM, so as to retain dynamic objects in the image.
In an embodiment, the first establishing module 13 may learn and establish the feature values of the hand 20 through a deep learning algorithm, use those feature values to find the coordinates of the hand 20 in the GMM-filtered image, and establish the first block 200 including the hand 20 according to those coordinates. The area ratio of the hand 20 in the first block 200 is preferably larger than a preset ratio, to avoid the low recognition speed caused by too small a ratio. The preset ratio can be adjusted according to the actual recognition accuracy requirements.
In an embodiment, the counting module 14 may query the depth information of each pixel point through the XY coordinates of each pixel point and count the number of pixel points included in different depth values by using a histogram.
In an embodiment, the second establishing module 15 may filter out the pixel points of the first block 200 whose depth information is smaller than a preset depth value. The preset depth value can be set and adjusted according to the actual recognition accuracy requirements.
In one embodiment, the second establishing module 15 extracts from the histogram the two groups of depth information containing the largest numbers of pixel points and selects the group with the lower depth value as the depth information of the hand 20.
In an embodiment, the second establishing module 15 may establish a depth information interval according to the depth information of the hand 20, filter the objects in the first block 200 that are not in the depth information interval to establish a block plane including the hand 20, and establish a second block 300 having a three-dimensional space region with the block plane as a plane and the depth information of the hand 20 as a depth.
In an embodiment, the memory 3 may pre-store the trajectories corresponding to different gestures, and the recognition module 16 may learn and improve the different trajectories corresponding to different gestures through a deep learning algorithm, so as to improve the accuracy of gesture recognition.
With the electronic device and its gesture recognition method, objects that are not the hand can be filtered out when the depth information of the hand is acquired, so the acquired hand depth information has a small error, the gesture detection area is accurately established, the accuracy of gesture operation of the electronic device is improved, and the user experience is enhanced.
It will be apparent to those skilled in the art that other variations and modifications may be made in accordance with the practice of the invention disclosed herein without departing from its spirit and scope.

Claims (10)

1. A gesture recognition method is applied to electronic equipment and is characterized by comprising the following steps:
acquiring an image which comprises a hand and has depth information;
filtering static objects contained in the image;
establishing a characteristic value of the hand by using a deep learning algorithm;
acquiring coordinates of the hand in the image according to the characteristic value of the hand;
establishing a first block containing the hand according to the coordinates, wherein the area ratio of the hand in the first block is larger than a preset ratio;
acquiring depth information of each pixel point in the first block and counting the number of the pixel points of each depth information;
filtering pixel points of which the depth information is smaller than a preset depth value in the first block;
extracting two groups of depth information containing the maximum number of pixel points and selecting one group of depth information with lower depth value from the two groups of depth information as the depth information of the hand;
establishing a second block by using the depth information of the hand; and
and acquiring a moving track of the hand in the second block and recognizing the hand gesture according to the moving track.
2. The gesture recognition method of claim 1, wherein the step of obtaining an image including a hand and having depth information comprises:
and acquiring the RGB image containing the hand and the depth information of each object in the RGB image.
3. The gesture recognition method of claim 2, wherein the step of filtering static objects contained in the image comprises:
and filtering static objects contained in the RGB image by utilizing a Gaussian mixture model.
4. The gesture recognition method of claim 1, wherein the step of obtaining depth information of each pixel point in the first block and counting the number of pixel points of each depth information comprises:
extracting depth information of each pixel point according to the coordinate of each pixel point in the first block; and
and counting the number of pixel points of each depth information by utilizing the histogram.
5. The gesture recognition method of claim 1, wherein the step of creating a second tile using the depth information of the hand comprises:
establishing a depth information interval according to the depth information of the hand;
filtering objects in the first block which are not in the depth information interval to establish a block plane containing the hand;
and establishing the second block according to the block plane and the depth information of the hand.
6. An electronic device, comprising:
a processor adapted to implement instructions; and
a memory adapted to store a plurality of instructions, wherein the instructions are adapted to be loaded and executed by the processor to:
acquiring an image which comprises a hand and has depth information;
filtering static objects contained in the image;
establishing a characteristic value of the hand by using a deep learning algorithm, acquiring a coordinate of the hand in the image according to the characteristic value of the hand, and establishing a first block containing the hand according to the coordinate, wherein the ratio of the area of the hand in the first block is greater than a preset ratio;
acquiring depth information of each pixel point in the first block and counting the number of the pixel points of each depth information;
filtering pixel points of which the depth information is smaller than a preset depth value in the first block;
extracting two groups of depth information containing the maximum number of pixel points and selecting one group of depth information with lower depth value from the two groups of depth information as the depth information of the hand;
establishing a second block by using the depth information of the hand; and
and acquiring a moving track of the hand in the second block and recognizing the hand gesture according to the moving track.
7. The electronic device of claim 6, wherein the obtaining an image comprising a hand and having depth information comprises:
and acquiring the RGB image containing the hand and the depth information of each object in the RGB image.
8. The electronic device of claim 7, wherein the filtering static objects contained in the image comprises:
and filtering static objects contained in the RGB image by utilizing a Gaussian mixture model.
9. The electronic device of claim 6, wherein the obtaining depth information of each pixel in the first tile and counting the number of pixels per depth information comprises:
and extracting the depth information of each pixel point according to the coordinates of each pixel point in the first block, and counting the number of the pixel points of each depth information by utilizing a histogram.
10. The electronic device of claim 6, wherein the creating a second tile using the depth information of the hand comprises:
establishing a depth information interval according to the depth information of the hand, filtering objects in the first block which are not in the depth information interval to establish a block plane containing the hand, and establishing the second block according to the block plane and the depth information of the hand.
CN201710475523.5A 2017-06-21 2017-06-21 Electronic equipment and gesture recognition method thereof Active CN109101860B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710475523.5A CN109101860B (en) 2017-06-21 2017-06-21 Electronic equipment and gesture recognition method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710475523.5A CN109101860B (en) 2017-06-21 2017-06-21 Electronic equipment and gesture recognition method thereof

Publications (2)

Publication Number Publication Date
CN109101860A CN109101860A (en) 2018-12-28
CN109101860B (en) 2022-05-13

Family

ID=64796257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710475523.5A Active CN109101860B (en) 2017-06-21 2017-06-21 Electronic equipment and gesture recognition method thereof

Country Status (1)

Country Link
CN (1) CN109101860B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102024151A (en) * 2010-12-02 2011-04-20 中国科学院计算技术研究所 Training method of gesture motion recognition model and gesture motion recognition method
CN103226708A (en) * 2013-04-07 2013-07-31 华南理工大学 Multi-model fusion video hand division method based on Kinect
CN104463191A (en) * 2014-10-30 2015-03-25 华南理工大学 Robot visual processing method based on attention mechanism
CN104765440A (en) * 2014-01-02 2015-07-08 株式会社理光 Hand detecting method and device
CN104834922A (en) * 2015-05-27 2015-08-12 电子科技大学 Hybrid neural network-based gesture recognition method
CN104992171A (en) * 2015-08-04 2015-10-21 易视腾科技有限公司 Method and system for gesture recognition and man-machine interaction based on 2D video sequence
CN105389539A (en) * 2015-10-15 2016-03-09 电子科技大学 Three-dimensional gesture estimation method and three-dimensional gesture estimation system based on depth data

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102789568B (en) * 2012-07-13 2015-03-25 浙江捷尚视觉科技股份有限公司 Gesture identification method based on depth information

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102024151A (en) * 2010-12-02 2011-04-20 中国科学院计算技术研究所 Training method of gesture motion recognition model and gesture motion recognition method
CN103226708A (en) * 2013-04-07 2013-07-31 华南理工大学 Multi-model fusion video hand division method based on Kinect
CN104765440A (en) * 2014-01-02 2015-07-08 株式会社理光 Hand detecting method and device
CN104463191A (en) * 2014-10-30 2015-03-25 华南理工大学 Robot visual processing method based on attention mechanism
CN104834922A (en) * 2015-05-27 2015-08-12 电子科技大学 Hybrid neural network-based gesture recognition method
CN104992171A (en) * 2015-08-04 2015-10-21 易视腾科技有限公司 Method and system for gesture recognition and man-machine interaction based on 2D video sequence
CN105389539A (en) * 2015-10-15 2016-03-09 电子科技大学 Three-dimensional gesture estimation method and three-dimensional gesture estimation system based on depth data

Also Published As

Publication number Publication date
CN109101860A (en) 2018-12-28

Similar Documents

Publication Publication Date Title
TWI625678B (en) Electronic device and gesture recognition method applied therein
US10534957B2 (en) Eyeball movement analysis method and device, and storage medium
JP6392468B2 (en) Region recognition method and apparatus
CN107977659B (en) Character recognition method and device and electronic equipment
KR101760109B1 (en) Method and device for region extraction
KR101758580B1 (en) Method and apparatus for area identification
US9576193B2 (en) Gesture recognition method and gesture recognition apparatus using the same
US20150131855A1 (en) Gesture recognition device and control method for the same
US10650234B2 (en) Eyeball movement capturing method and device, and storage medium
JP5484184B2 (en) Image processing apparatus, image processing method, and program
CN110119700B (en) Avatar control method, avatar control device and electronic equipment
CN110276251B (en) Image recognition method, device, equipment and storage medium
US20170351253A1 (en) Method for controlling an unmanned aerial vehicle
CN105095881A (en) Method, apparatus and terminal for face identification
CN103916593A (en) Apparatus and method for processing image in a device having camera
EP2996067A1 (en) Method and device for generating motion signature on the basis of motion signature information
US20130321404A1 (en) Operating area determination method and system
CN107091704A (en) Pressure detection method and device
CN108833774A (en) Camera control method, device and UAV system
CN104065949A (en) Television virtual touch method and system
CN109101860B (en) Electronic equipment and gesture recognition method thereof
CN103942523A (en) Sunshine scene recognition method and device
CN104899611B (en) Determine the method and device of card position in image
CN106406507B (en) Image processing method and electronic device
CN110089103B (en) Demosaicing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant