CN112232192A - Gesture convenient control system for disabled people - Google Patents
- Publication number
- CN112232192A (application number CN202011101541.5A)
- Authority
- CN
- China
- Prior art keywords
- gesture
- module
- image
- gesture recognition
- electrical equipment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/113—Recognition of static hand signs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/117—Biometrics derived from hands
Abstract
The invention provides a convenient gesture control system for disabled people, belonging to the field of smart homes. The system is composed of an image acquisition module, a gesture recognition module, a gesture control module and an instruction setting module. The image acquisition module uses a Kinect sensor to acquire gesture image information of the disabled person; its output end is connected with the input end of the gesture recognition module. The gesture recognition module recognizes the human gestures in the acquired gesture image information; its output end is connected with the gesture control module. The gesture control module controls the electrical equipment in the house and is connected with the instruction setting module through a local area network. The instruction setting module establishes the correspondence between the gestures in the acquired gesture images and instructions for the electrical equipment. The invention can intelligently recognize the user's gestures and control the operation of electrical equipment in the house, bringing comfortable and safe enjoyment to the user, with very wide application scenarios.
Description
Technical Field
The invention relates to the field of smart homes, and in particular to a convenient gesture control system for disabled people.
Background
In recent years, the number of people disabled by various disasters and illnesses has gradually increased. How to use modern information technology to expand the living space of disabled people, improve their quality of life, and provide them with a comfortable living environment has become a focus of attention in international academia and industry, as well as for governments. Recognizing user gestures in a home scene to control household electrical equipment allows a user to operate devices such as lights, air conditioners and televisions with simple gestures, providing more convenience and freedom.
In a home scene, user gestures are complex and diverse and are easily affected by factors such as background, illumination (especially the difference between daytime and night-time images) and shooting angle, which complicates real-time gesture recognition. Smart home devices with gesture recognition are currently rare on the market, and available gesture training data is limited (the small-sample problem).
Disclosure of Invention
To address these problems, the invention provides a convenient gesture control system for disabled people that can intelligently recognize user gestures and control the operation of electrical equipment in the house. The system consists of an image acquisition module, a gesture recognition module, a gesture control module and an instruction setting module. The image acquisition module uses a Kinect sensor to acquire gesture image information of the disabled person, and its output end is connected with the input end of the gesture recognition module. The gesture recognition module employs a convolutional neural network model combined with secondary transfer learning (STL-CNN) to recognize the human gestures in the image, thereby solving both the problem of recognizing gestures in daytime and night-time images and the small-sample problem. The output end of the gesture recognition module is connected with the gesture control module, which controls the electrical equipment in the house and is connected with the instruction setting module through a local area network. The instruction setting module establishes the correspondence between the gestures in the acquired gesture images and instructions for the electrical equipment.
The technical scheme of the invention is as follows: the gesture convenient control system for the disabled comprises an image acquisition module, a gesture recognition module, a gesture control module and an instruction setting module which are connected with each other through wireless signals;
the output end of the image acquisition module is connected with the input end of the gesture recognition module, the output end of the gesture recognition module is connected with the input end of the gesture control module, and the output end of the gesture control module is connected with the output end of the instruction setting module through the local area network.
Further, the image acquisition module acquires gesture image information of the disabled by using the depth information of the Kinect sensor.
Further, the gesture recognition module is used for recognizing the human gestures in the acquired gesture image information; the specific operation steps are as follows:
Step (1): preprocess the acquired gesture image by converting it from the RGB color space to the YCbCr color space and performing skin-color segmentation with a global fixed-threshold binarization method to obtain a binarized image;
Step (2): perform hand-region segmentation on the binarized image using an RCE neural network;
Step (3): design a convolutional neural network model combined with secondary transfer learning to perform gesture recognition on the segmented hand region.
Furthermore, the gesture control module is arranged in the household electrical equipment and used for acquiring the information of the household electrical equipment and controlling the household electrical equipment.
Further, the instruction setting module is used for establishing a corresponding instruction relation between the character gestures in the acquired gesture image information and the household electrical equipment, and sending the instruction to the gesture control module through the local area network.
The beneficial effects of the invention are as follows: the convenient gesture control system for disabled people can intelligently recognize user gestures and control the operation of electrical equipment in the home. The system consists of an image acquisition module, a gesture recognition module, a gesture control module and an instruction setting module. The image acquisition module uses a Kinect sensor to acquire gesture image information of the disabled person, and its output end is connected with the input end of the gesture recognition module. The gesture recognition module employs a convolutional neural network model combined with secondary transfer learning (STL-CNN) to recognize the human gestures in the image, solving both the daytime/night-time gesture recognition problem and the small-sample problem. The output end of the gesture recognition module is connected with the gesture control module, which controls the electrical equipment in the house and is connected with the instruction setting module through a local area network. The instruction setting module establishes the correspondence between the gestures in the acquired gesture images and instructions for the electrical equipment. The invention can obtain a complete gesture image, quickly and accurately segment and recognize the gesture in the image, and bring comfortable and safe enjoyment to the user.
Drawings
FIG. 1 is a flow chart of the architecture of the present invention;
FIG. 2 is a block diagram of a hand segmentation method for a gesture image according to the present invention;
FIG. 3 is a block diagram of the convolutional neural network (STL-CNN) model combined with secondary transfer learning for gesture recognition in the gesture recognition module, according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an embodiment of the present invention.
Detailed Description
In order to more clearly illustrate the technical solution of the present invention, the following detailed description is made with reference to the accompanying drawings:
as depicted in fig. 1; the gesture convenient control system for the disabled comprises an image acquisition module, a gesture recognition module, a gesture control module and an instruction setting module which are connected with each other through wireless signals;
the output end of the image acquisition module is connected with the input end of the gesture recognition module, the output end of the gesture recognition module is connected with the input end of the gesture control module, and the output end of the gesture control module is connected with the output end of the instruction setting module through the local area network.
Further, the image acquisition module acquires gesture image information of the disabled by using the depth information of the Kinect sensor;
compared with a traditional two-dimensional camera, the Kinect sensor has great advantages: it can obtain not only the color information of an object, as a traditional camera does, but also its depth information, helping digital devices to better perceive the environment. The gesture image information is therefore obtained using the depth information of the Kinect sensor.
Furthermore, the input end of the gesture recognition module is connected with the output end of the image acquisition module, and its output end is connected with the input end of the gesture control module. The gesture recognition module is embedded in a Raspberry Pi server device and performs preprocessing, hand-region segmentation and gesture recognition on the acquired gesture image. It is used for recognizing the human gestures in the acquired gesture image information; the specific operation steps are as follows:
Step (1): preprocess the acquired gesture image by converting it from the RGB color space to the YCbCr color space and performing skin-color segmentation with a global fixed-threshold binarization method to obtain a binarized image;
Step (2): perform hand-region segmentation on the binarized image using an RCE neural network;
Step (3): design a convolutional neural network (STL-CNN) model combined with secondary transfer learning to perform gesture recognition on the segmented hand region.
Furthermore, the gesture control module is arranged in the household electrical equipment; it acquires the information of the household electrical equipment and realizes control over it.
The gesture control module is connected with the instruction setting module through a local area network, which can be established via a wireless router.
Furthermore, the instruction setting module establishes the correspondence between the gestures in the acquired gesture images and instructions for the household electrical equipment, and sends the instructions to the gesture control module through the local area network, thereby controlling the electrical equipment.
The specific working principle is as follows:
as shown in fig. 2, a hand region segmentation method for a gesture image is provided, which mainly includes the following steps:
First, acquire a gesture image.
Gesture image information is acquired using the depth information of the Kinect sensor. The optical part of the Kinect comprises three cameras: an infrared emitter, an RGB (VGA) color camera, and a 3D depth sensor. The infrared emitter projects laser light covering the entire visible range of the Kinect, and the camera group receives the reflected light to identify the user. The image captured by the infrared camera is a depth field, in which the color of each pixel represents the distance from the object to the camera; for example, body parts close to the camera appear bright red or green, while body parts far from the camera appear dark gray. By matching the depth camera with the RGB camera, the Kinect can project a 3D image of a real object onto the screen and capture a color image and an infrared image simultaneously, thus capturing the user's gesture image.
Second, convert from the RGB color space to the YCbCr color space.
The acquired gesture image is preprocessed by converting it from the RGB color space to the YCbCr color space. YCbCr is a color space commonly used in video and digital images. It contains three components: Y (luma) represents the brightness of the image, with a value range of 0-255; the Cb component represents the difference between the blue component and the brightness value, with a value range of 0-255; and the Cr component represents the difference between the red component and the brightness value, with a value range of 0-255. The Cb and Cr components are independent of each other and can be effectively separated from the Y component.
the conversion formula from RGB color space to YCbCr color space is as follows:
conversion to matrix form is:
Third, compare each pixel with the threshold using the global fixed-threshold binarization method to obtain the binarized image.
The comparison is as follows: the Y, Cb and Cr values of human skin color fall approximately within [0, 256), [130, 174) and [77, 128) respectively. If the YCbCr values of a pixel fall within these intervals, the pixel value is set to 255; otherwise it is set to 0, yielding the binarized image.
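The color-space conversion and fixed-threshold skin segmentation above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the patent's implementation: the function names are hypothetical, the conversion uses the standard full-range BT.601 coefficients, and the Cb/Cr skin intervals default to the ones quoted in the text.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an H x W x 3 RGB image to YCbCr (full-range BT.601)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def skin_binarize(rgb, cb_range=(130, 174), cr_range=(77, 128)):
    """Global fixed-threshold skin segmentation: 255 = skin, 0 = background."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    mask = ((cb >= cb_range[0]) & (cb < cb_range[1]) &
            (cr >= cr_range[0]) & (cr < cr_range[1]))
    return np.where(mask, 255, 0).astype(np.uint8)
```

With these intervals a neutral gray pixel (128, 128, 128) maps to Cb = Cr = 128 and is rejected, since its Cb falls below 130. In practice the intervals would be tuned on captured skin samples, as fixed thresholds are sensitive to illumination.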
Fourth, segment the hand region using an RCE neural network.
the RCE neural network is composed of an input layer, a prototype layer and an output layer; the input layer and the prototype layer are fully connected, namely, the node of each input layer is connected with all the nodes of the prototype layer, and the node of the prototype layer is partially connected with the node of the output layer; each node of the prototype layer defines a sphere in a color space, and for each pixel point to be identified, if the pixel point falls in the sphere region of a certain prototype layer node, the pixel point belongs to a hand region, otherwise, the pixel point belongs to a background region, and therefore the hand region is segmented.
As shown in fig. 3, a convolutional neural network (STL-CNN) model combined with secondary transfer learning is provided for gesture recognition, mainly comprising the following steps:
First, train a convolutional neural network (CNN).
The CNN consists of 4 convolutional layers, 2 pooling layers, 1 Dropout layer and 1 fully-connected layer, with 1 pooling layer following every 2 convolutional layers; it is trained on the ImageNet dataset. The convolutional layers perform the convolution operation on the gesture image to extract features; the convolution formula is as follows:
x_j^i = f( Σ_k x_k^{i−1} * w_{jk}^i + b^i )

where * denotes the 2-dimensional convolution, x_j^i denotes the j-th feature map output at the i-th hidden layer, x_k^{i−1} denotes the k-th channel output at the (i−1)-th hidden layer, w_{jk}^i denotes the k-th filter weight coefficient of the j-th feature map at the i-th layer, b^i denotes the corresponding bias term of the i-th layer, and f is the activation function. After the convolution operation, the dimensionality of the extracted features must be reduced to lower the computational complexity. Common methods include max pooling and average pooling. Max pooling takes the point with the maximum value in a local region; its advantage is that it retains the most influential factor in the feature region and effectively avoids information loss, so max pooling is used here.
In deep learning, the biggest problem with datasets of small sample size is overfitting. To solve this, the invention uses the Dropout technique to randomly remove some nodes together with their input and output connections. The fully-connected layer can be regarded as a special convolution operation: each of its nodes is connected with all nodes of the previous layer and integrates the extracted features.
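To make the convolution and max-pooling operations concrete, here is a minimal single-channel NumPy sketch. It is didactic only, assuming "valid" padding and non-overlapping pooling windows; a real CNN layer has many channels and learned filters.

```python
import numpy as np

def conv2d(x, w, b=0.0):
    """Valid 2-D convolution (cross-correlation, as in CNN practice) of a
    single-channel image x with a kernel w, plus a bias term b."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * w) + b
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling: keep the strongest response per region,
    which is the information-preserving property the text argues for."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))
```

Average pooling would replace `.max(axis=(1, 3))` with `.mean(axis=(1, 3))`; max pooling is preferred here for the reason given above.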
Second, perform the first transfer learning on the trained CNN.
Parameters of some layers of the trained CNN are migrated to a first target network: the last fully-connected layer FC of the original network is removed, and a new fully-connected layer FC1 is appended to the migrated network, forming a new network. The network is then fine-tuned with a dataset of gesture images shot in the daytime, completing the first migration of the whole network. Having been pre-trained on the large-scale ImageNet dataset, the target network has a strong generalized image feature extraction capability and can effectively recognize daytime gesture images.
Third, perform the second transfer learning on the first target network.
Parameters of some layers of the trained first target network are migrated to a second target network: the last fully-connected layer FC1 of the first target network is removed, and a new fully-connected layer FC2 is appended to the migrated network, forming a new network. The network is then fine-tuned with a dataset of gesture images shot at night, completing the second migration of the whole network and realizing effective recognition of night-time gesture images.
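The two migration steps can be illustrated abstractly. The sketch below models a network as a list of weight layers, freezes the migrated feature layers, and replaces the final fully-connected head at each transfer, mirroring the FC → FC1 → FC2 progression; all names and layer sizes are hypothetical, and real fine-tuning would use a deep-learning framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def new_layer(n_in, n_out):
    """A freshly initialised, trainable fully-connected layer."""
    return {"w": rng.standard_normal((n_in, n_out)) * 0.01, "frozen": False}

def transfer(source_net, n_new_classes):
    """One migration step: copy and freeze all layers except the last
    fully-connected one, then append a new trainable head that would be
    fine-tuned on the next dataset (daytime or night-time gestures)."""
    target = [{"w": layer["w"].copy(), "frozen": True} for layer in source_net[:-1]]
    n_in = source_net[-1]["w"].shape[0]  # input width of the removed head
    target.append(new_layer(n_in, n_new_classes))
    return target

# ImageNet-pretrained "CNN" (toy sizes) -> first transfer -> second transfer
base = [new_layer(512, 256), new_layer(256, 1000)]
day_net = transfer(base, 10)       # appends FC1, fine-tuned on daytime gestures
night_net = transfer(day_net, 10)  # appends FC2, fine-tuned on night gestures
```

The frozen feature layers carry the generalized features over, while only the new head (and, optionally, the upper layers) is updated during fine-tuning.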
A specific embodiment is as follows. The Kinect sensor obtains gesture image information and transmits the gesture image to the gesture recognition module, which determines the gesture meaning through preprocessing, gesture segmentation and gesture recognition. As shown in fig. 4, the gesture is recognized as scissors with a confidence of 0.93. Only when the recognition confidence is greater than or equal to 0.8 does the gesture recognition module transmit the gesture meaning to the gesture control module, which forwards the information to the instruction setting module through the local area network. Since the instruction setting module maps the scissors gesture to turning on the electric lamp, it sends the lamp-on instruction to the gesture control module, which turns on the lamp upon receiving it.
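The decision logic of the embodiment (recognize a gesture, check the confidence threshold, then look up the configured instruction) can be sketched as follows; the function and command names are hypothetical, while the scissors-to-lamp mapping and the 0.8 threshold come from the text.

```python
CONFIDENCE_THRESHOLD = 0.8
# Mapping configured via the instruction setting module (hypothetical names).
INSTRUCTION_TABLE = {"scissors": "lamp_on"}

def dispatch(gesture, confidence, table=INSTRUCTION_TABLE):
    """Forward a command only when recognition is confident enough."""
    if confidence < CONFIDENCE_THRESHOLD:
        return None            # below threshold: ignore the recognition result
    return table.get(gesture)  # None when no instruction is configured
```

For the fig. 4 case, `dispatch("scissors", 0.93)` yields the lamp-on command, while a low-confidence or unmapped gesture yields no command at all.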
The convenient gesture control system for disabled people can intelligently recognize user gestures and control the operation of electrical equipment in the home, bringing great convenience to users.
It should be understood that the embodiments described herein are merely illustrative of the principles of embodiments of the invention; other variations are possible within the scope of the invention; thus, by way of example, and not limitation, alternative configurations of embodiments of the invention may be considered consistent with the teachings of the present invention; accordingly, the embodiments of the invention are not limited to the embodiments explicitly described and depicted.
Claims (5)
1. The gesture convenient control system for the disabled is characterized by comprising an image acquisition module, a gesture recognition module, a gesture control module and an instruction setting module which are connected with each other through wireless signals;
the output end of the image acquisition module is connected with the input end of the gesture recognition module, the output end of the gesture recognition module is connected with the input end of the gesture control module, and the output end of the gesture control module is connected with the output end of the instruction setting module through the local area network.
2. The handicapped-oriented gesture convenience control system according to claim 1, wherein the image acquisition module acquires gesture image information of the handicapped by using depth information of a Kinect sensor.
3. The system as claimed in claim 1, wherein the gesture recognition module is configured to recognize the human gesture in the acquired gesture image information; the specific operation steps are as follows:
step (1): preprocessing the acquired gesture image, converting the image from the RGB color space to the YCbCr color space, and performing skin-color segmentation on the image by a global fixed-threshold binarization method to obtain a binarized image;
step (2): performing hand-region segmentation on the obtained binarized image by adopting an RCE neural network;
step (3): designing a convolutional neural network model combined with secondary transfer learning to perform gesture recognition according to the obtained hand region.
4. The handicapped-oriented gesture convenient control system according to claim 1, wherein the gesture control module is arranged in household electrical equipment and controls the household electrical equipment by acquiring information of the household electrical equipment.
5. The gesture convenient control system for the disabled as claimed in claim 1, wherein the instruction setting module is configured to establish a corresponding instruction relationship between the person gesture in the acquired gesture image information and the household electrical device, and send the instruction to the gesture control module through the local area network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011101541.5A CN112232192A (en) | 2020-10-15 | 2020-10-15 | Gesture convenient control system for disabled people |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112232192A true CN112232192A (en) | 2021-01-15 |
Family
ID=74113154
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011101541.5A Pending CN112232192A (en) | 2020-10-15 | 2020-10-15 | Gesture convenient control system for disabled people |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112232192A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107390573A (en) * | 2017-06-28 | 2017-11-24 | 长安大学 | Intelligent wheelchair system and control method based on gesture control |
CN208689542U (en) * | 2018-03-09 | 2019-04-02 | 南京邮电大学 | Gesture recognition control system towards Intelligent household scene |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210115 |