CN112329566A - Visual perception system for accurately perceiving head movements of motor vehicle driver - Google Patents
- Publication number
- CN112329566A (application number CN202011157320.XA)
- Authority
- CN
- China
- Prior art keywords
- head
- unit
- head posture
- module
- data set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Biophysics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Evolutionary Biology (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of the present application provide a visual perception system for accurately perceiving the head movements of a motor vehicle driver, used to accurately recognize the driver's head movements, analyze and warn of dangerous driving behavior, and improve driving safety. The system comprises: a head pose data set labeling unit, a head pose algorithm unit, a head pose estimation unit, and a continuous-frame head action determination unit. The head pose data set labeling unit performs head pose classification labeling on sample data to construct a head pose estimation training data set, the sample data comprising at least one group of driver head visual image data; the head pose algorithm unit learns the head pose estimation training data set with a preset neural network to construct a head pose estimation model; the head pose estimation unit predicts the driver's head pose using the head pose estimation model; and the continuous-frame head action determination unit determines and outputs the head action type.
Description
Technical Field
The embodiments of the present application relate to the field of data processing, and in particular to a visual perception system for accurately perceiving the head movements of a motor vehicle driver.
Background
In recent years, with social progress and rising living standards, the numbers of motor vehicles and drivers in China have grown explosively, posing serious challenges to road traffic flow and road traffic safety. According to statistics, more than 90% of road traffic safety accidents and road traffic congestion are caused by drivers' bad driving behavior.
The prior art can recognize a driver's head movements during driving to some extent, but it relies on computing Euler angle values from traditional facial key-point detection (eyes, nose, mouth corners, etc.) and a camera calibration algorithm, and then judging the head movement from those Euler angles. Head movements recognized this way have low accuracy, and in low-resolution settings the head movement may not be determinable at all.
Disclosure of Invention
The embodiments of the present application provide a visual perception system for accurately perceiving the head movements of a motor vehicle driver, used to accurately recognize the driver's head movements, analyze and warn of dangerous driving behavior, and improve driving safety.
The embodiments of the present application provide a visual perception system for accurately perceiving the head movements of a motor vehicle driver, comprising: a head pose data set labeling unit, a head pose algorithm unit, a head pose estimation unit, and a continuous-frame head action determination unit;
the head pose data set labeling unit is configured to perform head pose classification labeling on sample data to construct a head pose estimation training data set, the sample data comprising at least one group of driver head visual image data;
the head pose algorithm unit is configured to learn the head pose estimation training data set with a preset neural network and construct a head pose estimation model;
the head pose estimation unit is configured to predict the driver's head pose using the head pose estimation model;
the continuous-frame head action determination unit is configured to determine and output the head action type.
Optionally, the head pose data set labeling unit includes: a visual image data acquisition module, a first face detection module, a face angle classification and labeling module, and a head pose data set output module.
Optionally, the face angle classification and labeling module is configured to perform angle classification and labeling on the sample data in the horizontal and vertical directions, respectively.
Optionally, the head pose estimation unit includes: a visual perception module, a preprocessing module, a second face detection module, and a head pose estimation module.
Optionally, the visual perception module is configured to collect visual image data of a driver during driving by using a single-channel visual perception device.
Optionally, the second face detection module is configured to detect the face region with the face detection algorithm SeetaFace2 and input the cropped face image into the head pose estimation module for prediction.
Optionally, the preset neural network consists of 6 convolutional layers, 3 max-pooling layers, and 2 fully connected layers, uses cross entropy as the loss function, and uses the Adam algorithm as the gradient descent optimizer.
Optionally, the fully connected layers of the preset neural network use the Leaky ReLU function as the activation function, and the last fully connected layer uses a softmax function to classify the image in the horizontal and vertical directions.
Optionally, the head pose algorithm unit is further configured to augment the sample data, the augmentation including at least one image augmentation method such as rotation, scaling, cropping, shifting, and Gaussian noise.
Optionally, the head action types include at least one of: observing the left rear-view mirror, observing the right rear-view mirror, observing both rear-view mirrors, and looking down to shift gears.
According to the technical scheme, the embodiment of the application has the following advantages:
the invention provides a visual perception system for the head action of a driver, which deeply learns the visual image data of the head of the driver through a head attitude data set labeling unit and a head attitude algorithm unit, establishes a head attitude estimation model, collects the visual image data of the driver in the driving process through the head attitude estimation unit, classifies the head attitude of the driver through the head attitude estimation model, and judges the head action type through a continuous frame head action judgment unit by combining continuous frames. The system can be suitable for scenes with low resolution, partial facial feature loss and the like, and is high in accuracy and strong in robustness.
With this system, the driver's head movements can be accurately recognized during driving, and dangerous driving behavior can be analyzed and warned of, improving driving safety. The system can also evaluate a specific driver's driving ability, safety awareness, and safe-driving habits over the course of training, and feed the evaluation back to trainees and coaches in real time, markedly improving training efficiency and quality.
Drawings
FIG. 1 is a schematic structural diagram of an embodiment of a visual perception system for accurately perceiving head movements of a driver of a motor vehicle according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of another embodiment of the visual perception system for accurately perceiving the head movements of a motor vehicle driver according to an embodiment of the present application.
Detailed Description
The embodiments of the present application provide a visual perception system for accurately perceiving the head movements of a motor vehicle driver, used to accurately recognize the driver's head movements, analyze and warn of dangerous driving behavior, and improve driving safety.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to FIG. 1, an embodiment of the visual perception system for accurately perceiving the head movements of a motor vehicle driver according to the present application comprises: a head pose data set labeling unit 101, a head pose algorithm unit 102, a head pose estimation unit 103, and a continuous-frame head action determination unit 104;
the head pose data set labeling unit 101 is configured to perform head pose classification labeling on sample data to construct a head pose estimation training data set, the sample data comprising at least one group of driver head visual image data;
the head pose algorithm unit 102 is configured to learn the head pose estimation training data set with a preset neural network and construct a head pose estimation model;
the head pose estimation unit 103 is configured to predict the driver's head pose using the head pose estimation model;
the continuous-frame head action determination unit 104 is configured to determine and output the head action type.
It should be noted that, in practice, the head pose data set labeling unit 101 may construct the head pose estimation training data set by acquiring facial images at different angles (-90° to 90°, with a step of 30°) in the horizontal and vertical directions while multiple drivers of different ages and sexes are driving, treating each head pose as one class, which gives 7 classes in each of the horizontal and vertical directions.
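As a sketch, the ±90° labelling scheme above can be discretized into 7 classes per direction as follows (the exact bin boundaries are an assumption; the patent only states the range and the 30° step):

```python
def angle_to_class(angle_deg, step=30, max_angle=90):
    """Map a head-pose angle in [-90, 90] degrees to one of 7 class indices.

    Class 0 corresponds to -90 deg and class 6 to +90 deg; each class
    covers roughly a +/-15 deg band around its nominal angle (an
    assumption -- the patent states only the range and the 30 deg step).
    """
    if not -max_angle <= angle_deg <= max_angle:
        raise ValueError("angle outside the labelled range")
    # Round to the nearest 30-degree step, then shift to a 0-based index.
    return round((angle_deg + max_angle) / step)

# One label per axis: horizontal (yaw) and vertical (pitch).
labels = [angle_to_class(a) for a in (-90, -35, 0, 28, 90)]
```

The same function would be applied independently to the horizontal and vertical angles, yielding one class label per direction for each sample.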
Before the head pose estimation training set is fed to the neural network, the data set needs appropriate augmentation, which increases the number of distinct samples and avoids overfitting during training. In practice, the number of samples can be increased by rotation, scaling, cropping, shifting, Gaussian noise, and other image augmentation methods; each augmentation preserves the meaning of the original image, and augmentation makes the resulting CNN model more robust to slight variations in real environments.
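Two of the listed augmentations — shifting and Gaussian noise — can be sketched in plain Python as follows (in practice a library such as OpenCV or PIL would be used; the zero-padding and pixel-clipping choices here are assumptions):

```python
import random

def augment(image, shift=0, noise_sigma=0.0, seed=None):
    """Return a shifted and/or noise-corrupted copy of a grayscale image.

    `image` is a list of pixel rows. This sketches two of the patent's
    augmentations (shifting and Gaussian noise); rotation, scaling and
    cropping would normally come from an image library.
    """
    rng = random.Random(seed)
    out = []
    for row in image:
        # Horizontal shift to the right with zero padding.
        shifted = [0] * shift + row[:len(row) - shift] if shift > 0 else row
        # Additive Gaussian noise, clipped to the valid pixel range.
        out.append([max(0, min(255, px + rng.gauss(0, noise_sigma)))
                    for px in shifted])
    return out

img = [[10, 20, 30], [40, 50, 60]]
shifted = augment(img, shift=1)  # the class label is unchanged by design
```

Because the augmented image keeps the same head-pose class label, the transforms must stay small enough not to change which angle bin the face falls into.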
It should be noted that the neural network model defined in the present invention consists of 6 convolutional layers, 3 max-pooling layers, and 2 fully connected layers, with cross entropy as the loss function and Adam as the gradient descent optimizer. The convolutional layers use ReLU as the activation function, a pooling layer is added after every two convolutional layers to reduce the number of trainable parameters, and Dropout regularization is used to prevent overfitting. The fully connected layers use Leaky ReLU as the activation function, and the last fully connected layer uses softmax to classify the image in the horizontal and vertical directions. In practice, to train the horizontal and vertical classifications simultaneously, the final loss function of the network is a weighted sum of the cross-entropy losses of the two classifications. Training finally yields the deep-learning-based head pose estimation model used to predict the horizontal and vertical angles of the driver's head movements.
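The weighted-sum loss described above can be illustrated in plain Python (the equal 0.5/0.5 weights are an assumption; the patent does not specify the weighting):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(logits, true_class):
    """Cross-entropy loss for a single sample against its true class."""
    return -math.log(softmax(logits)[true_class])

def combined_loss(h_logits, v_logits, h_true, v_true, w_h=0.5, w_v=0.5):
    """Weighted sum of the horizontal and vertical cross-entropy losses,
    as the description states; the 0.5/0.5 weights are an assumption."""
    return (w_h * cross_entropy(h_logits, h_true)
            + w_v * cross_entropy(v_logits, v_true))

# 7 logits per direction -- one per 30-degree class. Uniform logits give
# a loss of log(7) per head.
loss = combined_loss([0.0] * 7, [0.0] * 7, h_true=3, v_true=3)
```

In a framework such as PyTorch or TensorFlow this corresponds to summing two cross-entropy loss terms, one per output head, before back-propagation.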
The head pose estimation unit 103 predicts the driver's head pose with the trained head pose estimation model, obtaining the angle classification of the driver's face in the horizontal and vertical directions, and the continuous-frame head action determination unit 104 then determines the head action type. In practice, the system may be configured to output a final head action type when, for n consecutive frames, the face orientation angle stays within the angle range of a particular head action in the horizontal or vertical direction.
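The consecutive-frame rule can be sketched as a simple run-length check (the window length n and the return convention are illustrative assumptions):

```python
def head_action(frame_classes, target_class, n=5):
    """Confirm an action when the predicted class holds for n consecutive frames.

    `frame_classes` is the per-frame class index (horizontal or vertical)
    emitted by the pose model; `n` is an assumed configurable threshold --
    the patent leaves its value open.
    """
    run = 0
    for i, cls in enumerate(frame_classes):
        run = run + 1 if cls == target_class else 0
        if run >= n:
            return i  # index of the frame where the action is confirmed
    return None  # the pose never held long enough

# Class 6 (strong yaw) held for 5 frames -> an action such as checking a mirror.
frames = [3, 3, 6, 6, 6, 6, 6, 3]
confirmed_at = head_action(frames, target_class=6, n=5)
```

Requiring a run of frames rather than a single prediction suppresses one-frame classification glitches, which matters in the low-resolution settings the system targets.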
In this embodiment, the head pose data set labeling unit 101 and the head pose algorithm unit 102 apply deep learning to visual image data of the driver's head to build a head pose estimation model; the head pose estimation unit 103 collects visual image data of the driver while driving and classifies the driver's head pose with the model; and the continuous-frame head action determination unit 104 determines the head action type from consecutive frames. The system works in scenarios with low resolution, partial loss of facial features, and the like, with high accuracy and strong robustness. With this system, the driver's head movements can be accurately recognized during driving, and dangerous driving behavior can be analyzed and warned of, improving driving safety.
Referring to FIG. 2, the configuration of the visual perception system for accurately perceiving the head movements of a motor vehicle driver is described in detail below. Another embodiment of the visual perception system according to the present application comprises:
a head pose data set labeling unit 201, a head pose algorithm unit 202, a head pose estimation unit 203, and a continuous-frame head action determination unit 204;
the head pose data set labeling unit 201 is configured to perform head pose classification labeling on sample data to construct a head pose estimation training data set, the sample data comprising at least one group of driver head visual image data;
the head pose algorithm unit 202 is configured to learn the head pose estimation training data set with a preset neural network and construct a head pose estimation model;
the head pose estimation unit 203 is configured to predict the driver's head pose using the head pose estimation model;
the continuous-frame head action determination unit 204 is configured to determine and output the head action type.
In this embodiment, the head pose data set labeling unit 201 specifically includes: a visual image data acquisition module 2011, a first face detection module 2012, a face angle classification and labeling module 2013, and a head pose data set output module 2014.
The face angle classification and labeling module 2013 is specifically configured to perform angle classification and labeling on the sample data in the horizontal and vertical directions, respectively.
In this embodiment, the head pose estimation unit 203 specifically includes: a visual perception module 2031, a pre-processing module 2032, a second face detection module 2033, and a head pose estimation module 2034.
The visual perception module 2031 is configured to collect visual image data of a driver during driving by using a single-channel visual perception device.
The second face detection module 2033 is configured to detect the face region with the face detection algorithm SeetaFace2 and input the cropped face image into the head pose estimation module for prediction.
In this embodiment, the head pose estimation unit 203 acquires real-time video data through an in-vehicle visual perception device, i.e., the visual perception module 2031, installed in front of the driver's seat and adjusted so that it can capture the driver's head. The second face detection module 2033 then detects the face region with the face detection algorithm SeetaFace2 and inputs the cropped face image into the head pose estimation model for prediction, obtaining the angle classification of the driver's face in the horizontal and vertical directions. The continuous-frame head action determination unit 204 determines the head action type from the number of consecutive frames of the head pose Euler angles output by the neural network. The head action types include observing the left rear-view mirror, observing the right rear-view mirror, observing both rear-view mirrors, and looking down to shift gears, among other head action behaviors.
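The final step — mapping the horizontal/vertical angle classes to the listed head-action types — could be sketched as a lookup table (the specific class-to-action assignments here are hypothetical; the patent does not publish them):

```python
# A hypothetical lookup from (horizontal, vertical) class pairs to the
# head-action types the patent lists. Class 3 is the straight-ahead bin
# in the 7-class, 30-degree scheme; the assignments are illustrative only.
ACTIONS = {
    (0, 3): "observe left rear-view mirror",   # strong left yaw, level pitch
    (6, 3): "observe right rear-view mirror",  # strong right yaw, level pitch
    (3, 0): "look down to shift gears",        # level yaw, strong downward pitch
}

def classify_action(h_class, v_class):
    """Return the action for a confirmed (horizontal, vertical) class pair."""
    return ACTIONS.get((h_class, v_class), "no action")

action = classify_action(0, 3)
```

In the full system this lookup would run only after the consecutive-frame check confirms that the class pair has been stable for the required number of frames.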
By comprehensively detecting the driver's head movements during driving and outputting head action signals, the system can analyze the behavior data of an individual driver as well as group driving behavior, and combine the results to evaluate the driver's driving ability, safety awareness, and safe-driving habits over the course of training. Feeding the evaluation back to trainees and coaches in real time markedly improves training efficiency and quality. The scheme also helps standardize and correct drivers' operating habits and driving behavior, helps drivers develop good safe-driving habits, is of great significance for improving road traffic safety across society, and yields notable social benefits.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part that contributes over the prior art, or all or part of the technical solution, may be embodied in a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.
Claims (10)
1. A visual perception system for accurately perceiving the head movements of a motor vehicle driver, comprising: a head pose data set labeling unit, a head pose algorithm unit, a head pose estimation unit, and a continuous-frame head action determination unit;
the head pose data set labeling unit is configured to perform head pose classification labeling on sample data to construct a head pose estimation training data set, the sample data comprising at least one group of driver head visual image data;
the head pose algorithm unit is configured to learn the head pose estimation training data set with a preset neural network and construct a head pose estimation model;
the head pose estimation unit is configured to predict the driver's head pose using the head pose estimation model;
the continuous-frame head action determination unit is configured to determine and output the head action type.
2. The system of claim 1, wherein the head pose data set labeling unit comprises: a visual image data acquisition module, a first face detection module, a face angle classification and labeling module, and a head pose data set output module.
3. The system of claim 2, wherein the face angle classification labeling module is configured to perform angle classification labeling on the sample data in horizontal and vertical directions, respectively.
4. The system of claim 1, wherein the head pose estimation unit comprises: a visual perception module, a preprocessing module, a second face detection module, and a head pose estimation module.
5. The system of claim 4, wherein the visual perception module is configured to collect visual image data of the driver during driving using a single-channel visual perception device.
6. The system of claim 4, wherein the second face detection module is configured to detect the face region with the face detection algorithm SeetaFace2 and input the cropped face image into the head pose estimation module for prediction.
7. The system of claim 1, wherein the preset neural network consists of 6 convolutional layers, 3 max-pooling layers, and 2 fully connected layers, with cross entropy as the loss function and the Adam algorithm as the gradient descent optimizer.
8. The system of claim 7, wherein the fully connected layers of the pre-defined neural network use a Leaky ReLU function as an activation function, and the last fully connected layer uses a softmax function to classify the images in horizontal and vertical directions.
9. The system of any one of claims 1 to 6, wherein the head pose algorithm unit is further configured to augment the sample data, the augmentation including at least one image augmentation method such as rotation, scaling, cropping, shifting, and Gaussian noise.
10. The system of any one of claims 1 to 6, wherein the head action types include at least one of: observing the left rear-view mirror, observing the right rear-view mirror, observing both rear-view mirrors, and looking down to shift gears.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011157320.XA CN112329566A (en) | 2020-10-26 | 2020-10-26 | Visual perception system for accurately perceiving head movements of motor vehicle driver |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011157320.XA CN112329566A (en) | 2020-10-26 | 2020-10-26 | Visual perception system for accurately perceiving head movements of motor vehicle driver |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112329566A true CN112329566A (en) | 2021-02-05 |
Family
ID=74310747
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011157320.XA Pending CN112329566A (en) | 2020-10-26 | 2020-10-26 | Visual perception system for accurately perceiving head movements of motor vehicle driver |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112329566A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190026538A1 (en) * | 2017-07-21 | 2019-01-24 | Altumview Systems Inc. | Joint face-detection and head-pose-angle-estimation using small-scale convolutional neural network (cnn) modules for embedded systems |
CN109426757A (en) * | 2017-08-18 | 2019-03-05 | 安徽三联交通应用技术股份有限公司 | Driver's head pose monitoring method, system, medium and equipment based on deep learning |
CN109875568A (en) * | 2019-03-08 | 2019-06-14 | 北京联合大学 | A kind of head pose detection method for fatigue driving detection |
CN110210456A (en) * | 2019-06-19 | 2019-09-06 | 贵州理工学院 | A kind of head pose estimation method based on 3D convolutional neural networks |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113239861A (en) * | 2021-05-28 | 2021-08-10 | 多伦科技股份有限公司 | Method for determining head movement of driver, storage medium, and electronic device |
WO2022247527A1 (en) * | 2021-05-28 | 2022-12-01 | 多伦科技股份有限公司 | Method for determining head motion of driver, storage medium, and electronic apparatus |
CN113239861B (en) * | 2021-05-28 | 2024-05-28 | 多伦科技股份有限公司 | Method for determining head motion of driver, storage medium, and electronic device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111428699B (en) | Driving fatigue detection method and system combining pseudo-3D convolutional neural network and attention mechanism | |
US9881221B2 (en) | Method and system for estimating gaze direction of vehicle drivers | |
CN111860274B (en) | Traffic police command gesture recognition method based on head orientation and upper half skeleton characteristics | |
CN108596087B (en) | Driving fatigue degree detection regression model based on double-network result | |
CN111626272A (en) | Driver fatigue monitoring system based on deep learning | |
Kumtepe et al. | Driver aggressiveness detection via multisensory data fusion | |
CN112084928A (en) | Road traffic accident detection method based on visual attention mechanism and ConvLSTM network | |
CN115346197A (en) | Driver distraction behavior identification method based on bidirectional video stream | |
CN115937830A (en) | Special vehicle-oriented driver fatigue detection method | |
CN108108651B (en) | Method and system for detecting driver non-attentive driving based on video face analysis | |
CN115690750A (en) | Driver distraction detection method and device | |
CN112052829B (en) | Pilot behavior monitoring method based on deep learning | |
CN114299473A (en) | Driver behavior identification method based on multi-source information fusion | |
CN112329566A (en) | Visual perception system for accurately perceiving head movements of motor vehicle driver | |
CN111626197B (en) | Recognition method based on human behavior recognition network model | |
CN117292346A (en) | Vehicle running risk early warning method for driver and vehicle state integrated sensing | |
CN117456516A (en) | Driver fatigue driving state detection method and device | |
CN117218680A (en) | Scenic spot abnormity monitoring data confirmation method and system | |
CN112926364A (en) | Head posture recognition method and system, automobile data recorder and intelligent cabin | |
CN113361452B (en) | Driver fatigue driving real-time detection method and system based on deep learning | |
Zhou et al. | Development of a camera-based driver state monitoring system for cost-effective embedded solution | |
CN113408389A (en) | Method for intelligently recognizing drowsiness action of driver | |
Hu et al. | Comprehensive driver state recognition based on deep learning and PERCLOS criterion | |
Wang et al. | Research on driver fatigue state detection method based on deep learning | |
QU et al. | Multi-Attention Fusion Drowsy Driving Detection Model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20210205 |