CN112381829A - Autonomous learning navigation method based on visual attention mechanism - Google Patents

Autonomous learning navigation method based on visual attention mechanism

Info

Publication number
CN112381829A
Authority
CN
China
Prior art keywords: response, area, input, navigation, region
Legal status: Pending (the status listed is an assumption, not a legal conclusion)
Application number
CN202011266136.9A
Other languages
Chinese (zh)
Inventor
罗大鹏 (Luo Dapeng)
郭鹏 (Guo Peng)
杜国庆 (Du Guoqing)
徐慧敏 (Xu Huimin)
何松泽 (He Songze)
牟泉政 (Mou Quanzheng)
魏龙生 (Wei Longsheng)
高常鑫 (Gao Changxin)
王勇 (Wang Yong)
Current Assignee
China University of Geosciences
Original Assignee
China University of Geosciences
Priority date: 2020-11-13
Filing date: 2020-11-13
Publication date: 2021-02-19
Application filed by China University of Geosciences
Priority to CN202011266136.9A
Publication of CN112381829A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G01C21/3446 - Details of route searching algorithms, e.g. Dijkstra, A*, arc-flags, using precalculated routes
    • G06T5/70
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence

Abstract

The invention provides an autonomous learning navigation method based on a visual attention mechanism. The method uses a developmental neural network as its core algorithm and adds a visual attention mechanism inspired by the human visual system; receiving input from an image sensor, it needs navigation guidance information only at the first two moments and can then continuously output correct navigation information.

Description

Autonomous learning navigation method based on visual attention mechanism
Technical Field
The invention relates to the technical fields of computer vision and information technology, and in particular to an autonomous learning navigation method based on a visual attention mechanism.
Background
Vision-assisted navigation is one of the research hotspots in the field of intelligent vehicle navigation. Such a system generates reference information for driving behavior by acquiring and analyzing image information of the scene around the vehicle, thereby eliminating dangerous factors during driving. Vision-based navigation assistance has shown powerful performance in specific intelligent-driving applications such as Lane Departure Warning (LDW), Forward Collision Warning (FCW), Lane Keeping Assistance (LKA), and panoramic parking (SVP). Compared with traditional multi-modal navigation technologies such as ultrasonic radar, lidar, and millimeter-wave radar, the data collected by an image sensor can be greatly compressed through sparsification, placing a low demand on the computing resources of the on-board computer, which makes the approach more economical and efficient.
In the traditional working mode, visual navigation technology generally adopts a deep convolutional network to perform semantic segmentation on the acquired image, separating lane from non-lane pixels, and then corrects the driving of the vehicle through a control algorithm. However, a large amount of labeled data is needed during training, and the collected data can hardly cover all driving environments, so the generalization ability of the trained model is poor; a model without self-learning ability degrades the system's performance in unfamiliar environments. Meanwhile, because of the redundancy of background information in the image, the noise and interference it brings also greatly reduce the training speed and robustness of the model. The invention provides an autonomous learning navigation method based on a visual attention mechanism to improve the anti-interference ability of a visual navigation system against background noise and its generalization ability to unfamiliar environments.
To overcome the traditional method's insufficient generalization to unknown environments and insufficient resistance to background noise, the invention provides an autonomous learning navigation method based on a visual attention mechanism; the system can learn continuously and autonomously while requiring guidance information only at the first two moments. In addition, by adding a visual attention mechanism, the model gains the ability to attend to the key regions of an image, effectively overcoming the traditional method's sensitivity to noise in complex backgrounds, low learning efficiency, and poor learning effect, and greatly improving visual navigation performance.
Disclosure of Invention
In view of the above, the present invention provides an autonomous learning navigation method based on a visual attention mechanism, which comprises the following steps:
S1, acquiring the front-end input and the back-end input of the visual navigation model, wherein the front-end input is continuously supplied by the image sensor, and the back-end input is supplied externally at the first two moments and by the model's output at the previous moment at all subsequent moments;
S2, processing the front-end input with the attention mechanism, retaining the image of the key region and suppressing the images of the remaining regions;
S3, computing the inner product of the attention-processed front-end input with the bottom-up weights to obtain the bottom-up partial pre-response, computing the inner product of the back-end input with the top-down weights to obtain the top-down partial pre-response, superposing the two partial pre-responses to obtain the total pre-response, and letting the pre-responses compete to obtain the Y-region response;
S4, computing the inner product of the Y-region response with the bottom-up weights of the Z region to obtain the Z-region response, and mapping the Z-region response to the effect space to obtain the final navigation output;
S5, the visual navigation model self-learns and updates, and the next round of the cycle begins; the cycle terminates when front-end input is no longer received.
The technical scheme provided by the invention has the following beneficial effects: the method uses only the guidance information of the first two moments as the supervision for model training and a developmental neural network as the core processing algorithm, giving the model autonomous learning ability; and a visual attention mechanism is added to the model, with top-down attention information provided as supervision, improving the model's robustness to complex background interference.
Drawings
FIG. 1 is a flow chart of the autonomous learning navigation method based on a visual attention mechanism of the present invention;
FIG. 2 is a timing diagram of the autonomous learning navigation method based on a visual attention mechanism of the present invention;
FIG. 3 is a schematic diagram of the core navigation algorithm model of the present invention, a developmental neural network;
FIG. 4 is a schematic view of the visual attention mechanism of the present invention;
FIG. 5 is a schematic diagram of the visual attention generation mechanism of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be further described with reference to the accompanying drawings.
Referring to fig. 1, the present invention provides an autonomous learning navigation method based on a visual attention mechanism; referring to fig. 2, in time order the method comprises the following steps:
S1, at the first time t1, when the navigation method starts to operate, executing the following steps:
S11, acquiring an image from the image sensor, preprocessing it into a single-channel grey-scale map of 38×38 pixels, expanding the grey-scale map into one-dimensional data of size 1×1444, normalizing it, and inputting the one-dimensional data into the front end of the core algorithm model, namely the X region; this input is denoted x_r;
S12, inputting the guidance information to the back end of the core algorithm model, namely the Z region, where the guidance information has the following format:

Tag item   Data format   Description                         Example tag
z1         1*6           Navigation action guidance          [0,1,0,0,0,0]
z2         1*4           GPS guidance                        [0,1,0,0]
z3         1*144         Attention position                  [1,0,0,...,0,0]
z4         1*8           Obstacle object                     [1,0,0,0,0,0,0,0]
z5         1*2           Scale information (global, local)   [1,0]

Navigation action guidance is required at the first two moments, the attention position is required during the attention-generation phase, and the remaining items are optional.
S13, based on the different receptive fields of the Y-region neurons, performing an attention-region masking operation on the front-end input x_r: the attention region is only 15×15 pixels and slides over the picture; picture information inside the attention region is retained, and information outside it is suppressed;
S14, initializing the core algorithm model.
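By way of illustration, a minimal Python sketch of the preprocessing of S11 and the masking of S13 follows; the function names and the NumPy-based implementation are assumptions of ours, and in practice the position of the 15×15 attention window would come from the z3 attention-position guidance.

```python
import numpy as np

def preprocess(gray_38x38: np.ndarray) -> np.ndarray:
    """S11: flatten a 38x38 single-channel grey-scale image to 1x1444 and normalize."""
    x_r = gray_38x38.astype(np.float64).reshape(-1)  # 38*38 = 1444 values
    norm = np.linalg.norm(x_r)
    return x_r / norm if norm > 0 else x_r

def apply_attention_mask(x_r: np.ndarray, top_left, size: int = 15) -> np.ndarray:
    """S13: keep the 15x15 attention window, suppress (zero out) everything outside it."""
    mask = np.zeros((38, 38))
    r, c = top_left                      # window position, assumed supplied via z3
    mask[r:r + size, c:c + size] = 1.0   # the window "slides" by varying (r, c)
    return x_r * mask.reshape(-1)        # element-wise mask on the flattened image
```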
S2, at the second time t2, executing the following steps:
S21, executing steps S11-S13 to obtain the front-end and back-end input information at the second moment;
S22, calculating the response at the second moment using the two-end input from S21;
S23, self-learning and updating the model using the response obtained in S22.
S3, at the third time t3, executing the following steps:
S31, executing step S11 to obtain the front-end input at the third time t3;
S32, mapping the response obtained in S22 to the effect space to obtain the back-end input at the third moment, and simultaneously producing the navigation output of the second time t2;
S33, calculating the response at the third moment using the inputs obtained in S31 and S32;
S34, self-learning and updating the model using the response obtained in S33.
S4, at every subsequent moment of operation, repeating step S3: the model produces the navigation output while self-learning and updating.
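To make the timing concrete, here is a hedged Python sketch of this scheme, reusing the preprocess and apply_attention_mask helpers sketched above: external guidance z_r is supplied only at the first two moments, after which the back-end input is the model's own previous output. The model methods (attended_region, respond, learn, act) are illustrative names, not the patent's API.

```python
def run(model, sensor_frames, guidance_t1, guidance_t2):
    """Timing loop of S1-S4: two externally guided steps, then self-driven."""
    z_r = guidance_t1
    for t, frame in enumerate(sensor_frames, start=1):
        x_r = apply_attention_mask(preprocess(frame), model.attended_region())
        if t == 2:
            z_r = guidance_t2              # second and last external guidance
        r_y = model.respond(x_r, z_r)      # Y-region response
        model.learn(x_r, z_r, r_y)         # Hebbian self-learning update
        action, z_r = model.act(r_y)       # Z-region output, fed back as the next z_r
        yield action                       # navigation output for time t
```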
For the model's response computation and self-learning update, refer to fig. 3:
the core algorithm model of the method is a developmental neural network, and the developmental neural network is a bionic, shallow and self-organizing network model. It is inspired by the hebran theory in neuroscience, i.e., the principle of synaptic plasticity. The developmental neural network has three areas of X, Y and Z, wherein the X area is an accepting area and is used for acquiring input excitation from an external environment; the Y area is a hidden layer and is used for learning knowledge and rules; the Z area is an effect area, and can output an effect to the outside, and besides, supervisory information can be input from the Z area to the Y area, and the Z area can also be used as an input area at this time. The X area is in one-way full connection with the Y area, and the Y area is in two-way full connection with the Z area. The learning process is as follows:
s1, initializing a model, wherein the model comprises an initialization response, a weight, an attention mask and neuron activation information;
s2, calculating a response value of the Y area;
s21, calculating the bottom-up response r of the Y areab
Performing inner product on the preprocessed front-end input and the bottom-up weight;
Figure BDA0002776180460000051
wherein r isbFor the bottom-up response of the Y region, wbAre the weights from the bottom up and are,
Figure BDA0002776180460000052
is an inner product operation, xrIs an input image;
s22, calculating the top-down response r of the Y areat
Figure BDA0002776180460000053
Wherein r istFor top-down response of the Y region, zrFor back-end input, i.e. supervisory information, wtAre bottom-up weights;
s23, calculating the pre-response r of the Y areap
rp=k*rt+(1-k)rb
Where k is the impact factor of the top-down response and (1-k) is the impact factor of the bottom-up response, the sum of which is 1.
S24, Top-k competition mechanism
In order to simulate the neuron side inhibition effect and reduce the neuron renewal rate, a Top-k competition mechanism is adopted to ensure that r ispMaximum KThe neuron is an activated neuron, the response of the activated neuron is set to be 1, and the responses of the rest neurons are set to be 0;
ry(argmax(rp))←1
wherein r isyResponding to the competitive Y area;
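A minimal Python sketch of S21-S24, assuming the weights are stored as matrices with one row per Y-region neuron and that weights and inputs are already normalized so the inner products are comparable; k_top denotes the K of the Top-k rule.

```python
import numpy as np

def y_region_response(x_r, z_r, w_b, w_t, k=0.5, k_top=1):
    """S21-S24: pre-responses, superposition, and Top-k competition."""
    r_b = w_b @ x_r                      # S21: bottom-up response, one value per neuron
    r_t = w_t @ z_r                      # S22: top-down response
    r_p = k * r_t + (1 - k) * r_b        # S23: total pre-response
    r_y = np.zeros_like(r_p)
    winners = np.argsort(r_p)[-k_top:]   # S24: lateral inhibition via Top-k
    r_y[winners] = 1.0                   # winners fire at 1, all others at 0
    return r_y
```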
S25, weight update of the activated neurons:
V_j ← ω1(g_j)*V_j + ω2(g_j)*p

where V_j is the weight vector of activated Y-region neuron j, comprising (w_t, w_b), with w_b the bottom-up weights and w_t the top-down weights; g_j is the age of the j-th neuron (the more activations, the greater the age); and p is the input vector, comprising the front-end and back-end inputs. ω1 and ω2 are learning factors that control the rate of synaptic weight update: the larger ω2/ω1 is, the more V_j reflects newly learned knowledge. ω1 and ω2 are derived from the forgetting-average algorithm:

ω2(g_i) = (1 + u(g_i)) / g_i
ω1(g_i) = 1 - ω2(g_i)

where u(g_i) is the value of the forgetting equation at activation age g_i of the i-th neuron. The forgetting equation u(g) is defined as:

u(g) = 0                        if g ≤ g1
u(g) = c*(g - g1)/(g2 - g1)     if g1 < g ≤ g2
u(g) = c + (g - g2)/λ           if g > g2

where g1 and g2 are forgetting age thresholds, typically set to g1 = 20 and g2 = 200, and c and λ are hyper-parameters controlling the learning speed, typically set to c = 2 and λ = 2000. After each activation, the activated neuron's age is updated: g_i ← g_i + 1.
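The update can be sketched as follows; the piecewise u(g) above and the closed form ω2(g) = (1 + u(g))/g are the standard amnesic-average formulation from the developmental-network literature, which the garbled equation images in the source appear to denote, used here with the typical constants quoted in the text.

```python
import numpy as np

def amnesic_u(g, g1=20, g2=200, c=2.0, lam=2000.0):
    """Forgetting equation u(g) with the typical thresholds from the text."""
    if g <= g1:
        return 0.0
    if g <= g2:
        return c * (g - g1) / (g2 - g1)
    return c + (g - g2) / lam

def update_winner(V_j, g_j, p):
    """S25: forgetting-average update of the winning neuron's weight vector."""
    w2 = (1.0 + amnesic_u(g_j)) / g_j    # learning factor omega_2
    w1 = 1.0 - w2                        # retention factor omega_1
    V_j = w1 * V_j + w2 * np.asarray(p)  # p concatenates front-end and back-end inputs
    return V_j, g_j + 1                  # activation increments the neuron's age
```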
S3, calculating the inner product of the Y-area response with the bottom-up weights of the Z area to obtain the Z-area response, and mapping the Z-area response to the effect space to obtain the final navigation output, comprising the following steps:
S31, transmitting the Y-area response to the Z area and calculating the response of the Z area:
z_i = W_zb,i · r_y

where z_i is the response of the Z region for the i-th effect space, r_y is the total Y-region response after Top-k competition, and W_zb,i is the bottom-up weight vector of the i-th Z-region neuron;
S32, selecting an effector in the Z area; the Z-area effectors correspond to navigation actions and comprise 6 action states: forward, left turn, right turn, slight left turn, slight right turn, and stop;

e = argmax_i(z_i)

where argmax() returns the position of the maximum response; e = 1 indicates the navigation output is forward; e = 2, a left turn; e = 3, a right turn; e = 4, a slight left turn; e = 5, a slight right turn; and e = 6, stop.
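As a minimal sketch of S31-S32 (assuming W_zb is a matrix with one weight row per effector, and 0-based indexing where the text's e runs from 1 to 6):

```python
import numpy as np

ACTIONS = ["forward", "left turn", "right turn",
           "slight left turn", "slight right turn", "stop"]  # e = 1..6 in the text

def z_region_output(r_y, W_zb):
    """S31-S32: Z-region response and effector (action) selection."""
    z = W_zb @ r_y         # z_i: inner product of the Y response with Z weights
    e = int(np.argmax(z))  # winning effector (0-based here)
    return ACTIONS[e], z
```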
For the attention mechanism of this method, refer to fig. 4. Y-region neurons are connected only to the X-region neurons within their receptive fields, and each active Y-region neuron (i.e., a neuron activated through Top-K competition) possesses a different attention receptive field. The attention-generation mechanism is shown in fig. 5: when the key region of the input image coincides with the receptive field of the i-th Y-region neuron, the bottom-up response r_b of that neuron is the largest, so it has the greatest potential to win the Top-K competition (Top-1 in this example) and be activated, which in turn strengthens the connection between the neuron and its attention receptive field.
Referring to fig. 5, during the development and growth of the model, if the maximum Y-region response after an image is input is smaller than a set threshold, the model is insensitive to that type of input (including effect information, attention information, guidance information, etc.) and has not yet learned to attend to it; a Y-region neuron is then added, with its attention receptive field set to the key region of that type. After the same type of input has been received multiple times, the connections between the new Y-region neuron and the X-region neurons in its receptive field are strengthened. If the maximum Y-region response to the input image is larger than the threshold, the model has already learned that semantic expression. After training, the model attends to the key regions of all types of pictures, i.e., the model has acquired an attention mechanism.
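This growth rule can be sketched as follows; the threshold value and the list-based bookkeeping are assumptions made purely for illustration.

```python
import numpy as np

def maybe_grow_neuron(r_p, y_weights, y_fields, x_r, key_region, threshold=0.9):
    """Fig. 5: recruit a new Y neuron when no existing one responds strongly enough."""
    if len(r_p) > 0 and np.max(r_p) >= threshold:
        return False                  # this input type is already learned
    y_weights.append(x_r.copy())      # initialize the new neuron on the input itself
    y_fields.append(key_region)       # its attention receptive field = the key region
    return True                       # repeated inputs then strengthen it via S25
```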
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. An autonomous learning navigation method based on a visual attention mechanism, characterized by comprising the following steps:
S1, acquiring the front-end input and the back-end input of the visual navigation model, wherein the front-end input is continuously supplied by the image sensor, and the back-end input is supplied externally at the first two moments and by the model's output at the previous moment at all subsequent moments;
S2, processing the front-end input with the attention mechanism, retaining the image of the key region and suppressing the images of the remaining regions;
S3, computing the inner product of the attention-processed front-end input with the bottom-up weights to obtain the bottom-up partial pre-response, computing the inner product of the back-end input with the top-down weights to obtain the top-down partial pre-response, superposing the two partial pre-responses to obtain the total pre-response, and letting the pre-responses compete to obtain the Y-region response;
S4, computing the inner product of the Y-region response with the bottom-up weights of the Z region to obtain the Z-region response, and mapping the Z-region response to the effect space to obtain the final navigation output;
S5, the visual navigation model self-learns and updates, and the next round of the cycle begins; the cycle terminates when front-end input is no longer received.
2. The method according to claim 1, wherein acquiring the model inputs in S1 specifically comprises:
acquiring the front-end input means obtaining a navigation environment image from the image sensor, preprocessing the image into a single-channel grey-scale map of 38×38 pixels, expanding the grey-scale map into one-dimensional data of size 1×1444, normalizing it, and inputting the one-dimensional data into the front end of the developmental neural network, namely the X area, denoted x_r; the input image x_r is normalized as:

x_r ← normalization(x_r)

where x_r is the input image and normalization() is a normalization function;
acquiring the back-end input means inputting guidance information to the back end of the developmental neural network, namely the Z region, the guidance information comprising: navigation action guidance, GPS guidance, attention position, obstacle objects, and scale information.
3. The method according to claim 1, wherein the attention mechanism in S2, i.e., masking the input image over the receptive field, is as follows:

x_r ← x_r ⊙ Mask_b

where x_r is the input image, ⊙ is the element-wise (dot) product operation, and Mask_b is the bottom-up attention mask; the front-end input is processed by the attention mechanism, the image of the key region is retained, and the images of the remaining regions are suppressed.
4. The method according to claim 1, wherein the Y-region response in S3 is obtained as follows:
S31, taking the inner product of the preprocessed front-end input and the bottom-up weights:

r_b = w_b · x_r

where r_b is the bottom-up response of the Y region, w_b are the bottom-up weights, · is the inner product operation, and x_r is the input image;
S32, calculating the top-down response r_t of the Y area:

r_t = w_t · z_r

where r_t is the top-down response of the Y region, z_r is the back-end input, i.e., the supervision information, and w_t are the top-down weights;
S33, response pre-screening:
to eliminate interference from random noise, the responses r_b and r_t are pre-screened, and response values smaller than the threshold cutValue are set to zero:

r_t(r_t < cutValue) ← 0
r_b(r_b < cutValue) ← 0

S34, calculating the total pre-response r_p of the Y area:

r_p = k*r_t + (1-k)*r_b

where k is the impact factor of the top-down response and (1-k) is the impact factor of the bottom-up response; the two sum to 1;
S35, Top-k competition mechanism:
using the Top-k competition mechanism, the K neurons with the largest pre-response r_p are the activated neurons; their responses are set to 1, and the responses of the remaining neurons are set to 0:

r_y(argmax(r_p)) ← 1

where argmax() returns the position of the maximum response, and r_y is the Y-region response after competition.
5. The method according to claim 1, wherein the navigation output in S4 is obtained as follows:
S41, transmitting the Y-area response to the Z area and calculating the response of the Z area:

z_i = W_zb,i · r_y

where z_i is the response of the Z region for the i-th effect space, r_y is the total Y-region response after Top-k competition, and W_zb,i is the bottom-up weight vector of the i-th Z-region neuron;
S42, selecting an effector in the Z area; the Z-area effectors correspond to navigation actions and comprise 6 action states: forward, left turn, right turn, slight left turn, slight right turn, and stop;

e = argmax_i(z_i), i = 1, ..., m

where argmax() returns the position of the maximum response and m is the number of effectors; e = 1 indicates the navigation output is forward; e = 2, a left turn; e = 3, a right turn; e = 4, a slight left turn; e = 5, a slight right turn; and e = 6, stop.
6. The method according to claim 1, wherein the model in S5 self-learns and updates as follows:

V_j ← ω1(g_j)*V_j + ω2(g_j)*p

where V_j is the weight vector of activated Y-region neuron j, comprising (w_t, w_b), with w_b the bottom-up weights and w_t the top-down weights; g_j is the age of the j-th neuron (the more activations, the greater the age); and p is the input vector, comprising the front-end and back-end inputs. ω1 and ω2 are learning factors that control the rate of synaptic weight update: the larger ω2/ω1 is, the more V_j reflects newly learned knowledge. ω1 and ω2 are derived from the forgetting-average algorithm:

ω2(g_i) = (1 + u(g_i)) / g_i
ω1(g_i) = 1 - ω2(g_i)

where u(g_i) is the value of the forgetting equation at activation age g_i of the i-th neuron. The forgetting equation u(g) is defined as:

u(g) = 0                        if g ≤ g1
u(g) = c*(g - g1)/(g2 - g1)     if g1 < g ≤ g2
u(g) = c + (g - g2)/λ           if g > g2

Activation of a neuron triggers an age update, g_i ← g_i + 1, where g1 and g2 are forgetting age thresholds, and c and λ are hyper-parameters controlling the learning speed.
CN202011266136.9A 2020-11-13 2020-11-13 Autonomous learning navigation method based on visual attention mechanism Pending CN112381829A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011266136.9A CN112381829A (en) 2020-11-13 2020-11-13 Autonomous learning navigation method based on visual attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011266136.9A CN112381829A (en) 2020-11-13 2020-11-13 Autonomous learning navigation method based on visual attention mechanism

Publications (1)

Publication Number Publication Date
CN112381829A 2021-02-19

Family

ID=74583678

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011266136.9A Pending CN112381829A (en) 2020-11-13 2020-11-13 Autonomous learning navigation method based on visual attention mechanism

Country Status (1)

Country Link
CN (1) CN112381829A (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140032461A1 (en) * 2012-07-25 2014-01-30 Board Of Trustees Of Michigan State University Synapse maintenance in the developmental networks
US20170008168A1 (en) * 2015-07-10 2017-01-12 Board Of Trustees Of Michigan State University Navigational Control of Robotic Systems and Other Computer-Implemented Processes Using Developmental Network with Turing Machine Learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
钱夔 (QIAN Kui) et al.: "Robot indoor scene recognition based on an autonomous developmental neural network", Robot (《机器人》) *

Similar Documents

Publication Publication Date Title
Gupta et al. Deep learning for object detection and scene perception in self-driving cars: Survey, challenges, and open issues
EP3772704A1 (en) Artificial-intelligence powered ground truth generation for object detection and tracking on image sequences
Mateus et al. Efficient and robust pedestrian detection using deep learning for human-aware navigation
Han et al. Active object detection with multistep action prediction using deep q-network
US11620487B2 (en) Neural architecture search based on synaptic connectivity graphs
Amini et al. Spatial uncertainty sampling for end-to-end control
US11593627B2 (en) Artificial neural network architectures based on synaptic connectivity graphs
US11568201B2 (en) Predicting neuron types based on synaptic connectivity graphs
US11625611B2 (en) Training artificial neural networks based on synaptic connectivity graphs
US20210201115A1 (en) Reservoir computing neural networks based on synaptic connectivity graphs
Xu et al. Deep convolutional neural network-based autonomous marine vehicle maneuver
JP7474446B2 (en) Projection Layer of Neural Network Suitable for Multi-Label Prediction
Ji et al. Incremental online object learning in a vehicular radar-vision fusion framework
EP3938806A1 (en) Radar data collection and labeling for machine-learning
US11631000B2 (en) Training artificial neural networks based on synaptic connectivity graphs
JP2018010568A (en) Image recognition system
CN114708435A (en) Obstacle size prediction and uncertainty analysis method based on semantic segmentation
Sagar et al. Artificial intelligence in autonomous vehicles-a literature review
Dai et al. Camera view planning based on generative adversarial imitation learning in indoor active exploration
US20230334842A1 (en) Training instance segmentation neural networks through contrastive learning
CN112381829A (en) Autonomous learning navigation method based on visual attention mechanism
Huang et al. Robust Visual Tracking Models Designs Through Kernelized Correlation Filters.
US20220391692A1 (en) Semantic understanding of dynamic imagery using brain emulation neural networks
Yang et al. Efficient online transfer learning for 3d object classification in autonomous driving
CN116615666A (en) Sequence processing for data sets with lost frames

Legal Events

Code    Description
PB01    Publication
SE01    Entry into force of request for substantive examination
RJ01    Rejection of invention patent application after publication (application publication date: 20210219)