CN112347972A - High-dynamic region-of-interest image processing method based on deep learning - Google Patents
- Publication number: CN112347972A (application CN202011295458.6A)
- Authority: CN (China)
- Prior art keywords: region, interest, image, input, data
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V20/10 — Scenes; scene-specific elements: terrestrial scenes
- G06F18/24 — Pattern recognition: classification techniques
- G06N3/045 — Neural networks: combinations of networks
- G06N3/084 — Neural network learning methods: backpropagation, e.g. using gradient descent
- G06V10/25 — Image preprocessing: determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/30 — Image preprocessing: noise filtering
- G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
- G06V10/56 — Extraction of image or video features relating to colour
Abstract
The invention relates to the technical field of image processing, and in particular to a deep-learning-based method for processing high-dynamic region-of-interest images. A high-dynamic image is acquired by a camera and input for region-of-interest processing; region-of-interest and non-region-of-interest data are obtained from the generated region-of-interest map; the weights in the network are repeatedly corrected with an error back-propagation learning algorithm until the error reaches its minimum, yielding the optimal weights; the input signals are then classified by the neural network with the optimal weights, and the output value is 1 if the input data belong to the region of interest detected by the visual attention model, otherwise 0. With this method, the region of interest extracted from the image by the deep-learning neural network algorithm is not distorted and the integrity of its content is maintained; the method accurately detects the region of interest in the image and ensures its precise extraction, so that the extraction result has a good visual effect.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a high-dynamic region-of-interest image processing method based on deep learning.
Background
High dynamic range image processing is mainly directed at floating-point luminance images with a wide dynamic range. This characteristic means that high dynamic range processing methods can extract more brightness information for analysis and processing. Such methods have clear advantages in environments with high brightness contrast or poor lighting, such as backlit scenes, evening and dusk.
High dynamic range image processing mainly needs to solve three problems: how to reconstruct a high dynamic range image from an image sensor with a limited dynamic range (also called high dynamic range image synthesis); how to convert the high dynamic range image into a grayscale image that can be displayed directly on a digital display with a limited dynamic range (also known as dynamic range compression or tone mapping); and how to effectively suppress the signal-dependent noise caused by the random arrival of photons.
The region of interest of an image is the region that most attracts the user and best conveys the content of the whole image. It is the key region and the target region of the image, containing its main content. Research on image regions of interest is therefore a focus of research in the image processing field.
Image region-of-interest detection uses a computer to simulate human visual function: the computer converts a three-dimensional scene of the objective world into a two-dimensional image by means of visual sensors, such as a CCD (charge-coupled device) camera, and then identifies the region of interest in the image according to visual saliency. Visual attention is an important branch of region-of-interest research; it reflects the user's observation ability and helps improve region-of-interest detection accuracy. As this research has advanced, many scholars at home and abroad have introduced the concept of visual attention into the image processing field.
Disclosure of Invention
Aiming at the defects of the prior art, the invention discloses a high dynamic region-of-interest image processing method based on deep learning, which achieves a good visual effect for the image.
The invention is realized by the following technical scheme:
The invention discloses a high dynamic region-of-interest image processing method based on deep learning, which comprises the following steps:
S1, acquiring a high dynamic image through a camera and inputting the image for region-of-interest processing;
S2, filtering and denoising the input image and performing HSV (hue, saturation, value) color-space transformation;
S3, obtaining the color, brightness and edge visual features of the input image;
S4, obtaining a color feature map, a brightness feature map and an edge feature map, and generating a region-of-interest map by linearly normalizing and combining the feature maps;
S5, obtaining region-of-interest data and non-region-of-interest data from the generated region-of-interest map;
S6, inputting the data from S5 as samples into a deep learning neural network for training;
S7, repeatedly correcting the weights in the network with an error back-propagation learning algorithm until the error reaches its minimum, obtaining the optimal weights;
S8, classifying the input signals with the neural network using the optimal weights: if the input data belong to the region of interest detected by the visual attention model, the output value is 1; otherwise the output value is 0.
Furthermore, the deep learning neural network comprises an input layer, a hidden layer and an output layer, wherein an input signal acts on an output node through a node of the hidden layer, and an output signal is generated through nonlinear transformation.
Furthermore, in the network training of the deep learning neural network, each sample comprises an input vector and an expected output value; the error between the network output value and the expected output value is reduced along the gradient direction by adjusting the weights and thresholds; training is repeated until the weights corresponding to the minimum error are determined, at which point training stops.
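The adjustment of the weights along the gradient direction described above can be written out explicitly; the squared-error loss and the learning rate η below are conventional assumptions, as the patent gives no formulas:

```latex
E = \frac{1}{2}\sum_{k}\left(t_k - y_k\right)^2, \qquad
w \leftarrow w - \eta\,\frac{\partial E}{\partial w}, \qquad
\theta \leftarrow \theta - \eta\,\frac{\partial E}{\partial \theta}
```

where \(t_k\) is the expected output, \(y_k\) the network output, \(w\) a weight, \(\theta\) a threshold, and \(\eta\) the learning rate; training stops when \(E\) reaches its minimum.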
Furthermore, the input layer is mainly responsible for receiving input data, wherein the input data are the region-of-interest data and non-region-of-interest data detected by the improved visual attention model; the input layer has two nodes, one receiving the region-of-interest data in the input sample and the other receiving the non-region-of-interest data.
Furthermore, the hidden layer is responsible for processing signals, the hidden layer comprises data stream forward propagation and error signal backward propagation, and the data stream forward propagation and the error signal backward propagation are performed alternately until the error reaches the minimum value.
Furthermore, the output layer is responsible for outputting the calculation result; the network learns and trains on the data provided by the visual attention model and completes the information-memorization process.
Furthermore, in the method, the step of obtaining the high-dynamic image is to find the nonlinear response function g and find the brightness values corresponding to different gray scales, so as to restore the dynamic range of the input image.
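The recovery of a high-dynamic image from a nonlinear response function g resembles the classic Debevec-Malik calibration approach. The sketch below assumes g is already known (as a 256-entry log-exposure table) and shows only the weighted log-domain fusion of differently exposed frames; this is an illustrative assumption, not the patent's exact procedure.

```python
import numpy as np

def recover_log_radiance(images, exposures, g, zmin=0, zmax=255):
    """Fuse differently exposed 8-bit images into one log-radiance map.

    images:    list of uint8 arrays of the same shape
    exposures: list of exposure times (seconds)
    g:         array of length 256; g[z] is the log exposure for pixel value z
    """
    # Hat weighting: trust mid-range pixel values the most, distrust
    # values near the under- and over-exposure limits.
    mid = (zmin + zmax) / 2.0
    w = np.array([z - zmin if z <= mid else zmax - z for z in range(256)],
                 dtype=float)

    num = np.zeros(images[0].shape, dtype=float)
    den = np.zeros(images[0].shape, dtype=float)
    for img, t in zip(images, exposures):
        wz = w[img]
        # Per-pixel log radiance estimate: g(z) minus log exposure time.
        num += wz * (g[img] - np.log(t))
        den += wz
    return num / np.maximum(den, 1e-8)
```

The returned map has the restored dynamic range of the scene; exponentiating it gives a linear radiance image suitable for tone mapping.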
The invention has the beneficial effects that:
according to the method, the interested region in the image extracted by using the deep learning neural network algorithm is not distorted, and meanwhile, the integrity of the content of the interested region is maintained; the method can accurately detect the region of interest in the image, and ensures accurate extraction of the region of interest, so that the extraction result has good visual effect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic step diagram of a high dynamic region-of-interest image processing method based on deep learning.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
The embodiment discloses a high dynamic region-of-interest image processing method based on deep learning. As shown in fig. 1, the method includes the following steps:
S1, acquiring a high dynamic image through a camera and inputting the image for region-of-interest processing;
S2, filtering and denoising the input image and performing HSV (hue, saturation, value) color-space transformation;
S3, obtaining the color, brightness and edge visual features of the input image;
S4, obtaining a color feature map, a brightness feature map and an edge feature map, and generating a region-of-interest map by linearly normalizing and combining the feature maps;
S5, obtaining region-of-interest data and non-region-of-interest data from the generated region-of-interest map;
S6, inputting the data from S5 as samples into a deep learning neural network for training;
S7, repeatedly correcting the weights in the network with an error back-propagation learning algorithm until the error reaches its minimum, obtaining the optimal weights;
S8, classifying the input signals with the neural network using the optimal weights: if the input data belong to the region of interest detected by the visual attention model, the output value is 1; otherwise the output value is 0.
The deep learning neural network comprises an input layer, a hidden layer and an output layer, wherein an input signal acts on an output node through a hidden layer node, and an output signal is generated through nonlinear transformation.
Each sample used in the network training of the deep learning neural network comprises an input vector and an expected output value; the error between the network output value and the expected output value is reduced along the gradient direction by adjusting the weights and thresholds; training is repeated until the weights corresponding to the minimum error are determined, at which point training stops.
The input layer is mainly responsible for receiving input data, wherein the input data are region-of-interest data and region-of-non-interest data detected by the improved visual attention model, and the input layer is provided with two nodes, one node is used for receiving the region-of-interest data in the input sample, and the other node is used for receiving the region-of-non-interest data in the input sample.
And the hidden layer is responsible for processing the signal, the hidden layer comprises data flow forward propagation and error signal backward propagation, and the data flow forward propagation and the error signal backward propagation are performed alternately until the error reaches the minimum value.
The output layer is responsible for outputting the calculation result; the network learns and trains on the data provided by the visual attention model and completes the information-memorization process.
In this embodiment, the obtaining of the high dynamic image is to find the nonlinear response function g and find the luminance values corresponding to different gray scales, so as to restore the dynamic range of the input image.
Example 2
The basic principle of the deep learning neural network algorithm disclosed in this embodiment is that an input signal X acts on the output node through the hidden-layer nodes and, after a nonlinear transformation, produces an output signal Y. Each training sample comprises an input vector X and an expected output value t; the error between the network output value Y and the expected output value t is reduced along the gradient direction by adjusting the weights w and thresholds; training is repeated until the weights corresponding to the minimum error are determined, at which point training stops. The trained neural network can then process similar input information on its own and perform classification. Therefore, through supervised learning on the three visual feature data of color, brightness and edges, the weighting coefficients used to integrate the region-of-interest and non-region-of-interest data are adjusted, realizing the selection of the region of interest.
The present embodiment employs a three-layer neural network classifier, in which the input layer has 2 neurons, the hidden layer has 3 neurons, and the output layer has 1 neuron.
The first layer is the input layer. The input layer is mainly responsible for the reception of input data. Wherein the input data are region-of-interest data and region-of-non-interest data detected by the improved visual attention model. The input layer has two nodes, one for receiving region of interest data in the input sample and the other for receiving non-region of interest data in the input sample.
The second layer is a hidden layer. The hidden layer is the key of the neural network and is mainly responsible for processing signals. The layer includes two processes, a forward propagation of the data stream and a backward propagation of the error signal, and the two processes are alternated until the error reaches a minimum value.
The third layer is the output layer. The output layer is mainly responsible for outputting the calculation result. And the neural network performs learning training according to the data provided by the visual attention model and completes the information memory process.
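The 2-3-1 architecture of this embodiment, trained with error back-propagation, can be sketched in NumPy as below. The sigmoid activation, squared-error loss, learning rate and the toy samples are conventional assumptions; the patent does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 2-3-1 network: weight matrices and thresholds (biases).
W1 = rng.normal(0, 1, (2, 3)); b1 = np.zeros(3)
W2 = rng.normal(0, 1, (3, 1)); b2 = np.zeros(1)

def forward(X):
    h = sigmoid(X @ W1 + b1)        # hidden layer (3 neurons)
    return h, sigmoid(h @ W2 + b2)  # output layer (1 neuron)

def train(X, t, lr=1.0, epochs=10000):
    global W1, b1, W2, b2
    for _ in range(epochs):
        h, y = forward(X)
        # Back-propagate the squared error; sigmoid'(a) = y * (1 - y).
        dy = (y - t) * y * (1 - y)
        dh = (dy @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ dy; b2 -= lr * dy.sum(axis=0)
        W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(axis=0)

# Toy samples (hypothetical): node 1 carries the ROI response and
# node 2 the non-ROI response provided by the visual attention model.
X = np.array([[0.9, 0.1], [0.8, 0.3], [0.2, 0.9], [0.1, 0.7]])
t = np.array([[1.0], [1.0], [0.0], [0.0]])  # 1 = ROI, 0 = non-ROI
train(X, t)
```

After training, `forward` outputs values near 1 for region-of-interest samples and near 0 otherwise, matching step S8.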
In conclusion, the region of interest extracted from the image by the deep-learning neural network algorithm is not distorted, and the integrity of its content is maintained; the method accurately detects the region of interest in the image and ensures its precise extraction, so that the extraction result has a good visual effect.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (7)
1. A high dynamic region-of-interest image processing method based on deep learning, characterized by comprising the following steps:
S1, acquiring a high dynamic image through a camera and inputting the image for region-of-interest processing;
S2, filtering and denoising the input image and performing HSV (hue, saturation, value) color-space transformation;
S3, obtaining the color, brightness and edge visual features of the input image;
S4, obtaining a color feature map, a brightness feature map and an edge feature map, and generating a region-of-interest map by linearly normalizing and combining the feature maps;
S5, obtaining region-of-interest data and non-region-of-interest data from the generated region-of-interest map;
S6, inputting the data from S5 as samples into a deep learning neural network for training;
S7, repeatedly correcting the weights in the network with an error back-propagation learning algorithm until the error reaches its minimum, obtaining the optimal weights;
S8, classifying the input signals with the neural network using the optimal weights, wherein if the input data belong to the region of interest detected by the visual attention model, the output value is 1; otherwise the output value is 0.
2. The method as claimed in claim 1, wherein the deep learning neural network includes an input layer, a hidden layer and an output layer, and the input signal acts on the output node through the hidden layer node, and the output signal is generated through nonlinear transformation.
3. The method as claimed in claim 2, wherein each sample of the network training of the deep learning neural network includes an input vector and an expected output value, an error between the network output value and the expected output value decreases in a gradient direction by adjusting a weight and a threshold, and the training is stopped by repeating the learning training to determine a weight corresponding to a minimum error.
4. The method as claimed in claim 2, wherein the input layer is mainly responsible for receiving input data, wherein the input data is the region-of-interest data and the region-of-non-interest data detected by the improved visual attention model, and the input layer has two nodes, one for receiving the region-of-interest data in the input sample and the other for receiving the region-of-non-interest data in the input sample.
5. The method as claimed in claim 2, wherein the hidden layer is responsible for processing signals, the hidden layer includes data stream forward propagation and error signal backward propagation, and the two processes of data stream forward propagation and error signal backward propagation are alternated until the error reaches a minimum value.
6. The method as claimed in claim 2, wherein the output layer is responsible for outputting the calculation result, performing learning training according to the data provided by the visual attention model, and completing the information memorizing process.
7. The method for processing the high dynamic region-of-interest image based on deep learning of claim 1, wherein obtaining the high dynamic image comprises finding the nonlinear response function g and the brightness values corresponding to different gray scales, so as to restore the dynamic range of the input image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011295458.6A CN112347972A (en) | 2020-11-18 | 2020-11-18 | High-dynamic region-of-interest image processing method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112347972A (en) | 2021-02-09 |
Family
ID=74364281
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011295458.6A Pending CN112347972A (en) | 2020-11-18 | 2020-11-18 | High-dynamic region-of-interest image processing method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112347972A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130230229A1 (en) * | 2010-03-31 | 2013-09-05 | Impul's Zakrytoe Akcionernoe Obshchestvo | Method for brightness level calculation of the digital x-ray image for medical applications |
CN104517103A (en) * | 2014-12-26 | 2015-04-15 | 广州中国科学院先进技术研究所 | Traffic sign classification method based on deep neural network |
CN105096279A (en) * | 2015-09-23 | 2015-11-25 | 成都融创智谷科技有限公司 | Digital image processing method based on convolutional neural network |
- 2020-11-18: CN application CN202011295458.6A filed (publication CN112347972A), status Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130230229A1 (en) * | 2010-03-31 | 2013-09-05 | Impul's Zakrytoe Akcionernoe Obshchestvo | Method for brightness level calculation of the digital x-ray image for medical applications |
CN104517103A (en) * | 2014-12-26 | 2015-04-15 | 广州中国科学院先进技术研究所 | Traffic sign classification method based on deep neural network |
CN105096279A (en) * | 2015-09-23 | 2015-11-25 | 成都融创智谷科技有限公司 | Digital image processing method based on convolutional neural network |
Non-Patent Citations (2)
Title |
---|
刘宗玥 (Liu Zongyue), "Research on the synthesis and tone mapping of high dynamic range images", China Master's Theses Full-text Database, Information Science and Technology |
李业伟 (Li Yewei), "Research on image processing algorithms based on regions of interest", China Master's Theses Full-text Database, Information Science and Technology |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117058132A (en) * | 2023-10-11 | 2023-11-14 | 天津大学 | Cultural relic illumination visual comfort quantitative evaluation method and system based on neural network |
CN117058132B (en) * | 2023-10-11 | 2024-01-23 | 天津大学 | Cultural relic illumination visual comfort quantitative evaluation method and system based on neural network |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication (application publication date: 2021-02-09) |