CN111460964A - Moving target detection method under low-illumination condition of radio and television transmission machine room - Google Patents
- Publication number
- CN111460964A (application CN202010228127.4A)
- Authority
- CN
- China
- Prior art keywords
- illumination
- image
- model
- low
- television transmission
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a moving target detection method for a radio and television transmission machine room under low-illumination conditions, comprising the following steps: S1, predicting image illumination with a deep neural network; S2, denoising the image based on the predicted illumination model; and S3, detecting moving objects with a dual-scale background model. The method learns illumination-model extraction and image-quality enhancement for low-illumination images through a deep neural network, and finally performs background modeling and moving object detection based on dual-scale approximate median filtering.
Description
Technical Field
The invention belongs to the technical field of video monitoring, and particularly relates to a moving target detection method under a low-illumination condition of a radio and television transmission machine room.
Background
In video surveillance, detecting moving objects under low-light conditions (at night or in poorly lit environments) is an important and challenging problem, since many criminal activities, such as illegal trespass and theft, occur at night. Infrared lamps have traditionally been used to improve image quality under poor night lighting, but they have narrow beam angles and limited power, so they cover only a limited range and illuminate the scene unevenly. Furthermore, the illumination from local light sources fluctuates considerably. It is therefore difficult to capture all scene details, such as color and texture, under low light, especially in long shots, where objects are usually small and their contrast against the background is low. In short, for poorly lit scenes such as night-time, images and videos acquired by a surveillance camera generally suffer from low brightness, low contrast, a low signal-to-noise ratio, and almost no color information, which makes moving object detection difficult. Existing surveillance equipment, such as that from Hikvision and Dahua, can directly provide good moving object detection results under normal scene illumination, but struggles with the special conditions of low-illumination scenes.
In recent years, moving object detection under low illumination has received much attention. In 2008, Huang et al. proposed a real-time moving target detection method based on local contrast in night surveillance video. The method first divides a video frame into non-overlapping image blocks and defines a local contrast from the pixel mean and local standard deviation of each block. A local-contrast significance map is then obtained by thresholding to determine whether a sub-block contains visual content; a preliminary moving object detection result is obtained from changes in this map, and erroneous results are finally filtered out by associating motion prediction with spatial nearest-neighbor data. In 2014, Xiao Huaxin et al. studied a series of problems in moving object detection under low illumination, including: (1) a background model based on sparse representation theory; (2) a dictionary update method for motion detection; and (3) a robust and accurate moving target extraction method that determines the region containing a moving target from the distribution and magnitude of the sparse projection of the current frame onto an over-complete dictionary. Also in 2014, Liu Lei et al. proposed a noise reduction algorithm for low-light video surveillance images based on motion detection: image frames are divided into 8 × 8 moving and static pixel macroblocks by a threshold-based motion detection algorithm; an improved Wiener filter denoises the moving macroblocks, while a combination of mathematical morphology and median filtering denoises the static macroblocks.
The above methods have achieved reasonably good results for moving target detection in low-light video surveillance, but their stability and real-time performance are insufficient, and they are difficult to apply directly to the monitoring of a radio and television transmission machine room, where security requirements are higher and abnormal conditions such as power outages and lighting faults must be handled. In recent years, many target detection methods have used deep neural networks to extract illumination-invariant deep features, while little research has focused on improving moving target detection by enhancing the video image quality of a low-illumination scene. Targeting the specific characteristics of a radio and television transmission machine room, the invention provides deep-learning-based low-illumination video image enhancement and obtains a correct and stable moving target detection result through a new background modeling method.
Disclosure of Invention
The invention aims to solve the technical problem of providing a moving target detection method under the low-illumination condition of a radio and television transmission machine room.
In order to solve the above technical problems, the invention adopts the following technical scheme:
A method for detecting a moving target in a radio and television transmission machine room under a low-illumination condition comprises the following steps:
S1, an image illumination prediction method based on a deep neural network: machine room photo data sets under different illumination conditions are established for a typical indoor machine room scene, and an illumination prediction model based on U-Net is then trained;
S2, a preliminary illumination matrix is obtained from the illumination prediction model; a guided filter is then applied to the illumination image to reduce illumination change and local saturation caused by strong light; contrast stretching mitigates over-estimation; and an enhanced image is finally obtained through a Retinex model;
S3, moving target detection based on the dual-scale background model: a dual-scale background model based on approximate median filtering is first established, foreground moving targets are then jointly extracted from the background models at the two scales, and the extracted foreground moving targets are finally denoised.
Preferably, step S1 includes: in order to generate training images under different illumination conditions, an illumination mask generation method is provided, which jointly considers the light source position, the light transmission attenuation factor, and the shape factors of the generated Gaussian function.
Preferably, the parameters of the illumination mask generation model include the pixel position x in the illumination map, a randomly selected light source position X, a light scattering attenuation factor σ, shape parameters α and β that control the generated Gaussian function to improve the realism of the illumination map, an illumination gain G that adjusts the global brightness of the image, and the total number of pixels in the illumination map, T (used for normalization), as shown in the following formula:
preferably, the Retinex model in step S2 is specifically that I is an input low-illumination image, R is a recovered result, L is a trained illumination model, and c is a color channel:
Ic(x)=Rc(x)*Lc(x) Where c ∈ { r, g, b }.
Preferably, step S3 further includes: the dual-scale approximate median filtering background model is built from the high-resolution source image generated in step S2 and a downsampled image with a high signal-to-noise ratio, where each downsampled pixel is the mean of a sub-image block; the sub-image block size must balance signal-to-noise ratio, contrast, and resolution to achieve the best target detection effect.
Preferably, the dual-scale approximate median filtering background model takes the variable s ∈ {0, 1} as a variable increment for the background model, and an improved approximate median filtering method then establishes an original-image background model M_f and a downsampled-image background model M_g, where x and y are image coordinates, t is the update time, and f and g denote the original image and the downsampled image respectively; the background model is updated as follows:
using M for down-sampled imagesgThe method comprises the steps of performing preliminary segmentation on a motion area, then up-sampling a segmentation result to the size of an original image, wherein the segmentation result has an obvious checkered effect, and further adopting an original image background model M to solve the problemfAnd the foreground of the preliminary segmentation is further improved, so that the contour of the moving target is more accurate.
The invention has the following beneficial effects:
(1) By enhancing low-illumination images, the moving target detection method disclosed by the embodiment of the invention fundamentally addresses the difficulty of accurately and effectively detecting moving targets in low-illumination scenes.
(2) The method is efficient and real-time, and can be effectively applied to indoor intelligent video monitoring in special scenes such as a radio and television transmission machine room.
(3) The proposed background modeling method based on dual-scale approximate median filtering detects moving objects efficiently and robustly in surveillance video with poor light and a low signal-to-noise ratio.
Drawings
Fig. 1 is a schematic processing flow diagram of a moving object detection method under a low-illumination condition in a radio and television transmission machine room according to an embodiment of the present invention;
fig. 2 is a moving object detection system based on the processing method shown in fig. 1.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1 and 2, the invention discloses a moving object detection method under a low illumination condition of a radio and television transmission machine room, and fig. 1 shows a processing flow schematic diagram, wherein the method comprises the following steps:
and S1, predicting the image illumination based on the deep neural network.
Taking a typical indoor machine room scene as an example, a machine room photo data set under different illumination conditions is established and divided into two categories: (1) images taken under good lighting, in which the features of objects of interest are significant and unaffected by illumination; and (2) images taken under night-time lighting, used as validation data for the subsequent learning stage. In addition, to evaluate the effectiveness of the method in low-light scenes, images from low-light videos are used as test data. Effective illumination-model extraction requires the training set to contain images under a wide variety of illumination conditions; since such images are difficult to acquire directly from a real scene, images under different illumination conditions must be synthesized artificially.
The parameters of the illumination mask generation model provided by the embodiment of the invention include the pixel position x in the illumination map, a randomly selected light source position X, a light scattering attenuation factor σ, shape parameters α and β that control the generated Gaussian function to improve the realism of the illumination map, an illumination gain G that adjusts the global brightness of the image, and the total number of pixels in the illumination map, T (used for normalization), as shown in the following formula:
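The formula referenced here appears only as an image in the original publication. Below is a minimal sketch of one plausible reading, assuming the mask is a gain-scaled generalized Gaussian centered on the light source; the parameter names x, X, σ, α, β, G, and T come from the text, but the exact functional form and the normalization by T are assumptions:

```python
import numpy as np

def illumination_mask(h, w, X, sigma, alpha, beta, G):
    """Synthesize an illumination mask for an h x w image.

    X: randomly selected light source position (row, col).
    sigma: light scattering attenuation factor.
    alpha, beta: shape parameters of the generalized Gaussian falloff.
    G: illumination gain controlling global brightness.
    """
    rows, cols = np.mgrid[0:h, 0:w]                     # pixel positions x
    d = np.hypot(rows - X[0], cols - X[1])              # distance to the light source
    mask = np.exp(-(d ** alpha) / (beta * sigma ** 2))  # Gaussian-style falloff
    T = h * w                                           # total pixel count, for normalization
    mask = G * mask * T / mask.sum()                    # rescale so the mean level is ~G
    return np.clip(mask, 0.0, 1.0)

mask = illumination_mask(64, 64, X=(20, 40), sigma=15.0, alpha=2.0, beta=2.0, G=0.8)
```

Multiplying such a mask into a well-lit training image would yield one synthetic low-illumination sample of the kind the data set construction step calls for.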
in order to achieve the above goal, the invention replaces the original activation function from sigmoid to Re L U, and comprehensively considers the mean square error and the structural similarity in the loss function.
S2: denoising the image based on the predicted illumination model.
Based on the U-Net trained in step S1, a preliminary illumination model matrix is obtained and used to recover low-illumination images. To avoid over-amplification of local illumination signals and unstable illumination deviation, the embodiment of the invention applies a guided filter to further decompose local information, using the observed image as the guide for optimizing the illumination model matrix. The optimized illumination image contains information from all three channels and effectively reduces illumination change and local saturation caused by strong light. Further, contrast stretching is applied to mitigate over-estimation.
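The contrast-stretching step can be sketched minimally as a percentile-based linear stretch; the patent does not specify which variant is used, and the percentile cutoffs below are illustrative assumptions:

```python
import numpy as np

def contrast_stretch(img, low_pct=1.0, high_pct=99.0):
    """Linearly stretch intensities so the given percentiles map to [0, 1].

    Values outside the percentile range are clipped, which suppresses the
    over-estimated highlights mentioned in the text.
    """
    lo, hi = np.percentile(img, [low_pct, high_pct])
    if hi <= lo:                       # flat image: nothing to stretch
        return np.clip(img, 0.0, 1.0)
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

frame = np.random.default_rng(0).uniform(0.2, 0.4, size=(32, 32))  # dim, low-contrast input
stretched = contrast_stretch(frame)                                # full [0, 1] range output
```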
Based on the optimized illumination model, the embodiment of the invention adopts a Retinex model to recover the low-illumination image, where I is the input low-illumination image, R is the recovered result, L is the trained illumination model, and c is a color channel:
I_c(x) = R_c(x) · L_c(x), where c ∈ {r, g, b}.
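Under this model, recovery inverts the product per channel, R_c(x) = I_c(x) / L_c(x). A minimal sketch follows; the epsilon guard against zero illumination is an added assumption, not stated in the text:

```python
import numpy as np

def retinex_recover(I, L, eps=1e-4):
    """Recover reflectance R from low-light image I and illumination L.

    I, L: float arrays of shape (H, W, 3), channels c in {r, g, b}.
    Inverts I_c(x) = R_c(x) * L_c(x) channel-wise.
    """
    R = I / np.maximum(L, eps)         # avoid dividing by zero illumination
    return np.clip(R, 0.0, 1.0)

I = np.full((4, 4, 3), 0.2)            # dim observed image
L = np.full((4, 4, 3), 0.4)            # predicted illumination
R = retinex_recover(I, L)              # brightened result: 0.2 / 0.4 = 0.5
```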
S3: detecting moving objects based on the dual-scale background model.
The dual-scale background model provided by the embodiment of the invention consists of background models for the original image f and a downsampled image g. The downsampled image has a higher signal-to-noise ratio than the original and is generated as follows: the original image is divided into sub-image blocks of size a × b, and the pixel mean of each block becomes one downsampled pixel value. When choosing the block size, the effects of signal-to-noise ratio, contrast, and resolution on target detection must be balanced.
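The block-mean downsampling can be sketched as follows; the 4 × 4 block size is illustrative, since the text leaves a and b as tuning parameters:

```python
import numpy as np

def block_mean_downsample(img, a=4, b=4):
    """Downsample by averaging each a x b sub-image block.

    Averaging n = a*b pixels reduces zero-mean noise variance by a factor
    of n, which is why the downsampled image g has a higher SNR than f.
    """
    h, w = img.shape
    h2, w2 = h // a, w // b
    img = img[: h2 * a, : w2 * b]                  # drop any ragged border
    return img.reshape(h2, a, w2, b).mean(axis=(1, 3))

img = np.arange(64, dtype=float).reshape(8, 8)
g = block_mean_downsample(img, a=4, b=4)           # shape (2, 2)
```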
In traditional approximate median filtering, the increment applied during background updating is the constant "1", which is not suitable for moving target detection in surveillance video. The invention instead adopts the variable s ∈ {0, 1} as a variable increment and establishes, through the improved approximate median filtering method, an original-image background model M_f and a downsampled-image background model M_g. The background model is updated as shown below, where x and y are image coordinates, t is the update time, and f and g denote the original image and the downsampled image respectively.
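The update formula itself appears only as an image in the original publication. The classic approximate median filter moves the background one step toward each new frame; a minimal sketch with the per-pixel variable increment s ∈ {0, 1} follows (the interpretation that s = 0 freezes the update at a pixel is an assumption):

```python
import numpy as np

def update_background(B, frame, s):
    """One approximate-median background update step.

    B, frame: float arrays of the same shape; s: per-pixel increment in
    {0, 1}. The background drifts one step toward each new frame, so it
    converges to the temporal median; the classic filter uses s = 1
    everywhere, while the variable increment lets updates be frozen.
    """
    return B + s * np.sign(frame - B)

B = np.zeros((2, 2))
frame = np.array([[5.0, 5.0], [-3.0, 0.0]])
s = np.array([[1, 0], [1, 1]])         # freeze updating at pixel (0, 1)
B = update_background(B, frame, s)
```

Running this step on every frame, once for f against M_f and once for g against M_g, yields the two background models of the dual-scale scheme.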
M_g is used to perform a preliminary segmentation of the motion area in the downsampled image, and the segmentation result is then upsampled to the original image size. Extensive experiments show that the upsampled segmentation exhibits an obvious checkerboard effect. To solve this problem, the invention further uses the original-image background model M_f to refine the preliminarily segmented foreground, making the moving-target contour more accurate.
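The two-scale combination can be sketched as intersecting the nearest-neighbor-upsampled coarse mask with a fine-scale foreground test against M_f; the exact refinement rule is not given in the text, so the threshold test here is an assumption:

```python
import numpy as np

def dual_scale_foreground(frame, B_f, g_mask, a=4, b=4, thresh=0.1):
    """Refine a coarse foreground mask with the full-resolution model.

    g_mask: boolean foreground mask at the downsampled scale.
    B_f: original-image background model. Nearest-neighbor upsampling of
    g_mask produces the blocky ("checkerboard") mask; intersecting it
    with |frame - B_f| > thresh restores accurate contours.
    """
    coarse = np.repeat(np.repeat(g_mask, a, axis=0), b, axis=1)
    fine = np.abs(frame - B_f) > thresh
    return coarse & fine

B_f = np.zeros((4, 4))
frame = np.zeros((4, 4)); frame[0, 0] = 1.0          # one moving pixel
g_mask = np.array([[True, False], [False, False]])   # coarse 2x2 detection
fg = dual_scale_foreground(frame, B_f, g_mask, a=2, b=2)
```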
The target detection method under low-illumination conditions in a radio and television transmission machine room provided by the embodiment of the invention effectively solves the low-illumination problem in video monitoring: image denoising based on the deep neural network effectively improves the quality of the monitored image, and thereby the robustness and accuracy of moving target detection.
The results of the invention can further be applied to monitoring low-illumination indoor and outdoor scenes; by acquiring data sets of the corresponding scenes for training and learning, the clarity of the monitoring image can be effectively enhanced, improving monitoring accuracy.
It is to be understood that the exemplary embodiments described herein are illustrative and not restrictive. Although one or more embodiments of the present invention have been described with reference to the accompanying drawings, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.
Claims (6)
1. A method for detecting a moving target in a radio and television transmission machine room under a low-illumination condition, characterized by comprising the following steps:
S1, providing an image illumination prediction method based on a deep neural network, establishing machine room photo data sets under different illumination conditions for a typical indoor machine room scene, and then training an illumination prediction model based on U-Net;
S2, obtaining a preliminary illumination matrix from the illumination prediction model, then applying a guided filter to the illumination image to reduce illumination change and local saturation caused by strong light, applying contrast stretching to mitigate over-estimation, and finally obtaining an enhanced image through a Retinex model;
S3, performing moving target detection based on the dual-scale background model: first establishing a dual-scale background model based on approximate median filtering, then jointly extracting foreground moving targets from the background models at the two scales, and finally denoising the extracted foreground moving targets.
2. The method for detecting the moving object in the broadcasting and television transmission room under the low illumination condition according to claim 1, wherein the step S1 comprises: in order to generate training images under different illumination conditions, an illumination mask generation method is provided, and the light source position, the light transmission attenuation factor and the shape factor for generating a Gaussian function are comprehensively considered.
3. The method for detecting the moving object in the radio and television transmission machine room under the low illumination condition according to claim 2, wherein the parameters of the illumination mask generation model include the pixel position x in the illumination map, a randomly selected light source position X, a light scattering attenuation factor σ, shape parameters α and β that control the generated Gaussian function to improve the realism of the illumination map, an illumination gain G that adjusts the global brightness of the image, and the total number of pixels in the illumination map, T, as shown in the following formula:
4. The method for detecting the moving object in the radio and television transmission machine room under the low illumination condition according to claim 1, wherein the Retinex model in step S2 is specified as follows, where I is the input low-illumination image, R is the recovered result, L is the trained illumination model, and c is a color channel:
I_c(x) = R_c(x) · L_c(x), where c ∈ {r, g, b}.
5. The method for detecting the moving object in the radio and television transmission machine room under the low illumination condition according to claim 1, wherein step S3 further comprises: building the dual-scale approximate median filtering background model from the high-resolution source image generated in step S2 and a downsampled image with a high signal-to-noise ratio, where each downsampled pixel is the mean of a sub-image block, and the sub-image block size must balance signal-to-noise ratio, contrast, and resolution to achieve the best target detection effect.
6. The method for detecting the moving object in the radio and television transmission machine room under the low illumination condition according to claim 5, wherein the dual-scale approximate median filtering background model specifically adopts the variable s ∈ {0, 1} as a variable increment, and an improved approximate median filtering method then establishes an original-image background model M_f and a downsampled-image background model M_g, where x and y are image coordinates, t is the update time, and f and g denote the original image and the downsampled image respectively; the background model is updated as follows:
M_g is used to perform a preliminary segmentation of the motion area in the downsampled image, and the segmentation result is then upsampled to the original image size; because the upsampled result exhibits an obvious checkerboard effect, the original-image background model M_f is further used to refine the preliminarily segmented foreground, making the moving-target contour more accurate.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010228127.4A CN111460964A (en) | 2020-03-27 | 2020-03-27 | Moving target detection method under low-illumination condition of radio and television transmission machine room |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111460964A (en) | 2020-07-28
Family
ID=71684987
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010228127.4A Pending CN111460964A (en) | 2020-03-27 | 2020-03-27 | Moving target detection method under low-illumination condition of radio and television transmission machine room |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111460964A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112967467A (en) * | 2021-02-24 | 2021-06-15 | 九江学院 | Cultural relic anti-theft method, system, mobile terminal and storage medium |
WO2022193132A1 (en) * | 2021-03-16 | 2022-09-22 | 华为技术有限公司 | Image detection method and apparatus, and electronic device |
WO2022222585A1 (en) * | 2021-04-20 | 2022-10-27 | 北京嘀嘀无限科技发展有限公司 | Target identification method and system |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110223317A (en) * | 2019-04-26 | 2019-09-10 | 中国矿业大学 | A kind of Moving target detection based on image procossing and trajectory predictions method |
Non-Patent Citations (2)
Title |
---|
YEN-TING HUANG et al.: "Enhancing object detection in the dark using U-Net based restoration module", 2019 16th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) *
Zhang Yunchu et al.: "Detection of moving targets in weak night-time light environments", Journal of Shandong Jianzhu University *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2011265429B2 (en) | Method and system for robust scene modelling in an image sequence | |
CN111460964A (en) | Moving target detection method under low-illumination condition of radio and television transmission machine room | |
CN102542571B (en) | Moving target detecting method and device | |
EP1805715A1 (en) | A method and system for processing video data | |
Vosters et al. | Background subtraction under sudden illumination changes | |
CN109389569B (en) | Monitoring video real-time defogging method based on improved DehazeNet | |
CN108280409B (en) | Large-space video smoke detection method based on multi-feature fusion | |
CN105898111B (en) | A kind of video defogging method based on spectral clustering | |
CN110807738A (en) | Fuzzy image non-blind restoration method based on edge image block sharpening | |
CN114627269A (en) | Virtual reality security protection monitoring platform based on degree of depth learning target detection | |
CN104715480A (en) | Statistical background model based target detection method | |
Liu et al. | Scene background estimation based on temporal median filter with Gaussian filtering | |
CN105046670A (en) | Image rain removal method and system | |
Angelo | A novel approach on object detection and tracking using adaptive background subtraction method | |
CN107346421B (en) | Video smoke detection method based on color invariance | |
CN113902694A (en) | Target detection method based on dynamic and static combination | |
CN111626944B (en) | Video deblurring method based on space-time pyramid network and against natural priori | |
CN111667498A (en) | Automatic moving ship target detection method facing optical satellite video | |
Jin et al. | Fusing Canny operator with vibe algorithm for target detection | |
CN109493361B (en) | Fire smoke image segmentation method | |
Gao et al. | Single image haze removal algorithm using pixel-based airlight constraints | |
CN107564029B (en) | Moving target detection method based on Gaussian extreme value filtering and group sparse RPCA | |
CN113066077B (en) | Flame detection method and device | |
CN111145219B (en) | Efficient video moving target detection method based on Codebook principle | |
CN111008555B (en) | Unmanned aerial vehicle image small and weak target enhancement extraction method |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200728 |