CN105427276A - Camera detection method based on image local edge characteristics - Google Patents

Camera detection method based on image local edge characteristics

Info

Publication number
CN105427276A
Authority
CN
China
Prior art keywords
image
camera
local edge
value
detection method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510723896.0A
Other languages
Chinese (zh)
Inventor
孙琴
彭聃
吴�灿
付煜翀
罗宗亮
符松
徐文韬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Telecom System Integration Co Ltd
Original Assignee
Chongqing Telecom System Integration Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Telecom System Integration Co Ltd
Priority to CN201510723896.0A
Publication of CN105427276A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention belongs to the field of image processing and, more specifically, relates to a method for determining whether a camera is blocked by using local edge features of the image. It provides a camera detection method based on image local edge features, comprising: performing a grayscale transformation on an RGB image I captured by the camera to obtain a grayscale image L; processing the image L first with a Gaussian filter and then with a nonlinear diffusion filter to construct a nonlinear multi-scale space; computing the Hessian matrix response image L_i^Hessian of each image L_i in the space; locating the local edge feature points of L_i^Hessian with a non-maximum suppression algorithm over a 3×3×3 neighborhood; and finally counting the feature points and comparing the count with a threshold to determine accurately whether the camera is blocked. The method adapts to illumination changes and can determine accurately whether the camera is blocked under various illumination conditions; at the same time, it uses the mean square deviation of the gray values to eliminate false alarms caused by network and equipment faults, reducing the false alarm rate.

Description

A camera detection method based on image local edge features
Technical field
The invention belongs to the field of image processing and specifically relates to a method that uses image local edge features to determine whether a camera is blocked.
Background technology
With the development of society and the progress of science and technology, surveillance systems are widely used in every field of production and daily life. The camera sits at the front end of the surveillance system and is easily subject to external interference, the most common case being that the camera is blocked. Although the prior art can detect this situation reasonably well, the limitations of existing algorithms and the complexity of field conditions, especially changes in illumination, often cause the equipment to misjudge the image, producing false alarms and missed alarms. The adaptability and reliability of the intelligent analysis process must therefore be improved so that the true situation at the scene can be reflected accurately.
Summary of the invention
The object of the invention is to provide a camera detection method based on image local edge features that adapts to changes in illumination, can determine accurately whether the camera is blocked under various illumination conditions, and at the same time can exclude false alarms caused by network faults and equipment faults, reducing the false alarm rate.
To achieve the above object, the technical solution adopted by the invention is a camera occlusion detection method based on image local edge features, characterized in that it comprises the following steps:
a. Perform a grayscale transformation on the RGB image I captured by the camera to obtain a grayscale image L, and calculate the mean square deviation ω of the pixel gray values in the grayscale image L;
b. Determine whether the value of ω equals 0. If ω = 0, a network fault or equipment fault is indicated and the program terminates; if ω > 0, extract the image local edge feature points through the following steps:
b1. Filter the grayscale image L with a Gaussian filter to construct a multi-scale space of N images; the multi-scale space consists of O octaves, each octave has S sublayers, and N = O × S. The scale parameter σ_i of each layer is indexed by the octave number o and the sublayer number s and is calculated according to formula (1):
o ∈ [0, …, O−1], s ∈ [0, …, S−1], i ∈ [0, …, N], where σ_0 is the initial base value of the scale parameter, with a default of 1.6;
b2. Apply a nonlinear diffusion filter to each layer of the multi-scale space to generate a nonlinear multi-scale space, in which the bottom image L_0 = L_σ, where L_σ is obtained by convolving the grayscale image L with a Gaussian kernel of size 9 × 9 and standard deviation 1.6; the remaining layers generate the evolution images L_i according to formula (2):
L_{i+1} = (I − τ · Σ_{l=1}^{m} A_l(L_i))^{−1} · L_i,
where A_l represents the conductance matrix of the image L_i in dimension l, I is the identity matrix, τ is the time step, τ = t_{i+1} − t_i, and t_i is the scale parameter expressed in time units, t_i = σ_i²/2, i ∈ [0, …, N];
b3. For each evolution image L_i generated in step b2, calculate its Hessian matrix response image L_i^Hessian according to formula (3):
L_i^Hessian = σ_{i,norm}² · (L_xx^i · L_yy^i − L_xy^i · L_xy^i), where L_xx^i is the second-order derivative of the evolution image L_i in the x direction, L_yy^i is the second-order derivative of L_i in the y direction, L_xy^i is the second-order mixed partial derivative of L_i, and σ_{i,norm} is the integer value of the scale corresponding to L_i;
b4. Use a non-maximum suppression algorithm over a 3 × 3 × 3 neighborhood to locate the local edge feature points in each Hessian response image L_i^Hessian;
c. Count the number n of extracted local edge feature points and compare n with the threshold n_0, where n_0 is the number of local feature points of the image captured when the camera is not blocked; if n ≤ n_0, issue an alarm that the camera is blocked.
Beneficial effects of the invention: the decision is made by comparing the number of local edge feature points extracted from a single picture against a threshold, without comparing multiple pictures, which greatly reduces the amount of computation and improves processing speed. These local edge features have good scale and rotation invariance and also remain largely invariant under illumination changes, viewpoint changes and image scaling, overcoming some defects of the prior art, in particular false alarms caused by illumination changes: because the local edge features are largely invariant to illumination, the numbers of feature points extracted from the same scene under different illumination differ very little, so the method adapts well and can determine accurately whether the camera is blocked under various illumination conditions. Since the precondition for extracting local edge feature points is a mean square deviation ω > 0, abnormal situations such as network faults or equipment faults are excluded first, which greatly improves reliability and reduces the false alarm rate.
Description of the drawings
Fig. 1 is the flow block diagram of the invention.
Embodiment
When the camera is not blocked, the captured picture has clear outlines; when the camera is blocked, the obstruction is very close to the lens, so the captured picture becomes very blurred and no longer has clear edge contours. The number of local edge feature points detected in the picture therefore differs greatly before and after blocking, and when the number of local edge feature points falls below the set threshold, the camera is judged to be blocked.
The RGB image I captured by the camera has a size of 1280 × 720, and the threshold n_0 is set to 100.
As shown in Fig. 1, a camera occlusion detection method based on image local edge features comprises the following steps:
a. Perform a grayscale transformation on the RGB image I captured by the camera to obtain a grayscale image L, and calculate the mean square deviation ω of the pixel gray values in the grayscale image L.
b. Determine whether the value of ω equals 0. If ω = 0, a network fault or equipment fault is indicated and the program terminates; if ω > 0, extract the image local edge feature points through the following steps:
b1. Filter the grayscale image L with a Gaussian filter to construct a multi-scale space of N images; the multi-scale space consists of O octaves, each octave has S sublayers, and N = O × S. The scale parameter σ_i of each layer is indexed by the octave number o and the sublayer number s and is calculated according to formula (1):
o ∈ [0, …, O−1], s ∈ [0, …, S−1], i ∈ [0, …, N], where σ_0 is the initial base value of the scale parameter, with a default of 1.6;
b2. Apply a nonlinear diffusion filter to each layer of the multi-scale space to generate a nonlinear multi-scale space, in which the bottom image L_0 = L_σ, where L_σ is obtained by convolving the grayscale image L with a Gaussian kernel of size 9 × 9 and standard deviation 1.6; the remaining layers generate the evolution images L_i according to formula (2):
L_{i+1} = (I − τ · Σ_{l=1}^{m} A_l(L_i))^{−1} · L_i,
where A_l represents the conductance matrix of the image L_i in dimension l, I is the identity matrix, τ is the time step, τ = t_{i+1} − t_i, and t_i is the scale parameter expressed in time units, t_i = σ_i²/2, i ∈ [0, …, N];
b3. For each evolution image L_i generated in step b2, calculate its Hessian matrix response image L_i^Hessian according to formula (3):
L_i^Hessian = σ_{i,norm}² · (L_xx^i · L_yy^i − L_xy^i · L_xy^i), where L_xx^i is the second-order derivative of the evolution image L_i in the x direction, L_yy^i is the second-order derivative of L_i in the y direction, L_xy^i is the second-order mixed partial derivative of L_i, and σ_{i,norm} is the integer value of the scale corresponding to L_i;
b4. Use a non-maximum suppression algorithm over a 3 × 3 × 3 neighborhood to locate the local edge feature points in each Hessian response image L_i^Hessian.
c. Count the number n of extracted local edge feature points and compare n with the threshold n_0, where n_0 is the number of local feature points of the image captured when the camera is not blocked; if n ≤ n_0, issue an alarm that the camera is blocked. A minimal sketch of this top-level flow is given below.
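The following Python sketch illustrates steps a, b and c only; it assumes OpenCV and NumPy, and the helper extract_local_edge_features, standing in for steps b1–b4, is hypothetical and shown only as an interface:

```python
import cv2
import numpy as np

N0_THRESHOLD = 100  # threshold n0 from the embodiment (1280x720 frames)

def extract_local_edge_features(gray):
    """Hypothetical stand-in for steps b1-b4: build the nonlinear multi-scale
    space, compute the Hessian responses and locate the feature points.
    Returns a list of (x, y, scale_index) feature points."""
    raise NotImplementedError

def check_camera(frame_bgr):
    """Steps a-c: returns 'fault', 'blocked' or 'ok' for one captured frame."""
    # step a: grayscale transformation and mean square deviation of the gray values
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    omega = np.std(gray.astype(np.float32))

    # step b: omega == 0 means a completely flat image, i.e. a network
    # or equipment fault rather than an occlusion
    if omega == 0:
        return "fault"

    # steps b1-b4: extract local edge feature points
    points = extract_local_edge_features(gray)

    # step c: compare the feature count with the threshold n0
    return "blocked" if len(points) <= N0_THRESHOLD else "ok"
```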
Further, step b4 specifically comprises the following steps:
b41. Traverse each response value in the response image L_i^Hessian; if a response value is smaller than the preset threshold respthresh = 0.001, skip it and move on to the next response value;
b42. Compare the current response value of L_i^Hessian with its 8 neighboring points at the same scale and with the corresponding 9 × 2 points at the adjacent scales, 26 points in total, to ensure that maxima are detected in both scale space and the two-dimensional image space; such a maximum point is a local edge feature point.
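A minimal sketch of steps b3, b41 and b42 follows, assuming the evolution images and their scales are already available as lists; the use of 3 × 3 Sobel operators for the second derivatives is an implementation assumption not fixed by the text, and respthresh = 0.001 is taken from step b41:

```python
import cv2
import numpy as np

RESP_THRESH = 0.001  # respthresh from step b41

def hessian_response(L_i, sigma_i):
    """Formula (3): scale-normalised determinant of the Hessian."""
    Lxx = cv2.Sobel(L_i, cv2.CV_32F, 2, 0, ksize=3)
    Lyy = cv2.Sobel(L_i, cv2.CV_32F, 0, 2, ksize=3)
    Lxy = cv2.Sobel(L_i, cv2.CV_32F, 1, 1, ksize=3)
    sigma_norm = int(round(sigma_i))
    return (sigma_norm ** 2) * (Lxx * Lyy - Lxy * Lxy)

def nms_3x3x3(responses):
    """Steps b41/b42: keep responses that exceed respthresh and are strict
    maxima over the 26 neighbours in the 3x3x3 scale/space neighbourhood."""
    stack = np.stack(responses)                 # shape (num_scales, H, W)
    points = []
    for i in range(1, stack.shape[0] - 1):
        for y in range(1, stack.shape[1] - 1):
            for x in range(1, stack.shape[2] - 1):
                v = stack[i, y, x]
                if v < RESP_THRESH:             # b41: skip weak responses
                    continue
                cube = stack[i - 1:i + 2, y - 1:y + 2, x - 1:x + 2]
                # b42: maximum over the 26 neighbours in scale and image space
                if v >= cube.max() and (cube == v).sum() == 1:
                    points.append((x, y, i))
    return points
```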
Further, each matrix A_l is tridiagonal and diagonally dominant.
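To illustrate this structure, the sketch below assembles the tridiagonal matrix A_l for a single image row from the conductance values and applies the semi-implicit update (I − τ·A_l)⁻¹ of formula (2) along that row; the half-sum discretisation of the conductances on pixel links and the use of SciPy's banded solver are assumptions made here for illustration:

```python
import numpy as np
from scipy.linalg import solve_banded

def diffuse_row_semi_implicit(row, c_row, tau):
    """One semi-implicit diffusion step along a single image row.

    row   : 1-D array of gray values of L_i for that row
    c_row : 1-D array of conductance values c(x, y, t) for that row
    tau   : time step tau = t_{i+1} - t_i
    """
    n = row.size
    # conductance on the links between neighbouring pixels (assumed half-sum)
    link = 0.5 * (c_row[:-1] + c_row[1:])          # length n - 1

    # tridiagonal A_l: off-diagonals are the link conductances, the diagonal
    # is minus the sum of the adjacent links (hence diagonally dominant)
    diag = np.zeros(n)
    diag[:-1] -= link
    diag[1:] -= link

    # banded representation of (I - tau * A_l) for scipy.linalg.solve_banded
    ab = np.zeros((3, n))
    ab[0, 1:] = -tau * link        # super-diagonal
    ab[1, :] = 1.0 - tau * diag    # main diagonal
    ab[2, :-1] = -tau * link       # sub-diagonal
    return solve_banded((1, 1), ab, row)
```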
Further, formula (2) is obtained by discretizing a nonlinear diffusion equation, which is given by formula (4):
∂L/∂t = div(c(x, y, t) · ∇L),
where div and ∇ denote the divergence and gradient operators respectively; t is the scale parameter expressed in time units; c(x, y, t) is the conduction function at evolution time t, which adapts the diffusion to the local structure of the image and is calculated according to formula (5):
c(x, y, t) = g(|∇L_σ(x, y, t)|) = 1 / (1 + (L_σx(x, y, t)² + L_σy(x, y, t)²) / k²),
where L_σx and L_σy are the horizontal and vertical gradient images of L_σ respectively; the parameter k is the contrast factor that controls the level of diffusion, and its value is taken as the value of the gradient-magnitude histogram of the image at the 70% percentile.
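A minimal sketch of the conduction function of formula (5), assuming L_sigma is the 9 × 9, σ = 1.6 Gaussian-smoothed grayscale image and that the contrast factor k has already been computed as described below; Scharr operators are used for the gradients, as in step b21:

```python
import cv2
import numpy as np

def conductance(L_sigma, k):
    """Formula (5): c = 1 / (1 + |grad L_sigma|^2 / k^2)."""
    Lx = cv2.Scharr(L_sigma, cv2.CV_32F, 1, 0)   # horizontal gradient L_sigma_x
    Ly = cv2.Scharr(L_sigma, cv2.CV_32F, 0, 1)   # vertical gradient L_sigma_y
    return 1.0 / (1.0 + (Lx * Lx + Ly * Ly) / (k * k))
```

For example, L_sigma = cv2.GaussianBlur(gray.astype(np.float32), (9, 9), 1.6) would produce the smoothed image assumed above.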
Further, the computation of the parameter k comprises the following steps:
b21. Convolve L_σ with the 3 × 3 Scharr operators to obtain the horizontal gradient image L_σx and the vertical gradient image L_σy;
b22. Calculate the gradient magnitude at each pixel of L_σ according to formula (6):
M(x, y) = √(L_σx(x, y)² + L_σy(x, y)²), where the maximum gradient magnitude is M_max;
b23. Divide the gradient-magnitude histogram into nbins = 300 bins, and assign the gradient magnitude of each pixel to its bin nbin according to formula (7):
Count the number of pixels whose gradient magnitude is not 0 and denote it nps; the value of the histogram at the 70% percentile is then nthresh = nps × 0.7;
b24. Starting from position 0, traverse each bin of the histogram and accumulate the bin values in the variable nelements. When nelements ≥ nthresh, record the position kperc of the current bin; the final expression for k is given by formula (8):
k = 0.03 if nelements < nthresh; otherwise k = M_max × kperc / nbins.
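A minimal sketch of steps b21–b24; since the body of formula (7) is not reproduced above, the binning rule is assumed here to map each gradient magnitude linearly onto the 300 bins:

```python
import cv2
import numpy as np

NBINS = 300  # number of histogram bins from step b23

def contrast_factor_k(L_sigma):
    """Steps b21-b24: k from the 70% percentile of the gradient-magnitude histogram."""
    # b21: Scharr gradients of the smoothed image
    Lx = cv2.Scharr(L_sigma, cv2.CV_32F, 1, 0)
    Ly = cv2.Scharr(L_sigma, cv2.CV_32F, 0, 1)

    # b22: gradient magnitude, formula (6)
    M = np.sqrt(Lx * Lx + Ly * Ly)
    M_max = M.max()
    if M_max == 0:
        return 0.03

    # b23: histogram over the non-zero magnitudes (linear binning assumed for formula (7))
    nonzero = M[M > 0]
    hist, _ = np.histogram(nonzero, bins=NBINS, range=(0, M_max))
    nps = nonzero.size
    nthresh = nps * 0.7

    # b24: walk the histogram until the 70% percentile is reached, formula (8)
    nelements = 0
    for kperc, count in enumerate(hist):
        nelements += count
        if nelements >= nthresh:
            return M_max * kperc / NBINS
    return 0.03  # nelements never reached nthresh
```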
In the same scene, even under large illumination changes, the invention detects image local edge feature points stably, and the recognition rate for camera blocking caused by various situations is high, reaching about 99%. The invention does not require algorithm parameters to be configured for different scenes and uses only default parameters, so it is well suited to uniformly deployed cloud surveillance systems. The invention can also accurately distinguish whether the camera is blocked or whether picture information has been lost due to a network fault.

Claims (6)

1. A camera occlusion detection method based on image local edge features, characterized in that it comprises the following steps:
a. Perform a grayscale transformation on the RGB image I captured by the camera to obtain a grayscale image L, and calculate the mean square deviation ω of the pixel gray values in the grayscale image L;
b. Determine whether the value of ω equals 0. If ω = 0, a network fault or equipment fault is indicated and the program terminates; if ω ≠ 0, extract the image local edge feature points through the following steps:
b1. Filter the grayscale image L with a Gaussian filter to construct a multi-scale space of N images; the multi-scale space consists of O octaves, each octave has S sublayers, and N = O × S. The scale parameter σ_i of each layer is indexed by the octave number o and the sublayer number s and is calculated according to formula (1):
o ∈ [0, …, O−1], s ∈ [0, …, S−1], i ∈ [0, …, N], where σ_0 is the initial base value of the scale parameter, with a default of 1.6;
b2. Apply a nonlinear diffusion filter to each layer of the multi-scale space to generate a nonlinear multi-scale space, in which the bottom image L_0 = L_σ, where L_σ is obtained by convolving the grayscale image L with a Gaussian kernel of size 9 × 9 and standard deviation 1.6; the remaining layers generate the evolution images L_i according to formula (2):
L_{i+1} = (I − τ · Σ_{l=1}^{m} A_l(L_i))^{−1} · L_i,
where A_l represents the conductance matrix of the image L_i in dimension l, I is the identity matrix, τ is the time step, τ = t_{i+1} − t_i, and t_i is the scale parameter expressed in time units,
t_i = σ_i² / 2, i ∈ [0, …, N];
b3. For each evolution image L_i generated in step b2, calculate its Hessian matrix response image L_i^Hessian according to formula (3):
L_i^Hessian = σ_{i,norm}² · (L_xx^i · L_yy^i − L_xy^i · L_xy^i), where L_xx^i is the second-order derivative of the evolution image L_i in the x direction, L_yy^i is the second-order derivative of L_i in the y direction, L_xy^i is the second-order mixed partial derivative of L_i, and σ_{i,norm} is the integer value of the scale corresponding to L_i;
b4. Use a non-maximum suppression algorithm over a 3 × 3 × 3 neighborhood to locate the local edge feature points in each Hessian response image L_i^Hessian;
c. Count the number n of extracted local edge feature points and compare n with the threshold n_0, where n_0 is the number of local feature points of the image captured when the camera is not blocked; if n ≤ n_0, issue an alarm that the camera is blocked.
2. The camera occlusion detection method based on image local edge features according to claim 1, characterized in that step b4 specifically comprises the following steps:
b41. Traverse each response value in the response image L_i^Hessian; if a response value is smaller than the preset threshold respthresh = 0.001, skip it and move on to the next response value;
b42. Compare the current response value of L_i^Hessian with its 8 neighboring points at the same scale and with the corresponding 9 × 2 points at the adjacent scales, 26 points in total, to ensure that maxima are detected in both scale space and the two-dimensional image space; such a maximum point is a local edge feature point.
3. The camera occlusion detection method based on image local edge features according to claim 1, characterized in that the RGB image I has a size of 1280 × 720 and the threshold n_0 is set to 100.
4. The camera occlusion detection method based on image local edge features according to claim 1, characterized in that each matrix A_l is tridiagonal and diagonally dominant.
5. The camera occlusion detection method based on image local edge features according to claim 1, characterized in that formula (2) is obtained by discretizing a nonlinear diffusion equation, which is given by formula (4):
∂L/∂t = div(c(x, y, t) · ∇L),
where div and ∇ denote the divergence and gradient operators respectively; t is the scale parameter expressed in time units; c(x, y, t) is the conduction function at evolution time t, which adapts the diffusion to the local structure of the image and is calculated according to formula (5):
c(x, y, t) = g(|∇L_σ(x, y, t)|) = 1 / (1 + (L_σx(x, y, t)² + L_σy(x, y, t)²) / k²),
where L_σx and L_σy are the horizontal and vertical gradient images of L_σ respectively; the parameter k is the contrast factor that controls the level of diffusion, and its value is taken as the value of the gradient-magnitude histogram of the image at the 70% percentile.
6. The camera occlusion detection method based on image local edge features according to claim 5, characterized in that the computation of the parameter k comprises the following steps:
b21. Convolve L_σ with the 3 × 3 Scharr operators to obtain the horizontal gradient image L_σx and the vertical gradient image L_σy;
b22. Calculate the gradient magnitude at each pixel of L_σ according to formula (6):
M(x, y) = √(L_σx(x, y)² + L_σy(x, y)²), where the maximum gradient magnitude is M_max;
b23. Divide the gradient-magnitude histogram into nbins = 300 bins, and assign the gradient magnitude of each pixel to its bin nbin according to formula (7):
Count the number of pixels whose gradient magnitude is not 0 and denote it nps; the value of the histogram at the 70% percentile is then nthresh = nps × 0.7;
b24. Starting from position 0, traverse each bin of the histogram and accumulate the bin values in the variable nelements. When nelements ≥ nthresh, record the position kperc of the current bin; the final expression for k is given by formula (8):
k = 0.03 if nelements < nthresh; otherwise k = M_max × kperc / nbins.
CN201510723896.0A 2015-10-29 2015-10-29 Camera detection method based on image local edge characteristics Pending CN105427276A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510723896.0A CN105427276A (en) 2015-10-29 2015-10-29 Camera detection method based on image local edge characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510723896.0A CN105427276A (en) 2015-10-29 2015-10-29 Camera detection method based on image local edge characteristics

Publications (1)

Publication Number Publication Date
CN105427276A true CN105427276A (en) 2016-03-23

Family

ID=55505457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510723896.0A Pending CN105427276A (en) 2015-10-29 2015-10-29 Camera detection method based on image local edge characteristics

Country Status (1)

Country Link
CN (1) CN105427276A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102111532A (en) * 2010-05-27 2011-06-29 周渝斌 Camera lens occlusion detecting system and method
CN102176244A (en) * 2011-02-17 2011-09-07 东方网力科技股份有限公司 Method and device for determining shielding condition of camera head
CN103139547A (en) * 2013-02-25 2013-06-05 昆山南邮智能科技有限公司 Method of judging shielding state of pick-up lens based on video image signal

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
PABLO FERNANDEZ ALCANTARILLA et al.: "KAZE Features", ECCV 2012 *
周励琨: "Design and Development of a Video Quality Anomaly Detection System for Video Surveillance", China Master's Theses Full-text Database, Information Science and Technology *
曹金燕: "Research on Image-Feature-Based Matching Methods", China Master's Theses Full-text Database, Information Science and Technology *
罗显科: "Anomaly Detection and Quality Evaluation of Video Surveillance Images", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109120916A (en) * 2017-06-22 2019-01-01 杭州海康威视数字技术股份有限公司 Fault of camera detection method, device and computer equipment
CN109120916B (en) * 2017-06-22 2020-06-05 杭州海康威视数字技术股份有限公司 Camera fault detection method and device and computer equipment
CN111027398A (en) * 2019-11-14 2020-04-17 深圳市有为信息技术发展有限公司 Automobile data recorder video occlusion detection method
CN111967345A (en) * 2020-07-28 2020-11-20 国网上海市电力公司 Method for judging shielding state of camera in real time
CN111967345B (en) * 2020-07-28 2023-10-31 国网上海市电力公司 Method for judging shielding state of camera in real time
CN113298808A (en) * 2021-06-22 2021-08-24 哈尔滨工程大学 Method for repairing building shielding information in tilt-oriented remote sensing image
CN113298808B (en) * 2021-06-22 2022-03-18 哈尔滨工程大学 Method for repairing building shielding information in tilt-oriented remote sensing image

Similar Documents

Publication Publication Date Title
CN108682039B (en) Binocular stereo vision measuring method
WO2021139197A1 (en) Image processing method and apparatus
CN105427276A (en) Camera detection method based on image local edge characteristics
CN113343779B (en) Environment abnormality detection method, device, computer equipment and storage medium
CN110751195B (en) Fine-grained image classification method based on improved YOLOv3
CN113012383B (en) Fire detection alarm method, related system, related equipment and storage medium
CN112417955B (en) Method and device for processing tour inspection video stream
CN108550166A (en) A kind of spatial target images matching process
CN109030499B (en) Device and method suitable for continuous online detection of target defects and preventing repeated counting of defect number
CN111967345B (en) Method for judging shielding state of camera in real time
CN115526892A (en) Image defect duplicate removal detection method and device based on three-dimensional reconstruction
CN114926781A (en) Multi-user time-space domain abnormal behavior positioning method and system supporting real-time monitoring scene
CN110414430B (en) Pedestrian re-identification method and device based on multi-proportion fusion
KR101270718B1 (en) Video processing apparatus and method for detecting fire from video
CN106778822B (en) Image straight line detection method based on funnel transformation
CN110738229B (en) Fine-grained image classification method and device and electronic equipment
CN116543333A (en) Target recognition method, training method, device, equipment and medium of power system
CN109558881A (en) A kind of crag avalanche monitoring method based on computer vision
CN111598943B (en) Book in-place detection method, device and equipment based on book auxiliary reading equipment
CN112116561B (en) Power grid transmission line detection method and device based on image processing fusion network weight
CN114359183A (en) Image quality evaluation method and device, and lens occlusion determination method
CN111191593A (en) Image target detection method and device, storage medium and sewage pipeline detection device
Deokar Implementation of Canny Edge Detector Algorithm using FPGA
CN112102365A (en) Target tracking method based on unmanned aerial vehicle pod and related device
CN112581489A (en) Video compression method, device and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160323

RJ01 Rejection of invention patent application after publication