CN102982313A - Smoke detection method - Google Patents

Smoke detection method

Info

Publication number
CN102982313A
Authority
CN
China
Prior art keywords
frame image
moving region
pixel
image
smoke
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012104277419A
Other languages
Chinese (zh)
Other versions
CN102982313B (en)
Inventor
阮锐
吴翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN HR-SKYEYES Co Ltd
Original Assignee
SHENZHEN HR-SKYEYES Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHENZHEN HR-SKYEYES Co Ltd filed Critical SHENZHEN HR-SKYEYES Co Ltd
Priority to CN201210427741.9A
Publication of CN102982313A
Application granted
Publication of CN102982313B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a smoke detection method, aimed at smoke detection in dark scenes. The method comprises two stages: a classifier training stage and a smoke detection stage. The classifier training stage comprises the following steps: receiving sample video information, detecting the moving regions in the scene by background subtraction, extracting the motion features of the moving regions, combining the extracted motion features of each moving region into a motion feature vector using a support vector machine, and storing the vectors in a classifier. The smoke detection stage comprises the following steps: receiving video information to be detected, detecting the motion features of the moving regions and combining them into motion feature vectors in the same way as in the classifier training stage, inputting the motion feature vectors into the classifier, obtaining the probability that each moving region of a single frame belongs to smoke, comprehensively analyzing the regions belonging to the same target, and judging whether the target is smoke. The method can achieve smoke detection over a large spatial range in dark scenes in a simple way, and can provide a safety guarantee for the fire prevention and control of enclosed large warehouses.

Description

Smoke detection method
Technical field
The present invention relates to the field of security and surveillance, and in particular to a smoke detection method.
Background art
Traditional fire alarm systems based on ionization smoke detectors and photoelectric smoke detectors are very widely used in fire prevention and control because they are inexpensive. However, such detectors can only raise an alarm after coming into contact with smoke of a certain density, so they cannot be applied to large spaces or open-air environments.
Computer vision mainly studies methods for obtaining information from image data. Smoke detection methods based on computer vision analyze the content of video images to judge whether smoke is present in the scene; they therefore need no contact with the smoke and can monitor large spaces and open-air areas. In addition, a fire alarm system based on video surveillance can transmit the video of the monitored scene in real time; after an alarm, firefighters can use the video to verify promptly whether the alarm is genuine, avoiding the losses caused by false alarms of the fire prevention and control system; for a real fire, the video images also help firefighters understand the situation at the fire scene and formulate an effective fire-fighting plan in time.
Smoke detection is a detection and recognition problem for a specific target in the field of computer vision. The smoke detection algorithms currently in practical use mainly fall into the following categories: 1. Smoke detection based on color information. Color information is frequently used in image analysis, and smoke can be detected by searching the video image for regions whose color is similar to that of smoke. However, smoke detection based on color information is easily disturbed by targets of similar color; moreover, the color of the smoke released by different burning materials differs considerably, which is another key limitation on the use of color information in smoke detection. 2. Smoke detection based on motion information.
The optical flow in a scene reflects the direction of motion of each point in the video stream; some researchers compute the optical flow of the scene and search for moving regions whose motion resembles smoke diffusion, thereby finding suspected smoke regions. However, the accuracy of the optical flow computation and the imaging conditions of the monitored area both have a significant influence on the accuracy of the smoke detection result. 3. Smoke detection based on wavelet analysis. Wavelet analysis can analyze an image in the frequency domain and the spatial domain simultaneously and has important applications in many image processing problems. Researchers have analyzed the differences between smoke regions and non-smoke regions in the wavelet domain, such as the relation between energy loss and retained energy and the statistical laws of the wavelet coefficients, and have obtained good smoke detection results. However, wavelet analysis methods often work only for smoke of a specific form and cannot satisfy the application requirements of some specific occasions; in addition, for video of poor image quality, noise also has a considerable influence on the information in the wavelet domain of the image. Although researchers have proposed different smoke detection algorithms according to the different characteristics of smoke, existing research on computer-vision-based smoke detection concentrates mainly on visible-light scenes and cannot be applied to dark enclosed spaces and night scenes. For this reason, it is necessary to further improve the above smoke detection algorithms.
Summary of the invention
The present invention proposes a smoke detection method that can accurately detect smoke in real time in dark enclosed spaces and night scenes.
The technical scheme adopted by the present invention is as follows: a smoke detection method is provided that comprises two stages, a classifier generation stage and a smoke detection stage;
The classifier generation stage comprises the following steps:
S11, receiving a sample video;
S12, analyzing the sample video, generating a moving-foreground binary image, and performing connected-component labeling on the moving-foreground binary image, specifically: the first frame of the sample video is taken as the background image; each pixel of frame t is subtracted from the corresponding pixel of the background image for frame t and the absolute value of the difference is taken; the absolute value of the difference is compared with a motion detection threshold; if the absolute value of the difference is greater than the motion detection threshold, the pixel is set as belonging to a moving region and assigned a first gray value, the neighboring pixels having the same gray value as this pixel are searched for and labeled as one connected component; if the absolute value of the difference is not greater than the motion detection threshold, the pixel is set as not belonging to a moving region and is assigned a second gray value;
S13, determining the inter-frame relation of moving regions: for each moving region in the connected components of frame t, the distances to all moving regions in the connected components of frame t-1 are calculated and compared with a distance threshold; if a distance is less than the distance threshold, the moving region in frame t and the corresponding moving region in frame t-1 are labeled as the same target; if not, the next moving region of frame t or the next frame image is processed;
S14, calculating the moving-region features of a single frame: for a moving region in frame t, if a moving region produced by the motion of the same target can be found in frame t-1, the moving-region features of frame t for that target are calculated, and the category attribute of the moving region is marked and saved;
S15, generating the classifier: after motion features have been extracted and category attributes marked for all moving regions in the sample video information, the extracted motion features of each moving region are combined into a motion feature vector with its category label, and all the motion feature vectors and category labels are stored in the classifier;
The smoke detection stage comprises the following steps:
S21, receiving a video to be detected;
S22, analyzing the video to be detected, generating a moving-foreground binary image, and performing connected-component labeling on the moving-foreground binary image, specifically: the first frame of the video to be detected is taken as the background image; each pixel of frame t is subtracted from the corresponding pixel of the background image for frame t and the absolute value of the difference is taken; the absolute value of the difference is compared with the motion detection threshold; if the absolute value of the difference is greater than the motion detection threshold, the pixel is set as belonging to a moving region and assigned a first gray value, the neighboring pixels having the same gray value as this pixel are searched for and labeled as one connected component; if the absolute value of the difference is not greater than the motion detection threshold, the pixel is set as not belonging to a moving region and is assigned a second gray value;
S23, determining the inter-frame relation of moving regions: for each moving region in the connected components of frame t, the distances to all moving regions in the connected components of frame t-1 are calculated and compared with the distance threshold; if a distance is less than the distance threshold, the corresponding regions in frame t and frame t-1 are labeled as the same target; if not, the target is judged to be a newly appearing moving region, and the method returns to step S22;
S24, calculating the probability for the moving regions of a single frame: for a moving region in frame t, if a moving region produced by the motion of the same target can be found in frame t-1, the moving-region features of frame t for that target are calculated, and the category attribute of the moving region is marked and saved; after motion features have been extracted and category attributes marked for all moving regions in the video information to be detected, the extracted motion features of each moving region are combined into a motion feature vector with its category label, all the motion feature vectors and category labels are input into the classifier, and the probability for each moving region of the single frame is calculated;
S25, comprehensive target analysis: it is judged whether the number of frames in which a target has existed is greater than a frame-number threshold; if so, the average probability that the regions corresponding to the target over those frames are smoke is calculated, and the relation between this average probability and a smoke alarm threshold is judged, specifically: if the average probability is greater than the smoke alarm threshold, the target is judged to be smoke and an alarm is raised; if the average probability is less than or equal to the smoke alarm threshold, the target is judged to be non-smoke and the method returns to step S12; if the number of frames is not greater than the frame-number threshold, the target is judged to be non-smoke and the method returns to step S12.
Preferably, in step S12, between generating the moving-foreground binary image and performing connected-component labeling on it, the method further comprises a step S121 of judging whether the background image needs to be updated:
The absolute values of the differences between corresponding pixels of frame F_t and the background image B_t are summed to obtain the total difference between the current frame and the background image:
diff_t^{B,F} = \sum_{(x,y) \in F_t} |F_t(x,y) - B_t(x,y)|,
diff_t^{B,F} is compared with a background update threshold ΔB; if diff_t^{B,F} is greater than ΔB and no suspected smoke region that has raised an alarm exists among the current targets, the background image is updated by taking F_t as the new background image B_{t+1}; otherwise the background image is not updated, that is:
B_{t+1} = F_t if diff_t^{B,F} > ΔB, else B_{t+1} = B_t.
Preferably, after step S121 and before connected-component labeling of the moving-foreground binary image, the method further comprises a step S122 of filtering the moving-foreground binary image: for any pixel of the moving-foreground binary image, the gray values of the 8 pixels of its neighborhood are sorted, and the median of the gray values of the 8 neighboring pixels is chosen as the new gray value of the center pixel.
Preferably, step S13 is specifically: calculating the distance between each moving region of frame t and each moving region of frame t-1 as follows:
the mean of the i-th moving region of frame t and of the j-th moving region of frame t-1 is calculated;
the variance of the i-th moving region of frame t and of the j-th moving region of frame t-1 is calculated;
the distance between the i-th moving region of frame t and the j-th moving region of frame t-1 is then obtained by combining the mean term, the variance term and the distance between the region centers, where (x_i^t, y_i^t) is the center of the i-th moving region in frame t, (x_j^{t-1}, y_j^{t-1}) is the center of the j-th moving region in frame t-1, and λ_mean, λ_variance and λ_location are respectively the weight parameters of the mean, the variance and the center;
for the i-th moving region of the current frame t, the distance vector to all N_{t-1} moving regions in frame t-1 is calculated, the minimum of these N_{t-1} distances is then taken, and the minimum is compared with the distance threshold:
if the minimum distance is less than the distance threshold, it is judged that the i-th moving region in frame t and the region of frame t-1 producing the minimum distance are produced by the motion of the same target;
if the minimum distance is not less than the distance threshold, it is judged that the i-th moving region in frame t is unrelated to all moving regions in frame t-1 and is a newly appearing moving region or noise.
Preferably, the moving-region features in step S14 comprise: the inter-frame motion coefficient of the region between frame t and frame t-1 in the X direction; the inter-frame motion coefficient of the region between frame t and frame t-1 in the Y direction; the region area change coefficient of frame t; the normalized gray mean of the region of frame t; the normalized histogram of the gray-mean crossing counts of the moving region of frame t over the preceding history frames; the normalized mean and variance, within the moving region, of the maximum gray-change image over the history frames preceding frame t; and the ratio of large-gradient pixels to the region area in the moving region of frame t.
Preferably, in step S22, between generating the moving-foreground binary image and performing connected-component labeling on it, the method further comprises a step S221 of judging whether the background image of the video to be detected needs to be updated, and a step S222 of filtering the moving-foreground binary image of the video to be detected.
Preferably, step S23 is specifically: calculating the distance between each moving region of frame t and each moving region of frame t-1 in the moving-foreground binary image of the video to be detected, and comparing the minimum distance with the distance threshold; if the minimum distance is less than the distance threshold, it is judged that the i-th moving region in frame t and the region of frame t-1 producing the minimum distance are produced by the motion of the same target; otherwise, it is judged that the i-th moving region in frame t is unrelated to all moving regions in frame t-1 and is a newly appearing moving region or noise.
Preferably, the moving-region features in step S24 comprise: the inter-frame motion coefficient of the region between frame t and frame t-1 in the X direction; the inter-frame motion coefficient of the region between frame t and frame t-1 in the Y direction; the region area change coefficient of frame t; the normalized gray mean of the region of frame t; the normalized histogram of the gray-mean crossing counts of the moving region of frame t over the preceding history frames; the normalized mean and variance, within the moving region, of the maximum gray-change image over the history frames preceding frame t; and the ratio of large-gradient pixels to the region area in the moving region of frame t.
Preferably, the frame-number threshold in step S25 is 10.
The beneficial technical effects of the present invention are as follows: the invention provides a smoke detection method that mainly comprises a classifier training stage and a smoke detection stage. The process of the classifier training stage is: receiving sample video information, obtaining a number of moving regions from the sample video as samples, extracting the features of these training samples, manually marking the attribute of each training sample (whether it is a potential smoke moving region), then combining the motion features extracted from all moving regions into motion feature vectors with category labels by means of a classifier training algorithm, and storing them in a classifier. The process of the smoke detection stage is: receiving video information to be detected, detecting the motion features of the moving regions and combining them into motion feature vectors in the same way as in the classifier training stage, inputting the motion feature vectors into the classifier, obtaining the probability that each moving region of a single frame belongs to smoke, comprehensively analyzing the regions of the same target, and judging whether the target is smoke. The present invention can achieve real-time smoke detection over a large spatial range in dark scenes in a simple way, and can provide a safety guarantee for the fire prevention and control of enclosed large warehouses.
Description of drawings
Fig. 1 is a flow chart of the classifier training stage of the smoke detection method of the present invention;
Fig. 2 is a flow chart of the smoke detection stage of the smoke detection method of the present invention;
Fig. 3a is the background image used during smoke detection in a near-infrared scene in the embodiment;
Fig. 3b is the video image at the moment the alarm occurs during smoke detection in the near-infrared scene in the embodiment;
Fig. 3c is the moving-region foreground image corresponding to Fig. 3b;
Fig. 3d is the alarm image corresponding to Fig. 3b.
Detailed description of the embodiments
In order to describe in detail the technical content, structural features, objects and effects of the present invention, a detailed description is given below with reference to the embodiments and the accompanying drawings.
Referring to Fig. 1 and Fig. 2, the present embodiment provides a smoke detection method that mainly comprises a classifier generation stage and a smoke detection stage.
In the classifier training stage, the process is roughly as follows: sample video information is received, a number of moving regions are obtained from the sample video as samples, the features of these training samples are extracted, and the attribute of each training sample (whether it is a smoke region) is marked manually; then, by means of a classifier training algorithm, the motion features extracted from all moving regions are combined into motion feature vectors with category labels and stored in a classifier. The stage specifically comprises the following steps:
S11, receiving a sample video;
S12, analyzing the sample video, generating a moving-foreground binary image, and performing connected-component labeling on the moving-foreground binary image.
The method used in the present invention to obtain moving regions is background subtraction: whether a pixel belongs to a moving region is judged from the magnitude of the difference between the current frame and the estimated background. The specific method is as follows:
When detection starts, the first frame of the sample video is taken as the background image B_1. For each pixel F_t(x,y) of frame t, the corresponding pixel of the current background image B_t is subtracted and the absolute value of the difference is taken; the absolute value is compared with a motion detection threshold ΔF. If it is greater than ΔF, motion is considered to exist at that point and the corresponding pixel of the foreground image is set to 255; otherwise it is set to 0. The moving-foreground image is thereby obtained, that is:
\hat{F}_t(x,y) = 255 if |F_t(x,y) - B_t(x,y)| > ΔF, else 0
where ΔF is the motion detection threshold, set manually according to the quality of the video image and the smoke density to be detected; typical values lie between 10 and 20.
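As an illustration only, the frame-differencing step described above can be sketched as follows (a minimal sketch in Python with NumPy; the function name, array layout and default threshold are assumptions of this example and are not specified by the patent):

import numpy as np

def motion_foreground(frame, background, delta_f=15):
    """Return the binary moving-foreground image of step S12.

    frame, background: 2-D uint8 gray-level images of the same size.
    delta_f: motion detection threshold (the patent suggests 10 to 20).
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    # Pixels whose difference exceeds the threshold are marked as moving (255).
    return np.where(diff > delta_f, 255, 0).astype(np.uint8)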
S121, judging whether the background image needs to be updated
The absolute values of the differences between corresponding pixels of frame F_t and the background image B_t are accumulated to obtain the total difference between the current frame and the background image:
diff_t^{B,F} = \sum_{(x,y) \in F_t} |F_t(x,y) - B_t(x,y)|
diff_t^{B,F} is compared with the background update threshold ΔB. If diff_t^{B,F} is greater than ΔB and no suspected smoke region that has raised an alarm exists in the current scene, the background image is updated by taking F_t as the new background image B_{t+1}, that is:
B_{t+1} = F_t if diff_t^{B,F} > ΔB, else B_{t+1} = B_t
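A corresponding sketch of the conditional background update of step S121, under the same assumptions (Python/NumPy, illustrative names; the alarm_active flag stands for "a suspected smoke region has already raised an alarm"):

import numpy as np

def update_background(frame, background, delta_b, alarm_active):
    """Conditional background update of step S121 (sketch).

    The background is replaced by the current frame only when the total
    frame/background difference exceeds delta_b and no suspected smoke
    region has raised an alarm in the current scene.
    """
    total_diff = np.abs(frame.astype(np.int32) - background.astype(np.int32)).sum()
    if total_diff > delta_b and not alarm_active:
        return frame.copy()   # B_{t+1} = F_t
    return background         # B_{t+1} = B_t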
S122, filtering the moving-foreground image
The signal-to-noise ratio of near-infrared video images is low, and the initial foreground image obtained in step S12 contains isolated noise points. To filter out the isolated noise points and obtain a better foreground image, a median filter is applied to the foreground image in this embodiment.
Median filtering is a nonlinear signal processing technique, based on order statistics, that can suppress noise. Its basic principle is to replace the value of a pixel with the median of the pixel values in a neighborhood of that pixel, bringing the value of the pixel closer to the true value and thereby eliminating isolated noise points. In this embodiment the neighborhood used for median filtering is the 8-neighborhood of the pixel, i.e., the median of the gray values of all pixels in the 8-neighborhood is taken as the filtered result for that pixel. The neighborhood of a pixel (x,y) is defined as follows: the pixel has 4 horizontal and vertical neighbors with coordinates (x+1,y), (x-1,y), (x,y+1) and (x,y-1), which together are called the 4-neighborhood of (x,y); the 4 diagonal neighbors of (x,y) have coordinates (x+1,y+1), (x+1,y-1), (x-1,y+1) and (x-1,y-1). These 8 points together are called the 8-neighborhood of (x,y). If (x,y) lies on the border of the image, some points of its 8-neighborhood fall outside the image.
After the binary foreground image has been filtered, the pixels whose value is 255 and that lie in each other's 8-neighborhoods are marked with the same label; all pixels carrying the same label in the marked image then belong to the same connected component. Suppose a total of N_t connected regions are marked in frame t of the sample video.
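The connected-component labeling step can be sketched, for illustration, with SciPy's 8-connected labeling (the use of SciPy is an assumption; the patent does not prescribe an implementation):

import numpy as np
from scipy import ndimage

def label_regions(foreground):
    """8-connected component labeling of the filtered binary foreground image.

    Returns the label image and the number N_t of connected regions.
    """
    structure = np.ones((3, 3), dtype=int)   # 8-connectivity
    labels, n_regions = ndimage.label(foreground == 255, structure=structure)
    return labels, n_regions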
S13, determining the inter-frame relation of moving regions
For each region i of the N_t connected regions in frame t of the sample video, the distance to every region j in frame t-1 is calculated in order to determine whether moving region i in frame t and some region in frame t-1 are produced by the motion of the same target. The distance between moving regions is computed as follows:
(1) calculating the distance between moving regions
The distance between the i-th moving region of frame t and the j-th moving region of frame t-1 is calculated as follows:
the mean of the i-th moving region of frame t and of the j-th moving region of frame t-1 is calculated;
the variance of the i-th moving region of frame t and of the j-th moving region of frame t-1 is calculated;
the distance between the i-th moving region of frame t and the j-th moving region of frame t-1 is then obtained by combining the mean term, the variance term and the distance between the region centers, where (x_i^t, y_i^t) is the center of the i-th moving region in frame t, (x_j^{t-1}, y_j^{t-1}) is the center of the j-th moving region in frame t-1, and λ_mean, λ_variance and λ_location are respectively the weight parameters of the mean, the variance and the center;
(2) determining the inter-frame relation of the regions
For the i-th moving region of the current frame t, the distance vector to all N_{t-1} regions in frame t-1 is calculated (the dimension of the vector is the number N_{t-1} of moving targets in frame t-1); the minimum of these N_{t-1} distances is then taken and compared with the distance threshold:
if the minimum distance is less than the distance threshold, it is considered that the i-th moving region in frame t and the region k of frame t-1 producing the minimum distance are produced by the motion of the same target;
if the minimum distance is not less than the distance threshold, it is considered that the i-th moving region in frame t is unrelated to all moving regions in frame t-1 and is a newly appearing moving region or noise.
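For illustration, the inter-frame region matching of step S13 could be sketched as follows; the weighted absolute-difference form of the distance is an assumption, since the exact formula appears only as a figure in the original, and the mean and variance are assumed here to be gray-level statistics of the region:

import numpy as np

def region_stats(gray, region_mask):
    """Mean, variance and center of a moving region (helper, assumed form)."""
    ys, xs = np.nonzero(region_mask)
    vals = gray[ys, xs].astype(np.float64)
    return vals.mean(), vals.var(), (xs.mean(), ys.mean())

def region_distance(stat_i, stat_j, lam_mean=1.0, lam_var=1.0, lam_loc=1.0):
    """Weighted distance between region i of frame t and region j of frame t-1.

    The patent combines mean, variance and center terms with the weights
    lambda_mean, lambda_variance, lambda_location; the absolute-difference
    combination used here is an assumption.
    """
    (m_i, v_i, (xi, yi)), (m_j, v_j, (xj, yj)) = stat_i, stat_j
    loc = np.hypot(xi - xj, yi - yj)
    return lam_mean * abs(m_i - m_j) + lam_var * abs(v_i - v_j) + lam_loc * loc

def match_region(stat_i, stats_prev, dist_threshold):
    """Return the index k of the matching region in frame t-1, or None."""
    if not stats_prev:
        return None
    dists = [region_distance(stat_i, s) for s in stats_prev]
    k = int(np.argmin(dists))
    return k if dists[k] < dist_threshold else None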
S14, calculating the moving-region features of a single frame
For a moving region i in the current frame t, if a moving region produced by the motion of the same target can be found in frame t-1, its features are calculated and its category attribute is marked:
(1) calculating the inter-frame motion coefficients of the moving region of frame t in the X and Y directions
The motion coefficients are obtained from the center (x_i^t, y_i^t) of the i-th moving region in frame t and the center (x_k^{t-1}, y_k^{t-1}) of the corresponding moving region k in frame t-1, where λ_x and λ_y are respectively the weight parameters in the X direction and the Y direction;
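A hedged sketch of the inter-frame motion coefficients; the exact expressions appear only as figures in the original, so the plain weighted center displacement used here is an illustrative assumption:

def motion_coefficients(center_t, center_prev, lam_x=1.0, lam_y=1.0):
    """Inter-frame motion coefficients in the X and Y directions (assumed form).

    The patent weights the displacement of the region centers by lambda_x
    and lambda_y; the exact formulas are not reproduced in the text, so this
    weighted displacement is only an illustration.
    """
    (x_t, y_t), (x_p, y_p) = center_t, center_prev
    return lam_x * (x_t - x_p), lam_y * (y_t - y_p)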
(2) the region area change coefficient between frame t and frame t-1
Aquotient_i^t = Area_i^t / Area_k^{t-1}
where Area_i^t denotes the area of moving region i in frame t, and Area_k^{t-1} denotes the area of the corresponding region k in frame t-1;
(3) calculating the normalized gray mean of the region of frame t
Mean_i^t = ( \sum_{(x,y) \in I_i^t} F_t(x,y) / Area_i^t ) / 255
where (x,y) \in I_i^t indicates that the point (x,y) lies within moving region i of the current frame t;
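The two features above can be sketched directly (Python/NumPy assumed; function names are illustrative):

import numpy as np

def area_quotient(area_t, area_prev):
    """Region area change coefficient: Aquotient_i^t = Area_i^t / Area_k^{t-1}."""
    return area_t / float(area_prev)

def normalized_gray_mean(gray, region_mask):
    """Normalized gray mean of the region: (sum of gray values / area) / 255."""
    vals = gray[region_mask].astype(np.float64)
    return (vals.sum() / vals.size) / 255.0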
(4) computing the normalized histogram of the gray-mean crossing counts of the moving region of frame t over the preceding history frames
First, the gray-mean crossing count MCR_{t-ζ,t}(x,y) of each pixel in the moving region over the past ζ frames is calculated. MCR_{t-ζ,t}(x,y) is the number of times, within the time range [t-ζ, t], that the gray values of two adjacent frames cross the gray mean M_{t-ζ,t}(x,y) of that point over all ζ frames; it reflects how frequently motion occurs at point (x,y). MCR_{t-ζ,t}(x,y) is computed as follows:
(a) set MCR_{t-ζ,t}(x,y) = 0;
(b) for ω = t-ζ, ..., t-1, if
(F_ω(x,y) - M_{t-ζ,t}(x,y)) × (M_{t-ζ,t}(x,y) - F_{ω+1}(x,y)) < 0,
then MCR_{t-ζ,t}(x,y) = MCR_{t-ζ,t}(x,y) + 1.
After the gray-mean crossing count of every pixel in the region has been obtained, the histogram HistMCR_i^t of the crossing counts within the region is computed. For any bin b:
HistMCR_i^t(b) = #_{(x,y) \in I_i^t}( MCR_{t-ζ,t}(x,y) = b ) / Area_i^t
That is, bin b of the crossing-count histogram stores the normalized number of pixels in the region whose crossing count equals b, where #_{(x,y) \in I_i^t}( MCR_{t-ζ,t}(x,y) = b ) denotes the number of pixels within region i that satisfy MCR_{t-ζ,t}(x,y) = b. In practice, within the time period [t-ζ, t] the crossing counts tend to concentrate in a few small bins; therefore, to save memory, all pixels whose crossing count exceeds a threshold Δb are grouped into a single bin. In actual use, only the numbers of pixels whose crossing count is 1, 2, 3 or 4 are counted separately, and pixels whose crossing count is greater than or equal to 5 are grouped into one class, so the histogram HistMCR_i^t contains 5 bins.
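A sketch of the gray-mean crossing-count histogram under the stated 5-bin scheme (Python/NumPy assumed; treating a crossing count of 0 as falling outside the histogram is an assumption of this sketch):

import numpy as np

def mcr_histogram(history, region_mask, n_bins=5):
    """Normalized histogram of gray-mean crossing counts, steps (a)-(b) above.

    history: array of shape (zeta + 1, H, W) holding frames F_{t-zeta} .. F_t.
    Counts 1, 2, 3, 4 get their own bins; counts >= 5 share the last bin.
    """
    frames = history.astype(np.float64)
    mean_img = frames.mean(axis=0)                         # M_{t-zeta,t}(x, y)
    # Crossing condition between consecutive frames, as written in step (b).
    crossings = ((frames[:-1] - mean_img) * (mean_img - frames[1:]) < 0)
    mcr = crossings.sum(axis=0)                            # MCR_{t-zeta,t}(x, y)
    counts = mcr[region_mask]
    area = float(counts.size)
    hist = np.zeros(n_bins)
    for b in range(1, n_bins):                             # bins for counts 1..4
        hist[b - 1] = np.count_nonzero(counts == b) / area
    hist[n_bins - 1] = np.count_nonzero(counts >= n_bins) / area
    return hist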
(5) the normalized mean and variance, within the moving region, of the maximum gray-change image over the preceding history frames
The maximum gray-change image TCchange_{t-ζ,t} is the image formed, over the past ζ frames, by the maximum absolute gray change between adjacent frames at each pixel:
TCchange_{t-ζ,t}(x,y) = max_{q \in [t-ζ, t-1]} |F_q(x,y) - F_{q+1}(x,y)|
By taking the maximum at each pixel, this image represents the gray variation at that pixel over the past ζ frames; it amounts to a dimensionality reduction of the gray changes within the moving region over the past ζ frames. The mean and the variance of TCchange_{t-ζ,t} within region i are then computed; they characterize the overall gray-variation tendency and the degree of irregularity within the moving region over the ζ frames.
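A sketch of the mean and variance of the maximum gray-change image within a region (Python/NumPy assumed; dividing by 255 to normalize both statistics is an assumption, since the original only states that the values are normalized):

import numpy as np

def temporal_change_stats(history, region_mask):
    """Mean and variance of TCchange_{t-zeta,t} inside the moving region.

    history: array of shape (zeta + 1, H, W) of the past frames.
    """
    frames = history.astype(np.float64)
    tc_change = np.abs(np.diff(frames, axis=0)).max(axis=0)   # TCchange image
    vals = tc_change[region_mask] / 255.0                      # assumed normalization
    return vals.mean(), vals.var()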
(6) calculating the ratio of large-gradient pixels to the region area
GRADquotient_i^t denotes the proportion of pixels in region i whose gradient is greater than the threshold ΔGrad:
GRADquotient_i^t = #_{(x,y) \in I_i^t}( F_t^{grad}(x,y) > ΔGrad ) / Area_i^t
where F_t^{grad}(x,y) is the gradient of the image F_t at point (x,y), and #_{(x,y) \in I_i^t}( F_t^{grad}(x,y) > ΔGrad ) denotes the number of pixels within region i that satisfy that condition.
ΔGrad is the gradient threshold; it can be preset according to the amount of edge information in the scene, or set adaptively from features of the region. In this example the mean brightness of the region is used as the gradient threshold, i.e., brighter areas are allowed to have obvious edges, while darker areas should not show obvious edge information.
The gradient is computed in this example with the Sobel operator, one of the most commonly used operators for computing digital image gradients. The image F_t is convolved with the horizontal and vertical Sobel templates to obtain the two convolution results, and the gradient image is obtained by combining them.
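A sketch of the large-gradient ratio with Sobel gradients (Python/SciPy assumed; combining the two Sobel responses into a gradient magnitude is an assumption, since the exact combination appears only as a figure in the original):

import numpy as np
from scipy import ndimage

def large_gradient_ratio(gray, region_mask):
    """Ratio of large-gradient pixels to the region area (GRADquotient).

    The gradient threshold is the mean brightness of the region, as in this
    example of the patent.
    """
    img = gray.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    grad = np.hypot(gx, gy)                      # assumed combination of Gx, Gy
    delta_grad = img[region_mask].mean()         # region mean brightness
    large = np.count_nonzero(grad[region_mask] > delta_grad)
    return large / float(np.count_nonzero(region_mask))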
After the features have been calculated, the category attribute of the moving region needs to be marked manually; in this example this means marking whether the moving region is produced by smoke. During marking, the foreground regions produced by smoke motion are taken as positive samples for training: the category attribute of a smoke region is marked as 1, and the category attribute of all other moving regions is marked as -1.
S15, generating the classifier
After feature extraction and category marking have been performed for all moving regions in a group of training videos, the feature vectors and the corresponding category labels of S moving-region samples are obtained. Let s denote the index of a sample; the feature vector of sample s is f_s and its category label is Label_s. An SVM classifier is trained on the information of all S samples, yielding the classifier C.
At this point the training module is complete and the SVM classifier C has been obtained.
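For illustration, the classifier training of step S15 might be sketched with a probability-calibrated SVM (scikit-learn and the RBF kernel are assumptions of this sketch; the patent only specifies an SVM classifier):

import numpy as np
from sklearn.svm import SVC

def train_smoke_classifier(feature_vectors, labels):
    """Train the SVM classifier C of step S15 (sketch).

    feature_vectors: array of shape (S, d), one row f_s per moving-region sample.
    labels: array of S category labels in {1, -1}.
    """
    clf = SVC(kernel="rbf", probability=True)   # probability outputs are needed in S24
    clf.fit(np.asarray(feature_vectors), np.asarray(labels))
    return clf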
The smoke detection stage comprises the following steps:
After the training module has produced the classifier, a video to be detected is received and it can be determined whether smoke exists in the video scene. The specific detection method is as follows:
S21, receiving the video to be detected;
S22, analyzing the video to be detected, generating a moving-foreground binary image, and performing connected-component labeling on the moving-foreground binary image, specifically: the first frame of the video to be detected is taken as the background image; each pixel of frame t is subtracted from the corresponding pixel of the background image for frame t and the absolute value of the difference is taken; the absolute value of the difference is compared with the motion detection threshold; if it is greater than the motion detection threshold, the pixel is set as belonging to a moving region and assigned a first gray value, the neighboring pixels having the same gray value are searched for and labeled as one connected component; if it is not greater than the motion detection threshold, the pixel is set as not belonging to a moving region and is assigned a second gray value. The specific method is the same as in step S12 of the classifier generation stage and is not repeated here.
S23, determining the inter-frame relation of moving regions;
After connected-component labeling has yielded N_t moving regions, the inter-frame relation determination method of the training stage is used to search frame t-1 for a moving region k that is produced by the motion of the same target as the current region i;
if such a moving region k exists in frame t-1 and it is produced by the same moving target as the current moving region, the current moving region i is given the same target label as region k in frame t-1;
otherwise, the i-th moving region is considered a newly appearing moving region, a new target label is generated for it, and the method returns to step S22. The specific method is the same as in step S13 of the classifier generation stage and is not repeated here.
S24, calculating the probability for the moving regions of a single frame;
For a new target for which no corresponding associated region can be found in frame t-1, its features cannot be extracted and classified (computing some of the features requires information from the corresponding region in the previous frame); the probability that such a moving region belongs to smoke is then set to 0.01 (when the features cannot be computed, prior knowledge suggests that the probability that a moving region inside the warehouse is smoke is extremely low);
For a region for which a corresponding associated region can be found in frame t-1, the following features are extracted:
the inter-frame motion coefficients of the region in the X and Y directions;
the region area change coefficient;
the normalized gray mean of the region;
the normalized histogram of the gray-mean crossing counts of the moving region over the preceding history frames;
the normalized mean and variance, within the moving region, of the maximum gray-change image over the preceding history frames;
the ratio of large-gradient pixels to the region area.
These feature values are combined into the feature vector f of the moving region; note that the order of the components of f is the same as the order of the features in the training sample feature vectors f_s;
The feature vector f is taken as input, and the SVM classifier C is used to calculate the probability that the moving region belongs to a smoke region. The specific feature computation is the same as in the classifier generation stage and is not repeated here.
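A matching sketch of the single-region probability query against the trained classifier (assuming the scikit-learn classifier from the sketch after step S15, with class label 1 for smoke):

import numpy as np

def smoke_probability(clf, feature_vector):
    """Probability that a moving region belongs to the smoke class."""
    proba = clf.predict_proba(np.asarray(feature_vector).reshape(1, -1))[0]
    smoke_index = list(clf.classes_).index(1)
    return proba[smoke_index]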
S25, comprehensive target analysis;
Because the signal-to-noise ratio of near-infrared images is low, the features of smoke regions and of non-smoke regions are not very distinct, so a satisfactory detection accuracy cannot be obtained by analyzing only the single-frame probability, obtained in S24, that a moving region belongs to smoke. The inter-frame region relations determined in S23 must therefore be used to analyze the regions produced by the motion of the same target and to obtain the probability that this target belongs to smoke;
For every target in the current target list:
if a target in the target list has no corresponding moving region in the current frame t, it is considered that the motion of this target has stopped or that the target is a false target produced by noise, and the target is deleted;
if a target in the target list has a corresponding moving region in the current frame t but the number of frames in which the target has existed is less than 10, no operation is performed;
if a target in the target list has a corresponding moving region in the current frame t and the moving target has appeared for more than 10 frames, the average probability that the regions corresponding to this target in the 10 frames belong to a smoke region is calculated for the target o, where o is the label of the target in the target list;
all the average probabilities calculated in the current frame t are then examined: if the average probability that a certain target belongs to smoke over the 10 frames is greater than the alarm threshold, it is concluded that smoke exists and an alarm is raised;
otherwise, it is judged whether the next target in the target list satisfies the alarm condition; if no target in the current frame satisfies the alarm condition, the next frame image is read in and detection continues.
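Finally, the per-target decision of step S25 can be sketched as follows (illustrative names; the 10-frame threshold and the alarm-threshold comparison are as specified above):

def analyse_target(frame_probs, alarm_threshold, min_frames=10):
    """Comprehensive target analysis of step S25 (sketch).

    frame_probs: per-frame smoke probabilities collected for one target.
    Returns 'alarm', 'wait' or 'not smoke' depending on how long the target
    has existed and on its average probability over the last min_frames frames.
    """
    if len(frame_probs) < min_frames:
        return "wait"                     # fewer than 10 frames: do nothing yet
    avg = sum(frame_probs[-min_frames:]) / float(min_frames)
    return "alarm" if avg > alarm_threshold else "not smoke"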
Fig. 3a shows the background image used in smoke detection in the near-infrared scene (the background image used by the algorithm at the moment the smoke alarm occurs), Fig. 3b shows the video image at the moment the alarm occurs, Fig. 3c shows the corresponding moving-region foreground image at the moment the alarm occurs, and Fig. 3d shows the alarm image.
The above are only embodiments of the present invention and do not thereby limit the scope of the claims of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect use in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (9)

1. A smoke detection method, characterized in that it comprises two stages: a classifier generation stage and a smoke detection stage;
the classifier generation stage comprises the following steps:
S11, receiving a sample video;
S12, analyzing the sample video, generating a moving-foreground binary image, and performing connected-component labeling on the moving-foreground binary image, specifically: the first frame of the sample video is taken as the background image; each pixel of frame t is subtracted from the corresponding pixel of the background image for frame t and the absolute value of the difference is taken; the absolute value of the difference is compared with a motion detection threshold; if the absolute value of the difference is greater than the motion detection threshold, the pixel is set as belonging to a moving region and assigned a first gray value, the neighboring pixels having the same gray value as this pixel are searched for and labeled as one connected component; if the absolute value of the difference is not greater than the motion detection threshold, the pixel is set as not belonging to a moving region and is assigned a second gray value;
S13, determining the inter-frame relation of moving regions: for each moving region in the connected components of frame t, the distances to all moving regions in the connected components of frame t-1 are calculated and compared with a distance threshold; if a distance is less than the distance threshold, the moving region in frame t and the corresponding moving region in frame t-1 are labeled as the same target; if not, the next moving region of frame t or the next frame image is processed;
S14, calculating the moving-region features of a single frame: for a moving region in frame t, if a moving region produced by the motion of the same target can be found in frame t-1, the moving-region features of frame t for that target are calculated, and the category attribute of the moving region is marked and saved;
S15, generating the classifier: after motion features have been extracted and category attributes marked for all moving regions in the sample video information, the extracted motion features of each moving region are combined into a motion feature vector with its category label, and all the motion feature vectors and category labels are stored in the classifier;
the smoke detection stage comprises the following steps:
S21, receiving a video to be detected;
S22, analyzing the video to be detected, generating a moving-foreground binary image, and performing connected-component labeling on the moving-foreground binary image, specifically: the first frame of the video to be detected is taken as the background image; each pixel of frame t is subtracted from the corresponding pixel of the background image for frame t and the absolute value of the difference is taken; the absolute value of the difference is compared with the motion detection threshold; if the absolute value of the difference is greater than the motion detection threshold, the pixel is set as belonging to a moving region and assigned a first gray value, the neighboring pixels having the same gray value as this pixel are searched for and labeled as one connected component; if the absolute value of the difference is not greater than the motion detection threshold, the pixel is set as not belonging to a moving region and is assigned a second gray value;
S23, determining the inter-frame relation of moving regions: for each moving region in the connected components of frame t, the distances to all moving regions in the connected components of frame t-1 are calculated and compared with the distance threshold; if a distance is less than the distance threshold, the corresponding regions in frame t and frame t-1 are labeled as the same target; if not, the target is judged to be a newly appearing moving region, and the method returns to step S22;
S24, calculating the probability for the moving regions of a single frame: for a moving region in frame t, if a moving region produced by the motion of the same target can be found in frame t-1, the moving-region features of frame t for that target are calculated, and the category attribute of the moving region is marked and saved; after motion features have been extracted and category attributes marked for all moving regions in the video information to be detected, the extracted motion features of each moving region are combined into a motion feature vector with its category label, all the motion feature vectors and category labels are input into the classifier, and the probability for each moving region of the single frame is calculated;
S25, comprehensive target analysis: it is judged whether the number of frames in which a target has existed is greater than a frame-number threshold; if so, the average probability that the regions corresponding to the target over those frames are smoke is calculated, and the relation between this average probability and a smoke alarm threshold is judged, specifically: if the average probability is greater than the smoke alarm threshold, the target is judged to be smoke and an alarm is raised; if the average probability is less than or equal to the smoke alarm threshold, the target is judged to be non-smoke and the method returns to step S12; if the number of frames is not greater than the frame-number threshold, the target is judged to be non-smoke and the method returns to step S12.
2. The smoke detection method according to claim 1, characterized in that, in step S12, between generating the moving-foreground binary image and performing connected-component labeling on it, the method further comprises a step S121 of judging whether the background image needs to be updated:
the absolute values of the differences between corresponding pixels of frame F_t and the background image B_t are summed to obtain the total difference between the current frame and the background image:
diff_t^{B,F} = \sum_{(x,y) \in F_t} |F_t(x,y) - B_t(x,y)|,
diff_t^{B,F} is compared with a background update threshold ΔB; if diff_t^{B,F} is greater than ΔB and no suspected smoke region that has raised an alarm exists among the current targets, the background image is updated by taking F_t as the new background image B_{t+1}; otherwise the background image is not updated, that is:
B_{t+1} = F_t if diff_t^{B,F} > ΔB, else B_{t+1} = B_t.
3. The smoke detection method according to claim 2, characterized in that, after step S121 and before connected-component labeling of the moving-foreground binary image, the method further comprises a step S122 of filtering the moving-foreground binary image: for any pixel of the moving-foreground binary image, the gray values of the 8 pixels of its neighborhood are sorted, and the median of the gray values of the 8 neighboring pixels is chosen as the new gray value of the center pixel.
4. The smoke detection method according to claim 1, characterized in that step S13 is specifically: calculating the distance between each moving region of frame t and each moving region of frame t-1 as follows:
the mean of the i-th moving region of frame t and of the j-th moving region of frame t-1 is calculated;
the variance of the i-th moving region of frame t and of the j-th moving region of frame t-1 is calculated;
the distance between the i-th moving region of frame t and the j-th moving region of frame t-1 is then obtained by combining the mean term, the variance term and the distance between the region centers, where (x_i^t, y_i^t) is the center of the i-th moving region in frame t, (x_j^{t-1}, y_j^{t-1}) is the center of the j-th moving region in frame t-1, and λ_mean, λ_variance and λ_location are respectively the weight parameters of the mean, the variance and the center;
for the i-th moving region of the current frame t, the distance vector to all N_{t-1} moving regions in frame t-1 is calculated, the minimum of these N_{t-1} distances is then taken, and the minimum is compared with the distance threshold:
if the minimum distance is less than the distance threshold, it is judged that the i-th moving region in frame t and the region of frame t-1 producing the minimum distance are produced by the motion of the same target;
if the minimum distance is not less than the distance threshold, it is judged that the i-th moving region in frame t is unrelated to all moving regions in frame t-1 and is a newly appearing moving region or noise.
5. The smoke detection method according to claim 1, characterized in that the moving-region features in step S14 comprise: the inter-frame motion coefficient of the region between frame t and frame t-1 in the X direction; the inter-frame motion coefficient of the region between frame t and frame t-1 in the Y direction; the region area change coefficient of frame t; the normalized gray mean of the region of frame t; the normalized histogram of the gray-mean crossing counts of the moving region of frame t over the preceding history frames; the normalized mean and variance, within the moving region, of the maximum gray-change image over the history frames preceding frame t; and the ratio of large-gradient pixels to the region area in the moving region of frame t.
6. The smoke detection method according to claim 1, characterized in that, in step S22, between generating the moving-foreground binary image and performing connected-component labeling on it, the method further comprises a step S221 of judging whether the background image of the video to be detected needs to be updated, and a step S222 of filtering the moving-foreground binary image of the video to be detected.
7. The smoke detection method according to claim 1, characterized in that step S23 is specifically: calculating the distance between each moving region of frame t and each moving region of frame t-1 in the moving-foreground binary image of the video to be detected, and comparing the minimum distance with the distance threshold; if the minimum distance is less than the distance threshold, it is judged that the i-th moving region in frame t and the region of frame t-1 producing the minimum distance are produced by the motion of the same target; otherwise, it is judged that the i-th moving region in frame t is unrelated to all moving regions in frame t-1 and is a newly appearing moving region or noise.
8. The smoke detection method according to claim 1, characterized in that the moving-region features in step S24 comprise: the inter-frame motion coefficient of the region between frame t and frame t-1 in the X direction; the inter-frame motion coefficient of the region between frame t and frame t-1 in the Y direction; the region area change coefficient of frame t; the normalized gray mean of the region of frame t; the normalized histogram of the gray-mean crossing counts of the moving region of frame t over the preceding history frames; the normalized mean and variance, within the moving region, of the maximum gray-change image over the history frames preceding frame t; and the ratio of large-gradient pixels to the region area in the moving region of frame t.
9. The smoke detection method according to claim 1, characterized in that the frame-number threshold in step S25 is 10.
CN201210427741.9A 2012-10-31 2012-10-31 Smoke detection method Expired - Fee Related CN102982313B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210427741.9A CN102982313B (en) 2012-10-31 2012-10-31 Smoke detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210427741.9A CN102982313B (en) 2012-10-31 2012-10-31 Smoke detection method

Publications (2)

Publication Number Publication Date
CN102982313A true CN102982313A (en) 2013-03-20
CN102982313B CN102982313B (en) 2015-08-05

Family

ID=47856299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210427741.9A Expired - Fee Related CN102982313B (en) Smoke detection method

Country Status (1)

Country Link
CN (1) CN102982313B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080187219A1 (en) * 2007-02-05 2008-08-07 Chao-Ho Chen Video Object Segmentation Method Applied for Rainy Situations
CN102013009A (en) * 2010-11-15 2011-04-13 无锡中星微电子有限公司 Smoke image recognition method and device
CN102509414A (en) * 2011-11-17 2012-06-20 华中科技大学 Smog detection method based on computer vision

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104050480A (en) * 2014-05-21 2014-09-17 燕山大学 Cigarette smoke detection method based on computer vision
CN104820832A (en) * 2015-05-13 2015-08-05 国家电网公司 Method and device for detecting video dangerous case
CN104820832B (en) * 2015-05-13 2018-11-27 国家电网公司 A kind of method and device detecting video dangerous situation
CN108550159A (en) * 2018-03-08 2018-09-18 佛山市云米电器科技有限公司 A kind of flue gas concentration identification method based on the segmentation of three color of image
CN108550159B (en) * 2018-03-08 2022-02-15 佛山市云米电器科技有限公司 Flue gas concentration identification method based on image three-color segmentation
CN108572734A (en) * 2018-04-23 2018-09-25 哈尔滨拓博科技有限公司 A kind of gestural control system based on infrared laser associated image
CN110619621A (en) * 2018-06-04 2019-12-27 青岛海信医疗设备股份有限公司 Method and device for identifying rib region in image, electronic equipment and storage medium
CN110619621B (en) * 2018-06-04 2023-10-27 青岛海信医疗设备股份有限公司 Method, device, electronic equipment and storage medium for identifying rib area in image
CN108898782A (en) * 2018-07-20 2018-11-27 武汉理工光科股份有限公司 The smoke detection method and system that infrared image temperature information for tunnel fire proofing identifies
CN108898782B (en) * 2018-07-20 2020-12-11 湖北烽火平安智能消防科技有限公司 Smoke detection method and system for infrared image temperature information identification for tunnel fire prevention
CN109142176B (en) * 2018-09-29 2024-01-12 佛山市云米电器科技有限公司 Smoke subarea space rechecking method based on space association
CN109142176A (en) * 2018-09-29 2019-01-04 佛山市云米电器科技有限公司 Smog sub-district domain space based on space relationship rechecks method
CN110033463A (en) * 2019-04-12 2019-07-19 腾讯科技(深圳)有限公司 A kind of foreground data generates and its application method, relevant apparatus and system
US11961237B2 (en) 2019-04-12 2024-04-16 Tencent Technology (Shenzhen) Company Limited Foreground data generation method and method for applying same, related apparatus, and system
CN110263741A (en) * 2019-06-26 2019-09-20 Oppo广东移动通信有限公司 Video frame extraction method, apparatus and terminal device
CN110929597A (en) * 2019-11-06 2020-03-27 普联技术有限公司 Image-based leaf filtering method and device and storage medium
CN111557692A (en) * 2020-04-26 2020-08-21 深圳华声医疗技术股份有限公司 Automatic measurement method, ultrasonic measurement device and medium for target organ tissue
CN111652930A (en) * 2020-06-04 2020-09-11 上海媒智科技有限公司 Image target detection method, system and equipment
CN111652930B (en) * 2020-06-04 2024-02-27 上海媒智科技有限公司 Image target detection method, system and equipment
CN113194357A (en) * 2021-01-25 2021-07-30 妙微(杭州)科技有限公司 Moving target detection method and system
CN115223105A (en) * 2022-09-20 2022-10-21 万链指数(青岛)信息科技有限公司 Big data based risk information monitoring and analyzing method and system
CN115223105B (en) * 2022-09-20 2022-12-09 万链指数(青岛)信息科技有限公司 Big data based risk information monitoring and analyzing method and system

Also Published As

Publication number Publication date
CN102982313B (en) 2015-08-05

Similar Documents

Publication Publication Date Title
CN102982313B (en) Smoke detection method
CN110516609B (en) Fire disaster video detection and early warning method based on image multi-feature fusion
Zhao et al. SVM based forest fire detection using static and dynamic features
US20230289979A1 (en) A method for video moving object detection based on relative statistical characteristics of image pixels
CN105404847B (en) A kind of residue real-time detection method
CN100589561C (en) Dubious static object detecting method based on video content analysis
CN103632158B (en) Forest fire prevention monitor method and forest fire prevention monitor system
CN107392885A (en) A kind of method for detecting infrared puniness target of view-based access control model contrast mechanism
CN103729854B (en) A kind of method for detecting infrared puniness target based on tensor model
CN107025652A (en) A kind of flame detecting method based on kinetic characteristic and color space time information
CN104616006B (en) A kind of beard method for detecting human face towards monitor video
CN101739686A (en) Moving object tracking method and system thereof
CN101715111B (en) Method for automatically searching abandoned object in video monitoring
CN101739551A (en) Method and system for identifying moving objects
CN107292879B (en) A kind of sheet metal surface method for detecting abnormality based on image analysis
CN105894701A (en) Large construction vehicle identification and alarm method for preventing external damage to transmission lines
CN104966304A (en) Kalman filtering and nonparametric background model-based multi-target detection tracking method
CN106339677B (en) A kind of unrestrained object automatic testing method of the railway freight-car based on video
CN102760230B (en) Flame detection method based on multi-dimensional time domain characteristics
CN111738342A (en) Pantograph foreign matter detection method, storage medium and computer equipment
CN102509414B (en) Smog detection method based on computer vision
CN106127812A (en) A kind of passenger flow statistical method of non-gate area, passenger station based on video monitoring
CN103605983A (en) Remnant detection and tracking method
CN105930786A (en) Abnormal behavior detection method for bank self-service hall
CN105005773A (en) Pedestrian detection method with integration of time domain information and spatial domain information

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150805

Termination date: 20201031
