CN101984451A - Video-based shielded flame detecting method and device - Google Patents

Video-based shielded flame detecting method and device

Info

Publication number
CN101984451A
Authority
CN
China
Prior art keywords
block
threshold
connected region
darkening
candidate region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201010285475
Other languages
Chinese (zh)
Other versions
CN101984451B (en)
Inventor
班华忠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netposa Technologies Ltd
Original Assignee
Beijing Zanb Science & Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2010-09-16
Publication date: 2011-03-09
Application filed by Beijing Zanb Science & Technology Co Ltd
Priority to CN2010102854751A (granted as CN101984451B)
Publication of CN101984451A
Application granted
Publication of CN101984451B
Legal status: Active (granted)
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a video-based method for detecting shielded flame. The method comprises: step 101) analyzing the brightness variation between the current frame image and the previous frame image and obtaining connected regions of differing brightness; step 102) analyzing the variation trend of the connected regions and taking the connected regions whose variation trend satisfies the flicker feature as candidate regions; step 103) extracting features of the candidate regions, filtering out false candidate regions according to these features, and outputting the remaining candidate regions as flame detection regions, wherein the features comprise a color feature and a region-edge variation feature. The invention also provides a video-based device for detecting shielded flame. Compared with existing image-based flame detection techniques, the method and device of the invention can detect flame that is shielded or hidden in a complex scene, thereby achieving early detection of flame.

Description

Video-based shielded flame detection method and device
Technical field
The present invention relates to a video-based method and device for detecting shielded flame, and belongs to the fields of image processing and video surveillance.
Background art
With the development of fire-detection technology in the fire-fighting field, video-based flame detection, with its advantages of being efficient, accurate, and insensitive to environmental interference, has become the mainstream research direction in the industry. U.S. patent application US 2008/0136934 A1 discloses a flame detection method and apparatus. Chinese patent application CN101315667A discloses a multi-feature comprehensive recognition method for outdoor early fire, which comprises the following four steps: 1) a thermal imager acquires an infrared image; 2) the fire infrared image is grayscaled, threshold-segmented, and filtered; 3) the suspected fire-flame image is further analyzed to obtain the following five criteria: (1) flame-image color distribution; (2) flame-image change rate; (3) flame-image area spread and growth; (4) flame-image circularity; (5) flame-image body change; 4) a neural network takes criteria 1 to 5 as input and makes a comprehensive judgment on whether a fire has occurred. The method of that application overcomes interference from natural light and other factors in fire recognition, reducing the missed-alarm and false-alarm rates. Chinese patent application CN101316371A also discloses a flame detection method and device.
However, in practical applications the flame is sometimes shielded; when a newly appearing flame is blocked from view, it cannot be found in time and the alarm is delayed. Therefore, there is an urgent need for a method that can effectively detect shielded flame in complex environments.
Summary of the invention
According to a first aspect of the invention, a video-based method for detecting shielded flame is provided, the method comprising the following steps:
Step 101: analyzing the brightness variation between the current frame image and the previous frame image, and obtaining connected regions with obvious variation;
Step 102: analyzing the variation trend of said connected regions, and taking the connected regions whose variation trend satisfies the flicker feature as candidate regions;
Step 103: extracting features of said candidate regions and filtering out false candidate regions according to said features; and
Step 104: taking the remaining candidate regions as flame detection regions and outputting them, wherein said features comprise a color feature and a region-edge variation feature.
Preferably, said step 101 comprises the following steps:
Step 1011: calculating the difference image of the brightness values of said current frame image and said previous frame image;
Step 1012: in said difference image, marking pixels whose brightness value is greater than a first threshold as brightening points, pixels whose brightness value is less than a second threshold as darkening points, and pixels whose brightness value is less than or equal to the first threshold and greater than or equal to the second threshold as unchanged points;
Step 1013: partitioning said difference image into sub-blocks according to width and height, and counting the numbers of brightening points, darkening points, and unchanged points in each sub-block;
Step 1014: computing the ratios R1, R2, and R3 of the numbers of said brightening points, darkening points, and unchanged points to the total number of pixels in said sub-block; if R1 is greater than a third threshold, said sub-block is regarded as a brightening block; if R2 is greater than the third threshold, as a darkening block; if R3 is greater than the third threshold, as an unchanged block; if R1, R2, and R3 are all less than or equal to the third threshold, said sub-block is a cluttered block; and
Step 1015: clustering the brightening blocks and the darkening blocks, respectively, to obtain connected regions of brightening blocks and connected regions of darkening blocks.
Preferably, the height of said sub-block ∈ [3 pixels, 8 pixels] and the width ∈ [3 pixels, 8 pixels].
Preferably, said first threshold ∈ [1, 4], the second threshold ∈ [-4, -1], and the third threshold ∈ [60%, 90%].
Preferably, in said step 102, for each said connected region, the number N_cl of frames in which the connected region is continuously a connected region of brightening blocks and the number N_cd of frames in which it is continuously a connected region of darkening blocks are counted over N consecutive frames, and the total numbers N_l and N_d of frames in which the connected region is a connected region of brightening blocks or of darkening blocks, respectively, are counted over the N consecutive frames. The variation trend of said connected region is considered to satisfy the flicker feature if either of the following conditions holds: (1) the number N_cl of consecutive brightening-block frames is less than a fourth threshold and the total number N_l of brightening-block frames is less than a fifth threshold; (2) the number N_cd of consecutive darkening-block frames is less than the fourth threshold and the total number N_d of darkening-block frames is less than the fifth threshold.
Preferably, N ∈ [14, 18], the fourth threshold ∈ [6, 10], and the fifth threshold ∈ [11, 14].
Preferably, in said step 103, a candidate region is regarded as a false candidate region and filtered out when either of the following conditions is satisfied: (1) the color feature of said candidate region is biased toward non-red; (2) the region edge of said candidate region changes over time.
Preferably, the following method is used to judge whether the color feature of said candidate region is biased toward non-red: the candidate region of said current frame image is converted to YUV space, the number of pixels in said candidate region that satisfy V greater than a sixth threshold and U less than V is counted, and the ratio of this number to the total number of pixels in said candidate region is computed; if said ratio is greater than a seventh threshold, the color feature of said candidate region is regarded as biased toward non-red, wherein the sixth threshold ∈ [135, 150] and the seventh threshold ∈ [15%, 40%].
Preferably, the following method is used to judge, via the edge variation feature, whether the edge of the candidate region has changed over time: 1) obtain the polar coordinates of the edge points: compute the centroid of the edge points, and then compute the polar radius and angle of each edge pixel of the candidate region relative to this centroid; 2) compute a histogram of the polar radius over angle (for example, divide the [0, 360] degree range into 90 intervals, i.e., one interval per 4 degrees, and take the mean of the polar radii falling in each interval; the 90 mean values form the polar-radius histogram); 3) compute the correlation coefficient between the polar-radius histograms of the candidate region at different times; this correlation coefficient is the edge variation feature and describes the severity of the edge variation; 4) if the computed correlation coefficient is less than an eighth threshold, the edge is considered to have changed, wherein the eighth threshold ∈ [0.98, 0.998].
According to another aspect of the invention, a video-based device for detecting shielded flame is provided, the device comprising:
a connected-region acquisition unit, configured to analyze the brightness variation between the current frame image and the previous frame image, and to obtain connected regions with obvious variation;
a candidate-region acquisition unit, configured to analyze the variation trend of said connected regions, and to take the connected regions whose variation trend satisfies the flicker feature as candidate regions; and
a false-region filtering and flame-detection-region output unit, configured to extract features of said candidate regions, filter out false candidate regions according to said features, and output the remaining candidate regions as flame detection regions, wherein said features comprise a color feature and a region-edge variation feature.
Preferably, the connected-region acquisition unit further comprises:
a brightness-difference-image acquisition module, configured to calculate the difference image of the brightness values of the current frame image and the previous frame image;
a pixel classification module, configured to mark pixels in said difference image whose brightness value is greater than the first threshold as brightening points, pixels whose brightness value is less than the second threshold as darkening points, and pixels whose brightness value is less than or equal to the first threshold and greater than or equal to the second threshold as unchanged points;
a pixel counting module, configured to partition said difference image into sub-blocks according to width and height, and to count the numbers of brightening points, darkening points, and unchanged points in each sub-block;
a sub-block classification module, configured to compute the ratios R1, R2, and R3 of the numbers of said brightening points, darkening points, and unchanged points to the total number of pixels in said sub-block, and to regard said sub-block as a brightening block if R1 is greater than the third threshold, as a darkening block if R2 is greater than the third threshold, as an unchanged block if R3 is greater than the third threshold, and as a cluttered block if R1, R2, and R3 are all less than or equal to the third threshold; and
a sub-block clustering module, configured to cluster the brightening blocks and the darkening blocks, respectively, to obtain connected regions of brightening blocks and connected regions of darkening blocks.
Compared with existing image-based flame detection techniques, the video-based method and device for detecting shielded flame of the present invention can detect flame that is shielded or hidden in a complex scene, achieving early detection of flame.
Description of drawings
Fig. 1 is a flowchart of the video-based method for detecting shielded flame of the present invention;
Fig. 2 is a flowchart of step 101 of the video-based method for detecting shielded flame of the present invention;
Fig. 3 is a block diagram of the video-based device for detecting shielded flame of the present invention;
Fig. 4 is a block diagram of the connected-region acquisition unit of the video-based device for detecting shielded flame of the present invention.
Embodiment
To enable the examiner to further understand the structure, features, and other objects of the present invention, a detailed description is given below in conjunction with the accompanying preferred embodiments. The illustrated preferred embodiments are intended only to illustrate the technical solution of the present invention, not to limit it.
Fig. 1 is a flowchart of the video-based method for detecting shielded flame of the present invention. As shown in Fig. 1, the method comprises the following steps:
Step 101: analyze the brightness variation between the current frame image and the previous frame image, and obtain connected regions with obvious variation;
Step 102: analyze the variation trend of the connected regions, and take the connected regions whose variation trend satisfies the flicker feature as candidate regions;
Step 103: extract features of the candidate regions and filter out false candidate regions according to these features; and
Step 104: take the remaining candidate regions as flame detection regions and output them, where the features comprise a color feature and a region-edge variation feature.
Fig. 2 is a flowchart of step 101 of the video-based method for detecting shielded flame of the present invention. As shown in Fig. 2, step 101 may further comprise:
Step 1011: calculate the difference image of the brightness values of the current frame image and the previous frame image.
Step 1012: in the difference image, mark pixels whose brightness value is greater than the first threshold as brightening points, pixels whose brightness value is less than the second threshold as darkening points, and pixels whose brightness value is less than or equal to the first threshold and greater than or equal to the second threshold as unchanged points. Here the first threshold ∈ [1, 4] and the second threshold ∈ [-4, -1]; for example, in a warehouse scene the first threshold is 2 and the second threshold is -2.
Step 1013: partition the difference image into sub-blocks according to width and height, and count the numbers of brightening points, darkening points, and unchanged points in each sub-block. The height of a sub-block ∈ [3 pixels, 8 pixels] and the width ∈ [3 pixels, 8 pixels]; for example, in a warehouse scene the sub-block height is 5 and the width is 5.
Step 1014: compute the ratios R1, R2, and R3 of the numbers of brightening points, darkening points, and unchanged points to the total number of pixels in the sub-block. If R1 is greater than the third threshold, the sub-block is regarded as a brightening block; if R2 is greater than the third threshold, as a darkening block; if R3 is greater than the third threshold, as an unchanged block; if R1, R2, and R3 are all less than or equal to the third threshold, the sub-block is a cluttered block. The third threshold ∈ [60%, 90%]; for example, in a warehouse scene the third threshold is 80%.
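For illustration only, the pixel and block classification of steps 1011-1014 can be sketched in Python/NumPy as follows. This sketch is not part of the claimed invention; the function name classify_blocks, the array-based layout, and the default parameter values (taken from the warehouse example above: first threshold 2, second threshold -2, 5x5 sub-blocks, third threshold 80%) are assumptions made for the example.

```python
import numpy as np

def classify_blocks(prev_gray, curr_gray, t1=2, t2=-2, block=5, t3=0.8):
    """Steps 1011-1014 (sketch): label each sub-block of the brightness
    difference image as brightening (1), darkening (-1), unchanged (0)
    or cluttered (2)."""
    # Step 1011: signed brightness difference image (int16 avoids uint8 wrap)
    diff = curr_gray.astype(np.int16) - prev_gray.astype(np.int16)

    h, w = diff.shape
    bh, bw = h // block, w // block               # whole sub-blocks only
    labels = np.full((bh, bw), 2, dtype=np.int8)  # default: cluttered block

    for by in range(bh):
        for bx in range(bw):
            sub = diff[by*block:(by+1)*block, bx*block:(bx+1)*block]
            n = sub.size
            # Steps 1012-1014: fractions of brightening, darkening and
            # unchanged pixels inside the sub-block
            r1 = np.count_nonzero(sub > t1) / n   # brightening points
            r2 = np.count_nonzero(sub < t2) / n   # darkening points
            r3 = 1.0 - r1 - r2                    # unchanged points
            if r1 > t3:
                labels[by, bx] = 1
            elif r2 > t3:
                labels[by, bx] = -1
            elif r3 > t3:
                labels[by, bx] = 0
    return labels
```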
Step 1015: cluster the brightening blocks and the darkening blocks, respectively, to obtain connected regions of brightening blocks and connected regions of darkening blocks. In step 1015, the region-growing method commonly used in the image-processing field (see the textbook "Digital Image Processing and Analysis", Tian Yan, Peng Fuyuan, Huazhong University of Science and Technology Press, June 2009) may be used to cluster the brightening blocks and the darkening blocks.
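A minimal sketch of the clustering of step 1015, assuming a simple stack-based 4-neighbour region growing over the sub-block label map produced by the sketch above; the helper name grow_regions and the label encoding (1 = brightening block, -1 = darkening block) are assumptions, and the region-growing variant of the cited textbook may differ in detail.

```python
import numpy as np

def grow_regions(labels, target):
    """Step 1015 (sketch): cluster adjacent sub-blocks carrying the given
    label (1 = brightening, -1 = darkening) into connected regions."""
    visited = np.zeros(labels.shape, dtype=bool)
    regions = []
    bh, bw = labels.shape
    for y in range(bh):
        for x in range(bw):
            if labels[y, x] != target or visited[y, x]:
                continue
            # stack-based region growing from an unvisited seed block
            stack, region = [(y, x)], []
            visited[y, x] = True
            while stack:
                cy, cx = stack.pop()
                region.append((cy, cx))
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < bh and 0 <= nx < bw and \
                       labels[ny, nx] == target and not visited[ny, nx]:
                        visited[ny, nx] = True
                        stack.append((ny, nx))
            regions.append(region)
    return regions

# Connected regions of brightening blocks and of darkening blocks:
# bright_regions = grow_regions(labels, 1)
# dark_regions = grow_regions(labels, -1)
```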
In step 102, for each connected region, count over N consecutive frames the number N_cl of frames in which the region is continuously a connected region of brightening blocks and the number N_cd of frames in which it is continuously a connected region of darkening blocks, and count the total numbers N_l and N_d of frames in which the region is a connected region of brightening blocks or of darkening blocks, respectively, over the N consecutive frames. The variation trend of the connected region is considered to satisfy the flicker feature if either of the following conditions holds: (1) the number N_cl of consecutive brightening-block frames is less than the fourth threshold and the total number N_l of brightening-block frames is less than the fifth threshold; (2) the number N_cd of consecutive darkening-block frames is less than the fourth threshold and the total number N_d of darkening-block frames is less than the fifth threshold. The connected regions satisfying the flicker feature are taken as candidate regions. Here N ∈ [14, 18], the fourth threshold ∈ [6, 10], and the fifth threshold ∈ [11, 14]; for example, in a warehouse scene N is 16 frames, the fourth threshold is 8 frames, and the fifth threshold is 12 frames.
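The flicker test of step 102 might be sketched as follows, assuming the per-frame status of a tracked connected region has already been collected into a list of +1/-1/0 values (brightening-block region, darkening-block region, neither) over the last N frames. Reading "continuously" as the longest consecutive run is an interpretation, and the function name and default thresholds (warehouse example: N = 16, fourth threshold 8, fifth threshold 12) are assumptions.

```python
def satisfies_flicker(history, t4=8, t5=12):
    """history: per-frame states of one tracked region over the last N frames,
    +1 = brightening-block region, -1 = darkening-block region, 0 = neither."""
    def longest_run(target):
        run = best = 0
        for s in history:
            run = run + 1 if s == target else 0
            best = max(best, run)
        return best

    n_cl, n_cd = longest_run(+1), longest_run(-1)     # consecutive counts
    n_l, n_d = history.count(+1), history.count(-1)   # total counts
    # flicker if either condition (1) or condition (2) of step 102 holds
    return (n_cl < t4 and n_l < t5) or (n_cd < t4 and n_d < t5)

# Example: a region alternating between brightening and darkening over
# 16 frames satisfies the flicker feature:
# satisfies_flicker([+1, -1] * 8)  -> True
```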
In step 103, a candidate region is regarded as a false candidate region and filtered out when either of the following conditions is satisfied: (1) the color feature of the candidate region is biased toward non-red; (2) the edge of the candidate region changes over time.
The following method is used to judge whether the color feature of the candidate region is biased toward non-red: convert the candidate region of the current frame image to YUV space, count the number of pixels in the candidate region that satisfy V greater than the sixth threshold and U less than V, and compute the ratio of this number to the total number of pixels in the candidate region. If the ratio is greater than the seventh threshold, the color feature of the candidate region is regarded as biased toward non-red, where the sixth threshold ∈ [135, 150] and the seventh threshold ∈ [15%, 40%]. For example, in a warehouse scene the sixth threshold may be 140 and the seventh threshold 25%.
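The color test can be sketched as below, following the condition exactly as stated above (V greater than the sixth threshold and U less than V). The function name is_non_red and the choice of passing the U and V planes as NumPy arrays are assumptions; the conversion of the candidate region to YUV space is left to the caller.

```python
import numpy as np

def is_non_red(u, v, t6=140, t7=0.25):
    """Color test of step 103, condition (1): u and v are the U and V
    (chrominance) planes of the candidate region, with values in [0, 255]."""
    # pixels with V greater than the sixth threshold and U less than V
    hits = np.count_nonzero((v > t6) & (u < v))
    ratio = hits / v.size
    # biased toward non-red if the ratio exceeds the seventh threshold
    return ratio > t7
```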
The following method is used to judge, via the edge variation feature, whether the edge of the candidate region has changed over time: 1) obtain the polar coordinates of the edge points: compute the centroid of the edge points, then compute the polar radius and angle of each edge pixel of the candidate region relative to this centroid; 2) compute a histogram of the polar radius over angle (for example, divide the [0, 360] degree range into 90 intervals, i.e., one interval per 4 degrees, and take the mean of the polar radii falling in each interval; the 90 mean values form the polar-radius histogram); 3) compute the correlation coefficient between the polar-radius histograms of the candidate region at different times; this correlation coefficient is the edge variation feature and describes the severity of the edge variation; 4) if the computed correlation coefficient is less than the eighth threshold, the edge is considered to have changed. The eighth threshold ∈ [0.98, 0.998]; for example, in a warehouse scene the eighth threshold may be 0.995.
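The edge test might be sketched as follows: build the 90-bin polar-radius histogram of the region's edge points and correlate histograms taken at different times. The function names and the use of NumPy's corrcoef as the correlation measure are assumptions; how the edge points themselves are extracted (for example, as the boundary of the candidate-region mask) is not specified here.

```python
import numpy as np

def polar_radius_histogram(edge_points, bins=90):
    """edge_points: array of (x, y) edge coordinates of the candidate region.
    Returns the mean polar radius in each 4-degree angular bin (90 bins)."""
    pts = np.asarray(edge_points, dtype=np.float64)
    center = pts.mean(axis=0)                        # centroid of edge points
    dx, dy = pts[:, 0] - center[0], pts[:, 1] - center[1]
    radius = np.hypot(dx, dy)
    angle = np.degrees(np.arctan2(dy, dx)) % 360.0   # angle in [0, 360)
    idx = np.minimum((angle / (360.0 / bins)).astype(int), bins - 1)
    hist = np.zeros(bins)
    for b in range(bins):
        sel = radius[idx == b]
        hist[b] = sel.mean() if sel.size else 0.0    # mean radius per bin
    return hist

def edge_changed(hist_prev, hist_curr, t8=0.995):
    """The edge is considered changed if the correlation coefficient of two
    polar-radius histograms falls below the eighth threshold."""
    corr = np.corrcoef(hist_prev, hist_curr)[0, 1]
    return corr < t8
```

In this sketch a constant histogram would make the correlation coefficient undefined; a practical implementation would guard against that degenerate case.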
Fig. 3 is a block diagram of the video-based device for detecting shielded flame of the present invention. As shown in Fig. 3, the device comprises:
a connected-region acquisition unit 1, configured to analyze the brightness variation between the current frame image and the previous frame image, and to obtain connected regions with obvious variation;
a candidate-region acquisition unit 2, configured to analyze the variation trend of the connected regions, and to take the connected regions whose variation trend satisfies the flicker feature as candidate regions; and
a false-region filtering and flame-detection-region output unit 3, configured to extract features of the candidate regions, filter out false candidate regions according to these features, and output the remaining candidate regions as flame detection regions, where the features comprise a color feature and a region-edge variation feature.
Fig. 4 is a block diagram of the connected-region acquisition unit 1 of the video-based device for detecting shielded flame of the present invention. As shown in Fig. 4, the connected-region acquisition unit 1 may further comprise:
a brightness-difference-image acquisition module 11, configured to calculate the difference image of the brightness values of the current frame image and the previous frame image;
a pixel classification module 12, configured to mark pixels in the difference image whose brightness value is greater than the first threshold as brightening points, pixels whose brightness value is less than the second threshold as darkening points, and pixels whose brightness value is less than or equal to the first threshold and greater than or equal to the second threshold as unchanged points;
a pixel counting module 13, configured to partition the difference image into sub-blocks according to width and height, and to count the numbers of brightening points, darkening points, and unchanged points in each sub-block;
a sub-block classification module 14, configured to compute the ratios R1, R2, and R3 of the numbers of brightening points, darkening points, and unchanged points to the total number of pixels in the sub-block, and to regard the sub-block as a brightening block if R1 is greater than the third threshold, as a darkening block if R2 is greater than the third threshold, as an unchanged block if R3 is greater than the third threshold, and as a cluttered block if R1, R2, and R3 are all less than or equal to the third threshold; and
a sub-block clustering module 15, configured to cluster the brightening blocks and the darkening blocks, respectively, to obtain connected regions of brightening blocks and connected regions of darkening blocks.
Compared with existing image-based flame detection techniques, the video-based method and device for detecting shielded flame of the present invention can detect flame in complex scenes where the flame is shielded, thereby achieving early flame detection.
It should be stated that the foregoing summary and embodiments are intended to demonstrate the practical application of the technical solutions provided by the present invention and should not be construed as limiting the protection scope of the present invention. Those skilled in the art may make various modifications, equivalent replacements, and improvements within the spirit and principles of the present invention. The protection scope of the present invention is defined by the appended claims.

Claims (11)

1. A video-based method for detecting shielded flame, characterized in that the method comprises the following steps:
Step 101: analyzing the brightness variation between the current frame image and the previous frame image, and obtaining connected regions with obvious variation;
Step 102: analyzing the variation trend of said connected regions, and taking the connected regions whose variation trend satisfies the flicker feature as candidate regions;
Step 103: extracting features of said candidate regions and filtering out false candidate regions according to said features; and
Step 104: taking the remaining candidate regions as flame detection regions and outputting them, wherein said features comprise a color feature and a region-edge variation feature.
2. The detection method according to claim 1, characterized in that said step 101 comprises the following steps:
Step 1011: calculating the difference image of the brightness values of said current frame image and said previous frame image;
Step 1012: in said difference image, marking pixels whose brightness value is greater than a first threshold as brightening points, pixels whose brightness value is less than a second threshold as darkening points, and pixels whose brightness value is less than or equal to the first threshold and greater than or equal to the second threshold as unchanged points;
Step 1013: partitioning said difference image into sub-blocks according to width and height, and counting the numbers of brightening points, darkening points, and unchanged points in each sub-block;
Step 1014: computing the ratios R1, R2, and R3 of the numbers of said brightening points, darkening points, and unchanged points to the total number of pixels in said sub-block; if R1 is greater than a third threshold, regarding said sub-block as a brightening block; if R2 is greater than the third threshold, as a darkening block; if R3 is greater than the third threshold, as an unchanged block; and if R1, R2, and R3 are all less than or equal to the third threshold, regarding said sub-block as a cluttered block; and
Step 1015: clustering the brightening blocks and the darkening blocks, respectively, to obtain connected regions of brightening blocks and connected regions of darkening blocks.
3. The detection method according to claim 2, characterized in that the height of said sub-block ∈ [3 pixels, 8 pixels] and the width ∈ [3 pixels, 8 pixels].
4. The detection method according to claim 2, characterized in that said first threshold ∈ [1, 4], the second threshold ∈ [-4, -1], and the third threshold ∈ [60%, 90%].
5. The detection method according to claim 1, characterized in that in step 102, for each said connected region, the number N_cl of frames in which the connected region is continuously a connected region of brightening blocks and the number N_cd of frames in which it is continuously a connected region of darkening blocks are counted over N consecutive frames, and the total numbers N_l and N_d of frames in which the connected region is a connected region of brightening blocks or of darkening blocks, respectively, are counted over the N consecutive frames; the variation trend of said connected region is considered to satisfy the flicker feature if either of the following conditions holds: (1) the number N_cl of consecutive brightening-block frames is less than a fourth threshold and the total number N_l of brightening-block frames is less than a fifth threshold; (2) the number N_cd of consecutive darkening-block frames is less than the fourth threshold and the total number N_d of darkening-block frames is less than the fifth threshold.
6. The detection method according to claim 5, characterized in that N ∈ [14, 18], the fourth threshold ∈ [6, 10], and the fifth threshold ∈ [11, 14].
7. The detection method according to claim 1, characterized in that in said step 103, a candidate region is regarded as a false candidate region and filtered out when either of the following conditions is satisfied: (1) the color feature of said candidate region is biased toward non-red; (2) the region edge of said candidate region changes over time.
8. The detection method according to claim 7, characterized in that the following method is used to judge whether the color feature of said candidate region is biased toward non-red: converting the candidate region of said current frame image to YUV space, counting the number of pixels in said candidate region that satisfy V greater than a sixth threshold and U less than V, and computing the ratio of this number to the total number of pixels in said candidate region; if said ratio is greater than a seventh threshold, the color feature of said candidate region is regarded as biased toward non-red, wherein the sixth threshold ∈ [135, 150] and the seventh threshold ∈ [15%, 40%].
9. The detection method according to claim 7, characterized in that the following method is used to judge, via the edge variation feature, whether the edge of the candidate region has changed over time: 1) obtaining the polar coordinates of the edge points: computing the centroid of the edge points, and then computing the polar radius and angle of each edge pixel of the candidate region relative to this centroid; 2) computing a histogram of the polar radius over angle; 3) computing the correlation coefficient between the polar-radius histograms of the candidate region at different times, this correlation coefficient being the edge variation feature; 4) if the computed correlation coefficient is less than an eighth threshold, considering that the edge has changed, the eighth threshold ∈ [0.98, 0.998].
10. A video-based device for detecting shielded flame, characterized in that the device comprises:
a connected-region acquisition unit, configured to analyze the brightness variation between the current frame image and the previous frame image, and to obtain connected regions with obvious variation;
a candidate-region acquisition unit, configured to analyze the variation trend of said connected regions, and to take the connected regions whose variation trend satisfies the flicker feature as candidate regions; and
a false-region filtering and flame-detection-region output unit, configured to extract features of said candidate regions, filter out false candidate regions according to said features, and output the remaining candidate regions as flame detection regions, wherein said features comprise a color feature and a region-edge variation feature.
11. The device according to claim 10, characterized in that the connected-region acquisition unit further comprises:
a brightness-difference-image acquisition module, configured to calculate the difference image of the brightness values of the current frame image and the previous frame image;
a pixel classification module, configured to mark pixels in said difference image whose brightness value is greater than the first threshold as brightening points, pixels whose brightness value is less than the second threshold as darkening points, and pixels whose brightness value is less than or equal to the first threshold and greater than or equal to the second threshold as unchanged points;
a pixel counting module, configured to partition said difference image into sub-blocks according to width and height, and to count the numbers of brightening points, darkening points, and unchanged points in each sub-block;
a sub-block classification module, configured to compute the ratios R1, R2, and R3 of the numbers of said brightening points, darkening points, and unchanged points to the total number of pixels in said sub-block, and to regard said sub-block as a brightening block if R1 is greater than the third threshold, as a darkening block if R2 is greater than the third threshold, as an unchanged block if R3 is greater than the third threshold, and as a cluttered block if R1, R2, and R3 are all less than or equal to the third threshold; and
a sub-block clustering module, configured to cluster the brightening blocks and the darkening blocks, respectively, to obtain connected regions of brightening blocks and connected regions of darkening blocks.
CN2010102854751A 2010-09-16 2010-09-16 Video-based shielded flame detecting method and device Active CN101984451B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010102854751A CN101984451B (en) 2010-09-16 2010-09-16 Video-based shielded flame detecting method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010102854751A CN101984451B (en) 2010-09-16 2010-09-16 Video-based shielded flame detecting method and device

Publications (2)

Publication Number Publication Date
CN101984451A true CN101984451A (en) 2011-03-09
CN101984451B CN101984451B (en) 2012-10-31

Family

ID=43641620

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010102854751A Active CN101984451B (en) 2010-09-16 2010-09-16 Video-based shielded flame detecting method and device

Country Status (1)

Country Link
CN (1) CN101984451B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101493980A (en) * 2009-03-05 2009-07-29 中国科学技术大学 Rapid video flame detection method based on multi-characteristic fusion
CN101764922A (en) * 2009-08-03 2010-06-30 北京智安邦科技有限公司 Method and device for adaptive generation of luminance threshold
CN101719300A (en) * 2009-12-01 2010-06-02 航天海鹰安全技术工程有限公司 Fire early-warning system with intelligent video and method for determining alarm parameters thereof

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020588A (en) * 2012-11-15 2013-04-03 镇江石鼓文智能化系统开发有限公司 Flame detection method based on video image analysis
CN103020588B (en) * 2012-11-15 2016-04-13 镇江石鼓文智能化系统开发有限公司 Based on the flame detecting method of video image analysis
CN103021138A (en) * 2012-12-28 2013-04-03 广州市浩云安防科技股份有限公司 Method for sensing shielding of video camera and device for implementing method
CN103021138B (en) * 2012-12-28 2015-07-15 广州市浩云安防科技股份有限公司 Method for sensing shielding of video camera and device for implementing method
CN105869183A (en) * 2016-03-25 2016-08-17 北京智芯原动科技有限公司 Flame detection method and device based on single band
CN105869183B (en) * 2016-03-25 2018-09-25 北京智芯原动科技有限公司 Based on single-range flame detecting method and device
CN106250845A (en) * 2016-07-28 2016-12-21 北京智芯原动科技有限公司 Flame detecting method based on convolutional neural networks and device
CN107729913A (en) * 2017-08-25 2018-02-23 徐州科融环境资源股份有限公司 A kind of boiler furnace Situation Awareness method based on multiple features fusion cluster
CN115700757A (en) * 2022-11-08 2023-02-07 中信重工开诚智能装备有限公司 Control method and device for fire water monitor and electronic equipment
CN115700757B (en) * 2022-11-08 2024-05-17 中信重工开诚智能装备有限公司 Control method and device for fire water monitor and electronic equipment

Also Published As

Publication number Publication date
CN101984451B (en) 2012-10-31


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: NETPOSA TECHNOLOGIES, LTD.

Free format text: FORMER OWNER: BEIJING ZANB SCIENCE + TECHNOLOGY CO., LTD.

Effective date: 20150716

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20150716

Address after: 100102, Beijing, Chaoyang District, Tong Tung Street, No. 1, Wangjing SOHO tower, two, C, 26 floor

Patentee after: NETPOSA TECHNOLOGIES, Ltd.

Address before: 100048 Beijing city Haidian District Road No. 9, building 4, 5 layers of international subject

Patentee before: Beijing ZANB Technology Co.,Ltd.

PP01 Preservation of patent right

Effective date of registration: 20220726

Granted publication date: 20121031

PP01 Preservation of patent right