CN100538757C - Fire-disaster monitoring device based on omnibearing vision sensor - Google Patents

Fire-disaster monitoring device based on omnibearing vision sensor Download PDF

Info

Publication number
CN100538757C
CN100538757C · CNB2005100618768A · CN200510061876A
Authority
CN
China
Prior art keywords
fire
flame
image
formula
area
Prior art date
Legal status
Expired - Fee Related
Application number
CNB2005100618768A
Other languages
Chinese (zh)
Other versions
CN1979576A (zh)
Inventor
汤一平
金顺敬
叶永杰
邓飞
顾小凯
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CNB2005100618768A
Publication of CN1979576A
Application granted
Publication of CN100538757C

Abstract

A fire monitoring method and device based on omnidirectional computer vision. An omnidirectional vision sensor monitors the scene, and the continuously captured panoramic images are fed into a computer for ongoing image processing and analysis. A comprehensive judgment is formed from simple fire pattern classification together with judgments of the flame area variation characteristic, the flame body variation characteristic, the flame colour feature, the flame flicker law characteristic, the overall flame motion characteristic, and the fire intensity characteristic. Different processing is carried out according to the comprehensive quantized value computed from this judgment, reducing the false-alarm rate of fire detection. The invention provides a new, faster, more accurate, and more reliable multi-parameter intelligent fire monitoring method and device.

Description

Fire-disaster monitoring device based on omnibearing vision sensor
(1) technical field
The present invention relates to a kind of fire-disaster monitoring device based on omnibearing vision sensor.
(2) background technology
An automatic fire alarm system detects and announces a fire early so that effective measures can be taken in time to control and extinguish it. Installed in buildings and other premises, it is one of the indispensable safety facilities of modern fire fighting, and automatic fire monitoring receives great attention both at home and abroad. Whether a fire has occurred is detected mainly by continuous or intermittent monitoring of at least one physical or chemical phenomenon associated with fire; current monitoring methods are usually divided into smoke-sensing, heat-sensing, gas-sensing, and light-sensing types.
According to the definitions of the American Society for Testing and Materials (ASTM) and the National Fire Protection Association (NFPA), flue gas comprises the solid- and liquid-phase particles suspended in it together with the gaseous substances produced by pyrolysis of the burning material. Fire smoke therefore flows as a two-phase flow, and the number of suspended smoke particles, the particle agglomeration effect, and smoke turbulence are decisive factors for the optical characteristics used in fire image detection and are important subjects of detection and alarm research. Fire image detection based on a single smoke parameter can no longer satisfy increasingly strict fire-safety requirements, so it is necessary to study fire image detection based on multiple smoke parameters.
Traditional fire alarm systems are generally based on infrared and smoke sensors. These systems mostly detect concentration rather than the flame itself, so their false-alarm rate is high, their detection time is long, and some situations, such as smokeless flames, cannot be forecast at all. In large-space occasions such as outdoor storage yards, warehouses, and large halls, the sensor signal becomes very weak, and even a high-precision sensor may fail because of the many simultaneous interference noises. To solve fire alarming in such occasions, some foreign companies proposed in the early 1990s to detect flames at medium and long range with ultraviolet-band image sensors, and released corresponding products. However, such systems have no automatic recognition function, their modules are not reconfigurable, and their simple flame recognition methods yield a rather high false-alarm rate.
With the development of digital communication and computer technology, digital image processing has found wide application, and image-type fire alarm systems use it to realise automatic fire alarming. Relatively mature detection methods, such as smoke-sensing, heat-sensing, and light-sensing detectors, are already adopted in some places; they detect fire from the smoke, the temperature, and the light of the flame respectively. But existing fire detection equipment cannot work effectively in large spaces, over large areas, in harsh environments, or outdoors, whereas digital image processing applied to the visual characteristics of fire flames can solve detection in these places.
Existing image processing methods could not be widely applied because the apparatus is expensive (spectral analysis) or the demands on the environment or camera are very high (static scenes, strict calibration, and so on). Image processing methods appearing in recent years have made some progress by considering certain characteristics of flames. Increasingly strict fire-safety requirements and rapid high-technology development are pushing detection and early warning toward image-based and intelligent methods; image-based fire detection is a detection method based on flame characteristics. Countries all over the world are therefore devoting themselves to researching fire detection methods and equipment capable of early fire prediction. Compared with traditional forecasting, fire forecasting by digital image processing and pattern recognition can effectively improve forecast precision, greatly shorten forecast time, and provide much richer fire information.
Image processing and computer vision are continuously developing technologies. In principle, computer vision serves four purposes: pre-processing, extraction of low-level features, recognition of mid-level features, and interpretation of the image into a high-level description of the scene. In general, computer vision comprises feature extraction, image processing, and image understanding.
Images are an extension of human vision. By vision a fire can be found immediately and accurately; this is an indisputable fact. The rapidity of image monitoring rests on the fact that vision uses light as its communication medium, and image information is rich and intuitive, laying a solid foundation for identifying and judging an incipient fire; no other fire detection technology provides such rich and intuitive information. Moreover, the key image-sensing component contacts the outside world only indirectly, through an optical lens, a structure that allows the image monitoring technique to be used both in harsh indoor environments (heavy dust, high humidity) and outdoors. The status and role of image monitoring technology in fire detection are therefore:
(1) it can be used in large-space, large-area environments;
(2) it can be used in places with heavy dust and high humidity;
(3) it can be used in outdoor environments;
(4) it can react rapidly to the image information of a fire phenomenon;
(5) it can provide intuitive fire information.
The fire image detection system is a computer-centred automatic fire monitoring and alarm system developed by combining photoelectric technology with computer image processing technology. Fire image detection is a novel fire detection method based on digital image processing and analysis: a camera monitors the scene, the captured consecutive images are fed into a computer, image processing and analysis are carried out continuously, and fire is detected from the body variation characteristics of the incipient flame.
An image is a signal containing information such as intensity, shape, and position. Combustion in a fire is a typically unsteady process; because of the influence of the combustible material, the geometric conditions, and the natural environment and climate, a fire is far more complicated than other, controlled combustion processes. At the same time the fire scene contains various interference factors, such as sunlight and lamps. An image-type fire detection method must therefore be based on the fundamental characteristics of the incipient flame; only then can the various obstacles be removed and detection made faster and more reliable.
The prior patents CN86106890A, CN1089741A, and CN1112702A apply digital image processing to the visual characteristics of fire flames, using one or a few simple shape features of the flame figure as the pattern-recognition criterion; because the selected features are so limited and the algorithms relatively complex, the false-judgment rate is high. Patent CN1089741A uses a pan-tilt head to perform panoramic fire detection; although omnidirectional images can be obtained, the mechanical rotating device of the pan-tilt head suffers from mechanical wear, heavy maintenance workload, energy consumption, relatively complex algorithms, and the inability to process in real time.
(3) summary of the invention
To overcome the high false-alarm rate, the lack of real-time processing, and the relative algorithmic complexity of existing fire detection systems, the invention provides a fire-disaster monitoring device based on an omnidirectional vision sensor that effectively reduces the false-alarm rate, processes in real time, and uses a simple algorithm.
The technical solution adopted for the present invention to solve the technical problems is:
A fire-disaster monitoring device based on an omnidirectional vision sensor, comprising a microprocessor, a video sensor for on-site monitoring, and a communication module for communicating with the outside world. The microprocessor comprises: an image-data reading module, used to read the video image information transmitted from the video sensor; a file storage module, used to store the data collected by the video sensor to memory; and an on-site real-time playing module, used to connect an external display device and play the monitoring picture in real time.
The output of the video sensor is connected to the microprocessor for communication. The video sensor is an omnidirectional vision sensor comprising an outward-convex mirror surface for reflecting objects in the monitored field, a transparent cylinder, and a camera. The convex mirror faces downward, the transparent cylinder supports it, and the camera that photographs the image formed on the convex mirror is located inside the transparent cylinder and positioned at the virtual focus of the convex mirror.
Described microprocessor also comprises:
a sensor calibration module, used to calibrate the parameters of the omnidirectional vision sensor and establish the correspondence between real objects in space and the video image obtained;
a colour model conversion module, used to convert the colour of each pixel of the colour image from the RGB colour space to the (Cr, Cb) colour model;
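The patent does not state which RGB-to-(Cr, Cb) conversion the colour model module uses; a minimal sketch, assuming the common ITU-R BT.601 chrominance formulas, could look like this:

```python
def rgb_to_crcb(r, g, b):
    """Convert an 8-bit RGB pixel to the (Cr, Cb) chrominance pair.
    Coefficients follow ITU-R BT.601 (an assumption; the patent does
    not name the conversion it uses)."""
    cr = 0.5 * r - 0.4187 * g - 0.0813 * b + 128.0
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 128.0
    return cr, cb
```

A neutral grey pixel maps to the chrominance midpoint (128, 128), while saturated red pushes Cr toward its maximum, which is why Cr is informative for flame colour.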
an image stretching module, used to expand the collected circular video image into a rectangular panorama. According to the correspondence between a point (x*, y*) on the circular omnidirectional image and a point (x**, y**) on the rectangular cylindrical panorama, a mapping matrix M is established between (x*, y*) and (x**, y**), as shown in formula (1):

P**(x**, y**) = M × P*(x*, y*)   (1)

where M is the mapping matrix, P*(x*, y*) is the pixel matrix of the circular omnidirectional image, and P**(x**, y**) is the pixel matrix of the rectangular cylindrical panorama;
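The mapping of formula (1) is, in effect, a polar-to-rectangular unwrapping: each panorama column corresponds to an azimuth angle and each row to a radius on the circular image. A minimal sketch of this idea (the function name, the linear radius model, and all parameters are illustrative assumptions, not from the patent):

```python
import numpy as np

def unwrap_panorama(circular, center, r_min, r_max, out_w, out_h):
    """Unwrap a circular omnidirectional image into a rectangular
    panorama by inverse mapping: for each panorama pixel, look up the
    corresponding point on the circular image (inverse mapping avoids
    holes in the output)."""
    cx, cy = center
    panorama = np.zeros((out_h, out_w), dtype=circular.dtype)
    for v in range(out_h):
        # radius grows linearly from the inner circle to the mirror rim
        r = r_min + (r_max - r_min) * v / (out_h - 1)
        for u in range(out_w):
            theta = 2.0 * np.pi * u / out_w  # azimuth angle
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if 0 <= x < circular.shape[1] and 0 <= y < circular.shape[0]:
                panorama[v, u] = circular[y, x]
    return panorama
```

In practice the per-pixel lookup table (the mapping matrix) would be precomputed once during calibration and reused for every frame.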
a moving-object detection module, used to perform a difference operation between the current live video frame and a relatively stable reference image; the image subtraction is computed as in formula (2):

f_d(X, t_0, t_i) = f(X, t_i) − f(X, t_0)   (2)

where f_d(X, t_0, t_i) is the result of subtracting the reference image from the live image, f(X, t_i) is the live image, and f(X, t_0) is the reference image;

and the subtraction between the current image and the image of the adjacent Kth frame as in formula (3):

f_d(X, t_{i−k}, t_i) = f(X, t_i) − f(X, t_{i−k})   (3)

where f_d(X, t_{i−k}, t_i) is the result of subtracting the adjacent Kth frame image from the live image, and f(X, t_{i−k}) is the image K frames earlier.

When f_d(X, t_0, t_i) ≥ threshold and f_d(X, t_{i−k}, t_i) ≥ threshold both hold, a suspicious flame object is judged.
When f_d(X, t_0, t_i) ≥ threshold and f_d(X, t_{i−k}, t_i) < threshold, a stationary object is judged and the reference image is replaced according to formula (4):

f(X, t_0) ⇐ f(X, t_{i−k})   (4)

When f_d(X, t_0, t_i) < threshold, a stationary scene is judged;
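The three-way decision of formulas (2)-(4) can be sketched as follows. This is a simplified frame-level version under stated assumptions: the patent applies the thresholds to difference images, while here a whole frame is classified at once, and the threshold value is illustrative:

```python
import numpy as np

def classify_frame_change(frame, reference, prev_k, threshold):
    """Difference the current frame against the reference image
    (formula 2) and against the frame K steps back (formula 3), then
    classify per the three threshold cases; when a new stationary
    object appears, the reference is replaced (formula 4)."""
    d_ref = np.abs(frame.astype(int) - reference.astype(int))
    d_adj = np.abs(frame.astype(int) - prev_k.astype(int))
    changed_ref = bool((d_ref >= threshold).any())
    changed_adj = bool((d_adj >= threshold).any())
    if changed_ref and changed_adj:
        return "suspicious flame", reference
    if changed_ref and not changed_adj:
        # stationary new object: update the reference per formula (4)
        return "stationary object", prev_k.copy()
    return "stationary scene", reference
```

A flame differs from both the reference and the recent frame because it keeps changing; a newly placed static object differs only from the old reference, which is exactly the case where the reference must be refreshed.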
a connected-region computing module, used to mark the current image: a cell with pixel grey level 0 indicates no suspicious flame in that cell, and grey level 1 indicates suspicious flame. Whether each pixel of the current image equals the pixels at the adjacent points around it is computed; equal grey levels are judged connected, and all mutually connected pixels are taken as one connected region;
a pattern classification module, used, once a suspicious flame is judged, to obtain the area S_i of each connected region and classify as follows:
1) if S_i ≤ threshold 1, the changed region is a noise point;
2) if S_i ≥ threshold 2, the changed region is a large-area infrared change;
3) if threshold 1 < S_i < threshold 2, the changed region is a suspicious flame region.
After a suspicious flame region is judged, the similarity ε_i of the changed regions in consecutive frames is calculated as shown in formula (5):

ε_i = Σ_{(x,y)∈Ω} [b_i(x,y) ∩ b_{i+1}(x,y)] / Σ_{(x,y)∈Ω} [b_i(x,y) ∪ b_{i+1}(x,y)],  i = 1, …, N−1   (5)

where b_i(x, y) is the suspicious flame region in the previous frame and b_{i+1}(x, y) is the suspicious flame region in the current frame.
According to this result, the pattern is classified as:
1) if ε_i ≤ threshold 1, the image pattern is a fast-moving bright spot;
2) if ε_i ≥ threshold 2, the image pattern is a fixed luminous region;
3) if threshold 1 < ε_i < threshold 2, the image pattern is flame;
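The area gate and the similarity test of formula (5) can be sketched together. The threshold values below are illustrative assumptions (the patent leaves them unspecified):

```python
import numpy as np

def region_similarity(mask_a, mask_b):
    """Formula (5): intersection over union of two binary region masks."""
    inter = (mask_a & mask_b).sum()
    union = (mask_a | mask_b).sum()
    return inter / union if union else 0.0

def classify_region(area, similarity, t_area=(8, 500), t_sim=(0.4, 0.95)):
    """Area gate (noise / large infrared change / candidate), then the
    frame-to-frame similarity classification. Thresholds are assumed."""
    a_lo, a_hi = t_area
    if area <= a_lo:
        return "noise point"
    if area >= a_hi:
        return "large-area infrared change"
    s_lo, s_hi = t_sim
    if similarity <= s_lo:
        return "fast-moving bright spot"
    if similarity >= s_hi:
        return "fixed luminous region"
    return "flame"
```

The intuition of formula (5): a fixed lamp overlaps itself almost perfectly between frames (ε near 1), a fast-moving reflection barely overlaps (ε near 0), while a flame sits in between because it flickers in place.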
a flame colour feature judging module, used to calculate, from the (Cr, Cb) spatial distribution model, whether the light-emitting source falls within the (Cr, Cb) distribution model of flame images, as given by formula (6):

W_firecolor = exp{ −(1/2) [ A(Cr − C̄r)² + 2B(Cr − C̄r)(Cb − C̄b) + C(Cb − C̄b)² ] }   (6)

where W_firecolor is the colour feature quantity, C̄r and C̄b are the sample means of the flame points' Cr and Cb, and A, B, C are coefficients computed from the sample standard deviations and means: C̄r = 144.6, C̄b = 117.5, A = 3.7×10⁻³, B = 4.1×10⁻³, C = 4.5×10⁻³;
a flame area variation judging module, used to perform a recursive calculation on the light-emitting area S_i of each frame and obtain the recursive value of the light-emitting area for the next frame, as given by formula (7):

S̄(i+1) = (1 − k) S̄(i) + k S_i   (7)

where S̄(i+1) is the recursive average light-emitting area of the next frame, S̄(i) is the recursive average of the current frame, S_i is the computed light-emitting area of the current frame, and k is a coefficient less than 1. The following inequality (8) is then evaluated:

[S̄(i+1) + S̄(i) + S̄(i−1)] / 3 > [S̄(i−2) + S̄(i−3) + S̄(i−4)] / 3   (8)

If inequality (8) holds, the flame-area-expansion quantized value W_fire_area is set to 1; if it does not hold, W_fire_area is set to 0;

a flame flicker law judging module, used, when inequality (8) holds, to take the difference between the computed light-emitting area S_i and the recursive average S̄(i) of the current frame, count each change of the difference's sign as one flame flicker, compute the number of changes over a certain time as the flicker frequency f_frequency, and compare it with a set threshold f_c: when f_frequency ≥ f_c, the flame flicker quantized value W_fire_bicker is set to 1; when f_frequency < f_c, W_fire_bicker is set to 0;
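The recursion of formula (7) and the sign-change flicker count can be sketched as follows (the smoothing coefficient and the conversion of sign changes to a frequency are stated assumptions):

```python
def update_recursive_area(prev_avg, area, k=0.3):
    """Formula (7): recursive (exponential) average of the light-
    emitting area, with coefficient k < 1 (k = 0.3 is an assumed value)."""
    return (1.0 - k) * prev_avg + k * area

def flicker_frequency(areas, averages, frame_rate):
    """Count sign changes of S_i - S_avg(i) over the window and scale
    by the frame rate to get a flicker frequency in Hz, as the flicker
    law module describes."""
    signs = [1 if a > m else -1 for a, m in zip(areas, averages)]
    changes = sum(1 for s0, s1 in zip(signs, signs[1:]) if s0 != s1)
    return changes * frame_rate / max(len(areas), 1)
```

Because the recursive average tracks the slow growth of the flame, the residual S_i − S̄(i) isolates the fast oscillation; counting its zero crossings is a cheap stand-in for a spectral estimate of the flicker frequency.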
a fire intensity judging module, used when the flame-area-expansion value W_fire_area is 1 and the flame colour feature W_fire_color is calculated to be at least 0.5; the calculation is given by formula (9):

W_fire_indensity = (S_i / ΣS) × 10, where W_fire_area = 1 AND W_fire_color ≥ 0.5   (9)

where ΣS is the whole monitored area and W_fire_indensity is the fire intensity value;

a comprehensive fire judging module, used to judge comprehensively whether a fire has occurred according to the flame pattern, the flame colour feature, the flame area, and the flame flicker law, as in formula (10):

W_fire_alarm = K_fire_pattern × ε_i + K_fire_color × W_fire_color + K_fire_area × W_fire_area + K_fire_bicker × W_fire_bicker + K_fire_indensity × W_fire_indensity   (10)
where K_fire_pattern is the weighting coefficient of the flame pattern, K_fire_color the weighting coefficient of the flame colour feature, K_fire_area the weighting coefficient of the flame area variation, K_fire_bicker the weighting coefficient of flame flicker, and K_fire_indensity the weighting coefficient of fire intensity.
When K_alarm ≤ W_fire_alarm, a fire alarm is judged and the management personnel are notified through the communication module.
Further, the alarm value K_alarm comprises K_attention, K_alarm1, K_alarm2, and K_alarm3:
if K_attention ≤ W_fire_alarm ≤ K_alarm1, a suspicious flame requiring attention is judged and the management personnel are notified through the communication module;
if K_alarm1 < W_fire_alarm ≤ K_alarm2, a fire early warning is judged, the management personnel are notified through the communication module, and the file storage module is started to record the live video data;
if K_alarm2 < W_fire_alarm ≤ K_alarm3, an occurring fire is judged; 119 is dialled automatically through the communication module, the management personnel are notified, and the file storage module is started to record the live video data;
if K_alarm3 < W_fire_alarm, the fire alarm sounds, 119 is dialled automatically through the communication module, the management personnel are notified, and the file storage module is started to record the live video data.
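The weighted sum of formula (10) and the graded thresholds can be sketched together. The weights and the threshold values K_attention … K_alarm3 below are illustrative assumptions; the patent leaves them to be tuned:

```python
def fire_alarm_level(quantities, weights,
                     k_att=0.3, k1=0.5, k2=0.7, k3=0.9):
    """Formula (10): weighted sum of the quantized flame features,
    then comparison against the graded alarm thresholds. Both the
    weights and k_att..k3 are assumed example values."""
    w = sum(weights[name] * quantities[name] for name in weights)
    if w < k_att:
        return w, "normal"
    if w <= k1:
        return w, "suspicious flame - notify staff"
    if w <= k2:
        return w, "fire early warning - notify staff, start recording"
    if w <= k3:
        return w, "fire occurring - auto-dial 119, notify staff, record"
    return w, "sound alarm - auto-dial 119, notify staff, record"
```

Graded thresholds let the system escalate gradually: a borderline score only alerts staff, while a high score triggers the full alarm chain, which is how the false-alarm rate is kept down without delaying a real alarm.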
Further again, the microprocessor also comprises a background maintenance module, which comprises:
a background luminance computing unit, used to compute the average background luminance Ȳb according to formula (11):

Ȳb = [ Σ_{x=0}^{W−1} Σ_{y=0}^{H−1} Y_n(x,y)(1 − M_n(x,y)) ] / [ Σ_{x=0}^{W−1} Σ_{y=0}^{H−1} (1 − M_n(x,y)) ]   (11)

where Y_n(x, y) is the luminance of each pixel of the current frame and M_n(x, y) is the mask table of the current frame; the mask table is an array M of the same size as the video frame recording, for each pixel, whether motion change is present, as in formula (12):

M_n(x, y) = 1 if pixel (x, y) is a moving pixel; M_n(x, y) = 0 if pixel (x, y) is a background pixel   (12)

Let Ȳb0 be the background luminance of the frame before a suspicious flame object is judged and Ȳb1 the background luminance of the first frame in which a suspicious flame object is judged; the change of the two frames' mean luminance is

ΔY = Ȳb1 − Ȳb0   (13)

If ΔY is greater than an upper limit, a light-switch-on event is considered to have occurred; if ΔY is less than a lower limit, a light-switch-off event; when ΔY lies between the upper and lower limits, the light is considered to have changed naturally;
a background adaptation unit, used to perform adaptive learning according to formula (14) when the light changes naturally:

X_mix,bn+1(i) = (1 − λ) X_mix,bn(i) + λ X_mix,cn(i)   (14)

where X_mix,cn(i) is the RGB vector of the current frame, X_mix,bn(i) the background RGB vector of the current frame, X_mix,bn+1(i) the predicted background RGB vector of the next frame, and λ the background update rate: λ = 0 uses a fixed (initial) background; λ = 1 uses the current frame as the background; for 0 < λ < 1 the background is mixed from the previous background and the current frame.
When the light change is caused by switching a lamp, the background pixels are reset from the current frame, as in formula (15):

X_mix,bn+1(i) = X_mix,cn(i)   (15)
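Formulas (14) and (15) together can be sketched as one update function; the brightness limits and the rate λ below are assumed example values:

```python
import numpy as np

def update_background(bg, frame, delta_y, y_hi=40.0, y_lo=-40.0, lam=0.05):
    """Blend the background toward the current frame under natural
    light drift (formula 14), or reset it outright when the mean-
    brightness jump DeltaY (formula 13) indicates a lamp was switched
    on or off (formula 15). y_hi, y_lo, and lam are assumed values."""
    if delta_y > y_hi or delta_y < y_lo:
        # switch-on / switch-off event: hard reset per formula (15)
        return frame.astype(bg.dtype).copy()
    # natural change: exponential adaptation at rate lambda, formula (14)
    return ((1.0 - lam) * bg + lam * frame).astype(bg.dtype)
```

The small λ makes the background immune to a briefly flickering flame (which must not be learned into the background), while the hard reset path prevents a switched lamp from being misread as a fire for many frames.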
The microprocessor also comprises a noise rejection module, used to replace each pixel value with the mean of all values in its local neighbourhood, as shown in formula (16):

h[i, j] = (1/M) Σ f[k, l]   (16)

where M is the total number of pixels in the neighbourhood.
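Formula (16) is an ordinary neighbourhood mean (box) filter. A minimal sketch, assuming a square window and edge replication at the borders (the patent specifies neither):

```python
import numpy as np

def mean_filter(img, size=3):
    """Formula (16): replace each pixel with the mean of its size x size
    neighbourhood (M = size * size pixels). Borders use edge
    replication, an assumed boundary policy."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    # sum the size*size shifted copies of the image, then divide by M
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (size * size)
```

Averaging suppresses isolated single-pixel spikes (sensor noise) while leaving the multi-pixel blobs that the connected-region module cares about largely intact.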
The microprocessor also comprises a flame overall-motion characteristic module, used to judge from the trajectory of the flame's overall movement: if overall flame motion is judged, the flame overall-motion quantized value W_fire_move is set to 1; if non-overall motion is judged, W_fire_move is set to 0.
In the comprehensive fire judging module, formula (10) is then modified to:

W_fire_alarm = K_fire_pattern × ε_i + K_fire_color × W_fire_color + K_fire_area × W_fire_area + K_fire_bicker × W_fire_bicker + K_fire_indensity × W_fire_indensity + K_fire_move × W_fire_move

where K_fire_move is the weighting coefficient of the flame's overall motion.
The microprocessor also comprises a flame body variation quantization module, based on the change of the flame's shape in the horizontal direction: if the body variation appears regular, the quantized value W_fire_body is set to 0; if the body variation appears irregular, W_fire_body is set to 1.
In the comprehensive fire judging module, formula (10) is then modified to:

W_fire_alarm = K_fire_pattern × ε_i + K_fire_color × W_fire_color + K_fire_area × W_fire_area + K_fire_bicker × W_fire_bicker + K_fire_indensity × W_fire_indensity + K_fire_body × W_fire_body

where K_fire_body is the weighting coefficient of flame body variation.
The working principle of the invention is as follows. The recently developed omnidirectional vision sensor, ODVS (OmniDirectional Vision Sensor), offers a new solution for acquiring panoramic images of a scene in real time. ODVS is characterised by a large field of view (360 degrees): it compresses the information of a hemispherical field into one image, whose information content is correspondingly large; the placement of the ODVS in the scene is relatively free; the ODVS needs no aiming while monitoring the environment; the algorithm for detecting and tracking moving objects within the monitored range is simpler; and real-time images of the scene can be obtained. Omnidirectional vision systems based on ODVS have therefore developed rapidly in recent years and are becoming a key area of computer vision research; since 2000 the IEEE has held an annual workshop dedicated to omnidirectional vision (IEEE Workshop on Omni-directional Vision).
In the omnidirectional computer vision sensing system shown in Fig. 1, light directed at the centre of the hyperbolic mirror is reflected toward its virtual focus according to the mirror characteristics of the hyperboloid. A real object is reflected by the hyperbolic mirror into the collector lens and imaged there; a point P1(x*1, y*1) on the imaging plane corresponds to the coordinates A(x1, y1, z1) of a point on the object in space.
In Fig. 1: 1 is the hyperbolic mirror, 2 the incident ray, 3 the focus Om(0, 0, c) of the hyperbolic mirror, 4 the virtual focus of the hyperbolic mirror, i.e. the camera centre Oc(0, 0, −c), 5 the reflected ray, 6 the imaging plane, 7 the space coordinates A(x1, y1, z1) of the real object, 8 the space coordinates of the image incident on the hyperboloid mirror surface, and 9 the point P1(x*1, y*1) reflected onto the imaging plane.
The optical system formed by the hyperbolic mirror shown in Fig. 1 can be represented by the following five equations:

(X² + Y²)/a² − Z²/b² = −1  (Z > 0)   (17)
c = √(a² + b²)   (18)
β = tan⁻¹(Y/X)   (19)
α = tan⁻¹{ [(b² + c²) sin γ − 2bc] / [(b² + c²) cos γ] }   (20)
γ = tan⁻¹[ f / √(X² + Y²) ]   (21)

where X, Y, Z are space coordinates, c gives the focus of the hyperbolic mirror and 2c the distance between the two foci, a and b are the lengths of the real and imaginary semi-axes of the hyperbolic mirror, β is the azimuth angle of the incident ray in the XY plane, α is the depression angle of the incident ray in the XZ plane, and f is the distance from the imaging plane to the virtual focus of the hyperbolic mirror.
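Equations (18)-(21) can be sketched directly: given an image point at radius √(X² + Y²) and the mirror parameters, they yield the azimuth and depression angles of the incident ray (the function name and the example parameter values are illustrative assumptions):

```python
import math

def incident_angles(X, Y, a, b, f):
    """Formulas (18)-(21): azimuth beta and depression alpha of the
    incident ray for an image point (X, Y), for a hyperbolic mirror
    with real/imaginary semi-axes a, b and camera distance f."""
    c = math.sqrt(a * a + b * b)                 # formula (18)
    beta = math.atan2(Y, X)                      # formula (19), azimuth
    gamma = math.atan2(f, math.hypot(X, Y))      # formula (21)
    num = (b * b + c * c) * math.sin(gamma) - 2.0 * b * c
    den = (b * b + c * c) * math.cos(gamma)
    alpha = math.atan2(num, den)                 # formula (20), depression
    return beta, alpha
```

This inverse mapping is what the sensor calibration module tabulates: every image pixel fixes a (β, α) ray direction, so the correspondence between scene points and pixels is known in closed form.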
To detect the pixel portions where a flame exists according to the correspondence between the three-dimensional interior space and the image pixels, a reference image is first stored in computer memory and an image subtraction is performed between the live image and the reference image. The brightness of the regions where the subtraction result changes is enhanced, that is, the brightness of the pixel blocks where a luminous point exists is strengthened; this patent is interested only in the edge shape, the area, and the radiation intensity (colour) of these blocks. The spatial correspondence of the pixels can then be computed from the geometric relations above.
An incipient fire exhibits thermal and physical phenomena that form image information of their own: the fire plume and the glow at high temperature. In its early stage a fire grows out of nothing and is a developing process, and the image characteristics of the flame at this stage are quite distinct. An incipient flame is non-stationary: its shape, area, radiation intensity, and so on all change from moment to moment. Capturing these characteristics lays the foundation for fire recognition. Image processing in image-type detection is the continuous processing of dynamic images: for each target in an image, its matching relation with the target in the previous frame is determined by a certain algorithm, yielding the continuous law of change of each target. The image information used in the image-type fire detection method of the present invention is as follows:
1) Area change: an incipient fire is a continuously developing process after ignition. At this stage the area of the fire flame shows a continuous, expanding growth trend. In image processing the area is obtained by thresholding the image and counting the bright points (pixels whose grey value exceeds the threshold). Because the invention adopts an omnidirectional vision sensor mounted above the space and facing downward, the continuous, expanding growth trend of the flame area can be observed.
2) Edge change: the edge change of an incipient flame follows certain laws, distinct from the edge changes of other high-temperature objects and of steady flames. The method is to extract the edge accurately with edge detection and edge tracing algorithms, encode the edge according to its shape and curvature, and extract edge feature quantities from the coding; these feature quantities and their law of change in the early fire stage are used to discriminate fire.
3) Body change: the body change of an incipient flame reflects the change of the flame's spatial distribution. In the early fire stage the flame's shape change, change of spatial orientation, flame jitter, and splitting and merging all follow their own unique laws of change. In image processing the body variation characteristic is realised by computing the spatial characteristics of the flame, i.e. the positional relations between pixels.
4) Flicker law: the flicker law of a flame is the law of the change of brightness over time across its spatial distribution; a flame flickers at a certain frequency during combustion. In a digital image this is the law of change of the grey-level histogram over time, reflecting how the numbers of pixels at different grey levels in a frame change with time.
5) Layering change: the temperature inside a flame is non-uniform and shows certain regularities. Combustion in a fire is diffusion combustion, whose flame has obvious layering: a candle flame, for example, divides into flame core, inner flame, and outer flame; burning solids such as timber, whose surface radiation is very strong, divide into a solid-surface layer and a flame layer, the flame layer itself being layered again. The layering variation characteristic reflects the spatial distribution law of pixels of different grey levels.
6) whole moving: incipient fire flame is constantly the flame of development, and along with old comburant after-flame and new comburant are lighted, flame is constantly moving the position.So it is continuous, non-jumping characteristic that the integral body of flame moves.
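The area criterion in 1) amounts to a threshold-and-count over successive frames. A minimal sketch follows; the threshold value 200 and the toy frames are illustrative, not taken from the patent:

```python
import numpy as np

def flame_area(gray, threshold=200):
    # Count bright points: pixels whose gray value exceeds the threshold.
    return int(np.count_nonzero(gray > threshold))

# Toy frames in which the bright region spreads downward frame by frame.
frames = []
for rows in (0, 2, 4):
    f = np.zeros((4, 4), dtype=np.uint8)
    f[:rows, :] = 255
    frames.append(f)

areas = [flame_area(f) for f in frames]   # grows continuously: [0, 8, 16]
```

A continuous, strictly increasing sequence of areas is the signature the area-change criterion looks for.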
Therefore, using an omnibearing vision sensor (ODVS) together with digital image-processing techniques to find sound morphological criteria, combined with the color features of the flame, addresses the outstanding shortcomings of existing fire-alarm systems: a high false-alarm rate, the inability to process in real time, and a strong dependence on environmental factors. This has become a key research topic in fire-prevention technology and has clear practical and theoretical significance.
The beneficial effects of the invention are mainly: 1. the various feature quantities are quantified and combined into a comprehensive judgment of whether a fire has occurred, which effectively reduces the false-alarm rate; 2. processing is real-time; 3. the algorithms are comparatively simple; 4. the drawbacks of existing pan-tilt platforms are avoided: there is no mechanical wear and the maintenance workload is small.
(4) Description of drawings
Fig. 1 is a schematic diagram of imaging from three-dimensional space onto the omnidirectional vision plane;
Fig. 2 is a schematic diagram of the omnidirectional vision optical accessory used together with the camera;
Fig. 3 is a schematic diagram of a fire monitoring method and device based on an omnidirectional computer vision sensor;
Fig. 4 is a module block diagram of the fire monitoring method and device based on an omnidirectional computer vision sensor;
Fig. 5 illustrates the connected-region labeling principle;
(5) Embodiment
The invention is further described below with reference to the accompanying drawings.
Referring to Figs. 1-5: in the omnidirectional computer vision sensing system of Fig. 1, light directed at the center of the hyperbolic mirror 1 is refracted toward its virtual focus 4 according to the mirror-surface property of the hyperboloid. The material point 7 is reflected by the hyperbolic mirror into the collector lens and imaged; a point P1 (x*1, y*1) on the imaging plane corresponds to a point A (x1, y1, z1) in space.
In Fig. 1: 1 — hyperbolic mirror; 2 — incident ray; 3 — focus Om (0, 0, c) of the hyperbolic mirror; 4 — virtual focus of the hyperbolic mirror, which is the camera center Oc (0, 0, −c); 5 — reflected ray; 6 — imaging plane; 7 — space coordinate A (x1, y1, z1) of the material point; 8 — space coordinate of the point where the incident ray meets the hyperboloid mirror; 9 — point P1 (x*1, y*1) reflected onto the imaging plane.
The optical system formed by the hyperbolic mirror shown in Fig. 1 is described by the following five equations:

((X² + Y²)/a²) − (Z²/b²) = −1  (Z > 0)   (17)
c = √(a² + b²)   (18)
β = tan⁻¹(Y/X)   (19)
α = tan⁻¹{[(b² + c²)·sin γ − 2bc] / [(b² + c²)·cos γ]}   (20)
γ = tan⁻¹[f / √(X² + Y²)]   (21)

where X, Y, Z are space coordinates, c is the focal distance of the hyperbolic mirror (2c is the distance between its two foci), a and b are the lengths of the real and imaginary semi-axes of the hyperboloid, β is the azimuth angle of the incident ray in the XY plane, α is the depression angle of the incident ray in the XZ plane, and f is the distance from the imaging plane to the virtual focus of the hyperbolic mirror.
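Under the definitions above, the ray angles of formulas (18)-(21) can be computed directly. A sketch follows; the numeric values of a, b, and f are illustrative only:

```python
import math

def mirror_angles(X, Y, a, b, f):
    """Angles of an incident ray for the hyperboloid mirror, formulas (18)-(21)."""
    c = math.sqrt(a * a + b * b)                 # (18): focal distance
    beta = math.atan2(Y, X)                      # (19): azimuth in the XY plane
    gamma = math.atan(f / math.hypot(X, Y))      # (21)
    num = (b * b + c * c) * math.sin(gamma) - 2.0 * b * c
    den = (b * b + c * c) * math.cos(gamma)
    alpha = math.atan(num / den)                 # (20): depression angle
    return beta, alpha, gamma

beta, alpha, gamma = mirror_angles(10.0, 10.0, a=30.0, b=40.0, f=12.0)
```

`atan2` is used for β so the azimuth is resolved over the full circle rather than only (−π/2, π/2).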
Referring to Figs. 1 and 2, the omnidirectional-vision accessory of the invention consists of the hyperbolic mirror 1, a transparent cylindrical housing 10, and a base 12. The hyperbolic mirror 1 sits at the upper end of the cylinder 10 with the convex mirror surface facing downward into the cylinder; the rotation axes of the hyperbolic mirror 1, the cylinder 10, and the base 12 lie on the same central axis; the digital camera 11 is located below the cylinder 10; the base 12 carries a circular groove matching the wall thickness of the cylinder 10, and a hole sized to the lens of the digital camera 11; the embedded hardware and software system 15 is mounted at the bottom of the base 12.
Referring to Figs. 1 and 4, the omnibearing vision sensor 13 of the invention is connected to the microprocessor 15 of the fire-disaster monitoring device through the USB interface 14. The microprocessor 15 reads image data through the image data reading module 16. At initialization, the ambient image with no fire present is stored in the image data storage module 18 for later image recognition and processing. At the same time, in order to recognize motion and the size of objects or flames in the changed image, the space coordinates must be calibrated to obtain the nine basic parameters of the omnidirectional imaging system; in the invention this is carried out in the sensor calibration module 17.
The image stretching is computed in the image stretching processing module 19, whose task is to unwrap the circular omnidirectional image into the corresponding rectangular cylindrical panorama; the unwrapped image is easy to compute on and has little distortion. From the correspondence between a point (x*, y*) on the circular omnidirectional image and a point (x**, y**) on the rectangular cylindrical panorama, a mapping matrix from (x*, y*) to (x**, y**) is established (formula (1)). Because this correspondence is one-to-one, the distorted image can be transformed into an undistorted panoramic image by the mapping-matrix method: according to formula (1), each pixel P* (x*, y*) on the imaging plane has a corresponding point P** (x**, y**) on the panorama. Once the mapping matrix is built, real-time image processing is simplified: each distorted omnidirectional image captured on the imaging plane is converted into an undistorted omnidirectional image by a table-lookup operation. The generated undistorted image is sent to the real-time play module 20 and shown on the display 21; if the user needs to know the live situation on site, the on-site omnidirectional image can be obtained through the network transmission module 22.
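The lookup-table approach can be sketched as follows. Formula (1) itself is not legibly reproduced in this text, so the sketch assumes a simple linear polar-to-cylinder model (column maps to azimuth, row maps linearly to radius); the real mapping matrix of the patent may differ:

```python
import numpy as np

def build_unwrap_lut(radius, center_x, center_y, out_w, out_h):
    # For every panorama pixel (x**, y**) precompute the source pixel (x*, y*)
    # on the circular omnidirectional image. Assumed linear radial model.
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    theta = 2.0 * np.pi * xs / out_w               # column -> azimuth angle
    r = radius * (1.0 - ys / float(out_h))         # row -> radius (top = outer rim)
    src_x = np.round(center_x + r * np.cos(theta)).astype(np.intp)
    src_y = np.round(center_y + r * np.sin(theta)).astype(np.intp)
    return src_x, src_y

def unwrap(circular_img, lut):
    # Per-frame unwrapping is then a pure table lookup, as the text describes.
    sx, sy = lut
    h, w = circular_img.shape[:2]
    return circular_img[sy.clip(0, h - 1), sx.clip(0, w - 1)]

lut = build_unwrap_lut(radius=60, center_x=64, center_y=64, out_w=360, out_h=60)
pano = unwrap(np.zeros((128, 128), dtype=np.uint8), lut)
```

Precomputing the table once and reusing it per frame is exactly why the module can run in real time.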
The color of each pixel in a color image is normally a weighted combination of the R, G, B tristimulus values; other color bases, such as the intensity-hue-saturation (IHS) base, can be obtained from the RGB values by linear or nonlinear transformation. To capture the difference in color feature values between flame regions and the background under different external conditions (cloudy day, sunny day, and night), and between flame and other light emitters such as car headlights and torch light, this patent adopts a (Cr, Cb) color-space model; to obtain the flame parameter values in that color space, the color model conversion module builds a distribution model of flame images in (Cr, Cb) space.
The conversion from the RGB color space to the YCrCb color space is given by formula (22):
Y = 0.299·R + 0.587·G + 0.114·B
Cr = 0.500·R − 0.4187·G − 0.0813·B + 128   (22)
Cb = −0.1687·R − 0.3313·G + 0.500·B + 128
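Formula (22) as a direct computation (a sketch; the coefficients are the standard BT.601-style matrix used above):

```python
def rgb_to_ycrcb(r, g, b):
    # Formula (22): the (Cr, Cb) pair feeds the flame color distribution model.
    y  =  0.299  * r + 0.587  * g + 0.114 * b
    cr =  0.500  * r - 0.4187 * g - 0.0813 * b + 128
    cb = -0.1687 * r - 0.3313 * g + 0.500  * b + 128
    return y, cr, cb

y, cr, cb = rgb_to_ycrcb(255, 0, 0)   # pure red: Cr well above 128, Cb below
```

A sanity check: a neutral gray maps to Cr = Cb = 128, which is why the chroma offsets of +128 appear in the formula.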
Then, according to the (Cr, Cb) spatial distribution model of flame images, whether a light source falls within the flame distribution is computed and used as an important criterion for judging a flame point. The computation is given by formula (6):

W_firecolor = exp{ −(1/2)·[ A·(Cr − Cr̄)² + 2B·(Cr − Cr̄)·(Cb − Cb̄) + C·(Cb − Cb̄)² ] }   (6)

where Cr̄ and Cb̄ are the sample means of Cr and Cb over flame points, and A, B, C are coefficients computed from the sample standard deviations and means. In this patent:
Cr̄ = 144.6; Cb̄ = 117.5; A = 3.7×10⁻³; B = 4.1×10⁻³; C = 4.5×10⁻³.
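With the constants above, formula (6) is a one-line Gaussian-style weight. A minimal sketch:

```python
import math

CR_MEAN, CB_MEAN = 144.6, 117.5          # sample means of flame points
A, B, C = 3.7e-3, 4.1e-3, 4.5e-3         # coefficients from sample statistics

def flame_color_weight(cr, cb):
    # Formula (6): closeness of a pixel's (Cr, Cb) to the flame distribution.
    dcr, dcb = cr - CR_MEAN, cb - CB_MEAN
    q = A * dcr * dcr + 2.0 * B * dcr * dcb + C * dcb * dcb
    return math.exp(-0.5 * q)

w_at_mean = flame_color_weight(144.6, 117.5)   # exactly at the sample mean: 1.0
w_far = flame_color_weight(200.0, 200.0)       # far from flame colors: near 0
```

As the text says, values near 1 indicate a light emitter close to flame color; the weight drops off quickly away from the sample mean.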
The background maintenance module 37 is the key to detecting moving objects by the background-subtraction algorithm; it directly affects the completeness and accuracy of motion-object detection. The invention uses a background-adaptive method whose core idea is to represent the admissible value of each legal background pixel with a vector of RGB mixture values X_mix,b(i) (i is the frame number), updated by IIR filtering as follows:
(1) When the light changes naturally (not caused by switching a lamp) and no abnormal object is present, the RGB vector undergoes adaptive learning:
X_mix,bn+1(i) = (1 − λ)·X_mix,bn(i) + λ·X_mix,cn(i)   (14)
where X_mix,cn(i) is the current-frame RGB vector, X_mix,bn(i) is the current background RGB vector, X_mix,bn+1(i) is the predicted next-frame background RGB vector, and λ is the background update speed: with λ = 0 a fixed (initial) background is used; with λ = 1 the current frame is taken as the background; with 0 < λ < 1 the background is a mixture of the previous background and the current frame.
(2) When the light changes suddenly (caused by switching a lamp), the vector is reset to the current frame:
X_mix,bn+1(i) = X_mix,cn(i)   (15)
(3) When an object enters the monitored range, the background is held unchanged, to avoid learning part of the moving object into the background:
X_mix,bn+1(i) = X_mix,bn(i)   (23)
In the above, X_mix,bn+1(i) (i = 1, 2, 3) denote the R, G, B components respectively; for brevity, the coordinate (x, y) of each pixel is omitted from the formulas.
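The three update rules (14), (15), (23) for one pixel can be sketched as a single function (the flag names and λ default are illustrative):

```python
def update_background(bg_rgb, cur_rgb, lam=0.05,
                      light_jump=False, object_present=False):
    # One pixel's RGB background vector, updated per rules (14), (15), (23).
    if light_jump:                      # (15): lamp switched -> reset to current frame
        return list(cur_rgb)
    if object_present:                  # (23): moving object -> hold background
        return list(bg_rgb)
    # (14): gradual lighting drift -> IIR blend, lam is the update speed
    return [(1.0 - lam) * b + lam * c for b, c in zip(bg_rgb, cur_rgb)]

bg, cur = [100.0, 100.0, 100.0], [120.0, 140.0, 160.0]
blended = update_background(bg, cur, lam=0.5)   # [110.0, 120.0, 130.0]
```

The IIR form means old background values decay geometrically, so slow lighting drift is absorbed without ever resetting the model.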
The change in background luminance is used to judge whether a detected moving object was caused by switching a lamp. A lamp-switching event should not trigger a system alarm, so the background-luminance analysis helps reduce the system's false-alarm rate. Background luminance is measured by the average background brightness Ȳb, given by formula (11):

Ȳb = [ Σ(x=0..W−1) Σ(y=0..H−1) Yn(x, y)·(1 − Mn(x, y)) ] / [ Σ(x=0..W−1) Σ(y=0..H−1) (1 − Mn(x, y)) ]   (11)

In formula (11), Yn(x, y) is the brightness of each pixel of the current frame and Mn(x, y) is the mask table of the current frame. With Ȳb0 denoting the background luminance of the frame before an exception object is found and Ȳb1 that of the first frame in which the exception object is detected, the change in mean luminance between the two frames is:
ΔY = Ȳb1 − Ȳb0   (13)
If ΔY exceeds a certain positive value, a lamp-on event is assumed to have occurred; if ΔY is below a certain negative value, a lamp-off event is assumed. According to this judgment, the current frame is reset with formula (15).
The mask table is an array M of the same size as the video frame, recording whether each pixel shows motion change; this array is called the mask map (Mask Map):
Mn(x, y) = 1 if pixel (x, y) has motion change, and 0 otherwise   (12)
The array M is the binary image of the moving object; it can be used not only to mask the video frame and segment out the moving object, but also for tracking, analysis, and classification of moving objects.
After the omnibearing vision sensor described above captures the monitoring image, several processing steps follow: unwrapping of the omnidirectional image, computation of the difference image, edge detection, extraction of connected regions, image pre-processing, pattern classification, color-feature extraction and discrimination, then the comprehensive judgment, and finally the assessment of the likelihood of fire.
The difference image (the difference method) is an image-processing technique commonly used to detect image changes and moving objects. The motion-object module 23 obtains it by image subtraction and, using the correspondence between three-dimensional space and image pixels, detects the pixel regions where a light-source point exists. A stable reference image is required first and is kept in computer memory; in the invention, a relatively stable reference image is stored in the image data file storage module 18, and image subtraction is then carried out between the live image and the reference image. Regions that change show enhanced brightness in the result. The image subtraction is given by formula (2):
f_d(X, t0, ti) = f(X, ti) − f(X, t0)   (2)
where f_d(X, t0, ti) is the result of subtracting the reference image from the live image, f(X, ti) is the live image, and f(X, t0) is the reference image.
Because the omnibearing vision sensor is fixed during fire monitoring while stationary objects in the background may occasionally be moved, the motion pixels detected by the background-subtraction algorithm may include the hole left where an object was moved away. Since a hole does not move in subsequent video frames, it can be eliminated by an adjacent-K-frame difference; the invention uses the adjacent-K-frame difference to judge whether a pixel is a hole left by a background object. This requires computing formula (3):
f_d(X, ti−k, ti) = f(X, ti) − f(X, ti−k)   (3)
When both f_d(X, t0, ti) ≥ threshold and f_d(X, ti−k, ti) ≥ threshold hold, the pixel is considered part of a moving object. If f_d(X, t0, ti) ≥ threshold but f_d(X, ti−k, ti) < threshold, the invention treats the pixel as a hole produced after a stationary background object was moved, and to eliminate the hole, the reference image is refreshed with formula (4):
f(X, t0) ⇐ f(X, ti−k)   (4)
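The pixel classification of formulas (2)-(4) can be sketched vectorized over a whole frame (the threshold value 30 is illustrative):

```python
import numpy as np

def classify_change(cur, ref, prev_k, thresh=30):
    # (2): difference against the reference image; (3): adjacent-K-frame difference.
    d_ref = cur.astype(np.int16) - ref.astype(np.int16)
    d_k = cur.astype(np.int16) - prev_k.astype(np.int16)
    motion = (d_ref >= thresh) & (d_k >= thresh)        # moving object
    hole = (d_ref >= thresh) & (d_k < thresh)           # background object was moved
    new_ref = np.where(hole, prev_k, ref)               # (4): refresh the reference
    return motion, hole, new_ref

cur = np.array([[200, 200]], dtype=np.uint8)
ref = np.array([[0, 0]], dtype=np.uint8)
prev_k = np.array([[0, 200]], dtype=np.uint8)
motion, hole, new_ref = classify_change(cur, ref, prev_k)
```

The first pixel differs from both the reference and the K-frames-old frame (moving object); the second differs only from the reference (a hole), so the reference is patched there.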
When both f_d(X, t0, ti) ≥ threshold and f_d(X, ti−k, ti) ≥ threshold hold and a moving object is detected, the fire judgment computation module 24 is started. Seven new threads are launched in this module: the simple pattern classification module 26, the flame-area variation feature judgment thread 27, the flame-body variation feature judgment thread 28, the flame-color feature judgment thread 29, the flame-flicker pattern feature judgment thread 30, the flame overall-movement feature judgment thread 31, and the fire-intensity feature judgment thread 32. Finally, the comprehensive judgment module 33 combines the computation results of each thread into a comprehensive judgment quantized value and takes different actions according to that value. If the image is judged to be static, control passes to the background maintenance module 37.
Before the simple pattern classification module 26 runs, the edge points produced by noise must be rejected, because the actual image signal contains noise, which generally appears as high-frequency content; the connectivity between pixels is then computed.
The rejection of noise-produced edge points uses a four-neighborhood traversal in the invention: the value of each image pixel is replaced by the average gray value of the pixels in the neighborhood defined by a filtering mask, i.e. each pixel value is replaced by the average of all the values in its local neighborhood, as shown in formula (16):
h[i, j] = (1/M)·Σ f[k, l]   (16)
where M is the total number of pixels in the neighborhood, taken as 4 in the invention.
The connectivity between pixels is a key concept in defining regions. In a two-dimensional image, suppose a target pixel has m (m ≤ 8) adjacent pixels; if the gray value of the pixel equals the gray value of some point A among those m pixels, the pixel is said to be connected with A. The common connectivities are 4-connectivity and 8-connectivity: 4-connectivity generally takes the four points above, below, left, and right of the target pixel; 8-connectivity takes all the neighbors of the target pixel in two-dimensional space. All pixels that are mutually connected constitute one connected region.
The connected-region computation module 25 mainly handles the following problem in image processing: in a binary image, the background and the target have gray values 0 and 1 respectively. Targets in such an image are labeled and the features of each target are computed for recognition; in the design of a multi-target real-time tracking system, a fast, memory-saving connected-component labeling algorithm is needed. In the invention, a cell with pixel value 0 indicates that the cell contains no suspicious flame, and 1 that it does, so connected-component labeling can be used to merge the regions. The labeling algorithm finds all connected components in the image and assigns the same label to all points in the same component. Fig. 5 shows the labeling principle. The connected-region algorithm is:
1) scan the image from left to right, top to bottom;
2) if a pixel is 1, then:
if exactly one of the point above and the point to the left has a label, copy that label;
if both points have the same label, copy that label;
if the two points have different labels, copy the smaller label and enter the two labels into the equivalence table as equivalents;
otherwise, assign a new label to this pixel and enter it into the equivalence table;
3) return to step 2 if more points need to be considered;
4) find the lowest label of each equivalence class in the equivalence table;
5) scan the image and replace each label with the lowest label of its equivalence class.
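The steps above are the classical two-pass labeling with an equivalence table; a sketch with 4-connectivity and a union-find dictionary as the equivalence table:

```python
import numpy as np

def label_regions(binary):
    # Two-pass connected-component labeling, 4-connectivity (up/left neighbors).
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    parent = {}                              # equivalence table

    def find(x):                             # smallest-equivalent lookup
        while parent[x] != x:
            x = parent[x]
        return x

    next_label = 1
    for i in range(h):                       # pass 1: assign, record equivalences
        for j in range(w):
            if not binary[i, j]:
                continue
            up = labels[i - 1, j] if i else 0
            left = labels[i, j - 1] if j else 0
            if up and left:
                a, b = find(up), find(left)
                labels[i, j] = min(a, b)
                parent[max(a, b)] = min(a, b)   # two labels become equivalent
            elif up or left:
                labels[i, j] = up or left
            else:
                parent[next_label] = next_label
                labels[i, j] = next_label
                next_label += 1
    for i in range(h):                       # pass 2: replace by class minimum
        for j in range(w):
            if labels[i, j]:
                labels[i, j] = find(labels[i, j])
    return labels

img = np.array([[1, 0, 1],
                [1, 0, 1],
                [1, 1, 1]], dtype=np.uint8)
lab = label_regions(img)                     # U-shape: all foreground -> label 1
```

The U-shaped example is the interesting case: its two arms first receive different labels, which the equivalence table merges in the bottom row.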
After the suspicious flame regions are found, the pattern classification module 26 classifies flame versus interference patterns simply, by computing the similarity of the change images of adjacent frames. Before pattern classification, a pre-processing step computes the area Si of each labeled connected region, with the following judgment rules:
1) if Si ≤ threshold 1, the change region is a noise point;
2) if Si ≥ threshold 2, the change region is a large-area infrared change;
3) if threshold 1 < Si < threshold 2, the change region is a suspicious flame region.
If the image satisfies condition 3), simple pattern classification follows; the invention computes the similarity of the change images of adjacent frames, as shown in formula (5):

ε_i = [ Σ(x,y)∈Ω b_i(x, y) ∩ b_{i+1}(x, y) ] / [ Σ(x,y)∈Ω b_i(x, y) ∪ b_{i+1}(x, y) ],  i = 1, …, N−1   (5)

where b_i(x, y) is the suspicious flame region in the previous frame and b_{i+1}(x, y) is that in the current frame.
According to the computation results, the following simple pattern classification is obtained:
4) if ε_i ≤ threshold 1, the image pattern is a fast-moving bright spot;
5) if ε_i ≥ threshold 2, the image pattern is a fixed infrared-emitting region;
6) if threshold 1 < ε_i < threshold 2, the image pattern is a flame pattern.
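Formula (5) is an intersection-over-union measure; with it, rules 4)-6) become a three-way classifier. A sketch (the threshold values 0.1 and 0.9 are illustrative, not from the patent):

```python
import numpy as np

def similarity(b_prev, b_cur):
    # Formula (5): intersection over union of suspicious regions in adjacent frames.
    inter = np.logical_and(b_prev, b_cur).sum()
    union = np.logical_or(b_prev, b_cur).sum()
    return float(inter) / float(union) if union else 0.0

def classify_pattern(eps, t1=0.1, t2=0.9):
    if eps <= t1:
        return "fast-moving bright spot"     # rule 4)
    if eps >= t2:
        return "fixed light-emitting region" # rule 5)
    return "flame pattern"                   # rule 6)

a = np.array([[1, 1, 1, 0]], dtype=bool)
b = np.array([[0, 1, 1, 1]], dtype=bool)
eps = similarity(a, b)                       # 2 common pixels / 4 total = 0.5
```

The intuition: a flame region overlaps its previous-frame self partially (it flickers and grows), unlike a dashing bright spot (almost no overlap) or a fixed lamp (near-total overlap).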
The color-feature judgment of the flame core uses the layered variation pattern of the flame: the color feature W_firecolor computed by formula (6) for the pixels in the middle of the flame shape serves as one comprehensive judgment index. The closer the value of W_firecolor is to 1, the closer the light emitter is to a flame. The judgment of the flame color feature is carried out in the flame-color feature judgment thread 29.
The flame overall-movement feature judgment is carried out in the overall-movement judgment thread 31. This module uses the rule that the overall movement of a flame is continuous and non-jumping, judging the trajectory of the flame's overall movement; in this patent the mask map obtained with formula (12) is used to compute the overall-movement feature. The quantized value W_firemove is taken as 1 or 0: a value of 1 indicates that the overall movement of the flame is continuous, and 0 that it is not.
The flame-area variation feature judgment is carried out in the area-variation judgment thread 27. This module uses the rule that the flame area grows continuously and expansively: from the area Si of each connected region obtained above, it judges whether the flame area is undergoing expanding growth. In this patent the luminous area Si of each frame image is filtered recursively to obtain the recursive value of the luminous area for the next frame image; the computation is given by formula (7):

S̄i(t+1) = (1 − k)·S̄i(t) + k·Si   (7)

where S̄i(t+1) is the recursive average of the luminous area for the next frame image, S̄i(t) is the recursive average of the luminous area of the current frame image, Si is the computed luminous area of the current frame, and k is a coefficient less than 1. The invention detects the expanding growth trend over time with the inequality:

[ S̄i(t+1) + S̄i(t) + S̄i(t−1) ] / 3 > [ S̄i(t−2) + S̄i(t−3) + S̄i(t−4) ] / 3   (8)

If inequality (8) holds, there is a growth trend, reflecting that the flame area is showing expanding growth over time, and the flame-area expansion quantized value W_firearea is taken as 1; thus a quantized value of 1 indicates that the flame area is expanding, and 0 that it is not.
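The recursion (7) and the three-frame trend test (8) can be sketched together (the coefficient k = 0.3 and the toy area sequences are illustrative):

```python
def area_trend(si_values, k=0.3):
    # (7): recursive average of the luminous area, frame by frame.
    s_bar = [float(si_values[0])]
    for s in si_values[1:]:
        s_bar.append((1.0 - k) * s_bar[-1] + k * s)
    # (8): average of the latest three recursive values vs. the three before.
    recent = sum(s_bar[-3:]) / 3.0
    earlier = sum(s_bar[-6:-3]) / 3.0
    return recent > earlier                # True -> expanding growth trend

w_fire_area = 1 if area_trend([10, 12, 15, 19, 24, 30, 37]) else 0
```

Comparing three-frame averages instead of single values is what makes the criterion robust to the frame-to-frame flicker of a real flame.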
The flame-flicker pattern feature judgment is carried out in the flicker-pattern judgment thread 30. This module uses the rule that the brightness varies over the spatial distribution with time: in this patent the change frequency of the luminous connected region Si computed above for each frame image is examined. In the early stage of a fire, the area of the flame's luminous connected region Si shows a growth tendency, i.e. inequality (8) holds; in that case the computed luminous area Si is further compared with the recursive average of the luminous area of the current frame image. In this patent, each time the sign of their difference changes, a count is made; the number of changes within a certain time interval is taken as the flame flicker frequency f_fenquncy and compared with the threshold f_c set by the system. When f_fenquncy ≥ f_c, the flame-flicker quantized value W_firebicker is taken as 1; a quantized value of 1 indicates that flame flicker is present, and 0 that it is not.
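The sign-change counting can be sketched as follows (the sequences and the threshold f_c = 3 are illustrative):

```python
def flicker_count(si_values, s_bar_values):
    # Count sign changes of (S_i - recursive mean); each change is one flicker.
    diffs = [s - m for s, m in zip(si_values, s_bar_values)]
    return sum(1 for a, b in zip(diffs, diffs[1:]) if a * b < 0)

f_fenquncy = flicker_count([11, 9, 12, 8, 13], [10, 10, 10, 10, 10])
f_c = 3                                   # illustrative system threshold
w_fire_bicker = 1 if f_fenquncy >= f_c else 0
```

An area oscillating around its recursive mean (here 4 sign changes in 5 frames) is exactly the flicker signature; a steadily growing lamp glow produces no sign changes at all.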
The flame-body variation feature judgment is carried out in the body-variation judgment thread 28. This module uses the fact that in the early fire stage, the flame's shape changes, changes of spatial orientation, flame jitter, and splitting and merging all have their own characteristic patterns with no common regularity, whereas the body and body variation produced by light emitters such as electric torches and car headlights are regular and systematic. With the omnibearing vision sensor used in this patent, the shape change of the flame in the horizontal direction can be observed from the viewing angle. If the computed body and body variation of the light emitter are regular and systematic, the quantized value W_firebody is taken as 0; a quantized value of 0 indicates that the light emitter is regular and systematic, and 1 that it is irregular and unsystematic.
The fire-intensity judgment is carried out in the fire-intensity judgment thread 32. In this module, when the flame overall-movement quantized value W_firemove is 1, the flame-area expansion quantized value W_firearea is 1, and the computed flame-color feature W_firecolor is greater than 0.5, the percentage of the whole monitored area occupied by the flame area is computed, one percentage point counting as 0.1; the computation is given by formula (9):

W_fireindensity = (Si / ΣS) × 10,  where W_firemove = 1 AND W_firearea = 1 AND W_firecolor ≥ 0.5   (9)

where ΣS is the whole monitored area; the larger the computed value of W_fireindensity, the greater the fire intensity.
On the basis of the above seven flame judgments, a comprehensive judgment is then made to reduce the misjudgment rate while also assessing the degree of the fire. The weighted comprehensive judgment is computed in module 33; the comprehensive judgment formula (10) adopts a weighting scheme:

W_firealarm = K_firepattern × ε_i + K_firecolor × W_firecolor + K_firemove × W_firemove + K_firearea × W_firearea + K_firebody × W_firebody + K_fireindensity × W_fireindensity + K_firebicker × W_firebicker   (10)

where:
K_firepattern is the weighting coefficient of the flame pattern;
K_firecolor is the weighting coefficient of the flame color feature;
K_firemove is the weighting coefficient of the flame movement feature;
K_firearea is the weighting coefficient of the flame area variation;
K_firebody is the weighting coefficient of the flame body variation;
K_fireindensity is the weighting coefficient of the fire intensity;
K_firebicker is the weighting coefficient of the flame flicker.
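Formula (10) is a plain weighted sum of the seven criteria. A minimal sketch; the coefficient and feature values below are illustrative placeholders, not the patent's calibrated weights:

```python
def fire_alarm(features, weights):
    # Formula (10): weighted combination of the seven flame criteria.
    return sum(weights[name] * features[name] for name in weights)

# Illustrative coefficients and feature values (not taken from the patent).
weights = {"pattern": 1.0, "color": 1.5, "move": 1.0, "area": 1.5,
           "body": 1.0, "indensity": 2.0, "bicker": 1.0}
features = {"pattern": 0.6, "color": 0.8, "move": 1, "area": 1,
            "body": 1, "indensity": 0.3, "bicker": 1}
w_fire_alarm = fire_alarm(features, weights)    # 6.9 with these numbers
```

The resulting W_firealarm is then compared against the K_attention / K_alarm thresholds to pick one of the graded responses described next.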
According to the W_firealarm result computed with formula (10), the following different outputs are produced depending on the size of the quantized value:
1) if K_attention ≤ W_firealarm ≤ K_alarm1, the flame is suspicious and needs attention: the communication module 36 notifies the management personnel by SMS, voice call, or e-mail to verify the image over the network, and the image data file storage module 18 starts recording live video data; in this case the manager may choose over the network to continue observing or to restart the computation, and the manager's confirmation information is written into the user data storage module 34 so that responsibility can be clearly assigned;
2) if K_alarm1 < W_firealarm ≤ K_alarm2, a fire early warning is issued: the communication module 36 notifies the management personnel by SMS, voice call, or e-mail to verify the image over the network and to confirm on site; at the same time, 119 is alerted automatically and the image data file storage module 18 starts recording live video data;
3) if K_alarm2 < W_firealarm ≤ K_alarm3, a fire is judged to have occurred: the communication module 36 alerts 119 automatically and also notifies the management personnel by SMS, voice call, or e-mail to dial 119 and state clearly that the present judgment is a fire; the fire alarm is sounded simultaneously to inform the surrounding area of the fire so that people can evacuate urgently; fire alarms continue to be sent to 119, and live video recording continues;
4) if K_alarm3 < W_firealarm, the fire alarm is sounded to inform the surrounding area of the fire for emergency evacuation; fire alarms continue to be sent to 119; all relevant personnel are notified by every available means of the fire in the area and its severity, in the hope of obtaining immediate rescue; live video recording continues.
From the above computations it can be seen that the larger the computed quantized value, the greater the possibility that a fire has broken out. The remaining work is to send information such as the time, place, and degree of the judged fire promptly, by various network means, to the personnel who need it, so that it can be handled and relieved in time.
The judgment of the flame occurrence position is made by computing the center point of the flame from the correspondence between three-dimensional space and image pixels; the other part, the address information, is kept in the user data storage 34. Combining these two parts of information yields a complete, detailed address and bearing (district, street, house number, floor, and the bearing on that floor).
The comprehensive judgment that a fire has broken out proceeds from the above computations as follows. When W_firealarm exceeds K_attention, automatic camera monitoring is started to collect on-site photographic evidence, which is kept in the information storage part 18; the quantized value is then read continuously. If after a period K_timelimit (the device is initially set to 1 minute) the quantized value shows no growth trend, shooting stops and the quantized value is cleared for the next computation, while the captured images and the time of occurrence are kept in the image data file storage module 18 so that technicians can analyze the cause. If within K_timelimit the quantized value keeps increasing, it is first evaluated: if the quantized value lies between K_alarm1 and K_alarm2, an alarm is played first, then the relevant security management personnel are notified by every possible communication facility and required to confirm by network means (the device provides remote visual monitoring) or on site, while the device continues to observe the quantized value; when the quantized value lies between K_alarm2 and K_alarm3, the device automatically reports the fire situation to fire department 119, with the reported information obtained from the user data storage 34, including the alarm location (the district, street, house number, and the detailed location of the fire), the time of the fire alarm, the fire intensity predicted by the system, and the burning area, so that the fire department can take correct countermeasures at the first moment; when the quantized value exceeds K_alarm3, confirming that a fire has broken out, the alarm is sounded, surrounding people are notified to evacuate urgently, and alarms continue to be sent to fire department 119 with the same information: the alarm location (city, street, house number, the detailed location of the fire), the time of the fire alarm, the fire intensity predicted by the system, and the burning area.
The microprocessor 15 is an embedded system, and the algorithms of the present invention are implemented in the Java language.
The effect produced by embodiment 1 above is that the quantified comprehensive fire judgement reduces the false-alarm rate of fire detection, providing a brand-new, faster, more accurate and more reliable multi-parameter intelligent fire monitoring method and device based on an omnibearing computer vision sensor.

Claims (6)

1. A fire-disaster monitoring device based on an omnibearing vision sensor, the device comprising a microprocessor, a video sensor for on-site monitoring, and a communication module for communicating with the outside, the microprocessor comprising:
an image-data reading module for reading the video image information transmitted from the video sensor;
a file storage module for storing the data collected by the video sensor to memory;
an on-site real-time playback module for connecting an external display device and playing the monitoring picture in real time;
the output of the video sensor being communicatively connected to the microprocessor, characterized in that:
the video sensor is an omnibearing vision sensor comprising an outward-convex mirror surface for reflecting objects in the monitored field, a transparent cylinder, and a camera; the convex mirror faces downward, the transparent cylinder supports the convex mirror, and the camera, which captures the image formed on the convex mirror, is located inside the transparent cylinder at the virtual focus of the convex mirror;
the microprocessor further comprises:
a sensor calibration module for calibrating the parameters of the omnibearing vision sensor and establishing the correspondence between real scene points in space and the video image obtained;
a color model conversion module for converting the color of each pixel of the color image from the RGB color space to the (Cr, Cb) color model;
an image stretching processing module for expanding the captured circular video image into a panoramic rectangle: from the correspondence between a point (x*, y*) on the circular omnidirectional image and a point (x**, y**) on the rectangular cylindrical panorama, a mapping matrix is established as shown in formula (1):

P(x**, y**) ⇐ M(x*,y*)→(x**,y**) × P(x*, y*)   (1)

where M(x*,y*)→(x**,y**) is the mapping matrix, P(x*, y*) is the pixel matrix on the circular omnidirectional image, and P(x**, y**) is the pixel matrix on the rectangular cylindrical panorama;
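The mapping of formula (1) is a standard polar-to-rectangular unwrapping. The sketch below shows the geometry for one panorama pixel; the center coordinates, inner/outer radii and image sizes are illustrative assumptions, not values from the patent.

```python
import math

def unwrap_point(u, v, cx, cy, r_in, r_out, width, height):
    """Map a rectangular-panorama pixel (u, v) back to the circular
    omnidirectional image point (x*, y*) it samples from.
    u spans the 0..2*pi azimuth; v spans the r_in..r_out radius."""
    theta = 2.0 * math.pi * u / width          # azimuth angle
    r = r_in + (r_out - r_in) * v / height     # radial distance
    x = cx + r * math.cos(theta)
    y = cy + r * math.sin(theta)
    return x, y

# Building the full panorama applies this to every (u, v) and samples
# (e.g. nearest-neighbour) from the circular image; precomputing all
# (x, y) pairs once gives the mapping matrix of formula (1).
```

Precomputing the table once per sensor is what makes the calibration step worthwhile: the per-frame unwrapping then reduces to table lookups.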
a moving-object detection module for performing a difference operation between the captured current live video image and a relatively stable reference image, the image subtraction being computed as in formula (2):

f_d(X, t_0, t_i) = f(X, t_i) − f(X, t_0)   (2)

where f_d(X, t_0, t_i) is the result of the image subtraction between the live image and the reference image, f(X, t_i) is the live image, and f(X, t_0) is the reference image;

the image subtraction between the current image and the adjacent K-th frame is given by formula (3):

f_d(X, t_i−k, t_i) = f(X, t_i) − f(X, t_i−k)   (3)

where f_d(X, t_i−k, t_i) is the result of the image subtraction between the live image and the adjacent K-th frame, and f(X, t_i−k) is the image K frames earlier;

when f_d(X, t_0, t_i) ≥ threshold and f_d(X, t_i−k, t_i) ≥ threshold both hold, a suspicious flame object is judged;

when f_d(X, t_0, t_i) ≥ threshold and f_d(X, t_i−k, t_i) < threshold, a stationary object is judged and the reference image is updated by formula (4):

f(X, t_0) ⇐ f(X, t_i−k)   (4)

when f_d(X, t_0, t_i) < threshold, a stationary object is judged;
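The decision rules of formulas (2)–(4) can be sketched as below. This is a simplified model: frames are gray-level nested lists, a single mean absolute difference stands in for the per-pixel comparison, and the threshold is a free parameter (the patent leaves its choice open).

```python
def classify_pixelwise(ref, prev_k, cur, threshold):
    """Apply the differencing of formulas (2)/(3) and the threshold rules:
    returns a label plus the possibly updated reference frame (formula (4))."""
    def diff(a, b):
        # mean absolute difference over the whole frame
        n = len(a) * len(a[0])
        return sum(abs(pa - pb) for ra, rb in zip(a, b)
                   for pa, pb in zip(ra, rb)) / n

    d_ref = diff(cur, ref)       # formula (2): against the reference image
    d_adj = diff(cur, prev_k)    # formula (3): against the frame K back
    if d_ref >= threshold and d_adj >= threshold:
        return "suspicious_flame", ref
    if d_ref >= threshold and d_adj < threshold:
        # formula (4): reference image <= f(X, t_{i-k})
        return "stationary", prev_k
    return "stationary", ref
```

A real implementation would keep the per-pixel difference map rather than one scalar, but the branch structure is the same.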
a connected-region computing module for labeling the current image, where a cell whose pixel gray value is 0 indicates no suspicious flame in that cell and a cell whose pixel gray value is 1 indicates a suspicious flame; the module computes whether each pixel in the current image equals the pixels of the adjacent points around it, pixels of equal gray value are judged to be connected, and all mutually connected pixels are taken as one connected region;
a pattern classification module which, after a suspicious flame is judged, obtains the area S_i of each connected region and classifies as follows:
1) if S_i < threshold 1, the changed region is a noise point;
2) if S_i > threshold 2, the changed region is a large-area infrared change;
3) if threshold 1 < S_i < threshold 2, the changed region is a suspicious flame region;
after a suspicious flame region is judged, the similarity ε_i of the change images of adjacent frames is calculated by formula (5):

ε_i = Σ_(x,y)∈Ω [b_i(x,y) ∩ b_i+1(x,y)] / Σ_(x,y)∈Ω [b_i(x,y) ∪ b_i+1(x,y)],  i = 1, …, N−1   (5)

where b_i(x, y) is the suspicious flame region in the previous frame and b_i+1(x, y) is the suspicious flame region in the current frame;
from this result the image pattern is classified as:
1) if ε_i ≤ threshold 1, the image pattern is a rapidly moving bright spot;
2) if ε_i ≥ threshold 2, the image pattern is a fixed infrared-emitting region;
3) if threshold 1 < ε_i < threshold 2, the image pattern is a flame;
a flame color feature judging module for using the (Cr, Cb) spatial distribution model to calculate whether a light-emitting source falls within the (Cr, Cb) distribution model of flame images, by formula (6):

W_fire_color = exp{ −(1/2)[ A(Cr − Cr̄)² + 2B(Cr − Cr̄)(Cb − Cb̄) + C(Cb − Cb̄)² ] }   (6)

where W_fire_color is the color feature quantity; Cr̄ and Cb̄ are the sample means of Cr and Cb over flame points; A, B and C are coefficients computed from the sample standard deviations and means; Cr̄ = 144.6, Cb̄ = 117.5, A = 3.7×10⁻³, B = 4.1×10⁻³, C = 4.5×10⁻³;
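With the constants stated in the claim, formula (6) can be evaluated directly; a minimal sketch, assuming Cr and Cb are the mean chroma of the candidate region:

```python
import math

CR_MEAN, CB_MEAN = 144.6, 117.5
A, B, C = 3.7e-3, 4.1e-3, 4.5e-3

def fire_color_weight(cr, cb):
    """W_fire_color of formula (6): a Gaussian-like score equal to 1.0
    at the flame-sample mean (144.6, 117.5) and decaying away from it."""
    dcr, dcb = cr - CR_MEAN, cb - CB_MEAN
    q = A * dcr * dcr + 2 * B * dcr * dcb + C * dcb * dcb
    return math.exp(-0.5 * q)
```

The quadratic form is the Mahalanobis-style distance implied by the sample covariance of the flame training points, which is why the score peaks exactly at the sample means.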
a flame-area variation judging module for performing a recursive calculation on the light-emitting area S_i of each frame image to obtain the recursive value S̄_(i+1) of the light-emitting area of the next frame image, computed by formula (7):

S̄_(i+1) = (1 − k)·S̄_(i) + k·S_i   (7)

where S̄_(i+1) is the recursive average of the light-emitting area of the next frame image, S̄_(i) is the recursive average of the light-emitting area of the current frame image, S_i is the computed light-emitting area of the current frame, and k is a coefficient less than 1; from these values inequality (8) is tested:

[S̄_(i+1) + S̄_(i) + S̄_(i−1)] / 3 > [S̄_(i−2) + S̄_(i−3) + S̄_(i−4)] / 3   (8)

if inequality (8) holds, the flame-area expansion quantized value W_fire_area is taken as 1; if it does not hold, W_fire_area is taken as 0;
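The exponential smoothing of formula (7) and the three-frame trend test of inequality (8) can be sketched together; the smoothing coefficient k = 0.5 is illustrative:

```python
def area_trend(areas, k=0.5):
    """Feed per-frame light-emitting areas S_i through the recursion of
    formula (7), then apply inequality (8): return W_fire_area = 1 if the
    mean of the three newest recursive averages exceeds the mean of the
    three before them, else 0."""
    smoothed = []
    s = areas[0]                       # seed the recursion with the first area
    for a in areas:
        s = (1 - k) * s + k * a        # formula (7)
        smoothed.append(s)
    if len(smoothed) < 6:
        return 0                       # not enough history for inequality (8)
    recent = sum(smoothed[-3:]) / 3
    earlier = sum(smoothed[-6:-3]) / 3
    return 1 if recent > earlier else 0   # inequality (8)
```

Smoothing before comparing means a single noisy frame cannot flip the expansion flag; only a sustained growth in burning area does.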
a flame flicker judging module which, when inequality (8) holds, computes the difference between the computed light-emitting area S_i and the recursive average S̄_(i) of the current frame image; each change of the sign of this difference is counted as one flame flicker, and the number of changes within a certain time span gives the flicker frequency f_frequency, which is compared with a set threshold f_c: when f_frequency ≥ f_c, the flame flicker quantized value W_fire_bicker is taken as 1; when f_frequency < f_c, W_fire_bicker is taken as 0;
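Counting the sign changes of S_i − S̄_(i) yields the flicker frequency; a sketch in which the observation window length and f_c are supplied by the caller (both are left open in the claim):

```python
def flicker_value(diffs, window_seconds, f_c):
    """W_fire_bicker: count sign changes of the (S_i - S-bar_i) sequence,
    convert to a frequency over the window, and compare with threshold f_c."""
    signs = [1 if d > 0 else -1 for d in diffs if d != 0]
    changes = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    f = changes / window_seconds
    return 1 if f >= f_c else 0
```

Real flames flicker at a few hertz, while steady lamps produce almost no sign changes, which is what makes this a cheap discriminator.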
a fire-intensity judging module used when the flame-area expansion quantized value W_fire_area is 1 and the computed flame color feature W_fire_color is greater than 0.5, with formula (9):

W_fire_intensity = (S_i / ΣS) × 10   (9)

where ΣS is the whole monitored area and W_fire_intensity is the fire-intensity value;
a fire-occurrence comprehensive judging module for judging comprehensively whether a fire has occurred from the flame pattern, the flame color feature value, the flame area and the flame flicker rule, by formula (10):

W_fire_alarm = K_fire_pattern × ε_i + K_fire_color × W_fire_color + K_fire_area × W_fire_area + K_fire_bicker × W_fire_bicker + K_fire_intensity × W_fire_intensity   (10)

where K_fire_pattern is the weighting coefficient of the flame pattern, K_fire_color that of the flame color feature, K_fire_area that of the flame-area variation, K_fire_bicker that of the flame flicker, and K_fire_intensity that of the fire intensity;

when K_alarm ≤ W_fire_alarm, a fire alarm is judged and the management personnel are notified through the communication module.
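The weighted combination of formula (10) and the alarm comparison reduce to a dot product. A sketch with purely illustrative weights and threshold (the patent does not publish the K values):

```python
def fire_alarm_score(features, weights):
    """W_fire_alarm of formula (10): weighted sum of the quantized feature
    values (pattern similarity, color, area, flicker, intensity)."""
    return sum(weights[name] * value for name, value in features.items())

# Illustrative weights and feature values -- not from the patent.
weights = {"pattern": 2.0, "color": 3.0, "area": 2.0,
           "bicker": 1.5, "intensity": 1.5}
features = {"pattern": 0.5, "color": 0.9, "area": 1, "bicker": 1, "intensity": 0.4}
score = fire_alarm_score(features, weights)
is_alarm = score >= 5.0   # K_alarm, illustrative
```

Because every feature is bounded (0–1 quantized flags or a 0–1 similarity), the weights directly encode how much each cue contributes to reaching K_alarm.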
2. The fire-disaster monitoring device based on an omnibearing vision sensor as claimed in claim 1, characterized in that the alarm value K_alarm comprises K_attention, K_alarm1, K_alarm2 and K_alarm3:
if K_attention ≤ W_fire_alarm ≤ K_alarm1, a suspicious-flame attention state is judged and the management personnel are notified through the communication module;
if K_alarm1 < W_fire_alarm ≤ K_alarm2, a fire early warning is judged, the management personnel are notified through the telecommunication module, and the file storage module is started to record the live video data;
if K_alarm2 < W_fire_alarm ≤ K_alarm3, a fire occurrence is judged, the fire department (119) is alerted automatically through the telecommunication module, the management personnel are notified, and the file storage module is started to record the live video data;
if K_alarm3 < W_fire_alarm, the fire alarm is sounded, the fire department (119) is alerted automatically through the telecommunication module, the management personnel are notified, and the file storage module is started to record the live video data.
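The four-band escalation of claim 2 is a simple threshold ladder; a sketch with the response strings abbreviated and the band edges supplied by the caller:

```python
def escalation_level(w, k_attention, k1, k2, k3):
    """Map W_fire_alarm onto claim 2's four response levels."""
    if w > k3:
        return "fire alarm: siren + auto-notify 119 + record video"
    if w > k2:
        return "fire occurrence: auto-notify 119 + notify managers + record"
    if w > k1:
        return "fire early warning: notify managers + record video"
    if w >= k_attention:
        return "suspicious flame: notify managers"
    return "normal"
```

Testing the highest band first keeps the ladder correct even though the claim states the bands from lowest to highest.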
3. The fire-disaster monitoring device based on an omnibearing vision sensor as claimed in claim 1, characterized in that the microprocessor further comprises a background maintenance module, the background maintenance module comprising:
a background luminance computing unit for computing the average background luminance Ȳ_b by formula (11):

Ȳ_b = [ Σ_(x=0..W−1) Σ_(y=0..H−1) Y_n(x,y)·(1 − M_n(x,y)) ] / [ Σ_(x=0..W−1) Σ_(y=0..H−1) (1 − M_n(x,y)) ]   (11)

where Y_n(x,y) is the luminance of each pixel of the current frame and M_n(x,y) is the mask table of the current frame; the mask table is an array M of the same size as the video frame that records whether each pixel has a motion change, as in formula (12):

M_n(x,y) = 1 if pixel (x,y) has a motion change, and M_n(x,y) = 0 otherwise   (12)

with Ȳ_b0 the background luminance of the frame before a suspicious flame object is judged and Ȳ_b1 the background luminance of the first frame after that judgement, the change of the two mean luminances is:

ΔY = Ȳ_b1 − Ȳ_b0   (13)

if ΔY is greater than an upper limit, a light-on event is considered to have occurred; if ΔY is less than a lower limit, a light-off event; if ΔY lies between the two limits, the illumination is considered to change naturally;
a background adaptive unit for performing adaptive learning by formula (14) when the illumination changes naturally:

X_mix,bn+1(i) = (1 − λ)·X_mix,bn(i) + λ·X_mix,cn(i)   (14)

where X_mix,cn(i) is the RGB vector of the current frame, X_mix,bn(i) is the background RGB vector of the current frame, X_mix,bn+1(i) is the predicted background RGB vector of the next frame, and λ is the background update rate: λ = 0 uses an unchanging background, λ = 1 uses the current frame as the background, and for 0 < λ < 1 the background is a mixture of the previous background and the current frame;
when the illumination change is caused by switching a lamp, the background pixels are reset from the current frame, as in formula (15):

X_mix,bn+1(i) = X_mix,cn(i)   (15).
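The running-average background update of formula (14) and the switch-lamp reset of formula (15) can be sketched per pixel; frames here are lists of RGB triples, a simplification of the claim's layout:

```python
def update_background(bg, cur, lam, light_switched=False):
    """Per-pixel RGB background prediction: formula (15) reset when a lamp
    was switched, otherwise the formula (14) blend with update rate lam."""
    if light_switched:
        return [list(px) for px in cur]          # formula (15): hard reset
    return [[(1 - lam) * b + lam * c for b, c in zip(bpx, cpx)]
            for bpx, cpx in zip(bg, cur)]        # formula (14): blend
```

The two-branch design matters: blending through a sudden lamp switch would leave a ghost background for many frames, so abrupt global luminance jumps are handled by the reset instead.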
4. The fire-disaster monitoring device based on an omnibearing vision sensor as claimed in any one of claims 1-3, characterized in that the microprocessor further comprises:
a noise rejection module for replacing each pixel value with the average of all pixel values in its local neighborhood, as in formula (16):

h[i, j] = (1/M) · Σ f[k, l]   (16)

where M is the total number of pixels in the neighborhood.
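Formula (16) is a box (mean) filter. A minimal 3×3 sketch on a gray image given as nested lists, with borders handled by averaging only the in-bounds neighbors (one common convention; the claim does not specify border handling):

```python
def mean_filter(img):
    """h[i,j] = (1/M) * sum of f[k,l] over the 3x3 neighborhood, where M
    is the count of in-bounds neighbors (shrinks at the borders)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[k][l]
                    for k in range(max(0, i - 1), min(h, i + 2))
                    for l in range(max(0, j - 1), min(w, j + 2))]
            out[i][j] = sum(vals) / len(vals)
    return out
```

A single hot pixel is spread over its neighborhood and attenuated ninefold in the interior, which is exactly the noise-rejection effect the claim describes.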
5. A fire-disaster monitoring device based on an omnibearing vision sensor, the device comprising a microprocessor, a video sensor for on-site monitoring, and a communication module for communicating with the outside, the microprocessor comprising:
an image-data reading module for reading the video image information transmitted from the video sensor;
a file storage module for storing the data collected by the video sensor to memory;
an on-site real-time playback module for connecting an external display device and playing the monitoring picture in real time;
the output of the video sensor being communicatively connected to the microprocessor, characterized in that:
the video sensor is an omnibearing vision sensor comprising an outward-convex mirror surface for reflecting objects in the monitored field, a transparent cylinder, and a camera; the convex mirror faces downward, the transparent cylinder supports the convex mirror, and the camera, which captures the image formed on the convex mirror, is located inside the transparent cylinder at the virtual focus of the convex mirror;
the microprocessor further comprises:
a sensor calibration module for calibrating the parameters of the omnibearing vision sensor and establishing the correspondence between real scene points in space and the video image obtained;
a color model conversion module for converting the color of each pixel of the color image from the RGB color space to the (Cr, Cb) color model;
an image stretching processing module for expanding the captured circular video image into a panoramic rectangle: from the correspondence between a point (x*, y*) on the circular omnidirectional image and a point (x**, y**) on the rectangular cylindrical panorama, a mapping matrix is established as shown in formula (1):

P(x**, y**) ⇐ M(x*,y*)→(x**,y**) × P(x*, y*)   (1)

where M(x*,y*)→(x**,y**) is the mapping matrix, P(x*, y*) is the pixel matrix on the circular omnidirectional image, and P(x**, y**) is the pixel matrix on the rectangular cylindrical panorama;
a moving-object detection module for performing a difference operation between the captured current live video image and a relatively stable reference image, the image subtraction being computed as in formula (2):

f_d(X, t_0, t_i) = f(X, t_i) − f(X, t_0)   (2)

where f_d(X, t_0, t_i) is the result of the image subtraction between the live image and the reference image, f(X, t_i) is the live image, and f(X, t_0) is the reference image;

the image subtraction between the current image and the adjacent K-th frame is given by formula (3):

f_d(X, t_i−k, t_i) = f(X, t_i) − f(X, t_i−k)   (3)

where f_d(X, t_i−k, t_i) is the result of the image subtraction between the live image and the adjacent K-th frame, and f(X, t_i−k) is the image K frames earlier;

when f_d(X, t_0, t_i) ≥ threshold and f_d(X, t_i−k, t_i) ≥ threshold both hold, a suspicious flame object is judged;

when f_d(X, t_0, t_i) ≥ threshold and f_d(X, t_i−k, t_i) < threshold, a stationary object is judged and the reference image is updated by formula (4):

f(X, t_0) ⇐ f(X, t_i−k)   (4)

when f_d(X, t_0, t_i) < threshold, a stationary object is judged;
a connected-region computing module for labeling the current image, where a cell whose pixel gray value is 0 indicates no suspicious flame in that cell and a cell whose pixel gray value is 1 indicates a suspicious flame; the module computes whether each pixel in the current image equals the pixels of the adjacent points around it, pixels of equal gray value are judged to be connected, and all mutually connected pixels are taken as one connected region;
a pattern classification module which, after a suspicious flame is judged, obtains the area S_i of each connected region and classifies as follows:
1) if S_i < threshold 1, the changed region is a noise point;
2) if S_i > threshold 2, the changed region is a large-area infrared change;
3) if threshold 1 < S_i < threshold 2, the changed region is a suspicious flame region;
after a suspicious flame region is judged, the similarity ε_i of the change images of adjacent frames is calculated by formula (5):

ε_i = Σ_(x,y)∈Ω [b_i(x,y) ∩ b_i+1(x,y)] / Σ_(x,y)∈Ω [b_i(x,y) ∪ b_i+1(x,y)],  i = 1, …, N−1   (5)

where b_i(x, y) is the suspicious flame region in the previous frame and b_i+1(x, y) is the suspicious flame region in the current frame;
from this result the image pattern is classified as:
1) if ε_i ≤ threshold 1, the image pattern is a rapidly moving bright spot;
2) if ε_i ≥ threshold 2, the image pattern is a fixed infrared-emitting region;
3) if threshold 1 < ε_i < threshold 2, the image pattern is a flame;
a flame color feature judging module for using the (Cr, Cb) spatial distribution model to calculate whether a light-emitting source falls within the (Cr, Cb) distribution model of flame images, by formula (6):

W_fire_color = exp{ −(1/2)[ A(Cr − Cr̄)² + 2B(Cr − Cr̄)(Cb − Cb̄) + C(Cb − Cb̄)² ] }   (6)

where W_fire_color is the color feature quantity; Cr̄ and Cb̄ are the sample means of Cr and Cb over flame points; A, B and C are coefficients computed from the sample standard deviations and means; Cr̄ = 144.6, Cb̄ = 117.5, A = 3.7×10⁻³, B = 4.1×10⁻³, C = 4.5×10⁻³;
a flame-area variation judging module for performing a recursive calculation on the light-emitting area S_i of each frame image to obtain the recursive value S̄_(i+1) of the light-emitting area of the next frame image, computed by formula (7):

S̄_(i+1) = (1 − k)·S̄_(i) + k·S_i   (7)

where S̄_(i+1) is the recursive average of the light-emitting area of the next frame image, S̄_(i) is the recursive average of the light-emitting area of the current frame image, S_i is the computed light-emitting area of the current frame, and k is a coefficient less than 1; from these values inequality (8) is tested:

[S̄_(i+1) + S̄_(i) + S̄_(i−1)] / 3 > [S̄_(i−2) + S̄_(i−3) + S̄_(i−4)] / 3   (8)

if inequality (8) holds, the flame-area expansion quantized value W_fire_area is taken as 1; if it does not hold, W_fire_area is taken as 0;
a flame flicker judging module which, when inequality (8) holds, computes the difference between the computed light-emitting area S_i and the recursive average S̄_(i) of the current frame image; each change of the sign of this difference is counted as one flame flicker, and the number of changes within a certain time span gives the flicker frequency f_frequency, which is compared with a set threshold f_c: when f_frequency ≥ f_c, the flame flicker quantized value W_fire_bicker is taken as 1; when f_frequency < f_c, W_fire_bicker is taken as 0;
a fire-intensity judging module used when the flame-area expansion quantized value W_fire_area is 1 and the computed flame color feature W_fire_color is greater than 0.5, with formula (9):

W_fire_intensity = (S_i / ΣS) × 10   (9)

where ΣS is the whole monitored area and W_fire_intensity is the fire-intensity value;
a flame overall-motion feature module for judging from the trajectory of the overall movement of the flame: if overall flame motion is judged, the flame overall-motion quantized value W_fire_move is taken as 1; if non-overall motion is judged, W_fire_move is taken as 0;
a fire-occurrence comprehensive judging module for judging comprehensively whether a fire has occurred from the flame pattern, the flame color feature value, the flame area and the flame flicker rule, by formula (10):

W_fire_alarm = K_fire_pattern × ε_i + K_fire_color × W_fire_color + K_fire_area × W_fire_area + K_fire_bicker × W_fire_bicker + K_fire_intensity × W_fire_intensity + K_fire_move × W_fire_move   (10)

where K_fire_move is the weighting coefficient of the flame overall motion, K_fire_pattern that of the flame pattern, K_fire_color that of the flame color feature, K_fire_area that of the flame-area variation, K_fire_bicker that of the flame flicker, and K_fire_intensity that of the fire intensity;

when K_alarm ≤ W_fire_alarm, a fire alarm is judged and the management personnel are notified through the communication module.
6. A fire-disaster monitoring device based on an omnibearing vision sensor, the device comprising a microprocessor, a video sensor for on-site monitoring, and a communication module for communicating with the outside, the microprocessor comprising:
an image-data reading module for reading the video image information transmitted from the video sensor;
a file storage module for storing the data collected by the video sensor to memory;
an on-site real-time playback module for connecting an external display device and playing the monitoring picture in real time;
the output of the video sensor being communicatively connected to the microprocessor, characterized in that:
the video sensor is an omnibearing vision sensor comprising an outward-convex mirror surface for reflecting objects in the monitored field, a transparent cylinder, and a camera; the convex mirror faces downward, the transparent cylinder supports the convex mirror, and the camera, which captures the image formed on the convex mirror, is located inside the transparent cylinder at the virtual focus of the convex mirror;
the microprocessor further comprises:
a sensor calibration module for calibrating the parameters of the omnibearing vision sensor and establishing the correspondence between real scene points in space and the video image obtained;
a color model conversion module for converting the color of each pixel of the color image from the RGB color space to the (Cr, Cb) color model;
an image stretching processing module for expanding the captured circular video image into a panoramic rectangle: from the correspondence between a point (x*, y*) on the circular omnidirectional image and a point (x**, y**) on the rectangular cylindrical panorama, a mapping matrix is established as shown in formula (1):

P(x**, y**) ⇐ M(x*,y*)→(x**,y**) × P(x*, y*)   (1)

where M(x*,y*)→(x**,y**) is the mapping matrix, P(x*, y*) is the pixel matrix on the circular omnidirectional image, and P(x**, y**) is the pixel matrix on the rectangular cylindrical panorama;
a moving-object detection module for performing a difference operation between the captured current live video image and a relatively stable reference image, the image subtraction being computed as in formula (2):

f_d(X, t_0, t_i) = f(X, t_i) − f(X, t_0)   (2)

where f_d(X, t_0, t_i) is the result of the image subtraction between the live image and the reference image, f(X, t_i) is the live image, and f(X, t_0) is the reference image;

the image subtraction between the current image and the adjacent K-th frame is given by formula (3):

f_d(X, t_i−k, t_i) = f(X, t_i) − f(X, t_i−k)   (3)

where f_d(X, t_i−k, t_i) is the result of the image subtraction between the live image and the adjacent K-th frame, and f(X, t_i−k) is the image K frames earlier;

when f_d(X, t_0, t_i) ≥ threshold and f_d(X, t_i−k, t_i) ≥ threshold both hold, a suspicious flame object is judged;

when f_d(X, t_0, t_i) ≥ threshold and f_d(X, t_i−k, t_i) < threshold, a stationary object is judged and the reference image is updated by formula (4):

f(X, t_0) ⇐ f(X, t_i−k)   (4)

when f_d(X, t_0, t_i) < threshold, a stationary object is judged;
a connected-region computing module for labeling the current image, where a cell whose pixel gray value is 0 indicates no suspicious flame in that cell and a cell whose pixel gray value is 1 indicates a suspicious flame; the module computes whether each pixel in the current image equals the pixels of the adjacent points around it, pixels of equal gray value are judged to be connected, and all mutually connected pixels are taken as one connected region;
a pattern classification module which, after a suspicious flame is judged, obtains the area S_i of each connected region and classifies as follows:
1) if S_i < threshold 1, the changed region is a noise point;
2) if S_i > threshold 2, the changed region is a large-area infrared change;
3) if threshold 1 < S_i < threshold 2, the changed region is a suspicious flame region;
after a suspicious flame region is judged, the similarity ε_i of the change images of adjacent frames is calculated by formula (5):

ε_i = Σ_(x,y)∈Ω [b_i(x,y) ∩ b_i+1(x,y)] / Σ_(x,y)∈Ω [b_i(x,y) ∪ b_i+1(x,y)],  i = 1, …, N−1   (5)

where b_i(x, y) is the suspicious flame region in the previous frame and b_i+1(x, y) is the suspicious flame region in the current frame;
from this result the image pattern is classified as:
1) if ε_i ≤ threshold 1, the image pattern is a rapidly moving bright spot;
2) if ε_i ≥ threshold 2, the image pattern is a fixed infrared-emitting region;
3) if threshold 1 < ε_i < threshold 2, the image pattern is a flame;
a flame color feature judging module for using the (Cr, Cb) spatial distribution model to calculate whether a light-emitting source falls within the (Cr, Cb) distribution model of flame images, by formula (6):

W_fire_color = exp{ −(1/2)[ A(Cr − Cr̄)² + 2B(Cr − Cr̄)(Cb − Cb̄) + C(Cb − Cb̄)² ] }   (6)

where W_fire_color is the color feature quantity; Cr̄ and Cb̄ are the sample means of Cr and Cb over flame points; A, B and C are coefficients computed from the sample standard deviations and means; Cr̄ = 144.6, Cb̄ = 117.5, A = 3.7×10⁻³, B = 4.1×10⁻³, C = 4.5×10⁻³;
a flame-area variation judging module for performing a recursive calculation on the light-emitting area S_i of each frame image to obtain the recursive value S̄_(i+1) of the light-emitting area of the next frame image, computed by formula (7):

S̄_(i+1) = (1 − k)·S̄_(i) + k·S_i   (7)

where S̄_(i+1) is the recursive average of the light-emitting area of the next frame image, S̄_(i) is the recursive average of the light-emitting area of the current frame image, S_i is the computed light-emitting area of the current frame, and k is a coefficient less than 1; from these values inequality (8) is tested:

[S̄_(i+1) + S̄_(i) + S̄_(i−1)] / 3 > [S̄_(i−2) + S̄_(i−3) + S̄_(i−4)] / 3   (8)

if inequality (8) holds, the flame-area expansion quantized value W_fire_area is taken as 1; if it does not hold, W_fire_area is taken as 0;
a flame flicker judging module which, when inequality (8) holds, computes the difference between the computed light-emitting area S_i and the recursive average S̄_(i) of the current frame image; each change of the sign of this difference is counted as one flame flicker, and the number of changes within a certain time span gives the flicker frequency f_frequency, which is compared with a set threshold f_c: when f_frequency ≥ f_c, the flame flicker quantized value W_fire_bicker is taken as 1; when f_frequency < f_c, W_fire_bicker is taken as 0;
a fire-intensity judging module used when the flame-area expansion quantized value W_fire_area is 1 and the computed flame color feature W_fire_color is greater than 0.5, with formula (9):

W_fire_intensity = (S_i / ΣS) × 10   (9)

where ΣS is the whole monitored area and W_fire_intensity is the fire-intensity value;
a flame-body variation quantizing module for judging from the shape change of the flame in the horizontal direction: if the flame body varies regularly, the quantized value W_fire_body is taken as 0; if it varies irregularly, W_fire_body is taken as 1;
a fire-occurrence comprehensive judging module for judging comprehensively whether a fire has occurred from the flame pattern, the flame color feature value, the flame area and the flame flicker rule, by formula (10):

W_fire_alarm = K_fire_pattern × ε_i + K_fire_color × W_fire_color + K_fire_area × W_fire_area + K_fire_bicker × W_fire_bicker + K_fire_intensity × W_fire_intensity + K_fire_body × W_fire_body   (10)

where K_fire_body is the weighting coefficient of the flame-body variation, K_fire_pattern that of the flame pattern, K_fire_color that of the flame color feature, K_fire_area that of the flame-area variation, K_fire_bicker that of the flame flicker, and K_fire_intensity that of the fire intensity;

when K_alarm ≤ W_fire_alarm, a fire alarm is judged and the management personnel are notified through the communication module.
CNB2005100618768A 2005-12-07 2005-12-07 Fire-disaster monitoring device based on omnibearing vision sensor Expired - Fee Related CN100538757C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2005100618768A CN100538757C (en) 2005-12-07 2005-12-07 Fire-disaster monitoring device based on omnibearing vision sensor


Publications (2)

Publication Number Publication Date
CN1979576A CN1979576A (en) 2007-06-13
CN100538757C true CN100538757C (en) 2009-09-09

Family

ID=38130726

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2005100618768A Expired - Fee Related CN100538757C (en) 2005-12-07 2005-12-07 Fire-disaster monitoring device based on omnibearing vision sensor

Country Status (1)

Country Link
CN (1) CN100538757C (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7983442B2 (en) * 2007-08-29 2011-07-19 Cyberlink Corp. Method and apparatus for determining highlight segments of sport video
CN101458865B (en) * 2008-05-09 2012-06-27 丁国锋 Fire disaster probe system and method
CN101393603B (en) * 2008-10-09 2012-01-04 浙江大学 Method for recognizing and detecting tunnel fire disaster flame
CN101441712B (en) * 2008-12-25 2013-03-27 北京中星微电子有限公司 Flame video recognition method and fire hazard monitoring method and system
CN101764999B (en) * 2009-07-28 2011-09-14 北京智安邦科技有限公司 Sub-camera video capture device
CN102236947B (en) * 2010-04-29 2012-08-29 中国建筑科学研究院 Flame monitoring method and system based on video camera
US9017435B2 (en) * 2010-10-08 2015-04-28 General Electric Company Gasifier monitor and control system
CN102034112B (en) * 2010-12-17 2012-04-04 浙江大学 Method for identifying moving and static targets by using phased array three-dimensional acoustic image pickup sonar
CN103377533B (en) * 2012-04-21 2015-10-28 鲍鹏飞 For the fire-smoke detection method of the coloured image ingest process identification of forest fire protection
CN102737467B (en) * 2012-06-29 2014-02-19 深圳市新太阳数码有限公司 Multifunctional sound system and fire alarm monitoring method thereof
CN104978588B (en) * 2015-07-17 2018-12-28 山东大学 A kind of flame detecting method based on support vector machines
CN105096323A (en) * 2015-07-28 2015-11-25 中国石油天然气股份有限公司 Pool fire flame height measurement method based on visible image processing
CN105869183B (en) * 2016-03-25 2018-09-25 北京智芯原动科技有限公司 Based on single-range flame detecting method and device
CN106314342A (en) * 2016-08-25 2017-01-11 宁波鑫星汽车部件有限公司 Automobile passenger seat airbag control system
CN106408846A (en) * 2016-11-29 2017-02-15 周川 Image fire hazard detection method based on video monitoring platform
CN106534945B (en) * 2016-12-14 2020-06-09 深圳Tcl数字技术有限公司 Method and device for controlling football mode of smart television
CN106861100B (en) * 2017-03-17 2019-07-02 大连希尔德安全技术有限公司 Fire detection based on full-view camera positions and puts out method and device
CN106961586B (en) * 2017-04-14 2018-10-09 特斯联(北京)科技有限公司 A kind of Office Area safety monitoring system based on Internet of Things
CN107729811B (en) * 2017-09-13 2020-07-07 浙江大学 Night flame detection method based on scene modeling
CN108961662A (en) * 2018-07-30 2018-12-07 肥城矿业集团矿业管理服务有限公司 A kind of coal-mine fire monitoring system based on wireless technology
CN109260736B (en) * 2018-08-20 2020-04-10 浙江大丰实业股份有限公司 Personnel evacuation system for sports car platform
CN109919120B (en) * 2019-03-15 2023-06-30 江苏鼎集智能科技股份有限公司 Flame detection method based on near infrared spectrum imaging
CN112785809B (en) * 2020-12-31 2022-08-16 四川弘和通讯有限公司 Fire re-ignition prediction method and system based on AI image recognition
CN117612319A (en) * 2024-01-24 2024-02-27 上海意静信息科技有限公司 Alarm information grading early warning method and system based on sensor and picture

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1404021A (en) * 2001-09-06 2003-03-19 松莲科技股份有限公司 Visual fire monitoring alarm method and device


Also Published As

Publication number Publication date
CN1979576A (en) 2007-06-13

Similar Documents

Publication Publication Date Title
CN100538757C (en) Fire-disaster monitoring device based on omnibearing vision sensor
CN1943824B (en) An automatic fire fighting unit based on omnibearing visual sensor
CN100417223C (en) Intelligent safety protector based on omnibearing vision sensor
CN101334924B (en) Fire hazard probe system and its fire hazard detection method
US7991187B2 (en) Intelligent image smoke/flame sensor and detection system
CN104751593B (en) Method and system for fire detection, warning, positioning and extinguishing
CN100459704C (en) Intelligent tunnel safety monitoring apparatus based on omnibearing computer vision
CN1858551B (en) Engineering car anti-theft alarm system based on omnibearing computer vision
US10536673B2 (en) Smart city closed camera photocell and street lamp device
CN101458865B (en) Fire disaster probe system and method
CN100468245C (en) Air conditioner energy saving controller based on omnibearing computer vision
US20040175040A1 (en) Process and device for detecting fires bases on image analysis
CN101656012A (en) Intelligent image smog and flame detector and flame detection method
CN108389359A (en) A kind of Urban Fires alarm method based on deep learning
CN107360394B (en) More preset point dynamic and intelligent monitoring methods applied to frontier defense video monitoring system
CN101577033A (en) Multiband infrared image-type fire detecting system and fire alarm system thereof
CN107025753B (en) Wide area fire alarm device based on multispectral image analysis
CN108334801A (en) A kind of method for recognizing fire disaster, device and fire alarm system
CN201091014Y (en) Fire detecting device
CN107437318A (en) A kind of visible ray Intelligent Recognition algorithm
CN114724330A (en) Implementation method of self-adaptive mode switching multi-channel video fire real-time alarm system
KR102521726B1 (en) Fire detection system that can predict direction of fire spread based on artificial intelligence and method for predicting direction of fire spread
CN116434533A (en) AI wisdom highway tunnel synthesizes monitoring platform based on 5G
CN104297176A (en) Device, system and method for monitoring visibility of channel segments of Yangtze River in mountainous area in all-weather manner
CN100414992C (en) Omnibearing visual vibrating intruding image detector based on machine vision

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090909

Termination date: 20111207