CN103886344A - Image type fire flame identification method - Google Patents


Info

Publication number: CN103886344A
Application number: CN201410148888.3A
Authority: CN (China)
Prior art keywords: image, value, fuzzy, parameter
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN103886344B (en)
Inventor: 王媛彬
Current assignee: Dongkai Shuke Shandong Industrial Park Co ltd (listed assignees may be inaccurate)
Original assignee: Xian University of Science and Technology
Application filed by Xian University of Science and Technology; priority to CN201410148888.3A
Publication of CN103886344A; application granted; publication of CN103886344B
Current legal status: Active

Landscapes

  • Fire-Detection Mechanisms (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image-based fire flame identification method comprising: (1) image capture and (2) image processing, the latter consisting of (201) image preprocessing and (202) fire identification. Fire identification uses a pre-built binary classification model, a support vector machine that separates the flame and no-flame cases. The model is built in five steps: (I) image information capture; (II) feature extraction; (III) training sample acquisition; (IV) binary classification model construction, consisting of (IV-1) kernel function selection and (IV-2) classification function determination, in which parameters C and D are optimized by the conjugate gradient method and the optimized C and D are converted into gamma and sigma²; and (V) binary classification model training. The method has simple steps, is convenient to operate, is highly reliable, and performs well, effectively addressing the low reliability, high false/missed alarm rates, and poor performance of existing video fire detection systems in complex environments.

Description

Image-based fire flame recognition method
Technical field
The invention belongs to the field of image acquisition and processing technology, and in particular relates to an image-based fire flame recognition method.
Background technology
Fire is one of the major mine disasters and seriously threatens human health, the physical environment, and safe production in coal mines. With scientific and technological progress, automatic fire detection has gradually become an important means of fire monitoring and alarm. At present, underground coal mine fire prediction and detection rely mainly on temperature effects, combustion products (smoke and gas), and electromagnetic radiation effects, but these existing methods still need improvement in sensitivity and reliability and cannot respond to incipient fires, so they fall short of increasingly strict fire safety requirements. In particular, when obstructions exist in a large space, the propagation of combustion products is affected by the height and area of the space; common point-type smoke and heat detection alarm systems cannot rapidly capture the smoke and temperature changes a fire produces and respond only once the fire has developed to a certain degree, making early fire detection difficult. The rapid development of video processing and pattern recognition technology is pushing fire detection and alarm toward image-based, digital, large-scale, and intelligent systems. Video-based fire detection offers a wide detection range, short response time, low cost, and immunity to environmental influences; combined with computer intelligence it can provide more intuitive and richer information, which is significant for safe production in coal mines.
At present, video fire detection technology is still at an early stage at home and abroad, and products differ in detection mode, working principle, system architecture, and applicable occasions. Typical systems include the SigniFire TM system developed by axonx LLC (USA), the AlarmEye VISFD distributed intelligent image fire detection system developed by DHF Intellvision (USA), the dual-band infrared and visible camera monitoring of Bosque (USA), and the VSD-8 system for power-station fire monitoring jointly developed by ISL (Switzerland) and Magnox Electric. Domestically, the State Key Laboratory of Fire Science (SKLFS) of the University of Science and Technology of China currently leads research on fire detection and automatic extinguishing; Tianjin University, Xi'an Jiaotong University, Shenyang University of Technology, and Shanghai Jiao Tong University have also been active in this research. However, these image fire detection systems are used for fire detection in power stations, buildings, warehouses, and the like; applications in underground coal mines remain relatively few. In recent years, many researchers at home and abroad have conducted in-depth research on the key technology of such image-based fire detection systems, flame image analysis algorithms, and made major contributions, mainly in the following areas: (1) video flame detection based on static flame features, such as spectral characteristics (pixel brightness and chromaticity) and regional structure (shape, contour, etc.); (2) video flame detection based on flame-colored moving regions; (3) video detection based on flame flicker and time-frequency characteristics. However, when these flame image analysis algorithms are applied in existing video fire detection systems, they all have limitations to some extent: in complex scenes they cannot effectively remove interference, and false and missed alarms are serious. Therefore, there is currently a lack of an image-based fire flame recognition method with simple steps that is convenient to implement, easy to operate, highly reliable, and effective, and that can solve the low reliability, high false/missed alarm rates, and poor performance of existing video fire detection systems in complex environments.
Summary of the invention
The technical problem to be solved by the present invention, in view of the above deficiencies of the prior art, is to provide an image-based fire flame recognition method whose steps are simple, whose implementation is convenient and easy to operate, which is highly reliable and effective, and which can solve the low reliability, high false/missed alarm rates, and poor performance of existing video fire detection systems in complex environments.
To solve the above technical problem, the technical solution adopted by the present invention is an image-based fire flame recognition method, characterized in that the method comprises the following steps:
Step 1, image acquisition: an image acquisition unit captures digital images of the area to be monitored at a predefined sampling frequency f_s, and the digital image captured at each sampling instant is transmitted synchronously to a processor; the image acquisition unit is connected to the processor.
Step 2, image processing: the processor processes the digital images captured at successive sampling instants in step 1 in chronological order, using the same procedure for each captured image. Processing the digital image captured at any sampling instant in step 1 includes the following steps:
Step 201, image preprocessing, as follows:
Step 2011, image reception and synchronous storage: the processor synchronously stores the digital image just received for the current sampling instant in a data memory; the data memory is connected to the processor;
Step 2012, image enhancement: the processor enhances the digital image captured at the current sampling instant to obtain the enhanced digital image;
Step 2013, image segmentation: the processor segments the enhanced digital image from step 2012 to obtain a target image;
Step 202, fire identification: a pre-built binary classification model processes the target image from step 2013 and outputs the fire condition class of the monitored area at the current sampling instant. The fire condition classes are flame and no flame, and the binary classification model is a support vector machine model that classifies these two classes.
The binary classification model is built as follows:
Step I, image information collection: the image acquisition unit captures multiple frames of digital image 1 of the area to be monitored during a fire, and multiple frames of digital image 2 of the area when no fire is present;
Step II, feature extraction: features are extracted from each frame of digital images 1 and 2; from each frame a group of characteristic parameters that represents and distinguishes that image is extracted. This group comprises M feature quantities; the M feature quantities are numbered and together form a feature vector, where M ≥ 2;
Step III, training sample acquisition: from the feature vectors of the multiple frames of digital images 1 and 2 obtained after feature extraction in Step II, the feature vectors of m1 frames of digital image 1 and m2 frames of digital image 2 are chosen to form the training sample set, where m1 and m2 are positive integers with m1 = 40–100 and m2 = 40–100; the training sample set thus contains m1 + m2 training samples;
Step IV, binary classification model construction, as follows:
Step IV-1, kernel function selection: a radial basis function is selected as the kernel function of the binary classification model;
Step IV-2, classification function determination: once the penalty factor γ and the kernel parameter σ² of the radial basis function selected in Step IV-1 are determined, the classification function of the binary classification model is obtained and the model construction process is complete. Here γ = C⁻² and σ = D⁻¹, with 0.01 < C ≤ 10 and 0.01 < D ≤ 50.
To determine the penalty factor γ and kernel parameter σ², the parameters C and D are first optimized by the conjugate gradient method, and the optimized C and D are then converted into γ and σ² via γ = C⁻² and σ = D⁻¹;
Step V, binary classification model training: the m1 + m2 training samples of the training sample set of Step III are input into the binary classification model built in Step IV for training.
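Steps I–V amount to fitting a least-squares SVM with an RBF kernel to the labeled feature vectors. A minimal sketch in Python/NumPy, assuming the standard LS-SVM linear system described later in this document (feature extraction itself is not shown; the arrays passed in are hypothetical stand-ins for the Step III samples):

```python
import numpy as np

def rbf_kernel(X1, X2, sigma2):
    """K(x_s, x_t) = exp(-||x_s - x_t||^2 / sigma^2)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma2)

def train_ls_svm(X, y, C, D):
    """Solve the LS-SVM system A s = Y with A = [[K + C^2 I, 1], [1^T, 0]]
    (gamma = C^-2, sigma = D^-1 as in Step IV-2); returns (alpha, b)."""
    N = len(y)
    K = rbf_kernel(X, X, (1.0 / D) ** 2)
    A = np.zeros((N + 1, N + 1))
    A[:N, :N] = K + C ** 2 * np.eye(N)
    A[:N, N] = A[N, :N] = 1.0
    s = np.linalg.solve(A, np.append(y.astype(float), 0.0))
    return s[:N], s[N]          # support values alpha, bias b

def classify(X_train, alpha, b, D, X_new):
    """Class +1 = flame, -1 = no flame (sign of the regression function)."""
    K = rbf_kernel(X_new, X_train, (1.0 / D) ** 2)
    return np.sign(K @ alpha + b)
```

With two well-separated feature clusters the trained model reproduces the ±1 labels of its own training set, which is a quick sanity check before tuning C and D.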
In the above image-based fire flame recognition method, the total number of training samples in the training set of Step III is N, with N = m1 + m2. Before the model construction of Step IV, the N training samples are numbered; the p-th training sample has number p, where p is a positive integer and p = 1, 2, …, N. The p-th sample is denoted (x_p, y_p), where x_p is its characteristic parameter vector and y_p its class label, y_p = 1 or −1: class 1 means flame and class −1 means no flame.
When the parameters C and D are optimized by the conjugate gradient method in Step IV-2, the m1 + m2 training samples of Step III are used, and the optimization proceeds as follows:
Step I, objective function determination:

$$\mathrm{sse}(C,D)=\frac{1}{2}\sum_{p=1}^{N}e_p^2=\frac{1}{2}\sum_{p=1}^{N}\Bigl[\tilde{K}(p,p^-)\cdot s_p-y(p)\Bigr]^2 \qquad (1)$$

where sse(C, D) is the leave-one-out prediction sum of squares, p is the number of each training sample in the training set, and $e_p=\tilde{K}(p,p^-)\cdot s_p-y(p)$ is the prediction error of the binary classification model of Step IV on the p-th training sample. Here

$$s_p=s(p^-)-\frac{s(p)}{(A^{-1})(p,p)}\,(A^{-1})(p^-,p),$$

where $s(p^-)$ is the vector of the remaining elements of the vector s after removing its p-th element, $s(p)$ is the p-th element of s, $(A^{-1})(p^-,p)$ is the p-th column of $A^{-1}$ with its p-th element removed, and $(A^{-1})(p,p)$ is the p-th element of the p-th column of $A^{-1}$; $\tilde{K}(p,p^-)$ is the p-th row of the matrix $\tilde{K}$ with its p-th element removed, where $\tilde{K}$ is the augmented matrix of the kernel matrix

$$K=\begin{bmatrix}K(x_1,x_1)&K(x_1,x_2)&\cdots&K(x_1,x_N)\\ K(x_2,x_1)&K(x_2,x_2)&\cdots&K(x_2,x_N)\\ \vdots&\vdots&\ddots&\vdots\\ K(x_N,x_1)&K(x_N,x_2)&\cdots&K(x_N,x_N)\end{bmatrix}.$$

$A^{-1}$ denotes the inverse of the matrix

$$A=\begin{bmatrix}K+C^2 I & I_N\\ I_N^{\mathrm T}&0\end{bmatrix},$$

where I is the identity matrix, $I_N=[1,1,\ldots,1]^{\mathrm T}$ contains N elements all equal to 1, and T denotes transposition. Finally, $s=A^{-1}Y$ with $Y=[y_1,y_2,\ldots,y_N,0]^{\mathrm T}$, where $y_1,y_2,\ldots,y_N$ are the class labels of the N training samples;
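The closed-form leave-one-out residuals in (1) can all be computed from a single factorization of A, which is what makes sse(C, D) cheap to evaluate at every candidate (C, D). A sketch under the formulation above (NumPy; the identity for s_p is applied verbatim):

```python
import numpy as np

def loo_sse(X, y, C, D):
    """sse(C, D): leave-one-out prediction sum of squares of the LS-SVM,
    computed without retraining via
    s_p = s(p-) - s(p)/(A^-1)(p,p) * (A^-1)(p-,p)."""
    N = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 * D ** 2)                 # sigma = 1/D, so 1/sigma^2 = D^2
    A = np.zeros((N + 1, N + 1))
    A[:N, :N] = K + C ** 2 * np.eye(N)       # gamma = C^-2 => ridge term C^2
    A[:N, N] = A[N, :N] = 1.0
    Ainv = np.linalg.inv(A)
    s = Ainv @ np.append(y.astype(float), 0.0)
    Kt = np.hstack([K, np.ones((N, 1))])     # augmented kernel matrix K~
    sse = 0.0
    for p in range(N):
        keep = np.r_[0:p, p + 1:N + 1]       # drop the p-th index
        s_p = s[keep] - s[p] / Ainv[p, p] * Ainv[keep, p]
        e_p = Kt[p, keep] @ s_p - y[p]       # LOO residual of sample p
        sse += 0.5 * e_p ** 2
    return sse
```

The identity follows from the block-inverse of A, so the result matches retraining the model N times with one sample held out each time.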
Step II, initial parameter setting: determine initial values C₁ and D₁ of the parameters C and D, and set a convergence threshold ε with ε > 0;
Step III, gradient $g_k$ of the current iteration: compute the gradient of the objective function of Step I with respect to $C_k$ and $D_k$,

$$g_k=\left[\frac{\partial\,\mathrm{sse}}{\partial C_k},\ \frac{\partial\,\mathrm{sse}}{\partial D_k}\right]^{\mathrm T},\qquad k=1,2,\ldots$$

where k is the iteration count. If $\|g_k\|\le\varepsilon$, stop: the current $C_k$ and $D_k$ are the optimized parameters C and D. Otherwise, go to Step IV. Here

$$\frac{\partial\,\mathrm{sse}}{\partial C_k}=\sum_{p=1}^{N}e_p\cdot\tilde{K}(p,p^-)\cdot\frac{\partial s_p}{\partial C_k},$$

$$\frac{\partial\,\mathrm{sse}}{\partial D_k}=\sum_{p=1}^{N}e_p\cdot\left[\frac{\partial\tilde{K}(p,p^-)}{\partial D_k}\cdot s(p^-)+\tilde{K}(p,p^-)\cdot\frac{\partial s_p}{\partial D_k}\right],$$

where $\tilde{K}(p,p^-)$ is the p-th row of $\tilde{K}$ with its p-th element removed, $s(p^-)$ is the vector of remaining elements of s after removing the p-th element, and $e_p$ is the prediction error of the Step IV binary classification model on the p-th training sample;
Step IV, search direction $d_k$ of the current iteration: compute

$$d_k=\begin{cases}-g_k,&k=1\\ -g_k+\beta_k d_{k-1},&k\ge 2\end{cases}\qquad \beta_k=\frac{\|g_k\|^2}{\|g_{k-1}\|^2},$$

where $d_{k-1}$ is the search direction of iteration k − 1 and $g_{k-1}$ is the gradient of iteration k − 1;
Step V, step size $\lambda_k$ of the current iteration: search along the direction $d_k$ determined in Step IV for the $\lambda_k$ satisfying

$$\mathrm{sse}\!\left(C_k-\lambda_k\frac{\partial\,\mathrm{sse}}{\partial C_k},\,D_k-\lambda_k\frac{\partial\,\mathrm{sse}}{\partial D_k}\right)=\min_{\lambda_k>0}\ \mathrm{sse}\!\left(C_k-\lambda_k\frac{\partial\,\mathrm{sse}}{\partial C_k},\,D_k-\lambda_k\frac{\partial\,\mathrm{sse}}{\partial D_k}\right),$$

i.e. find, on (0, +∞), the step length $\lambda_k$ that minimizes the objective;
Step VI: compute

$$C_{k+1}=C_k-\lambda_k\frac{\partial\,\mathrm{sse}}{\partial C_k},\qquad D_{k+1}=D_k-\lambda_k\frac{\partial\,\mathrm{sse}}{\partial D_k};$$
Step VII: set k = k + 1, then return to Step III for the next iteration.
The radial basis function selected in Step IV-1 is $K(x_s,x_t)=\exp\!\left(-\|x_s-x_t\|^2/\sigma^2\right)$, and its regression function is $f(x)=\sum_{t=1}^{N}\alpha_t K(x,x_t)+b$, where $\alpha_t$ and b are the regression parameters, and s and t are positive integers with s = 1, 2, …, N and t = 1, 2, …, N.
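Steps I–VII above are a standard Fletcher–Reeves conjugate gradient loop. A generic sketch follows (Python/NumPy); the backtracking line search is a simple stand-in for the exact minimization of Step V, and the objective used in the test is an illustrative quadratic, not the sse of Step I:

```python
import numpy as np

def fr_cg(f, grad, x0, eps=1e-9, max_iter=500):
    """Fletcher-Reeves conjugate gradient (Steps III-VII):
    d_1 = -g_1; d_k = -g_k + beta_k d_{k-1}, beta_k = ||g_k||^2/||g_{k-1}||^2."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:        # Step III stopping test
            break
        slope = g @ d
        if slope >= 0:                      # safeguard: restart on non-descent
            d, slope = -g, -(g @ g)
        lam, fx = 1.0, f(x)                 # Step V: backtracking line search
        while f(x + lam * d) > fx + 1e-4 * lam * slope and lam > 1e-14:
            lam *= 0.5
        x = x + lam * d                     # Step VI update
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves beta_k
        d = -g_new + beta * d               # Step IV direction, k >= 2
        g = g_new
    return x
```

On a smooth convex objective of two variables such as (C, D) this converges to the minimizer; the restart guard keeps the direction a descent direction when the inexact line search degrades it.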
In the above image-based fire flame recognition method, M = 6 in Step II, and the 6 feature quantities are area, similarity, moment features, density, texture features, and flicker features.
In the above method, when C₁ and D₁ are determined in Step II, either grid search or random drawing of values is used. With random drawing, C₁ is a value drawn at random from (0.01, 1] and D₁ a value drawn at random from (0.01, 50]. With grid search, the grid is first divided with step 10⁻³; a three-dimensional grid plot is then made with C and D as independent variables and the objective function of Step I as the dependent variable; several groups of (C, D) parameters are found by grid search, and these groups are finally averaged to give C₁ and D₁.
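The grid-search initialization can be sketched as a coarse grid evaluation followed by averaging the best candidates (Python; the 20×20 grid and "best 5" count here are illustrative choices, whereas the patent uses a 10⁻³ step):

```python
import numpy as np

def init_by_grid(objective, c_range=(0.01, 1.0), d_range=(0.01, 50.0),
                 n_c=20, n_d=20, n_best=5):
    """Coarse grid search over (C, D); average the n_best pairs with the
    smallest objective value to obtain the initial point (C1, D1)."""
    cs = np.linspace(*c_range, n_c)
    ds = np.linspace(*d_range, n_d)
    scores = [(objective(c, d), c, d) for c in cs for d in ds]
    best = sorted(scores)[:n_best]          # smallest objective values first
    C1 = float(np.mean([b[1] for b in best]))
    D1 = float(np.mean([b[2] for b in best]))
    return C1, D1
```

Averaging several near-optimal grid points rather than taking the single best gives a starting point that is less sensitive to grid resolution.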
In the above method, the image enhancement of step 2012 uses an image enhancement method based on fuzzy logic.
When the fuzzy-logic image enhancement is applied, the process is as follows:
Step 20121, transform from the image domain to the fuzzy domain: according to the membership function

$$\mu_{gh}=T(x_{gh})=\begin{cases}x_{gh}/X_T,&x_{gh}\le X_T\\ x_{gh}/X_{\max},&x_{gh}>X_T\end{cases}\qquad(7)$$

the gray value of every pixel of the image to be enhanced is mapped to a fuzzy membership degree, and the fuzzy set of the image to be enhanced is correspondingly obtained. Here $x_{gh}$ is the gray value of any pixel (g, h) of the image to be enhanced, $X_T$ is the gray threshold selected when applying the fuzzy-logic enhancement to the image, and $X_{\max}$ is the maximum gray value of the image to be enhanced;
Step 20122, fuzzy enhancement in the fuzzy domain: the fuzzy enhancement operator is $\mu'_{gh}=I_r(\mu_{gh})=I_r(I_{r-1}(\mu_{gh}))$, where r is the iteration count, a positive integer, r = 1, 2, …, and

$$I_1(\mu_{gh})=\begin{cases}\mu_{gh}^2/\mu_c,&0\le\mu_{gh}\le\mu_c\\ 1-(1-\mu_{gh})^2/(1-\mu_c),&\mu_c\le\mu_{gh}\le 1\end{cases}$$

with $\mu_c=T(X_c)$, where $X_c$ is the crossover point and $X_c=X_T$;
Step 20123, inverse transform from the fuzzy domain back to the image domain: according to $x'_{gh}=T^{-1}(\mu'_{gh})$ (6), the $\mu'_{gh}$ obtained from the fuzzy enhancement is inverse-transformed to give the gray value of each pixel of the enhanced digital image, yielding the enhanced digital image.
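Steps 20121–20123 can be sketched as below (Python/NumPy). Two points are assumptions of this sketch, not details the patent fixes: μ_c is left as a free parameter (taking μ_c = T(X_c) with X_c = X_T literally under the first branch of (7) would give μ_c = 1), and the inverse transform T⁻¹ is applied branch-wise, each pixel returning through the branch it entered by:

```python
import numpy as np

def fuzzy_enhance(img, X_T, r=1, mu_c=0.5):
    """Fuzzy-logic enhancement: image domain -> fuzzy domain (step 20121),
    r passes of the operator I_1 (step 20122), inverse transform (step 20123).
    mu_c, the crossover membership, is a free parameter in this sketch."""
    img = np.asarray(img, dtype=float)
    X_max = img.max()
    low = img <= X_T                          # which branch of T applied
    mu = np.where(low, img / X_T, img / X_max)
    for _ in range(r):                        # I_r = I_1 applied r times
        mu = np.where(mu <= mu_c,
                      mu ** 2 / mu_c,
                      1.0 - (1.0 - mu) ** 2 / (1.0 - mu_c))
    return np.where(low, mu * X_T, mu * X_max)   # branch-wise T^{-1}
```

With r = 0 the transform round-trips exactly, which is a convenient sanity check; with r ≥ 1, memberships below μ_c are suppressed and those above are boosted, stretching contrast around the crossover.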
In the above method, before transforming to the fuzzy domain in step 20121, the gray threshold $X_T$ is first chosen by the between-class variance maximization (Otsu) method. Before this selection, all gray values whose pixel count is 0 are found within the gray range of the image to be enhanced, and the processor (3) marks them all as exempt from calculation. When choosing $X_T$, the between-class variance is computed with each gray value in the gray range of the image, other than the exempt gray values, as candidate threshold; the maximum between-class variance is then found among the computed values, and the gray value corresponding to it is taken as the gray threshold $X_T$.
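The threshold selection just described is Otsu's between-class variance maximization with zero-count gray levels skipped as candidates. A sketch (Python/NumPy; 8-bit gray levels assumed):

```python
import numpy as np

def otsu_threshold(img):
    """Between-class variance maximization; gray levels with zero pixel
    count are marked exempt and skipped as candidate thresholds."""
    img = np.asarray(img)
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(256):
        if hist[t] == 0:                 # exempt gray value: zero pixel count
            continue
        w0 = p[:t + 1].sum()
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t + 1) * p[:t + 1]).sum() / w0
        mu1 = (np.arange(t + 1, 256) * p[t + 1:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Skipping empty gray levels cuts the candidate loop down to the gray values actually present, which is the speed-up the method claims.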
In the above method, the size of the digital image captured at each sampling instant in step 1 is M1 × N1 pixels;
The image segmentation of step 2013 proceeds as follows:
Step 20131, two-dimensional histogram construction: the processor builds a two-dimensional histogram of the image to be segmented over pixel gray value and neighborhood average gray value. Any point of this histogram is denoted (i, j), where the abscissa i is the gray value of a pixel (m, n) of the image to be segmented and the ordinate j is the neighborhood average gray value of that pixel (m, n). The count of occurrences of any point (i, j) of the histogram is denoted C(i, j), and the frequency of point (i, j) is denoted h(i, j), where $h(i,j)=C(i,j)/(M1\times N1)$;
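The construction in step 20131 pairs each pixel's gray value with its neighborhood mean. A sketch (Python/NumPy); the 3×3 neighborhood and edge padding are assumptions here, as the patent does not fix them:

```python
import numpy as np

def two_d_histogram(img, k=3):
    """2-D histogram over (pixel gray value i, k x k neighborhood mean j);
    normalized so h(i, j) = C(i, j) / (M1 * N1) as in step 20131."""
    img = np.asarray(img, dtype=int)
    M1, N1 = img.shape
    pad = k // 2
    padded = np.pad(img, pad, mode='edge').astype(float)
    acc = np.zeros((M1, N1))
    for dy in range(k):                  # k x k box sum via shifted slices
        for dx in range(k):
            acc += padded[dy:dy + M1, dx:dx + N1]
    j = np.rint(acc / (k * k)).astype(int)   # neighborhood average gray value
    C = np.zeros((256, 256))
    np.add.at(C, (img, j), 1)                # count occurrences of (i, j)
    return C / (M1 * N1)
```

The resulting h sums to 1 over all (i, j) pairs, and for a constant image the whole mass sits on the diagonal point (v, v).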
Step 20132, fuzzy parameter combination optimization: the processor calls a fuzzy parameter combination optimization module, which uses particle swarm optimization to optimize the fuzzy parameter combination used by the image segmentation method based on two-dimensional fuzzy partition maximum entropy, and obtains the optimized fuzzy parameter combination;
In this step, before the fuzzy parameter combination is optimized, the functional expression of the two-dimensional fuzzy entropy for segmenting the image is first derived from the two-dimensional histogram built in step 20131, and this expression is used as the fitness function when the fuzzy parameter combination is optimized by particle swarm optimization;
Step 20133, image segmentation: using the fuzzy parameter combination optimized in step 20132 and the image segmentation method based on two-dimensional fuzzy partition maximum entropy, the processor classifies every pixel of the image to be segmented, correspondingly completing the segmentation process and obtaining the segmented target image.
In the above method, the image to be segmented in step 20131 consists of a target image O and a background image P. The membership function of the target image O is

$$\mu_o(i,j)=\mu_{ox}(i;a,b)\,\mu_{oy}(j;c,d)\qquad(1)$$

and the membership function of the background image P is

$$\mu_b(i,j)=\mu_{bx}(i;a,b)\,\mu_{oy}(j;c,d)+\mu_{ox}(i;a,b)\,\mu_{by}(j;c,d)+\mu_{bx}(i;a,b)\,\mu_{by}(j;c,d)\qquad(2)$$

In (1) and (2), $\mu_{ox}(i;a,b)$ and $\mu_{oy}(j;c,d)$ are the one-dimensional membership functions of the target image O, both S-functions; $\mu_{bx}(i;a,b)$ and $\mu_{by}(j;c,d)$ are the one-dimensional membership functions of the background image P, both S-functions, with $\mu_{bx}(i;a,b)=1-\mu_{ox}(i;a,b)$ and $\mu_{by}(j;c,d)=1-\mu_{oy}(j;c,d)$; a, b, c, and d are the parameters that control the shapes of the one-dimensional membership functions of the target image O and the background image P;
When deriving the two-dimensional fuzzy entropy expression in step 20132, the minimum $g_{\min}$ and maximum $g_{\max}$ of the pixel gray values of the image to be segmented, and the minimum $s_{\min}$ and maximum $s_{\max}$ of the neighborhood average gray values, are first determined from the two-dimensional histogram built in step 20131;
The two-dimensional fuzzy entropy expression derived in step 20132 is

$$H=-\sum_{i=g_{\min}}^{g_{\max}}\sum_{j=s_{\min}}^{s_{\max}}\frac{\mu_o(i,j)\,h_{ij}}{p(O)}\exp\!\left(1-\log\frac{\mu_o(i,j)\,h_{ij}}{p(O)}\right)-\sum_{i=g_{\min}}^{g_{\max}}\sum_{j=s_{\min}}^{s_{\max}}\frac{\mu_b(i,j)\,h_{ij}}{p(B)}\exp\!\left(1-\log\frac{\mu_b(i,j)\,h_{ij}}{p(B)}\right)\qquad(3)$$

where $p(O)=\sum_{i=g_{\min}}^{g_{\max}}\sum_{j=s_{\min}}^{s_{\max}}\mu_o(i,j)\,h_{ij}$ and $p(B)=\sum_{i=g_{\min}}^{g_{\max}}\sum_{j=s_{\min}}^{s_{\max}}\mu_b(i,j)\,h_{ij}$, and $h_{ij}=h(i,j)$ is the frequency of the point (i, j) from step 20131;
When the fuzzy parameter combination is optimized by particle swarm optimization in step 20132, the combination optimized is (a, b, c, d).
In the above method, the parameter combination optimization for two-dimensional fuzzy partition maximum entropy in step 20132 comprises the following steps:
Step II-1, population initialization: one value of the parameter combination serves as one particle, and multiple particles form the initialized population. The k-th particle is denoted $(a_k, b_k, c_k, d_k)$, where k is a positive integer, k = 1, 2, 3, …, K, and K (a positive integer) is the number of particles in the population; $a_k$, $b_k$, $c_k$, and $d_k$ are random values of the parameters a, b, c, and d respectively, with $a_k < b_k$ and $c_k < d_k$;
Step II-2, fitness function determination: the two-dimensional fuzzy entropy H of formula (3) is taken as the fitness function;
Step II-3, particle fitness evaluation: the fitness of every particle at the current time is evaluated, using the same evaluation method for all particles. When evaluating the k-th particle, its current fitness value, denoted fitness_k, is first computed from the fitness function determined in Step II-2 and compared with Pbest_k: if fitness_k > Pbest_k, then Pbest_k = fitness_k, and the personal best position of the k-th particle is updated to its current position. Here Pbest_k is the maximum fitness value reached so far by the k-th particle, i.e. its individual extreme value, and the associated position is the personal best position of the k-th particle; t is the current iteration count, a positive integer.
After the fitness values of all particles at the current time have been computed from the fitness function determined in Step II-2, the largest of them is denoted fitness_kbest and compared with gbest: if fitness_kbest > gbest, then gbest = fitness_kbest, and the group's best position is updated to the position of the particle with the largest current fitness value. Here gbest is the global extremum at the current time, and the associated position is the group's optimal position at the current time;
Step II-4, iteration stop test: when the stopping criterion is met, the parameter combination optimization is complete; otherwise, the position and velocity of each particle at the next time are updated according to the particle swarm optimization algorithm, and the process returns to Step II-3. The stopping criterion of Step II-4 is that the current iteration count t reaches a preset maximum iteration count $I_{\max}$, or that Δg ≤ e, where Δg = |gbest − gmax|, gbest is the global extremum at the current time, gmax is the originally set target fitness value, and e is a preset positive deviation value.
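Steps II-1 through II-4 are a standard global-best particle swarm loop. A generic sketch (Python/NumPy); the inertia and acceleration coefficients w, c1, c2 are conventional choices not specified by the patent, and the quadratic test function stands in for the fuzzy entropy fitness of formula (3):

```python
import numpy as np

def pso_maximize(fitness, lo, hi, n=30, i_max=200, w=0.7, c1=1.5, c2=1.5,
                 gmax=None, e=0.0, seed=0):
    """Global-best PSO (Steps II-1..II-4): initialize particles, evaluate
    fitness, update personal bests Pbest_k and global best gbest, and stop
    after i_max iterations or when |gbest - gmax| <= e."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, (n, lo.size))            # Step II-1
    v = np.zeros_like(x)
    pbest_x = x.copy()
    pbest_f = np.array([fitness(p) for p in x])      # Step II-3, first pass
    g = pbest_f.argmax()
    gbest_x, gbest_f = pbest_x[g].copy(), pbest_f[g]
    for _ in range(i_max):                           # Step II-4 loop
        if gmax is not None and abs(gbest_f - gmax) <= e:
            break
        r1, r2 = rng.random((2, n, lo.size))
        v = w * v + c1 * r1 * (pbest_x - x) + c2 * r2 * (gbest_x - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        better = f > pbest_f                         # update Pbest_k
        pbest_x[better], pbest_f[better] = x[better], f[better]
        g = pbest_f.argmax()
        if pbest_f[g] > gbest_f:                     # update gbest
            gbest_x, gbest_f = pbest_x[g].copy(), pbest_f[g]
    return gbest_x, gbest_f
```

For the segmentation use case, `fitness` would evaluate H of formula (3) at a candidate (a, b, c, d), with the ordering constraints a < b and c < d enforced by the search box or a penalty.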
Compared with the prior art, the present invention has the following advantages:
1. The method steps are simple, the design is reasonable, implementation is convenient, and the input cost is low.
2. The image enhancement steps adopted are simple, reasonably designed, and effective. Given the poor imaging quality caused by low illumination and round-the-clock artificial lighting in underground coal mines, and building on an analysis and comparison of traditional image enhancement algorithms, a fuzzy-logic image enhancement preprocessing method is proposed. It adopts a new membership function that reduces the loss of pixel information in low-gray regions, overcomes the contrast decline brought about by fuzzy enhancement, and improves adaptability. At the same time, a fast between-class variance maximization method is used for threshold selection, achieving fast adaptive selection of the fuzzy enhancement threshold and raising computation speed and real-time performance. The method can enhance images captured under varying environments, effectively improves image detail and quality, computes quickly, and meets real-time requirements.
3. The image segmentation steps adopted are simple, reasonably designed, and effective. Because the one-dimensional maximum entropy method segments low-SNR, low-illumination images poorly, segmentation based on two-dimensional fuzzy partition maximum entropy is adopted; this method accounts for gray information, spatial neighborhood information, and the inherent fuzziness of images, but suffers from slow computation. The present patent application therefore optimizes the fuzzy parameter combination with particle swarm optimization, so that the optimized combination can be obtained easily, quickly, and accurately, greatly increasing segmentation efficiency. Moreover, the particle swarm optimization algorithm adopted is reasonably designed and convenient to implement: it adaptively adjusts the size of the local search space according to the current population state and iteration count, achieving a higher search success rate and higher-quality solutions without affecting convergence speed. Segmentation quality and robustness are good, computation speed is improved, and real-time requirements are met.
4, the segmentation method based on two-dimensional fuzzy partition maximum entropy can segment flame images quickly and accurately, overcoming the problem that traditional single-threshold algorithms misclassify noise points; meanwhile, the particle swarm optimization algorithm is adopted to optimize the fuzzy parameter combination, solving the nonlinear integer programming problem and preserving the shape of the segmented target while suppressing noise. The present invention therefore combines the two-dimensional fuzzy partition maximum entropy segmentation method with the particle swarm optimization algorithm to realize fast segmentation of infrared images: the parameter combination (a, b, c, d) is taken as a particle, and the two-dimensional fuzzy partition entropy serves as the fitness function determining the search direction of the particles in the solution space. Once the two-dimensional histogram of the image is obtained, the PSO algorithm searches for the optimal parameter combination (a, b, c, d) that maximizes the fitness function; finally, the pixels in the image are classified according to the maximum membership principle, thereby realizing the segmentation of the image. The segmentation method of the present invention performs very well even on infrared images with large noise, low contrast or small targets.
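A minimal particle-swarm sketch of the search described above: particles are (a, b, c, d) combinations and the `fitness` callback stands in for the two-dimensional fuzzy partition entropy; the patent's adaptive local-space adjustment is not reproduced, and the inertia/acceleration constants are conventional textbook values, not the patent's.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_maximize(fitness, dim=4, n_particles=20, iters=60, lo=0.0, hi=255.0):
    """Find the parameter combination maximizing `fitness` over [lo, hi]^dim."""
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[np.argmax(pbest_f)].copy()            # global best particle
    for k in range(iters):
        w = 0.9 - 0.5 * k / iters                   # inertia decays over iterations
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                  # keep particles in gray range
        f = np.array([fitness(p) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmax(pbest_f)].copy()
    return g, pbest_f.max()
```

In the segmentation itself, `fitness` would evaluate the fuzzy partition entropy of the two-dimensional histogram at (a, b, c, d); here any smooth objective demonstrates the search.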
5, during actual feature extraction, area, similarity, moment features, density, texture features and flicker features are chosen as the basis for fire image identification. Features that contribute strongly to classification are retained while redundant features are discarded, reducing the feature dimensionality and completing an optimized feature selection.
6, the modeling method of the adopted binary classification model is simple, reasonably designed, convenient to implement and effective in use, and the conjugate gradient method is adopted to optimize the hyperparameters of the kernel function. By exploiting the suitability of artificial neural networks for processing incomplete and fuzzy information together with the advantages of support vector machines for small-sample, nonlinear and high-dimensional problems, fire identification is performed so that the various criteria complement each other, overcoming the shortcoming that a single criterion easily causes false alarms when judging fire hazards. The conventional cross-validation parameter optimization method is quite time-consuming and does not guarantee that the selected parameters yield the best classification performance, while other existing hyperparameter selection algorithms cannot select the penalty factor and the kernel parameter simultaneously. For the small-sample LS-SVM pattern classification problem, the present invention takes minimization of the leave-one-out predicted sum of squared errors as the objective and, by a gradient descent method, chooses the two hyperparameters, the kernel parameter and the penalty factor, simultaneously for small-sample, nonlinear LS-SVM modeling. The binary classification model established by the present invention has not only a high recognition rate and high classification precision but also a short running time, and can complete fire identification easily and quickly; when the class of the currently acquired image is identified as having flame, a fire has broken out, an alarm is raised in time, and corresponding measures are taken. Aiming at the small-sample and nonlinear nature of fire identification in the complex special environment of a coal mine, and exploiting the advantage of support vector machines in high dimensions, a fire image recognition method based on the least squares support vector machine is proposed; on the basis of a fast leave-one-out procedure, the conjugate gradient method is used to optimize the hyperparameters, building the FR-LSSVM model.
In summary, the method of the present invention has simple steps, is convenient to implement and easy to operate, has high reliability and a good effect in use, and can effectively solve the problems of low reliability, high false-alarm and missed-alarm rates and poor performance of existing video fire detection systems in complex environments.
The technical scheme of the present invention is described in further detail below through the drawings and embodiments.
Brief description of the drawings
Fig. 1 is the flow block diagram of the method of the present invention.
Fig. 2 is the schematic circuit block diagram of the image acquisition and processing system used in the present invention.
Fig. 3 is the structural representation of the two-dimensional histogram established by the present invention.
Fig. 4 is a schematic diagram of the partition state during image segmentation in the present invention.
Description of reference numerals:
1-CCD camera; 2-video capture card; 3-processor; 4-data memory.
Embodiment
An image-type fire flame recognition method as shown in Fig. 1 comprises the following steps:
Step 1, image acquisition: an image acquisition unit is adopted to acquire digital images of the region to be detected according to a predefined sampling frequency f_s, and the digital image acquired at each sampling moment is synchronously transmitted to the processor 3. The image acquisition unit is connected with the processor 3.
In the present embodiment, the image acquisition unit comprises a CCD camera 1 and a video capture card 2 connected with the CCD camera 1, and the video capture card 2 is connected with the processor 3.
In the present embodiment, the size of the digital image acquired at each sampling moment is M1 × N1 pixels, where M1 is the number of pixels in each row of the acquired digital image and N1 is the number of pixels in each column.
Step 2, image processing: the processor 3 processes the digital images acquired at the sampling moments in step 1 in chronological order, the processing method for each acquired digital image being identical; the processing of the digital image acquired at any sampling moment comprises the following steps:
Step 201, image preprocessing, with the following process:
Step 2011, image reception and synchronous storage: the processor 3 synchronously stores the digital image acquired at the current sampling moment into the data memory 4, and the data memory 4 is connected with the processor 3;
In the present embodiment, the CCD camera 1 is an infrared CCD camera, and the CCD camera 1, the video capture card 2, the processor 3 and the data memory 4 form the image acquisition and preprocessing system; refer to Fig. 2.
Step 2012, image enhancement: the digital image acquired at the current sampling moment is enhanced by the processor 3 to obtain an enhanced digital image.
Step 2013, image segmentation: the enhanced digital image obtained in step 2012 is segmented by the processor 3 to obtain a target image.
Step 202, fire identification: a pre-established binary classification model is adopted to process the target image described in step 2013 and determine the fire condition class of the region to be detected at the current sampling moment. The fire condition classes are the two classes "flame" and "no flame", and the binary classification model is a support vector machine model that classifies between these two classes.
The establishment process of the binary classification model is as follows:
Step I, image information acquisition: the image acquisition unit is adopted to acquire multiple frames of digital image one of the region to be detected when a fire has broken out, and multiple frames of digital image two of the region to be detected when no fire has broken out.
Step II, feature extraction: feature extraction is performed on each frame of digital image one and each frame of digital image two; from each digital image, a group of characteristic parameters that can represent and distinguish that image is extracted. This group comprises M feature quantities, the M feature quantities are numbered, and the M feature quantities form a feature vector, where M ≥ 2.
Step III, training sample acquisition: from the feature vectors of the multiple frames of digital image one and digital image two obtained after the feature extraction of step II, the feature vectors of m1 frames of digital image one and m2 frames of digital image two are respectively chosen to form the training sample set, where m1 and m2 are positive integers with m1 = 40~100 and m2 = 40~100; the number of training samples in the training sample set is m1 + m2.
In the present embodiment, when obtaining the training samples, the image acquisition unit acquires digital image sequence one of the region to be detected during a period t1 when a fire has broken out, and digital image sequence two of the region to be detected when no fire has broken out. The number of digital image frames in sequence one is n1 = t1 × f_s, where t1 is the sampling time of sequence one; the number of frames in sequence two is n2 = t2 × f_s, where t2 is the sampling time of sequence two. Here n1 is not less than m1 and n2 is not less than m2. Afterwards, m1 digital images are chosen from sequence one as flame samples, and m2 digital images are chosen from sequence two as no-flame samples.
In the present embodiment, m1=m2.
Step IV, binary classification model establishment, with the following process:
Step IV-1, kernel function selection: a radial basis function is selected as the kernel function of the binary classification model;
Step IV-2, classification function determination: once the penalty factor γ and the kernel parameter σ² of the radial basis function selected in step IV-1 are determined, the classification function of the binary classification model is obtained and its establishment process is completed. Here γ = C⁻², σ = D⁻¹, 0.01 < C ≤ 10, 0.01 < D ≤ 50.
When determining the penalty factor γ and the kernel parameter σ², the conjugate gradient method is first adopted to optimize the parameters C and D; the optimized C and D are then converted into the penalty factor γ and the kernel parameter σ² according to γ = C⁻² and σ = D⁻¹.
When the conjugate gradient method is adopted in step IV-2 to optimize the parameters C and D, the m1 + m2 training samples in the training sample set of step III are used for the optimization.
Step V, binary classification model training: the m1 + m2 training samples in the training sample set of step III are input to train the binary classification model established in step IV.
In the present embodiment, the total number of training samples in the training sample set of step III is N, with N = m1 + m2. Before establishing the binary classification model in step IV, the N training samples are numbered; the p-th training sample is numbered p, where p is a positive integer and p = 1, 2, …, N. The p-th training sample is denoted (x_p, y_p), where x_p is the characteristic parameter (i.e. the feature vector) of the p-th training sample and y_p is its class number, with y_p = 1 or −1; class number 1 indicates flame and class number −1 indicates no flame.
When the conjugate gradient method is adopted in step IV-2 to optimize the parameters C and D, the m1 + m2 training samples of step III are used, and the optimization process is as follows:
Step I, objective function determination:

$$sse(C,D) = \frac{1}{2}\sum_{p=1}^{N} e_p^2 = \frac{1}{2}\sum_{p=1}^{N}\left[\tilde K(p,p^-)\cdot s_p - y(p)\right]^2 \quad (1)$$

where sse(C, D) is the leave-one-out predicted sum of squared errors, p is the number of each training sample in the training sample set, and e_p is the prediction error of the binary classification model established in step IV on the p-th training sample, e_p = K̃(p, p⁻)·s_p − y(p). Here

$$s_p = s(p^-) - \frac{s(p)}{(A^{-1})(p,p)}\,(A^{-1})(p^-,p)$$

where s(p⁻) is the vector formed by the remaining elements of the matrix s after its p-th element is removed; s(p) is the p-th element of s; (A⁻¹)(p⁻, p) is the column vector formed by the remaining elements of the p-th column of A⁻¹ after its p-th element is removed; (A⁻¹)(p, p) is the p-th element of the p-th column of A⁻¹; and K̃(p, p⁻) is the row vector formed by the remaining elements of the p-th row of K̃ after its p-th element is removed. The matrix K̃ is the augmented matrix of the kernel matrix K:

$$K = \begin{bmatrix} K(x_1,x_1) & K(x_1,x_2) & \cdots & K(x_1,x_N) \\ K(x_2,x_1) & K(x_2,x_2) & \cdots & K(x_2,x_N) \\ \vdots & \vdots & \ddots & \vdots \\ K(x_N,x_1) & K(x_N,x_2) & \cdots & K(x_N,x_N) \end{bmatrix}, \qquad \tilde K = [K, I_N] = \begin{bmatrix} K(x_1,x_1) & \cdots & K(x_1,x_N) & 1 \\ \vdots & \ddots & \vdots & \vdots \\ K(x_N,x_1) & \cdots & K(x_N,x_N) & 1 \end{bmatrix}$$

that is, K̃ is the matrix obtained by appending to the right side of K a column whose elements are all 1. The matrix A⁻¹ denotes the inverse of the matrix A,

$$A = \begin{bmatrix} K + C^2\cdot I & I_N \\ I_N^T & 0 \end{bmatrix}$$

where I is the identity matrix, I_N = [1, 1, …, 1]^T is a vector of N ones, and T denotes matrix transposition. The matrix s = A⁻¹·Y, with Y = [y_1, y_2, …, y_N, 0]^T, where y_1, y_2, …, y_N are the class numbers of the N training samples in the training sample set.
The constraint form of the least squares support vector machine (LS-SVM) is expressed as:

$$\min J(w,e) = \frac{1}{2} w^T w + \frac{\gamma}{2}\sum_{p=1}^{N} e_p^2, \quad \text{s.t.}\ \ y_p = w^T\varphi(x_p) + b + e_p,\ \ p = 1,2,\ldots,N \quad (5.22)$$

where w^Tφ(x_p) + b is the classification hyperplane in the high-dimensional feature space, w and b are the parameters of the hyperplane, e_p is the training error of the p-th training sample, Σe_p² is the empirical risk, and w^Tw = ‖w‖² measures the complexity of the learning machine.
Once the training sample set is determined, the performance of the LS-SVM model depends on the type of kernel function and on its two hyperparameters, the penalty factor γ and the kernel parameter σ². The classification precision of the LS-SVM model is related to the hyperparameter selection: the kernel parameter σ² represents the width of the radial basis function and strongly affects the smoothness of the LS-SVM model, while the penalty factor γ, also called the regularization parameter, controls the degree of punishment of error samples and is closely related to the complexity of the model and its degree of fit to the training samples.
In the present embodiment, the radial basis function selected in step IV-1 is K(x_s, x_t) = exp(−‖x_s − x_t‖²/σ²), whose regression function is y = Σ_{t=1}^{N} α_t·K(x, x_t) + b, where α_t and b are regression parameters, and s and t are positive integers with s = 1, 2, …, N and t = 1, 2, …, N.
Formula (5.22) can be written as:

$$\min J(w,b,e) = \frac{1}{2} w^T w + \frac{1}{2}C^{-2}\sum_{t=1}^{N} e_t^2, \quad \text{s.t.}\ \ y_t = w^T\varphi(x_t) + b + e_t,\ \ t = 1,2,\ldots,N \quad (5.23)$$

In formula (5.23), C⁻² replaces the penalty factor γ but plays the same role of balancing the complexity of the LS-SVM model against the empirical risk, and σ is replaced by D⁻¹, so the radial basis function K(x_s, x_t) = exp(−‖x_s − x_t‖²/σ²) is re-expressed as K(x_s, x_t) = exp(−D²·‖x_s − x_t‖²).
According to the least squares support vector machine principle, formula (5.23) is converted into the system of linear equations

$$\begin{bmatrix} K + C^2\cdot I & I_N \\ I_N^T & 0 \end{bmatrix}\cdot s = Y = [y_1, y_2, \ldots, y_N, 0]^T \quad (5.25)$$

For the derivation of formula (5.25), refer to "Online recursive LS-SVM modeling method based on a fast leave-one-out cross-validation method" (Shao Weiming, Tian Xuemin), Journal of Qingdao University of Science and Technology (Natural Science Edition), Vol. 33, No. 5, October 2012. Solving formula (5.25) yields the regression function of the radial basis function, and from formula (5.25) it follows that the matrix s = A⁻¹·Y (5.28).
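Training therefore amounts to assembling A and solving one linear system. A minimal numpy sketch under the parameterisation used here (γ = C⁻², K(x_s, x_t) = exp(−D²‖x_s − x_t‖²)); the function names are my own, and the sign of the regression output with ±1 targets serves as the class prediction.

```python
import numpy as np

def rbf_kernel(X1, X2, D):
    """K(s, t) = exp(-D^2 * ||x_s - x_t||^2), the kernel form after sigma = 1/D."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-(D ** 2) * d2)

def lssvm_train(X, y, C, D):
    """Solve the linear system  [[K + C^2*I, 1],[1^T, 0]] s = [y; 0]  once."""
    N = len(y)
    K = rbf_kernel(X, X, D)
    A = np.zeros((N + 1, N + 1))
    A[:N, :N] = K + (C ** 2) * np.eye(N)   # gamma^{-1} = C^2 on the diagonal
    A[:N, N] = 1.0
    A[N, :N] = 1.0
    Y = np.append(y.astype(float), 0.0)
    s = np.linalg.solve(A, Y)
    return s[:N], s[N]                     # alpha coefficients and bias b

def lssvm_predict(X_new, X, alpha, b, D):
    """Sign of the regression function as the +1 / -1 class decision."""
    return np.sign(rbf_kernel(X_new, X, D) @ alpha + b)
```

With small C the diagonal regularization is weak and the model interpolates the training labels; larger C smooths the decision function.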
The established binary classification model is verified N times using the N training samples in the training sample set. In the p-th verification, the p-th training sample serves as the prediction set and the remaining N−1 samples serve as the training set; after solving the LS-SVM parameters α and b from the training set, the p-th training sample is classified and the correctness of the classification result is recorded. After the N verifications, the leave-one-out misclassification rate e_LOO can be calculated by formula (5.29). For every given group of hyperparameters (comprising C and D), the corresponding e_LOO can be calculated, so the hyperparameter combination with the minimum e_LOO can be selected as the optimized parameters.
Because

$$s_p = s(p^-) - \frac{s(p)}{(A^{-1})(p,p)}\,(A^{-1})(p^-,p) \quad (5.30)$$

only one A⁻¹ needs to be solved per leave-one-out cross-validation for every given group of hyperparameters; s_p is then computed from A⁻¹ at each step, which saves a large amount of cross-validation time and greatly reduces the computational load.
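The saving claimed here, one matrix inverse instead of N reduced solves, can be checked numerically. A sketch (helper names are my own) that computes every leave-one-out residual from a single A⁻¹ via formula (5.30): removing index p from the (N+1)-dimensional system drops training sample p while keeping the bias row, so s_p is exactly the reduced-system solution.

```python
import numpy as np

def rbf(X1, X2, D):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-(D ** 2) * d2)

def loo_press(X, y, C, D):
    """sse(C, D) = 1/2 * sum_p [K~(p, p-) . s_p - y(p)]^2 from one inverse."""
    N = len(y)
    K = rbf(X, X, D)
    A = np.zeros((N + 1, N + 1))
    A[:N, :N] = K + (C ** 2) * np.eye(N)
    A[:N, N] = A[N, :N] = 1.0
    Y = np.append(y.astype(float), 0.0)
    Ainv = np.linalg.inv(A)            # solved once per (C, D) pair
    s = Ainv @ Y
    errs = np.empty(N)
    for p in range(N):
        keep = [i for i in range(N + 1) if i != p]
        # formula (5.30): solution of the system with sample p removed
        s_p = s[keep] - Ainv[keep, p] * s[p] / Ainv[p, p]
        k_row = np.append(K[p, [i for i in range(N) if i != p]], 1.0)
        errs[p] = k_row @ s_p - y[p]   # leave-one-out residual e_p
    return 0.5 * (errs ** 2).sum(), errs
```

The test below confirms each e_p matches a brute-force retrain without sample p.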
To make sse(C, D) reach its minimum, formula (5.30) is optimized to search for C and D. First the gradient of sse(C, D) with respect to C and D is taken; according to the definitions of matrix differentiation and inverse-matrix differentiation:

$$\frac{\partial K(i,j)}{\partial D} = -2D\cdot K(i,j)\cdot\|x_i - x_j\|^2 \quad (5.32)$$

$$\frac{\partial \tilde K}{\partial D} = \left[\frac{\partial K}{\partial D}, 0\right] \quad (5.33)$$

$$\frac{\partial A}{\partial C} = \begin{bmatrix} 2C\cdot I & 0 \\ 0^T & 0 \end{bmatrix} \quad (5.34) \qquad \frac{\partial A}{\partial D} = \begin{bmatrix} \partial K/\partial D & 0 \\ 0^T & 0 \end{bmatrix} \quad (5.35)$$

where in formulas (5.34) and (5.35) the symbol 0 denotes an N-dimensional vector whose elements are all 0. According to A·A⁻¹ = I (I being the identity matrix), it can be derived that

$$\frac{\partial A^{-1}}{\partial \theta} = -A^{-1}\cdot\frac{\partial A}{\partial \theta}\cdot A^{-1}, \quad \theta\in\{C, D\} \quad (5.36)$$
From formula (1), the two partial derivatives can be derived as:

$$\frac{\partial sse}{\partial C} = \sum_{p=1}^{N} e_p\cdot\tilde K(p,p^-)\cdot\frac{\partial s_p}{\partial C} \quad (5.37)$$

$$\frac{\partial sse}{\partial D} = \sum_{p=1}^{N} e_p\cdot\left[\frac{\partial \tilde K(p,p^-)}{\partial D}\cdot s(p^-) + \tilde K(p,p^-)\cdot\frac{\partial s_p}{\partial D}\right] \quad (5.38)$$
According to formula (5.30):

$$\frac{\partial s_p}{\partial C} = \frac{\partial s}{\partial C}(p^-) - \frac{\dfrac{\partial (A^{-1})(p^-,p)}{\partial C}\cdot s(p) + (A^{-1})(p^-,p)\cdot\dfrac{\partial s(p)}{\partial C}}{(A^{-1})(p,p)} + \frac{(A^{-1})(p^-,p)\cdot s(p)}{\left[(A^{-1})(p,p)\right]^2}\cdot\frac{\partial (A^{-1})(p,p)}{\partial C} \quad (5.39)$$

$$\frac{\partial s_p}{\partial D} = \frac{\partial s}{\partial D}(p^-) - \frac{\dfrac{\partial (A^{-1})(p^-,p)}{\partial D}\cdot s(p) + (A^{-1})(p^-,p)\cdot\dfrac{\partial s(p)}{\partial D}}{(A^{-1})(p,p)} + \frac{(A^{-1})(p^-,p)\cdot s(p)}{\left[(A^{-1})(p,p)\right]^2}\cdot\frac{\partial (A^{-1})(p,p)}{\partial D} \quad (5.40)$$

Obviously, since s = A⁻¹·Y, the quantities ∂s/∂C = (∂A⁻¹/∂C)·Y and ∂s/∂D = (∂A⁻¹/∂D)·Y, together with the partial derivatives of A⁻¹, can all be calculated by formulas (5.32)-(5.36).
For each group of hyperparameters C and D, the gradient of sse(C, D) with respect to them can thus be calculated according to formulas (5.37) and (5.38). According to the LS-SVM principle, the selection of the LS-SVM hyperparameters is converted from a constrained optimization problem into an unconstrained one by replacing γ with C⁻² and σ with D⁻¹; this conversion does not affect the performance of the LS-SVM model, and the value ranges of C and D do not affect the calculation of the gradient.
Step II, initial parameter setting: the initial values C_1 and D_1 of the parameters C and D are determined respectively, and an identification error threshold ε is set, with ε > 0.
Step III, calculation of the gradient g_k of the current iteration: the gradient of the objective function of step I with respect to C_k and D_k is calculated as

$$g_k = \left[\frac{\partial sse}{\partial C_k}, \frac{\partial sse}{\partial D_k}\right]^T$$

where k is the iteration count, k = 1, 2, …. If ‖g_k‖ ≤ ε, the calculation stops, and the current C_k and D_k are the optimized parameters C and D; otherwise, proceed to step IV.
Here

$$\frac{\partial sse}{\partial C_k} = \sum_{p=1}^{N} e_p\cdot\tilde K(p,p^-)\cdot\frac{\partial s_p}{\partial C_k}, \qquad \frac{\partial sse}{\partial D_k} = \sum_{p=1}^{N} e_p\cdot\left[\frac{\partial \tilde K(p,p^-)}{\partial D_k}\cdot s(p^-) + \tilde K(p,p^-)\cdot\frac{\partial s_p}{\partial D_k}\right]$$

where K̃(p, p⁻) is the row vector formed by the remaining elements of the p-th row of the matrix K̃ after its p-th element is removed, s(p⁻) is the vector formed by the remaining elements of the matrix s after its p-th element is removed, and e_p is the prediction error of the binary classification model established in step IV on the p-th training sample.
Step IV, calculation of the search direction d_k of the current iteration: the search direction is calculated according to

$$d_k = \begin{cases} -g_k & k = 1 \\ -g_k + \beta_k\, d_{k-1} & k \ge 2 \end{cases}, \qquad \beta_k = \|g_k\|^2/\|g_{k-1}\|^2$$

where d_{k−1} is the search direction of the (k−1)-th iteration and g_{k−1} is the gradient of the (k−1)-th iteration.
Step V, determination of the search step size λ_k of the current iteration: a search is performed along the direction d_k determined in step IV to find the step size λ_k satisfying

$$sse\!\left(C_k - \lambda_k\frac{\partial sse}{\partial C_k},\, D_k - \lambda_k\frac{\partial sse}{\partial D_k}\right) = \min_{\lambda>0}\, sse\!\left(C_k - \lambda\frac{\partial sse}{\partial C_k},\, D_k - \lambda\frac{\partial sse}{\partial D_k}\right)$$

that is, λ_k is the step size found in (0, +∞) that minimizes sse.
Step VI, calculation of C_{k+1} and D_{k+1} according to C_{k+1} = C_k − λ_k·∂sse/∂C_k and D_{k+1} = D_k − λ_k·∂sse/∂D_k.
Step VII, let k = k + 1, then return to step III for the next iteration.
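Steps II-VII form a standard Fletcher-Reeves conjugate gradient loop (β_k = ‖g_k‖²/‖g_{k−1}‖²). A generic sketch under stated assumptions: a bracketing ternary search stands in for the exact step-size rule of step V, the update moves along d_k, and a toy quadratic replaces sse(C, D).

```python
import numpy as np

def line_search(f, x, d, hi=1.0, iters=80):
    """Near-exact 1-D search over lambda > 0 (step V): expand the bracket
    while f keeps decreasing, then ternary-search the interval."""
    while f(x + hi * d) < f(x + 0.5 * hi * d):
        hi *= 2.0
    lo = 0.0
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(x + m1 * d) < f(x + m2 * d):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def conjugate_gradient(f, grad, x0, eps=1e-8, max_iter=100):
    """Fletcher-Reeves CG: beta_k = ||g_k||^2 / ||g_{k-1}||^2."""
    x = np.asarray(x0, dtype=float)       # step II: initial point (C_1, D_1)
    g = grad(x)
    d = -g                                # step IV with k = 1
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:      # step III: stopping test
            break
        lam = line_search(f, x, d)        # step V: step size
        x = x + lam * d                   # step VI: parameter update
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)  # step IV for k >= 2
        d = -g_new + beta * d
        g = g_new
    return x
```

On a quadratic objective with an exact line search, this converges in at most as many steps as there are dimensions, which is why the document reports fewer iterations than grid search.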
Finally the matrix s = A⁻¹·Y = [α_1, α_2, …, α_N, b]^T is obtained.
In the present embodiment, after the binary classification model is established, the final classifier adopted is f(x) = sgn[Σ_{t=1}^{N} α_t·y_t·K(x, x_t) + b].
In the present embodiment, the step size λ_k in step V is computed using the autocorrelation matrix H = A^T·A, where T denotes matrix transposition.
In the actual operation process, when determining C_1 and D_1 in step II, a grid search method or a random-value method is adopted. When the random-value method is adopted, C_1 is a value randomly drawn from (0.01, 1] and D_1 is a value randomly drawn from (0.01, 50]. When the grid search method is adopted, the grid is first divided with a step size of 10⁻³; a three-dimensional grid graph is then made with C and D as the independent variables and the objective function of step I as the dependent variable; several groups of C and D parameters are found by grid search; and finally the groups of parameters are averaged to obtain C_1 and D_1.
In the present embodiment, the grid search method is adopted to determine C_1 and D_1, and B groups of C and D parameters are found by grid search, where B is a positive integer and B = 5~20.
The conjugate gradient method has the features of a simple algorithm, small storage requirement and fast convergence. It converts a multidimensional problem into a series of linear searches along directions derived from the negative gradient, following the direction in which the objective function value decreases fastest locally, which effectively reduces the number of iterations and the running time.
By contrast, when the grid search method is used to optimize C and D, a highly precise optimum can only be found with a very small step size, which is very time-consuming.
Combustion is a typical sustained unstable physical process with multiple characterization parameters. The flame image of an incipient fire mainly has features such as flame area growth, edge jitter, irregular shape and a basically stable position. The area-growth criterion used here is developed and realized under the Visual C++ [115] platform, and the area change rate is defined as

$$AR = \frac{|A(n+1) - A(n)|}{\max[A(n), A(n+1)] + eps}$$

where AR denotes the area change rate of the highlighted region between consecutive frames, and A(n) and A(n+1) denote the areas of the suspicious region in the current frame and the next frame respectively. To prevent the area change rate from becoming infinite when no suspicious flame region exists in two adjacent frames, a minimal value eps is added to the denominator. In addition, to realize normalization, the maximum of the highlighted-region areas in the two frames is taken as the denominator, so that the final result lies between 0 and 1.
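The criterion above reduces to a one-line function; `eps` guards the empty-region case as described.

```python
def area_change_rate(area_n, area_n1, eps=1e-6):
    """AR = |A(n+1) - A(n)| / (max[A(n), A(n+1)] + eps), normalised to [0, 1)."""
    return abs(area_n1 - area_n) / (max(area_n, area_n1) + eps)
```

A growing flame region gives AR well above zero frame after frame, while a static bright object gives AR near zero.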
The shape similarity of images is usually measured with the aid of a similarity degree between descriptors, and this method can establish a corresponding similarity measure at any degree of complexity. According to the background subtraction method, let the known image sequence be f_h(x, y), h = 1, 2, …, N_0, where (x, y) are the coordinates of each pixel in the image and N_0 is the number of frames, and let the reference image be f_o(x, y). The difference image sequence can then be defined as δ_h(x, y) = |f_h(x, y) − f_o(x, y)|, which represents the difference between each frame of the original sequence and the reference image. The difference image sequence is binarized to obtain the image sequence {b_h(x, y)}, in which pixels marked 1 represent regions with a marked difference between the original sequence and the reference image. Such a region is regarded as a possible flame region; after filtering out the influence of isolated points, the pixels marked 1 in every frame are labeled, giving the possible flame region Ω_h of every frame in the sequence. After a suspicious flame region is found, flame and interference are distinguished by calculating the similarity of the change images of successive frames. The similarity ξ_h of the change images of successive frames is defined accordingly; after several similarities are obtained, the mean value of ξ_h over several consecutive frames is used as the criterion.
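The exact formula for ξ_h appears only as an image in the source. As a stand-in with the same intent, and purely as an assumption of mine, the overlap ratio (intersection over union) of the candidate flame masks Ω_h in consecutive frames can serve as the similarity, averaged over several frames as the criterion.

```python
import numpy as np

def frame_similarity(mask_a, mask_b):
    """Overlap ratio of candidate flame regions in two consecutive frames.
    NOTE: intersection-over-union is an assumed stand-in, not the patent's
    own definition of xi_h."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

def mean_similarity(masks):
    """Mean similarity over consecutive frame pairs, used as the criterion."""
    return float(np.mean([frame_similarity(a, b) for a, b in zip(masks, masks[1:])]))
```

A flickering flame keeps a moderate, fluctuating similarity, whereas a rigid bright object yields values near 1 and fast-moving interference yields values near 0.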
For the moment feature, from the angle of flame identification, the centroid feature of the flame image is adopted, the centroid representing its stability. For a flame image, the centroid is calculated as follows: m_00, the zeroth-order moment of the target region, is the area of that region; the first-order moments (M_10, M_01) of the image in the x and y directions are then computed, and from them the centroid is calculated.
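The centroid computation from m_00, M_10 and M_01 is, in outline:

```python
import numpy as np

def centroid(region):
    """Centroid of a binary or gray target region from image moments:
    m00 (zeroth moment, i.e. the area) and the first moments M10, M01."""
    g = region.astype(float)
    y_idx, x_idx = np.mgrid[0:g.shape[0], 0:g.shape[1]]
    m00 = g.sum()                 # zeroth-order moment = area
    M10 = (x_idx * g).sum()       # first moment in the x direction
    M01 = (y_idx * g).sum()       # first moment in the y direction
    return (M10 / m00, M01 / m00) # (x_bar, y_bar)
```

Tracking (x̄, ȳ) over frames lets a basically stable flame position be distinguished from drifting interference.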
The edge variation of an incipient fire flame has its own unique law; density and eccentricity, as simple and practical characteristic parameters, are used as one of the fire criteria to identify the edge variation of flame.
Density, also called circularity or dispersion degree, is commonly used to describe the complexity of an object boundary; on the basis of area and perimeter, it is a feature quantity measuring the shape complexity of an object or region. It is defined by formula (4.7) for k = 1, 2, …, n, where C_k denotes the density of the primitive numbered k; P_k is the perimeter of the k-th primitive, i.e. the boundary length of the suspicious primitive, which can be obtained by computing the boundary chain code; A_k is the area of the k-th primitive, which for a gray image can be obtained by counting the bright points of the suspicious primitive and for a binary image by counting the pixels whose value is 1; and n is the number of suspicious flame primitives in the image. The calculation of the perimeter is comparatively complicated, but it can be determined by extracting the boundary chain code.
The steps for calculating the density of a flame region are as follows:
1. On the basis of image segmentation, calculate the area of the suspected flame region;
2. Detect the continuous boundary pixels in the vertical direction and record their number N_x; detect the continuous boundary pixels in the horizontal direction and record their number N_y; compute the total number of boundary pixels S_n;
3. The number of even chain codes is N_E = N_x + N_y, and the number of odd chain codes is N_O = S_n − N_E; the perimeter is calculated by the formula P = N_E + √2·N_O;
4. Substitute the results of steps 1 and 3 into formula (4.7) to calculate the density.
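The perimeter and density steps can be sketched as follows. Since formula (4.7) is not reproduced in this text, the common compactness definition P²/(4πA), which equals 1 for a circle, stands in for it as an assumption.

```python
import math

def chain_code_perimeter(n_even, n_odd):
    """P = N_E + sqrt(2) * N_O: even chain codes are unit (axis-aligned)
    steps, odd chain codes are diagonal steps of length sqrt(2)."""
    return n_even + math.sqrt(2) * n_odd

def density(perimeter, area):
    """Compactness / circularity of a region.  Formula (4.7) of the patent
    is not reproduced; the common P^2 / (4*pi*A) definition stands in."""
    return perimeter ** 2 / (4.0 * math.pi * area)
```

Under this definition a circle scores 1 and increasingly ragged boundaries, such as a flickering flame edge, score progressively higher.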
A gray-level co-occurrence matrix is adopted to extract the image texture features; Haralick et al. derived 14 kinds of features from the gray-level co-occurrence matrix. In the present embodiment, the extracted image texture features comprise the five features of contrast, entropy, energy, homogeneity and correlation.
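A small numpy sketch of a horizontal-offset co-occurrence matrix and the five listed features; the quantisation to 8 gray levels and the single (0, 1) offset are my choices for illustration, not the patent's settings.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Gray-level co-occurrence matrix (horizontal neighbour offset) and the
    five texture features: contrast, entropy, energy, homogeneity, correlation."""
    q = (img.astype(float) * levels / 256.0).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                        # count co-occurring gray pairs
    p = glcm / glcm.sum()                      # normalise to a joint distribution
    i, j = np.mgrid[0:levels, 0:levels]
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    corr = (((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j)
            if sd_i > 0 and sd_j > 0 else 1.0)
    return dict(contrast=contrast, energy=energy, entropy=entropy,
                homogeneity=homogeneity, correlation=corr)
```

A flame region's irregular texture typically shows higher contrast and entropy than smooth interference sources such as lamps.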
Burning flame exhibits flicker, a characteristic that embodies the change over time of the distribution of pixels at different gray levels in a frame. By calculating the variation of edge pixels, the flicker law of the target pattern can be obtained. The flicker frequency of flame generally lies in the low-frequency range of 10-20 Hz. Because the typical video acquisition rate is 25 frames/s, the undistorted sampling requirement for recovering the flicker frequency is not met, and it is difficult to obtain the characteristic frequency spectrum directly from the video information. Regarding the flicker law, Toreyin proposed analyzing, under the RGB model, the color value of a fixed pixel in each frame over time, applying wavelets to the R component of that point: if flame exists, the value at that point changes sharply and the high-frequency components of the wavelet decomposition are nonzero. Wang Zhenhua [73] et al. proposed decomposing and reconstructing the flame characteristic time series with the discrete wavelet transform, using the area change to represent the flicker law. Zhang Jinhua et al. pointed out that the height of a flame changes greatly when it flickers, that this variation law is directly related to the flicker frequency, and that it differs greatly from that of interference sources; they therefore proposed replacing the flame flicker feature with the variation of flame height for flame identification. In the present embodiment, the method of Zhang Jinhua et al., based on the large height variation during flame flicker, is adopted to extract the flicker feature.
In the present embodiment, M = 6 in step II, and the 6 characteristic quantities are area, similarity, moment features, density, texture features and the flicker feature.
In actual operation, to test the performance of the established two-class model, a training sample set comprising flame samples and non-flame samples, 81 samples in total, was selected, each sample being 7-dimensional. For the 81 training samples, one group of data is taken out each time for class prediction, and the remaining 80 groups of data are used to optimize the hyper-parameters. With the initial values C₁ = 1 and D₁ = 1 and searching by the conjugate gradient method, the mean of the obtained C is 0.1386 with a mean square deviation of 0.0286, and the mean of D is 0.2421 with a mean square deviation of 0.0273; the optimized hyper-parameters are thus fairly stable. The two-class model of the present invention (the FR-LSSVM model) was compared with three other classification models, BP (neural network model), LS-SVM (least squares support vector machine model) and standard SVM (support vector machine model); the recognition results are shown in Table 1:
Table 1. Comparison of the recognition results of the different classification models
(Table 1 is given as an image in the original patent document.)
As can be seen from Table 1, in terms of recognition rate, BP is the worst, the LS-SVM with initial values selected by grid search is also poor, and FR-LSSVM and standard SVM are clearly better than both. In terms of training time, FR-LSSVM and LS-SVM have a significant advantage, while standard SVM is only slightly better; it is difficult for standard SVM to search out the optimal hyper-parameters. BP neural network training is very time-consuming and its recognition rate is slightly lower. The reasons are that the number of training samples is small and sample size has a large influence on the recognition rate, the amount of characteristic information included is insufficient, and the neural network has shortcomings in convergence and local minima; its parameter selection depends on experience, so the parameter settings carry considerable uncertainty. The recognition rate could be improved by supplementing training samples and further revising the weights of the BP neural network. The recognition rate of standard SVM is higher than that of LS-SVM, but its training time and recognition time are both longer. The hyper-parameter algorithm of FR-LSSVM is relatively standard, takes little time and is more stable, reducing uncertainty; it is particularly suitable for modeling small-sample, nonlinear problems and has a significant advantage in both speed and precision. In addition, these algorithms have relatively high requirements on image quality: if the image resolution is low, if the target area in the image is largely blocked by obstacles or covered or surrounded by dust, or if the extracted target is incomplete or contains noise, the recognition rate may be reduced.
In the present embodiment, when image enhancement is carried out in step 2012, an image enhancement method based on fuzzy logic is adopted for the enhancement processing.
In actual enhancement processing, the image enhancement method based on fuzzy logic (specifically the classical Pal-King fuzzy enhancement algorithm, i.e. the Pal algorithm) has the following defects:
1. When carrying out the fuzzification mapping and its inverse transformation, the Pal algorithm adopts a complicated power function as the fuzzy membership function, so real-time performance is poor and the amount of computation is large;
2. In the fuzzy enhancement transformation, a considerable number of low gray values in the original image are rigidly set to zero, causing loss of low-gray-level information;
3. The fuzzy enhancement threshold (the crossover point X_c) is generally chosen empirically or by repeated trial and comparison, lacking theoretical guidance and being somewhat arbitrary; the parameters F_d and F_e in the membership function are adjustable, and a reasonable choice of the values of F_d and F_e is closely related to the image processing effect;
4. In the fuzzy enhancement transformation, the repeated iterative operation enhances the image many times; there is no theoretical principle guiding the choice of the number of iterations, and when the number of iterations is large, edge details are affected.
To overcome the above defects of the classical Pal-King fuzzy enhancement algorithm, in the present embodiment, when the image to be enhanced of the digital image is enhanced in step 2012, the process is as follows:
Step 20121, transforming from the image domain to the fuzzy domain: according to the membership function
$$\mu_{gh}=T(x_{gh})=\begin{cases}x_{gh}/X_T, & x_{gh}\le X_T\\ x_{gh}/X_{max}, & x_{gh}>X_T\end{cases}\qquad(7)$$
the gray value of each pixel of the image to be enhanced is mapped to a fuzzy membership of the fuzzy set, and the fuzzy set of the image to be enhanced is obtained correspondingly. In the formula, x_gh is the gray value of any pixel (g, h) in the image to be enhanced, X_T is the gray threshold selected when the image enhancement method based on fuzzy logic is adopted to enhance the image, and X_max is the maximum gray value of the image to be enhanced.
After the gray values of all the pixels of the image to be enhanced have been mapped to fuzzy memberships of the fuzzy set, these fuzzy memberships correspondingly form the fuzzy membership matrix of the fuzzy set.
Since μ_gh ∈ [0, 1] in formula (7), the defect of the classical Pal-King fuzzy enhancement algorithm that many low gray values of the original image are cut to zero after the fuzzification mapping is overcome. Taking the threshold X_T as the dividing line and defining the membership of gray level x_gh region by region, i.e. defining the membership separately in the low-gray and high-gray regions of the image, also ensures minimum information loss in the low-gray region, thereby guaranteeing the image enhancement effect.
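The piecewise mapping of formula (7) can be sketched in a few lines (an illustrative sketch; the function name and 8-bit default X_max are assumptions):

```python
import numpy as np

def to_fuzzy(img, X_T, X_max=255):
    """Map gray values to fuzzy memberships per formula (7):
    x/X_T at or below the threshold, x/X_max above it, so every
    pixel receives a non-zero membership in [0, 1]."""
    img = np.asarray(img, dtype=float)
    return np.where(img <= X_T, img / X_T, img / X_max)
```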
In the present embodiment, before transforming from the image domain to the fuzzy domain in step 20121, the maximum between-class variance method is first adopted to choose the gray threshold X_T.
Step 20122, carrying out fuzzy enhancement processing in the fuzzy domain by means of a fuzzy enhancement operator: the fuzzy enhancement operator adopted is μ′_gh = I_r(μ_gh) = I_r(I_{r-1}(μ_gh)), where r is the number of iterations and is a positive integer, r = 1, 2, ..., and
$$I_1(\mu_{gh})=\begin{cases}\mu_{gh}^2/\mu_c, & 0\le\mu_{gh}\le\mu_c\\ 1-(1-\mu_{gh})^2/(1-\mu_c), & \mu_c<\mu_{gh}\le 1\end{cases}$$
in which μ_c = T(X_c), where X_c is the crossover point and X_c = X_T.
The above nonlinear transformation I_1(μ_gh) increases the values of μ_gh greater than μ_c and simultaneously decreases the values of μ_gh less than μ_c; here μ_c has evolved into a generalized crossover point.
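The iterated operator can be sketched as follows (an illustrative sketch; the function name and the default of two iterations are assumptions):

```python
import numpy as np

def enhance(mu, mu_c, r=2):
    """Apply the enhancement operator of step 20122 r times:
    memberships above mu_c are pushed toward 1, those below
    toward 0, increasing contrast in the fuzzy domain."""
    mu = np.asarray(mu, dtype=float)
    for _ in range(r):
        low = mu <= mu_c
        mu = np.where(low, mu ** 2 / mu_c,
                      1 - (1 - mu) ** 2 / (1 - mu_c))
    return mu
```

With μ_c = 0.5, two iterations map 0.25 down to 0.03125 and 0.75 up to 0.96875, illustrating the contrast-stretching effect.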
Step 20123, transforming back from the fuzzy domain to the image domain: according to the inverse transformation formula (6), x′_gh = T⁻¹(μ′_gh), the μ′_gh obtained after the fuzzy enhancement processing is inversely transformed to obtain the gray value of each pixel in the enhanced digital image, and the enhanced digital image is obtained.
Since the choice of the fuzzy enhancement threshold (the crossover point X_c) in the Pal algorithm is the key to image enhancement, in practical applications it has to be obtained empirically or by repeated attempts. A relatively classical method is the maximum between-class variance method (Otsu), which is simple, stable and effective and is often adopted in practice. The Otsu threshold selection method is free of the limitation of requiring repeated manual intervention: the computer can automatically determine the optimal threshold from the gray-level information of the image. The principle of Otsu is to use the between-class variance as the criterion and choose the gray value that maximizes the between-class variance as the optimal threshold, thereby realizing automatic selection of the fuzzy enhancement threshold and avoiding manual intervention in the enhancement process.
In the present embodiment, before the maximum between-class variance method is adopted to choose the gray threshold X_T, all gray values whose pixel count is 0 are first found within the gray variation range of the image to be enhanced, and the processor 3 marks all the gray values found as computation-exempt gray values. When the maximum between-class variance method is adopted to choose the gray threshold X_T, the between-class variance is calculated with each gray value in the gray variation range of the image to be enhanced other than the computation-exempt gray values taken as the threshold; the maximum between-class variance is then found among the calculated values, and the gray value corresponding to the maximum between-class variance found is the gray threshold X_T.
When the traditional maximum between-class variance method (Otsu) is adopted to choose the fuzzy enhancement threshold, if the number of pixels with gray value s is n_s, the total number of pixels is $n=\sum_{s=0}^{L-1}n_s$, and the probability of occurrence of each gray level of the collected digital image is $P_s=n_s/n$.
A threshold X_t divides the pixels in the image into two classes C_0 and C_1 by gray level, C_0 = {0, 1, ..., t} and C_1 = {t+1, t+2, ..., L-1}; suppose the proportions of the total pixel number accounted for by classes C_0 and C_1 are w_0(t) and w_1(t) respectively, and their average gray values are μ_0(t) and μ_1(t) respectively.
For C_0: $w_0(t)=\sum_{i=0}^{t}P_i=w(t)$, $\mu_0(t)=\frac{1}{w_0}\sum_{i=0}^{t}iP_i=\frac{\mu(t)}{w(t)}$;
For C_1: $w_1(t)=\sum_{i=t+1}^{L-1}P_i=1-w(t)$, $\mu_1(t)=\frac{1}{w_1}\sum_{i=t+1}^{L-1}iP_i=\frac{\mu-\mu(t)}{1-w(t)}$;
where $\mu=\sum_{i=0}^{L-1}iP_i$ is the statistical mean of the gray levels of the whole image, with μ = w_0μ_0 + w_1μ_1.
Thus the optimal threshold is
$$X_T=\arg\max_{t\in L}\left(w_0(t)\times w_1(t)\times\left(\mu_1(t)-\mu_0(t)\right)^2\right)\qquad(8)$$
The above process of automatically extracting the best fuzzy enhancement threshold X_T is: traverse all gray levels from level 0 to level L-1, and find the value of X_t at which formula (8) attains its maximum; that value is the required threshold X_T. Because the pixel count of the image at some gray levels may be zero, in order to reduce the number of variance computations, the present invention proposes an improved fast Otsu method.
$$\sigma^2(t)=w_0\times w_1\times(\mu_0-\mu_1)^2=w(t)\times[1-w(t)]\times\left[\frac{\mu(t)}{w(t)}-\frac{\mu-\mu(t)}{1-w(t)}\right]^2=\frac{[\mu(t)-w(t)\mu]^2}{w(t)[1-w(t)]}\qquad(2.32)$$
Suppose the pixel count at gray level t′ is zero, i.e. P_{t′} = 0.
If t′-1 is selected as the threshold, then:
$$w(t'-1)=\sum_{i=0}^{t'-1}P_i;\quad \mu(t'-1)=\sum_{i=0}^{t'-1}iP_i;\quad \mu=\sum_{i=0}^{L-1}iP_i\qquad(2.33)$$
If t′ is selected as the threshold instead:
$$w(t')=\sum_{i=0}^{t'}P_i=\sum_{i=0}^{t'-1}P_i+P_{t'}=\sum_{i=0}^{t'-1}P_i=w(t'-1)\qquad(2.34)$$
$$\mu(t')=\sum_{i=0}^{t'}iP_i=\sum_{i=0}^{t'-1}iP_i+t'P_{t'}=\sum_{i=0}^{t'-1}iP_i=\mu(t'-1)\qquad(2.35)$$
$$\mu=\sum_{i=0}^{L-1}iP_i\qquad(2.36)$$
It can thus be seen that:
$$\sigma^2(t'-1)=\sigma^2(t')\qquad(2.37)$$
Suppose further that there are consecutive gray levels t_1, t_2, ..., t_n with zero pixel counts; by analogy:
$$\sigma^2(t_1-1)=\sigma^2(t_1)=\sigma^2(t_2-1)=\sigma^2(t_2)=\cdots=\sigma^2(t_n-1)=\sigma^2(t_n)\qquad(2.38)$$
From the above, if the pixel count of a certain gray level is zero, the between-class variance need not be calculated with that level as the threshold: its between-class variance value is simply that of the nearest smaller gray level with a non-zero pixel count. Therefore, to find the maximum between-class variance quickly, the multiple gray levels sharing an equal between-class variance can be treated as the same gray level: the gray values with zero pixel count are regarded as non-existent, the between-class variance σ²(t) with such a level as the threshold is directly assigned zero, and their variance values need not be calculated. This has no influence whatsoever on the final result of the threshold selection, but increases the speed of the adaptive selection of the enhancement threshold.
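The fast Otsu selection can be sketched as follows, using the last form of formula (2.32) and evaluating the variance only at gray levels with a non-zero pixel count (an illustrative sketch; the function name and 256-level default are assumptions):

```python
import numpy as np

def fast_otsu(img, L=256):
    """Threshold maximizing the between-class variance
    sigma^2(t) = (mu(t) - w(t)*mu)^2 / (w(t)(1 - w(t))),
    skipping empty gray levels, which per formulas (2.33)-(2.38)
    cannot change the location of the maximum."""
    hist = np.bincount(np.asarray(img).ravel(), minlength=L).astype(float)
    P = hist / hist.sum()
    w = np.cumsum(P)                        # w(t), cumulative probability
    m = np.cumsum(np.arange(L) * P)         # mu(t), cumulative mean
    mu = m[-1]                              # global mean gray value
    best_t, best_var = 0, -1.0
    for t in np.flatnonzero(hist[:-1]):     # only non-empty levels
        if 0 < w[t] < 1:
            var = (m[t] - w[t] * mu) ** 2 / (w[t] * (1 - w[t]))
            if var > best_var:
                best_t, best_var = int(t), var
    return best_t
```

On a bimodal histogram the maximizer falls between the two modes, exactly as the full traversal would find, but with far fewer variance evaluations on sparse histograms.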
In the present embodiment, before the fuzzy enhancement processing is carried out in step 20122, a low-pass filtering method is first adopted to smooth the fuzzy set of the image to be enhanced obtained in step 20121; in the actual low-pass filtering, the filter operator adopted is
$$\frac{1}{16}\begin{bmatrix}1&2&1\\2&4&2\\1&2&1\end{bmatrix}.$$
Since images are easily polluted by noise during generation and transmission, before the image is enhanced, the fuzzy set of the image is first smoothed to reduce noise. In the present embodiment, the smoothing of the image fuzzy set is realized by the convolution of the 3 × 3 spatial-domain low-pass filter operator with the fuzzy set matrix of the image.
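The 3 × 3 convolution can be sketched without any image library (an illustrative sketch; the replicate-border handling is an assumption, as the patent does not specify edge treatment):

```python
import numpy as np

KERNEL = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]]) / 16.0       # weights sum to 1

def smooth(mu):
    """3x3 weighted-average low-pass filtering of the membership
    matrix, with the border replicated before convolution."""
    mu = np.pad(np.asarray(mu, dtype=float), 1, mode="edge")
    h, w = mu.shape[0] - 2, mu.shape[1] - 2
    out = np.zeros((h, w))
    for di in range(3):                     # accumulate shifted copies
        for dj in range(3):
            out += KERNEL[di, dj] * mu[di:di + h, dj:dj + w]
    return out
```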
In the present embodiment, when image segmentation is carried out in step 2013, the process is as follows:
Step 20131, establishing a two-dimensional histogram: the processor 3 is adopted to establish the two-dimensional histogram of the image to be segmented with respect to pixel gray value and neighborhood-average gray value. Any point in this two-dimensional histogram is denoted (i, j), where i is the abscissa of the two-dimensional histogram and is the gray value of any pixel (m, n) in the image to be segmented, and j is the ordinate of the two-dimensional histogram and is the neighborhood-average gray value of the pixel (m, n). The number of occurrences of any point (i, j) in the established two-dimensional histogram is denoted C(i, j), and the frequency with which the point (i, j) occurs is denoted h(i, j), i.e. C(i, j) normalized by the total number of pixels.
In the present embodiment, the neighborhood-average gray value of pixel (m, n) is calculated according to the formula
$$g(m,n)=\frac{1}{d\times d}\sum_{i_1=-(d-1)/2}^{(d-1)/2}\;\sum_{j_1=-(d-1)/2}^{(d-1)/2}f(m+i_1,n+j_1)\qquad(6)$$
where f(m+i₁, n+j₁) is the gray value of pixel (m+i₁, n+j₁), and d is the width of the square neighborhood window of the pixel, generally taken as an odd number.
Moreover, the gray variation range of the neighborhood-average gray value g(m, n) is identical to that of the pixel gray value f(m, n), both being [0, L), so the two-dimensional histogram established in step I is a square region; refer to Fig. 3, where L-1 is the maximum of the neighborhood-average gray value g(m, n) and the pixel gray value f(m, n).
In Fig. 3, the threshold vector (i, j) divides the established two-dimensional histogram into four regions. Because the correlation between the pixels inside the target image, or inside the background image, is very strong, the gray value of a pixel there is very close to its neighborhood-average gray value; whereas for pixels near the boundary between the target image and the background image, the difference between the pixel gray value and the neighborhood-average gray value is obvious. Thus, in Fig. 3, region 0# corresponds to the background image, region 1# corresponds to the target image, and regions 2# and 3# represent the distribution of pixels near the boundary and of noise points. The optimal threshold should therefore be determined in regions 0# and 1#, using the pixel gray values and neighborhood-average gray values, by the segmentation method of two-dimensional fuzzy-partition maximum entropy, so as to maximize the amount of information genuinely representing the target and the background.
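Construction of the two-dimensional histogram per formula (6) can be sketched as follows (an illustrative sketch; the function name, 256-level default and replicate-border neighborhood averaging are assumptions):

```python
import numpy as np

def histogram_2d(img, d=3, L=256):
    """Joint frequency h(i, j) of pixel gray value i and d x d
    neighborhood-average gray value j, normalized so the
    histogram sums to 1."""
    img = np.asarray(img, dtype=float)
    pad = d // 2
    padded = np.pad(img, pad, mode="edge")
    nbr = np.zeros_like(img)
    for di in range(d):                      # sum the d*d window
        for dj in range(d):
            nbr += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    nbr = (nbr / (d * d)).astype(int)        # neighborhood mean g(m, n)
    C = np.zeros((L, L))
    np.add.at(C, (img.astype(int).ravel(), nbr.ravel()), 1)
    return C / img.size                      # h(i, j) = C(i, j) / n
```

For a homogeneous region all the mass lies on the diagonal i = j, which is exactly why regions 0# and 1# of Fig. 3 concentrate along the diagonal.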
Step 20132, fuzzy parameter combination optimization: the processor 3 calls a fuzzy parameter combination optimization module to optimize, by the particle swarm optimization algorithm, the fuzzy parameter combination used by the image segmentation method based on two-dimensional fuzzy-partition maximum entropy, and obtains the optimized fuzzy parameter combination.
In this step, before the fuzzy parameter combination is optimized, the functional relation of the two-dimensional fuzzy entropy used in segmenting the image to be segmented is first calculated according to the two-dimensional histogram established in step 20131, and this functional relation is taken as the fitness function when the fuzzy parameter combination is optimized by the particle swarm optimization algorithm.
In the present embodiment, the image to be segmented in step 20131 consists of a target image O and a background image B. The membership function of the target image O is μ_o(i, j) = μ_ox(i; a, b)·μ_oy(j; c, d) (1).
The membership function of the background image B is μ_b(i, j) = μ_bx(i; a, b)·μ_oy(j; c, d) + μ_ox(i; a, b)·μ_by(j; c, d) + μ_bx(i; a, b)·μ_by(j; c, d) (2).
In formulas (1) and (2), μ_ox(i; a, b) and μ_oy(j; c, d) are the one-dimensional membership functions of the target image O, and both are S-functions; μ_bx(i; a, b) and μ_by(j; c, d) are the one-dimensional membership functions of the background image B, with μ_bx(i; a, b) = 1 - μ_ox(i; a, b) and μ_by(j; c, d) = 1 - μ_oy(j; c, d); a, b, c and d are the parameters controlling the shapes of the one-dimensional membership functions of the target image O and the background image B.
Wherein,
$$\mu_{ox}(i;a,b)=\begin{cases}0, & i\le a\\ 2\times\left(\dfrac{i-a}{b-a}\right)^2, & a<i\le\dfrac{a+b}{2}\\ 1-2\times\left(\dfrac{i-b}{b-a}\right)^2, & \dfrac{a+b}{2}<i\le b\\ 1, & b<i\le L-1\end{cases}$$
$$\mu_{oy}(j;c,d)=\begin{cases}0, & j\le c\\ 2\times\left(\dfrac{j-c}{d-c}\right)^2, & c<j\le\dfrac{c+d}{2}\\ 1-2\times\left(\dfrac{j-d}{d-c}\right)^2, & \dfrac{c+d}{2}<j\le d\\ 1, & d<j\le L-1\end{cases}$$
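The S-shaped one-dimensional membership function can be sketched as follows (an illustrative sketch; the function name is an assumption):

```python
import numpy as np

def s_member(x, a, b):
    """S-function mu_ox(x; a, b): 0 at or below a, quadratic rise
    through 0.5 at the midpoint (a+b)/2, and 1 at or above b."""
    x = np.asarray(x, dtype=float)
    mid = (a + b) / 2.0
    return np.where(x <= a, 0.0,
           np.where(x <= mid, 2 * ((x - a) / (b - a)) ** 2,
           np.where(x <= b, 1 - 2 * ((x - b) / (b - a)) ** 2, 1.0)))
```

μ_oy(j; c, d) has the identical shape with parameters (c, d), and the background memberships are simply the complements 1 - μ_ox and 1 - μ_oy.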
When the functional relation of the two-dimensional fuzzy entropy is calculated in step 20132, the minimum value g_min and maximum value g_max of the pixel gray values of the image to be segmented, and the minimum value s_min and maximum value s_max of the neighborhood-average gray values, are first determined from the two-dimensional histogram established in step 20131. In the present embodiment, g_max = s_max = L-1 and g_min = s_min = 0, where L-1 = 255.
The functional relation of the two-dimensional fuzzy entropy calculated in step 20132 is:
$$H(P)=-\sum_{i=g_{min}}^{g_{max}}\sum_{j=s_{min}}^{s_{max}}\frac{\mu_o(i,j)h_{ij}}{p(O)}\exp\!\left(1-\log\frac{\mu_o(i,j)h_{ij}}{p(O)}\right)-\sum_{i=g_{min}}^{g_{max}}\sum_{j=s_{min}}^{s_{max}}\frac{\mu_b(i,j)h_{ij}}{p(B)}\exp\!\left(1-\log\frac{\mu_b(i,j)h_{ij}}{p(B)}\right)\qquad(3)$$
where $p(O)=\sum_{i=g_{min}}^{g_{max}}\sum_{j=s_{min}}^{s_{max}}\mu_o(i,j)h_{ij}$ and $p(B)=\sum_{i=g_{min}}^{g_{max}}\sum_{j=s_{min}}^{s_{max}}\mu_b(i,j)h_{ij}$, and h(i, j) is the frequency of occurrence of the point (i, j) described in step I.
When the fuzzy parameter combination is optimized by the particle swarm optimization algorithm in step 20132, the fuzzy parameter combination to be optimized is (a, b, c, d).
In the present embodiment, the parameter combination optimization of the two-dimensional fuzzy-partition maximum entropy in step 20132 comprises the following steps:
Step II-1, population initialization: a value of the parameter combination is taken as a particle, and an initialized population is composed of multiple particles, denoted (a_k, b_k, c_k, d_k), where k is a positive integer, k = 1, 2, 3, ..., K, K being a positive integer equal to the number of particles in the population; a_k is a random value of parameter a, b_k a random value of parameter b, c_k a random value of parameter c and d_k a random value of parameter d, with a_k < b_k and c_k < d_k.
In the present embodiment, K=15.
In actual use, the value of K can be set between 10 and 100 as required.
Step II-2, fitness function determination: the two-dimensional fuzzy entropy H(P) given by formula (3) is taken as the fitness function.
Step II-3, particle fitness evaluation: the fitness of every particle at the current time is evaluated, the evaluation method being identical for all particles. When the fitness of the k-th particle at the current time is evaluated, the fitness value of the k-th particle, denoted fitness_k, is first calculated according to the fitness function determined in step II-2, and the calculated fitness_k is compared with Pbest_k: when the comparison gives fitness_k > Pbest_k, then Pbest_k = fitness_k and the personal best position $g_{kbest}^t$ is updated to the position of the k-th particle at the current time; here Pbest_k is the maximum fitness value reached by the k-th particle up to the current time, i.e. its individual extremum, $g_{kbest}^t$ is the personal best position of the k-th particle at the current time, and t is the current iteration number, a positive integer.
After the fitness values of all the particles at the current time have been calculated according to the fitness function determined in step II-2, the fitness value of the particle with the maximum fitness value at the current time is denoted fitness_kbest, and fitness_kbest is compared with gbest: when the comparison gives fitness_kbest > gbest, then gbest = fitness_kbest and the swarm best position $g_{gbest}^t$ is updated to the position of the particle with the maximum fitness value at the current time; here gbest is the global extremum at the current time and $g_{gbest}^t$ is the swarm best position at the current time.
Step II-4, judging whether the iteration termination criterion is met: when the iteration termination criterion is met, the parameter combination optimization process is completed; otherwise, the position and velocity of each particle at the next time are updated according to the particle swarm optimization algorithm, and the process returns to step II-3.
In step II-4, the iteration termination criterion is that the current iteration number t reaches the preset maximum iteration number I_max, or that Δg ≤ e, where Δg = |gbest - gmax|, gbest being the global extremum at the current time, gmax the originally set target fitness value, and e a positive number serving as the preset deviation value.
In the present embodiment, the maximum iteration number I_max = 30. In actual use, the maximum iteration number I_max can be adjusted between 20 and 200 as required.
In the present embodiment, when the population is initialized in step II-1, (a_k, c_k) in the particle (a_k, b_k, c_k, d_k) is taken as the initial velocity vector of the k-th particle, and (b_k, d_k) as the initial position of the k-th particle.
When the position and velocity of each particle at the next time are updated according to the particle swarm optimization algorithm in step II-4, the updating method is identical for all particles. When the velocity and position of the k-th particle at the next time are updated, the velocity of the k-th particle at the next time is first calculated from the velocity and position of the k-th particle at the current time, its individual extremum Pbest_k and the global extremum; then the position of the k-th particle at the next time is calculated from its position at the current time and its newly calculated velocity.
Specifically, when the velocity and position of the k-th particle at the next time are updated in step II-4, the velocity $v_k^{t+1}$ and position $x_k^{t+1}$ of the k-th particle at the next time are calculated according to the formulas
$$v_k^{t+1}=\omega v_k^t+c_1r_1(g_{kbest}^t-x_k^t)+c_2r_2(g_{gbest}^t-x_k^t)\qquad(4)$$
$$x_k^{t+1}=x_k^t+v_k^{t+1}\qquad(5)$$
In formulas (4) and (5), $x_k^t$ is the position of the k-th particle at the current time and $v_k^t$ its velocity at the current time; c_1 and c_2 are acceleration factors with c_1 + c_2 = 4; r_1 and r_2 are random numbers uniformly distributed on [0, 1]; ω is the inertia weight, which decreases linearly as the iteration number increases:
$$\omega=\omega_{max}-\frac{(\omega_{max}-\omega_{min})\times t}{I_{max}}$$
where ω_max and ω_min are the preset maximum and minimum inertia weights, t is the current iteration number, and I_max is the preset maximum iteration number.
In the present embodiment, ω_max = 0.9, ω_min = 0.4 and c_1 = c_2 = 2.
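The update rules of formulas (4) and (5), with the linearly decreasing inertia weight, can be sketched as a generic maximizer (an illustrative sketch; the function name, bound clipping and fixed random seed are assumptions, and any fitness function such as the fuzzy entropy H(P) can be passed in):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_maximize(fitness, lo, hi, n_particles=15, iters=80,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """Particle-swarm maximization with linearly decreasing
    inertia weight, per formulas (4) and (5)."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest_x = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    g = int(np.argmax(pbest_f))
    gbest_x, gbest_f = pbest_x[g].copy(), pbest_f[g]
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters      # inertia schedule
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest_x - x) + c2 * r2 * (gbest_x - x)
        x = np.clip(x + v, lo, hi)                   # keep in search range
        f = np.array([fitness(p) for p in x])
        better = f > pbest_f
        pbest_x[better], pbest_f[better] = x[better], f[better]
        g = int(np.argmax(pbest_f))
        if pbest_f[g] > gbest_f:
            gbest_x, gbest_f = pbest_x[g].copy(), pbest_f[g]
    return gbest_x, gbest_f
```

For the segmentation task the search vector would be (a, b, c, d) with the ranges given below and the fitness being the two-dimensional fuzzy entropy of formula (3).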
In the present embodiment, before the population initialization in step II-1, the search ranges of a_k, b_k, c_k and d_k are first determined, where the minimum and maximum pixel gray values of the image to be segmented in step I are g_min and g_max; the neighborhood of pixel (m, n) is of size d × d pixels, and the minimum and maximum neighborhood-average gray values are s_min and s_max; thus a_k = g_min, ..., g_max-1; b_k = g_min+1, ..., g_max; c_k = s_min, ..., s_max-1; d_k = s_min+1, ..., s_max.
In the present embodiment, d=5.
In actual use, the value of d can be adjusted accordingly as required.
Step 20133, image segmentation: the processor 3 uses the fuzzy parameter combination optimized in step 20132 and classifies each pixel in the image to be segmented according to the image segmentation method based on two-dimensional fuzzy-partition maximum entropy, correspondingly completing the image segmentation process and obtaining the segmented target image.
In the present embodiment, after the optimized fuzzy parameter combination (a, b, c, d) is obtained, the pixels are classified according to the maximum membership principle: when μ_o(i, j) ≥ 0.5, the pixel is assigned to the target area; otherwise it is assigned to the background area; refer to Fig. 4. In Fig. 4, the grid where μ_o(i, j) ≥ 0.5 represents the target area after image segmentation.
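The maximum-membership classification of step 20133 can be sketched as follows (an illustrative sketch; the function name is an assumption, and the inputs are the gray-value image and its precomputed neighborhood-mean image):

```python
import numpy as np

def segment(img, nbr, a, b, c, d):
    """Maximum-membership rule: a pixel is target where
    mu_o(i, j) = S(i; a, b) * S(j; c, d) >= 0.5, background
    otherwise (since mu_b = 1 - mu_o for this fuzzy partition)."""
    def S(x, lo, hi):                        # S-shaped membership
        x = np.asarray(x, float)
        mid = (lo + hi) / 2.0
        return np.where(x <= lo, 0.0,
               np.where(x <= mid, 2 * ((x - lo) / (hi - lo)) ** 2,
               np.where(x <= hi, 1 - 2 * ((x - hi) / (hi - lo)) ** 2, 1.0)))
    return S(img, a, b) * S(nbr, c, d) >= 0.5
```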
The above is only a preferred embodiment of the present invention and does not impose any restriction on the present invention; any simple modification, change or equivalent structural variation made to the above embodiment according to the technical essence of the present invention still falls within the protection scope of the technical solution of the present invention.

Claims (10)

1. An image-type fire flame recognition method, characterized in that the method comprises the following steps:
Step 1, image acquisition: an image acquisition unit is adopted to acquire digital images of the area to be detected at a preset sampling frequency f_s, and the digital image acquired at each sampling time is synchronously transmitted to a processor (3); the image acquisition unit is connected with the processor (3);
Step 2, image processing: the processor (3) carries out image processing on the digital images acquired at each sampling time in step 1 in chronological order, the processing method being identical for the digital image of each sampling time; the processing of the digital image acquired at any sampling time in step 1 comprises the following steps:
Step 201, image pre-processing, the process being as follows:
Step 2011, image reception and synchronous storage: the processor (3) synchronously stores the digital image acquired at the current sampling time, which it has now received, in a data memory (4), the data memory (4) being connected with the processor (3);
Step 2012, image enhancement: the digital image acquired at the current sampling time is enhanced by the processor (3) to obtain the enhanced digital image;
Step 2013, image segmentation: the enhanced digital image of step 2012 is segmented by the processor (3) to obtain the target image;
Step 202, fire recognition: a pre-established two-class model is adopted to process the target image of step 2013 and determine the fire condition category of the area to be detected at the current sampling time; the fire condition categories comprise two categories, flame present and no flame, and the two-class model is a support vector machine model that classifies the two categories of flame present and no flame;
The establishing process of the two-class model is as follows:
Step I, image information collection: the image acquisition unit is adopted to collect multiple frames of digital image one of the area to be detected when a fire breaks out, and multiple frames of digital image two of the area to be detected when no fire breaks out, respectively;
Step II, feature extraction: feature extraction is carried out on the multiple frames of digital image one and the multiple frames of digital image two respectively; from each digital image, one group of characteristic parameters capable of representing and distinguishing that digital image is extracted, this group comprising M characteristic quantities, the M characteristic quantities being numbered and forming a feature vector, where M ≥ 2;
Step III, training sample acquisition: from the feature vectors of the multiple frames of digital image one and the multiple frames of digital image two obtained after the feature extraction of step II, the feature vectors of m1 frames of digital image one and m2 frames of digital image two are chosen respectively to form the training sample set, where m1 and m2 are positive integers, m1 = 40~100 and m2 = 40~100; the number of training samples in the training sample set is m1+m2;
Step IV, establishing the two-class model, the process being as follows:
Step IV-1, kernel function selection: the radial basis function is selected as the kernel function of the two-class model;
Step IV-2, classification function determination: after the penalty factor γ and the kernel parameter σ² of the radial basis function selected in step IV-1 are determined, the classification function of the two-class model is obtained and the establishing process of the two-class model is completed, where γ = C⁻², σ = D⁻¹, 0.01 < C ≤ 10, 0.01 < D ≤ 50;
When the penalty factor γ and kernel parameter σ² are determined, the conjugate gradient method is first adopted to optimize the parameters C and D to obtain the optimized parameters C and D, which are then converted into the penalty factor γ and kernel parameter σ² according to γ = C⁻² and σ = D⁻¹;
Step V, training the two-class model: the m1+m2 training samples in the training sample set of step III are input into the two-class model established in step IV for training.
2. according to a kind of Image Fire Flame recognition methods claimed in claim 1, it is characterized in that: it is N and N=m1+m2 that training sample described in step III is concentrated training sample total quantity; Before carrying out two disaggregated model foundation in step IV, first N concentrated training sample of described training sample is numbered, what described training sample was concentrated p training sample is numbered p, p be positive integer and p=1,2 ..., N; P training sample is denoted as (x p, y p), wherein x pbe the characteristic parameter of p training sample, y pbe classification number and the y of p training sample p=1 or-1, wherein classification number is 1 to indicate flame, and classification number is-1 to indicate without flame;
When the conjugate gradient method is adopted in Step IV-2 to optimize the parameters C and D, the m1 + m2 training samples in the training sample set described in Step III are used, and the optimization process is as follows:
Step i, objective function determination:
$$\mathrm{sse}(C,D) = \frac{1}{2}\sum_{p=1}^{N} e_p^2 = \frac{1}{2}\sum_{p=1}^{N}\left[\tilde{K}(p,p^-)\cdot s_p - y(p)\right]^2 \qquad (1)$$
where $\mathrm{sse}(C,D)$ is the leave-one-out prediction sum of squared errors, p is the number of each training sample in the training sample set, and $e_p$ is the prediction error of the two-class classification model established in Step IV on the p-th training sample, $e_p = \tilde{K}(p,p^-)\cdot s_p - y(p)$; wherein
$$s_p = s(p^-) - \frac{s(p)}{(A^{-1})(p,p)}\,(A^{-1})(p^-,p),$$
in which $s(p^-)$ is the vector formed by the remaining elements of the vector s after removing its p-th element; $s(p)$ is the p-th element of s; $(A^{-1})(p^-,p)$ is the column vector formed by the p-th column of $A^{-1}$ after removing its p-th element; $(A^{-1})(p,p)$ is the p-th element of the p-th column of $A^{-1}$; $\tilde{K}(p,p^-)$ is the row vector formed by the p-th row of the matrix $\tilde{K}$ after removing its p-th element, $\tilde{K}$ being the augmented matrix of the kernel matrix K, where
$$K = \begin{bmatrix} K(x_1,x_1) & K(x_1,x_2) & \cdots & K(x_1,x_N)\\ K(x_2,x_1) & K(x_2,x_2) & \cdots & K(x_2,x_N)\\ \vdots & \vdots & & \vdots\\ K(x_N,x_1) & K(x_N,x_2) & \cdots & K(x_N,x_N)\end{bmatrix};$$
$A^{-1}$ is the inverse of the matrix
$$A = \begin{bmatrix} K + C^2\cdot I & I_N\\ I_N^T & 0\end{bmatrix},$$
where I is the identity matrix, $I_N = [1, 1, \ldots, 1]^T$ contains N elements all equal to 1, and T denotes the matrix transpose; $s = A^{-1}Y$, with
$$Y = [y_1, y_2, \ldots, y_N, 0]^T,$$
where $y_1, y_2, \ldots, y_N$ are respectively the class labels of the N training samples in the training sample set;
Step ii, initial parameter setting: the initial values $C_1$ and $D_1$ of the parameters C and D are determined respectively, and an identification error threshold $\varepsilon$ is set, with $\varepsilon > 0$;
Step iii, calculation of the gradient $g_k$ of the current iteration: the gradient of the objective function of Step i with respect to $C_k$ and $D_k$ is calculated as
$$g_k = \left[\frac{\partial\,\mathrm{sse}}{\partial C_k},\ \frac{\partial\,\mathrm{sse}}{\partial D_k}\right]^T,$$
where k is the iteration number, k = 1, 2, …. If $\|g_k\| \le \varepsilon$, the calculation stops, and the current $C_k$ and $D_k$ are respectively the optimized parameters C and D; otherwise, proceed to Step iv;
wherein
$$\frac{\partial\,\mathrm{sse}}{\partial C_k} = \sum_{p=1}^{N} e_p\cdot \tilde{K}(p,p^-)\cdot \frac{\partial s_p}{\partial C_k};$$
$$\frac{\partial\,\mathrm{sse}}{\partial D_k} = \sum_{p=1}^{N} e_p\cdot\left[\frac{\partial \tilde{K}(p,p^-)}{\partial D_k}\cdot s(p^-) + \tilde{K}(p,p^-)\cdot\frac{\partial s_p}{\partial D_k}\right],$$
where $\partial \tilde{K}(p,p^-)/\partial D_k$ is the row vector formed by the p-th row of $\partial \tilde{K}/\partial D_k$ after removing its p-th element; $s(p^-)$ is the vector formed by the remaining elements of s after removing its p-th element, and $e_p$ is the prediction error of the two-class classification model established in Step IV on the p-th training sample;
Step iv, calculation of the search direction $d_k$ of the current iteration: according to the formula
$$d_k = \begin{cases} -g_k & k = 1\\ -g_k + \beta_k d_{k-1} & k \ge 2\end{cases}$$
the search direction $d_k$ of the current iteration is calculated, where $d_{k-1}$ is the search direction of the (k−1)-th iteration, $\beta_k = \|g_k\|^2 / \|g_{k-1}\|^2$, and $g_{k-1}$ is the gradient of the (k−1)-th iteration;
Step v, determination of the search step size $\lambda_k$ of the current iteration: a search is performed along the search direction $d_k$ determined in Step iv, finding the step size $\lambda_k$ that satisfies
$$\mathrm{sse}\!\left(C_k - \lambda_k \frac{\partial\,\mathrm{sse}}{\partial C_k},\ D_k - \lambda_k \frac{\partial\,\mathrm{sse}}{\partial D_k}\right) = \min_{\lambda_k > 0}\ \mathrm{sse}\!\left(C_k - \lambda_k \frac{\partial\,\mathrm{sse}}{\partial C_k},\ D_k - \lambda_k \frac{\partial\,\mathrm{sse}}{\partial D_k}\right),$$
i.e. the step size $\lambda_k$ is the value in $(0, +\infty)$ for which $\mathrm{sse}\!\left(C_k - \lambda_k \frac{\partial\,\mathrm{sse}}{\partial C_k},\ D_k - \lambda_k \frac{\partial\,\mathrm{sse}}{\partial D_k}\right)$ reaches its minimum;
Step vi, calculation of $C_{k+1}$ and $D_{k+1}$ according to the formulas $C_{k+1} = C_k - \lambda_k \frac{\partial\,\mathrm{sse}}{\partial C_k}$ and $D_{k+1} = D_k - \lambda_k \frac{\partial\,\mathrm{sse}}{\partial D_k}$;
Step vii, set k = k + 1, then return to Step iii for the next iteration;
The radial basis function selected in Step IV-1 is
$$K(x_s, x_t) = \exp\!\left(-\frac{\|x_s - x_t\|^2}{\sigma^2}\right),$$
and the regression function of this radial basis function is
$$y(x_s) = \sum_{t=1}^{N} \alpha_t K(x_s, x_t) + b,$$
where $\alpha_t$ and b are regression parameters, s is a positive integer with s = 1, 2, …, N, and t is a positive integer with t = 1, 2, …, N.
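The gradient, search-direction, line-search and update steps above describe a Fletcher–Reeves nonlinear conjugate gradient loop. A minimal sketch, with a simple grid line search standing in for the one-dimensional minimization and a generic two-parameter objective in place of the leave-one-out $\mathrm{sse}(C,D)$; the update here steps along $d_k$, the usual CG form:

```python
import numpy as np

def fletcher_reeves(f, grad, x0, eps=1e-6, max_iter=200):
    # Nonlinear conjugate gradient with the Fletcher-Reeves beta
    x = np.asarray(x0, dtype=float)
    g = grad(x)                        # gradient g_k of the current iterate
    d = -g                             # d_1 = -g_1
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:   # stopping test ||g_k|| <= eps
            break
        # crude line search: pick the lambda on a grid that minimizes f along d
        lams = np.linspace(1e-4, 1.0, 200)
        lam = lams[np.argmin([f(x + l * d) for l in lams])]
        x = x + lam * d                # parameter update
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)   # beta_k = ||g_k||^2 / ||g_{k-1}||^2
        d = -g_new + beta * d          # new search direction for k >= 2
        g = g_new                      # next iteration
    return x
```

Run on a test objective such as $f(C,D) = (C-2)^2 + 5(D-1)^2$, the loop converges to the minimizer (2, 1); in the patent's setting, `f` would be $\mathrm{sse}(C,D)$ and `grad` its analytic gradient from Step iii.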
3. The image type fire flame identification method according to claim 1 or 2, characterized in that: in Step II, M = 6, and the 6 feature quantities are respectively area, similarity, moment features, density, texture features and flicker features.
4. The image type fire flame identification method according to claim 2, characterized in that: when determining $C_1$ and $D_1$ in Step ii, a grid search method or a method of randomly drawing values is adopted; when the method of randomly drawing values is adopted, $C_1$ is a value randomly drawn from (0.01, 1] and $D_1$ is a value randomly drawn from (0.01, 50]; when the grid search method is adopted, a grid is first divided with a step size of $10^{-3}$, a three-dimensional grid plot is then made with C and D as the independent variables and the objective function described in Step i as the dependent variable, multiple groups of parameters C and D are then found by grid search, and finally the multiple groups of parameters are averaged to obtain $C_1$ and $D_1$.
5. The image type fire flame identification method according to claim 1 or 2, characterized in that: when image enhancement is performed in step 2012, an image enhancement method based on fuzzy logic is adopted for the enhancement processing.
6. The image type fire flame identification method according to claim 5, characterized in that: when the image enhancement method based on fuzzy logic is adopted for the enhancement processing, the process is as follows:
Step 20121, transformation from the image domain to the fuzzy domain: according to the membership function
$$\mu_{gh} = T(x_{gh}) = \begin{cases} x_{gh}/X_T & x_{gh} \le X_T\\ x_{gh}/X_{\max} & x_{gh} > X_T\end{cases} \qquad (7)$$
the gray value of each pixel of the image to be enhanced is mapped to a fuzzy membership degree of a fuzzy set, and the fuzzy set of the image to be enhanced is correspondingly obtained; where $x_{gh}$ is the gray value of any pixel (g, h) of the image to be enhanced, $X_T$ is the gray threshold selected when the image enhancement method based on fuzzy logic is adopted to enhance the image, and $X_{\max}$ is the maximum gray value of the image to be enhanced;
Step 20122, fuzzy enhancement processing in the fuzzy domain using a fuzzy enhancement operator: the fuzzy enhancement operator adopted is $\mu'_{gh} = I_r(\mu_{gh}) = I_1(I_{r-1}(\mu_{gh}))$, where r is the iteration number and is a positive integer, r = 1, 2, …; wherein
$$I_1(\mu_{gh}) = \begin{cases} \mu_{gh}^2/\mu_c & 0 \le \mu_{gh} \le \mu_c\\ 1 - (1-\mu_{gh})^2/(1-\mu_c) & \mu_c < \mu_{gh} \le 1\end{cases}$$
where $\mu_c = T(X_c)$, $X_c$ being the crossover point, with $X_c = X_T$;
Step 20123, inverse transformation from the fuzzy domain back to the image domain: according to the inverse transformation formula $x'_{gh} = T^{-1}(\mu'_{gh})$ (6), the $\mu'_{gh}$ obtained after the fuzzy enhancement processing is inverse-transformed to obtain the gray value of each pixel of the enhanced digital image, and the enhanced digital image is obtained.
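Steps 20121–20123 can be sketched compactly as follows. Two illustrative assumptions: the crossover membership is kept as a generic parameter `mu_c` (under formula (7) the choice $X_c = X_T$ gives $\mu_c = T(X_T) = 1$, which would zero out the second branch of $I_1$, so a mid-range value is used here), and the inverse transform of Step 20123 is taken as the simple mapping $x'_{gh} = \mu'_{gh}\,X_{\max}$:

```python
import numpy as np

def fuzzy_enhance(img, X_T, mu_c=0.5, r=2):
    """Fuzzy-logic image enhancement sketch.

    img:  2-D array of gray values;  X_T: gray threshold;
    mu_c: crossover membership (assumed parameter);  r: operator iterations.
    """
    img = img.astype(float)
    X_max = img.max()
    # Step 20121: image domain -> fuzzy domain, formula (7)
    mu = np.where(img <= X_T, img / X_T, img / X_max)
    mu = np.clip(mu, 0.0, 1.0)
    # Step 20122: apply the fuzzy enhancement operator I_1 r times
    for _ in range(r):
        mu = np.where(mu <= mu_c,
                      mu ** 2 / mu_c,
                      1.0 - (1.0 - mu) ** 2 / (1.0 - mu_c))
    # Step 20123: fuzzy domain -> image domain (assumed simple inverse mapping)
    return mu * X_max
```

The operator pushes memberships below `mu_c` toward 0 and those above it toward 1, which stretches the contrast between dim background and bright flame regions.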
7. The image type fire flame identification method according to claim 6, characterized in that: before the transformation from the image domain to the fuzzy domain in step 20121, the maximum between-class variance method is first adopted to select the gray threshold $X_T$; before the gray threshold $X_T$ is selected by the maximum between-class variance method, all gray values whose pixel count is 0 are first found in the gray range of the image to be enhanced, and the processor (3) marks all such gray values as exempt from calculation; when the gray threshold $X_T$ is selected by the maximum between-class variance method, the between-class variance is calculated with each gray value in the gray range of the image to be enhanced, other than the gray values exempt from calculation, taken as the threshold; the maximum between-class variance is then found among the calculated between-class variances, and the gray value corresponding to this maximum between-class variance is the gray threshold $X_T$.
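The threshold selection of claim 7 amounts to a plain Otsu (maximum between-class variance) scan that simply skips the empty gray levels; a sketch:

```python
import numpy as np

def otsu_threshold(img):
    # Maximum between-class variance threshold over populated gray levels only
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256, dtype=float)
    best_t, best_var = 0, -1.0
    for t in range(256):
        if hist[t] == 0:          # gray levels with zero pixel count are
            continue              # marked "exempt from calculation"
        w0 = prob[:t + 1].sum()   # class probabilities at threshold t
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (levels[:t + 1] * prob[:t + 1]).sum() / w0   # class means
        mu1 = (levels[t + 1:] * prob[t + 1:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2                   # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

Skipping empty levels changes nothing about which variance is maximal among the candidates tried; it only avoids evaluating thresholds no pixel actually takes.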
8. The image type fire flame identification method according to claim 1 or 2, characterized in that: in Step 1, the size of the digital image captured at each sampling time is M1 × N1 pixels;
When image segmentation is performed in step 2013, the process is as follows:
Step 20131, two-dimensional histogram establishment: the processor (3) is adopted to establish a two-dimensional histogram of the pixel gray values and the neighborhood average gray values of the image to be segmented; any point in this two-dimensional histogram is denoted (i, j), where i, the abscissa of the two-dimensional histogram, is the gray value of any pixel (m, n) of the image to be segmented, and j, the ordinate, is the neighborhood average gray value of that pixel (m, n); the number of occurrences of any point (i, j) in the established two-dimensional histogram is denoted C(i, j), and the frequency of occurrence of the point (i, j) is denoted h(i, j), where
$$h(i,j) = \frac{C(i,j)}{M1 \times N1};$$
Step 20132, fuzzy parameter combination optimization: the processor (3) calls a fuzzy parameter combination optimization module to optimize, using a particle swarm optimization algorithm, the fuzzy parameter combination used by the image segmentation method based on two-dimensional fuzzy partition maximum entropy, and obtains the optimized fuzzy parameter combination;
In this step, before the fuzzy parameter combination is optimized, the functional relation of the two-dimensional fuzzy entropy used when segmenting the image to be segmented is first calculated according to the two-dimensional histogram established in step 20131, and the calculated functional relation of the two-dimensional fuzzy entropy is used as the fitness function when the fuzzy parameter combination is optimized with the particle swarm optimization algorithm;
Step 20133, image segmentation: using the fuzzy parameter combination optimized in step 20132, the processor (3) classifies each pixel of the image to be segmented according to the image segmentation method based on two-dimensional fuzzy partition maximum entropy, correspondingly completing the image segmentation process and obtaining the segmented target image.
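Step 20131 can be sketched as follows. The neighborhood size (3×3) and edge replication at the image border are illustrative assumptions; h(i, j) is normalized by M1 × N1 as in the claim:

```python
import numpy as np

def two_d_histogram(img):
    """2-D histogram of (gray value i, neighborhood mean j) for an M1 x N1 image.

    Returns h with h[i, j] = C(i, j) / (M1 * N1), the occurrence frequency.
    """
    img = img.astype(int)
    M1, N1 = img.shape
    # 3x3 neighborhood average gray value, border handled by edge replication
    p = np.pad(img, 1, mode='edge').astype(float)
    nb = sum(p[a:a + M1, b:b + N1] for a in range(3) for b in range(3)) / 9.0
    j = np.rint(nb).astype(int)
    # accumulate occurrence counts C(i, j), then normalize to frequencies
    h = np.zeros((256, 256))
    np.add.at(h, (img.ravel(), j.ravel()), 1.0)
    return h / (M1 * N1)
```

The resulting `h` is exactly the object the fuzzy-entropy fitness of claims 9 and 10 sums over.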
9. The image type fire flame identification method according to claim 8, characterized in that: the image to be segmented described in step 20131 consists of a target image O and a background image B; wherein the membership function of the target image O is $\mu_o(i,j) = \mu_{ox}(i; a, b)\,\mu_{oy}(j; c, d)$ (1), and the membership function of the background image B is $\mu_b(i,j) = \mu_{bx}(i; a, b)\,\mu_{oy}(j; c, d) + \mu_{ox}(i; a, b)\,\mu_{by}(j; c, d) + \mu_{bx}(i; a, b)\,\mu_{by}(j; c, d)$ (2);
In formulas (1) and (2), $\mu_{ox}(i; a, b)$ and $\mu_{oy}(j; c, d)$ are the one-dimensional membership functions of the target image O, both being S-functions; $\mu_{bx}(i; a, b)$ and $\mu_{by}(j; c, d)$ are the one-dimensional membership functions of the background image B, both being S-functions, with $\mu_{bx}(i; a, b) = 1 - \mu_{ox}(i; a, b)$ and $\mu_{by}(j; c, d) = 1 - \mu_{oy}(j; c, d)$; a, b, c and d are the parameters controlling the shapes of the one-dimensional membership functions of the target image O and the background image B;
When the functional relation of the two-dimensional fuzzy entropy is calculated in step 20132, the minimum value $g_{\min}$ and maximum value $g_{\max}$ of the pixel gray values of the image to be segmented and the minimum value $s_{\min}$ and maximum value $s_{\max}$ of the neighborhood average gray values are first determined respectively according to the two-dimensional histogram established in step 20131;
The functional relation of the two-dimensional fuzzy entropy calculated in step 20132 is:
$$H(P) = -\sum_{i=g_{\min}}^{g_{\max}}\sum_{j=s_{\min}}^{s_{\max}} \frac{\mu_o(i,j)\,h_{ij}}{p(O)}\exp\!\left(1 - \log\frac{\mu_o(i,j)\,h_{ij}}{p(O)}\right) - \sum_{i=g_{\min}}^{g_{\max}}\sum_{j=s_{\min}}^{s_{\max}} \frac{\mu_b(i,j)\,h_{ij}}{p(B)}\exp\!\left(1 - \log\frac{\mu_b(i,j)\,h_{ij}}{p(B)}\right) \qquad (3)$$
where in formula (3)
$$p(O) = \sum_{i=g_{\min}}^{g_{\max}}\sum_{j=s_{\min}}^{s_{\max}} \mu_o(i,j)\,h_{ij}, \qquad p(B) = \sum_{i=g_{\min}}^{g_{\max}}\sum_{j=s_{\min}}^{s_{\max}} \mu_b(i,j)\,h_{ij},$$
and $h_{ij}$ is the frequency of occurrence h(i, j) of the point (i, j) described in step 20131;
When the fuzzy parameter combination is optimized with the particle swarm optimization algorithm in step 20132, the fuzzy parameter combination optimized is (a, b, c, d).
10. The image type fire flame identification method according to claim 9, characterized in that: when the parameter combination optimization of the two-dimensional fuzzy partition maximum entropy is performed in step 20132, the following steps are included:
Step II-1, swarm initialization: one value of the parameter combination is taken as one particle, and multiple particles form an initialized swarm; the k-th particle is denoted $(a_k, b_k, c_k, d_k)$, where k is a positive integer with k = 1, 2, 3, …, K, K being a positive integer equal to the number of particles in the swarm; $a_k$ is a random value of parameter a, $b_k$ is a random value of parameter b, $c_k$ is a random value of parameter c, and $d_k$ is a random value of parameter d, with $a_k < b_k$ and $c_k < d_k$;
Step II-2, fitness function determination: formula (3),
$$H(P) = -\sum_{i=g_{\min}}^{g_{\max}}\sum_{j=s_{\min}}^{s_{\max}} \frac{\mu_o(i,j)\,h_{ij}}{p(O)}\exp\!\left(1 - \log\frac{\mu_o(i,j)\,h_{ij}}{p(O)}\right) - \sum_{i=g_{\min}}^{g_{\max}}\sum_{j=s_{\min}}^{s_{\max}} \frac{\mu_b(i,j)\,h_{ij}}{p(B)}\exp\!\left(1 - \log\frac{\mu_b(i,j)\,h_{ij}}{p(B)}\right),$$
is taken as the fitness function;
Step II-3, particle fitness evaluation: the fitness of every particle at the current time is evaluated, the evaluation method being identical for all particles; wherein, when the fitness of the k-th particle at the current time is evaluated, the fitness value of the k-th particle at the current time, denoted $\mathrm{fitness}_k$, is first calculated according to the fitness function determined in Step II-2, and $\mathrm{fitness}_k$ is compared with $\mathrm{Pbest}_k$: when the comparison yields $\mathrm{fitness}_k > \mathrm{Pbest}_k$, then $\mathrm{Pbest}_k = \mathrm{fitness}_k$ and the personal best position $p_k^t$ of the k-th particle is updated to the position of the k-th particle at the current time, where $\mathrm{Pbest}_k$ is the maximum fitness value reached by the k-th particle up to the current time, i.e. its individual extremum, and $p_k^t$ is the personal best position of the k-th particle at the current time; t is the current iteration number and is a positive integer;
After the fitness values of all particles at the current time have been calculated according to the fitness function determined in Step II-2, the fitness value of the particle with the maximum fitness value at the current time is denoted $\mathrm{fitness}_{kbest}$, and $\mathrm{fitness}_{kbest}$ is compared with gbest: when the comparison yields $\mathrm{fitness}_{kbest} > \mathrm{gbest}$, then $\mathrm{gbest} = \mathrm{fitness}_{kbest}$ and the swarm best position $g^t$ is updated to the position of the particle with the maximum fitness value at the current time, where gbest is the global extremum at the current time and $g^t$ is the swarm best position at the current time;
Step II-4, judging whether the iteration stopping criterion is met: when the iteration stopping criterion is met, the parameter combination optimization process is complete; otherwise, the position and velocity of each particle at the next time are updated according to the particle swarm optimization algorithm, and the process returns to Step II-3; in Step II-4, the iteration stopping criterion is that the current iteration number t reaches a preset maximum iteration number $I_{\max}$, or that $\Delta g \le e$, where $\Delta g = |\mathrm{gbest} - \mathrm{gmax}|$, gbest being the global extremum at the current time, gmax the originally set target fitness value, and e a preset positive deviation value.
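Steps II-1 through II-4 can be sketched as a standard particle swarm loop over four-dimensional particles (a, b, c, d). The inertia and acceleration constants, the [0, 255] parameter range, and the velocity-update form are conventional PSO assumptions not stated in the claims, and the ordering constraints a < b, c < d are enforced here only at initialization:

```python
import numpy as np

def pso(fitness, K=20, I_max=100, gmax=None, e=1e-3, seed=0):
    """PSO sketch over particles (a, b, c, d); maximizes `fitness`."""
    rng = np.random.default_rng(seed)
    # Step II-1: initialize so that a < b and c < d (a, c low; b, d high)
    lo = rng.uniform(0, 128, (K, 4))
    hi = rng.uniform(128, 255, (K, 4))
    pos = np.stack([lo[:, 0], hi[:, 1], lo[:, 2], hi[:, 3]], axis=1)
    vel = np.zeros((K, 4))
    pbest_pos = pos.copy()
    pbest = np.array([fitness(p) for p in pos])   # individual extrema Pbest_k
    gi = pbest.argmax()
    gbest, gbest_pos = pbest[gi], pbest_pos[gi].copy()   # global extremum
    w, c1, c2 = 0.7, 1.5, 1.5                     # assumed PSO constants
    for t in range(I_max):                        # Step II-4: stop at I_max ...
        if gmax is not None and abs(gbest - gmax) <= e:
            break                                 # ... or when |gbest - gmax| <= e
        r1, r2 = rng.random((K, 4)), rng.random((K, 4))
        vel = w * vel + c1 * r1 * (pbest_pos - pos) + c2 * r2 * (gbest_pos - pos)
        pos = np.clip(pos + vel, 0, 255)
        for k in range(K):                        # Step II-3: fitness evaluation
            f = fitness(pos[k])
            if f > pbest[k]:
                pbest[k], pbest_pos[k] = f, pos[k].copy()
                if f > gbest:
                    gbest, gbest_pos = f, pos[k].copy()
    return gbest_pos, gbest
```

In the patent's setting, `fitness` would be the two-dimensional fuzzy entropy H(P) of formula (3) evaluated on the histogram of step 20131.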
CN201410148888.3A 2014-04-14 2014-04-14 A kind of Image Fire Flame recognition methods Active CN103886344B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410148888.3A CN103886344B (en) 2014-04-14 2014-04-14 A kind of Image Fire Flame recognition methods

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410148888.3A CN103886344B (en) 2014-04-14 2014-04-14 A kind of Image Fire Flame recognition methods

Publications (2)

Publication Number Publication Date
CN103886344A true CN103886344A (en) 2014-06-25
CN103886344B CN103886344B (en) 2017-07-07

Family

ID=50955227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410148888.3A Active CN103886344B (en) 2014-04-14 2014-04-14 A kind of Image Fire Flame recognition methods

Country Status (1)

Country Link
CN (1) CN103886344B (en)

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105976365A (en) * 2016-04-28 2016-09-28 天津大学 Nocturnal fire disaster video detection method
CN106204553A (en) * 2016-06-30 2016-12-07 江苏理工学院 Image fast segmentation method based on least square method curve fitting
CN106355812A (en) * 2016-08-10 2017-01-25 安徽理工大学 Fire hazard prediction method based on temperature fields
CN107015852A (en) * 2016-06-15 2017-08-04 珠江水利委员会珠江水利科学研究院 A kind of extensive Hydropower Stations multi-core parallel concurrent Optimization Scheduling
CN107209873A (en) * 2015-01-29 2017-09-26 高通股份有限公司 Hyper parameter for depth convolutional network is selected
CN107316012A (en) * 2017-06-14 2017-11-03 华南理工大学 The fire detection and tracking of small-sized depopulated helicopter
CN107704820A (en) * 2017-09-28 2018-02-16 深圳市鑫汇达机械设计有限公司 A kind of effective coal-mine fire detecting system
CN108038510A (en) * 2017-12-22 2018-05-15 湖南源信光电科技股份有限公司 A kind of detection method based on doubtful flame region feature
CN105809643B (en) * 2016-03-14 2018-07-06 浙江外国语学院 A kind of image enchancing method based on adaptive block channel extrusion
CN108280755A (en) * 2018-02-28 2018-07-13 阿里巴巴集团控股有限公司 The recognition methods of suspicious money laundering clique and identification device
CN108319964A (en) * 2018-02-07 2018-07-24 嘉兴学院 A kind of fire image recognition methods based on composite character and manifold learning
CN108416968A (en) * 2018-01-31 2018-08-17 国家能源投资集团有限责任公司 Fire alarm method and apparatus
CN108537150A (en) * 2018-03-27 2018-09-14 秦广民 Reflective processing system based on image recognition
CN108664980A (en) * 2018-05-14 2018-10-16 昆明理工大学 A kind of sun crown ring structure recognition methods based on guiding filtering and wavelet transformation
CN108765335A (en) * 2018-05-25 2018-11-06 电子科技大学 A kind of forest fire detection method based on remote sensing images
CN108876741A (en) * 2018-06-22 2018-11-23 中国矿业大学(北京) A kind of image enchancing method under the conditions of complex illumination
CN108875626A (en) * 2018-06-13 2018-11-23 江苏电力信息技术有限公司 A kind of static fire detection method of transmission line of electricity
CN109145796A (en) * 2018-08-13 2019-01-04 福建和盛高科技产业有限公司 A kind of identification of electric power piping lane fire source and fire point distance measuring method based on video image convergence analysis algorithm
CN109204106A (en) * 2018-08-27 2019-01-15 浙江大丰实业股份有限公司 Stage equipment mobile system
CN109272496A (en) * 2018-09-04 2019-01-25 西安科技大学 A kind of coal-mine fire video monitoring fire image recognition methods
CN109584423A (en) * 2018-12-13 2019-04-05 佛山单常科技有限公司 A kind of intelligent unlocking system
CN109685266A (en) * 2018-12-21 2019-04-26 长安大学 A kind of lithium battery bin fire prediction method and system based on SVM
CN109887220A (en) * 2019-01-23 2019-06-14 珠海格力电器股份有限公司 Air conditioner and control method thereof
CN109919071A (en) * 2019-02-28 2019-06-21 沈阳天眼智云信息科技有限公司 Flame identification method based on infrared multiple features combining technology
CN110033040A (en) * 2019-04-12 2019-07-19 华南师范大学 A kind of flame identification method, system, medium and equipment
CN110120142A (en) * 2018-02-07 2019-08-13 中国石油化工股份有限公司 A kind of fire hazard aerosol fog video brainpower watch and control early warning system and method for early warning
CN110163278A (en) * 2019-05-16 2019-08-23 东南大学 A kind of flame holding monitoring method based on image recognition
CN110334664A (en) * 2019-07-09 2019-10-15 中南大学 Statistical method, device, electronic equipment and the medium of phase fraction is precipitated in a kind of alloy
CN111105587A (en) * 2019-12-31 2020-05-05 广州思瑞智能科技有限公司 Intelligent flame detection method and device, detector and storage medium
CN111476965A (en) * 2020-03-13 2020-07-31 深圳信息职业技术学院 Method for constructing fire detection model, fire detection method and related equipment
CN112115766A (en) * 2020-07-28 2020-12-22 辽宁长江智能科技股份有限公司 Flame identification method, device, equipment and storage medium based on video picture
CN112149509A (en) * 2020-08-25 2020-12-29 浙江浙大中控信息技术有限公司 Traffic signal lamp fault detection method integrating deep learning and image processing
CN112215831A (en) * 2020-10-21 2021-01-12 厦门市美亚柏科信息股份有限公司 Method and system for evaluating quality of face image
CN112396026A (en) * 2020-11-30 2021-02-23 北京华正明天信息技术股份有限公司 Fire image feature extraction method based on feature aggregation and dense connection
CN113158719A (en) * 2020-11-30 2021-07-23 齐鲁工业大学 Image identification method for fire disaster of photovoltaic power station
CN114220046A (en) * 2021-11-25 2022-03-22 中国民用航空飞行学院 Fire image fuzzy membership recognition method based on gray comprehensive association degree
CN114530025A (en) * 2021-12-31 2022-05-24 武汉烽理光电技术有限公司 Tunnel fire alarm method and device based on array grating and electronic equipment
CN116701409A (en) * 2023-08-07 2023-09-05 湖南永蓝检测技术股份有限公司 Sensor data storage method for intelligent on-line detection of environment
CN117152474A (en) * 2023-07-25 2023-12-01 华能核能技术研究院有限公司 High-temperature gas cooled reactor flame identification method based on K-means clustering algorithm
CN117612319A (en) * 2024-01-24 2024-02-27 上海意静信息科技有限公司 Alarm information grading early warning method and system based on sensor and picture

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN101393603B (en) * 2008-10-09 2012-01-04 浙江大学 Method for recognizing and detecting tunnel fire disaster flame

Non-Patent Citations (3)

Title
VIKSHANT KHANNA等: "Fire Detection Mechanism using Fuzzy Logic", 《INTERNATIONAL JOURNAL OF COMPUTER APPLICATION》 *
孙福志等: "火灾识别中RS-SVM模型的应用", 《计算机工程与应用》 *
赵敏等: "模糊聚类遗传算法在遗煤自燃火灾识别中的应用", 《煤炭技术》 *

Cited By (54)

Publication number Priority date Publication date Assignee Title
CN107209873A (en) * 2015-01-29 2017-09-26 高通股份有限公司 Hyper parameter for depth convolutional network is selected
CN107209873B (en) * 2015-01-29 2021-06-25 高通股份有限公司 Hyper-parameter selection for deep convolutional networks
CN105809643B (en) * 2016-03-14 2018-07-06 浙江外国语学院 A kind of image enchancing method based on adaptive block channel extrusion
CN105976365A (en) * 2016-04-28 2016-09-28 天津大学 Nocturnal fire disaster video detection method
CN107015852A (en) * 2016-06-15 2017-08-04 珠江水利委员会珠江水利科学研究院 A kind of extensive Hydropower Stations multi-core parallel concurrent Optimization Scheduling
CN106204553A (en) * 2016-06-30 2016-12-07 江苏理工学院 Image fast segmentation method based on least square method curve fitting
CN106204553B (en) * 2016-06-30 2019-03-08 江苏理工学院 Image fast segmentation method based on least square method curve fitting
CN106355812A (en) * 2016-08-10 2017-01-25 安徽理工大学 Fire hazard prediction method based on temperature fields
CN107316012A (en) * 2017-06-14 2017-11-03 华南理工大学 The fire detection and tracking of small-sized depopulated helicopter
CN107316012B (en) * 2017-06-14 2020-12-22 华南理工大学 Fire detection and tracking method of small unmanned helicopter
CN107704820A (en) * 2017-09-28 2018-02-16 深圳市鑫汇达机械设计有限公司 A kind of effective coal-mine fire detecting system
CN108038510A (en) * 2017-12-22 2018-05-15 湖南源信光电科技股份有限公司 A kind of detection method based on doubtful flame region feature
CN108416968A (en) * 2018-01-31 2018-08-17 国家能源投资集团有限责任公司 Fire alarm method and apparatus
CN108319964A (en) * 2018-02-07 2018-07-24 嘉兴学院 A kind of fire image recognition methods based on composite character and manifold learning
CN108319964B (en) * 2018-02-07 2021-10-22 嘉兴学院 Fire image recognition method based on mixed features and manifold learning
CN110120142A (en) * 2018-02-07 2019-08-13 中国石油化工股份有限公司 A kind of fire hazard aerosol fog video brainpower watch and control early warning system and method for early warning
CN108280755A (en) * 2018-02-28 2018-07-13 阿里巴巴集团控股有限公司 The recognition methods of suspicious money laundering clique and identification device
WO2019165817A1 (en) * 2018-02-28 2019-09-06 阿里巴巴集团控股有限公司 Method and device for recognizing suspicious money laundering group
CN108537150A (en) * 2018-03-27 2018-09-14 秦广民 Reflective processing system based on image recognition
CN108664980A (en) * 2018-05-14 2018-10-16 昆明理工大学 A kind of sun crown ring structure recognition methods based on guiding filtering and wavelet transformation
CN108765335A (en) * 2018-05-25 2018-11-06 电子科技大学 A kind of forest fire detection method based on remote sensing images
CN108875626A (en) * 2018-06-13 2018-11-23 江苏电力信息技术有限公司 A kind of static fire detection method of transmission line of electricity
CN108876741B (en) * 2018-06-22 2021-08-24 中国矿业大学(北京) Image enhancement method under complex illumination condition
CN108876741A (en) * 2018-06-22 2018-11-23 中国矿业大学(北京) A kind of image enchancing method under the conditions of complex illumination
CN109145796A (en) * 2018-08-13 2019-01-04 福建和盛高科技产业有限公司 A kind of identification of electric power piping lane fire source and fire point distance measuring method based on video image convergence analysis algorithm
CN109204106A (en) * 2018-08-27 2019-01-15 浙江大丰实业股份有限公司 Stage equipment mobile system
CN109204106B (en) * 2018-08-27 2020-08-07 浙江大丰实业股份有限公司 Stage equipment moving system
CN109272496A (en) * 2018-09-04 2019-01-25 西安科技大学 A kind of coal-mine fire video monitoring fire image recognition methods
CN109272496B (en) * 2018-09-04 2022-05-03 西安科技大学 Fire image identification method for coal mine fire video monitoring
CN109584423A (en) * 2018-12-13 2019-04-05 佛山单常科技有限公司 A kind of intelligent unlocking system
CN109685266A (en) * 2018-12-21 2019-04-26 长安大学 A kind of lithium battery bin fire prediction method and system based on SVM
CN109887220A (en) * 2019-01-23 2019-06-14 珠海格力电器股份有限公司 Air conditioner and control method thereof
CN109919071A (en) * 2019-02-28 2019-06-21 沈阳天眼智云信息科技有限公司 Flame identification method based on infrared multiple features combining technology
CN110033040A (en) * 2019-04-12 2019-07-19 华南师范大学 A kind of flame identification method, system, medium and equipment
CN110163278A (en) * 2019-05-16 2019-08-23 东南大学 A kind of flame holding monitoring method based on image recognition
CN110334664A (en) * 2019-07-09 2019-10-15 中南大学 Statistical method, device, electronic equipment and the medium of phase fraction is precipitated in a kind of alloy
CN111105587A (en) * 2019-12-31 2020-05-05 广州思瑞智能科技有限公司 Intelligent flame detection method and device, detector and storage medium
CN111476965A (en) * 2020-03-13 2020-07-31 深圳信息职业技术学院 Method for constructing fire detection model, fire detection method and related equipment
CN111476965B (en) * 2020-03-13 2021-08-03 深圳信息职业技术学院 Method for constructing fire detection model, fire detection method and related equipment
CN112115766A (en) * 2020-07-28 2020-12-22 辽宁长江智能科技股份有限公司 Flame identification method, device, equipment and storage medium based on video picture
CN112149509A (en) * 2020-08-25 2020-12-29 浙江浙大中控信息技术有限公司 Traffic signal lamp fault detection method integrating deep learning and image processing
CN112149509B (en) * 2020-08-25 2023-05-09 浙江中控信息产业股份有限公司 Traffic signal lamp fault detection method integrating deep learning and image processing
CN112215831A (en) * 2020-10-21 2021-01-12 厦门市美亚柏科信息股份有限公司 Method and system for evaluating quality of face image
CN112215831B (en) * 2020-10-21 2022-08-26 厦门市美亚柏科信息股份有限公司 Method and system for evaluating quality of face image
CN113158719A (en) * 2020-11-30 2021-07-23 齐鲁工业大学 Image identification method for fire disaster of photovoltaic power station
CN112396026A (en) * 2020-11-30 2021-02-23 北京华正明天信息技术股份有限公司 Fire image feature extraction method based on feature aggregation and dense connection
CN112396026B (en) * 2020-11-30 2024-06-07 北京华正明天信息技术股份有限公司 Fire image feature extraction method based on feature aggregation and dense connection
CN114220046A (en) * 2021-11-25 2022-03-22 中国民用航空飞行学院 Fire image fuzzy membership recognition method based on gray comprehensive association degree
CN114530025A (en) * 2021-12-31 2022-05-24 武汉烽理光电技术有限公司 Tunnel fire alarm method and device based on array grating and electronic equipment
CN114530025B (en) * 2021-12-31 2024-03-08 武汉烽理光电技术有限公司 Tunnel fire alarming method and device based on array grating and electronic equipment
CN117152474A (en) * 2023-07-25 2023-12-01 华能核能技术研究院有限公司 High-temperature gas cooled reactor flame identification method based on K-means clustering algorithm
CN116701409A (en) * 2023-08-07 2023-09-05 湖南永蓝检测技术股份有限公司 Sensor data storage method for intelligent on-line detection of environment
CN116701409B (en) * 2023-08-07 2023-11-03 湖南永蓝检测技术股份有限公司 Sensor data storage method for intelligent on-line detection of environment
CN117612319A (en) * 2024-01-24 2024-02-27 上海意静信息科技有限公司 Alarm information grading early warning method and system based on sensor and picture

Also Published As

Publication number Publication date
CN103886344B (en) 2017-07-07

Similar Documents

Publication Publication Date Title
CN103886344A (en) Image type fire flame identification method
Zhang et al. Joint Deep Learning for land cover and land use classification
EP3614308B1 (en) Joint deep learning for land cover and land use classification
Hamraz et al. Deep learning for conifer/deciduous classification of airborne LiDAR 3D point clouds representing individual trees
US10984532B2 (en) Joint deep learning for land cover and land use classification
Zhang et al. Quantification of sawgrass marsh aboveground biomass in the coastal Everglades using object-based ensemble analysis and Landsat data
Chiang et al. Deep learning-based automated forest health diagnosis from aerial images
CN103871029A (en) Image enhancement and partition method
CN103942557A (en) Coal-mine underground image preprocessing method
Pacifici et al. Automatic change detection in very high resolution images with pulse-coupled neural networks
CN108038846A (en) Transmission line equipment image defect detection method and system based on multilayer convolutional neural networks
CN111723693B (en) Crowd counting method based on small sample learning
Wan et al. UAV swarm based radar signal sorting via multi-source data fusion: A deep transfer learning framework
CN106503734B (en) Image classification method based on trilateral filtering and stacked sparse autoencoders
Chen et al. Agricultural remote sensing image cultivated land extraction technology based on deep learning
CN105005773A (en) Pedestrian detection method with integration of time domain information and spatial domain information
CN108388828 (en) A seashore wetland land cover information extraction method integrating comprehensive multi-source remote sensing data
Xiao et al. Citrus greening disease recognition algorithm based on classification network using TRL-GAN
CN103177248A (en) Rapid pedestrian detection method based on vision
CN105989615A (en) Pedestrian tracking method based on multi-feature fusion
CN103646420 (en) Intelligent 3D scene restoration method based on a self-learning algorithm
Zhu et al. MAP-MRF approach to Landsat ETM+ SLC-Off image classification
Mountrakis et al. Developing collaborative classifiers using an expert-based model
Tang et al. A recurrent curve matching classification method integrating within-object spectral variability and between-object spatial association
Duan et al. Recognition of combustion condition in MSWI process based on multi-scale color moment features and random forest

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210125

Address after: 710077 718, block a, Haixing city square, Keji Road, high tech Zone, Xi'an City, Shaanxi Province

Patentee after: Xi'an zhicaiquan Technology Transfer Center Co.,Ltd.

Address before: 710054 No. 58, middle section, Yanta Road, Shaanxi, Xi'an

Patentee before: XI'AN University OF SCIENCE AND TECHNOLOGY

TR01 Transfer of patent right

Effective date of registration: 20211102

Address after: 257000 Room 308, building 3, Dongying Software Park, No. 228, Nanyi Road, development zone, Dongying City, Shandong Province

Patentee after: Dongkai Shuke (Shandong) Industrial Park Co.,Ltd.

Address before: 710077 718, block a, Haixing city square, Keji Road, high tech Zone, Xi'an City, Shaanxi Province

Patentee before: Xi'an zhicaiquan Technology Transfer Center Co.,Ltd.