CN103886344B - Image-based fire flame recognition method - Google Patents

Image-based fire flame recognition method

Info

Publication number
CN103886344B
CN103886344B (application CN201410148888.3A)
Authority
CN
China
Prior art keywords
image
value
parameter
fuzzy
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410148888.3A
Other languages
Chinese (zh)
Other versions
CN103886344A (en)
Inventor
王媛彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongkai Shuke Shandong Industrial Park Co ltd
Original Assignee
Xian University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Science and Technology
Priority to CN201410148888.3A
Publication of CN103886344A
Application granted
Publication of CN103886344B
Legal status: Active
Anticipated expiration

Landscapes

  • Fire-Detection Mechanisms (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image-based fire flame recognition method comprising the following steps: 1, image acquisition; 2, image processing: 201, image preprocessing; 202, fire recognition: recognition is performed with a pre-built two-class model, a support vector machine model that classifies images into the two categories flame and no flame. The two-class model is built as follows: I, image information acquisition; II, feature extraction; III, obtaining training samples; IV, building the two-class model: IV-1, kernel function selection; IV-2, determination of the classification function: the parameters C and D are optimized with the conjugate gradient method, and the optimized C and D are then converted into γ and σ²; V, training the two-class model. The method has simple steps, is convenient to implement, easy to operate, highly reliable and effective in use, and can effectively solve the problems of low reliability, high false-alarm and missed-alarm rates, and poor performance that existing video fire detection systems exhibit in complex environments.

Description

Image-based fire flame recognition method
Technical field
The invention belongs to the technical field of image-based fire detection, and in particular relates to an image-based fire flame recognition method.
Background technology
Fire is one of the major mine disasters and seriously threatens human health, the natural environment and safe production in coal mines. With scientific and technological progress, automatic fire detection has increasingly become an important means of fire monitoring and alarming. At present, fire prediction and detection underground in coal mines rely mainly on temperature effects, on monitoring the combustion products of a fire (smoke and gas effects) and on electromagnetic radiation effects. These existing detection methods, however, all leave room for improvement in sensitivity and reliability, and they cannot respond to incipient fires; they are therefore incompatible with increasingly strict fire safety requirements. In particular, when obstructions exist in a large space, the propagation of combustion products through the space is affected by the height and area of the space; conventional point-type smoke and heat fire alarm systems cannot rapidly capture the smoke and temperature changes produced by a fire and respond only once the fire has developed to a certain extent, making it difficult to meet the requirement of early fire detection. The rapid development of video processing and pattern recognition technology is driving fire detection and alarming toward image-based, digital, large-scale and intelligent solutions. Fire detection based on video surveillance offers a wide detection range, a short response time, low cost and immunity to environmental influences; combined with computer intelligence it can provide more intuitive and richer information, which is of great significance to safe production in coal mines.
At present, video fire detection technology at home and abroad is still at an early stage, and products differ in detection mode, operating principle, system structure and application scenario. Typical systems include the SigniFire TM system developed by axonx LLC of the USA, the Alarm Eye VISFD distributed intelligent image fire detection system developed by DHF Intellvision of the USA, the dual-band (infrared and visible camera) monitoring system of Bosque of the USA, and the VSD-8 system for power station fire monitoring jointly developed by ISL of Switzerland and Magnox Electric. Domestically, the SKLFS of the University of Science and Technology of China leads research on fire detection and automatic fire suppression, and Tianjin University, Xi'an Jiaotong University, Shenyang University of Technology and Shanghai Jiao Tong University have also conducted active research. The above image fire detection systems, however, are used for fire detection in power stations, buildings, warehouses and the like, and are relatively rarely applied underground in coal mines. In recent years many researchers at home and abroad have studied in depth the key technology of such image-type fire detection systems, namely flame image analysis algorithms, and have made great contributions, mainly in the following areas: (1) video flame detection methods based on static flame features such as spectral characteristics (pixel intensity, chromaticity) and regional structure (shape, contour); (2) video flame detection methods based on flame-colored moving regions; (3) video detection methods based on flame flicker and time-frequency characteristics. When these flame image analysis algorithms are applied in existing video fire detection systems, however, they have limitations to varying degrees: interference cannot be effectively removed in complex scenes, and false alarms and missed alarms are relatively serious. There is therefore a lack of an image-based fire flame recognition method that has simple steps, is convenient to implement, easy to operate, highly reliable and effective in use, and that can effectively solve the problems of low reliability, high false-alarm and missed-alarm rates and poor performance that existing video fire detection systems exhibit in complex environments.
Summary of the invention
The technical problem to be solved by the invention is to address the above deficiencies in the prior art by providing an image-based fire flame recognition method whose steps are simple, which is convenient to implement, easy to operate, highly reliable and effective in use, and which can solve the problems of low reliability, high false-alarm and missed-alarm rates and poor performance that existing video fire detection systems exhibit in complex environments.
To solve the above technical problem, the technical solution adopted by the invention is an image-based fire flame recognition method, characterized in that the method comprises the following steps:
Step 1, image acquisition: digital images of the region to be detected are acquired by an image acquisition unit at a preset sampling frequency f, and the digital image acquired at each sampling instant is synchronously transmitted to a processor; the image acquisition unit is connected to the processor.
Step 2, image processing: the processor processes the digital images acquired at the successive sampling instants of step 1 in chronological order, using the same processing method for each; the processing of the digital image acquired at any one sampling instant comprises the following steps:
Step 201, image preprocessing, carried out as follows:
Step 2011, image reception and synchronous storage: the processor synchronously stores the digital image acquired at the current sampling instant in a data memory; the data memory is connected to the processor.
Step 2012, image enhancement: the processor enhances the digital image acquired at the current sampling instant to obtain an enhanced digital image.
Step 2013, image segmentation: the processor segments the enhanced digital image obtained in step 2012 to obtain a target image.
Step 202, fire recognition: the target image of step 2013 is processed using a pre-built two-class model, and the fire status category of the region to be detected at the current sampling instant is determined. The fire status categories are flame and no flame, and the two-class model is a support vector machine model that classifies images into these two categories.
The two-class model is built as follows:
Step I, image information acquisition: using the image acquisition unit, multiple frames of digital images one of the region to be detected during a fire and multiple frames of digital images two of the region in the absence of fire are acquired.
Step II, feature extraction: features are extracted from each frame of digital images one and of digital images two; from each digital image one group of characteristic parameters that can represent and distinguish that image is extracted. The group comprises M feature quantities, which are numbered and together form a feature vector, where M ≥ 2.
Step III, obtaining training samples: from the feature vectors of digital images one and digital images two obtained after the feature extraction of step II, the feature vectors of m1 frames of digital images one and of m2 frames of digital images two are selected to form a training sample set, where m1 and m2 are positive integers, m1 = 40 to 100 and m2 = 40 to 100; the training sample set thus contains m1 + m2 training samples.
Step IV, building the two-class model, as follows:
Step IV-1, kernel function selection: a radial basis function (RBF) is chosen as the kernel function of the two-class model.
Step IV-2, determination of the classification function: once the penalty factor γ and the kernel parameter σ² of the RBF selected in step IV-1 have been determined, the classification function of the two-class model is obtained and the building of the model is complete; here γ = C⁻², σ = D⁻¹, 0.01 < C ≤ 10 and 0.01 < D ≤ 50.
When determining the penalty factor γ and the kernel parameter σ², the parameters C and D are first optimized with the conjugate gradient method, and the optimized C and D are then converted into the penalty factor γ and the kernel parameter σ² according to γ = C⁻² and σ = D⁻¹.
Step V, training the two-class model: the m1 + m2 training samples of the training sample set of step III are input into the two-class model built in step IV for training.
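As an illustrative sketch only (not the patented implementation), the two-class RBF model of steps IV and V can be trained in the least-squares SVM style by solving a single linear system. The regularization form `K + I/gamma` is the common LS-SVM convention and is assumed here; `gamma` and `sigma2` stand for the γ and σ² of step IV-2.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma2):
    # K(x_s, x_t) = exp(-||x_s - x_t||^2 / (2 * sigma2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma2))

def train_lssvm(X, y, gamma, sigma2):
    """Build and solve the LS-SVM linear system A @ [b, alpha] = [0, y]."""
    N = X.shape[0]
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0                      # all-ones row
    A[1:, 0] = 1.0                      # all-ones column
    A[1:, 1:] = rbf_kernel(X, X, sigma2) + np.eye(N) / gamma  # regularized kernel block
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]              # bias b, coefficients alpha

def classify(X_train, b, alpha, sigma2, X_new):
    """Two-class decision: sign of the RBF regression function."""
    return np.sign(rbf_kernel(X_new, X_train, sigma2) @ alpha + b)
```

Training on the m1 + m2 feature vectors with labels +1 (flame) and −1 (no flame) then reduces step V to one call of `train_lssvm`, and step 202 to one call of `classify`.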
In the above image-based fire flame recognition method, the total number of training samples in the training sample set of step III is N, with N = m1 + m2. Before the two-class model is built in step IV, the N training samples of the training sample set are numbered; the p-th training sample has number p, where p is a positive integer and p = 1, 2, …, N. The p-th training sample is denoted (x_p, y_p), where x_p is the characteristic parameter (feature vector) of the p-th training sample and y_p is its class label, y_p = 1 or −1; label 1 indicates flame and label −1 indicates no flame.
When C and D are optimized with the conjugate gradient method in step IV-2, the m1 + m2 training samples of the training sample set of step III are used, and the optimization proceeds as follows:
Step I, determination of the objective function: the objective function is the leave-one-out prediction sum of squares sse(C, D) = Σ_{p=1}^{N} e_p² (formula (1)), where p is the number of a training sample in the training sample set and e_p is the prediction error of the two-class model built in step IV for the p-th training sample. Here e_p is computed from the following quantities: s(p) is the p-th element of the vector s, and s(p⁻) is the vector formed by the remaining elements of s after its p-th element is removed; (A⁻¹)(p⁻, p) is the column vector formed by the remaining elements of the p-th column of the matrix A⁻¹ after its p-th element is removed, and (A⁻¹)(p, p) is the p-th element of the p-th column of A⁻¹; K̃(p⁻) is the column vector formed by the remaining elements of the p-th column of the matrix K̃ after its p-th element is removed, K̃ being the augmented matrix of the kernel matrix K; A⁻¹ denotes the inverse of the matrix A, which is assembled from K̃ and the vector I_N = [1, 1, …, 1]^T (containing N elements, all equal to 1; T denotes transposition), I being the identity matrix; and s = A⁻¹y, where the vector y collects the class labels y₁, y₂, …, y_N of the N training samples of the training sample set.
Step II, initial parameter setting: initial values C₁ and D₁ of the parameters C and D are determined, and an identification error threshold ε is set, with ε > 0.
Step III, computation of the gradient g_k of the current iteration: the gradient g_k of the objective function of step I with respect to C_k and D_k is computed, k being the iteration number, k = 1, 2, …. If ‖g_k‖ ≤ ε, the computation stops, and C_k and D_k are the optimized parameters C and D; otherwise the method proceeds to step IV. In the gradient expressions, K̃(p⁻) is the column vector formed by the remaining elements of the p-th column of K̃ after its p-th element is removed, s(p⁻) is the vector formed by the remaining elements of s after its p-th element is removed, and e_p is the prediction error of the two-class model built in step IV for the p-th training sample.
Step IV, computation of the search direction d_k of the current iteration: d_k = −g_k + β_k·d_{k−1}, where d_{k−1} is the search direction of the (k−1)-th iteration, β_k = ‖g_k‖²/‖g_{k−1}‖², and g_{k−1} is the gradient of the (k−1)-th iteration (for the first iteration, d₁ = −g₁).
Step V, determination of the search step λ_k of the current iteration: a search is carried out along the direction d_k determined in step IV to find the step λ_k = argmin_{λ∈(0,+∞)} sse(C_k + λ·d_k^{(C)}, D_k + λ·d_k^{(D)}), i.e. the step length in (0, +∞) for which the objective function reaches its minimum along d_k, d_k^{(C)} and d_k^{(D)} denoting the two components of d_k.
Step VI: C_{k+1} = C_k + λ_k·d_k^{(C)} and D_{k+1} = D_k + λ_k·d_k^{(D)} are computed.
Step VII: k = k + 1 is set and the method returns to step III for the next iteration.
The RBF selected in step IV-1 is K(x_s, x_t) = exp(−‖x_s − x_t‖²/(2σ²)), and its regression function is y(x) = Σ_{t=1}^{N} α_t·K(x, x_t) + b, where α_t and b are the regression parameters, s is a positive integer with s = 1, 2, …, N, and t is a positive integer with t = 1, 2, …, N.
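The loop of steps III to VII above is a Fletcher-Reeves conjugate gradient descent on sse(C, D). The sketch below shows that loop on a stand-in quadratic objective, since reproducing the leave-one-out sse would require the full LS-SVM matrices; the sampled line search replacing the exact minimization of step V, and the restart safeguard, are assumptions for robustness, not part of the patent.

```python
import numpy as np

def fletcher_reeves(f, grad, x0, eps=1e-8, max_iter=200):
    """Steps III-VII: gradient, FR search direction, line search, update."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                        # first search direction
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:              # stopping test of step III
            break
        lams = np.logspace(-6, 1, 60)             # sampled stand-in for step V
        lam = lams[int(np.argmin([f(x + l * d) for l in lams]))]
        x = x + lam * d                           # step VI: parameter update
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)          # beta_k = ||g_k||^2 / ||g_{k-1}||^2
        d = -g_new + beta * d                     # step IV: new search direction
        if g_new @ d >= 0:                        # safeguard: restart on non-descent
            d = -g_new
        g = g_new
    return x

# stand-in objective for sse(C, D): minimum at C = 2, D = 1
f = lambda x: (x[0] - 2.0) ** 2 + 3.0 * (x[1] - 1.0) ** 2
grad_f = lambda x: np.array([2.0 * (x[0] - 2.0), 6.0 * (x[1] - 1.0)])
C_opt, D_opt = fletcher_reeves(f, grad_f, [5.0, 5.0])
```

In the patented method the stand-in objective would be replaced by the leave-one-out sse of step I and its analytic gradient from step III.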
In the above image-based fire flame recognition method, M = 6 in step II, and the six feature quantities are area, similarity, moment features, compactness, texture features and flicker features.
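The six features are named but not defined in this text. As an illustrative sketch, two of them (area and a circularity-style similarity proxy) can be computed from a segmented binary flame mask; both the perimeter estimate and the circularity formula below are assumptions, not the patent's definitions.

```python
import numpy as np

def mask_features(mask):
    """Area and a circularity proxy from a binary (0/1) flame mask."""
    area = int(mask.sum())
    p = np.pad(mask.astype(int), 1)
    core = p[1:-1, 1:-1]
    # count, for every foreground pixel, its background 4-neighbours;
    # the total is a crude estimate of the boundary length
    nb_bg = ((1 - p[:-2, 1:-1]) + (1 - p[2:, 1:-1])
             + (1 - p[1:-1, :-2]) + (1 - p[1:-1, 2:])) * core
    perimeter = int(nb_bg.sum())
    circularity = 4.0 * np.pi * area / perimeter ** 2 if perimeter else 0.0
    return area, circularity
```

The circularity measure 4πA/P² approaches 1 for a disc and equals π/4 for a filled square, so it discriminates compact blobs from ragged flame-like regions.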
In the above image-based fire flame recognition method, when C₁ and D₁ are determined in step II, either a grid search method or random selection of values is used. With random selection, C₁ is a value randomly chosen from (0.01, 1] and D₁ a value randomly chosen from (0.01, 50]. With the grid search method, a grid with step 10⁻³ is first laid out; a three-dimensional surface is then drawn with C and D as independent variables and the objective function of step I as dependent variable; several (C, D) parameter pairs are found by grid search; and finally the pairs are averaged to give C₁ and D₁.
In the above image-based fire flame recognition method, the image enhancement of step 2012 is performed with an image enhancement method based on fuzzy logic.
When the enhancement is performed with the fuzzy-logic-based image enhancement method, the process is as follows:
Step 20121, transformation from the image domain to the fuzzy domain: according to the membership function of formula (7), the gray value of each pixel of the image to be enhanced is mapped to its fuzzy membership degree in a fuzzy set, and the fuzzy set of the image to be enhanced is obtained accordingly. In the formula, X_gh is the gray value of an arbitrary pixel (g, h) of the image to be enhanced, X_T is the gray threshold selected when the fuzzy-logic-based enhancement is applied to the image, and X_max is the maximum gray value of the image to be enhanced.
Step 20122, fuzzy enhancement in the fuzzy domain using a fuzzy enhancement operator: the operator used is μ′_gh = I_r(μ_gh) = I_r(I_{r−1}(μ_gh)), where r is the iteration number, a positive integer, r = 1, 2, …; the crossover point is μ_c = T(X_C), where X_C is the crossover gray level and X_C = X_T.
Step 20123, inverse transformation from the fuzzy domain to the image domain: according to formula (6), the membership values μ′_gh obtained after the fuzzy enhancement are inversely transformed to give the gray value of each pixel of the enhanced digital image, thereby obtaining the enhanced digital image.
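Formulas (6) and (7) are not legible in this text, so the sketch below assumes a simple linear membership T(X) = X/X_max and a generalized intensification (INT) operator with crossover μ_c = X_T/X_max; it illustrates only the three-step flow of steps 20121 to 20123 (map to the fuzzy domain, iterate the enhancement operator r times, map back), not the patented functions.

```python
import numpy as np

def fuzzy_enhance(img, x_t, r=2):
    """Fuzzy-domain contrast enhancement sketch for an 8-bit image."""
    x_max = 255.0
    mu = img.astype(float) / x_max          # step 20121 (assumed linear membership)
    mu_c = x_t / x_max                      # crossover point from threshold X_T
    for _ in range(r):                      # step 20122: iterate the INT operator
        out = np.empty_like(mu)
        low = mu <= mu_c
        out[low] = mu[low] ** 2 / mu_c                          # darken below crossover
        out[~low] = 1.0 - (1.0 - mu[~low]) ** 2 / (1.0 - mu_c)  # brighten above
        mu = out
    return np.clip(mu * x_max, 0.0, 255.0).astype(np.uint8)    # step 20123
```

Each pass pushes memberships away from the crossover point, which is what raises contrast around the threshold X_T.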
In the above image-based fire flame recognition method, before the transformation from the image domain to the fuzzy domain in step 20121, the gray threshold X_T is first selected with the maximum between-class variance (Otsu) method. Before X_T is selected, all gray values whose pixel count is 0 are found within the gray range of the image to be enhanced, and the processor (3) marks all of them as gray values exempt from calculation. When X_T is then selected with the maximum between-class variance method, the between-class variance is computed with each gray value of the image's gray range other than the exempt gray values taken as the threshold; the maximum between-class variance is found among the computed values, and the gray value corresponding to this maximum between-class variance is the gray threshold X_T.
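A minimal sketch of this fast threshold selection, assuming an 8-bit image with a 256-bin histogram: Otsu's between-class variance criterion is evaluated only at gray levels whose pixel count is non-zero, exactly as the exemption rule above describes.

```python
import numpy as np

def otsu_skip_empty(img):
    """Gray threshold X_T maximising the between-class variance,
    evaluating only gray levels whose pixel count is non-zero."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256, dtype=float)
    best_t, best_var = 0, -1.0
    for t in np.nonzero(hist)[0]:          # zero-count levels are exempt
        w0 = prob[:t + 1].sum()            # class weights
        w1 = 1.0 - w0
        if w0 <= 0.0 or w1 <= 0.0:
            continue
        mu0 = (levels[:t + 1] * prob[:t + 1]).sum() / w0   # class means
        mu1 = (levels[t + 1:] * prob[t + 1:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, int(t)
    return best_t
```

Since mine images typically occupy only part of the gray range, skipping empty bins cuts the candidate set and speeds up the search without changing the result.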
In the above image-based fire flame recognition method, the size of the digital image acquired at each sampling instant in step 1 is M1 × N1 pixels.
The image segmentation of step 2013 proceeds as follows:
Step 20131, construction of the two-dimensional histogram: the processor builds the two-dimensional histogram of pixel gray value versus neighborhood average gray value of the image to be segmented. An arbitrary point of the two-dimensional histogram is denoted (i, j), where the abscissa i is the gray value of an arbitrary pixel (m, n) of the image to be segmented and the ordinate j is the neighborhood average gray value of that pixel. The count of point (i, j) in the histogram is denoted C(i, j), and the frequency of point (i, j) is denoted h(i, j), with h(i, j) = C(i, j)/(M1 × N1).
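The histogram construction can be sketched as follows. A 3 × 3 neighborhood and edge replication at the image border are assumptions, since the text does not specify the neighborhood size or border handling.

```python
import numpy as np

def two_d_histogram(img, k=3):
    """Counts C(i, j) and frequencies h(i, j) = C(i, j)/(M1*N1) of
    (pixel gray value i, k*k neighbourhood average gray value j)."""
    M1, N1 = img.shape
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    nb_sum = np.zeros((M1, N1))
    for di in range(k):                      # sum the k*k shifted copies
        for dj in range(k):
            nb_sum += padded[di:di + M1, dj:dj + N1]
    j = np.round(nb_sum / (k * k)).astype(int)   # neighbourhood average, rounded
    C = np.zeros((256, 256), dtype=np.int64)
    np.add.at(C, (img.astype(int).ravel(), j.ravel()), 1)
    C_freq = C / (M1 * N1)
    return C, C_freq
```

The returned frequency array is the h(i, j) that the fuzzy entropy of step 20132 is computed from.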
Step 20132, fuzzy parameter combination optimization: the processor calls a fuzzy parameter combination optimization module and uses the particle swarm optimization algorithm to optimize the fuzzy parameter combination used by the image segmentation method based on two-dimensional fuzzy partition maximum entropy, obtaining the optimized fuzzy parameter combination.
In this step, before the fuzzy parameter combination is optimized, the functional relation of the two-dimensional fuzzy entropy used when segmenting the image is first computed from the two-dimensional histogram built in step 20131, and this functional relation serves as the fitness function when the fuzzy parameter combination is optimized with the particle swarm optimization algorithm.
Step 20133, image segmentation: using the fuzzy parameter combination optimized in step 20132, the processor classifies every pixel of the image to be segmented according to the image segmentation method based on two-dimensional fuzzy partition maximum entropy, thereby completing the segmentation process and obtaining the segmented target image.
In the above image-based fire flame recognition method, the image to be segmented in step 20131 consists of the target image O and the background image P. The membership function of the target image O is μ_o(i, j) = μ_ox(i; a, b)·μ_oy(j; c, d) (formula (1)); the membership function of the background image P is μ_b(i, j) = μ_bx(i; a, b)·μ_oy(j; c, d) + μ_ox(i; a, b)·μ_by(j; c, d) + μ_bx(i; a, b)·μ_by(j; c, d) (formula (2)).
In formulas (1) and (2), μ_ox(i; a, b) and μ_oy(j; c, d) are the one-dimensional membership functions of the target image O, both S-functions; μ_bx(i; a, b) and μ_by(j; c, d) are the one-dimensional membership functions of the background image P, also both S-functions, with μ_bx(i; a, b) = 1 − μ_ox(i; a, b) and μ_by(j; c, d) = 1 − μ_oy(j; c, d) (so that expanding formula (2) gives μ_b(i, j) = 1 − μ_o(i, j)). Here a, b, c and d are the parameters that control the shape of the one-dimensional membership functions of the target image O and the background image P.
When the functional relation of the two-dimensional fuzzy entropy is computed in step 20132, the minimum g_min and maximum g_max of the pixel gray values of the image to be segmented and the minimum s_min and maximum s_max of the neighborhood average gray values are first determined from the two-dimensional histogram built in step 20131.
The functional relation of the two-dimensional fuzzy entropy computed in step 20132 is given by formula (3), in which h(i, j) is the frequency of point (i, j) defined in step 20131.
When the fuzzy parameter combination is optimized with the particle swarm optimization algorithm in step 20132, the optimized parameter combination is (a, b, c, d).
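Formula (3) is not legible in this text, so the fitness sketch below uses a simplified two-term fuzzy partition entropy over the object and background fuzzy probabilities; the exact entropy expression of the patent may differ. The S-function and the product membership of formula (1) are as described above, and the code exploits the fact that expanding formula (2) with μ_bx = 1 − μ_ox and μ_by = 1 − μ_oy gives μ_b = 1 − μ_o.

```python
import numpy as np

def s_func(x, a, b):
    """Zadeh S-function rising smoothly from 0 (x <= a) to 1 (x >= b)."""
    m = (a + b) / 2.0
    x = np.asarray(x, dtype=float)
    y = np.zeros(x.shape, dtype=float)
    up = (x > a) & (x <= m)
    dn = (x > m) & (x < b)
    y[up] = 2.0 * ((x[up] - a) / (b - a)) ** 2
    y[dn] = 1.0 - 2.0 * ((x[dn] - b) / (b - a)) ** 2
    y[x >= b] = 1.0
    return y

def fuzzy_entropy(h, a, b, c, d):
    """Two-term fuzzy partition entropy of the 2-D histogram h (assumed form)."""
    i = np.arange(h.shape[0], dtype=float)[:, None]
    j = np.arange(h.shape[1], dtype=float)[None, :]
    mu_o = s_func(np.broadcast_to(i, h.shape), a, b) \
         * s_func(np.broadcast_to(j, h.shape), c, d)   # formula (1)
    mu_b = 1.0 - mu_o                                  # formula (2), expanded
    p_o = (h * mu_o).sum()                             # object fuzzy probability
    p_b = (h * mu_b).sum()                             # background fuzzy probability
    eps = 1e-12
    return -(p_o * np.log(p_o + eps) + p_b * np.log(p_b + eps))
```

The entropy peaks when the fuzzy partition splits the histogram mass evenly, which is what the PSO of step 20132 searches (a, b, c, d) for.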
In the above image-based fire flame recognition method, the optimization of the parameter combination of the two-dimensional fuzzy partition maximum entropy in step 20132 comprises the following steps:
Step II-1, swarm initialization: one value of the parameter combination serves as one particle, and several particles form an initial swarm. A particle is denoted (a_k, b_k, c_k, d_k), where k is a positive integer, k = 1, 2, 3, …, K, K being a positive integer equal to the number of particles in the swarm; a_k, b_k, c_k and d_k are random values of the parameters a, b, c and d respectively, with a_k < b_k and c_k < d_k.
Step II-2, determination of the fitness function: the two-dimensional fuzzy entropy of formula (3) is used as the fitness function.
Step II-3, particle fitness evaluation: the fitness of every particle at the current time is evaluated, by the same method for each particle. When the fitness of the k-th particle at the current time is evaluated, its fitness value fitness_k is first computed from the fitness function determined in step II-2 and compared with Pbest_k. If the comparison gives fitness_k > Pbest_k, then Pbest_k = fitness_k and the personal best position of the k-th particle is updated to its current position; here Pbest_k is the maximum fitness value reached so far by the k-th particle, i.e. its individual extremum, and the corresponding position is the personal best position of the k-th particle. Here t is the current iteration number and is a positive integer.
After the fitness values of all particles at the current time have been computed from the fitness function determined in step II-2, the fitness value of the particle with the greatest fitness at the current time is denoted fitness_kbest and compared with gbest. If the comparison gives fitness_kbest > gbest, then gbest = fitness_kbest and the swarm best position is updated to the position of the particle with the greatest fitness value; here gbest is the global extremum at the current time, and the corresponding position is the swarm's best position at the current time.
Step II-4, test of the iteration termination condition: when the termination condition is met, the parameter combination optimization is complete; otherwise the position and velocity of each particle for the next time step are updated according to the particle swarm optimization algorithm and the method returns to step II-3. The termination condition of step II-4 is that the current iteration number t reaches a preset maximum number of iterations I_max, or that Δg ≤ e, where Δg = |gbest − gmax|, gbest is the global extremum at the current time, gmax is the originally set target fitness value, and e is a preset positive deviation.
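Steps II-1 to II-4 describe a standard global-best PSO. A minimal sketch follows; the inertia and acceleration coefficients and the fixed iteration budget are assumptions (the patent's stopping rule, t reaching I_max or |gbest − gmax| ≤ e, can be substituted directly in the loop condition).

```python
import numpy as np

def pso(fitness, dim=4, n_particles=30, iters=100, lo=0.0, hi=255.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO maximising `fitness` over [lo, hi]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))       # step II-1: random swarm
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])       # step II-3: evaluate
    g = int(np.argmax(pbest_f))
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for _ in range(iters):                            # step II-4 loop
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        better = f > pbest_f                          # update personal bests
        pbest[better], pbest_f[better] = x[better], f[better]
        g = int(np.argmax(pbest_f))
        if pbest_f[g] > gbest_f:                      # update global best
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f
```

For the segmentation of step 20132, each particle would be a candidate (a, b, c, d) and `fitness` the two-dimensional fuzzy entropy of formula (3).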
Compared with the prior art, the present invention has the following advantages:
1. The method steps are simple, the design is reasonable, implementation is convenient and the investment cost is low.
2. The image enhancement method used has simple steps, a reasonable design and a good enhancement effect. Considering that the low illumination and round-the-clock artificial lighting underground in coal mines degrade image quality, and building on an analysis and comparison of traditional image enhancement algorithms, a fuzzy-logic-based image enhancement preprocessing method is proposed. The method uses a new membership function that reduces the loss of pixel information in low-gray regions and overcomes the contrast decline caused by fuzzy enhancement, improving adaptability. At the same time, a fast maximum between-class variance method is used for threshold selection, so that the fuzzy enhancement threshold is selected adaptively and quickly; this speeds up the algorithm and improves real-time performance. Images captured under varying environments can be enhanced, the detail information and quality of the image are effectively improved, and the computation is fast enough to meet real-time requirements.
3. The image segmentation method used has simple steps, a reasonable design and a good segmentation effect. Because the one-dimensional maximum entropy method segments low-SNR, low-illumination images poorly, a segmentation method based on two-dimensional fuzzy partition maximum entropy is used. This method takes account of gray information, spatial neighborhood information and the inherent fuzziness of the image, but suffers from slow computation; the present application therefore optimizes the fuzzy parameter combination with the particle swarm optimization algorithm, so that the optimized fuzzy parameter combination is obtained simply, quickly and accurately, greatly improving segmentation efficiency. Moreover, the particle swarm optimization algorithm used is reasonably designed and convenient to implement; it adapts the size of the local search space to the state of the current swarm and the iteration number, achieving a higher search success rate and higher-quality solutions without affecting convergence speed. The segmentation effect is good, robustness is strong, computation is faster, and real-time requirements are met.
4. Because the segmentation method based on two-dimensional fuzzy partition maximum entropy can segment flame images quickly and accurately, the misclassification of noise points by traditional single-threshold algorithms is overcome, while the particle swarm optimization of the fuzzy parameter combination solves the nonlinear integer programming problem, so that the segmented target keeps its shape well despite the influence of noise. The invention thus combines the segmentation method based on two-dimensional fuzzy partition maximum entropy with the particle swarm optimization algorithm to achieve fast segmentation of infrared images: the parameter combination (a, b, c, d) is taken as a particle and the two-dimensional fuzzy partition entropy as the fitness function that determines the search direction of the particles in the solution space. Once the two-dimensional histogram of the image is obtained, the PSO algorithm searches for the parameter combination (a, b, c, d) that maximizes the fitness function, and the pixels of the image are finally classified according to the maximum membership principle, thereby segmenting the image. The segmentation method of the invention performs very well even on infrared images with heavy noise, low contrast and small targets.
5. For feature extraction, area, similarity, moment features, compactness, texture features and flicker features are chosen as the recognition basis for fire images; the features that contribute strongly to classification are retained while redundant features are discarded, reducing the feature dimension and completing the optimal selection of features.
6. The two-class modeling method used is simple, reasonably designed, convenient to implement and effective in use, and the hyperparameters of the kernel function are optimized with the conjugate gradient method. The method exploits the ability of artificial neural networks to handle incomplete and fuzzy information and the advantages of support vector machines for small-sample, nonlinear and high-dimensional patterns, so that the individual criteria complement one another, overcoming the tendency of a traditional single criterion to raise false alarms when judging disaster hazards. The conventional cross-validation method of parameter tuning is rather time-consuming and does not guarantee that the selected parameters give the classifier the best classification performance, while other existing hyperparameter selection algorithms cannot select the penalty factor and the kernel function parameter simultaneously. For the small-sample LS-SVM pattern classification problem, the present invention takes minimization of the leave-one-out prediction sum of squares as the objective and uses a gradient descent method to select the two hyperparameters, the kernel function parameter and the penalty factor, simultaneously for small-sample nonlinear LS-SVM modeling. The two-class model built by the invention has a high recognition rate, high classification accuracy and a short running time, completing the fire recognition process simply and quickly; when the class of a currently acquired image is recognized as flame, a fire has occurred, an alarm is raised, and corresponding measures are taken in time. Addressing the small-sample and nonlinear nature of fire recognition in the complex, special environment of coal mines, and exploiting the strength of support vector machines in high dimensions, the invention proposes a fire image recognition method based on the least squares support vector machine and, on the basis of the fast leave-one-out method, performs hyperparameter optimization with the conjugate gradient method to construct the FR-LSSVM model.
In summary, the method of the present invention is simple in its steps, convenient to implement, easy to operate, highly reliable and effective in use, and can effectively solve the problems of low reliability, high false-alarm and missed-alarm rates and poor performance exhibited by existing video fire detection systems in complex environments.
The technical scheme of the present invention is described in further detail below with reference to the drawings and embodiments.
Brief description of the drawings
Fig. 1 is a flow block diagram of the method of the present invention.

Fig. 2 is a schematic circuit block diagram of the image acquisition system used by the present invention.

Fig. 3 is a structural representation of the two-dimensional histogram established by the present invention.

Fig. 4 is a schematic diagram of the segmentation state when the present invention performs image segmentation.
Description of reference numerals:
1-CCD camera; 2-video capture card; 3-processor; 4-data storage.
Specific embodiment
An image-based fire flame recognition method, as shown in Fig. 1, comprises the following steps:

Step one, image acquisition: using an image acquisition unit and at a preset sampling frequency f, digital images of the region to be detected are acquired, and the digital image acquired at each sampling instant is transmitted synchronously to the processor 3. The image acquisition unit is connected to the processor 3.

In the present embodiment, the image acquisition unit comprises a CCD camera 1 and a video capture card 2 connected to the CCD camera 1; the video capture card 2 is connected to the processor 3.

In the present embodiment, the digital image acquired at each sampling instant has a size of M1 × N1 pixels, where M1 is the number of pixels in each row of the acquired digital image and N1 is the number of pixels in each column.

Step two, image processing: the processor 3 performs image processing on the digital images acquired at the successive sampling instants of step one in chronological order, with the same processing method for the image of every sampling instant; the processing of the digital image acquired at any one sampling instant comprises the following steps:
Step 201, image preprocessing, as follows:

Step 2011, image reception and synchronous storage: the processor 3 synchronously stores the received digital image acquired at the current sampling instant into the data storage 4, which is connected to the processor 3;

In the present embodiment, the CCD camera 1 is an infrared CCD camera, and the CCD camera 1, the video capture card 2, the processor 3 and the data storage 4 form the image acquisition and preprocessing system shown in Fig. 2.

Step 2012, image enhancement: the processor 3 performs enhancement processing on the digital image acquired at the current sampling instant, obtaining the enhanced digital image.

Step 2013, image segmentation: the processor 3 performs segmentation processing on the enhanced digital image of step 2012, obtaining the target image.
Step 202, fire identification: the target image of step 2013 is processed with a pre-established two-class model, and the fire status category of the region to be detected at the current sampling instant is obtained; the fire status categories are flame and no flame, and the two-class model is a support vector machine model that classifies the two categories flame and no flame.

The two-class model is established as follows:

Step I, image information acquisition: using the image acquisition unit, multiple frames of digital image one of the region to be detected during a fire and multiple frames of digital image two of the region to be detected when there is no fire are acquired.

Step II, feature extraction: feature extraction is performed on each frame of digital image one and of digital image two, and from each digital image one group of characteristic parameters that can represent and distinguish that digital image is extracted; the group comprises M feature quantities, the M feature quantities are numbered, and together they constitute one feature vector, where M ≥ 2.

Step III, training sample acquisition: from the feature vectors of the frames of digital image one and digital image two obtained after the feature extraction of step II, the feature vectors of m1 frames of digital image one and of m2 frames of digital image two are selected to form the training sample set; m1 and m2 are positive integers with m1 = 40 to 100 and m2 = 40 to 100, so the training sample set contains m1 + m2 training samples.

In the present embodiment, the training samples are obtained by acquiring, with the image acquisition unit, digital image sequence one of the region to be detected during a fire over a period t1 and digital image sequence two of the region to be detected with no fire over a period t2. Digital image sequence one contains n1 = t1 × f frames of digital images, t1 being the sampling time of digital image sequence one; digital image sequence two contains n2 = t2 × f frames, t2 being the sampling time of digital image sequence two; n1 is not less than m1 and n2 is not less than m2. Then m1 digital images are selected from digital image sequence one as flame samples, and m2 digital images are selected from digital image sequence two as no-flame samples.
In the present embodiment, m1=m2.
Step IV, establishment of the two-class model, as follows:

Step IV-1, kernel function selection: a radial basis function (RBF) is selected as the kernel function of the two-class model;

Step IV-2, classification function determination: once the penalty factor γ and the kernel parameter σ² of the RBF selected in step IV-1 are determined, the classification function of the two-class model is obtained and the establishment of the two-class model is complete; here γ = C⁻², σ = D⁻¹, 0.01 < C ≤ 10 and 0.01 < D ≤ 50.

To determine the penalty factor γ and the kernel parameter σ², the parameters C and D are first optimized with the conjugate gradient method, and the optimized C and D are then converted into the penalty factor γ and the kernel parameter σ² according to γ = C⁻² and σ = D⁻¹.

When C and D are optimized with the conjugate gradient method in step IV-2, the m1 + m2 training samples of the training sample set of step III are used.

Step V, two-class model training: the m1 + m2 training samples of the training sample set of step III are input into the two-class model established in step IV for training.

In the present embodiment, the total number of training samples in the training sample set of step III is N with N = m1 + m2; before the two-class model is established in step IV, the N training samples of the training sample set are numbered, the p-th training sample having number p, where p is a positive integer and p = 1, 2, …, N. The p-th training sample is denoted (x_p, y_p), where x_p is the characteristic parameter (i.e. the feature vector) of the p-th training sample and y_p is its class number, y_p = 1 or −1; a class number of 1 indicates flame and a class number of −1 indicates no flame.
When C and D are optimized with the conjugate gradient method in step IV-2, the m1 + m2 training samples of the training sample set of step III are used, and the optimization proceeds as follows:
Step I, objective function determination:

sse(C, D) = Σ_{p=1}^{N} e_p²   (5.21)

In the formula, sse(C, D) is the leave-one-out squared prediction error, p is the number of each training sample in the training sample set, and e_p is the leave-one-out prediction error of the two-class model established in step IV on the p-th training sample,

e_p = s(p) / (A⁻¹)(p, p)

where the index p refers to the position of the p-th training sample in the bordered system below. Here s(p⁻) denotes the vector of the remaining elements of the matrix s after its p-th element is removed; s(p) is the p-th element of s; (A⁻¹)(p⁻, p) is the column vector of the remaining elements of the p-th column of A⁻¹ after its p-th element is removed; (A⁻¹)(p, p) is the p-th element of the p-th row of A⁻¹; K̃(p⁻) is the column vector of the remaining elements of the p-th column of K̃ after its p-th element is removed, K̃ being the augmented matrix of K. A⁻¹ denotes the inverse of the matrix A, with

A = [ 0, I_Nᵀ ; I_N, K + C²·I ]

where I is the identity matrix, I_N = [1, 1, …, 1]ᵀ, and T denotes the matrix transpose, so that I_N contains N elements all equal to 1. The matrix s = A⁻¹·y with y = [0, y₁, y₂, …, y_N]ᵀ, where y₁, y₂, …, y_N are the classes of the N training samples of the training sample set.

Here K is the kernel function matrix, and K̃ is the matrix obtained by appending to the right of K one column whose elements are all 1.
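The bordered linear system just described can be sketched in a few lines of numpy. This is an illustrative reconstruction under the text's substitutions γ = C⁻² and σ = D⁻¹ (so the ridge term is C²·I and the kernel is exp(−D²·||x_s − x_t||²)); the function name and interface are hypothetical, not from the source.

```python
import numpy as np

def build_system(X, y, C, D):
    """Build the RBF kernel matrix K and the bordered LS-SVM matrix A.

    Assumes gamma = C**-2 and sigma = D**-1 as in the text, so the
    regularized kernel block is K + C**2 * I and the kernel entry is
    K(xs, xt) = exp(-D**2 * ||xs - xt||**2).
    """
    N = X.shape[0]
    # pairwise squared Euclidean distances between training vectors
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-D ** 2 * sq)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0                       # top border I_N^T
    A[1:, 0] = 1.0                       # left border I_N
    A[1:, 1:] = K + C ** 2 * np.eye(N)   # 1/gamma = C**2 ridge term
    rhs = np.concatenate(([0.0], y))     # right-hand side [0, y1..yN]
    return K, A, rhs
```

Solving `A @ s = rhs` then yields s = [b, α₁, …, α_N], the bias and dual coefficients of the model.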
The constrained form of the least squares support vector machine (LS-SVM) is expressed as:

min J(w, e) = (1/2)·wᵀ·w + (γ/2)·Σ_{p=1}^{N} e_p²,  subject to  y_p = wᵀ·φ(x_p) + b + e_p, p = 1, 2, …, N   (5.22)

In the formula, wᵀ·φ(x_p) + b is the separating hyperplane in the high-dimensional feature space, and w and b are the parameters of the hyperplane; e_p is the training error of the p-th training sample, and (γ/2)·Σ e_p² is the empirical risk; wᵀ·w = ||w||² measures the complexity of the learning machine.
Once the training sample set is determined, the performance of the LS-SVM model depends on the type of its kernel function and on the choice of two hyperparameters, the penalty factor γ and the kernel parameter σ². The classification accuracy of the LS-SVM model is tied to this choice: σ² represents the width of the RBF and is closely related to the smoothness of the LS-SVM model, while the penalty factor γ, also called the regularization parameter, controls the degree of penalty on erroneous samples and is closely related to the trade-off between the complexity of the model and its fit to the training samples.
In the present embodiment, the RBF selected in step IV-1 is K(x_s, x_t) = exp(−||x_s − x_t||²/σ²), and the regression function of the radial basis function model is y(x) = Σ_{t=1}^{N} α_t·K(x, x_t) + b, where α_t and b are the regression parameters, s is a positive integer with s = 1, 2, …, N, and t is a positive integer with t = 1, 2, …, N.
Formula (5.22) can be written as:

min J(w, e) = (1/2)·wᵀ·w + (1/(2C²))·Σ_{p=1}^{N} e_p²,  subject to  y_p = wᵀ·φ(x_p) + b + e_p   (5.23)

In formula (5.23), C⁻² replaces the penalty factor γ but plays the same role of balancing the complexity of the LS-SVM model against the empirical risk; σ is replaced by D⁻¹, so the RBF becomes K(x_s, x_t) = exp(−D²·||x_s − x_t||²).

According to the least squares support vector machine principle, formula (5.23) is converted into the system of linear equations

A·s = y, i.e.  [ 0, I_Nᵀ ; I_N, K + C²·I ]·[ b ; α ] = [ 0 ; y₁, …, y_N ]   (5.25)

The derivation of formula (5.25) follows the paper "Online recursive least squares SVM modeling method based on the fast leave-one-out cross-validation method" (Shao Weiming, Tian Xuemin), Journal of Qingdao University of Science and Technology (Natural Science Edition), Vol. 33, No. 5, October 2012.

Solving formula (5.25) yields the regression function y(x) = Σ_{t=1}^{N} α_t·K(x, x_t) + b of the radial basis function model; from formula (5.25) it follows that s = A⁻¹·y (5.28).
The two-class model established from the N training samples of the training sample set is validated N times. In the p-th validation, the p-th training sample is used as the prediction set and the remaining N − 1 samples as the training set; after the LS-SVM parameters α_p and b are solved from the training set, the p-th training sample, as the prediction set, is classified and the correctness of the classification result is recorded. After the N validations, the leave-one-out misclassification rate e_LOO can be computed as the fraction of the N validations whose classification result is wrong (5.29). For every given hyperparameter pair (comprising C and D), the corresponding e_LOO can be computed, so that the hyperparameter combination with the minimum e_LOO is selected as the optimized parameters.

Since e_p = s(p)/(A⁻¹)(p, p) (5.30), for every given hyperparameter pair only one inverse A⁻¹ has to be solved when a leave-one-out cross-validation is performed, and each s(p) is then read off during the iteration; this saves a large amount of cross-validation time and greatly reduces the computational load.
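Given the bordered matrix A and its right-hand side, the fast leave-one-out residuals can all be read off from a single matrix inverse, following the identity e_p = s(p)/(A⁻¹)(p, p) quoted above. A sketch, assuming the bias occupies the first row and column of the bordered system (the function name is illustrative):

```python
import numpy as np

def loo_errors(A, rhs):
    """Fast leave-one-out residuals for the LS-SVM system A s = rhs.

    Uses the identity e_p = s(p) / (A^-1)(p, p): one inverse of A per
    hyperparameter pair replaces N explicit refits. Index 0 is skipped
    because it corresponds to the bias b in the bordered system.
    """
    Ainv = np.linalg.inv(A)
    s = Ainv @ rhs
    return s[1:] / np.diag(Ainv)[1:]
```

The saving is exactly the one the text claims: N leave-one-out refits collapse into one inversion plus N divisions.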
To make sse(C, D) reach its minimum, formula (5.21) is optimized by searching over C and D. Using the rules for differentiating a matrix and an inverse matrix, the gradient of sse(C, D) with respect to C and D is defined first. Since only the regularization block of A depends on C and only the kernel block depends on D, the partial derivatives of A are:

∂A/∂C = [ 0, 0ᵀ ; 0, 2C·I ]   (5.32)-(5.33)

∂A/∂D:  ∂K(x_s, x_t)/∂D = −2D·||x_s − x_t||²·K(x_s, x_t)   (5.34)-(5.35)

where in formula (5.35) the matrix 0 denotes the N-dimensional column vector whose elements are all 0.

According to A·A⁻¹ = I (I being the identity matrix), it can be derived that:

∂A⁻¹/∂θ = −A⁻¹·(∂A/∂θ)·A⁻¹,  θ ∈ {C, D}   (5.36)

According to formula (5.30), e_p = s(p)/(A⁻¹)(p, p), so the two partial derivatives of e_p can be derived respectively as:

∂e_p/∂θ = [ (∂s/∂θ)(p)·(A⁻¹)(p, p) − s(p)·(∂A⁻¹/∂θ)(p, p) ] / (A⁻¹)(p, p)²,  θ ∈ {C, D}   (5.37)-(5.38)

with ∂s/∂θ = (∂A⁻¹/∂θ)·y. It is apparent that ∂sse/∂θ = Σ_p 2·e_p·(∂e_p/∂θ), and that ∂A/∂C and ∂A/∂D can be computed from formulas (5.32)-(5.35).

For each hyperparameter pair C and D, all the gradients of sse(C, D) with respect to them can thus be computed from formulas (5.37) and (5.38). By the LS-SVM principle, the selection of the LS-SVM hyperparameters is thereby converted from a constrained optimization problem into an unconstrained one; using C⁻² in place of γ and D⁻¹ in place of σ does not affect the performance of the LS-SVM model, and on the other hand the admissible values of C and D do not affect the computation of the gradient.
Step II, initial parameter setting: the initial values C₁ and D₁ of the parameters C and D are determined, and the identification error threshold ε is set with ε > 0.
Step III, gradient g_k of the current iteration: the gradient g_k of the objective function of step I with respect to C_k and D_k is computed from formulas (5.37) and (5.38), where k is the iteration number, k = 1, 2, …. If ||g_k|| ≤ ε, the computation stops, and C_k and D_k are the optimized parameters C and D; otherwise the process continues with step IV.

In the formulas, K̃(p⁻) is the column vector of the remaining elements of the p-th column of the matrix K̃ after its p-th element is removed; s(p⁻) is the vector of the remaining elements of the matrix s after its p-th element is removed; and e_p is the prediction error of the two-class model established in step IV on the p-th training sample.
Step IV, search direction d_k of the current iteration: d_k is computed according to d_k = −g_k + β_k·d_{k−1}, where d_{k−1} is the search direction of the (k−1)-th iteration, β_k = ||g_k||²/||g_{k−1}||², and g_{k−1} is the gradient of the (k−1)-th iteration (for k = 1, d₁ = −g₁).
Step V, determination of the search step λ_k of the current iteration: a search is performed along the search direction d_k determined in step IV to find the search step λ_k satisfying

sse((C_k, D_k) + λ_k·d_k) = min_{λ∈(0,+∞)} sse((C_k, D_k) + λ·d_k)

where min_{λ∈(0,+∞)} denotes finding, in (0, +∞), the step λ_k that makes sse((C_k, D_k) + λ·d_k) reach its minimum value.
Step VI, C_{k+1} and D_{k+1} are computed according to (C_{k+1}, D_{k+1}) = (C_k, D_k) + λ_k·d_k.

Step VII, set k = k + 1 and return to step III for the next iteration.
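Steps II-VII describe a Fletcher-Reeves conjugate gradient loop. The sketch below implements that scheme for a generic objective; a simple backtracking (Armijo) line search stands in for the exact minimization of step V, and the function names and tolerances are illustrative, not from the source.

```python
import numpy as np

def fletcher_reeves(f, grad, x0, eps=1e-8, max_iter=500):
    """Minimize f by the Fletcher-Reeves conjugate gradient scheme of
    steps II-VII: gradient g_k, direction d_k = -g_k + beta_k * d_{k-1}
    with beta_k = ||g_k||^2 / ||g_{k-1}||^2, then a step along d_k."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:
            break                          # step III stopping rule
        if g.dot(d) >= 0:                  # safeguard: restart if not a descent direction
            d = -g
        lam, fx, slope = 1.0, f(x), g.dot(d)
        while f(x + lam * d) > fx + 1e-4 * lam * slope and lam > 1e-12:
            lam *= 0.5                     # backtrack until sufficient decrease (step V stand-in)
        x = x + lam * d                    # step VI update
        g_new = grad(x)
        beta = g_new.dot(g_new) / g.dot(g) # Fletcher-Reeves ratio of step IV
        d = -g_new + beta * d
        g = g_new
    return x
```

For the hyperparameter search, f would be sse(C, D) and grad its gradient from formulas (5.37) and (5.38), with x = (C, D).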
Finally, the matrix s = A⁻¹·y is obtained with the optimized parameters.
In the present embodiment, after the two-class model is established, the classifier finally used is y(x) = sign(Σ_{p=1}^{N} α_p·K(x, x_p) + b).
In the present embodiment, in step V, T denotes the matrix transpose and H is the autocorrelation matrix, H = Aᵀ·A.
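The final decision rule can be sketched as the sign of the kernel expansion. The helper below is hypothetical: it assumes α and b come from solving the bordered system A·s = y described earlier, with the kernel written via the substituted parameter D (σ = D⁻¹).

```python
import numpy as np

def lssvm_predict(X_train, alpha, b, D, x):
    """Evaluate the trained two-class LS-SVM on one feature vector x:
    the sign of sum_p alpha_p * K(x, x_p) + b, using the RBF kernel
    K(x, x_p) = exp(-D**2 * ||x - x_p||**2)."""
    k = np.exp(-D ** 2 * np.sum((X_train - x) ** 2, axis=1))
    return 1 if k.dot(alpha) + b >= 0 else -1
```

A return value of 1 corresponds to the flame class and −1 to the no-flame class, matching the labels of step III.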
In actual operation, C₁ and D₁ in step II are determined either with the grid method or by randomly selecting values. When values are randomly selected, C₁ is a value randomly selected in (0.01, 1] and D₁ is a value randomly selected in (0.01, 50]. When the grid method is used, a grid is first divided with step 10⁻³; a three-dimensional grid plot is then made with C and D as the independent variables and the objective function of step I as the dependent variable; several parameter pairs of C and D are found by grid search; and finally the average of these pairs is taken as C₁ and D₁.

In the present embodiment, C₁ and D₁ are determined with the grid method, and B parameter pairs of C and D are found by grid search, where B is a positive integer and B = 5 to 20.
The conjugate gradient method has the features of a simple algorithm, a small storage requirement and fast convergence; it converts a multidimensional problem into a series of one-dimensional line searches. The negative gradient direction is merely the direction in which the objective value locally declines fastest, whereas conjugate search directions can effectively reduce the number of iterations and the running time.

When C and D are optimized with the grid method, by contrast, an optimal hyperparameter value of high precision is found only if the step is set very small, which is very time-consuming.
Combustion is a persistently unsteady physical process with various characterization parameters. An early fire flame image mainly exhibits an increasing flame area, edge jitter, an irregular shape and a basically stable position. The area-growth criterion is implemented on the Visual C++[115] platform, with the area change rate defined as:

AR = |A(n+1) − A(n)| / (max(A(n), A(n+1)) + eps)

where AR is the area change rate of the bright region between adjacent frames, and A(n) and A(n+1) are the areas of the suspicious region in the current frame and the next frame respectively. To prevent the area change rate from becoming infinite when no suspicious flame region exists in either of the two adjacent frames, a very small quantity eps is added to the denominator. In addition, to achieve normalization, the maximum of the bright-region areas of the two frames is taken as the denominator, so that the final result lies in (0, 1).
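The area-change criterion above is a one-liner; a sketch with an illustrative eps value:

```python
def area_change_rate(a_cur, a_next, eps=1e-6):
    """Normalized inter-frame area change rate of the bright region:
    eps guards against both frames having no candidate region, and
    dividing by the larger of the two areas keeps the result in (0, 1)."""
    return abs(a_next - a_cur) / (max(a_cur, a_next) + eps)
```

A steadily growing flame region yields a sequence of nonzero rates, whereas a static bright object (e.g. a lamp) yields rates near zero.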
The shape similarity of images is generally measured with a known similarity descriptor, and such a measure can be set up at any level of complexity. According to background subtraction, let the known image sequence be f_h(x, y), h = 1, 2, …, N₀, where (x, y) are the coordinates of each pixel in the image and N₀ is the number of frames, and let the reference image be f_o(x, y). A difference image sequence can then be defined as δ_h(x, y) = |f_h(x, y) − f_o(x, y)|, which represents the difference between each frame of the original image sequence and the reference image. The difference image sequence is then binarized into the image sequence {b_h(x, y)}; the pixels marked 1 in this sequence represent the regions of marked difference between the original sequence and the reference image, which are regarded as possible flame regions. After the influence of isolated points is filtered out, the pixels marked 1 in each frame of the sequence are labeled, giving the possible flame region Ω_h of each frame. Once the suspicious flame regions are found, flames are distinguished from interference by computing the similarity ξ_h of the difference images of successive frames; after several such similarities are obtained, their average value over several consecutive frames is used as the criterion.
Moment features: from the viewpoint of flame identification, the centroid feature of the flame image is adopted, and the stability of the flame is represented by its centroid. For a flame image, the centroid is computed first from the moments: M₀₀ is the zeroth-order moment of the target region, i.e. the area of the target region; the first-order moments (M₁₀, M₀₁) of the image in the x and y directions are computed, and the centroid is then obtained as (x̄, ȳ) = (M₁₀/M₀₀, M₀₁/M₀₀).
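The centroid computation via moments can be sketched as follows; this is a minimal numpy version, and the axis convention (x along columns, y along rows) is an assumption:

```python
import numpy as np

def centroid(img):
    """Centroid of a (grayscale or binary) target region via image
    moments: M00 is the zeroth-order moment (the area for a binary
    image), M10 and M01 the first-order moments in x and y."""
    ys, xs = np.indices(img.shape)
    m00 = img.sum()
    m10 = (xs * img).sum()   # first moment in x
    m01 = (ys * img).sum()   # first moment in y
    return m10 / m00, m01 / m00
```

Tracking this centroid over consecutive frames gives the positional-stability cue described above.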
The edge variation of an early fire flame has its own unique law; the simple and practical characteristic parameters circularity and eccentricity are used to recognize the edge variation of the flame as one of the fire criteria.

Circularity is commonly used to describe the complexity of an object boundary and is also referred to as compactness or dispersion; it is a characteristic quantity of the shape complexity of an object or region, computed from its area and perimeter. It is defined as:

C_k = P_k² / (4π·A_k),  k = 1, 2, …, n   (4.7)

In formula (4.7), C_k is the circularity of the primitive numbered k; P_k is the perimeter of the k-th primitive, i.e. the boundary length of the suspicious primitive, which can be obtained by computing the boundary chain code; A_k is the area of the k-th primitive, which for a grayscale image can be obtained by counting the bright points of the suspicious primitive and for a binary image by counting the pixels of value 1; and n is the number of suspicious flame primitives in the image. The computation of the perimeter is relatively complicated, but it can be determined by extracting the boundary chain code.
The steps for computing the circularity of the flame region are as follows:

1. Compute the area of the suspected flame region on the basis of the image segmentation;

2. Detect the consecutive boundary pixels in the vertical direction and record their number N_x; detect the consecutive boundary pixels in the horizontal direction and record their number N_y; and compute the total number of boundary pixels S_N;

3. The number of even-numbered chain codes is N_E = N_x + N_y and the number of odd-numbered chain codes is N_O = S_N − N_E; compute the perimeter with the formula P = N_E + √2·N_O;

4. Substitute the results of 1. and 3. into formula (4.7) to compute the circularity.
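Assuming the standard compactness definition C = P²/(4πA) and the chain-code perimeter P = N_E + √2·N_O described above (the exact form of formula (4.7) is garbled in the source, so this is a reconstruction under that assumption), the computation is:

```python
import math

def circularity(area, n_even, n_odd):
    """Compactness C = P^2 / (4*pi*A) with the chain-code perimeter
    P = N_E + sqrt(2) * N_O: even (axis-aligned) codes count 1 and
    odd (diagonal) codes count sqrt(2). A circle gives C = 1; more
    complex, jagged boundaries give larger values."""
    perimeter = n_even + math.sqrt(2) * n_odd
    return perimeter ** 2 / (4 * math.pi * area)
```

A flame's jagged, flickering outline typically yields a circularity well above that of round interference sources such as lamps.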
Image texture features: texture features are extracted with the gray-level co-occurrence matrix; Haralick et al. extracted 14 kinds of features from the gray-level co-occurrence matrix. In the present embodiment, the extracted image texture features comprise five features: contrast, entropy, energy, homogeneity and correlation.
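A minimal numpy sketch of the five co-occurrence features named above (contrast, entropy, energy, homogeneity, correlation); the quantization to a small number of gray levels, the (dx, dy) offset and the log base are illustrative choices, not the embodiment's:

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Contrast, entropy, energy, homogeneity and correlation from a
    gray-level co-occurrence matrix (a small subset of Haralick's 14)."""
    g = (img.astype(float) * levels / (img.max() + 1)).astype(int)
    glcm = np.zeros((levels, levels))
    h, w = g.shape
    for y in range(h - dy):                  # count co-occurring gray-level pairs
        for x in range(w - dx):
            glcm[g[y, x], g[y + dy, x + dx]] += 1
    p = glcm / glcm.sum()                    # normalize to joint probabilities
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    energy = (p ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    si = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sj = np.sqrt(((j - mu_j) ** 2 * p).sum())
    correlation = ((i - mu_i) * (j - mu_j) * p).sum() / (si * sj)
    return contrast, entropy, energy, homogeneity, correlation
```

These five values would form the texture-feature component of the feature vector of step II.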
A burning flame flickers: this characteristic appears as a time-varying distribution of the pixels of a frame over the gray levels. By computing the change of the edge pixels, the flicker law of the target pattern can be obtained. The flicker frequency of a flame typically lies in the low-frequency band of 10-20 Hz. Since video images are generally acquired at 25 Hz (25 frames/s), the distortion-free sampling requirement for recovering the flicker frequency is not met, so it is difficult to obtain the characteristic spectrum directly from the acquired video information. Regarding the flicker law, Toreyin proposed analyzing, under the RGB model, how the color value of a fixed pixel changes from frame to frame, applying a wavelet analysis to the R component of that point: if flame is present, the value at that point varies sharply, and the high-frequency components of the wavelet decomposition are nonzero. Wang Zhenhua[73] et al. proposed applying the discrete wavelet transform to decompose and reconstruct the flame-feature time series, representing the flicker law by the variation of the area. Zhang Jinhua et al. pointed out that the flame height varies greatly during flicker, that this variation law is directly linked to the flicker frequency, and that it differs greatly from that of interference sources; they therefore proposed using the change of flame height in place of the flame flicker feature for flame identification. In the present embodiment, the stroboscopic feature is extracted with the method of Zhang Jinhua et al., based on the large variation of flame height during flicker.

In the present embodiment, M = 6 in step II, and the 6 feature quantities are area, similarity, moment feature, circularity, texture feature and stroboscopic feature.
In actual operation, to test the performance of the established two-class model, 81 training samples comprising flame samples and no-flame samples were selected, each sample being 7-dimensional. For the 81 training samples, one group of data is taken out each time for prediction and classification, while the remaining 80 groups are used to optimize the hyperparameters. With the initial values C₁ = 1 and D₁ = 1, the conjugate gradient search gives a mean of 0.1386 for C with a standard deviation of 0.0286, and a mean of 0.2421 for D with a standard deviation of 0.0273; the preferred hyperparameters are thus quite stable. The two-class model of the present invention (the FR-LSSVM model) was compared with three other classification models, BP (neural network model), LS-SVM (least squares support vector machine model) and standard SVM (support vector machine model); the recognition results are shown in Table 1:

Table 1 Comparison of the recognition results of the different classification models

As can be seen from Table 1, in terms of recognition rate BP is the worst, the LS-SVM with initial values selected by grid search is also poor, and FR-LSSVM and standard SVM are clearly better than both. In terms of training time, FR-LSSVM and LS-SVM are clearly dominant and standard SVM slightly so, but the optimal hyperparameters of standard SVM are harder to obtain by search. BP neural network training is quite time-consuming and its recognition rate slightly lower, because the number of training samples is small (sample size has a large influence on the recognition rate, and the amount of feature information contained is insufficient), and because the neural network has shortcomings in convergence and local minima: its parameter selection relies on experience, and its parameter settings carry considerable uncertainty; the recognition rate could be further improved by supplementing the training samples and correcting the weights of the BP neural network. The recognition rate of standard SVM is higher than that of LS-SVM, but both its training and recognition times are longer. The hyperparameter algorithm of FR-LSSVM is more standardized, less time-consuming and more stable, reducing uncertainty; it is particularly suited to modeling small-sample, nonlinear problems and has significant advantages in both speed and precision. These algorithms are, moreover, demanding on image quality: if the image resolution is low, or the target region in the image is largely occluded by obstacles or covered or surrounded by dust, or the extracted target is incomplete or contains noise, the recognition rate may drop.
In the present embodiment, the image enhancement of step 2012 is performed with an image enhancement method based on fuzzy logic.

In practice, image enhancement with the fuzzy-logic-based method (specifically the classical Pal-King fuzzy enhancement algorithm, i.e. the Pal algorithm) has the following defects:

1. When performing the fuzzification mapping and its inverse, the Pal algorithm uses a complicated power function as the fuzzy membership function, leading to poor real-time performance and a large computational load;

2. In the fuzzy enhancement transformation, a considerable number of low gray values of the original image are hard-set to zero, causing a loss of low-gray-level information;

3. The selection of the fuzzy enhancement threshold (the crossover point X_c) is generally obtained empirically or by repeated trials, lacking theoretical guidance and carrying randomness; the parameters F_d and F_e in the membership function are adjustable, and the rational choice of the values F_d and F_e is closely related to the image processing effect;

4. In the fuzzy enhancement transformation, the iterative operation is repeated in order to apply the enhancement to the image repeatedly, but the choice of the number of iterations has no guiding theoretical principle, and edge details are affected when the number of iterations is large.

To overcome the above defects of the classical Pal-King fuzzy enhancement algorithm, in the present embodiment the enhancement processing of the digital image, i.e. the image to be enhanced, in step 2012 proceeds as follows:
Step 20121, transformation from the image domain to the fuzzy domain: according to the membership function (7), the gray value of each pixel of the image to be enhanced is mapped to its fuzzy membership in the fuzzy set, and the fuzzy set of the image to be enhanced is obtained accordingly. In the formula, x_gh is the gray value of any pixel (g, h) of the image to be enhanced, X_T is the gray threshold selected when the image to be enhanced is enhanced with the fuzzy-logic-based image enhancement method, and X_max is the maximum gray value of the image to be enhanced.

After the gray values of all the pixels of the image to be enhanced are mapped to fuzzy memberships of the fuzzy set, the memberships to which the gray values of all the pixels are mapped correspondingly constitute the fuzzy membership matrix of the fuzzy set.

Since μ_gh ∈ [0, 1] in formula (7), the defect of the classical Pal-King fuzzy enhancement algorithm of cutting many low gray values of the original image to zero after the fuzzification mapping is overcome; with the threshold X_T as the dividing line, the membership of the gray level x_gh is defined by region. Defining the membership separately in the low-gray and high-gray regions of the image also keeps the information loss of the image in the low gray levels to a minimum, thereby guaranteeing the image enhancement effect.

In the present embodiment, before the transformation from the image domain to the fuzzy domain in step 20121, the gray threshold X_T is first chosen with the between-class maximum variance method.
Step 20122, fuzzy enhancement with the fuzzy enhancement operator in the fuzzy domain: the fuzzy enhancement operator used is μ'_gh = I_r(μ_gh) = I₁(I_{r−1}(μ_gh)), where r is the number of iterations, a positive integer, r = 1, 2, …; and

I₁(μ_gh) = μ_gh²/μ_c for 0 ≤ μ_gh ≤ μ_c;  I₁(μ_gh) = 1 − (1 − μ_gh)²/(1 − μ_c) for μ_c < μ_gh ≤ 1

where μ_c = T(X_C), X_C being the crossover point, with X_C = X_T.

This nonlinear transformation increases the values of μ_gh greater than μ_c while reducing the values of μ_gh less than μ_c; here μ_c is developed into a generalized crossover point.
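A sketch of the iterated contrast-intensification operator: the classical piecewise-quadratic form around the crossover μ_c is assumed here, since the operator formula in the source is garbled; it reproduces the stated behaviour of raising memberships above μ_c toward 1 and lowering those below toward 0.

```python
import numpy as np

def intensify(mu, mu_c, r):
    """Apply r iterations of the generalized contrast-intensification
    operator (assumed classical piecewise-quadratic form): values at or
    below the crossover mu_c are squared relative to mu_c, values above
    it are pulled symmetrically toward 1. 0, mu_c and 1 are fixed points."""
    mu = np.asarray(mu, dtype=float)
    for _ in range(r):
        mu = np.where(mu <= mu_c,
                      mu ** 2 / mu_c,
                      1.0 - (1.0 - mu) ** 2 / (1.0 - mu_c))
    return mu
```

Each additional iteration sharpens the contrast further, which is why the text warns that too many iterations damage edge detail.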
Step 20123, inverse transformation from the fuzzy domain to the image domain: according to formula (6), the μ'_gh obtained after the fuzzy enhancement are inversely transformed to obtain the gray value of each pixel of the enhanced digital image, and thereby the enhanced digital image.

Since the selection of the fuzzy enhancement threshold (the crossover point X_c in the Pal algorithm) is the key to the image enhancement, in practical applications it has to be obtained empirically or by repeated trials. A more classical method is the between-class maximum variance method (Otsu), which is simple, stable and effective and is frequently used in practice. Otsu threshold selection removes the limitation of requiring repeated manual trials: the computer automatically determines the optimal threshold from the gray-level information of the image. The principle of the Otsu method is to use the between-class variance as the criterion and to select the gray value that maximizes the between-class variance as the optimal threshold, thereby realizing the automatic selection of the fuzzy enhancement threshold and avoiding manual intervention in the enhancement process.
In the present embodiment, before the gray threshold X_T is chosen with the between-class maximum variance method, all gray values whose pixel count is 0 are first found in the gray-level range of the image to be enhanced, and the processor 3 marks all the gray values found as exempt-from-computation gray values. When X_T is chosen with the between-class maximum variance method, the between-class variance is computed only with the other gray values of the gray-level range of the image to be enhanced, excluding the exempt-from-computation gray values, as candidate thresholds; the maximum between-class variance is found among the computed between-class variance values, and the gray value corresponding to the maximum between-class variance is the gray threshold X_T.
When selecting the fuzzy enhancement threshold with the traditional maximum between-class variance method (Otsu), let ns be the number of pixels with gray value s; the total pixel number is then N = Σ_{s=0}^{L-1} ns, and the probability of each gray level of the acquired digital image is ps = ns/N. A threshold XT = t divides the pixels of the image into two classes C0 and C1 by gray level, C0 = {0, 1, …, t} and C1 = {t+1, t+2, …, L-1}. Let the proportions of the total pixel number falling in classes C0 and C1 be w0(t) and w1(t), and let their mean gray values be μ0(t) and μ1(t), respectively.
For C0: w0(t) = Σ_{s=0}^{t} ps and μ0(t) = (1/w0(t)) Σ_{s=0}^{t} s·ps.
For C1: w1(t) = Σ_{s=t+1}^{L-1} ps = 1 − w0(t) and μ1(t) = (1/w1(t)) Σ_{s=t+1}^{L-1} s·ps.
Here μ = Σ_{s=0}^{L-1} s·ps is the statistical mean of the gray levels of the whole image, so that μ = w0μ0 + w1μ1.
The optimal threshold is thus XT = arg max_{0 ≤ t ≤ L-1} σ²(t), with the between-class variance σ²(t) = w0(t)·(μ0(t) − μ)² + w1(t)·(μ1(t) − μ)² = w0(t)·w1(t)·(μ0(t) − μ1(t))². (8)
The above process of automatically extracting the optimal fuzzy enhancement threshold XT is: traverse all gray levels from 0 to L−1 and find the value at which formula (8) attains its maximum; that value is the required threshold XT. Because the pixel count of the image may be zero at some gray levels, and in order to reduce the number of variance calculations, the present invention proposes an improved fast Otsu method.
Suppose the pixel count of gray level t′ is zero, so that p_{t′} = 0.
If t′ − 1 is selected as the threshold: w0(t′−1) = Σ_{s=0}^{t′−1} ps and μ0(t′−1) = (1/w0(t′−1)) Σ_{s=0}^{t′−1} s·ps.
When t′ is selected as the threshold instead, p_{t′} = 0 gives w0(t′) = w0(t′−1) and μ0(t′) = μ0(t′−1), and likewise w1(t′) = w1(t′−1) and μ1(t′) = μ1(t′−1).
It follows that:
σ²(t′−1) = σ²(t′) (2.37)
Suppose further that there are consecutive gray levels t1, t2, …, tn all with zero pixel count; by the same reasoning:
σ²(t1−1) = σ²(t1) = σ²(t2−1) = σ²(t2) = … = σ²(tn−1) = σ²(tn) (2.38)
From the above, if the pixel count of a certain gray level is zero, the between-class variance need not be calculated with that level as the threshold; the between-class variance of the nearest smaller gray level with nonzero pixel count can serve as its between-class variance value. Therefore, to find the maximum of the between-class variance quickly, multiple gray levels with equal between-class variance can be treated as the same gray level: the gray values with zero pixel count are regarded as absent, and the between-class variance σ²(t) obtained with such a value as the threshold is assigned zero without computing it. This has no effect whatsoever on the final threshold selection, but it raises the speed of the adaptive selection of the enhancement threshold.
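The improved fast Otsu selection described above — evaluate formula (8) only at gray levels with nonzero pixel count and skip the rest — can be sketched as follows (a minimal sketch; the variable names are illustrative):

```python
import numpy as np

def fast_otsu(img, levels=256):
    """Otsu threshold that computes the between-class variance only at
    gray levels with nonzero pixel count, per the improved fast method."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                    # gray-level probabilities p_s
    mu_total = np.dot(np.arange(levels), p)  # global mean gray value
    best_t, best_var = 0, -1.0
    w0 = mu0_sum = 0.0
    for t in range(levels):
        w0 += p[t]
        mu0_sum += t * p[t]
        if p[t] == 0:          # zero-count level: variance unchanged,
            continue           # so skip the computation entirely
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0, mu1 = mu0_sum / w0, (mu_total - mu0_sum) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance, formula (8)
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

The running sums w0 and mu0_sum are still updated at every level, so skipping a zero-count level changes nothing in the result — exactly the observation of equations (2.37)–(2.38).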
In this embodiment, before the fuzzy enhancement processing of step 20122, the fuzzy set of the image to be enhanced obtained in step 20121 is first smoothed with a low-pass filtering method; the filter operator actually used for the low-pass filtering is a 3 × 3 spatial-domain low-pass operator.
Because an image is easily contaminated by noise during generation and transmission, the fuzzy set of the image is first smoothed to reduce noise before the enhancement processing. In this embodiment, the smoothing of the image fuzzy set is realized by the convolution of the 3 × 3 spatial-domain low-pass operator with the image fuzzy-set matrix.
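The smoothing of the fuzzy-set matrix by convolution with a 3 × 3 low-pass operator can be sketched as follows; the patent gives its operator only as an image, so a 3 × 3 averaging kernel and replicate edge padding are assumptions made for illustration:

```python
import numpy as np

def smooth_fuzzy_set(mu):
    """Convolve the fuzzy-set matrix with an assumed 3x3 averaging
    low-pass operator; borders are handled by replicate padding."""
    kernel = np.full((3, 3), 1.0 / 9.0)      # assumed low-pass operator
    padded = np.pad(mu, 1, mode="edge")
    out = np.zeros_like(mu, dtype=float)
    for di in range(3):                      # accumulate the 9 shifted copies
        for dj in range(3):
            out += kernel[di, dj] * padded[di:di + mu.shape[0],
                                           dj:dj + mu.shape[1]]
    return out
```

Any normalized low-pass kernel can be substituted for the averaging one without changing the structure of the computation.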
In this embodiment, the image segmentation of step 2013 proceeds as follows:
Step 20131, two-dimensional histogram construction: processor 3 constructs the two-dimensional histogram of pixel gray value versus neighborhood average gray value for the image to be segmented. Any point of the two-dimensional histogram is denoted (i, j), where the abscissa i is the gray value of any pixel (m, n) of the image to be segmented, and the ordinate j is the neighborhood average gray value of that pixel (m, n). The number of occurrences of point (i, j) in the constructed two-dimensional histogram is denoted C(i, j), and the frequency of point (i, j) is denoted h(i, j), where h(i, j) = C(i, j)/(M1 × N1).
In this embodiment, the neighborhood average gray value of pixel (m, n) is calculated according to formula (6): g(m, n) = (1/d²) Σ_{i1=−(d−1)/2}^{(d−1)/2} Σ_{j1=−(d−1)/2}^{(d−1)/2} f(m + i1, n + j1), where f(m + i1, n + j1) is the gray value of pixel (m + i1, n + j1) and d is the width of the square neighborhood window of the pixel, generally an odd number.
Moreover, the gray-level range of the neighborhood average gray value g(m, n) is identical to that of the pixel gray value f(m, n), both ranges being [0, L); the two-dimensional histogram constructed in step I is therefore a square region, see Fig. 3, where L − 1 is the maximum of both g(m, n) and f(m, n).
In Fig. 3, the constructed two-dimensional histogram is divided into four regions by the threshold vector (i, j). Because the correlation between pixels inside the target image or inside the background image is very strong, the gray value of such a pixel and its neighborhood average gray value are very close; for pixels near the boundary between the target image and the background image, the difference between the pixel gray value and the neighborhood average gray value is obvious. Thus in Fig. 3 region 0# corresponds to the background image and region 1# to the target image, while regions 2# and 3# represent the distribution of boundary pixels and nearby noise points. The optimal threshold should therefore be determined within regions 0# and 1# from the pixel gray value and the neighborhood average gray value by the segmentation method of two-dimensional fuzzy partition maximum entropy, so that the information content truly representing target and background is maximized.
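The construction of the two-dimensional histogram of step 20131 — pixel gray value against d × d neighborhood average — can be sketched as follows (a minimal sketch; replicate padding at the borders is an implementation choice, and h(i, j) is normalized by the image size as in the text):

```python
import numpy as np

def two_d_histogram(img, d=5, levels=256):
    """Build the (gray value, neighborhood-average) 2-D histogram.
    Returns h with h(i, j) = C(i, j) / (M1 * N1)."""
    r = d // 2
    padded = np.pad(img.astype(float), r, mode="edge")
    # neighborhood average g(m, n) over the d x d window (formula in step 20131)
    g = np.zeros_like(img, dtype=float)
    for di in range(d):
        for dj in range(d):
            g += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    g = np.round(g / (d * d)).astype(int)
    c = np.zeros((levels, levels))
    np.add.at(c, (img.astype(int).ravel(), g.ravel()), 1)  # counts C(i, j)
    return c / img.size                                    # frequencies h(i, j)
```

Points near the diagonal of the resulting square histogram correspond to the interior of target or background (regions 0# and 1#), while off-diagonal mass corresponds to edges and noise (regions 2# and 3#).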
Step 20132, fuzzy parameter combination optimization: processor 3 calls the fuzzy-parameter combination optimization module and uses the particle swarm optimization algorithm to optimize the fuzzy parameter combination used by the image segmentation method based on two-dimensional fuzzy partition maximum entropy, obtaining the optimized fuzzy parameter combination.
In this step, before the fuzzy parameter combination is optimized, the functional relation of the two-dimensional fuzzy entropy used when segmenting the image to be segmented is first calculated from the two-dimensional histogram constructed in step 20131, and the calculated functional relation of the two-dimensional fuzzy entropy serves as the fitness function when the fuzzy parameter combination is optimized with the particle swarm optimization algorithm.
In this embodiment, the image to be segmented in step 20131 consists of a target image O and a background image P, where the membership function of the target image O is μo(i, j) = μox(i; a, b)·μoy(j; c, d) (1).
The membership function of the background image P is μb(i, j) = μbx(i; a, b)·μoy(j; c, d) + μox(i; a, b)·μby(j; c, d) + μbx(i; a, b)·μby(j; c, d) (2).
In formulas (1) and (2), μox(i; a, b) and μoy(j; c, d) are the one-dimensional membership functions of the target image O, both S-functions; μbx(i; a, b) and μby(j; c, d) are the one-dimensional membership functions of the background image P, both S-functions, with μbx(i; a, b) = 1 − μox(i; a, b) and μby(j; c, d) = 1 − μoy(j; c, d); a, b, c and d are the parameters that control the shapes of the one-dimensional membership functions of the target image O and the background image P.
Wherein,
When the functional relation of the two-dimensional fuzzy entropy in step 20132 is calculated, the minimum gmin and maximum gmax of the pixel gray value of the image to be segmented and the minimum smin and maximum smax of the neighborhood average gray value are first determined from the two-dimensional histogram constructed in step 20131. In this embodiment, gmax = smax = L − 1 and gmin = smin = 0, where L − 1 = 255.
The functional relation of the Two-dimensional Fuzzy Entropy calculated in step 20132 is:
H(P) = −Σ_{i=gmin}^{gmax} Σ_{j=smin}^{smax} (μo(i, j)·h(i, j)/p(O))·exp(1 − log(μo(i, j)·h(i, j)/p(O))) − Σ_{i=gmin}^{gmax} Σ_{j=smin}^{smax} (μb(i, j)·h(i, j)/p(B))·exp(1 − log(μb(i, j)·h(i, j)/p(B))) (3)
In formula (3), p(O) = Σ_{i=gmin}^{gmax} Σ_{j=smin}^{smax} μo(i, j)·h(i, j) and p(B) = Σ_{i=gmin}^{gmax} Σ_{j=smin}^{smax} μb(i, j)·h(i, j), where h(i, j) is the frequency of point (i, j) described in step I.
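The two-dimensional fuzzy entropy of formula (3) can be evaluated over the 2-D histogram as sketched below. Two assumptions are made loudly here: the one-dimensional S-membership functions are passed in as callables, since the patent gives their exact form only as images; and because the flattened exp(1 − log x) rendering of formula (3) collapses to a constant, the Shannon-style reading −Σ x·log x common in two-dimensional fuzzy-partition maximum-entropy work is assumed instead:

```python
import numpy as np

def fuzzy_entropy_2d(h, mu_ox, mu_oy):
    """Two-dimensional fuzzy entropy of a normalized 2-D histogram h(i, j),
    given the 1-D target membership functions mu_ox, mu_oy (callables)."""
    idx = np.arange(h.shape[0])
    mu_o = np.outer(mu_ox(idx), mu_oy(idx))   # formula (1)
    mu_b = 1.0 - mu_o                         # formula (2) expands to 1 - mu_o

    def term(mu):
        p = (mu * h).sum()                    # p(O) resp. p(B)
        x = mu * h / p
        x = x[x > 0]                          # log is defined only for x > 0
        return -(x * np.log(x)).sum()         # assumed Shannon-form reading

    return term(mu_o) + term(mu_b)
```

Note that expanding formula (2) with μbx = 1 − μox and μby = 1 − μoy gives exactly μb = 1 − μo, which the code exploits.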
When the fuzzy parameter combination is optimized with the particle swarm optimization algorithm in step 20132, the optimized fuzzy parameter combination is (a, b, c, d).
In this embodiment, the parameter-combination optimization of the two-dimensional fuzzy partition maximum entropy in step 20132 comprises the following steps:
Step II-1, population initialization: one value of the parameter combination serves as one particle, and multiple particles form one initialized population, denoted (ak, bk, ck, dk), where k is a positive integer, k = 1, 2, 3, …, K; K is a positive integer equal to the number of particles contained in the population; ak is a random value of parameter a, bk a random value of parameter b, ck a random value of parameter c and dk a random value of parameter d, with ak < bk and ck < dk.
In the present embodiment, K=15.
In actual use, K can be set to a value between 10 and 100 as needed.
Step II-2, fitness function determination: the two-dimensional fuzzy entropy H(P) of formula (3) is taken as the fitness function.
Step II-3, particle fitness evaluation: the fitness of every particle at the current time is evaluated, with the same evaluation method for all particles. When the fitness of the k-th particle at the current time is evaluated, the fitness value of the k-th particle at the current time is first calculated from the fitness function determined in step II-2 and denoted fitnessk, and fitnessk is compared with Pbestk: if the comparison yields fitnessk > Pbestk, then Pbestk = fitnessk and the individual best position of the k-th particle is updated to the position of the k-th particle at the current time, where Pbestk is the maximum fitness value reached by the k-th particle up to the current time, i.e. the individual extremum of the k-th particle. Here t is the current iteration number, a positive integer.
After the fitness values of all particles at the current time have been calculated from the fitness function determined in step II-2, the fitness value of the particle with the maximum fitness at the current time is denoted fitnesskbest and compared with gbest: if the comparison yields fitnesskbest > gbest, then gbest = fitnesskbest and the swarm best position is updated to the position of the particle with the maximum fitness at the current time, where gbest is the global extremum at the current time and the swarm best position is the colony optimal location at the current time.
Step II-4, judge whether the iteration termination condition is met: when it is met, the parameter-combination optimization process is complete; otherwise, the position and velocity of each particle at the next time are updated according to the particle swarm optimization algorithm, and the process returns to step II-3.
The iteration termination condition of step II-4 is that the current iteration number t reaches the preset maximum iteration number Imax, or that Δg ≤ e, where Δg = |gbest − gmax|, gbest is the global extremum at the current time, gmax is the originally set target fitness value, and e is a positive preset deviation.
In this embodiment, the maximum iteration number Imax = 30. In actual use, Imax can be adjusted between 20 and 200 as needed.
When the population is initialized in step II-1 in this embodiment, (ak, ck) of particle (ak, bk, ck, dk) is the initial velocity vector of the k-th particle, and (bk, dk) is the initial position of the k-th particle.
When the position and velocity of each particle at the next time are updated according to the particle swarm optimization algorithm in step II-4, the update method is the same for all particles. When the velocity and position of the k-th particle at the next time are updated, the velocity vector of the k-th particle at the next time is first calculated from the velocity vector, position, individual extremum Pbestk and global extremum of the k-th particle at the current time; the position of the k-th particle at the next time is then calculated from the position of the k-th particle at the current time and the calculated next-time velocity vector.
Also, when the velocity and position of the k-th particle at the next time are updated in step II-4, the next-time velocity v_k^{t+1} and position x_k^{t+1} of the k-th particle are calculated according to formulas (4) and (5): v_k^{t+1} = ω·v_k^t + c1·r1·(P_k^t − x_k^t) + c2·r2·(G^t − x_k^t) (4) and x_k^{t+1} = x_k^t + v_k^{t+1} (5). In formulas (4) and (5), x_k^t is the position of the k-th particle at the current time; in formula (4), v_k^t is the velocity vector of the k-th particle at the current time, P_k^t is its individual best position, G^t is the swarm best position, c1 and c2 are acceleration factors with c1 + c2 = 4, and r1 and r2 are random numbers uniformly distributed on [0, 1]; ω is the inertia weight and decreases linearly as the iteration number increases: ω = ωmax − (ωmax − ωmin)·t/Imax, where ωmax and ωmin are the preset maximum and minimum inertia weights, t is the current iteration number and Imax is the preset maximum iteration number.
In the present embodiment, ωmax=0.9, ωmin=0.4, c1=c2=2。
In this embodiment, before the population is initialized in step II-1, the search ranges of ak, bk, ck and dk are first determined: the minimum pixel gray value of the image to be segmented of step I is gmin and its maximum is gmax; the neighborhood of pixel (m, n) is d × d pixels, and the minimum of its neighborhood average gray value is smin and the maximum smax; ak = gmin, …, gmax − 1; bk = gmin + 1, …, gmax; ck = smin, …, smax − 1; dk = smin + 1, …, smax.
In the present embodiment, d=5.
In actual use, the value of d can be adjusted accordingly as needed.
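Steps II-1 through II-4 can be sketched as a standard particle swarm loop with the embodiment's settings ωmax = 0.9, ωmin = 0.4, c1 = c2 = 2, K = 15 and Imax = 30. This is a minimal sketch with an illustrative fitness function; it uses the common encoding in which each particle's position is the full parameter vector, a simplification of the patent's (ak, ck)/(bk, dk) velocity/position split:

```python
import numpy as np

def pso_optimize(fitness, lo, hi, k=15, i_max=30,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
    """Maximize `fitness` over the box [lo, hi] with a basic PSO loop."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    x = rng.uniform(lo, hi, (k, dim))            # step II-1: initial positions
    v = rng.uniform(-1.0, 1.0, (k, dim))         # initial velocities
    pbest_x = x.copy()
    pbest = np.array([fitness(p) for p in x])    # individual extrema Pbestk
    g = int(np.argmax(pbest))
    gbest_x, gbest = pbest_x[g].copy(), pbest[g] # global extremum gbest
    for t in range(1, i_max + 1):
        w = w_max - (w_max - w_min) * t / i_max  # linearly decreasing inertia
        r1, r2 = rng.random((k, dim)), rng.random((k, dim))
        v = w * v + c1 * r1 * (pbest_x - x) + c2 * r2 * (gbest_x - x)  # (4)
        x = np.clip(x + v, lo, hi)                                     # (5)
        fit = np.array([fitness(p) for p in x])  # step II-3: evaluation
        better = fit > pbest
        pbest[better], pbest_x[better] = fit[better], x[better]
        g = int(np.argmax(pbest))
        if pbest[g] > gbest:
            gbest, gbest_x = pbest[g], pbest_x[g].copy()
    return gbest_x, gbest                        # step II-4 exit on Imax
```

For the segmentation task, `fitness` would be the two-dimensional fuzzy entropy of formula (3) evaluated at the parameter combination (a, b, c, d), with the box bounds taken from the search ranges of step II-1.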
Step 20133, image segmentation: processor 3 uses the fuzzy parameter combination optimized in step 20132 and classifies each pixel of the image to be segmented according to the image segmentation method based on two-dimensional fuzzy partition maximum entropy, thereby completing the image segmentation process and obtaining the segmented target image.
In this embodiment, after the optimized fuzzy parameter combination (a, b, c, d) is obtained, the pixels are classified according to the maximum membership principle: when μo(i, j) ≥ 0.5, the pixel is assigned to the target region; otherwise it is assigned to the background region, see Fig. 4. In Fig. 4, the cells where μo(i, j) ≥ 0.5 represent the target region after image segmentation.
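The maximum-membership classification of step 20133 — assign pixel (m, n) to the target when μo(i, j) ≥ 0.5 for its (gray value, neighborhood average) pair — can be sketched as follows; since the patent's S-functions appear only as images, simple linear ramps controlled by (a, b) and (c, d) stand in for them here:

```python
import numpy as np

def segment_max_membership(img, g_avg, a, b, c, d):
    """Classify each pixel by the maximum membership principle:
    1 (target) where mu_o(i, j) >= 0.5, 0 (background) otherwise."""
    def ramp(x, lo, hi):
        # illustrative linear stand-in for the patent's S-function
        return np.clip((x.astype(float) - lo) / float(hi - lo), 0.0, 1.0)
    mu_o = ramp(img, a, b) * ramp(g_avg, c, d)   # formula (1)
    return (mu_o >= 0.5).astype(np.uint8)
```

Here `img` holds the pixel gray values i and `g_avg` the neighborhood averages j, so the product reproduces μo(i, j) = μox(i; a, b)·μoy(j; c, d) pixel-wise.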
The above is only a preferred embodiment of the present invention and does not limit the present invention in any way; any simple modification, change or equivalent structural variation made to the above embodiment according to the technical essence of the present invention still falls within the protection scope of the technical solution of the present invention.

Claims (4)

1. An image fire flame recognition method, characterized in that the method comprises the following steps:
Step 1, image acquisition: using an image acquisition unit and according to a preset sampling frequency f, digital images of the region to be detected are acquired, and the digital image acquired at each sampling time is synchronously transmitted to a processor (3); the image acquisition unit is connected with the processor (3);
Step 2, image processing: the processor (3) performs image processing on the digital images acquired at the successive sampling times of step 1 in chronological order, with the same processing method for the digital image acquired at each sampling time; the processing of the digital image acquired at any sampling time of step 1 comprises the following steps:
Step 201, image preprocessing, as follows:
Step 2011, image reception and synchronous storage: the processor (3) synchronously stores the digital image of the current sampling time just received in a data storage (4); the data storage (4) is connected with the processor (3);
Step 2012, image enhancement: the processor (3) performs enhancement processing on the digital image acquired at the current sampling time to obtain the enhanced digital image;
Step 2013, image segmentation: the processor (3) performs segmentation processing on the enhanced digital image of step 2012 to obtain a target image;
Step 202, fire recognition: the target image of step 2013 is processed with a pre-established two-class model, and the fire condition category of the region to be detected at the current sampling time is obtained; the fire condition categories comprise the two categories flame and no flame, and the two-class model is a support vector machine model that classifies the two categories flame and no flame;
The establishment process of the two-class model is as follows:
Step I, image information acquisition: using the image acquisition unit, multiple frames of digital image one of the region to be detected when a fire occurs and multiple frames of digital image two of the region to be detected when no fire occurs are acquired respectively;
Step II, feature extraction: feature extraction is performed on the multiple frames of digital image one and the multiple frames of digital image two respectively, and one group of characteristic parameters capable of representing and distinguishing the digital image is extracted from each digital image; the group of characteristic parameters comprises M characteristic quantities, the M characteristic quantities are numbered, and the M characteristic quantities form one feature vector, where M ≥ 2;
Step III, training sample acquisition: from the feature vectors of the multiple frames of digital image one and digital image two obtained after the feature extraction of step II, the feature vectors of m1 frames of digital image one and m2 frames of digital image two are selected to form a training sample set, where m1 and m2 are positive integers, m1 = 40 to 100 and m2 = 40 to 100; the number of training samples in the training sample set is m1 + m2;
Step IV, two-class model establishment, as follows:
Step IV-1, kernel function selection: a radial basis function is selected as the kernel function of the two-class model;
Step IV-2, classification function determination: once the penalty factor γ and the kernel parameter σ² of the radial basis function selected in step IV-1 are determined, the classification function of the two-class model is obtained and the establishment process of the two-class model is complete; here γ = C⁻², σ = D⁻¹, 0.01 < C ≤ 10 and 0.01 < D ≤ 50;
When determining the penalty factor γ and the kernel parameter σ², the parameters C and D are first optimized with the conjugate gradient method to obtain the optimized C and D, which are then converted into the penalty factor γ and the kernel parameter σ² according to γ = C⁻² and σ = D⁻¹;
Step V, two-class model training: the m1 + m2 training samples of the training sample set of step III are input into the two-class model established in step IV for training;
When image enhancement is performed in step 2012, the enhancement processing uses an image enhancement method based on fuzzy logic;
When the enhancement processing uses the image enhancement method based on fuzzy logic, the process is as follows:
Step 20121, transformation from the image domain to the fuzzy domain: according to the membership function, the gray value of each pixel of the image to be enhanced is mapped to a fuzzy membership of the fuzzy set, and the fuzzy set of the image to be enhanced is obtained accordingly; in the formula, Xgh is the gray value of any pixel (g, h) of the image to be enhanced, XT is the gray threshold selected when the enhancement processing is performed on the image to be enhanced with the image enhancement method based on fuzzy logic, and Xmax is the maximum gray value of the image to be enhanced;
Step 20122, fuzzy enhancement processing with a fuzzy enhancement operator in the fuzzy domain: the fuzzy enhancement operator used is μ′gh = Ir(μgh) = I1(I_{r−1}(μgh)), where r is the iteration number, a positive integer, r = 1, 2, …; here I1(μgh) = μgh²/μc for 0 ≤ μgh ≤ μc and I1(μgh) = 1 − (1 − μgh)²/(1 − μc) for μc < μgh ≤ 1, where μc = T(XC), XC being the crossover point with XC = XT;
Step 20123, inverse transformation from the fuzzy domain back to the image domain: according to the inverse of the membership function, the μ′gh obtained after the fuzzy enhancement processing is inverse-transformed to obtain the gray value of each pixel of the enhanced digital image, and the enhanced digital image is obtained;
Before the transformation from the image domain to the fuzzy domain in step 20121, the gray threshold XT is first selected with the maximum between-class variance method; before XT is selected with the maximum between-class variance method, all gray values whose pixel count is 0 are first found within the gray-level range of the image to be enhanced, and the processor (3) marks all such gray values as exempt-from-calculation gray values; when XT is selected with the maximum between-class variance method, the between-class variance is calculated only for the gray values of the gray-level range of the image to be enhanced other than the exempt-from-calculation gray values, each taken as a candidate threshold; the maximum between-class variance is found among the calculated values, and the gray value corresponding to the maximum between-class variance is the gray threshold XT;
In step 1, the size of the digital image acquired at each sampling time is M1 × N1 pixels;
When step 2013 performs image segmentation, the process is as follows:
Step 20131, two-dimensional histogram construction: the processor (3) constructs the two-dimensional histogram of pixel gray value versus neighborhood average gray value for the image to be segmented; any point of the two-dimensional histogram is denoted (i, j), where the abscissa i is the gray value of any pixel (m, n) of the image to be segmented, and the ordinate j is the neighborhood average gray value of that pixel (m, n); the number of occurrences of point (i, j) in the constructed two-dimensional histogram is denoted C(i, j), and the frequency of point (i, j) is denoted h(i, j), where h(i, j) = C(i, j)/(M1 × N1);
Step 20132, fuzzy parameter combination optimization: the processor (3) calls a fuzzy-parameter combination optimization module and uses the particle swarm optimization algorithm to optimize the fuzzy parameter combination used by the image segmentation method based on two-dimensional fuzzy partition maximum entropy, obtaining the optimized fuzzy parameter combination;
In this step, before the fuzzy parameter combination is optimized, the functional relation of the two-dimensional fuzzy entropy used when segmenting the image to be segmented is first calculated from the two-dimensional histogram constructed in step 20131, and the calculated functional relation of the two-dimensional fuzzy entropy serves as the fitness function when the fuzzy parameter combination is optimized with the particle swarm optimization algorithm;
Step 20133, image segmentation: the processor (3) uses the fuzzy parameter combination optimized in step 20132 and classifies each pixel of the image to be segmented according to the image segmentation method based on two-dimensional fuzzy partition maximum entropy, thereby completing the image segmentation process and obtaining the segmented target image;
The image to be segmented in step 20131 consists of a target image O and a background image P; the membership function of the target image O is μo(i, j) = μox(i; a, b)·μoy(j; c, d) (1); the membership function of the background image P is μb(i, j) = μbx(i; a, b)·μoy(j; c, d) + μox(i; a, b)·μby(j; c, d) + μbx(i; a, b)·μby(j; c, d) (2);
In formulas (1) and (2), μox(i; a, b) and μoy(j; c, d) are the one-dimensional membership functions of the target image O, both S-functions; μbx(i; a, b) and μby(j; c, d) are the one-dimensional membership functions of the background image P, both S-functions, with μbx(i; a, b) = 1 − μox(i; a, b) and μby(j; c, d) = 1 − μoy(j; c, d); a, b, c and d are the parameters that control the shapes of the one-dimensional membership functions of the target image O and the background image P;
When the functional relation of the two-dimensional fuzzy entropy in step 20132 is calculated, the minimum gmin and maximum gmax of the pixel gray value of the image to be segmented and the minimum smin and maximum smax of the neighborhood average gray value are first determined from the two-dimensional histogram constructed in step 20131;
The functional relation of the two-dimensional fuzzy entropy calculated in step 20132 is:
H(P) = −Σ_{i=gmin}^{gmax} Σ_{j=smin}^{smax} (μo(i, j)·h(i, j)/p(O))·exp(1 − log(μo(i, j)·h(i, j)/p(O))) − Σ_{i=gmin}^{gmax} Σ_{j=smin}^{smax} (μb(i, j)·h(i, j)/p(B))·exp(1 − log(μb(i, j)·h(i, j)/p(B))) (3)
In formula (3), p(O) = Σ_{i=gmin}^{gmax} Σ_{j=smin}^{smax} μo(i, j)·h(i, j) and p(B) = Σ_{i=gmin}^{gmax} Σ_{j=smin}^{smax} μb(i, j)·h(i, j), where h(i, j) is the frequency of point (i, j) described in step I;
When the fuzzy parameter combination is optimized with the particle swarm optimization algorithm in step 20132, the optimized fuzzy parameter combination is (a, b, c, d);
The parameter-combination optimization of the two-dimensional fuzzy partition maximum entropy in step 20132 comprises the following steps:
Step II-1, population initialization: one value of the parameter combination serves as one particle, and multiple particles form one initialized population, denoted (ak, bk, ck, dk), where k is a positive integer, k = 1, 2, 3, …, K; K is a positive integer equal to the number of particles contained in the population; ak is a random value of parameter a, bk a random value of parameter b, ck a random value of parameter c and dk a random value of parameter d, with ak < bk and ck < dk;
Step II-2, fitness function determination: the two-dimensional fuzzy entropy
H(P) = −Σ_{i=gmin}^{gmax} Σ_{j=smin}^{smax} (μo(i, j)·h(i, j)/p(O))·exp(1 − log(μo(i, j)·h(i, j)/p(O))) − Σ_{i=gmin}^{gmax} Σ_{j=smin}^{smax} (μb(i, j)·h(i, j)/p(B))·exp(1 − log(μb(i, j)·h(i, j)/p(B))) (3)
is taken as the fitness function;
Step II-3, particle fitness evaluation: the fitness of every particle at the current time is evaluated, with the same evaluation method for all particles; when the fitness of the k-th particle at the current time is evaluated, the fitness value of the k-th particle at the current time is first calculated from the fitness function determined in step II-2 and denoted fitnessk, and fitnessk is compared with Pbestk: if the comparison yields fitnessk > Pbestk, then Pbestk = fitnessk and the individual best position of the k-th particle is updated to the position of the k-th particle at the current time, where Pbestk is the maximum fitness value reached by the k-th particle up to the current time, i.e. the individual extremum of the k-th particle; here t is the current iteration number, a positive integer;
After the fitness values of all particles at the current time have been calculated from the fitness function determined in step II-2, the fitness value of the particle with the maximum fitness at the current time is denoted fitnesskbest and compared with gbest: if the comparison yields fitnesskbest > gbest, then gbest = fitnesskbest and the swarm best position is updated to the position of the particle with the maximum fitness at the current time, where gbest is the global extremum at the current time and the swarm best position is the colony optimal location at the current time;
Step II-4, judge whether the iteration termination condition is met: when it is met, the parameter-combination optimization process is complete; otherwise, the position and velocity of each particle at the next time are updated according to the particle swarm optimization algorithm, and the process returns to step II-3; the iteration termination condition of step II-4 is that the current iteration number t reaches the preset maximum iteration number Imax, or that Δg ≤ e, where Δg = |gbest − gmax|, gbest is the global extremum at the current time, gmax is the originally set target fitness value, and e is a positive preset deviation.
2. The image fire flame recognition method according to claim 1, characterized in that: the total number of training samples in the training sample set of step III is N, with N = m1 + m2; before the two-class model is established in step IV, the N training samples of the training sample set are first numbered, the numbering of the p-th training sample being p, where p is a positive integer and p = 1, 2, …, N; the p-th training sample is denoted (xp, yp), where xp is the characteristic parameter of the p-th training sample and yp is the category number of the p-th training sample, yp = 1 or −1, the category number 1 indicating flame and the category number −1 indicating no flame;
When the parameters C and D are optimized with the conjugate gradient method in step IV-2, the m1 + m2 training samples of the training sample set of step III are used, and the optimization process is as follows:
Step I, objective function determination: the objective function is sse(C, D), the leave-one-out prediction sum of squares, sse(C, D) = Σ_{p=1}^{N} ep², where p is the number of each training sample of the training sample set and ep is the prediction error of the two-class model established in step IV for the p-th training sample; here s(p⁻) is the vector formed by the remaining elements of matrix s after the p-th element is removed; s(p) is the p-th element of matrix s; (A⁻¹)(p⁻, p) is the column vector formed by the remaining elements of the p-th column of matrix A⁻¹ after the p-th element is removed, and (A⁻¹)(p, p) is the p-th element of the p-th column of matrix A⁻¹; K̃(p⁻, p) is the column vector formed by the remaining elements of the p-th column of matrix K̃ after the p-th element is removed, matrix K̃ being the augmented matrix of matrix K; A⁻¹ denotes the inverse of matrix A; matrix I is the identity matrix, matrix I_N = [1, 1, …, 1]^T, T denoting matrix transposition, I_N containing N elements all equal to 1; matrix s = A⁻¹·y, matrix y = [y1, y2, …, yN]^T, where y1, y2, …, yN are the categories of the N training samples of the training sample set;
Step II, Initial parameter sets:To the initial value C of parameter C and D1And D1It is determined respectively, and to identification error threshold epsilon Set and ε > 0;
The gradient g of step III, current iterationkCalculate:According to formulaCalculate object function in step I To CkAnd DkGradient gk, k be iterations and k=1,2 ...;If | | gk| |≤ε, stop calculating, now CkAnd DkIt is respectively excellent Parameter C and D after change;Otherwise, into step IV;
Wherein,
In formulaIt is matrixPth row remove the The column vector of remaining element composition after p element;s(p-) to remove the composition of remaining element after p-th element in matrix s Vector, epBy two disaggregated models set up in step IV are to p-th predicated error of training sample;
The direction of search d of step IV, current iterationkCalculate:According to formulaCalculate current changing The direction of search d in generationk, d in formulak-1It is the direction of search of -1 iteration of kth, βk=| | gk||/||gk-1||2, gk-1For kth -1 time The gradient of iteration;
The step-size in search λ of step V, current iterationkIt is determined that:The identified direction of search d along step IVkScan for, find out Meet formula
Step-size in search λk, in formulaRepresenting to be found in (0 ,+∞) makesReach the step-length λ of minimum valuek
Step VI, calculate C_{k+1} and D_{k+1} according to the formulas C_{k+1} = C_k + λ_k d_k^(C) and D_{k+1} = D_k + λ_k d_k^(D), where d_k^(C) and d_k^(D) are the two components of the search direction d_k;
Step VII, set k = k + 1, then return to step III for the next iteration.
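Steps II through VII form a single conjugate-gradient loop over (C, D). The sketch below wires them together around user-supplied sse and grad callables, with a coarse grid standing in for Step V's exact line search (all names are illustrative, not from the patent):

```python
import numpy as np

def cg_optimize(sse, grad, C1, D1, eps=1e-6, max_iter=100):
    """Fletcher-Reeves conjugate-gradient optimisation of (C, D),
    following the structure of Steps II-VII."""
    x = np.array([C1, D1], dtype=float)
    g = grad(*x)                                   # Step III: gradient
    d = -g                                         # first search direction
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:               # Step III: stopping test
            break
        # Step V: coarse 1-D search for lambda over (0, 1]
        lam = min(np.linspace(0, 1, 101)[1:], key=lambda t: sse(*(x + t * d)))
        x = x + lam * d                            # Step VI: update C, D
        g_new = grad(*x)
        beta = float(g_new @ g_new) / float(g @ g) # Step IV: Fletcher-Reeves
        d = -g_new + beta * d
        g = g_new                                  # Step VII: next iteration
    return x
```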
The RBF selected in step IV-1 has the regression function f(x_s) = Σ_{t=1}^{N} α_t K(x_s, x_t) + b, where α_t and b are regression parameters, K(·, ·) is the RBF kernel chosen in step IV-1, s is a positive integer with s = 1, 2, …, N, and t is a positive integer with t = 1, 2, …, N.
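Once α_t and b are trained, the regression function above is a plain kernel expansion. A minimal sketch, assuming the common RBF scaling exp(−‖x − x_t‖²/σ²) (the exact scaling convention is not fixed by the claim):

```python
import numpy as np

def lssvm_decision(x, X_train, alpha, b, sigma2=1.0):
    """Evaluate f(x) = sum_t alpha_t * K(x, x_t) + b with an RBF kernel."""
    k = np.exp(-((X_train - x) ** 2).sum(axis=1) / sigma2)
    return float(alpha @ k + b)
```

For two-class recognition the sign of f(x) decides between the "flame" and "no flame" classes.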
3. The image fire flame recognition method according to claim 1 or 2, characterized in that: M = 6 in step II, and the 6 characteristic quantities are respectively area, similarity, moment features, consistency, texture features and flicker characteristic.
4. The image fire flame recognition method according to claim 2, characterized in that: when determining C1 and D1 in step II, a grid search method or a method of randomly selecting values is used; when C1 and D1 are determined by randomly selecting values, C1 is a value randomly drawn from (0.01, 1] and D1 is a value randomly drawn from (0.01, 50]; when C1 and D1 are determined by grid search, the grid is first divided with a step length of 10⁻³, then a three-dimensional grid graph is drawn with C and D as the independent variables and the objective function of step I as the dependent variable, after which multiple groups of parameters C and D are found by grid search, and finally these groups of parameters are averaged to obtain C1 and D1.
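The grid-search initialization of claim 4 can be sketched as: evaluate the objective on a grid over the stated ranges (0.01, 1] × (0.01, 50], keep several best (C, D) pairs, and average them. The coarse 20×20 grid and the `top` parameter below are illustrative choices to keep the sketch cheap; the claim itself specifies a 10⁻³ step length.

```python
import numpy as np

def grid_init(sse, C_range=(0.01, 1.0), D_range=(0.01, 50.0), n=20, top=5):
    """Grid-search initialisation of (C1, D1): score an n x n grid,
    keep the `top` lowest-SSE pairs, and average them."""
    Cs = np.linspace(*C_range, n)
    Ds = np.linspace(*D_range, n)
    scored = [(sse(C, D), C, D) for C in Cs for D in Ds]
    best = sorted(scored)[:top]                   # lowest objective values
    C1 = float(np.mean([c for _, c, _ in best]))
    D1 = float(np.mean([d for _, _, d in best]))
    return C1, D1
```

Averaging several near-optimal grid points, rather than taking the single best one, makes the initial point less sensitive to grid placement.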
CN201410148888.3A 2014-04-14 2014-04-14 A kind of Image Fire Flame recognition methods Active CN103886344B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410148888.3A CN103886344B (en) 2014-04-14 2014-04-14 A kind of Image Fire Flame recognition methods

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410148888.3A CN103886344B (en) 2014-04-14 2014-04-14 A kind of Image Fire Flame recognition methods

Publications (2)

Publication Number Publication Date
CN103886344A CN103886344A (en) 2014-06-25
CN103886344B true CN103886344B (en) 2017-07-07

Family

ID=50955227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410148888.3A Active CN103886344B (en) 2014-04-14 2014-04-14 A kind of Image Fire Flame recognition methods

Country Status (1)

Country Link
CN (1) CN103886344B (en)

Families Citing this family (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10275719B2 (en) * 2015-01-29 2019-04-30 Qualcomm Incorporated Hyper-parameter selection for deep convolutional networks
CN105809643B (en) * 2016-03-14 2018-07-06 浙江外国语学院 A kind of image enchancing method based on adaptive block channel extrusion
CN105976365A (en) * 2016-04-28 2016-09-28 天津大学 Nocturnal fire disaster video detection method
CN107015852A (en) * 2016-06-15 2017-08-04 珠江水利委员会珠江水利科学研究院 A kind of extensive Hydropower Stations multi-core parallel concurrent Optimization Scheduling
CN106204553B (en) * 2016-06-30 2019-03-08 江苏理工学院 A kind of image fast segmentation method based on least square method curve matching
CN106355812A (en) * 2016-08-10 2017-01-25 安徽理工大学 Fire hazard prediction method based on temperature fields
CN107316012B (en) * 2017-06-14 2020-12-22 华南理工大学 Fire detection and tracking method of small unmanned helicopter
CN107704820A (en) * 2017-09-28 2018-02-16 深圳市鑫汇达机械设计有限公司 A kind of effective coal-mine fire detecting system
CN108038510A (en) * 2017-12-22 2018-05-15 湖南源信光电科技股份有限公司 A kind of detection method based on doubtful flame region feature
CN108416968B (en) * 2018-01-31 2020-09-01 国家能源投资集团有限责任公司 Fire early warning method and device
CN108319964B (en) * 2018-02-07 2021-10-22 嘉兴学院 Fire image recognition method based on mixed features and manifold learning
CN110120142B (en) * 2018-02-07 2021-12-31 中国石油化工股份有限公司 Fire smoke video intelligent monitoring early warning system and early warning method
CN108280755A (en) * 2018-02-28 2018-07-13 阿里巴巴集团控股有限公司 The recognition methods of suspicious money laundering clique and identification device
CN108537150B (en) * 2018-03-27 2019-01-18 长沙英迈智越信息技术有限公司 Reflective processing system based on image recognition
CN108664980A (en) * 2018-05-14 2018-10-16 昆明理工大学 A kind of sun crown ring structure recognition methods based on guiding filtering and wavelet transformation
CN108765335B (en) * 2018-05-25 2022-08-02 电子科技大学 Forest fire detection method based on remote sensing image
CN108875626A (en) * 2018-06-13 2018-11-23 江苏电力信息技术有限公司 A kind of static fire detection method of transmission line of electricity
CN108876741B (en) * 2018-06-22 2021-08-24 中国矿业大学(北京) Image enhancement method under complex illumination condition
CN109145796A (en) * 2018-08-13 2019-01-04 福建和盛高科技产业有限公司 A kind of identification of electric power piping lane fire source and fire point distance measuring method based on video image convergence analysis algorithm
CN109204106B (en) * 2018-08-27 2020-08-07 浙江大丰实业股份有限公司 Stage equipment moving system
CN109272496B (en) * 2018-09-04 2022-05-03 西安科技大学 Fire image identification method for coal mine fire video monitoring
CN109584423A (en) * 2018-12-13 2019-04-05 佛山单常科技有限公司 A kind of intelligent unlocking system
CN109685266A (en) * 2018-12-21 2019-04-26 长安大学 A kind of lithium battery bin fire prediction method and system based on SVM
CN109887220A (en) * 2019-01-23 2019-06-14 珠海格力电器股份有限公司 The control method of air-conditioning and air-conditioning
CN109919071B (en) * 2019-02-28 2021-05-04 沈阳天眼智云信息科技有限公司 Flame identification method based on infrared multi-feature combined technology
CN110033040B (en) * 2019-04-12 2021-05-04 华南师范大学 Flame identification method, system, medium and equipment
CN110163278B (en) * 2019-05-16 2023-04-07 东南大学 Flame stability monitoring method based on image recognition
CN110334664B (en) * 2019-07-09 2021-06-04 中南大学 Statistical method and device for alloy precipitated phase fraction, electronic equipment and medium
CN111105587B (en) * 2019-12-31 2021-01-01 广州思瑞智能科技有限公司 Intelligent flame detection method and device, detector and storage medium
CN111476965B (en) * 2020-03-13 2021-08-03 深圳信息职业技术学院 Method for constructing fire detection model, fire detection method and related equipment
CN112115766A (en) * 2020-07-28 2020-12-22 辽宁长江智能科技股份有限公司 Flame identification method, device, equipment and storage medium based on video picture
CN112149509B (en) * 2020-08-25 2023-05-09 浙江中控信息产业股份有限公司 Traffic signal lamp fault detection method integrating deep learning and image processing
CN112215831B (en) * 2020-10-21 2022-08-26 厦门市美亚柏科信息股份有限公司 Method and system for evaluating quality of face image
CN113158719B (en) * 2020-11-30 2022-09-06 齐鲁工业大学 Image identification method for fire disaster of photovoltaic power station
CN112396026A (en) * 2020-11-30 2021-02-23 北京华正明天信息技术股份有限公司 Fire image feature extraction method based on feature aggregation and dense connection
CN114220046B (en) * 2021-11-25 2023-05-26 中国民用航空飞行学院 Fire image fuzzy membership degree identification method based on gray comprehensive association degree
CN114530025B (en) * 2021-12-31 2024-03-08 武汉烽理光电技术有限公司 Tunnel fire alarming method and device based on array grating and electronic equipment
CN117152474A (en) * 2023-07-25 2023-12-01 华能核能技术研究院有限公司 High-temperature gas cooled reactor flame identification method based on K-means clustering algorithm
CN116701409B (en) * 2023-08-07 2023-11-03 湖南永蓝检测技术股份有限公司 Sensor data storage method for intelligent on-line detection of environment
CN117612319A (en) * 2024-01-24 2024-02-27 上海意静信息科技有限公司 Alarm information grading early warning method and system based on sensor and picture

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101393603A (en) * 2008-10-09 2009-03-25 浙江大学 Method for recognizing and detecting tunnel fire disaster flame

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101393603A (en) * 2008-10-09 2009-03-25 浙江大学 Method for recognizing and detecting tunnel fire disaster flame

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Fire Detection Mechanism using Fuzzy Logic; Vikshant Khanna et al.; International Journal of Computer Application; 2013-03-31; Vol. 65, No. 12; full text *
Application of fuzzy clustering genetic algorithm in identification of spontaneous combustion fires of residual coal; Zhao Min et al.; Coal Technology; 2014-03-31; Vol. 33, No. 3; full text *
Application of RS-SVM model in fire recognition; Sun Fuzhi et al.; Computer Engineering and Applications; 2010-12-31; Vol. 46, No. 3; full text *

Also Published As

Publication number Publication date
CN103886344A (en) 2014-06-25

Similar Documents

Publication Publication Date Title
CN103886344B (en) A kind of Image Fire Flame recognition methods
CN103871029B (en) A kind of image enhaucament and dividing method
CN103942557B (en) A kind of underground coal mine image pre-processing method
EP3614308B1 (en) Joint deep learning for land cover and land use classification
CN108764085B (en) Crowd counting method based on generation of confrontation network
CN107016357A (en) A kind of video pedestrian detection method based on time-domain convolutional neural networks
CN109508710A (en) Based on the unmanned vehicle night-environment cognitive method for improving YOLOv3 network
CN108932479A (en) A kind of human body anomaly detection method
CN110458844A (en) A kind of semantic segmentation method of low illumination scene
CN107229929A (en) A kind of license plate locating method based on R CNN
CN106127148A (en) A kind of escalator passenger's unusual checking algorithm based on machine vision
KR101084719B1 (en) Intelligent smoke detection system using image processing and computational intelligence
CN108319964A (en) A kind of fire image recognition methods based on composite character and manifold learning
CN107133496B (en) Gene feature extraction method based on manifold learning and closed-loop deep convolution double-network model
CN103049751A (en) Improved weighting region matching high-altitude video pedestrian recognizing method
CN105320950A (en) A video human face living body detection method
CN106295124A (en) Utilize the method that multiple image detecting technique comprehensively analyzes gene polyadenylation signal figure likelihood probability amount
CN106934386A (en) A kind of natural scene character detecting method and system based on from heuristic strategies
CN109903339B (en) Video group figure positioning detection method based on multi-dimensional fusion features
CN106373146A (en) Target tracking method based on fuzzy learning
CN107230267A (en) Intelligence In Baogang Kindergarten based on face recognition algorithms is registered method
CN107463954A (en) A kind of template matches recognition methods for obscuring different spectrogram picture
CN113221655B (en) Face spoofing detection method based on feature space constraint
Chen et al. Agricultural remote sensing image cultivated land extraction technology based on deep learning
CN114241511B (en) Weak supervision pedestrian detection method, system, medium, equipment and processing terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210125

Address after: 710077 718, block a, Haixing city square, Keji Road, high tech Zone, Xi'an City, Shaanxi Province

Patentee after: Xi'an zhicaiquan Technology Transfer Center Co.,Ltd.

Address before: 710054 No. 58, middle section, Yanta Road, Shaanxi, Xi'an

Patentee before: XI'AN University OF SCIENCE AND TECHNOLOGY

TR01 Transfer of patent right

Effective date of registration: 20211102

Address after: 257000 Room 308, building 3, Dongying Software Park, No. 228, Nanyi Road, development zone, Dongying City, Shandong Province

Patentee after: Dongkai Shuke (Shandong) Industrial Park Co.,Ltd.

Address before: 710077 718, block a, Haixing city square, Keji Road, high tech Zone, Xi'an City, Shaanxi Province

Patentee before: Xi'an zhicaiquan Technology Transfer Center Co.,Ltd.