Summary of the invention
The technical problem to be solved by the invention, in view of the above-mentioned deficiencies of the prior art, is to provide an image-based fire flame identification method whose steps are simple, which is convenient to implement, easy to operate, highly reliable and effective in use, and which can solve the problems of existing video fire detection systems in complex environments, such as low reliability, high false-alarm and missed-alarm rates, and poor performance.
To solve the above technical problems, the technical solution adopted by the present invention is an image-based fire flame identification method, characterized in that the method comprises the following steps:
Step one, image acquisition: using an image acquisition unit and at a preset sampling frequency f, digital images of the region to be detected are acquired, and the digital image collected at each sampling instant is synchronously transmitted to a processor; the image acquisition unit is connected to the processor.
Step two, image processing: the processor performs image processing on the digital images collected at the successive sampling instants in step one, in chronological order, using the same processing method for every sampling instant; processing the digital image collected at any one sampling instant comprises the following steps:
Step 201, image preprocessing, as follows:
Step 2011, image reception and synchronous storage: the processor synchronously stores the digital image collected at the current sampling instant in a data memory, the data memory being connected to the processor;
Step 2012, image enhancement: the processor performs enhancement processing on the digital image collected at the current sampling instant to obtain the enhanced digital image;
Step 2013, image segmentation: the processor performs segmentation processing on the enhanced digital image of step 2012 to obtain a target image;
Step 202, fire identification: the target image of step 2013 is processed using a pre-established two-class model, and the fire condition category of the region to be detected at the current sampling instant is obtained; the fire condition categories comprise the two classes "flame" and "no flame", the two-class model being a support vector machine model that classifies these two classes.
The two-class model is established as follows:
Step I, image information acquisition: using the image acquisition unit, multiple frames of digital image one of the region to be detected during a fire and multiple frames of digital image two of the region to be detected with no fire are acquired;
Step II, feature extraction: feature extraction is performed on each frame of digital image one and digital image two, and from each digital image one group of characteristic parameters capable of representing and distinguishing that image is extracted; this group of characteristic parameters comprises M feature quantities, the M feature quantities are numbered, and together they form a feature vector, where M ≥ 2;
Step III, training sample acquisition: from the feature vectors of the multiple frames of digital image one and digital image two obtained after feature extraction in step II, the feature vectors of m1 frames of digital image one and of m2 frames of digital image two are selected to form a training sample set; m1 and m2 are positive integers with m1 = 40 to 100 and m2 = 40 to 100, so the training sample set contains m1 + m2 training samples;
Step IV, establishing the two-class model, as follows:
Step IV-1, kernel function selection: a radial basis function (RBF) is selected as the kernel function of the two-class model;
Step IV-2, classification function determination: once the penalty parameter γ and the kernel parameter σ² of the RBF selected in step IV-1 have been determined, the classification function of the two-class model is obtained and the establishment of the two-class model is complete; here γ = C⁻², σ = D⁻¹, 0.01 < C ≤ 10 and 0.01 < D ≤ 50.
To determine the penalty parameter γ and the kernel parameter σ², the parameters C and D are first optimized with the conjugate gradient method, and the optimized C and D are then converted into the penalty parameter γ and the kernel parameter σ² according to γ = C⁻² and σ = D⁻¹.
Step V, two-class model training: the m1 + m2 training samples of the training sample set of step III are input into the two-class model established in step IV for training.
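Steps I to V above can be sketched end to end in code. The snippet below is a minimal illustration, not the patented method itself: it uses scikit-learn's SVC with an RBF kernel (a plain SVM standing in for the text's least-squares SVM), and random vectors standing in for the extracted flame/no-flame feature vectors of step II.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
M = 6                      # number of feature quantities per image (step II)
m1, m2 = 60, 60            # flame / no-flame sample counts (step III, 40..100)

# Synthetic stand-ins for the extracted feature vectors; the real features
# would come from step II (area, similarity, moments, texture, flicker, ...).
X_flame    = rng.normal(loc=1.0, scale=0.3, size=(m1, M))
X_no_flame = rng.normal(loc=-1.0, scale=0.3, size=(m2, M))

X = np.vstack([X_flame, X_no_flame])
y = np.hstack([np.ones(m1), -np.ones(m2)])   # +1 = flame, -1 = no flame

# RBF-kernel two-class model (step IV); SVC's gamma plays the role of
# 1 / (2 * sigma^2) in the text's notation.
clf = SVC(kernel="rbf", C=10.0, gamma=0.5)
clf.fit(X, y)                                # step V: training

print(clf.predict(X[:1]))   # classify one image's feature vector
```

In the patented method the two hyperparameters would come from the conjugate gradient optimization of step IV-2 rather than being fixed by hand as here.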
In the above image-based fire flame identification method, the total number of training samples in the training sample set of step III is N, with N = m1 + m2; before the two-class model is established in step IV, the N training samples of the training sample set are numbered, the p-th training sample having number p, where p is a positive integer and p = 1, 2, …, N. The p-th training sample is denoted (x_p, y_p), where x_p is the characteristic parameter of the p-th training sample and y_p is its class label, y_p = 1 or −1, a class label of 1 indicating flame and −1 indicating no flame.
When the parameters C and D are optimized with the conjugate gradient method in step IV-2, the m1 + m2 training samples of the training sample set of step III are used; the optimization proceeds as follows:
Step i, objective function determination: the objective function is the leave-one-out prediction error sum of squares sse(C, D) of formula (1), where p is the number of each training sample in the training sample set and e_p is the leave-one-out prediction error of the two-class model of step IV for the p-th training sample. In the corresponding formulas, s(p) is the p-th element of the vector s and s(p⁻) is the vector formed by the remaining elements of s after its p-th element is removed; (A⁻¹)(p⁻, p) is the column vector formed by the remaining elements of the p-th column of the matrix A⁻¹ after its p-th element is removed, and (A⁻¹)(p, p) is the p-th element of the p-th column of A⁻¹; K̃(p⁻) is the column vector formed by the remaining elements of the p-th column of the matrix K̃ after its p-th element is removed, the matrix K̃ being the augmented matrix of the kernel matrix K. The matrix A is built from the kernel matrix K, the penalty parameter γ, the identity matrix I and the vector I_N = [1, 1, …, 1]ᵀ, where I_N contains N elements all equal to 1 and T denotes matrix transposition; A⁻¹ denotes the inverse of A. The vector s satisfies s = A⁻¹y, where in the vector y the components y1, y2, …, yN are the class labels of the N training samples of the training sample set.
Step ii, initial parameter setting: initial values C1 and D1 of the parameters C and D are determined, and an identification error threshold ε is set, with ε > 0;
Step iii, gradient g_k of the current iteration: the gradient g_k of the objective function of step i with respect to C_k and D_k is calculated, where k is the iteration number and k = 1, 2, …; if ||g_k|| ≤ ε, the calculation stops, and C_k and D_k are the optimized parameters C and D; otherwise, proceed to step iv.
In the formulas for the components of the gradient, K̃(p⁻) is the column vector formed by the remaining elements of the p-th column of the matrix K̃ after its p-th element is removed, s(p⁻) is the vector formed by the remaining elements of the vector s after its p-th element is removed, and e_p is the prediction error of the two-class model established in step IV for the p-th training sample.
Step iv, search direction d_k of the current iteration: the search direction of the current iteration is calculated as d_k = −g_k + β_k d_{k−1}, where d_{k−1} is the search direction of the (k−1)-th iteration, β_k = ||g_k||² / ||g_{k−1}||², and g_{k−1} is the gradient of the (k−1)-th iteration;
Step v, search step λ_k of the current iteration: a search is carried out along the search direction d_k determined in step iv to find the step λ_k at which the objective function along the search direction reaches its minimum value over λ ∈ (0, +∞);
Step vi, C_{k+1} and D_{k+1} are calculated according to C_{k+1} = C_k + λ_k d_k^C and D_{k+1} = D_k + λ_k d_k^D, where d_k^C and d_k^D are the components of the search direction d_k;
Step vii, set k = k + 1, return to step iii, and carry out the next iteration.
The RBF selected in step IV-1 is K(x_s, x_t) = exp(−‖x_s − x_t‖² / (2σ²)); the regression function of the RBF model is f(x) = Σ_{t=1}^{N} α_t K(x, x_t) + b, where α_t and b are the regression parameters, s is a positive integer with s = 1, 2, …, N, and t is a positive integer with t = 1, 2, …, N.
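The fast leave-one-out machinery behind the objective function of step i can be illustrated with a small NumPy sketch. The exact layout of the text's matrix A is only partially legible, so the standard LS-SVM system matrix with a bias row and column is assumed here; the closed-form residual e_p = s(p) / (A⁻¹)(p, p) follows the text's notation.

```python
import numpy as np

def rbf_kernel(X, sigma2):
    """Gaussian RBF kernel matrix K(x_s, x_t) = exp(-||x_s - x_t||^2 / (2 sigma^2))."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma2))

def loo_sse(X, y, gamma, sigma2):
    """Leave-one-out prediction error sum of squares sse for an LS-SVM,
    computed in closed form as e_p = s(p) / (A^-1)(p, p).  The augmented
    system matrix A (bias row/column of ones, K + I/gamma block) is an
    assumption about the text's partially legible definition of A."""
    N = len(y)
    K = rbf_kernel(X, sigma2)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(N) / gamma
    A_inv = np.linalg.inv(A)
    s = A_inv @ np.concatenate([[0.0], y])   # s = A^-1 * augmented targets
    e = s[1:] / np.diag(A_inv)[1:]           # one LOO residual per sample
    return float(np.sum(e ** 2))
```

The conjugate gradient loop of steps iii to vii would evaluate this function (and its gradient) at γ = C⁻², σ = D⁻¹ for successive (C, D) iterates.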
In the above image-based fire flame identification method, M = 6 in step II, and the six feature quantities are area, similarity, moment features, compactness, texture features and flicker features.
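A rough sense of such a six-element feature vector can be given in code. The quantities below are simple stand-ins chosen for illustration (a Jaccard overlap for "similarity", central moments for the moment features, a 4πA/P² circularity for compactness, a relative area change for flicker); the patent does not specify these formulas.

```python
import numpy as np

def flame_features(mask, prev_mask=None):
    """Illustrative per-frame feature vector for the segmented flame region.
    'mask' is the binary target image from step 2013; 'prev_mask' is the
    previous frame's mask, needed for the inter-frame quantities."""
    area = float(mask.sum())
    if area == 0:
        return np.zeros(6)
    ys, xs = np.nonzero(mask)
    # central moments as a simple shape descriptor
    cy, cx = ys.mean(), xs.mean()
    moment = ((ys - cy) ** 2).mean() + ((xs - cx) ** 2).mean()
    # crude perimeter estimate, then circularity 4*pi*A / P^2 as compactness
    per = float(np.sum(mask != np.roll(mask, 1, 0)) +
                np.sum(mask != np.roll(mask, 1, 1)))
    compactness = 4 * np.pi * area / max(per, 1.0) ** 2
    # inter-frame similarity and flicker need the previous frame's mask
    if prev_mask is None:
        similarity = flicker = 0.0
    else:
        inter = float(np.logical_and(mask, prev_mask).sum())
        union = float(np.logical_or(mask, prev_mask).sum())
        similarity = inter / max(union, 1.0)
        flicker = abs(area - float(prev_mask.sum())) / max(area, 1.0)
    texture = float(mask.std())   # placeholder for a real texture measure
    return np.array([area, similarity, moment, compactness, texture, flicker])
```

Each training image of step III would contribute one such vector x_p.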
In the above image-based fire flame identification method, when C1 and D1 are determined in step ii, either a grid method or random selection of values is used. With random selection, C1 is a value selected at random from (0.01, 1] and D1 a value selected at random from (0.01, 50]. With the grid method, a grid with step 10⁻³ is first constructed; a three-dimensional grid plot is then drawn with C and D as the independent variables and the objective function of step i as the dependent variable; several candidate parameter pairs of C and D are found by grid search; and finally these pairs are averaged to give C1 and D1.
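The grid initialisation can be sketched as follows. The objective here is a toy quadratic standing in for the leave-one-out sse(C, D) of step i, and a much coarser grid step than the text's 10⁻³ is used purely to keep the example small.

```python
import numpy as np

# Toy stand-in for the leave-one-out objective sse(C, D) of step i; the real
# function would evaluate the LS-SVM on the training sample set.
def sse(C, D):
    return (C - 0.4) ** 2 + 0.1 * (D - 12.0) ** 2

# Grid over (0.01, 1] x (0.01, 50], coarse steps instead of 1e-3.
Cs = np.arange(0.01, 1.0 + 1e-9, 0.01)
Ds = np.arange(0.01, 50.0 + 1e-9, 0.5)
grid = np.array([[sse(c, d) for d in Ds] for c in Cs])

# Take the k best grid points and average them to get the starting point.
k = 5
flat = np.argsort(grid, axis=None)[:k]
ci, di = np.unravel_index(flat, grid.shape)
C1, D1 = Cs[ci].mean(), Ds[di].mean()
print(C1, D1)
```

Averaging several good grid points, rather than taking the single best one, is what the text's "finally the groups of parameters are averaged" describes.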
In the above image-based fire flame identification method, the image enhancement of step 2012 is carried out with an image enhancement method based on fuzzy logic.
In the above image-based fire flame identification method, enhancement with the fuzzy-logic-based image enhancement method proceeds as follows:
Step 20121, transformation from the image domain to the fuzzy domain: according to the membership function (7), the gray value of each pixel of the image to be enhanced is mapped to a fuzzy membership degree of a fuzzy set, thereby obtaining the fuzzy set of the image to be enhanced; in the formula, X_gh is the gray value of an arbitrary pixel (g, h) of the image to be enhanced, X_T is the gray threshold selected for the fuzzy-logic-based enhancement of the image to be enhanced, and X_max is the maximum gray value of the image to be enhanced;
Step 20122, fuzzy enhancement in the fuzzy domain: the fuzzy enhancement operator used is μ′_gh = I_r(μ_gh) = I(I_{r−1}(μ_gh)), where r is the iteration number, a positive integer with r = 1, 2, …; in the corresponding formula, μ_c = T(X_C), where X_C is the crossover point and X_C = X_T;
Step 20123, inverse transformation from the fuzzy domain back to the image domain: according to formula (6), the membership degrees μ′_gh obtained after fuzzy enhancement are inversely transformed to give the gray value of each pixel of the enhanced digital image, yielding the enhanced digital image.
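Since formulas (6) and (7) are referenced but not reproduced above, the three-stage procedure of steps 20121 to 20123 can be illustrated with the classical Pal-King membership function and enhancement operator as stand-ins; the crossover point is placed at the threshold X_T as the text requires.

```python
import numpy as np

def fuzzy_enhance(img, X_T=None, r=1, F_e=2.0):
    """Sketch of steps 20121-20123 using Pal-King style operators as
    stand-ins for the unreproduced formulas (6) and (7)."""
    img = img.astype(float)
    X_max = img.max()
    if X_T is None:
        X_T = float(img.mean())                   # gray threshold (step 20121)
    # Choose F_d so that the membership of X_T is exactly 0.5, i.e. the
    # crossover point mu_c = T(X_C) with X_C = X_T.
    F_d = (X_max - X_T) / (2 ** (1 / F_e) - 1)
    # step 20121: image domain -> fuzzy domain
    mu = (1 + (X_max - img) / F_d) ** (-F_e)
    # step 20122: r passes of the fuzzy enhancement operator I_r
    for _ in range(r):
        mu = np.where(mu <= 0.5, 2 * mu ** 2, 1 - 2 * (1 - mu) ** 2)
    # step 20123: fuzzy domain -> image domain (inverse of the membership map)
    out = X_max - F_d * (mu ** (-1 / F_e) - 1)
    return np.clip(out, 0, X_max)
```

The operator pushes memberships away from 0.5, which darkens pixels below the threshold and brightens those above it, raising contrast around X_T.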
In the above image-based fire flame identification method, before the transformation from the image domain to the fuzzy domain in step 20121, the gray threshold X_T is first selected with the maximum between-class variance method. Before X_T is selected with the maximum between-class variance method, all gray values whose pixel count is 0 are found within the gray range of the image to be enhanced, and the processor (3) marks all the gray values found as exempt from calculation. When X_T is then selected with the maximum between-class variance method, the between-class variance is calculated with each gray value in the gray range of the image to be enhanced, other than the exempted gray values, serving in turn as the threshold; the maximum between-class variance is found among the calculated values, and the gray value corresponding to this maximum between-class variance is the gray threshold X_T.
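The threshold selection can be sketched directly: a plain Otsu (maximum between-class variance) computation in which gray levels with zero pixel count are marked as exempt and skipped, as described above.

```python
import numpy as np

def otsu_threshold_skip_empty(img):
    """Maximum between-class variance threshold, evaluating only gray levels
    whose histogram count is nonzero (the 'exempt from calculation' marking)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in np.nonzero(hist)[0]:          # skip gray levels with zero pixels
        w0 = prob[:t + 1].sum()            # class probabilities
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t + 1) * prob[:t + 1]).sum() / w0
        mu1 = (np.arange(t + 1, 256) * prob[t + 1:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, int(t)
    return best_t
```

Skipping empty bins leaves the selected threshold unchanged while reducing the number of variance evaluations, which is the speed-up the text claims.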
In the above image-based fire flame identification method, in step one the digital image collected at each sampling instant has a size of M1 × N1 pixels;
the image segmentation of step 2013 proceeds as follows:
Step 20131, two-dimensional histogram construction: the processor builds the two-dimensional histogram of pixel gray value versus neighborhood average gray value for the image to be segmented; an arbitrary point of the two-dimensional histogram is denoted (i, j), where the abscissa i is the gray value of an arbitrary pixel (m, n) of the image to be segmented and the ordinate j is the neighborhood average gray value of that pixel (m, n); the count with which the point (i, j) occurs in the constructed two-dimensional histogram is denoted C(i, j), and the frequency with which the point (i, j) occurs is denoted h(i, j);
Step 20132, fuzzy parameter combination optimization: the processor calls a fuzzy parameter combination optimization module and uses a particle swarm optimization algorithm to optimize the fuzzy parameter combination used by the image segmentation method based on two-dimensional fuzzy partition maximum entropy, obtaining the optimized fuzzy parameter combination;
in this step, before the fuzzy parameter combination is optimized, the functional relation of the two-dimensional fuzzy entropy used in segmenting the image to be segmented is first calculated from the two-dimensional histogram built in step 20131, and the calculated functional relation of the two-dimensional fuzzy entropy serves as the fitness function when the fuzzy parameter combination is optimized with the particle swarm optimization algorithm;
Step 20133, image segmentation: using the fuzzy parameter combination optimized in step 20132, the processor classifies each pixel of the image to be segmented according to the image segmentation method based on two-dimensional fuzzy partition maximum entropy, thereby completing the image segmentation process and obtaining the segmented target image.
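Step 20131's two-dimensional histogram can be sketched in a few lines. A 3×3 neighborhood mean is assumed (the text does not fix the neighborhood size), and h(i, j) is taken as the count C(i, j) normalized by the image size, both of which are assumptions for illustration.

```python
import numpy as np

def two_d_histogram(img, L=256):
    """2-D histogram over (pixel gray level i, 3x3 neighborhood mean j), as
    in step 20131; returns h(i, j) = C(i, j) / (number of pixels)."""
    img = img.astype(float)
    # 3x3 neighborhood mean, with edge replication at the borders
    padded = np.pad(img, 1, mode="edge")
    nbr = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
              for dy in range(3) for dx in range(3)) / 9.0
    i = img.astype(int).ravel()
    j = np.clip(nbr.round().astype(int), 0, L - 1).ravel()
    C = np.zeros((L, L))
    np.add.at(C, (i, j), 1.0)      # accumulate counts C(i, j)
    return C / img.size            # frequencies h(i, j)
```

The resulting h(i, j) is exactly what the fuzzy entropy fitness of step 20132 sums over.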
In the above image-based fire flame identification method, the image to be segmented in step 20131 consists of a target image O and a background image P. The membership function of the target image O is
μ_o(i, j) = μ_ox(i; a, b) · μ_oy(j; c, d)   (1);
the membership function of the background image P is
μ_b(i, j) = μ_bx(i; a, b) · μ_oy(j; c, d) + μ_ox(i; a, b) · μ_by(j; c, d) + μ_bx(i; a, b) · μ_by(j; c, d)   (2).
In formulas (1) and (2), μ_ox(i; a, b) and μ_oy(j; c, d) are the one-dimensional membership functions of the target image O, both S-functions; μ_bx(i; a, b) and μ_by(j; c, d) are the one-dimensional membership functions of the background image P, also both S-functions, with μ_bx(i; a, b) = 1 − μ_ox(i; a, b) and μ_by(j; c, d) = 1 − μ_oy(j; c, d); a, b, c and d are the parameters controlling the shapes of the one-dimensional membership functions of the target image O and the background image P.
When the functional relation of the two-dimensional fuzzy entropy is calculated in step 20132, the minimum g_min and maximum g_max of the pixel gray values of the image to be segmented, and the minimum s_min and maximum s_max of the neighborhood average gray values, are first determined from the two-dimensional histogram built in step 20131;
the functional relation of the two-dimensional fuzzy entropy calculated in step 20132 is formula (3), in which h(i, j) is the frequency with which the point (i, j) occurs, as defined in step 20131;
when the fuzzy parameter combination is optimized with the particle swarm optimization algorithm in step 20132, the optimized fuzzy parameter combination is (a, b, c, d).
In the above image-based fire flame identification method, the optimization of the parameter combination of the two-dimensional fuzzy partition maximum entropy in step 20132 comprises the following steps:
Step II-1, particle swarm initialization: one value of the parameter combination serves as one particle, and multiple particles form the initialized swarm; a particle is denoted (a_k, b_k, c_k, d_k), where k is a positive integer with k = 1, 2, 3, …, K, K being a positive integer equal to the number of particles in the swarm; a_k is a random value of the parameter a, b_k a random value of the parameter b, c_k a random value of the parameter c and d_k a random value of the parameter d, with a_k < b_k and c_k < d_k;
Step II-2, fitness function determination: formula (3) is taken as the fitness function;
Step II-3, particle fitness evaluation: the fitness of every particle at the current instant is evaluated, using the same evaluation method for all particles. When the fitness of the k-th particle at the current instant is evaluated, the fitness value of the k-th particle at the current instant is first calculated with the fitness function determined in step II-2 and denoted fitness_k, and the calculated fitness_k is compared with Pbest_k: when the comparison gives fitness_k > Pbest_k, Pbest_k = fitness_k, and the personal best position of the k-th particle is updated to the current position of the k-th particle; here Pbest_k is the maximum fitness value reached by the k-th particle, i.e. the individual extremum of the k-th particle at the current instant, and the corresponding position is the personal best position of the k-th particle at the current instant; t is the current iteration number, a positive integer.
Once the fitness values of all particles at the current instant have been calculated with the fitness function determined in step II-2, the fitness value of the particle with the largest fitness at the current instant is denoted fitness_kbest, and fitness_kbest is compared with gbest: when the comparison gives fitness_kbest > gbest, gbest = fitness_kbest, and the swarm best position is updated to the position of the particle with the largest fitness value at the current instant; here gbest is the global extremum at the current instant, and the corresponding position is the swarm's best position at the current instant;
Step II-4, judging whether the iteration termination condition is met: when the termination condition is met, the optimization of the parameter combination is complete; otherwise, the position and velocity of each particle at the next instant are updated according to the particle swarm optimization algorithm, and the process returns to step II-3. The termination condition of step II-4 is that the current iteration number t reaches a preset maximum iteration number I_max, or that Δg ≤ e, where Δg = |gbest − gmax|, gbest is the global extremum at the current instant, gmax is the originally set target fitness value, and e is a preset positive deviation.
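Steps II-1 to II-4 amount to a standard particle swarm loop over (a, b, c, d). The sketch below uses a toy fitness in place of the two-dimensional fuzzy entropy (3), a simple repair step to keep a ≤ b and c ≤ d, and illustrative inertia and acceleration constants; none of these constants are given in the text.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy fitness standing in for the two-dimensional fuzzy partition entropy (3);
# in the real method it is computed from the 2-D histogram of step 20131.
def fitness(p):
    a, b, c, d = p
    return -((a - 50) ** 2 + (b - 150) ** 2 + (c - 60) ** 2 + (d - 170) ** 2)

def repair(p):
    """Clip to the gray range and enforce a <= b, c <= d by swapping."""
    p = np.clip(p, 0, 255)
    lo01, hi01 = np.minimum(p[:, 0], p[:, 1]), np.maximum(p[:, 0], p[:, 1])
    lo23, hi23 = np.minimum(p[:, 2], p[:, 3]), np.maximum(p[:, 2], p[:, 3])
    return np.stack([lo01, hi01, lo23, hi23], axis=1)

K, I_max = 20, 60                  # swarm size and iteration cap (step II-4)
w, c1, c2 = 0.7, 1.5, 1.5          # illustrative inertia / acceleration terms

pos = repair(rng.uniform(0, 255, size=(K, 4)))   # step II-1: initial swarm
vel = np.zeros((K, 4))
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos]) # step II-3: individual extrema
g = pbest[np.argmax(pbest_val)].copy()
g_val = pbest_val.max()                          # global extremum gbest

for t in range(I_max):                           # step II-4: iterate to I_max
    r1, r2 = rng.random((K, 4)), rng.random((K, 4))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = repair(pos + vel)
    vals = np.array([fitness(p) for p in pos])
    better = vals > pbest_val                    # fitness_k > Pbest_k test
    pbest[better], pbest_val[better] = pos[better], vals[better]
    if vals.max() > g_val:                       # fitness_kbest > gbest test
        g_val = vals.max()
        g = pos[np.argmax(vals)].copy()

print(g)   # optimized parameter combination (a, b, c, d)
```

In step 20133 the optimized (a, b, c, d) would parameterize the S-membership functions used to classify each pixel by maximum membership.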
Compared with the prior art, the present invention has the following advantages:
1. The method's steps are simple, its design is reasonable, it is convenient to implement, and its input cost is low.
2. The image enhancement method used has simple steps, a reasonable design and a good enhancement effect. Given that the low illumination and round-the-clock artificial lighting of underground coal mines produce images of poor quality, and building on an analysis and comparison of traditional image enhancement algorithms, a fuzzy-logic-based image enhancement preprocessing method is proposed. The method uses a new membership function that both reduces the loss of pixel information in low-gray regions of the image and overcomes the drop in contrast caused by fuzzy enhancement, improving adaptability. Meanwhile, a fast maximum between-class variance method is used for threshold selection, so the fuzzy enhancement threshold is selected adaptively and quickly, raising the algorithm's computation speed and its real-time performance. Images captured in varying environments can be enhanced, the detail of the image is effectively improved, image quality rises, computation is fast, and real-time requirements are met.
3. The image segmentation method used has simple steps, a reasonable design and a good segmentation effect. Since the one-dimensional maximum entropy method gives unsatisfactory segmentation for low-SNR, low-illumination images, a segmentation method based on two-dimensional fuzzy partition maximum entropy is adopted. This method takes into account gray-level information, spatial neighborhood information and the inherent fuzziness of the image, but suffers from slow computation; the present application therefore uses a particle swarm optimization algorithm to optimize the fuzzy parameter combination, so that the optimized combination is obtained conveniently, quickly and accurately, greatly improving segmentation efficiency. Moreover, the particle swarm optimization algorithm used is reasonably designed and convenient to implement: it adaptively adjusts the size of the local search space according to the current state of the swarm and the iteration number, obtaining a higher search success rate and higher-quality solutions without affecting the convergence speed; the segmentation effect is good, robustness is strong, computation speed is improved, and real-time requirements are met.
4. Because the segmentation method based on two-dimensional fuzzy partition maximum entropy can segment flame images quickly and accurately, it overcomes the mis-segmentation of noise points that traditional single-threshold algorithms suffer from; at the same time, using the particle swarm optimization algorithm to optimize the fuzzy parameter combination solves the nonlinear integer programming problem involved, so the segmented target keeps its shape well while the influence of noise is suppressed. The present invention therefore combines the two-dimensional fuzzy partition maximum entropy segmentation method with the particle swarm optimization algorithm to achieve fast segmentation of infrared images: the parameter combination (a, b, c, d) serves as a particle, the two-dimensional fuzzy partition entropy serves as the fitness function determining the particles' search direction in the solution space, and once the two-dimensional histogram of an image is obtained, the PSO algorithm searches for the optimal parameter combination (a, b, c, d) that maximizes the fitness function; the pixels of the image are finally classified according to the maximum membership principle, achieving the segmentation of the image. Furthermore, the segmentation method of the present invention performs very well on infrared images with heavy noise, low contrast and small targets.
5. For the actual feature extraction, area, similarity, moment features, compactness, texture features and flicker features are chosen as the basis for recognizing fire images; this retains the features that contribute most to classification while discarding redundant ones, reduces the feature dimension, and completes the optimal selection of features.
6. The modeling method of the two-class model used is simple, reasonably designed, convenient to implement and effective in use, and the hyperparameters of the kernel function are optimized with the conjugate gradient method. Drawing on the suitability of artificial neural networks for handling imperfect and fuzzy information, and on the advantages of support vector machines for small-sample, nonlinear and high-dimensional pattern problems, fire discrimination is carried out with complementary criteria, overcoming the tendency of the traditional single criterion to produce false alarms when judging a disaster hazard. The conventional cross-validation method of parameter selection is fairly time-consuming and does not guarantee that the selected parameters give the classifier the best classification performance, while other existing hyperparameter selection algorithms share the defect of being unable to select the penalty parameter and the kernel function parameter simultaneously. For small-sample LS-SVM pattern classification problems, the present invention takes minimizing the leave-one-out prediction error sum of squares as the objective and uses a gradient-based descent method to choose both hyperparameters, the kernel function parameter and the penalty parameter, for a small-sample, nonlinear LS-SVM model. The two-class model established by the present invention has a high recognition rate and high classification accuracy, takes little time, and can complete the fire identification process conveniently and quickly; when the category of the currently collected image is identified as flame, a fire has occurred, an alarm is raised, and corresponding measures are taken in time. Addressing the small-sample and nonlinear nature of fire identification in the complex and particular environment of a coal mine, and exploiting the advantages of support vector machines in high dimensions, the present invention proposes a fire image recognition method based on the least squares support vector machine and, on the basis of fast leave-one-out cross-validation, performs hyperparameter optimization with the conjugate gradient method, constructing the FR-LSSVM model.
In summary, the method of the invention has simple steps, is convenient to implement, easy to operate, highly reliable and effective in use, and can effectively solve the problems of existing video fire detection systems in complex environments, such as low reliability, high false-alarm and missed-alarm rates, and poor performance.
The technical solution of the present invention is described in further detail below by means of the drawings and embodiments.
Specific embodiments
As shown in Figure 1, an image-based fire flame identification method comprises the following steps:
Step one, image acquisition: using an image acquisition unit and at a preset sampling frequency f, digital images of the region to be detected are acquired, and the digital image collected at each sampling instant is synchronously transmitted to a processor 3. The image acquisition unit is connected to the processor 3.
In this embodiment, the image acquisition unit comprises a CCD camera 1 and a video capture card 2 connected to the CCD camera 1; the CCD camera 1 is connected to the video capture card 2, and the video capture card 2 is connected to the processor 3.
In this embodiment, the digital image collected at each sampling instant has a size of M1 × N1 pixels, where M1 is the number of pixels in each row of the collected digital image and N1 is the number of pixels in each column of the collected digital image.
Step two, image processing: the processor 3 performs image processing on the digital images collected at the successive sampling instants in step one, in chronological order, using the same processing method for every sampling instant; processing the digital image collected at any one sampling instant comprises the following steps:
Step 201, image preprocessing, as follows:
Step 2011, image reception and synchronous storage: the processor 3 synchronously stores the digital image collected at the current sampling instant in a data memory 4, the data memory 4 being connected to the processor 3.
In this embodiment, the CCD camera 1 is an infrared CCD camera, and the CCD camera 1, the video capture card 2, the processor 3 and the data memory 4 form the image acquisition and preprocessing system shown in Figure 2.
Step 2012, image enhancement: the processor 3 performs enhancement processing on the digital image collected at the current sampling instant to obtain the enhanced digital image.
Step 2013, image segmentation: the processor 3 performs segmentation processing on the enhanced digital image of step 2012 to obtain a target image.
Step 202, fire identification: the target image of step 2013 is processed using a pre-established two-class model, and the fire condition category of the region to be detected at the current sampling instant is obtained; the fire condition categories comprise the two classes "flame" and "no flame", the two-class model being a support vector machine model that classifies these two classes.
The two-class model is established as follows:
Step I, image information acquisition: using the image acquisition unit, multiple frames of digital image one of the region to be detected during a fire and multiple frames of digital image two of the region to be detected with no fire are acquired.
Step II, feature extraction: feature extraction is performed on each frame of digital image one and digital image two, and from each digital image one group of characteristic parameters capable of representing and distinguishing that image is extracted; this group of characteristic parameters comprises M feature quantities, the M feature quantities are numbered, and together they form a feature vector, where M ≥ 2.
Step III, training sample acquisition: from the feature vectors of the multiple frames of digital image one and digital image two obtained after feature extraction in step II, the feature vectors of m1 frames of digital image one and of m2 frames of digital image two are selected to form a training sample set; m1 and m2 are positive integers with m1 = 40 to 100 and m2 = 40 to 100, so the training sample set contains m1 + m2 training samples.
In this embodiment, when the training samples are obtained, the image acquisition unit collects digital image sequence one of the region to be detected during a fire and digital image sequence two of the region to be detected with no fire. The number of frames of digital image contained in digital image sequence one is n1 = t1 × f, where t1 is the sampling time of digital image sequence one; the number of frames of digital image contained in digital image sequence two is n2 = t2 × f, where t2 is the sampling time of digital image sequence two. Here n1 is not less than m1 and n2 is not less than m2. Afterwards, m1 digital images are chosen from digital image sequence one as the flame samples, and m2 digital images are chosen from digital image sequence two as the no-flame samples.
In the present embodiment, m1=m2.
Step IV, two disaggregated models are set up, and process is as follows:
Step IV -1, kernel function are chosen:From RBF as two disaggregated model kernel function;
Step IV -2, classification function determine:Treat the core ginseng of penalty factor γ and selected RBF in step IV -1
Number σ2It is determined that after, just obtain the classification function of two disaggregated model, and complete two disaggregated model set up process;Its
In, γ=C-2, σ=D-1, 0.01 < C≤10,0.01 < D≤50.
When the penalty factor γ and the kernel parameter σ² are determined, the parameters C and D are first optimized with the conjugate gradient method to obtain the optimized C and D, which are then converted into the penalty factor γ and the kernel parameter σ² through γ = C⁻² and σ = D⁻¹.
When the parameters C and D are optimized with the conjugate gradient method in Step IV-2, the m1 + m2 training samples of the training sample set described in Step III are used.
Step V, two-class model training: the m1 + m2 training samples of the training sample set described in Step III are input into the two-class model established in Step IV for training.
In the present embodiment, the total number of training samples in the training sample set of Step III is N, with N = m1 + m2. Before the two-class model is established in Step IV, the N training samples in the training sample set are numbered; the number of the p-th training sample is p, where p is a positive integer and p = 1, 2, …, N. The p-th training sample is denoted (x_p, y_p), where x_p is the feature parameter (i.e. the feature vector) of the p-th training sample and y_p is the class label of the p-th training sample, y_p = 1 or −1; the label 1 indicates flame and the label −1 indicates no flame.
When the parameters C and D are optimized with the conjugate gradient method in Step IV-2, the m1 + m2 training samples of the training sample set described in Step III are used, and the optimization process is as follows:
Step I, objective function determination: sse(C, D) = Σ_{p=1}^{N} e_p² (1), where sse(C, D) is the leave-one-out prediction sum of squares, p is the number of each training sample in the training sample set, and e_p is the prediction error of the two-class model established in Step IV for the p-th training sample:
e_p = y_p − K̃(p, p⁻)·[ s(p⁻) − (A⁻¹)(p⁻, p)·s(p)/(A⁻¹)(p, p) ]
where s(p⁻) is the vector formed by the remaining elements of matrix s after its p-th element is removed; s(p) is the p-th element of matrix s; (A⁻¹)(p⁻, p) is the column vector formed by the remaining elements of the p-th column of matrix A⁻¹ after its p-th element is removed; (A⁻¹)(p, p) is the p-th element of the p-th column of matrix A⁻¹; K̃(p, p⁻) is the row vector formed by the remaining elements of the p-th row of matrix K̃ after its p-th element is removed; matrix A is the coefficient matrix of the linear system (5.25) below and A⁻¹ denotes the inverse of matrix A; matrix I denotes the identity matrix and matrix I_N = [1, 1, …, 1]ᵀ, where T denotes matrix transposition and I_N contains N elements, each equal to 1; matrix s = A⁻¹·y, where y = [y₁, y₂, …, y_N]ᵀ and y₁, y₂, …, y_N are the class labels of the N training samples in the training sample set.
Here K is the kernel function matrix, with elements K(x_s, x_t) for s, t = 1, 2, …, N; matrix K̃ is the augmented matrix of matrix K, obtained by appending on the right of matrix K a column whose elements are all 1.
The optimization problem of the least squares support vector machine (LS-SVM) can be expressed in the following form:
min J(w, e) = (1/2)·wᵀ·w + (1/2)·γ·Σ_{p=1}^{N} e_p², subject to y_p = wᵀ·φ(x_p) + b + e_p, p = 1, 2, …, N (5.22)
where wᵀ·φ(x_p) + b is the classification hyperplane in the high-dimensional feature space, w and b are the parameters of the classification hyperplane, e_p is the training error of the p-th training sample, Σ e_p² is the empirical risk, and wᵀ·w = ‖w‖² measures the complexity of the learning machine.
Once the training sample set is determined, the performance of the LS-SVM model depends on the type of its kernel function and on the choice of two hyperparameters, namely the penalty factor γ and the kernel parameter σ². The classification accuracy of the LS-SVM model is related to the hyperparameter selection: the kernel parameter σ² represents the width of the RBF and is closely related to the smoothness of the LS-SVM model, while the penalty factor γ, also called the regularization parameter, controls the degree of penalty on erroneous samples and is closely related to the trade-off between the complexity of the LS-SVM model and its fit to the training samples.
In the present embodiment, the RBF selected in Step IV-1 is K(x_s, x_t) = exp(−‖x_s − x_t‖²/σ²); the regression function of the RBF is f(x) = Σ_{t=1}^{N} α_t·K(x, x_t) + b, where α_t and b are regression parameters, and s and t are positive integers with s = 1, 2, …, N and t = 1, 2, …, N.
Formula (5.22) can be rewritten as:
min J(w, e) = (1/2)·wᵀ·w + (1/2)·C⁻²·Σ_{p=1}^{N} e_p², subject to y_p = wᵀ·φ(x_p) + b + e_p, p = 1, 2, …, N (5.23)
In formula (5.23), C⁻² replaces the penalty factor γ but plays the same role of balancing the complexity of the LS-SVM model against the empirical risk; σ is replaced by D⁻¹, so the RBF is expressed by the following formula: K(x_s, x_t) = exp(−D²·‖x_s − x_t‖²).
According to the least squares support vector machine principle, formula (5.23) is converted into the linear system A·s = y (5.25); for the derivation of formula (5.25), refer to the paper "SVM model construction method based on online recursive least squares with fast leave-one-out cross-validation" (authors Shao Weiming and Tian Xuemin), published in the Journal of Qingdao University of Science and Technology (Natural Science Edition), Vol. 33, No. 5, October 2012.
Solving formula (5.25) yields the regression function of the RBF, f(x) = Σ_{t=1}^{N} α_t·K(x, x_t) + b; from formula (5.25) it follows that matrix s = A⁻¹·y (5.28).
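For illustration, the training step via a linear system can be sketched with the classical LS-SVM dual formulation. Note that the block system below is the standard textbook form, used here as an assumption; the exact coefficient matrix A of (5.25) follows the formulation of the cited paper.

```python
import numpy as np

def rbf_kernel(X, sigma):
    # K(x_s, x_t) = exp(-||x_s - x_t||^2 / sigma^2), i.e. exp(-D^2 ||.||^2) with D = 1/sigma
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / sigma ** 2)

def lssvm_train(X, y, C, D):
    gamma, sigma = C ** -2, 1.0 / D          # gamma = C^-2, sigma = D^-1
    N = len(y)
    K = rbf_kernel(X, sigma)
    # classical LS-SVM dual system (an assumption): [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(N) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                   # bias b, coefficients alpha

def lssvm_predict(X_train, alpha, b, D, X_new):
    sigma = 1.0 / D
    sq = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / sigma ** 2) @ alpha + b

# two well-separated toy classes standing in for flame / non-flame feature vectors
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(2.0, 0.3, (20, 2))])
y = np.hstack([np.ones(20), -np.ones(20)])
b, alpha = lssvm_train(X, y, C=1.0, D=1.0)
pred = np.sign(lssvm_predict(X, alpha, b, 1.0, X))
```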
The two-class model established from the N training samples in the training sample set is verified N times. In the p-th verification, the p-th training sample is taken as the prediction set and the remaining N − 1 samples as the training set; after the LS-SVM parameters α_p and b are solved from the training set, the p-th training sample serving as the prediction set is classified and the correctness of the classification result is recorded. After the N verifications, the leave-one-out misclassification rate e_LOO can be calculated as e_LOO = N_err/N (5.29), where N_err is the number of misclassified samples among the N verifications. For each given group of hyperparameters (comprising C and D), the corresponding e_LOO can be calculated, and the hyperparameter combination with the minimum e_LOO is selected as the optimized parameters.
Since e_p = s(p)/(A⁻¹)(p, p) (5.30), for each given group of hyperparameters only one inverse A⁻¹ needs to be solved when a complete leave-one-out cross-validation is performed, and s(p) is then computed at each iteration, which saves a large amount of cross-validation time and greatly reduces the computational load.
To make sse(C, D) reach its minimum, the objective function sse(C, D) = Σ_{p=1}^{N} e_p² is optimized to search for C and D. The gradients of sse(C, D) with respect to C and D are first defined according to the rules for matrix differentiation and for the differentiation of an inverse matrix, which yields the partial derivatives ∂A/∂C and ∂A/∂D of the coefficient matrix A with respect to C and D (formulas (5.32)–(5.35)); in formula (5.35), the matrix 0 denotes an N-dimensional column vector whose elements are all 0.
From A·A⁻¹ = I (I being the identity matrix), it can be derived that:
∂A⁻¹/∂C = −A⁻¹·(∂A/∂C)·A⁻¹ and ∂A⁻¹/∂D = −A⁻¹·(∂A/∂D)·A⁻¹ (5.36)
From s = A⁻¹·y, the two partial derivatives can be derived as ∂s/∂C = (∂A⁻¹/∂C)·y and ∂s/∂D = (∂A⁻¹/∂D)·y.
According to formula (5.30), e_p = s(p)/(A⁻¹)(p, p), so that:
∂sse/∂C = Σ_{p=1}^{N} 2·e_p·∂e_p/∂C (5.37) and ∂sse/∂D = Σ_{p=1}^{N} 2·e_p·∂e_p/∂D (5.38)
with ∂e_p/∂C = [ (∂s/∂C)(p)·(A⁻¹)(p, p) − s(p)·(∂A⁻¹/∂C)(p, p) ] / [(A⁻¹)(p, p)]², and ∂e_p/∂D obtained analogously. Clearly, ∂s/∂C, ∂s/∂D, ∂A⁻¹/∂C and ∂A⁻¹/∂D can all be calculated through formulas (5.32)–(5.35). Thus, for each group of hyperparameters C and D, the gradient of sse(C, D) with respect to them can be calculated according to formulas (5.37) and (5.38).
It can be seen from the LS-SVM principle that the selection of the LS-SVM hyperparameters is thereby converted from a constrained optimization problem into an unconstrained one: C⁻² replaces γ and D⁻¹ replaces σ. This conversion does not affect the performance of the LS-SVM model; on the other hand, the value ranges of C and D do not affect the calculation of the gradients.
Step II, initial parameter setting: the initial values C₁ and D₁ of the parameters C and D are determined respectively, and the identification error threshold ε is set, with ε > 0.
Step III, calculation of the gradient g_k of the current iteration: the gradient g_k of the objective function of Step I with respect to C_k and D_k is calculated according to formulas (5.37) and (5.38), where k is the iteration number and k = 1, 2, …; if ‖g_k‖ ≤ ε, the calculation stops, and C_k and D_k are then the optimized parameters C and D; otherwise, go to Step IV.
Here g_k = [∂sse(C, D)/∂C, ∂sse(C, D)/∂D]ᵀ evaluated at (C_k, D_k), where K̃(p, p⁻) is the row vector formed by the remaining elements of the p-th row of matrix K̃ after its p-th element is removed, s(p⁻) is the vector formed by the remaining elements of matrix s after its p-th element is removed, and e_p is the prediction error of the two-class model established in Step IV for the p-th training sample.
Step IV, calculation of the search direction d_k of the current iteration: the search direction of the current iteration is calculated according to d_k = −g_k + β_k·d_{k−1} (with d₁ = −g₁ for the first iteration), where d_{k−1} is the search direction of the (k − 1)-th iteration, β_k = ‖g_k‖²/‖g_{k−1}‖², and g_{k−1} is the gradient of the (k − 1)-th iteration.
Step V, determination of the search step λ_k of the current iteration: a search is carried out along the search direction d_k determined in Step IV to find the step λ_k satisfying
sse((C_k, D_k) + λ_k·d_k) = min_{λ∈(0, +∞)} sse((C_k, D_k) + λ·d_k)
where min_{λ∈(0, +∞)} denotes finding in (0, +∞) the step λ_k that makes sse reach its minimum value.
Step VI, C_{k+1} and D_{k+1} are calculated according to C_{k+1} = C_k + λ_k·d_k^C and D_{k+1} = D_k + λ_k·d_k^D, where d_k^C and d_k^D are the two components of the search direction d_k.
Step VII, set k = k + 1 and return to Step III for the next iteration.
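Steps II–VII above can be sketched as a generic Fletcher–Reeves conjugate gradient loop. The objective below is a simple quadratic test function standing in for sse(C, D) and its gradient, and the one-dimensional search of Step V is realized with a ternary search; both are illustrative assumptions.

```python
import numpy as np

def line_min(phi, hi=4.0, iters=100):
    """Ternary search for argmin of phi over (0, hi] -- a simple stand-in for Step V."""
    lo = 0.0
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3.0, hi - (hi - lo) / 3.0
        if phi(m1) < phi(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def fletcher_reeves(f, grad, x1, eps=1e-8, max_iter=100):
    x = np.asarray(x1, dtype=float)
    g = grad(x)
    d = -g                                      # first direction: d1 = -g1
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:            # Step III stopping test ||g_k|| <= eps
            break
        lam = line_min(lambda t: f(x + t * d))  # Step V: step along d_k
        x = x + lam * d                         # Step VI
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)        # beta_k = ||g_k||^2 / ||g_{k-1}||^2
        d = -g_new + beta * d                   # Step IV
        g = g_new                               # Step VII: next iteration
    return x

# quadratic stand-in for sse(C, D), minimum at (0.14, 0.24)
f = lambda v: (v[0] - 0.14) ** 2 + 2.0 * (v[1] - 0.24) ** 2
grad = lambda v: np.array([2.0 * (v[0] - 0.14), 4.0 * (v[1] - 0.24)])
C_opt, D_opt = fletcher_reeves(f, grad, x1=[1.0, 1.0])
```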
Finally, the optimized parameters C and D are obtained, from which the matrix s is solved.
In the present embodiment, after the two-class model is established, the classifier finally used is the sign of the regression function, i.e. the class of an input x is given by sgn(f(x)).
In the present embodiment, in Step V, T denotes matrix transposition, and H is the autocorrelation matrix with H = Aᵀ·A.
In actual operation, C₁ and D₁ in Step II are determined either with a grid search method or with a random-value method. When the random-value method is used, C₁ is a value randomly selected from (0.01, 1] and D₁ is a value randomly selected from (0.01, 50]. When the grid search method is used, a grid is first divided with a step of 10⁻³; a three-dimensional grid map is then made with C and D as the independent variables and the objective function of Step I as the dependent variable; multiple groups of parameters of C and D are found by grid search; finally, the average of the multiple groups of parameters is taken as C₁ and D₁.
In the present embodiment, C₁ and D₁ are determined with the grid search method, and B groups of parameters of C and D are found by grid search, where B is a positive integer and B = 5–20.
The conjugate gradient method has the advantages of a simple algorithm, a small storage requirement and fast convergence; it converts a multidimensional problem into a series of one-dimensional line searches rather than simply following the locally fastest-descending (negative gradient) direction, and can thus effectively reduce the number of iterations and the running time.
In contrast, when C and D are optimized with the grid search method alone, the optimal hyperparameter values can be found with high precision only if the step is set very small, which is very time-consuming.
Combustion is a sustained and typically unsteady physical process with various characterization parameters. In the early stage of a fire, the flame image mainly features an increasing flame area, edge jitter, an irregular shape and a basically stable position. The criterion used for the area-growth characteristic is developed and implemented on the Visual C++[115] platform, where the area change rate is defined as AR = |A(n + 1) − A(n)| / (max(A(n), A(n + 1)) + eps), where AR denotes the area change rate of the highlighted regions between adjacent frames, and A(n) and A(n + 1) denote the areas of the suspicious region in the current frame and the next frame respectively. To prevent the calculated area change rate from becoming infinite when no suspicious flame region exists in either of the two adjacent frames, a very small value eps is added to the denominator. In addition, to achieve normalization, the maximum of the highlighted-region areas in the two frames is taken as the denominator, so that the final result lies within (0, 1).
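A minimal sketch of this criterion follows; the exact formula body is reconstructed from the description (the maximum of the two areas as denominator, a tiny eps preventing division by zero), so treat it as an assumption.

```python
def area_change_rate(A_n, A_n1, eps=1e-12):
    """Area change rate of the highlighted region between adjacent frames,
    normalized by the larger of the two areas; eps guards the all-empty case."""
    return abs(A_n1 - A_n) / (max(A_n, A_n1) + eps)

r_grow = area_change_rate(100, 150)   # growing flame region -> value in (0, 1)
r_none = area_change_rate(0, 0)       # no suspicious region in either frame -> 0
```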
The shape similarity of images is generally evaluated with the aid of a known similarity descriptor, and this method can establish a corresponding similarity measure at any level of complexity. According to background subtraction, let the known image sequence be f_h(x, y), h = 1, 2, …, N₀, where (x, y) are the coordinates of each pixel in the image and N₀ is the number of frames, and let the reference image be f_o(x, y). A difference image sequence can then be defined as δ_h(x, y) = |f_h(x, y) − f_o(x, y)|, which represents the difference between each frame of the original image sequence and the reference image. The obtained difference image sequence is then binarized to obtain the image sequence {b_h(x, y)}. The pixels marked 1 in this image sequence represent the regions with a significant difference between the original sequence and the reference image, and these regions are regarded as possible flame regions. After the influence of isolated points is filtered out, the pixels marked 1 in each frame of the image sequence are marked, and the possible flame region Ω_h in each frame of the sequence is obtained. Once the suspicious flame region has been found, flames are distinguished from interfering objects by calculating the similarity of the difference images of successive frames. The similarity ξ_h of the difference images of successive frames is defined accordingly; after several similarities have been obtained, the average value of the similarities ξ_h of several successive frames is used as the criterion.
Moment features: from the viewpoint of flame recognition, the centroid feature of the flame image is employed, and the stability of the flame is represented by its centroid. For a flame image, its centroid is first calculated: M₀₀ is the zeroth-order moment of the target region, i.e. the area of the target region; the first-order moments (M₁₀, M₀₁) of the image in the x and y directions are calculated, and the centroid is then obtained as (M₁₀/M₀₀, M₀₁/M₀₀).
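The centroid computation above can be sketched as follows; the toy binary image is an illustrative assumption.

```python
import numpy as np

def centroid(img):
    """Centroid of a (binary or gray) flame image from its zeroth- and first-order moments."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    M00 = img.sum()              # zeroth-order moment = area for a binary image
    M10 = (xs * img).sum()       # first-order moment in the x direction
    M01 = (ys * img).sum()       # first-order moment in the y direction
    return M10 / M00, M01 / M00

img = np.zeros((9, 9))
img[3:6, 4:7] = 1                # 3x3 bright block centred at (x=5, y=4)
cx, cy = centroid(img)
```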
The edge variation of an incipient fire flame follows its own unique law; the simple and practical characteristic parameters of circularity and eccentricity are therefore used to recognize the edge variation of the flame as one of the fire criteria.
Circularity is commonly used to describe the complexity of an object boundary; it is also called compactness or dispersion. On the basis of area and perimeter, it is a characteristic quantity measuring the shape complexity of an object or region. It is defined as follows: C_k = P_k²/(4π·A_k), k = 1, 2, …, n (4.7), where C_k denotes the circularity of the primitive numbered k; P_k is the perimeter of the k-th primitive, i.e. the boundary length of the suspicious primitive, which can be obtained by computing the boundary chain code; A_k is the area of the k-th primitive, which for a gray image can be obtained by counting the bright points of the suspicious primitive and for a binary image by counting the pixels whose value is 1; and n is the number of suspicious flame primitives in the image. The calculation of the perimeter is relatively complicated, but it can be determined by extracting the boundary chain code.
The steps for calculating the circularity of the flame region are as follows:
1. The area of the suspected flame region is calculated on the basis of image segmentation;
2. The consecutive boundary pixels in the vertical direction are detected and their number N_x is recorded; the consecutive boundary pixels in the horizontal direction are detected and their number N_y is recorded; the total number of boundary pixels S_N is calculated;
3. The number of even chain codes is N_E = N_x + N_y and the number of odd chain codes is N_O = S_N − N_E; the perimeter is calculated with the perimeter formula P = N_E + √2·N_O;
4. The results of steps 1 and 3 are substituted into formula (4.7) to calculate the circularity.
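The perimeter and circularity computations above can be sketched as follows; the C = P²/(4πA) normalization (which gives exactly 1 for an ideal circle) is an assumption consistent with the definition of circularity as a boundary-complexity measure.

```python
import math

def perimeter_from_chain_code(n_even, n_odd):
    # even (horizontal/vertical) chain codes contribute 1, odd (diagonal) codes sqrt(2)
    return n_even + math.sqrt(2) * n_odd

def circularity(P, A):
    # C = P^2 / (4*pi*A): 1 for an ideal circle, larger for more complex boundaries
    return P * P / (4.0 * math.pi * A)

# an ideal circle of radius r has circularity exactly 1
r = 20.0
C_circle = circularity(2.0 * math.pi * r, math.pi * r * r)
# a 10 x 40 rectangle is less compact, so its circularity exceeds 1
C_rect = circularity(2.0 * (10 + 40), 10 * 40)
```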
Image texture features: Haralick et al. extracted 14 kinds of features by means of the gray-level co-occurrence matrix. In the present embodiment, the extracted image texture features comprise five features: contrast, entropy, energy, homogeneity and correlation.
A burning flame flickers; this characteristic manifests itself in the fact that the distribution of the pixels of a frame over the different gray levels changes with time. By calculating the change of the edge pixels, the flicker law of the target pattern can be obtained. The flicker frequency of a flame typically lies in the low-frequency range of 10–20 Hz. Since the frame rate at which video images are generally acquired is 25 Hz (25 frames/s), the distortion-free sampling requirement for capturing the flicker frequency is not met, so it is difficult to obtain the characteristic spectrum directly from the acquired video information. Regarding the flicker law, Toreyin proposed using the variation over time of the color value of a fixed pixel in each frame under the RGB model and analyzing the R component of this point with a wavelet transform: if a flame exists, the value at this point changes sharply and the high-frequency components of the wavelet decomposition are non-zero. Wang Zhenhua[73] et al. proposed decomposing and reconstructing the flame characteristic time series with the discrete wavelet transform and representing the flicker law by the variation of the area. Zhang Jinhua et al. pointed out that the flame height changes greatly during flame flicker, that this variation law is directly related to the flicker frequency, and that it differs greatly from that of interference sources; they therefore proposed a flame recognition method that uses the flame height variation instead of the flame flicker feature. In the present embodiment, the stroboscopic feature is extracted with the method of Zhang Jinhua et al., which exploits the large variation of the flame height during flicker.
In the present embodiment, M = 6 in Step II, and the 6 feature quantities are respectively the area, the similarity, the moment feature, the circularity, the texture feature and the stroboscopic feature.
In actual operation, to test the performance of the established two-class model, a total of 81 training samples comprising flame samples and non-flame samples are selected, each sample having 7 dimensions. For the 81 training samples, one group of data is taken out each time for prediction and classification, and the remaining 80 groups of data are used for optimizing the hyperparameters. With the initial values C₁ = 1 and D₁ = 1, a search with the conjugate gradient method yields a mean of 0.1386 with a mean square deviation of 0.0286 for C, and a mean of 0.2421 with a mean square deviation of 0.0273 for D; the optimized hyperparameters are thus quite stable. The two-class model of the present invention (the FR-LSSVM model) is compared with three classification models: BP (neural network model), LS-SVM (least squares support vector machine model) and standard SVM (support vector machine model). The recognition results are shown in Table 1:
Table 1 Comparison of the recognition results of the different classification models
As can be seen from Table 1, in terms of recognition rate BP is the worst, the LS-SVM with initial values selected by grid search alone is also poor, and FR-LSSVM and standard SVM are obviously better than both. As for the training time, FR-LSSVM and LS-SVM have a clear advantage and standard SVM a slight one; for standard SVM it is more difficult to search for the optimal hyperparameters, while the training of the BP neural network is quite time-consuming and its recognition rate slightly lower. The reasons are the small number of training samples (the sample size has a large influence on the recognition rate and the amount of feature information contained is insufficient) and the shortcomings of the neural network in terms of convergence and local minima; its parameter selection relies on experience and its parameter settings involve considerable uncertainty, which can be further improved by supplementing the training samples and correcting the weights of the BP neural network. The recognition rate of standard SVM is higher than that of LS-SVM, but both its training time and its recognition time are longer. The hyperparameter algorithm of FR-LSSVM is more standardized, less time-consuming and relatively stable; it reduces uncertainty and is particularly suitable for modeling small-sample, nonlinear problems, with a significant advantage in both speed and precision. In addition, all these algorithms place high demands on image quality: if the image resolution is low, if the target region in the image is largely occluded by obstacles or covered or surrounded by dust, or if the extracted target is incomplete or contains noise, the recognition rate may be reduced.
In the present embodiment, the image enhancement of step 2012 is carried out with an image enhancement method based on fuzzy logic.
When the enhancement processing is actually performed with an image enhancement method based on fuzzy logic (specifically the classical Pal-King fuzzy enhancement algorithm, i.e. the Pal algorithm), the following defects exist:
1. When performing the fuzzification mapping and its inverse transform, the Pal algorithm uses a complicated power function as the fuzzy membership function, which has the defects of poor real-time performance and a large amount of computation;
2. In the fuzzy enhancement transform, a considerable number of low gray values of the original image are hard-set to zero, causing a loss of low-gray-level information;
3. The fuzzy enhancement threshold (crossover point X_c) is generally selected empirically or by repeated trials, lacks theoretical guidance and is arbitrary; the parameters F_d and F_e in the membership function are adjustable, and the reasonable choice of the values of F_d and F_e is closely related to the image processing effect;
4. In the fuzzy enhancement transform, the iterative operation is repeated in order to apply the enhancement processing to the image repeatedly; the choice of the number of iterations is not guided by any relevant theoretical principle, and the edge details are affected when the number of iterations is large.
In the present embodiment, to overcome the above defects of the classical Pal-King fuzzy enhancement algorithm, the enhancement processing of the digital image, i.e. the image to be enhanced, in step 2012 is performed as follows:
Step 20121, transformation from the image domain to the fuzzy domain: according to the membership function (7), the gray value of each pixel of the image to be enhanced is mapped to a fuzzy membership of the fuzzy set, and the fuzzy set of the image to be enhanced is obtained accordingly; in the formula, x_gh is the gray value of any pixel (g, h) in the image to be enhanced, X_T is the gray threshold selected when the enhancement processing is applied to the image to be enhanced with the fuzzy-logic-based image enhancement method, and X_max is the maximum gray value of the image to be enhanced.
After the gray values of the pixels of the image to be enhanced are mapped to fuzzy memberships of the fuzzy set, the fuzzy memberships to which the gray values of all pixels of the image to be enhanced are mapped correspondingly constitute the fuzzy membership matrix of the fuzzy set.
Since μ_gh ∈ [0, 1] in formula (7), the defect of the classical Pal-King fuzzy enhancement algorithm that many low gray values of the original image are cut to zero after the fuzzification mapping is overcome; moreover, with the threshold X_T as the dividing line, the membership of the gray level x_gh is defined region by region. This method of defining the membership separately in the low-gray and high-gray regions of the image also ensures that the information loss of the image in the low-gray region is minimal, thereby guaranteeing the image enhancement effect.
In the present embodiment, before the transformation from the image domain to the fuzzy domain in step 20121, the gray threshold X_T is first selected with the between-class maximum variance method.
Step 20122, fuzzy enhancement processing in the fuzzy domain with the fuzzy enhancement operator: the fuzzy enhancement operator used is μ′_gh = I_r(μ_gh) = I(I_{r−1}(μ_gh)), where r is the number of iterations, a positive integer with r = 1, 2, …; the transform I(·) is defined piecewise about μ_c:
I(μ_gh) = μ_gh²/μ_c for 0 ≤ μ_gh ≤ μ_c, and I(μ_gh) = 1 − (1 − μ_gh)²/(1 − μ_c) for μ_c < μ_gh ≤ 1,
where μ_c = T(X_C), X_C being the crossover point with X_C = X_T.
The nonlinear transform of the above formula increases the values of μ_gh greater than μ_c while reducing the values of μ_gh less than μ_c; μ_c is thereby developed into a generalized crossover point.
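The enhancement operator can be sketched as follows. The piecewise transform about μc used here is the standard generalized INT-type operator, assumed because it matches the behavior described (memberships above μc are increased, memberships below are reduced, and μc itself is a fixed point).

```python
import numpy as np

def enhance_once(mu, mu_c):
    """One application of a generalized INT-type fuzzy enhancement operator
    about the crossover membership mu_c (assumed form)."""
    mu = np.asarray(mu, dtype=float)
    return np.where(mu <= mu_c,
                    mu ** 2 / mu_c,                      # push values below mu_c toward 0
                    1.0 - (1.0 - mu) ** 2 / (1.0 - mu_c))  # push values above mu_c toward 1

def fuzzy_enhance(mu, mu_c, r):
    for _ in range(r):       # mu'_gh = I_r(mu_gh) = I(I_{r-1}(mu_gh))
        mu = enhance_once(mu, mu_c)
    return mu

mu = np.array([0.1, 0.4, 0.5, 0.9])
out = fuzzy_enhance(mu, mu_c=0.5, r=2)
```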
Step 20123, inverse transformation from the fuzzy domain to the image domain: according to formula (6), the μ′_gh obtained after the fuzzy enhancement processing is inversely transformed to obtain the gray value of each pixel of the digital image after the enhancement processing, and the digital image after the enhancement processing is thus obtained.
Since the selection of the fuzzy enhancement threshold (the crossover point X_c in the Pal algorithm) is the key to image enhancement, it needs to be obtained empirically or by repeated trials in practical applications. A fairly classical method is the between-class maximum variance method (Otsu), which is simple, stable and effective and is frequently used in practice. The Otsu threshold selection method breaks away from the limitation of requiring repeated manual trials and can automatically determine the optimal threshold by computer according to the gray-level information of the image. The principle of the Otsu method is to use the between-class variance as the criterion and select the gray value that maximizes the between-class variance as the optimal threshold, thereby realizing the automatic selection of the fuzzy enhancement threshold and avoiding manual intervention in the enhancement process.
In the present embodiment, before the gray threshold X_T is selected with the between-class maximum variance method, all gray values whose pixel count is 0 are first found within the gray-scale range of the image to be enhanced, and all the gray values found are marked by the processor 3 as calculation-exempt gray values. When the gray threshold X_T is then selected with the between-class maximum variance method, the between-class variance is calculated only with the gray values in the gray-scale range of the image to be enhanced other than the calculation-exempt gray values taken as thresholds; the maximum between-class variance is found among the calculated between-class variances, and the gray value corresponding to the maximum between-class variance is the gray threshold X_T.
When the fuzzy enhancement threshold is selected with the traditional between-class maximum variance method (Otsu), let the number of pixels with gray value s be n_s; the total number of pixels is then N_pix = Σ_s n_s, and the probability of occurrence of each gray level of the acquired digital image is P_s = n_s/N_pix. The threshold X_T divides the pixels of the image into two classes C₀ and C₁ by gray level, C₀ = {0, 1, …, t} and C₁ = {t + 1, t + 2, …, L − 1}; the proportions of the total pixel number accounted for by the pixels of classes C₀ and C₁ are w₀(t) and w₁(t) respectively, and their average gray values are μ₀(t) and μ₁(t) respectively.
For C₀: w₀(t) = Σ_{s=0}^{t} P_s and μ₀(t) = Σ_{s=0}^{t} s·P_s / w₀(t);
For C₁: w₁(t) = Σ_{s=t+1}^{L−1} P_s and μ₁(t) = Σ_{s=t+1}^{L−1} s·P_s / w₁(t);
where the statistical mean of the gray level of the whole image is μ = Σ_{s=0}^{L−1} s·P_s, so that μ = w₀·μ₀ + w₁·μ₁.
The between-class variance is σ²(t) = w₀(t)·(μ₀(t) − μ)² + w₁(t)·(μ₁(t) − μ)² (8), and the optimal threshold is the value of t that maximizes σ²(t).
The above process of automatically extracting the optimal fuzzy enhancement threshold X_T consists of traversing all gray levels from gray level 0 to L − 1 and finding the value at which formula (8) takes its maximum as the required threshold X_T. Since the pixel count of the image at some gray levels may be zero, and in order to reduce the number of variance calculations, the present invention proposes an improved fast Otsu method.
Assume that the pixel count at gray level t′ is zero; then P_{t′} = 0.
If t′ − 1 is selected as the threshold, the quantities w₀(t′ − 1), μ₀(t′ − 1), w₁(t′ − 1) and μ₁(t′ − 1) are obtained;
when t′ is then selected as the threshold, since P_{t′} = 0, it follows that w₀(t′) = w₀(t′ − 1), μ₀(t′) = μ₀(t′ − 1), w₁(t′) = w₁(t′ − 1) and μ₁(t′) = μ₁(t′ − 1).
It can be seen that:
σ²(t′ − 1) = σ²(t′) (2.37)
Assuming further that there are consecutive gray levels t₁, t₂, …, t_n with zero pixel counts, it can likewise be deduced that:
σ²(t₁ − 1) = σ²(t₁) = σ²(t₂ − 1) = σ²(t₂) = … = σ²(t_n − 1) = σ²(t_n) (2.38)
It follows that if the pixel count of a certain gray level is zero, the between-class variance with that level as the threshold need not be calculated; the between-class variance corresponding to the nearest smaller gray level with a non-zero pixel count can be used as its between-class variance value. Therefore, to find the maximum of the between-class variance quickly, the multiple gray levels with equal between-class variance can be treated as the same gray level: the gray values whose pixel counts are zero are regarded as non-existent, and the between-class variance σ²(t) with them taken directly as the threshold is assigned the value zero without calculating their variance values. This selection has no influence on the final result of the threshold, but increases the speed of the adaptive selection of the enhancement threshold.
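The improved method can be sketched as follows; the equivalent between-class variance form w₀·w₁·(μ₀ − μ₁)² is used (it equals formula (8) after expanding μ = w₀μ₀ + w₁μ₁), and the toy histogram is an illustrative assumption.

```python
import numpy as np

def otsu_threshold(hist):
    """Otsu threshold: pick t maximizing the between-class variance
    w0*w1*(mu0 - mu1)^2; gray levels with zero pixel count are skipped,
    as in the improved fast method (their variance equals a neighbor's)."""
    hist = np.asarray(hist, dtype=float)
    P = hist / hist.sum()                       # P_s = n_s / N_pix
    best_t, best_var = 0, -1.0
    for t in range(len(P) - 1):
        if hist[t] == 0:                        # zero-count level: skip, variance unchanged
            continue
        w0 = P[:t + 1].sum()
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t + 1) * P[:t + 1]).sum() / w0
        mu1 = (np.arange(t + 1, len(P)) * P[t + 1:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# bimodal toy histogram over 8 gray levels; levels 3 and 4 have zero count
hist = [10, 30, 10, 0, 0, 8, 25, 9]
t = otsu_threshold(hist)
```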
In the present embodiment, before the fuzzy enhancement processing of step 20122, the fuzzy set of the image to be enhanced obtained in step 20121 is first smoothed with a low-pass filtering method; the filter operator actually used in the low-pass filtering is a 3 × 3 spatial-domain low-pass operator.
Since images are easily polluted by noise during generation and transmission, the fuzzy set of the image is first smoothed to reduce noise before the enhancement processing is applied. In the present embodiment, the smoothing of the image fuzzy set is realized by the convolution of the 3 × 3 spatial-domain low-pass filter operator with the image fuzzy set matrix.
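The smoothing step can be sketched as follows; since the exact 3 × 3 operator is not fixed here, a simple 3 × 3 averaging kernel and replicated borders are assumed.

```python
import numpy as np

def smooth_fuzzy_set(mu, kernel=None):
    """3x3 spatial low-pass smoothing of the fuzzy membership matrix by
    convolution; a 3x3 averaging kernel is assumed."""
    if kernel is None:
        kernel = np.full((3, 3), 1.0 / 9.0)
    H, W = mu.shape
    padded = np.pad(mu, 1, mode='edge')   # replicate borders
    out = np.zeros_like(mu)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

mu = np.zeros((5, 5))
mu[2, 2] = 0.9                            # isolated noisy membership spike
sm = smooth_fuzzy_set(mu)
```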
In the present embodiment, the image segmentation of step 2013 proceeds as follows:
Step 20131, establishment of the two-dimensional histogram: the processor 3 establishes the two-dimensional histogram of the pixel gray values and the neighborhood average gray values of the image to be segmented. Any point in the two-dimensional histogram is denoted (i, j), where i is the abscissa value of the two-dimensional histogram, namely the gray value of any pixel (m, n) in the image to be segmented, and j is the ordinate value of the two-dimensional histogram, namely the neighborhood average gray value of the pixel (m, n). The number of occurrences of any point (i, j) in the established two-dimensional histogram is denoted C(i, j), and the frequency of occurrence of the point (i, j) is denoted h(i, j), where h(i, j) = C(i, j)/N_pix and N_pix is the total number of pixels of the image to be segmented.
In the present embodiment, the neighborhood average gray value of the pixel (m, n) is calculated according to formula (6): g(m, n) = (1/d²)·Σ_{i1=−(d−1)/2}^{(d−1)/2} Σ_{j1=−(d−1)/2}^{(d−1)/2} f(m + i1, n + j1), where f(m + i1, n + j1) is the gray value of the pixel (m + i1, n + j1), and d is the width of the square neighborhood window of the pixel, generally taken as an odd number.
Moreover, the gray-scale range of the neighborhood average gray value g(m, n) is the same as that of the pixel gray value f(m, n), both being [0, L), so the two-dimensional histogram established in Step I is a square region, as shown in Fig. 3, where L − 1 is the maximum value of the neighborhood average gray value g(m, n) and of the pixel gray value f(m, n).
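The construction of the two-dimensional histogram can be sketched as follows; the window width d = 3, replicated borders and the toy half-dark/half-bright image are illustrative assumptions.

```python
import numpy as np

def two_d_histogram(img, d=3, L=256):
    """Joint frequency h(i, j) of pixel gray value i and neighborhood average
    gray value j (square window of odd width d), normalized by the pixel count."""
    H, W = img.shape
    r = d // 2
    padded = np.pad(img.astype(float), r, mode='edge')
    counts = np.zeros((L, L))                               # C(i, j)
    for m in range(H):
        for n in range(W):
            i = img[m, n]
            j = int(round(padded[m:m + d, n:n + d].mean())) # neighborhood average
            counts[i, j] += 1
    return counts / (H * W)                                 # h(i, j)

img = np.zeros((8, 8), dtype=int)
img[:, 4:] = 200                                            # left half dark, right half bright
h = two_d_histogram(img)
```

Interior pixels land near the diagonal of the histogram (gray value close to its neighborhood average), while pixels near the dark/bright boundary fall off the diagonal, which is exactly the structure the four regions of Fig. 3 exploit.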
In Fig. 3, the established two-dimensional histogram is divided into four regions by the threshold vector (i, j). Since the correlation between pixels inside the target image or inside the background image is very strong, the gray value of a pixel and its neighborhood average gray value are very close there; near the boundary between the target image and the background image, however, the difference between the pixel gray value and the neighborhood average gray value is obvious. Thus, in Fig. 3 the 0# region corresponds to the background image, the 1# region corresponds to the target image, and the 2# and 3# regions represent the distribution of boundary pixels and nearby noise points. The optimal threshold should therefore be determined in the 0# and 1# regions, from the pixel gray values and neighborhood average gray values simultaneously, by the segmentation method of two-dimensional fuzzy partition maximum entropy, so that the amount of information truly representing the target and the background is maximized.
Step 20132, fuzzy parameter combination optimization: the processor 3 calls the fuzzy parameter combination optimization module and optimizes, with the particle swarm optimization algorithm, the fuzzy parameter combination used by the image segmentation method based on two-dimensional fuzzy partition maximum entropy, obtaining the optimized fuzzy parameter combination.
In this step, before the fuzzy parameter combination is optimized, the functional relation of the two-dimensional fuzzy entropy used when segmenting the image to be segmented is first calculated according to the two-dimensional histogram established in step 20131, and the calculated functional relation of the two-dimensional fuzzy entropy is used as the fitness function when the fuzzy parameter combination is optimized with the particle swarm optimization algorithm.
In the present embodiment, the image to be split described in step 20131 consists of the target image O and the background image P. The membership function of the target image O is
μo(i, j) = μox(i; a, b)·μoy(j; c, d)   (1).
The membership function of the background image P is
μb(i, j) = μbx(i; a, b)·μoy(j; c, d) + μox(i; a, b)·μby(j; c, d) + μbx(i; a, b)·μby(j; c, d)   (2).
In formulas (1) and (2), μox(i; a, b) and μoy(j; c, d) are the one-dimensional membership functions of the target image O and both are S-functions; μbx(i; a, b) and μby(j; c, d) are the one-dimensional membership functions of the background image P and both are S-functions, with μbx(i; a, b) = 1 − μox(i; a, b) and μby(j; c, d) = 1 − μoy(j; c, d), where a, b, c and d are parameters that control the shapes of the one-dimensional membership functions of the target image O and the background image P.
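The explicit piecewise form of the S-functions is not reproduced in the text above, so the sketch below assumes the standard Zadeh S-function with crossover point (a + b)/2; formulas (1) and (2) then follow directly. All function names are our own.

```python
def s_function(x, a, b):
    """Standard Zadeh S-function with crossover at (a + b) / 2.
    This exact form is an assumption; the source elides the definition."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    mid = (a + b) / 2.0
    if x <= mid:
        return 2.0 * ((x - a) / (b - a)) ** 2
    return 1.0 - 2.0 * ((x - b) / (b - a)) ** 2

def mu_o(i, j, a, b, c, d):
    """Formula (1): target membership as a product of two S-functions."""
    return s_function(i, a, b) * s_function(j, c, d)

def mu_b(i, j, a, b, c, d):
    """Formula (2): background membership built from the complements
    1 - mu_ox and 1 - mu_oy."""
    sx, sy = s_function(i, a, b), s_function(j, c, d)
    return (1 - sx) * sy + sx * (1 - sy) + (1 - sx) * (1 - sy)

# The two memberships always sum to 1, i.e. they form a fuzzy partition:
print(mu_o(100, 100, 50, 150, 50, 150) + mu_b(100, 100, 50, 150, 50, 150))  # 1.0
```

Expanding formula (2) algebraically gives μb(i, j) = 1 − μox·μoy = 1 − μo(i, j), which is why the pair forms a two-class fuzzy partition.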
When the functional relation of the two-dimensional fuzzy entropy is calculated in step 20132, the minimum value gmin and maximum value gmax of the pixel gray value of the image to be split, together with the minimum value smin and maximum value smax of the neighborhood-average gray value, are first determined from the two-dimensional histogram established in step 20131. In the present embodiment, gmax = smax = L − 1 and gmin = smin = 0, where L − 1 = 255.
The functional relation of the two-dimensional fuzzy entropy calculated in step 20132 is:
H(a, b, c, d) = −Σi Σj [h(i, j)·μo(i, j)/po]·ln[h(i, j)·μo(i, j)/po] − Σi Σj [h(i, j)·μb(i, j)/pb]·ln[h(i, j)·μb(i, j)/pb]   (3),
in formula (3), po = Σi Σj h(i, j)·μo(i, j) and pb = Σi Σj h(i, j)·μb(i, j), where h(i, j) is the frequency with which the point (i, j) described in step I occurs.
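Formula (3) can be sketched numerically as follows. The code assumes the Zadeh S-function form for the memberships (the source elides it) and uses the identity μb = 1 − μo implied by formula (2); all names are our own.

```python
import numpy as np

def s_curve(x, a, b):
    """Vectorised S-function (standard Zadeh form is an assumption)."""
    x = np.asarray(x, dtype=np.float64)
    mid = (a + b) / 2.0
    rise = 2.0 * ((x - a) / (b - a)) ** 2
    fall = 1.0 - 2.0 * ((x - b) / (b - a)) ** 2
    y = np.where(x <= mid, rise, fall)
    return np.clip(np.where(x <= a, 0.0, np.where(x >= b, 1.0, y)), 0.0, 1.0)

def fuzzy_entropy(h, a, b, c, d):
    """Two-dimensional fuzzy entropy H(a, b, c, d): the sum of the fuzzy
    entropies of the target and background partitions over the joint
    histogram h (a reconstruction of formula (3))."""
    L = h.shape[0]
    p = h / h.sum()                                # normalised frequency of (i, j)
    i = np.arange(L, dtype=np.float64)[:, None]    # pixel gray axis
    j = np.arange(L, dtype=np.float64)[None, :]    # neighborhood-average axis
    mo = s_curve(i, a, b) * s_curve(j, c, d)       # mu_o(i, j), formula (1)
    mb = 1.0 - mo                                  # formula (2) reduces to 1 - mu_o
    po, pb = (p * mo).sum(), (p * mb).sum()
    def part(mu, ptot):
        q = p * mu / ptot
        q = q[q > 0]                               # skip empty histogram cells
        return -(q * np.log(q)).sum()
    return part(mo, po) + part(mb, pb)

# Toy histogram: half the pixels dark, half bright.
h = np.zeros((256, 256), dtype=np.int64)
h[30, 30] = 50
h[200, 200] = 50
H = fuzzy_entropy(h, 150, 250, 150, 250)
print(H > 0)  # True
```

The particle swarm of step II then searches (a, b, c, d) for the combination that maximises this value.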
When the fuzzy parameter combination is optimized with the particle swarm optimization algorithm in step 20132, the parameter combination to be optimized is (a, b, c, d).
In the present embodiment, the optimization of the parameter combination for two-dimensional fuzzy partition maximum entropy in step 20132 comprises the following steps:
Step II-1, population initialization: one value of the parameter combination is taken as one particle, and a plurality of particles form an initialized population, denoted (ak, bk, ck, dk), where k is a positive integer with k = 1, 2, 3, …, K; K is a positive integer equal to the number of particles in the population; ak is a random value of parameter a, bk a random value of parameter b, ck a random value of parameter c and dk a random value of parameter d, with ak < bk and ck < dk.
In the present embodiment, K=15.
In actual use, K can take a value between 10 and 100 according to specific needs.
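Step II-1 can be sketched as follows; drawing integer gray levels with the ordering constraints ak < bk and ck < dk is an assumption about how the random values are generated, and the function name is our own.

```python
import random

def init_population(K, g_min, g_max, s_min, s_max):
    """Random particles (a_k, b_k, c_k, d_k) honouring a_k < b_k and
    c_k < d_k, drawn from the gray-level search ranges of step II-1."""
    pop = []
    for _ in range(K):
        a = random.randint(g_min, g_max - 1)   # a_k in [g_min, g_max - 1]
        b = random.randint(a + 1, g_max)       # b_k > a_k
        c = random.randint(s_min, s_max - 1)   # c_k in [s_min, s_max - 1]
        d = random.randint(c + 1, s_max)       # d_k > c_k
        pop.append((a, b, c, d))
    return pop

random.seed(1)
swarm = init_population(15, 0, 255, 0, 255)    # K = 15, L - 1 = 255
print(len(swarm), all(a < b and c < d for a, b, c, d in swarm))  # 15 True
```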
Step II-2, determination of the fitness function: the two-dimensional fuzzy entropy function H(a, b, c, d) of formula (3) is taken as the fitness function.
Step II-3, particle fitness evaluation: the fitness of every particle at the current time is evaluated, and the evaluation method is the same for all particles. When the fitness of the k-th particle at the current time is evaluated, the fitness value of the k-th particle is first calculated according to the fitness function determined in step II-2 and denoted fitnessk, and fitnessk is compared with Pbestk: when the comparison gives fitnessk > Pbestk, then Pbestk = fitnessk and the personal best position of the k-th particle is updated to the position of the k-th particle at the current time, where Pbestk is the maximum fitness value reached by the k-th particle up to the current time, i.e. the individual extreme value of the k-th particle; t denotes the current iteration number and is a positive integer.
After the fitness values of all particles at the current time have been calculated according to the fitness function determined in step II-2, the fitness value of the particle with the largest fitness value at the current time is denoted fitnesskbest, and fitnesskbest is compared with gbest: when the comparison gives fitnesskbest > gbest, then gbest = fitnesskbest and the group best position is updated to the position of that particle, where gbest is the global extreme value at the current time and the corresponding position is the group optimal position at the current time.
Step II-4, judging whether the iteration termination condition is met: when the termination condition is met, the parameter combination optimization process is complete; otherwise, the position and velocity of each particle at the next time are updated according to the particle swarm optimization algorithm, and the process returns to step II-3.
The iteration termination condition in step II-4 is that the current iteration number t reaches the preset maximum iteration number Imax, or that Δg ≤ e, where Δg = |gbest − gmax|; in this expression gbest is the global extreme value at the current time, gmax is the originally set target fitness value, and e is a positive number serving as a preset deviation.
In the present embodiment, the maximum iteration number Imax = 30. In actual use, Imax can be adjusted between 20 and 200 according to specific needs.
When the population is initialized in step II-1 of the present embodiment, (ak, ck) in the particle (ak, bk, ck, dk) is taken as the initial velocity vector of the k-th particle, and (bk, dk) as the initial position of the k-th particle.
When the position and velocity of each particle at the next time are updated according to the particle swarm optimization algorithm in step II-4, the update method is the same for all particles. When the velocity and position of the k-th particle at the next time are updated, the velocity vector of the k-th particle at the next time is first calculated from the velocity vector, the position, the individual extreme value Pbestk and the global extreme value of the k-th particle at the current time; the position of the k-th particle at the next time is then calculated from its position at the current time and the calculated velocity vector at the next time.
Moreover, when the velocity and position of the k-th particle at the next time are updated in step II-4, they are calculated according to formulas (4) and (5):
vk(t+1) = ω·vk(t) + c1·r1·(pk(t) − xk(t)) + c2·r2·(g(t) − xk(t))   (4),
xk(t+1) = xk(t) + vk(t+1)   (5),
in formulas (4) and (5), xk(t) is the position of the k-th particle at the current time, vk(t) is the velocity vector of the k-th particle at the current time, pk(t) is the personal best position of the k-th particle at the current time, g(t) is the group optimal position at the current time, c1 and c2 are acceleration factors with c1 + c2 = 4, and r1 and r2 are random numbers uniformly distributed in [0, 1]; ω is the inertia weight, which decreases linearly as the iteration number increases:
ω = ωmax − (ωmax − ωmin)·t/Imax,
where ωmax and ωmin are the preset maximum and minimum inertia weights respectively, t is the current iteration number and Imax is the preset maximum iteration number.
In the present embodiment, ωmax = 0.9, ωmin = 0.4 and c1 = c2 = 2.
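Formulas (4) and (5) with the linearly decreasing inertia weight can be sketched as a single update routine. The function and variable names are our own, and the one-dimensional toy fitness below merely stands in for the fuzzy-entropy fitness of step II-2.

```python
import random

def pso_step(positions, velocities, pbest_pos, pbest_val, gbest_pos,
             fitness, t, i_max, w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """One particle-swarm iteration implementing formulas (4) and (5)
    with the linearly decreasing inertia weight omega."""
    w = w_max - (w_max - w_min) * t / i_max        # inertia weight at iteration t
    gbest_val = max(pbest_val)
    for k in range(len(positions)):
        for dim in range(len(positions[k])):
            r1, r2 = random.random(), random.random()
            velocities[k][dim] = (w * velocities[k][dim]              # formula (4)
                                  + c1 * r1 * (pbest_pos[k][dim] - positions[k][dim])
                                  + c2 * r2 * (gbest_pos[dim] - positions[k][dim]))
            positions[k][dim] += velocities[k][dim]                   # formula (5)
        val = fitness(positions[k])
        if val > pbest_val[k]:                     # the fuzzy entropy is maximised
            pbest_val[k] = val
            pbest_pos[k] = positions[k][:]
        if val > gbest_val:
            gbest_val = val
            gbest_pos[:] = positions[k]
    return gbest_val

# 1-D toy fitness standing in for the fuzzy entropy; the optimum is x = 3.
random.seed(0)
fit = lambda p: -(p[0] - 3.0) ** 2
K, I_MAX = 15, 30                                  # K = 15, Imax = 30 as above
pos = [[random.uniform(0.0, 6.0)] for _ in range(K)]
vel = [[0.0] for _ in range(K)]
pb_pos = [p[:] for p in pos]
pb_val = [fit(p) for p in pos]
gb_pos = pb_pos[pb_val.index(max(pb_val))][:]
best = max(pb_val)
for t in range(I_MAX):
    best = pso_step(pos, vel, pb_pos, pb_val, gb_pos, fit, t, I_MAX)
print(best > -1.0)  # True: gbest stays near the optimum
```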
In the present embodiment, before the population is initialized in step II-1, the search ranges of ak, bk, ck and dk must first be determined, where the minimum pixel gray value of the image to be split described in step I is gmin and its maximum is gmax; the neighborhood of pixel (m, n) has a size of d × d pixels, with minimum neighborhood-average gray value smin and maximum neighborhood-average gray value smax; ak = gmin, …, gmax − 1; bk = gmin + 1, …, gmax; ck = smin, …, smax − 1; dk = smin + 1, …, smax.
In the present embodiment, d=5.
In actual use, the value of d can be adjusted according to specific needs.
Step 20133, image segmentation: the processor 3 uses the fuzzy parameter combination optimized in step 20132 to classify each pixel of the image to be split according to the image segmentation method based on two-dimensional fuzzy partition maximum entropy, thereby completing the image segmentation process and obtaining the segmented target image.
In the present embodiment, after the optimized fuzzy parameter combination (a, b, c, d) is obtained, the pixels are classified according to the maximum membership principle: when μo(i, j) ≥ 0.5, the pixel is assigned to the target region; otherwise it is assigned to the background region (see Fig. 4). In Fig. 4, the grid cells with μo(i, j) ≥ 0.5 represent the target region after image segmentation.
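The maximum-membership classification of step 20133 can be sketched as follows, again assuming the standard Zadeh S-function form for the memberships (the source elides it); the function names are our own.

```python
import numpy as np

def s_curve(x, a, b):
    """Vectorised S-function (standard Zadeh form is an assumption)."""
    x = np.asarray(x, dtype=np.float64)
    mid = (a + b) / 2.0
    rise = 2.0 * ((x - a) / (b - a)) ** 2
    fall = 1.0 - 2.0 * ((x - b) / (b - a)) ** 2
    y = np.where(x <= mid, rise, fall)
    return np.clip(np.where(x <= a, 0.0, np.where(x >= b, 1.0, y)), 0.0, 1.0)

def segment(gray, navg, a, b, c, d):
    """Maximum-membership classification of step 20133: a pixel belongs
    to the target region when mu_o(i, j) >= 0.5."""
    mo = s_curve(gray, a, b) * s_curve(navg, c, d)   # formula (1)
    return mo >= 0.5                                  # True = target pixel

gray = np.array([[20, 220], [40, 240]], dtype=np.uint8)
navg = np.array([[30, 210], [50, 230]], dtype=np.uint8)
mask = segment(gray, navg, 100, 150, 100, 150)
print(mask[:, 1])  # bright right column is classified as target
```

Because μb = 1 − μo here, testing μo(i, j) ≥ 0.5 is exactly the maximum-membership rule between the two classes.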
The above is only a preferred embodiment of the present invention and does not limit the present invention in any way. Any simple modification, change or equivalent structural variation made to the above embodiment according to the technical essence of the present invention still falls within the protection scope of the technical solution of the present invention.