Summary of the invention
To address the deficiencies of the above-mentioned prior art, the object of the present invention is to draw on the visual attention mechanism of human beings and, in combination with a graph-theory-based image segmentation method, establish a vehicle target segmentation model based on visual saliency features. Under good environmental conditions the model can not only segment the complete vehicle accurately, but also shows a degree of adaptability and robustness; under night-time and shadow-occlusion conditions it can more accurately segment weak-contrast vehicle targets in a traffic scene.
A method for segmenting vehicle targets under weak contrast specifically comprises the following steps:
Step 1: use a multi-instance learning method to build a saliency model from the training images; then use the saliency model to predict the bags and instances in the test image, obtaining the saliency map of the test image; this specifically comprises:
Step 11, preprocess the training images and extract the image brightness gradient feature, color gradient feature and texture gradient feature;
Step 12, introduce multi-instance learning into image saliency detection to obtain the saliency detection result of the test image;
Step 2: introduce the saliency of the test image into a graph-cut framework, optimize the graph-cut framework according to the instance feature vectors and the labels of the instance bags, and solve for the suboptimal solution of the graph-cut optimization, obtaining an accurate segmentation of the target.
Further, in step 11 the training images are preprocessed and the brightness gradient feature, color gradient feature and texture gradient feature are extracted, specifically comprising steps 111 to 114:
Step 111, perform a color-space conversion and quantization preprocessing of each component on the training image, obtaining the normalized luminance component L and color components a, b;
Step 112, calculate the brightness gradient of each pixel in the matrix corresponding to the luminance component L;
Step 113, calculate the color gradient of each pixel in the matrices of color component a and color component b respectively;
Step 114, calculate the texture gradient of each pixel.
Further, step 111 is specifically as follows:
First, gamma correction is applied to the training image to achieve a non-linear adjustment of the image color components, and the training image is converted from the RGB color space to the Lab color space; then the luminance component L and the two color components a, b of the training image in the Lab color space are normalized, giving the normalized luminance component L and color components a, b.
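As a rough illustration of this preprocessing step, the sketch below assumes OpenCV for the RGB-to-Lab conversion and a gamma value of 2.2; neither the library nor the gamma value is prescribed above.

```python
import cv2
import numpy as np

def preprocess_lab(bgr_image, gamma=2.2):
    """Gamma-correct an image, convert it to Lab, and normalize L, a, b to [0, 1].
    The gamma value 2.2 is an assumed example, not fixed by the method."""
    img = bgr_image.astype(np.float32) / 255.0
    img = np.power(img, 1.0 / gamma)                 # non-linear (gamma) adjustment
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)       # float input: L in [0,100], a,b roughly [-128,127]
    L, a, b = cv2.split(lab)
    return L / 100.0, (a + 128.0) / 255.0, (b + 128.0) / 255.0
```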
Further, step 112 specifically comprises steps A to D:
A, construct the weight matrices Weights<> of 3 scales;
B, construct the index map matrices Slice_map<> of 3 scales; the index map matrix Slice_map<> of each scale has the same dimensions as the weight matrix Weights<> of the corresponding scale, i.e. each index map matrix Slice_map<> is also a square matrix whose number of rows and columns is 2r+1; 8 directions (0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°) are chosen to divide the matrix into 16 regions, and the value of the elements in each region equals that region's number, 0 to 15;
C, multiply each index map matrix Slice_map<> element-by-element with the weight matrix Weights<> of its corresponding scale to obtain the matrix of the corresponding scale, i.e. the neighborhood gradient operator;
D, use the neighborhood gradient operator to calculate the brightness gradient of a pixel to be evaluated in the matrix of the luminance component L.
Further, step A is specifically as follows:
Construct the weight matrices Weights<> of the 3 scales respectively; each weight matrix Weights<> is a square matrix whose number of rows and columns both equal 2r+1; the non-zero elements of the weight matrix Weights<> equal 1, and the elements equal to 1 are distributed within a disk of radius r centered on the central element (r+1, r+1) of the square matrix, forming the inscribed circle of the square matrix; all remaining elements of the square matrix are 0; the 3 scales are r=3, r=5 and r=10.
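A minimal sketch of the disk-shaped weight matrix described in step A (NumPy assumed):

```python
import numpy as np

def disk_weight_matrix(r):
    """Weight matrix Weights<> of size (2r+1) x (2r+1): elements inside the disk of
    radius r around the central element are 1, all remaining elements are 0."""
    yy, xx = np.mgrid[0:2 * r + 1, 0:2 * r + 1]
    return ((yy - r) ** 2 + (xx - r) ** 2 <= r * r).astype(np.float32)

# the three scales used for the brightness gradient
weights = {r: disk_weight_matrix(r) for r in (3, 5, 10)}
```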
Further, step D is specifically as follows:
1. For a given scale, center the computation on a pixel to be evaluated in the matrix of the luminance component L obtained in step 111; perform an element-wise product of the neighborhood gradient operator of that scale with the luminance components within the neighborhood of the pixel, obtaining the matrix Neibor<> over the neighborhood of the pixel; choose the straight line in the vertical direction (90°) as the dividing line to split the disk of the neighborhood gradient operator into a left half-circle and a right half-circle, the left half-circle comprising sectors 0 to 7 and the right half-circle comprising sectors 8 to 15; the elements of the matrix Neibor<> corresponding to each half-circle form a histogram, which is normalized; the two histograms are denoted Slice_hist_1<> and Slice_hist_2<>. H_1 denotes the histogram corresponding to the left half-circle region and H_2 the histogram corresponding to the right half-circle region; i is the histogram bin value, defined on [0, 24], i.e. the brightness range.
2. The chi-square distance shown in formula (1) is used to calculate the difference between the two normalized histograms, giving the brightness gradient of the pixel to be evaluated in the vertical direction at that scale;
After the brightness gradient in the vertical direction at that scale has been calculated, the straight lines in the other directions are chosen in turn as the dividing line, giving the brightness gradients of this pixel in all other directions at that scale; the brightness gradients of this pixel in all directions at the other scales are then calculated in the same manner as in step D. When the brightness gradients of this pixel have been calculated for all directions at all scales, the final brightness gradient of the pixel is calculated by formula (2):
f(x, y, r, n_ori; r = 3, 5, 10; n_ori = 1, 2, …, 8) → BrightnessGradient(x, y)    (2)
In the formula, f is a mapping function, (x, y) is any pixel to be evaluated, r denotes the chosen scale and n_ori the chosen direction; BrightnessGradient(x, y) is the final brightness gradient of pixel (x, y). The correspondence rule of f is: for each direction, select the maximum brightness gradient value across the 3 scales as the brightness gradient value in that direction, then sum the brightness gradients of the 8 directions to obtain the final brightness gradient of pixel (x, y).
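The sketch below pulls steps B to D together for the brightness gradient. It reuses disk_weight_matrix() from the earlier sketch; the sector index map, the chi-square form assumed for formula (1), and the simple sector rotation used to realize the 8 dividing directions are illustrative assumptions rather than the exact operators of the method.

```python
import numpy as np

def slice_map(r):
    """Index map Slice_map<>: the (2r+1) x (2r+1) support is divided by 8 directions
    into 16 angular sectors of 22.5 degrees, numbered 0..15."""
    yy, xx = np.mgrid[0:2 * r + 1, 0:2 * r + 1]
    ang = np.arctan2(-(yy - r), xx - r)                       # image y-axis points down
    return np.floor(((ang + 2 * np.pi) % (2 * np.pi)) / (np.pi / 8)).astype(int) % 16

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms (assumed form of formula (1))."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def brightness_gradient(L, scales=(3, 5, 10), n_ori=8, n_bins=25):
    """Per orientation, take the maximum half-disc chi-square response over the scales,
    then sum the 8 orientation responses (the mapping f of formula (2))."""
    H, W = L.shape
    grad = np.zeros((H, W), dtype=np.float32)
    quant = np.clip(np.rint(L * (n_bins - 1)).astype(int), 0, n_bins - 1)   # bins 0..n_bins-1
    for ori in range(n_ori):
        best = np.zeros((H, W), dtype=np.float32)
        for r in scales:
            wmat = disk_weight_matrix(r)                  # from the earlier sketch
            smap = (slice_map(r) - 2 * ori) % 16          # rotate sectors to this dividing line
            half_a = np.isin(smap, np.arange(0, 8)) & (wmat > 0)    # sectors 0..7
            half_b = np.isin(smap, np.arange(8, 16)) & (wmat > 0)   # sectors 8..15
            resp = np.zeros((H, W), dtype=np.float32)
            for y in range(r, H - r):
                for x in range(r, W - r):
                    patch = quant[y - r:y + r + 1, x - r:x + r + 1]
                    h1 = np.bincount(patch[half_a], minlength=n_bins).astype(float)
                    h2 = np.bincount(patch[half_b], minlength=n_bins).astype(float)
                    resp[y, x] = chi_square(h1 / max(h1.sum(), 1.0),
                                            h2 / max(h2.sum(), 1.0))
            best = np.maximum(best, resp)
        grad += best
    return grad
```

The per-pixel Python loops are written for readability only; a practical implementation would vectorize them or use running histograms.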
Further, step 114 is specifically as follows:
A, construct the multi-scale texture filter bank set Filters_(x,y)(n_f, filter, r, θ), where n_f denotes the number of filters, filter denotes the set of filter types, r denotes the scale and θ denotes the chosen direction;
B, calculate the texture filter response vector corresponding to each pixel in the training image, i.e. Tex(x, y) = (fil_1, fil_2, fil_3, …, fil_n_f), specifically as follows:
Convolve the gray-scale image I_gray(x, y) with the constructed multi-scale texture filter set Filters_(x,y)[n_f, filter, r, θ] within the neighborhood of the corresponding scale centered on pixel (x, y), obtaining the texture filter response vector of pixel (x, y). For example, at scale r=5 the convolution is performed within the 11*11 neighborhood centered on the pixel, i.e. I_gray(x, y) * Filters(n_f, filter, r, θ), where n_f = 17, filter = (fil_cs, fil_1, fil_2), r = 5, θ = 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°; this gives the texture filter response vector Tex(x, y) = (fil_1, fil_2, fil_3, …, fil_17) of pixel (x, y).
When the texture feature vectors of the neighborhoods of pixel (x, y) at the corresponding scales r=5, r=10 and r=20 are calculated with the above method, the full texture filter response vector of the pixel is obtained, Tex(x, y) = (fil_1, fil_2, fil_3, …, fil_51).
C, construct the texton histogram, specifically as follows:
The K-means method is used to cluster the texture filter response vectors of all pixels (x, y) in the training images; K = 32 is taken as the initial value in the clustering, giving 32 cluster centers in total; the texture filter response vectors corresponding to the 32 cluster centers are taken out as the textons, and are used as the labels of the 32 bins of the texture feature statistical histogram to build the texton histogram;
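A brief sketch of the texton construction in step C, assuming scikit-learn's KMeans as the clustering implementation:

```python
import numpy as np
from sklearn.cluster import KMeans   # assumed implementation of the K-means step

def build_textons(responses, k=32, seed=0):
    """Cluster the per-pixel texture filter response vectors (shape H x W x n_f) into
    K = 32 textons; each pixel receives the label of its nearest cluster center, and
    the 32 labels serve as the bins of the texton histogram."""
    H, W, n_f = responses.shape
    km = KMeans(n_clusters=k, n_init=4, random_state=seed).fit(responses.reshape(-1, n_f))
    return km.labels_.reshape(H, W), km.cluster_centers_
```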
D, calculate the texture gradient of each pixel, specifically as follows:
First, steps A to C of step 112 are used to obtain the neighborhood gradient operators at the 3 scales. For a given scale, center the computation on a pixel (x, y) to be evaluated and multiply each element of the neighborhood gradient operator of that scale with the texture filter response vector corresponding to it, obtaining the neighborhood matrix group Neibor[<>] of this pixel; choose the straight line in the vertical direction (90°) as the dividing line to split the disk of the scale neighborhood into a left half-circle and a right half-circle, the left half-circle comprising sectors 0 to 7 and the right half-circle comprising sectors 8 to 15; the elements of the neighborhood matrix group Neibor[<>] corresponding to each half-circle form a texton histogram. H_1 denotes the histogram corresponding to the left half-circle region and H_2 the histogram corresponding to the right half-circle region, with the histogram bin labels given by step C. Proceeding in the same way as sub-step 2. of step D of step 112, the final texture gradient of each pixel to be evaluated in the training image is obtained, denoted TextureGradient(x, y).
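Since step D applies the same half-disc chi-square machinery as the brightness gradient, only with 32-bin texton histograms, a sketch can simply reuse brightness_gradient() from above (the rescaling below only serves to feed integer texton labels through that function unchanged):

```python
def texture_gradient(texton_map, scales=(3, 5, 10), n_bins=32):
    """Texture gradient: half-disc chi-square over 32-bin texton histograms, reusing
    the brightness_gradient() sketch (labels are rescaled to [0, 1] so that its
    internal quantization recovers the original texton labels 0..31)."""
    return brightness_gradient(texton_map.astype(np.float32) / (n_bins - 1),
                               scales=scales, n_bins=n_bins)
```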
Further, step A is specifically as follows:
Convert the training image to a gray-scale image, denoted I_gray(x, y), and normalize the gray component of each pixel (x, y) of the gray-scale image I_gray(x, y); choose three kinds of filters, namely the second-order Gaussian partial-derivative filter, the filter obtained from it by the Hilbert transform, and the center-surround filter; build the multi-scale texture filter set over 8 directions and 3 scales, denoted Filters_(x,y)[n_f, filter, r, θ], where n_f denotes the number of filters, filter denotes the set of filter types, r denotes the scale and θ denotes the chosen direction; n_f = 51, filter = (fil_cs, fil_1, fil_2), r = 5, 10, 20, θ = 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°. The multi-scale texture filter set Filters_(x,y)[n_f, filter, r, θ] is as shown in formulas (5), (6) and (7):
Second-order Gaussian partial-derivative filters at 3 scales and 8 directions:
Filters obtained by the Hilbert transform of the second-order Gaussian partial derivative, at 3 scales and 8 directions:
f_2(x, y) = Hilbert(f_1(x, y))    (6)
Center-surround filters at 3 scales:
Gaussian_cs<>=m_surround<>-m_center<> (7)
The standard deviation σ values corresponding to the surround filter, the center filter, the second-order Gaussian partial-derivative filter and its Hilbert-transform filter are, respectively, …, 2 and ….
Further, in step 12 multi-instance learning is introduced into image saliency detection to obtain the saliency detection result of the test image, specifically comprising step 121 and step 122:
Step 121, use the brightness, color and texture gradient features obtained by the method of step 11, combined with the multi-instance learning EMDD algorithm, to learn from the training set and obtain the trained saliency detection model;
Step 122, feed the test image into the trained saliency detection model to obtain the saliency detection result of the test image.
Further, step 2 specifically comprises the following steps:
Step 21, the saliency detection result of the image obtained in step 1 is used as the input of the graph-cut algorithm; the weight function shown in formula (3) is built from the saliency scores of the bags and the instance feature vectors, and the optimized graph-cut cost function shown in formula (4) is obtained;
In formula (3), w_ij denotes the visual feature similarity between the regions corresponding to instance bag i and instance bag j; Salien(i) and Salien(j) denote the normalized saliency values of region i and region j respectively; σ is the sensitivity parameter regulating the visual feature difference, with a value of 10 to 20; the similarity weight of region i to itself is 0; the similarity matrix W = {w_ij} is a symmetric matrix with zero diagonal, and w_ij ∈ [0, 1]; f_i and f_j denote the corresponding instance feature vectors in instance bags i and j respectively, i.e. the brightness gradient feature, color gradient feature and texture gradient feature of the image are combined into the 4-dimensional mixed vector Mixvector_i = {BrightnessGradient_i, ColorGradient_i, TextureGradient_i}, and Sim(f_i, f_j) = ||Mixvector_i − Mixvector_j||_2. In the graph-cut framework represented by formula (4), D is an N-dimensional diagonal matrix whose diagonal elements are determined by the similarity weights w_ij; U is the cutting-state vector, each component U_i of which denotes the cutting state of region i; the numerator of formula (4) represents the visual similarity between region i and region j, and the denominator represents the visual similarity within region i;
Step 22, adopt the agglomerative hierarchical clustering algorithm to solve for the cutting-state vector corresponding to the minimum eigenvalue of R(U), giving the optimal segmentation result of the image.
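Formulas (3) and (4) are not reproduced above, so the sketch below only illustrates their described roles: a Gaussian-kernel similarity on the 4-dimensional mixed feature vectors modulated by the normalized saliency values (an assumed form for formula (3)), and a normalized-cut-style ratio of between-region to within-region similarity (an assumed form for formula (4)).

```python
import numpy as np

def build_weight_matrix(mixvectors, salien, sigma=15.0):
    """Similarity weights w_ij between the regions (instance bags).
    `mixvectors` holds the mixed feature vector of each region, `salien` the
    normalized saliency value Salien(i); sigma is the sensitivity parameter (10~20).
    The exact expression of formula (3) is not given above; this form is assumed."""
    n = len(mixvectors)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            sim = np.linalg.norm(mixvectors[i] - mixvectors[j])       # Sim(f_i, f_j)
            W[i, j] = W[j, i] = np.exp(-sim ** 2 / sigma ** 2) * salien[i] * salien[j]
    return np.clip(W, 0.0, 1.0)          # symmetric, zero diagonal, w_ij in [0, 1]

def cut_cost(U, W):
    """Assumed normalized-cut-style cost R(U): between-region similarity divided by
    within-region similarity, with D the diagonal matrix built from W."""
    D = np.diag(W.sum(axis=1))
    return float(U @ (D - W) @ U) / max(float(U @ D @ U), 1e-10)
```

The agglomerative hierarchical clustering of step 22 would then search the candidate cutting-state vectors U for the one corresponding to the minimum eigenvalue, i.e. the smallest value of this ratio.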
Embodiment
As shown in Figure 1, the method for segmenting vehicle targets under weak contrast provided by the present invention specifically comprises the following steps:
Step 1: choose night-time highway road images as training images and use a multi-instance learning method to build a saliency model from the training images; then use the saliency model to predict the bags and instances in the test image, obtaining the saliency map of the test image;
Step 2: introduce the saliency of the test image into a graph-cut framework, optimize the graph-cut framework according to the instance feature vectors and the labels of the instance bags, and adopt the agglomerative hierarchical clustering algorithm to solve for the suboptimal solution of the graph-cut optimization, obtaining an accurate segmentation of the target.
Further, step 1 specifically comprises step 11 and step 12:
Step 11, preprocess the training images and extract the image brightness gradient feature, color gradient feature and texture gradient feature;
Step 12, introduce multi-instance learning into image saliency detection to obtain the saliency detection result of the test image.
Further, in step 11 the training images are preprocessed and the brightness gradient feature, color gradient feature and texture gradient feature are extracted, specifically comprising steps 111 to 114:
Step 111, perform a color-space conversion and quantization preprocessing of each component on the training image, obtaining the normalized luminance component L and color components a, b; specifically as follows:
First, gamma correction is applied to the training image to achieve a non-linear adjustment of the image color components, and the training image is converted from the RGB color space to the Lab color space; then the luminance component L and the two color components a, b of the training image in the Lab color space are normalized, giving the normalized luminance component L and color components a, b;
After the preprocessing of the training images is completed, the present invention analyzes the vehicle shadow characteristics in the training images, which provides a theoretical basis for the subsequent choice of gradient features. The training images are all night-time road images with typical night characteristics. Owing to night driving, the vehicle targets in every training image suffer from shadow interference; the presence of vehicle shadows enlarges and deforms the vehicle body region, may even cause several vehicles to appear connected, and severely affects the accurate segmentation of the vehicle body and the extraction of vehicle information. The illumination range and intensity of vehicle lights at night also affect the target segmentation to a certain extent, so the desired segmentation result must eliminate the shadows formed by the illumination.
A shadow is a physical phenomenon produced when the light emitted by a light source is blocked by an object in the scene, and comprises self-shadows and cast shadows. A self-shadow is the part of the object itself that appears darker because the object blocks the light source and the illumination is uneven; a cast shadow is the shadow projected by the object onto other surfaces (such as the road). From a large number of training images containing vehicles driving on the highway at night and their shadows, the features that distinguish shadows from vehicle targets are mainly:
(1) The color and texture of the road surface covered by a shadow do not change significantly.
(2) The brightness of a cast shadow is generally lower than the background brightness, and its luminance gain relative to the background region is a value smaller than 1; the opposite holds, however, under the interference of vehicle high-beam headlights.
(3) The gray-value variation inside a shadow region is mild, and its gradient appears flat or locally flat.
Based on the above analysis, the present invention uses the brightness gradient feature, color gradient feature and texture gradient feature of the training images to learn the saliency model.
Step 112, calculate the brightness gradient of each pixel in the matrix corresponding to the luminance component L, specifically comprising steps A to D:
A, construct the weight matrices Weights<> of 3 scales, specifically as follows:
Construct the weight matrices Weights<> of the 3 scales respectively; each weight matrix Weights<> is a square matrix whose number of rows and columns both equal 2r+1; the non-zero elements of the weight matrix Weights<> equal 1, and the elements equal to 1 are distributed within a disk of radius r centered on the central element (r+1, r+1) of the square matrix, forming the inscribed circle of the square matrix; all remaining elements of the square matrix are 0; in the present invention, when the 3 scales are r=3, r=5 and r=10, the corresponding weight matrices Weights<> are as follows:
B, construct the index map matrices Slice_map<> of 3 scales; the index map matrix Slice_map<> of each scale has the same dimensions as the weight matrix Weights<> of the corresponding scale, i.e. each index map matrix Slice_map<> is also a square matrix whose number of rows and columns is 2r+1; 8 directions (0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°) are chosen to divide the matrix into 16 regions, and the value of the elements in each region equals that region's number, 0 to 15; the purpose of establishing the index map matrix Slice_map<> is to locate the sub-regions quickly. In the present invention, the 3 index map matrices Slice_map<> are as follows:
C, multiply each index map matrix Slice_map<> element-by-element with the weight matrix Weights<> of its corresponding scale to obtain the matrix of the corresponding scale, i.e. the neighborhood gradient operator. The neighborhood gradient operators at the 3 scales are as follows:
D, use the neighborhood gradient operator to calculate the brightness gradient of a pixel to be evaluated in the matrix of the luminance component L, specifically as follows:
1. For a given scale, center the computation on a pixel to be evaluated in the matrix of the luminance component L obtained in step 111; perform an element-wise product of the neighborhood gradient operator of that scale with the luminance components within the neighborhood of the pixel, obtaining the matrix Neibor<> over the neighborhood of the pixel; choose the straight line in the vertical direction (90°) as the dividing line to split the disk of the neighborhood gradient operator into a left half-circle and a right half-circle, the left half-circle comprising sectors 0 to 7 and the right half-circle comprising sectors 8 to 15; the elements of the matrix Neibor<> corresponding to each half-circle form a histogram, which is normalized; the two histograms are denoted Slice_hist_1<> and Slice_hist_2<>, as shown in Figure 4. H_1 denotes the histogram corresponding to the left half-circle region and H_2 the histogram corresponding to the right half-circle region; i is the histogram bin value, defined on [0, 24], i.e. the brightness range.
2. The chi-square distance shown in formula (1) is used to calculate the difference between the two normalized histograms, giving the brightness gradient of the pixel to be evaluated in the vertical direction at that scale;
After the brightness gradient in the vertical direction at that scale has been calculated, as shown in Figure 5, the straight lines in the other directions are chosen in turn as the dividing line, giving the brightness gradients of this pixel in all other directions at that scale; the brightness gradients of this pixel in all directions at the other scales are then calculated in the same manner as in step D. When the brightness gradients of this pixel have been calculated for all directions at all scales, the final brightness gradient of the pixel is calculated by formula (2):
f(x, y, r, n_ori; r = 3, 5, 10; n_ori = 1, 2, …, 8) → BrightnessGradient(x, y)    (2)
In the formula, f is a mapping function, (x, y) is any pixel to be evaluated, r denotes the chosen scale and n_ori the chosen direction; BrightnessGradient(x, y) is the final brightness gradient of pixel (x, y). The correspondence rule of f is: for each direction, select the maximum brightness gradient value across the 3 scales as the brightness gradient value in that direction, then sum the brightness gradients of the 8 directions to obtain the final brightness gradient of pixel (x, y);
Step 113, calculate the color gradient of each pixel in the matrices of color component a and color component b respectively, specifically as follows:
The calculation of the color gradient is similar to that of the brightness gradient; the difference is that the color gradient feature is the color gradient of the two color components, namely color components a and b in the Lab color space. A further difference from the brightness gradient calculation is that the 3 chosen scales are r=5, r=10 and r=20; accordingly, the sizes of the corresponding weight matrices and index map matrices are 11*11, 21*21 and 41*41 respectively. The color gradients of the two color components are computed with the same method as the brightness gradient, giving the final color gradient of each pixel to be evaluated in the matrices of color components a and b.
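Under the same assumptions as the brightness-gradient sketch above, the color gradients simply call that routine on the normalized a and b components with the larger scales:

```python
def color_gradient(channel, scales=(5, 10, 20)):
    """Color gradient of one normalized Lab color component (a or b): the same
    half-disc chi-square computation as brightness_gradient(), only at the larger
    scales r = 5, 10, 20 (operators of size 11x11, 21x21 and 41x41)."""
    return brightness_gradient(channel, scales=scales)

# e.g.: color_grad_a = color_gradient(a_norm); color_grad_b = color_gradient(b_norm)
```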
Step 114, calculate the texture gradient of each pixel, specifically as follows:
A, construct the multi-scale texture filter bank set Filters_(x,y)(n_f, filter, r, θ), where n_f denotes the number of filters, filter denotes the set of filter types, r denotes the scale and θ denotes the chosen direction, specifically as follows:
Convert the training image to a gray-scale image, denoted I_gray(x, y), and normalize the gray component of each pixel (x, y) of the gray-scale image I_gray(x, y); choose three kinds of filters, namely the second-order Gaussian partial-derivative filter (denoted fil_1<>), the filter obtained from it by the Hilbert transform (denoted fil_2<>), and the center-surround filter (denoted Gaussian_cs<>); build the multi-scale texture filter set over 8 directions and 3 scales, denoted Filters_(x,y)[n_f, filter, r, θ], where n_f denotes the number of filters, filter denotes the set of filter types, r denotes the scale and θ denotes the chosen direction; n_f = 51, filter = (fil_cs, fil_1, fil_2), r = 5, 10, 20, θ = 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°. The multi-scale texture filter set Filters_(x,y)[n_f, filter, r, θ] is as shown in formulas (5), (6) and (7):
Second-order Gaussian partial-derivative filters at 3 scales and 8 directions:
Filters obtained by the Hilbert transform of the second-order Gaussian partial derivative, at 3 scales and 8 directions:
f_2(x, y) = Hilbert(f_1(x, y))    (6)
Center-surround filters at 3 scales:
Gaussian_cs<>=m_surround<>-m_center<> (7)
In the filter bank set Filters_(x,y)[n_f, filter, r, θ], the center-surround filter has no directionality; it is the difference between a surround filter and a center filter, both of which are Gaussian filters. The standard deviation σ values corresponding to the surround filter, the center filter, the second-order Gaussian partial-derivative filter and its Hilbert-transform filter are, respectively, …, 2 and ….
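A sketch of the three filter types under stated assumptions: the elongation ratio of the oriented Gaussian derivative, the concrete sigma values and the row-wise Hilbert transform (exact only for θ = 0) are illustrative choices, since the corresponding formulas (5) to (7) are not reproduced above. SciPy and NumPy are assumed.

```python
import numpy as np
from scipy.signal import hilbert

def gaussian_2nd_deriv(r, sigma, theta, elong=3.0):
    """fil_1: oriented second-order Gaussian partial-derivative kernel on a
    (2r+1) x (2r+1) support; the elongation ratio 3.0 is an assumed value."""
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    u = xx * np.cos(theta) + yy * np.sin(theta)            # derivative axis
    v = -xx * np.sin(theta) + yy * np.cos(theta)
    g = np.exp(-0.5 * (u ** 2 / sigma ** 2 + v ** 2 / (elong * sigma) ** 2))
    kernel = (u ** 2 / sigma ** 4 - 1.0 / sigma ** 2) * g  # second derivative along u
    return kernel - kernel.mean()                          # zero mean

def hilbert_pair(kernel):
    """fil_2: Hilbert-transform counterpart of fil_1 (taken row-wise here, a
    simplification that is exact only for theta = 0)."""
    return np.imag(hilbert(kernel, axis=1))

def center_surround(r, sigma_center=1.0, sigma_surround=2.0):
    """Gaussian_cs<> = m_surround<> - m_center<> (formula (7)): a non-oriented
    difference of two Gaussians; the two sigma values are assumed examples."""
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    d2 = xx ** 2 + yy ** 2
    g = lambda s: np.exp(-0.5 * d2 / s ** 2) / (2 * np.pi * s ** 2)
    return g(sigma_surround) - g(sigma_center)
```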
B, calculate the texture filter response vector corresponding to each pixel in the training image, i.e. Tex(x, y) = (fil_1, fil_2, fil_3, …, fil_n_f), specifically as follows:
Convolve the gray-scale image I_gray(x, y) with the constructed multi-scale texture filter set Filters_(x,y)[n_f, filter, r, θ] within the neighborhood of the corresponding scale centered on pixel (x, y), obtaining the texture filter response vector of pixel (x, y). For example, at scale r=5 the convolution is performed within the 11*11 neighborhood centered on the pixel, i.e. I_gray(x, y) * Filters(n_f, filter, r, θ), where n_f = 17, filter = (fil_cs, fil_1, fil_2), r = 5, θ = 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°; this gives the texture filter response vector Tex(x, y) = (fil_1, fil_2, fil_3, …, fil_17) of pixel (x, y).
When the texture feature vectors of the neighborhoods of pixel (x, y) at the corresponding scales r=5, r=10 and r=20 are calculated with the above method, the full texture filter response vector of the pixel is obtained, Tex(x, y) = (fil_1, fil_2, fil_3, …, fil_51).
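A sketch of step B under the same assumptions, stacking the 17 kernels of one scale (1 center-surround + 8 fil_1 + 8 fil_2 kernels from the previous sketch) and convolving them with the normalized gray image; SciPy's ndimage convolution stands in for the neighborhood convolution described above.

```python
import numpy as np
from scipy.ndimage import convolve

def texture_responses(gray, filter_bank):
    """Convolve the normalized gray image with every kernel in the filter bank and
    stack the results into the per-pixel response vector Tex(x, y)."""
    return np.stack([convolve(gray, k, mode='nearest') for k in filter_bank], axis=-1)

# example assembly for one scale (r = 5, sigma values assumed), 8 directions:
thetas = np.deg2rad(np.arange(0.0, 180.0, 22.5))
bank_r5 = ([center_surround(5)]
           + [gaussian_2nd_deriv(5, 1.4, t) for t in thetas]
           + [hilbert_pair(gaussian_2nd_deriv(5, 1.4, t)) for t in thetas])
# 1 + 8 + 8 = 17 kernels for this scale; the three scales together give n_f = 51
```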
C, construct the texton histogram, specifically as follows:
The K-means method is used to cluster the texture filter response vectors of all pixels (x, y) in the training images; K = 32 is taken as the initial value in the clustering, giving 32 cluster centers in total; the texture filter response vectors corresponding to the 32 cluster centers are taken out as the textons, and are used as the labels of the 32 bins of the texture feature statistical histogram to build the texton histogram, as shown in Figure 6.
D, calculate the texture gradient of each pixel, specifically as follows:
First, steps A to C of step 112 are used to obtain the neighborhood gradient operators at the 3 scales. For a given scale, center the computation on a pixel (x, y) to be evaluated and multiply each element of the neighborhood gradient operator of that scale with the texture filter response vector corresponding to it, obtaining the neighborhood matrix group Neibor[<>] of this pixel; choose the straight line in the vertical direction (90°) as the dividing line to split the disk of the scale neighborhood into a left half-circle and a right half-circle, the left half-circle comprising sectors 0 to 7 and the right half-circle comprising sectors 8 to 15; the elements of the neighborhood matrix group Neibor[<>] corresponding to each half-circle form a texton histogram, as shown in Figure 7. H_1 denotes the histogram corresponding to the left half-circle region and H_2 the histogram corresponding to the right half-circle region, with the histogram bin labels given by step C. Proceeding in the same way as sub-step 2. of step D of step 112, the final texture gradient of each pixel to be evaluated in the training image is obtained, denoted TextureGradient(x, y).
Further, in step 12 multi-instance learning is introduced into image saliency detection to obtain the saliency detection result of the test image, specifically comprising step 121 and step 122:
Step 121, use the brightness, color and texture gradient features obtained by the method of step 11, combined with the multi-instance learning EMDD algorithm, to learn from the training set and obtain the trained saliency detection model. The concrete steps are as follows:
First, an over-segmentation method is used to partition the training image into regions, the minimum number of pixels contained in each region being 200; each region is treated as a bag, and each region is randomly sampled; the pixels sampled in the region are treated as instances, and the corresponding brightness gradient feature and color gradient feature vectors are extracted as the sampled instance feature vectors; according to the sampled instance feature vectors, the multi-instance learning EMDD algorithm is used to train the classifier, obtaining the trained saliency detection model;
Step 122, feed the test image into the trained saliency detection model to obtain the saliency detection result of the test image.
For each test image, the same process as in step 11 is used to preprocess the test image and obtain its brightness gradient and color gradient features; then the over-segmentation method is used to partition the test image into regions, the minimum number of pixels contained in each region being 200; each region is treated as a bag and randomly sampled, the pixels sampled in the region are treated as instances, and the corresponding brightness gradient feature and color gradient feature vectors are extracted as the sampled instance feature vectors; the trained saliency detection model obtained in step 121 then yields the saliency of every bag from the salient instance feature vectors, thereby giving the saliency detection result of the test image.
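A sketch of how a test image could be turned into bags and instances and scored with the learned model. SLIC superpixels stand in for the over-segmentation step, the sample count per region is an assumed value, and mil_model.predict_bag is a hypothetical interface representing the trained EMDD saliency detector of step 121.

```python
import numpy as np
from skimage.segmentation import slic   # assumed stand-in for the over-segmentation step

def image_to_bags(feature_stack, rgb_image, min_region_pixels=200, n_samples=50, seed=0):
    """Over-segment the image into regions (bags) and randomly sample pixels
    (instances) from each region; each instance is its gradient feature vector."""
    n_regions = max(1, (rgb_image.shape[0] * rgb_image.shape[1]) // min_region_pixels)
    labels = slic(rgb_image, n_segments=n_regions, start_label=0)
    rng = np.random.default_rng(seed)
    bags = []
    for region in np.unique(labels):
        ys, xs = np.nonzero(labels == region)
        idx = rng.choice(len(ys), size=min(n_samples, len(ys)), replace=False)
        bags.append(feature_stack[ys[idx], xs[idx], :])
    return labels, bags

def saliency_result(labels, bags, mil_model):
    """Paint every region with the bag saliency predicted by the learned MIL model."""
    sal = np.zeros(labels.shape, dtype=np.float32)
    for region, bag in zip(np.unique(labels), bags):
        sal[labels == region] = mil_model.predict_bag(bag)   # hypothetical interface
    return sal
```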
Further, step 2 specifically comprises the following steps:
Step 21, the saliency detection result of the image obtained in step 1 is used as the input of the graph-cut algorithm; the weight function shown in formula (3) is built from the saliency scores of the bags and the instance feature vectors, and the optimized graph-cut cost function shown in formula (4) is obtained;
In formula (3), w_ij denotes the visual feature similarity between the regions corresponding to instance bag i and instance bag j; Salien(i) and Salien(j) denote the normalized saliency values of region i and region j respectively; σ is the sensitivity parameter regulating the visual feature difference, with a value of 10 to 20; the similarity weight of region i to itself is 0; the similarity matrix W = {w_ij} is a symmetric matrix with zero diagonal, and w_ij ∈ [0, 1]; f_i and f_j denote the corresponding instance feature vectors in instance bags i and j respectively, i.e. the brightness gradient feature, color gradient feature and texture gradient feature of the image are combined into the 4-dimensional mixed vector Mixvector_i = {BrightnessGradient_i, ColorGradient_i, TextureGradient_i}, and Sim(f_i, f_j) = ||Mixvector_i − Mixvector_j||_2. In the graph-cut framework represented by formula (4), D is an N-dimensional diagonal matrix whose diagonal elements are determined by the similarity weights w_ij; U is the cutting-state vector, each component U_i of which denotes the cutting state of region i; the numerator of formula (4) represents the visual similarity between region i and region j, and the denominator represents the visual similarity within region i;
Step 22, adopt the agglomerative hierarchical clustering algorithm to solve for the cutting-state vector corresponding to the minimum eigenvalue of R(U), giving the optimal segmentation result of the image.
Here, the agglomerative hierarchical clustering algorithm refers to the methods of step 2 and step 3 of the method in patent application No. 201210257591.1.
Experimental verification
To verify the effectiveness of the method of the present invention, a linear-array CCD camera in a general road traffic information acquisition and detection system was used to collect night-time highway road image data as the research object. 200 road images containing vehicle targets and showing typical night characteristics were chosen; 100 of them were used as training images to learn the low-level visual features of vehicle targets driving on the highway at night, and the method of the present invention was used to segment the vehicle targets in the remaining 100 images. Part of the test results is shown in Figure 2, which gives the segmentation results of the spectral segmentation algorithm based on multi-scale graph decomposition and the segmentation results of the method of the present invention on the test images. The results are described as follows:
In Figure 2, sub-figures (a-1) to (a-5) are the original images, sub-figures (b-1) to (b-5) are the segmentation results of the spectral segmentation algorithm based on multi-scale graph decomposition, and sub-figures (c-1) to (c-5) are the results of the method of the present invention. The experimental comparison shows that the spectral segmentation algorithm based on multi-scale graph decomposition can obtain relatively complete vehicle targets when the vehicle has a high contrast with the road surface, such as the white vehicle target in the middle of figure (a-1) and the white vehicle target in the middle of figure (a-4), but essentially fails to segment weak-contrast vehicle targets. The method of the present invention, in contrast, can segment most of the vehicle targets in the night-time highway road images, and its segmentation of weak-contrast vehicle targets in particular is clearly better than that of the spectral segmentation algorithm based on multi-scale graph decomposition. Because the method of the present invention, combined with the multi-instance learning method, can quickly obtain the salient-region labels in the image, and the instance feature vectors in each instance bag contain both the low-level visual features reflecting the target information and the mid- and high-level features of the target contour, the comprehensive character of the image is already taken into account at the start of the coarsening stage, providing relatively accurate segmentation cues for subsequent processing; therefore, even when the transition between object and background is slow and the difference is minimal, i.e. when the contrast is weak, a good segmentation result can still be obtained.