Summary of the invention
To address the above shortcomings of the prior art, it is an object of the present invention to draw on the visual attention mechanism of human beings, combine it with a graph-theoretic image segmentation method, and establish a vehicle target segmentation model based on visual saliency features. The model can not only accurately segment the entire vehicle, with a certain adaptability and robustness under good environmental conditions, but can also relatively accurately segment weak-contrast vehicle targets in traffic scenes at night or under shadow occlusion.
A vehicle target segmentation method under weak contrast specifically comprises the following steps:
Step 1: a saliency model is built from the training images using a multi-instance learning method; the saliency model is then used to predict the bags and instances in the test image, yielding the saliency map of the test image. This specifically includes:
Step 11: preprocess the training images and extract the image brightness gradient feature, color gradient feature and texture gradient feature;
Step 12: incorporate multi-instance learning into image saliency detection, obtaining the saliency detection result of the test image.
Step 2: introduce the saliency of the test image into a graph-cut framework, optimize the graph-cut framework according to the instance feature vectors and the labels of the instance bags, and solve for the suboptimal solution of the graph-cut optimization, obtaining an accurate segmentation of the target.
Further, in step 11 the training images are preprocessed and the brightness gradient feature, color gradient feature and texture gradient feature are extracted, specifically including steps 111 to 114:
Step 111: perform color-space conversion of the training image and quantization preprocessing of each component, obtaining the normalized luminance component L and color components a and b;
Step 112: calculate the brightness gradient of each pixel in the luminance component L matrix;
Step 113: calculate the color gradient of each pixel in the color component a and color component b matrices, respectively;
Step 114: calculate the texture gradient of each pixel.
Further, step 111 is specifically as follows:
First, gamma correction is applied to the training image to realize a nonlinear adjustment of the image color components, and the training image is converted from the RGB color space to the Lab color space; then the luminance component L and the two color components a and b of the training image in the Lab color space are normalized, yielding the normalized luminance component L and color components a and b.
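As a minimal sketch of step 111, assuming a numpy image in [0, 1], a gamma value of 2.2 (not specified above) and simple min-max normalization; the RGB-to-Lab conversion itself would be supplied by an image library:

```python
import numpy as np

def gamma_correct(img, gamma=2.2):
    """Nonlinear adjustment of the color components (gamma value assumed)."""
    return np.power(np.clip(img, 0.0, 1.0), 1.0 / gamma)

def normalize(channel):
    """Min-max normalize one component (L, a, or b) to [0, 1]."""
    lo, hi = channel.min(), channel.max()
    return (channel - lo) / (hi - lo) if hi > lo else np.zeros_like(channel)

rgb = np.random.default_rng(0).random((4, 4, 3))  # stand-in training image
corrected = gamma_correct(rgb)
# In the method proper, `corrected` would be converted from RGB to Lab here
# (e.g., via an image library); each resulting component is then normalized:
L_norm = normalize(corrected[..., 0])
```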
Further, step 112 specifically includes steps A-D:
A. Build the weight matrices Wights< > of the 3 scales;
B. Build the index map matrices Slice_map< > of the 3 scales. The index map matrix Slice_map< > of each scale has the same dimensions as the weight matrix Wights< > of that scale; that is, each index map matrix Slice_map< > is also a square matrix with 2r+1 rows and columns. Eight directions (0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°) are chosen to divide the matrix into 16 regions, and the value of the elements in each region equals that region's number 0~15;
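The index map of step B can be sketched with numpy as follows; the sector numbering is assumed here to start at the 0° direction and increase with angle, since the original fixes only that the 8 directions divide the matrix into 16 regions numbered 0~15:

```python
import numpy as np

def build_slice_map(r):
    """(2r+1)x(2r+1) index map: each element holds its sector number 0-15.
    The 8 directions split the plane into 16 half-open 22.5-degree sectors;
    the numbering origin and direction of increase are assumptions."""
    coords = np.arange(2 * r + 1) - r            # offsets from the central element
    yy, xx = np.meshgrid(coords, coords, indexing="ij")
    ang = np.degrees(np.arctan2(yy, xx)) % 360.0  # quadrant-aware angle in [0, 360)
    return (ang // 22.5).astype(int) % 16

slice_map = build_slice_map(3)   # smallest of the 3 scales, r = 3
```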
C. Multiply each index map matrix Slice_map< > element-wise with the weight matrix Wights< > of the corresponding scale to obtain the matrix of that scale, i.e., the neighborhood gradient operator;
D. Using the neighborhood gradient operator, calculate the brightness gradient of a pixel of interest in the luminance component L matrix.
Further, step A is specifically as follows:
The weight matrices Wights< > of the 3 scales are built separately. Each weight matrix Wights< > is a square matrix whose row and column counts both equal 2r+1. The elements of Wights< > are either 0 or 1: the elements equal to 1 are distributed within the disk of radius r centered on the central element (r+1, r+1) of the square matrix, forming the inscribed circle of the square matrix, and the remaining elements of the square matrix are 0. The 3 scales are r=3, r=5 and r=10, respectively.
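A minimal numpy sketch of step A; whether the boundary of the disk (distance exactly r) is included is an assumption:

```python
import numpy as np

def build_wights(r):
    """(2r+1)x(2r+1) weight matrix: 1 inside the inscribed disk of radius r
    centered on the central element, 0 elsewhere (boundary inclusion assumed)."""
    coords = np.arange(2 * r + 1) - r            # offsets from the central element
    yy, xx = np.meshgrid(coords, coords, indexing="ij")
    return (yy ** 2 + xx ** 2 <= r ** 2).astype(int)

wights = {r: build_wights(r) for r in (3, 5, 10)}  # the 3 scales
```

The neighborhood gradient operator of step C is then simply `wights[r] * slice_map_of_scale_r` element-wise.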
Further, step D is specifically as follows:
1. For a given scale, center the neighborhood gradient operator of that scale on a pixel of interest in the luminance component L matrix obtained in step 111, and take the element-wise product of the operator with each luminance component within the neighborhood of the pixel, obtaining the matrix Neibor< > over the neighborhood of the pixel. The straight line of the vertical direction (90°) is chosen as the dividing line, splitting the disk in the neighborhood gradient operator into a left semicircle and a right semicircle; the left semicircle comprises sectors 0 to 7 and the right semicircle comprises sectors 8 to 15. The elements of Neibor< > corresponding to each semicircle form a histogram, which is normalized; the two histograms are denoted Slice_hist1< > and Slice_hist2< >, respectively. H1 denotes the histogram of the left half-disk region, H2 denotes the histogram of the right half-disk region, and i is the bin index of the histograms, defined on [0, 24], i.e., the brightness range.
2. The difference between the two normalized histograms is computed by the chi-square distance shown in formula (1), giving the brightness gradient of the pixel of interest in the vertical direction at the given scale.
After the brightness gradient in the vertical direction at a given scale has been computed, the straight lines of the other directions are chosen in turn as dividing lines, giving the brightness gradients of the pixel in all the other directions at that scale; the brightness gradients of the pixel in all directions at the other scales are then computed in the same manner as step D. Once the brightness gradients of the pixel in all directions at all scales have been computed, the final brightness gradient of the pixel is computed by formula (2):
f(x, y, r, n_ori; r = 3, 5, 10; n_ori = 1, 2, …, 8) → BrightnessGradient(x, y)    (2)
In the formula, f is a mapping function, (x, y) is any pixel of interest, r denotes the chosen scale, and n_ori denotes the chosen direction; BrightnessGradient(x, y) is the final brightness gradient of pixel (x, y). The mapping rule of f is to select, for each direction, the maximum brightness gradient value over the 3 scales as the brightness gradient value in that direction, and then to sum the brightness gradients over the 8 directions to obtain the final brightness gradient of pixel (x, y).
Further, step 114 is specifically as follows:
A. Build the multi-scale texture filter bank Filters(x,y)(nf, filter, r, θ), where nf denotes the number of filters, filter denotes the set of filter types, r denotes the scale, and θ denotes the chosen direction;
B. Calculate the texture filter response vector of each pixel in the training image, i.e., Tex(x, y) = (fil1, fil2, fil3, …, filnf), specifically as follows:
The grayscale image Igray(x, y) is convolved with the constructed multi-scale texture filter bank Filters(x,y)[nf, filter, r, θ] within the neighborhood of the corresponding scale centered on pixel (x, y), yielding the texture filter response vector of pixel (x, y). For example, at scale r = 5 the convolution is performed within the 11*11 neighborhood centered on a pixel, i.e., Igray(x, y)*Filters(nf, filter, r, θ), where nf = 17, filter = (filcs, fil1, fil2), r = 5, and θ = 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°; this gives the texture filter response vector Tex(x, y) = (fil1, fil2, fil3, …, fil17) of the pixel.
By the same method, the texture feature vectors of the corresponding-scale neighborhoods centered on a pixel (x, y) are computed for r = 5, r = 10 and r = 20 respectively, yielding the full texture filter response vector Tex(x, y) = (fil1, fil2, fil3, …, fil51) of the pixel.
C. Build the texton histogram, specifically as follows:
The texture filter response vectors of all pixels (x, y) in the training image are clustered by the K-means method with K = 32 as the initial value, yielding 32 cluster centers; the texture filter response vectors corresponding to the 32 cluster centers are taken out as textons and labeled as the 32 bins of the texture feature statistical histogram, building the texton histogram.
D. Calculate the texture gradient of each pixel, specifically as follows:
First, steps A-C of step 112 are applied to obtain the neighborhood gradient operators at the 3 scales. For a given scale, centered on a pixel of interest (x, y), each element of the neighborhood gradient operator of that scale is multiplied by the corresponding texture filter response vector, yielding the neighborhood matrix group Neibor[< >] of the pixel. The straight line of the vertical direction (90°) is chosen as the dividing line, splitting the disk of the scale neighborhood into a left semicircle and a right semicircle; the left semicircle comprises sectors 0 to 7 and the right semicircle comprises sectors 8 to 15. The elements of the neighborhood matrix group Neibor[< >] corresponding to each semicircle form a texton histogram; H1 denotes the histogram of the left half-disk region, H2 denotes the histogram of the right half-disk region, and the bins of the histograms are labeled as given by step C. Identically to item 2 of step D of step 112, the final texture gradient of each pixel of interest in the training image is obtained, denoted TextureGradient(x, y).
Further, step A is specifically as follows:
The training image is converted to a grayscale image, denoted Igray(x, y), and the gray component of each pixel (x, y) of Igray(x, y) is normalized. Three kinds of filters are chosen: the second-order Gaussian derivative filter, the filter obtained from it by Hilbert transform, and the center-surround filter. A multi-scale texture filter bank is built from 8 directions and 3 scales, denoted Filters(x,y)[nf, filter, r, θ], where nf denotes the number of filters, filter denotes the set of filter types, r denotes the scale, and θ denotes the chosen direction; nf = 51, filter = (filcs, fil1, fil2), r = 5, 10, 20, and θ = 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°. The multi-scale texture filter bank Filters(x,y)[nf, filter, r, θ] is as shown in formulas 5, 6 and 7:
The second-order Gaussian derivative filter for the 8 directions and 3 scales:
The filter obtained by Hilbert transform of the second-order Gaussian derivative filter for the 8 directions and 3 scales:
f2(x, y) = Hilbert(f1(x, y))    (6)
The center-surround filter for the 3 scales:
Gaussian_cs< > = m_surround< > − m_center< >    (7)
The standard deviations σ of the surround filter, the center filter, the second-order Gaussian derivative filter and its Hilbert-transformed filter are, respectively, 2 and
Further, in step 12 multi-instance learning is introduced into image saliency detection to obtain the saliency detection result of the test image, specifically including steps 121 and 122:
Step 121: using the brightness, color and texture gradient features obtained by the method of step 11, the training set is learned with the multi-instance learning EM-DD algorithm, yielding a trained saliency detection model;
Step 122: the test image is substituted into the trained saliency detection model to obtain the saliency detection result of the test image.
Further, step 2 specifically comprises the following steps:
Step 21: the saliency detection result of the image obtained in step 1 is used as the input of the graph-cut algorithm; the weight function shown in formula (3) is built from the saliency labels of the bags and the instance feature vectors, and the optimized graph-cut cost function shown in formula (4) is obtained.
In formula (3), wij denotes the visual feature similarity of the regions corresponding to instance bags i and j; Salien(i) and Salien(j) denote the normalized saliency values of region i and region j, respectively; σ is a parameter regulating the sensitivity to visual feature differences, taking a value of 10 to 20; the weight of a region i with itself is 0. The similarity matrix W = {wij} is a symmetric matrix with zero diagonal, and wij ∈ [0, 1]. fi and fj denote the instance feature vectors corresponding to instance bags i and j, i.e., the image brightness gradient feature, color gradient feature and texture gradient feature combined into the 4-dimensional mixed vector Mixvectori = {BrightnessGradienti, ColorGradienti, TextureGradienti}, with Sim(fi, fj) = ||Mixvectori − Mixvectorj||2. In the graph-cut framework represented by formula (4), D is an N-dimensional diagonal matrix, and U is the cutting-state vector, each component Ui of which denotes the cutting state of region i; the numerator of formula (4) represents the visual similarity between region i and region j, and the denominator represents the visual similarity within region i.
Step 22: using the agglomerative hierarchical clustering algorithm, solve for the cutting-state eigenvector corresponding to the minimum value of R(U), thereby obtaining the optimal segmentation result of the image.
Embodiment
As shown in Figure 1, the vehicle target segmentation method under weak contrast provided by the present invention specifically comprises the following steps:
Step 1: night expressway road images are chosen as training images, and a saliency model is built from the training images by the multi-instance learning method; the saliency model is then used to predict the bags and instances in the test image, yielding the saliency map of the test image.
Step 2: the saliency of the test image is introduced into a graph-cut framework, the graph-cut framework is optimized according to the instance feature vectors and the labels of the instance bags, and the suboptimal solution of the graph-cut optimization is solved by the agglomerative hierarchical clustering algorithm, obtaining the accurate segmentation of the target.
Further, step 1 specifically includes steps 11 and 12:
Step 11: preprocess the training images and extract the image brightness gradient feature, color gradient feature and texture gradient feature;
Step 12: incorporate multi-instance learning into image saliency detection, obtaining the saliency detection result of the test image.
Further, in step 11 the training images are preprocessed and the brightness gradient feature, color gradient feature and texture gradient feature are extracted, specifically including steps 111 to 114:
Step 111: perform color-space conversion of the training image and quantization preprocessing of each component, obtaining the normalized luminance component L and color components a and b, specifically as follows:
First, gamma correction is applied to the training image to realize a nonlinear adjustment of the image color components, and the training image is converted from the RGB color space to the Lab color space; then the luminance component L and the two color components a and b of the training image in the Lab color space are normalized, yielding the normalized luminance component L and color components a and b.
After the preprocessing of the training images is completed, the present invention analyzes the vehicle shadow features in the training images to provide a theoretical basis for the subsequent choice of gradient features. The training images are all road images with typical night characteristics. Owing to night driving, the vehicle targets in every training image suffer shadow interference: vehicle shadows enlarge and deform the car-body region, and may even cause several vehicles to appear connected, severely affecting the accurate segmentation of the car body and the extraction of car-body information. The illumination range and intensity of vehicle lights at night also affect target segmentation to a certain extent; to obtain a good segmentation, the shadows formed by illumination must be eliminated.
A shadow is a physical phenomenon caused by an object in the scene blocking the light emitted by a light source, and includes self-shadow and cast shadow. Self-shadow is the part of the object itself that appears dark because the object blocks the light source, causing uneven illumination; cast shadow refers to the shadow an object projects onto other surfaces (such as the road). From a large number of training images containing vehicles driving on expressways at night and their shadows, the features distinguishing shadows from vehicle targets are found to be mainly:
(1) The color and texture of the road surface covered by a shadow do not change significantly.
(2) The brightness of a cast shadow is generally lower than the background brightness, with a luminance gain relative to the background region that is a value less than 1; under the interference of vehicle high-beam headlights, however, the opposite holds.
(3) The gray-value variation inside the shadow region is not drastic, appearing flat, or locally flat, in the gradient.
Based on the above analysis, the present invention uses the brightness gradient feature, color gradient feature and texture gradient feature of the training images to learn the saliency model.
Step 112: the brightness gradient of each pixel in the luminance component L matrix is calculated, specifically including steps A-D:
A. Build the weight matrices Wights< > of the 3 scales, specifically as follows:
The weight matrices Wights< > of the 3 scales are built separately. Each weight matrix Wights< > is a square matrix whose row and column counts both equal 2r+1. The elements of Wights< > are either 0 or 1: the elements equal to 1 are distributed within the disk of radius r centered on the central element (r+1, r+1) of the square matrix, forming its inscribed circle, and the remaining elements of the square matrix are 0. In the present invention, for the 3 scales r=3, r=5 and r=10, the corresponding weight matrices Wights< > are as follows:
B. Build the index map matrices Slice_map< > of the 3 scales. The index map matrix Slice_map< > of each scale has the same dimensions as the weight matrix Wights< > of that scale; that is, each index map matrix Slice_map< > is also a square matrix with 2r+1 rows and columns. Eight directions (0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°) are chosen to divide the matrix into 16 regions, and the value of the elements in each region equals that region's number 0~15. The purpose of building the index map matrices Slice_map< > is to realize fast positioning of the subregions. In the present invention, the 3 index map matrices Slice_map< > are as follows:
C. Multiply each index map matrix Slice_map< > element-wise with the weight matrix Wights< > of the corresponding scale to obtain the matrix of that scale, i.e., the neighborhood gradient operator. The neighborhood gradient operators at the 3 scales are as follows:
D. Using the neighborhood gradient operator, calculate the brightness gradient of a pixel of interest in the luminance component L matrix, specifically as follows:
1. For a given scale, center the neighborhood gradient operator of that scale on a pixel of interest in the luminance component L matrix obtained in step 111, and take the element-wise product of the operator with each luminance component within the neighborhood of the pixel, obtaining the matrix Neibor< > over the neighborhood of the pixel. The straight line of the vertical direction (90°) is chosen as the dividing line, splitting the disk in the neighborhood gradient operator into a left semicircle and a right semicircle; the left semicircle comprises sectors 0 to 7 and the right semicircle comprises sectors 8 to 15. The elements of Neibor< > corresponding to each semicircle form a histogram, which is normalized; the two histograms are denoted Slice_hist1< > and Slice_hist2< >, respectively, as shown in Figure 4. H1 denotes the histogram of the left half-disk region, H2 denotes the histogram of the right half-disk region, and i is the bin index of the histograms, defined on [0, 24], i.e., the brightness range.
2. The difference between the two normalized histograms is computed by the chi-square distance shown in formula (1), giving the brightness gradient of the pixel of interest in the vertical direction at the given scale.
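Formula (1) is not reproduced in this text; with H1, H2 and bin index i as defined above, the chi-square distance between two normalized histograms would presumably read:

```latex
\chi^{2}(H_{1},H_{2}) \;=\; \frac{1}{2}\sum_{i=0}^{24}\frac{\bigl(H_{1}(i)-H_{2}(i)\bigr)^{2}}{H_{1}(i)+H_{2}(i)} \tag{1}
```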
After the brightness gradient in the vertical direction at a given scale has been computed, as shown in Figure 5, the straight lines of the other directions are chosen in turn as dividing lines, giving the brightness gradients of the pixel in all the other directions at that scale; the brightness gradients of the pixel in all directions at the other scales are then computed in the same manner as step D. Once the brightness gradients of the pixel in all directions at all scales have been computed, the final brightness gradient of the pixel is computed by formula (2):
f(x, y, r, n_ori; r = 3, 5, 10; n_ori = 1, 2, …, 8) → BrightnessGradient(x, y)    (2)
In the formula, f is a mapping function, (x, y) is any pixel of interest, r denotes the chosen scale, and n_ori denotes the chosen direction; BrightnessGradient(x, y) is the final brightness gradient of pixel (x, y). The mapping rule of f is to select, for each direction, the maximum brightness gradient value over the 3 scales as the brightness gradient value in that direction, and then to sum the brightness gradients over the 8 directions to obtain the final brightness gradient of pixel (x, y).
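The combining rule of formula (2), maximum over the 3 scales per direction followed by a sum over the 8 directions, can be sketched as follows; the neighborhood extraction and sector bookkeeping are simplified here into a precomputed table of half-disk histogram pairs, and the chi-square form of formula (1) is an assumption:

```python
import numpy as np

def chi_square(h1, h2, eps=1e-12):
    """Chi-square distance between two normalized histograms (assumed form of formula (1))."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def brightness_gradient(hists):
    """hists[r][d] = (left_hist, right_hist) for scale r and direction d.
    Formula (2): per direction take the max over the 3 scales, then sum over the 8 directions."""
    scales = list(hists)
    n_dir = len(hists[scales[0]])
    per_dir = [max(chi_square(*hists[r][d]) for r in scales) for d in range(n_dir)]
    return sum(per_dir)

rng = np.random.default_rng(1)
def rand_hist():
    h = rng.random(25)                 # 25 bins: i defined on [0, 24]
    return h / h.sum()

hists = {r: [(rand_hist(), rand_hist()) for _ in range(8)] for r in (3, 5, 10)}
g = brightness_gradient(hists)         # final brightness gradient of one pixel
```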
Step 113: the color gradients of each pixel in the color component a and color component b matrices are calculated respectively, specifically as follows:
The calculation of the color gradient is similar to that of the brightness gradient, except that the color gradient features are computed for the two color components, i.e., the color components a and b of the Lab color space. The difference from the brightness gradient calculation is that the 3 chosen scales are r=5, r=10 and r=20; accordingly, the sizes of the corresponding weight matrices and index map matrices are 11*11, 21*21 and 41*41, respectively. The color gradients of the two color components are computed by the same method as the brightness gradient, yielding the final color gradient of each pixel of interest in the color component a and b matrices.
Step 114: the texture gradient of each pixel is calculated, specifically as follows:
A. Build the multi-scale texture filter bank Filters(x,y)(nf, filter, r, θ), where nf denotes the number of filters, filter denotes the set of filter types, r denotes the scale, and θ denotes the chosen direction, specifically as follows:
The training image is converted to a grayscale image, denoted Igray(x, y), and the gray component of each pixel (x, y) of Igray(x, y) is normalized. Three kinds of filters are chosen: the second-order Gaussian derivative filter (denoted fil1< >), the filter obtained from it by Hilbert transform (denoted fil2< >), and the center-surround filter (denoted Gaussian_cs< >). A multi-scale texture filter bank is built from 8 directions and 3 scales, denoted Filters(x,y)[nf, filter, r, θ], where nf denotes the number of filters, filter denotes the set of filter types, r denotes the scale, and θ denotes the chosen direction; nf = 51, filter = (filcs, fil1, fil2), r = 5, 10, 20, and θ = 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°. The multi-scale texture filter bank Filters(x,y)[nf, filter, r, θ] is as shown in formulas 5, 6 and 7:
The second-order Gaussian derivative filter for the 8 directions and 3 scales:
The filter obtained by Hilbert transform of the second-order Gaussian derivative filter for the 8 directions and 3 scales:
f2(x, y) = Hilbert(f1(x, y))    (6)
The center-surround filter for the 3 scales:
Gaussian_cs< > = m_surround< > − m_center< >    (7)
In the filter bank Filters(x,y)[nf, filter, r, θ], the center-surround filter has no directionality and is the difference of the surround filter and the center filter; both the surround filter and the center filter are second-order Gaussian derivative filters. The standard deviations σ of the surround filter, the center filter, the second-order Gaussian derivative filter and its Hilbert-transformed filter are, respectively, 2 and
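As an illustrative sketch of one member of the bank, an oriented second-order Gaussian derivative kernel f1 can be generated with numpy. The exact parameterization (kernel support, differentiation across the orientation axis, no elongation) is an assumption; the original fixes only σ, the 8 directions and the 3 scales:

```python
import numpy as np

def gauss_second_deriv_kernel(sigma, theta_deg, half=10):
    """Oriented second-order Gaussian derivative filter f1 for one direction
    and one scale. Differentiation is taken along the axis perpendicular to
    theta (an assumption); the kernel is made zero-mean as a derivative
    filter should be."""
    ax = np.arange(-half, half + 1)
    yy, xx = np.meshgrid(ax, ax, indexing="ij")
    t = np.radians(theta_deg)
    u = xx * np.cos(t) + yy * np.sin(t)        # coordinate along the orientation
    v = -xx * np.sin(t) + yy * np.cos(t)       # coordinate across the orientation
    g = np.exp(-(u ** 2 + v ** 2) / (2 * sigma ** 2))
    k = (v ** 2 / sigma ** 4 - 1 / sigma ** 2) * g   # d^2/dv^2 of the Gaussian
    return k - k.mean()

k0 = gauss_second_deriv_kernel(2.0, 0.0)       # sigma = 2, direction 0 degrees
```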
B. Calculate the texture filter response vector of each pixel in the training image, i.e., Tex(x, y) = (fil1, fil2, fil3, …, filnf), specifically as follows:
The grayscale image Igray(x, y) is convolved with the constructed multi-scale texture filter bank Filters(x,y)[nf, filter, r, θ] within the neighborhood of the corresponding scale centered on pixel (x, y), yielding the texture filter response vector of pixel (x, y). For example, at scale r = 5 the convolution is performed within the 11*11 neighborhood centered on a pixel, i.e., Igray(x, y)*Filters(nf, filter, r, θ), where nf = 17, filter = (filcs, fil1, fil2), r = 5, and θ = 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°; this gives the texture filter response vector Tex(x, y) = (fil1, fil2, fil3, …, fil17) of the pixel.
By the same method, the texture feature vectors of the corresponding-scale neighborhoods centered on a pixel (x, y) are computed for r = 5, r = 10 and r = 20 respectively, yielding the full texture filter response vector Tex(x, y) = (fil1, fil2, fil3, …, fil51) of the pixel.
C. Build the texton histogram, specifically as follows:
The texture filter response vectors of all pixels (x, y) in the training image are clustered by the K-means method with K = 32 as the initial value, yielding 32 cluster centers; the texture filter response vectors corresponding to the 32 cluster centers are taken out as textons and labeled as the 32 bins of the texture feature statistical histogram, building the texton histogram, as shown in Figure 6.
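The texton step can be sketched with a plain K-means over the 51-dimensional response vectors; the initialization strategy and iteration count are assumptions, and the rows of `textons` play the role of the 32 histogram bins:

```python
import numpy as np

def kmeans(X, k=32, iters=20, seed=0):
    """Plain K-means: cluster the texture filter response vectors into k centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random init (assumed)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each response vector to its nearest center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers, labels

X = np.random.default_rng(2).random((500, 51))  # 500 pixels x 51 filter responses
textons, labels = kmeans(X, k=32)
# each pixel is then labeled by its nearest texton, i.e. one of the 32 bins
```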
D. Calculate the texture gradient of each pixel, specifically as follows:
First, steps A-C of step 112 are applied to obtain the neighborhood gradient operators at the 3 scales. For a given scale, centered on a pixel of interest (x, y), each element of the neighborhood gradient operator of that scale is multiplied by the corresponding texture filter response vector, yielding the neighborhood matrix group Neibor[< >] of the pixel. The straight line of the vertical direction (90°) is chosen as the dividing line, splitting the disk of the scale neighborhood into a left semicircle and a right semicircle; the left semicircle comprises sectors 0 to 7 and the right semicircle comprises sectors 8 to 15. The elements of the neighborhood matrix group Neibor[< >] corresponding to each semicircle form a texton histogram, as shown in Figure 7; H1 denotes the histogram of the left half-disk region, H2 denotes the histogram of the right half-disk region, and the bins of the histograms are labeled as given by step C. Identically to item 2 of step D of step 112, the final texture gradient of each pixel of interest in the training image is obtained, denoted TextureGradient(x, y).
Further, in step 12 multi-instance learning is introduced into image saliency detection to obtain the saliency detection result of the test image, specifically including steps 121 and 122:
Step 121: using the brightness, color and texture gradient features obtained by the method of step 11, the training set is learned with the multi-instance learning EM-DD algorithm, yielding a trained saliency detection model. The specific steps are as follows:
First, region segmentation is performed on the training image by an over-segmentation method, with each region containing a minimum of 200 pixels. Each region is taken as a bag and is randomly sampled; the sampled pixels within a region are taken as instances, and the corresponding brightness gradient and color gradient feature vectors are extracted as the sampled-instance feature vectors. According to the sampled-instance feature vectors, the classifier is trained by the multi-instance learning EM-DD algorithm, yielding the trained saliency detection model.
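The bag/instance construction of step 121 can be sketched as follows, with a fixed grid standing in for the over-segmentation (the method proper requires regions of at least 200 pixels) and random per-region pixel sampling producing the instances; region shape, sample count and feature stacking are illustrative assumptions:

```python
import numpy as np

def build_bags(features, region_labels, n_samples=20, seed=0):
    """Each region is one bag; randomly sampled pixels of the region are its
    instances, each carrying its gradient feature vector."""
    rng = np.random.default_rng(seed)
    bags = {}
    for r in np.unique(region_labels):
        ys, xs = np.nonzero(region_labels == r)
        idx = rng.choice(len(ys), size=min(n_samples, len(ys)), replace=False)
        bags[int(r)] = features[ys[idx], xs[idx]]   # instances: (n_samples, n_features)
    return bags

h, w = 40, 40
feats = np.random.default_rng(3).random((h, w, 2))  # brightness + color gradient per pixel
# 2x2 grid of 20x20 regions as a stand-in for the over-segmentation:
regions = (np.arange(h)[:, None] // 20) * 2 + (np.arange(w)[None, :] // 20)
bags = build_bags(feats, regions)
```

Each bag's instance matrix would then be fed, together with the bag's saliency label, to the EM-DD trainer.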
Step 122: the test image is substituted into the trained saliency detection model, obtaining the saliency detection result of the test image.
Each test image is preprocessed by the same process as step 11, obtaining its brightness gradient feature and color gradient feature; the test image is then over-segmented into regions, each containing a minimum of 200 pixels. Each region is taken as a bag and randomly sampled; the sampled pixels within a region are taken as instances, and the corresponding brightness gradient and color gradient feature vectors are extracted as the sampled-instance feature vectors. Using the trained saliency detection model obtained in step 121, the saliency of each bag containing salient instance feature vectors is obtained, giving the saliency detection result of the test image.
Further, step 2 specifically comprises the following steps:
Step 21: the saliency detection result of the image obtained in step 1 is used as the input of the graph-cut algorithm; the weight function shown in formula (3) is built from the saliency labels of the bags and the instance feature vectors, and the optimized graph-cut cost function shown in formula (4) is obtained.
In formula (3), wij denotes the visual feature similarity of the regions corresponding to instance bags i and j; Salien(i) and Salien(j) denote the normalized saliency values of region i and region j, respectively; σ is a parameter regulating the sensitivity to visual feature differences, taking a value of 10 to 20; the weight of a region i with itself is 0. The similarity matrix W = {wij} is a symmetric matrix with zero diagonal, and wij ∈ [0, 1]. fi and fj denote the instance feature vectors corresponding to instance bags i and j, i.e., the image brightness gradient feature, color gradient feature and texture gradient feature combined into the 4-dimensional mixed vector Mixvectori = {BrightnessGradienti, ColorGradienti, TextureGradienti}, with Sim(fi, fj) = ||Mixvectori − Mixvectorj||2. In the graph-cut framework represented by formula (4), D is an N-dimensional diagonal matrix, and U is the cutting-state vector, each component Ui of which denotes the cutting state of region i; the numerator of formula (4) represents the visual similarity between region i and region j, and the denominator represents the visual similarity within region i.
Step 22: using the agglomerative hierarchical clustering algorithm, solve for the cutting-state eigenvector corresponding to the minimum value of R(U), thereby obtaining the optimal segmentation result of the image.
Here, the agglomerative hierarchical clustering algorithm refers to steps 2 and 3 of the method of patent application No. 201210257591.1.
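Formulas (3) and (4) are not reproduced in this text. A sketch consistent with the description of step 21, a weight wij built from feature similarity and normalized saliency with sensitivity σ, zero diagonal, wij ∈ [0, 1], and a ratio-style cost R(U) whose numerator measures between-region similarity and whose denominator within-region similarity, might look like the following; the Gaussian form of the weight and the exact ratio are assumptions, not the patent's formulas:

```python
import numpy as np

def weight_matrix(mixvec, salien, sigma=15.0):
    """w_ij: visual-feature similarity of the regions of bags i and j, modulated
    by their normalized saliency (functional form assumed); diagonal set to 0."""
    diff = mixvec[:, None, :] - mixvec[None, :, :]
    sim = np.linalg.norm(diff, axis=-1)                 # Sim(f_i, f_j)
    W = np.exp(-sim / sigma) * np.minimum.outer(salien, salien)
    np.fill_diagonal(W, 0.0)                            # a region's weight with itself is 0
    return W

def ratio_cost(W, U):
    """R(U): between-region similarity over within-region similarity for cut state U
    (Rayleigh-quotient-style form, assumed)."""
    D = np.diag(W.sum(1))                               # N-dimensional diagonal matrix
    return (U @ (D - W) @ U) / (U @ D @ U)

rng = np.random.default_rng(4)
mixvec = rng.random((6, 4))            # 4-dimensional mixed vector per region
salien = rng.random(6)                 # normalized saliency per region
W = weight_matrix(mixvec, salien)
R = ratio_cost(W, np.array([1.0, 1, 0, 0, 1, 0]))   # one candidate cut state
```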
Experimental verification
To verify the validity of the method of the present invention, night expressway road image data collected by the linear-array CCD camera of a general road traffic information acquisition and detection system were taken as the research object. 200 road images containing vehicle targets and having typical night characteristics were chosen; 100 images were used as training images to learn the low-level visual features of vehicles driving on expressways at night, and the method of the present invention was applied to segment the vehicle targets in the remaining 100 images. Part of the experimental results are shown in Figure 2, which gives both the segmentation results of the spectral segmentation algorithm based on multi-scale graph decomposition and the segmentation results of the method of the present invention on the test images, described as follows:
In Figure 2, subfigures (a-1) to (a-5) are the original images, subfigures (b-1) to (b-5) are the segmentation results of the spectral segmentation algorithm based on multi-scale graph decomposition, and subfigures (c-1) to (c-5) are those of the method of the present invention. Comparing the experimental results, it can be seen that the spectral segmentation algorithm based on multi-scale graph decomposition can obtain relatively complete vehicle targets for vehicles with higher contrast relative to the road surface, such as the white vehicle in the middle of figure (a-1) and the white vehicle in the middle of figure (a-4), but it essentially fails for weak-contrast vehicle targets; the present algorithm can segment out most vehicle targets in night expressway road images, and its segmentation of weak-contrast vehicle targets in particular is clearly better than that of the spectral segmentation algorithm based on multi-scale graph decomposition. This is because the method of the present invention, combined with the multi-instance learning method, can quickly obtain the salient-region labels in the image, and the instance feature vectors in each instance bag contain both the low-level visual features reflecting the target information and the mid- and high-level features of the target contour; by considering the comprehensive character of the image from the coarse stage onward, it provides an accurate basis for the subsequent segmentation, so that even when the transition between target and background boundary is gradual and their difference is minimal, i.e., when contrast is weak, a good segmentation result can still be obtained.