CN105005989A - Vehicle target segmentation method under weak contrast - Google Patents

Vehicle target segmentation method under weak contrast Download PDF

Info

Publication number
CN105005989A
CN105005989A (application CN201510374899.8A)
Authority
CN
China
Prior art keywords
pixel
matrix
filter
gradient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510374899.8A
Other languages
Chinese (zh)
Other versions
CN105005989B (en)
Inventor
刘占文
赵祥模
房建武
段宗涛
王润民
郝茹茹
戚秀珍
周洲
周经美
林杉
康俊民
王姣姣
徐江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changan University
Original Assignee
Changan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changan University
Priority to CN201510374899.8A, patent CN105005989B (en)
Publication of CN105005989A (en)
Application granted
Publication of CN105005989B (en)
Legal status: Active
Anticipated expiration

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior

Abstract

The present invention discloses a vehicle target segmentation method under weak contrast. The method comprises: step 1: building a saliency model from training images by multiple-instance learning, then using the saliency model to predict the bags and instances in a test image to obtain the saliency map of the test image; step 2: introducing the saliency of the test image into a graph-cut segmentation framework, optimizing the framework according to the instance feature vectors and the bag labels, and solving a suboptimal solution of the graph-cut optimization to obtain an accurate segmentation of the target. The method draws on the human visual attention mechanism and combines it with graph-theoretic image segmentation to establish a vehicle target segmentation model based on visual saliency features. It not only segments a complete vehicle accurately under good environmental conditions, but also has a degree of adaptability and robustness, so that weak-contrast vehicle targets in traffic scenes can be segmented accurately at night and under shadow occlusion.

Description

Vehicle target segmentation method under weak contrast
Technical field
The invention belongs to the field of image processing and relates to an image segmentation method, specifically a vehicle target segmentation method under weak contrast.
Background technology
With the progress of science and technology, intelligent transportation systems (ITS) have become an important means of improving the intelligence of traffic and the level of traffic management. In particular, with the rapid development of computer and sensor technology, machine-vision-based vehicle detection and monitoring has become an important component of ITS and plays an important role in traffic administration, traffic information collection, emergency rescue, and so on. Segmenting and extracting the vehicle targets of interest from traffic monitoring images is the key technology in machine-vision-based vehicle detection and monitoring systems; the precision of the segmentation result directly affects the accuracy of vehicle detection and underlies subsequent processing such as vehicle classification, recognition, and tracking. Many classic image segmentation methods are currently used for vehicle target segmentation, such as region-based, edge-based, feature-space-based, and threshold-based methods; all of them suffer from the drawbacks that different threshold settings produce very different segmentation results and segmentation quality, and that robustness is poor. In recent years, graph-theoretic image segmentation, which has attracted wide scholarly attention over the past 30 years, has become a newer research focus in the field of image segmentation. It maps image elements onto a graph and segments the graph to obtain the region of interest; examples are the spectral segmentation method based on multiscale graph decomposition and the graph segmentation method that directly employs brightness and color features (i.e., the method of the patent with application number 201210257591.1). When applied to vehicle target segmentation, these methods segment vehicle targets fairly well in daytime with daylight shadows, but their environmental adaptability remains poor; in particular, for weak-contrast target segmentation in complex traffic scenes, night scenes, or inclement weather (dense fog, sleet, etc.), it is difficult to obtain satisfactory segmentation results.
Summary of the invention
In view of the above deficiencies of the prior art, the object of the invention is to draw on the visual attention mechanism of humans and, in combination with graph-theoretic image segmentation, establish a vehicle target segmentation model based on visual saliency features that not only segments a complete vehicle accurately under good environmental conditions, but also has a degree of adaptability and robustness, so that weak-contrast vehicle targets in traffic scenes can be segmented accurately at night and under shadow occlusion.
A vehicle target segmentation method under weak contrast specifically comprises the following steps:
Step 1: build a saliency model from the training images using multiple-instance learning; then use the saliency model to predict the bags and instances in the test image, obtaining the saliency map of the test image. Specifically:
Step 11: preprocess the training images and extract brightness-gradient, color-gradient, and texture-gradient features;
Step 12: introduce multiple-instance learning into image saliency detection to obtain the saliency detection result of the test image;
Step 2: introduce the saliency of the test image into a graph-cut framework, optimize the graph-cut framework according to the instance feature vectors and the bag labels, and solve a suboptimal solution of the graph-cut optimization to obtain an accurate segmentation of the target.
Further, in step 11, the training images are preprocessed and brightness-gradient, color-gradient, and texture-gradient features are extracted, specifically comprising steps 111 to 114:
Step 111: convert the color space of the training image and quantize its components, obtaining the normalized luminance component L and color components a, b;
Step 112: compute the brightness gradient of each pixel in the matrix of the luminance component L;
Step 113: compute the color gradient of each pixel in the matrices of color components a and b respectively;
Step 114: compute the texture gradient of each pixel.
Further, step 111 is as follows:
First, gamma-correct the training image to apply a nonlinear adjustment to its color components, and convert the image from the RGB color space to the Lab color space; then normalize the luminance component L and the two color components a, b of the training image in the Lab space, obtaining the normalized luminance component L and color components a, b.
Further, step 112 specifically comprises steps A to D:
A. Construct the weight matrices Wights<> at 3 scales;
B. Construct the index-map matrices Slice_map<> at 3 scales. The Slice_map<> of each scale has the same dimensions as the weight matrix Wights<> of that scale, i.e., each Slice_map<> is also a square matrix with 2r+1 rows and columns. Eight directions (0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°) divide the matrix into 16 sectors, and the elements within each sector take the sector's number, 0 to 15;
C. Multiply each Slice_map<> element-wise with the weight matrix Wights<> of the corresponding scale to obtain the matrix of that scale, i.e., the neighborhood gradient operator;
D. Use the neighborhood gradient operator to compute the brightness gradient of each pixel in the matrix of the luminance component L.
Further, step A is as follows:
Construct a weight matrix Wights<> for each of the 3 scales. Each Wights<> is a square matrix with 2r+1 rows and columns; its nonzero elements equal 1 and are distributed within the disk of radius r centered on the central element (r+1, r+1), forming the inscribed circle of the matrix, and all remaining elements are 0. The 3 scales are r=3, r=5, and r=10.
Further, step D is as follows:
① For a given scale, center the neighborhood gradient operator of that scale on the pixel in question in the matrix of the luminance component L obtained in step 111, and take the element-wise product with the luminance values in the pixel's neighborhood, obtaining the matrix Neibor<> over the neighborhood. Taking the vertical (90°) line as the dividing line, the disk of the neighborhood gradient operator is split into a left semicircle comprising sectors 0 to 7 and a right semicircle comprising sectors 8 to 15. The elements of Neibor<> in each semicircle form a histogram, which is normalized; the two histograms are denoted Slice_hist1<> and Slice_hist2<>. H1 denotes the histogram of the left semicircle, H2 the histogram of the right semicircle, and i is the histogram bin index, defined on [0, 24], i.e., the brightness range.
② The chi-square distance of formula (1) measures the difference between the two normalized histograms, giving the brightness gradient of the pixel in the vertical direction at that scale:

d_{Chi\_squared}(H_1, H_2) = \frac{1}{2} \sum_i \frac{(H_1(i) - H_2(i))^2}{H_1(i) + H_2(i)}    (1)

After the vertical-direction brightness gradient at one scale has been computed, the dividing line is rotated in turn to each of the other directions to obtain the brightness gradients of the pixel in all other directions at that scale, and the same procedure is repeated at the remaining scales. Once the brightness gradients at all scales and in all directions have been computed, the final brightness gradient of the pixel is given by formula (2):

f(x, y, r, n_ori; r = 3, 5, 10; n_ori = 1, 2, ..., 8) -> BrightnessGradient(x, y)    (2)

where f is a mapping, (x, y) is the pixel in question, r is the chosen scale, and n_ori is the chosen direction; BrightnessGradient(x, y) is the final brightness gradient of pixel (x, y). The rule of f is: for each direction, select the maximum brightness-gradient value over the 3 scales as the gradient in that direction, then sum the gradients over the 8 directions to obtain the final brightness gradient of pixel (x, y).
Further, step 114 is as follows:
A. Construct the multiscale texture filter bank Filters_(x,y)[n_f, filter, r, θ], where n_f is the number of filters, filter is the set of filter types, r is the scale, and θ is the chosen direction;
B. Compute the texture filter response vector of each pixel in the training image, i.e., Tex(x, y) = (fil_1, fil_2, fil_3, ..., fil_{n_f}), as follows:
Convolve the gray image I_gray(x, y) with the constructed filter bank Filters_(x,y)[n_f, filter, r, θ] over the neighborhood of the corresponding scale centered on pixel (x, y), obtaining the texture filter response vector of pixel (x, y). For example, at scale r=5 the convolution is taken over the 11*11 neighborhood centered on the pixel, i.e., I_gray(x, y) * Filters(n_f, filter, r, θ) with n_f=17, filter=(fil_cs, fil_1, fil_2), r=5, θ=0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°, yielding the response vector Tex(x, y) = (fil_1, fil_2, fil_3, ..., fil_17).
Computing the texture feature vectors of the neighborhoods at r=5, r=10, and r=20 in the same way gives the pixel's full texture filter response vector Tex(x, y) = (fil_1, fil_2, fil_3, ..., fil_51).
C. Construct the texton histogram, as follows:
Cluster the texture filter response vectors of all pixels (x, y) in the training images with the K-means method, taking K=32 as the initial value to obtain 32 cluster centers; the texture filter response vectors of the 32 cluster centers are taken as the texture primitives (textons) and used as the 32 bin labels of the texture feature histogram;
D. Compute the texture gradient of each pixel, as follows:
First obtain the neighborhood gradient operators at the 3 scales by steps A to C of step 112. For a given scale, center the neighborhood gradient operator of that scale on the pixel in question (x, y) and multiply each of its elements with the corresponding texture filter response vector, obtaining the neighborhood matrix group Neibor[<>] of the pixel. Taking the vertical (90°) line as the dividing line, the disk of the scale neighborhood is split into a left semicircle comprising sectors 0 to 7 and a right semicircle comprising sectors 8 to 15. The elements of Neibor[<>] in each semicircle form a texton histogram: H1 denotes the histogram of the left semicircle, H2 that of the right semicircle, with the bin labels given by step C. Proceeding exactly as in substep ② of step D of step 112, the final texture gradient of each pixel in the training image is obtained and denoted TextureGradient(x, y).
Further, step A is as follows:
Convert the training image to a gray image, denoted I_gray(x, y), and normalize the gray value of each pixel (x, y) of I_gray(x, y). Choose three kinds of filters: the Gaussian second-order partial derivative filter, its Hilbert transform, and the center-surround filter. Construct the multiscale texture filter bank over 8 directions and 3 scales, denoted Filters_(x,y)[n_f, filter, r, θ], where n_f is the number of filters, filter is the set of filter types, r is the scale, and θ is the chosen direction; n_f=51, filter=(fil_cs, fil_1, fil_2), r=5, 10, 20, θ=0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°. The filter bank Filters_(x,y)[n_f, filter, r, θ] is given by formulas (5), (6), and (7):
The Gaussian second-order partial derivative filter at 3 scales and 8 directions:

f_1(x, y) = \frac{d^2}{dy^2}\left(\frac{1}{C}\exp\left(-\frac{y^2}{\sigma^2}\right)\exp\left(-\frac{x^2}{l^2\sigma^2}\right)\right)    (5)

The Hilbert transform of the Gaussian second-order partial derivative filter at 3 scales and 8 directions:

f_2(x, y) = Hilbert(f_1(x, y))    (6)

The center-surround filter at 3 scales:

Gaussian_cs<> = m_surround<> - m_center<>    (7)

The standard deviations σ of the surround filter, the center filter, the Gaussian second-order partial derivative filter, and its Hilbert-transformed counterpart are respectively 2 and
Further, in step 12, multiple-instance learning is introduced into image saliency detection to obtain the saliency detection result of the test image, specifically comprising steps 121 and 122:
Step 121: using the brightness-, color-, and texture-gradient features obtained by the method of step 11, learn from the training set with the multiple-instance learning EMDD algorithm, obtaining a trained saliency detection model;
Step 122: feed the test image into the trained saliency detection model to obtain the saliency detection result of the test image.
Further, step 2 specifically comprises the following steps:
Step 21: take the saliency detection result of the image obtained in step 1 as the input of the graph-cut algorithm; construct the weight function of formula (3) from the saliency labels of the bags and the instance feature vectors, and obtain the optimized graph-cut cost function of formula (4):

w_{ij} = \begin{cases} \frac{1}{2}[Salien(i) + Salien(j)] \exp(-Sim(f_i, f_j)/\sigma^2), & i \neq j \\ 0, & i = j \end{cases}    (3)

R(U) = \frac{\sum_{i>j} w_{ij}(U_i - U_j)^2}{\sum_{i>j} w_{ij} U_i U_j} = \frac{U^T(D - W)U}{\frac{1}{2} U^T W U}    (4)

In formula (3), w_ij is the visual feature similarity between the regions corresponding to instance bags i and j; Salien(i) and Salien(j) are the normalized saliency values of regions i and j respectively; σ is a sensitivity parameter regulating the visual feature difference, with value 10 to 20; the similarity weight of region i to itself is 0; the similarity matrix W = {w_ij} is a symmetric matrix with zero diagonal and w_ij ∈ [0, 1]; f_i and f_j are the corresponding instance feature vectors in bags i and j, i.e., the brightness-gradient, color-gradient, and texture-gradient features of the image (the color gradient covering the two components a and b) are combined into the 4-dimensional mixed vector Mixvector_i = {BrightnessGradient_i, ColorGradient_i, TextureGradient_i}, and Sim(f_i, f_j) = ||Mixvector_i - Mixvector_j||^2. In the graph-cut framework of formula (4), D is an N-dimensional diagonal matrix with diagonal elements d_i = Σ_j w_ij; U = {U_1, U_2, ..., U_i, ..., U_j, ..., U_N} is the cut-state vector, each component U_i representing the cut state of region i; the numerator of formula (4) expresses the visual similarity between regions i and j, and the denominator the visual similarity within region i.
Step 22: use an agglomerative hierarchical clustering algorithm to solve for the cut-state vector corresponding to the minimum eigenvalue of R(U), which yields the optimal segmentation of the image.
Brief description of the drawings
Fig. 1 is the flowchart of the method of the invention.
Fig. 2 is a comparison of the segmentation results of the method of the patent with application number 201210257591.1, the spectral segmentation method based on multiscale graph decomposition, and the target segmentation method of the invention based on multiple-instance learning and graph-cut optimization.
Fig. 3 is a schematic of the left and right half-disk partition.
Fig. 4 is a schematic of the brightness histograms H1 and H2.
Fig. 5 is a schematic of rotating the dividing line of the disk.
Fig. 6 is a schematic of the construction of the texton histogram.
Fig. 7 is a schematic of the texture histograms H1 and H2.
The invention is explained further below in conjunction with the drawings and an embodiment.
Embodiment
As shown in Fig. 1, the vehicle target segmentation method under weak contrast provided by the invention specifically comprises the following steps:
Step 1: choose night expressway road images as training images and build a saliency model from them using multiple-instance learning; then use the saliency model to predict the bags and instances in the test image, obtaining the saliency map of the test image;
Step 2: introduce the saliency of the test image into a graph-cut framework, optimize the graph-cut framework according to the instance feature vectors and the bag labels, and solve the suboptimal solution of the graph-cut optimization with an agglomerative hierarchical clustering algorithm, obtaining an accurate segmentation of the target.
Further, step 1 specifically comprises steps 11 and 12:
Step 11: preprocess the training images and extract brightness-gradient, color-gradient, and texture-gradient features;
Step 12: introduce multiple-instance learning into image saliency detection to obtain the saliency detection result of the test image.
Further, in step 11, the training images are preprocessed and brightness-gradient, color-gradient, and texture-gradient features are extracted, specifically comprising steps 111 to 114:
Step 111: convert the color space of the training image and quantize its components, obtaining the normalized luminance component L and color components a, b. Specifically:
First, gamma-correct the training image to apply a nonlinear adjustment to its color components, and convert the image from the RGB color space to the Lab color space; then normalize the luminance component L and the two color components a, b in the Lab space, obtaining the normalized luminance component L and color components a, b.
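For illustration, a minimal Python sketch of this preprocessing follows, assuming scikit-image is available; the gamma value 2.2 and the min-max normalization to [0, 1] are assumptions, since the patent only specifies a "nonlinear adjustment" and "normalization":

```python
import numpy as np
from skimage import color

def preprocess(rgb_uint8, gamma=2.2):
    """Step 111 sketch: gamma correction, RGB -> Lab, per-channel min-max
    normalization. Returns the normalized L, a, b channels."""
    rgb = (rgb_uint8.astype(np.float64) / 255.0) ** (1.0 / gamma)
    lab = color.rgb2lab(rgb)                      # RGB -> Lab color space
    norm = lambda c: (c - c.min()) / (c.max() - c.min() + 1e-12)
    L, a, b = (norm(lab[..., k]) for k in range(3))
    return L, a, b
```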
After preprocessing the training images, the invention analyzes the vehicle shadow characteristics in them, which provides the theoretical basis for the subsequent choice of gradient features. The training images are all night road images with typical nighttime characteristics. Because the vehicles travel at night, the vehicle target in every training image is subject to shadow interference: vehicle shadows enlarge and deform the body region, can even cause several vehicles to appear connected, and severely affect the accurate segmentation of the vehicle body and the extraction of body information. The range and intensity of the headlights at night also affect target segmentation to some extent, so a good segmentation must eliminate the shadows formed by the illumination.
A shadow is a physical phenomenon produced when the light emitted by a source is blocked by an object in the scene; it comprises self-shadow and cast shadow. Self-shadow is the part of the object itself that appears darker because the object blocks the light source and is unevenly illuminated; cast shadow is the shadow the object projects onto other surfaces (such as the road). From a large number of training images containing vehicles driving on the expressway at night and their shadows, the characteristics that distinguish shadows from vehicle targets are mainly:
(1) The color and texture of the road surface covered by a shadow do not change significantly.
(2) The brightness of a cast shadow is generally lower than the background brightness, and its luminance gain relative to the background region is a value less than 1; under the interference of vehicle high-beam headlights, however, the opposite holds.
(3) The gray-value variation inside a shadow region is mild, and the gradient appears smooth or locally flat.
In summary, the invention uses the brightness-gradient, color-gradient, and texture-gradient features of the training images to learn the saliency model.
Step 112: compute the brightness gradient of each pixel in the matrix of the luminance component L, specifically comprising steps A to D:
A. Construct the weight matrices Wights<> at 3 scales, as follows:
Construct a weight matrix Wights<> for each of the 3 scales. Each Wights<> is a square matrix with 2r+1 rows and columns; its nonzero elements equal 1 and are distributed within the disk of radius r centered on the central element (r+1, r+1), forming the inscribed circle of the matrix, and all remaining elements are 0. In the invention the 3 scales are r=3, r=5, and r=10, with a corresponding weight matrix Wights<> for each.
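A minimal numpy sketch of step A follows; the function name disk_weights and the dictionary layout are illustrative, not part of the patent:

```python
import numpy as np

def disk_weights(r):
    """Wights<>: a (2r+1)x(2r+1) square matrix that is 1 inside the
    inscribed disk of radius r centered on the middle element, 0 elsewhere."""
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    return (x * x + y * y <= r * r).astype(int)

weights = {r: disk_weights(r) for r in (3, 5, 10)}   # the 3 scales of step A
```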
B. Construct the index-map matrices Slice_map<> at 3 scales. The Slice_map<> of each scale has the same dimensions as the weight matrix Wights<> of that scale, i.e., each Slice_map<> is also a square matrix with 2r+1 rows and columns. Eight directions (0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°) divide the matrix into 16 sectors, and the elements within each sector take the sector's number, 0 to 15. The purpose of the index-map matrix Slice_map<> is to locate the sectors quickly; one Slice_map<> is built for each of the 3 scales.
C. Multiply each Slice_map<> element-wise with the weight matrix Wights<> of the corresponding scale to obtain the matrix of that scale, i.e., the neighborhood gradient operator; one such operator results for each of the 3 scales.
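Steps B and C can be sketched as follows, building on disk_weights() above; the sector-numbering convention (which direction receives sector 0) is an assumption, since the patent does not fix it:

```python
import numpy as np

def slice_map(r, n_sectors=16):
    """Slice_map<>: label each element of the (2r+1)x(2r+1) window with its
    sector number 0..15 (8 directions split the disk into 16 sectors)."""
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    ang = np.arctan2(y, x) % (2.0 * np.pi)            # angle of each element
    smap = np.floor(ang / (2.0 * np.pi) * n_sectors).astype(int)
    return np.minimum(smap, n_sectors - 1)

def neighborhood_operator(r):
    # Step C: element-wise product of the index map and the disk weights.
    return slice_map(r) * disk_weights(r)
```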
D. Use the neighborhood gradient operator to compute the brightness gradient of each pixel in the matrix of the luminance component L, as follows:
① For a given scale, center the neighborhood gradient operator of that scale on the pixel in question in the matrix of the luminance component L obtained in step 111, and take the element-wise product with the luminance values in the pixel's neighborhood, obtaining the matrix Neibor<> over the neighborhood. Taking the vertical (90°) line as the dividing line, the disk of the neighborhood gradient operator is split into a left semicircle comprising sectors 0 to 7 and a right semicircle comprising sectors 8 to 15. The elements of Neibor<> in each semicircle form a histogram, which is normalized; the two histograms are denoted Slice_hist1<> and Slice_hist2<>, as shown in Fig. 4. H1 denotes the histogram of the left semicircle, H2 the histogram of the right semicircle, and i is the histogram bin index, defined on [0, 24], i.e., the brightness range.
② The chi-square distance of formula (1) measures the difference between the two normalized histograms, giving the brightness gradient of the pixel in the vertical direction at that scale:

d_{Chi\_squared}(H_1, H_2) = \frac{1}{2} \sum_i \frac{(H_1(i) - H_2(i))^2}{H_1(i) + H_2(i)}    (1)

After the vertical-direction brightness gradient at one scale has been computed, the dividing line is rotated in turn to each of the other directions, as shown in Fig. 5, to obtain the brightness gradients of the pixel in all other directions at that scale, and the same procedure is repeated at the remaining scales. Once the brightness gradients at all scales and in all directions have been computed, the final brightness gradient of the pixel is given by formula (2):

f(x, y, r, n_ori; r = 3, 5, 10; n_ori = 1, 2, ..., 8) -> BrightnessGradient(x, y)    (2)

where f is a mapping, (x, y) is the pixel in question, r is the chosen scale, and n_ori is the chosen direction; BrightnessGradient(x, y) is the final brightness gradient of pixel (x, y). The rule of f is: for each direction, select the maximum brightness-gradient value over the 3 scales as the gradient in that direction, then sum the gradients over the 8 directions to obtain the final brightness gradient of pixel (x, y).
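The half-disc comparison of substeps ① and ② and the scale/direction combination of formula (2) can be sketched as below, reusing disk_weights() and slice_map() from the sketches above. The bin quantization of the normalized luminance and the assignment of sectors to half discs per direction are assumed conventions:

```python
import numpy as np

N_BINS = 25            # bin index i on [0, 24], per substep ①

def half_disc_gradient(L, cx, cy, r, smap, wmat, left):
    """Formula (1): chi-square distance between the normalized histograms of
    the two half discs around pixel (cx, cy). Assumes L is normalized to
    [0, 1] and (cx, cy) lies at least r pixels from the image border."""
    patch = L[cy - r:cy + r + 1, cx - r:cx + r + 1]
    bins = np.clip((patch * (N_BINS - 1)).round().astype(int), 0, N_BINS - 1)
    h = np.zeros((2, N_BINS))
    for s in range(16):
        sel = (smap == s) & (wmat > 0)                 # elements of sector s
        h[0 if s in left else 1] += np.bincount(bins[sel], minlength=N_BINS)
    h /= h.sum(axis=1, keepdims=True) + 1e-12          # normalize: H1, H2
    return 0.5 * np.sum((h[0] - h[1]) ** 2 / (h[0] + h[1] + 1e-12))

def brightness_gradient(L, cx, cy):
    """Formula (2): maximum over the 3 scales per direction, summed over
    the 8 directions."""
    total = 0.0
    for d in range(8):
        left = {(d + k) % 16 for k in range(8)}        # half disc for cut d
        total += max(half_disc_gradient(L, cx, cy, r, slice_map(r),
                                        disk_weights(r), left)
                     for r in (3, 5, 10))
    return total
```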
Step 113: compute the color gradient of each pixel in the matrices of color components a and b respectively, as follows:
The computation of the color gradient is analogous to that of the brightness gradient, except that the color gradient is computed for the two color components a and b of the Lab color space, and that the 3 chosen scales are r=5, r=10, and r=20, so the corresponding weight matrices and index-map matrices are of size 11*11, 21*21, and 41*41 respectively. The color gradients of the two color components are computed by the same method as the brightness gradient, yielding the final color gradient of each pixel in the matrices of components a and b.
Step 114: compute the texture gradient of each pixel, as follows:
A. Construct the multiscale texture filter bank Filters_(x,y)[n_f, filter, r, θ], where n_f is the number of filters, filter is the set of filter types, r is the scale, and θ is the chosen direction. Specifically:
Convert the training image to a gray image, denoted I_gray(x, y), and normalize the gray value of each pixel (x, y) of I_gray(x, y). Choose three kinds of filters: the Gaussian second-order partial derivative filter (denoted fil_1<>), its Hilbert transform (denoted fil_2<>), and the center-surround filter (denoted Gaussian_cs<>). Construct the multiscale texture filter bank over 8 directions and 3 scales, denoted Filters_(x,y)[n_f, filter, r, θ], with n_f=51, filter=(fil_cs, fil_1, fil_2), r=5, 10, 20, θ=0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°. The filter bank is given by formulas (5), (6), and (7):
The Gaussian second-order partial derivative filter at 3 scales and 8 directions:

f_1(x, y) = \frac{d^2}{dy^2}\left(\frac{1}{C}\exp\left(-\frac{y^2}{\sigma^2}\right)\exp\left(-\frac{x^2}{l^2\sigma^2}\right)\right)    (5)

The Hilbert transform of the Gaussian second-order partial derivative filter at 3 scales and 8 directions:

f_2(x, y) = Hilbert(f_1(x, y))    (6)

The center-surround filter at 3 scales:

Gaussian_cs<> = m_surround<> - m_center<>    (7)

The center-surround filter in the bank Filters_(x,y)[n_f, filter, r, θ] has no directionality; it is the difference of a surround filter and a center filter, both Gaussian filters. The standard deviations σ of the surround filter, the center filter, the Gaussian second-order partial derivative filter, and its Hilbert-transformed counterpart are respectively 2 and
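A rough numpy/scipy sketch of the three filter types follows. The patent leaves the normalization constant C, the elongation of the Gaussian, and the center/surround sigma ratio unspecified, so those values are assumptions, and the center-surround filter is sketched as a difference of isotropic Gaussians:

```python
import numpy as np
from scipy.signal import hilbert

def f1_filter(sigma, theta, ell=3.0):
    """Formula (5) sketch: second y-derivative of an elongated Gaussian,
    rotated to orientation theta; ell and the L1 normalization are assumed."""
    half = int(np.ceil(3 * ell * sigma))
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-yr ** 2 / sigma ** 2) * np.exp(-xr ** 2 / (ell * sigma) ** 2)
    f = (4 * yr ** 2 / sigma ** 4 - 2 / sigma ** 2) * g   # d^2/dy^2 of g
    return f / np.abs(f).sum()

def f2_filter(sigma, theta, ell=3.0):
    # Formula (6): Hilbert transform of f1, taken column-wise here as an
    # approximation of the transform along the oriented axis.
    return np.imag(hilbert(f1_filter(sigma, theta, ell), axis=0))

def center_surround(sigma):
    """Formula (7) sketch: difference of surround and center Gaussians;
    the center/surround sigma ratio of 2 is an assumption."""
    half = int(np.ceil(3 * sigma))
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    g = lambda s: np.exp(-(x ** 2 + y ** 2) / s ** 2)
    gc, gs = g(sigma / 2.0), g(sigma)
    return gs / gs.sum() - gc / gc.sum()
```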
B. Compute the texture filter response vector of each pixel in the training image, i.e., Tex(x, y) = (fil_1, fil_2, fil_3, ..., fil_{n_f}), as follows:
Convolve the gray image I_gray(x, y) with the constructed filter bank Filters_(x,y)[n_f, filter, r, θ] over the neighborhood of the corresponding scale centered on pixel (x, y), obtaining the texture filter response vector of pixel (x, y). For example, at scale r=5 the convolution is taken over the 11*11 neighborhood centered on the pixel, i.e., I_gray(x, y) * Filters(n_f, filter, r, θ) with n_f=17, filter=(fil_cs, fil_1, fil_2), r=5, θ=0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°, yielding the response vector Tex(x, y) = (fil_1, fil_2, fil_3, ..., fil_17).
Computing the texture feature vectors of the neighborhoods at r=5, r=10, and r=20 in the same way gives the pixel's full texture filter response vector Tex(x, y) = (fil_1, fil_2, fil_3, ..., fil_51).
C. Construct the texton histogram, as follows:
Cluster the texture filter response vectors of all pixels (x, y) in the training images with the K-means method, taking K=32 as the initial value to obtain 32 cluster centers; the texture filter response vectors of the 32 cluster centers are taken as the texture primitives (textons) and used as the 32 bin labels of the texture feature histogram, as shown in Fig. 6.
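Step C maps directly onto an off-the-shelf K-means. In the sketch below, responses is an illustrative (n_pixels, 51) array stacking the texture filter response vectors of all training pixels; the seeding and n_init choices are assumptions:

```python
from sklearn.cluster import KMeans

def build_textons(responses, k=32, seed=0):
    """Step C sketch: the 32 cluster centers are the textons and the
    cluster labels index the 32 histogram bins of each pixel."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(responses)
    return km.cluster_centers_, km.labels_
```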
D. Compute the texture gradient of each pixel, as follows:
First obtain the neighborhood gradient operators at the 3 scales by steps A to C of step 112. For a given scale, center the neighborhood gradient operator of that scale on the pixel in question (x, y) and multiply each of its elements with the corresponding texture filter response vector, obtaining the neighborhood matrix group Neibor[<>] of the pixel. Taking the vertical (90°) line as the dividing line, the disk of the scale neighborhood is split into a left semicircle comprising sectors 0 to 7 and a right semicircle comprising sectors 8 to 15. The elements of Neibor[<>] in each semicircle form a texton histogram, as shown in Fig. 7: H1 denotes the histogram of the left semicircle, H2 that of the right semicircle, with the bin labels given by step C. Proceeding exactly as in substep ② of step D of step 112, the final texture gradient of each pixel in the training image is obtained and denoted TextureGradient(x, y).
Further, in step 12, multiple-instance learning is introduced into image saliency detection to obtain the saliency detection result of the test image, specifically comprising steps 121 and 122:
Step 121: using the brightness-, color-, and texture-gradient features obtained by the method of step 11, learn from the training set with the multiple-instance learning EMDD algorithm, obtaining a trained saliency detection model. The concrete steps are as follows:
First oversegment the training image into regions, each containing at least 200 pixels. Each region is taken as a bag and is randomly sampled; the sampled pixels of the region are taken as instances, and the corresponding brightness-gradient and color-gradient feature vectors are extracted as the sampled instance feature vectors. A classifier is then trained on the sampled instance feature vectors with the multiple-instance learning EMDD algorithm, yielding the trained saliency detection model;
Step 122: feed the test image into the trained saliency detection model to obtain its saliency detection result.
Each test image is preprocessed by the same procedure as step 11 to obtain its brightness-gradient and color-gradient features; the test image is then oversegmented into regions, each containing at least 200 pixels. Each region is taken as a bag and randomly sampled, the sampled pixels are taken as instances, and the corresponding brightness-gradient and color-gradient feature vectors are extracted as the sampled instance feature vectors. The trained saliency detection model of step 121 then yields the saliency of each bag of instance feature vectors, giving the saliency detection result of the test image.
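A sketch of the bag/instance construction shared by steps 121 and 122 follows. SLIC stands in for the oversegmentation method, which the patent does not name, and the EMDD training itself is assumed to come from an external multiple-instance learning implementation:

```python
import numpy as np
from skimage.segmentation import slic

def make_bags(image, features, n_samples=20, min_pixels=200, seed=0):
    """Oversegment the image into regions (bags) and randomly sample pixels
    (instances) from each; features is an (H, W, d) per-pixel feature map.
    The bags would then be passed to an EMDD trainer or scorer."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    labels = slic(image, n_segments=max(1, (h * w) // min_pixels))
    bags = []
    for region in np.unique(labels):
        ys, xs = np.nonzero(labels == region)
        idx = rng.choice(len(ys), size=min(n_samples, len(ys)), replace=False)
        bags.append(features[ys[idx], xs[idx]])    # instance feature vectors
    return bags
```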
Further, step 2 specifically comprises the following steps:
Step 21: take the saliency detection result of the image obtained in step 1 as the input of the graph-cut algorithm; construct the weight function of formula (3) from the saliency labels of the bags and the instance feature vectors, and obtain the optimized graph-cut cost function of formula (4):

w_{ij} = \begin{cases} \frac{1}{2}[Salien(i) + Salien(j)] \exp(-Sim(f_i, f_j)/\sigma^2), & i \neq j \\ 0, & i = j \end{cases}    (3)

R(U) = \frac{\sum_{i>j} w_{ij}(U_i - U_j)^2}{\sum_{i>j} w_{ij} U_i U_j} = \frac{U^T(D - W)U}{\frac{1}{2} U^T W U}    (4)

In formula (3), w_ij is the visual feature similarity between the regions corresponding to instance bags i and j; Salien(i) and Salien(j) are the normalized saliency values of regions i and j respectively; σ is a sensitivity parameter regulating the visual feature difference, with value 10 to 20; the similarity weight of region i to itself is 0; the similarity matrix W = {w_ij} is a symmetric matrix with zero diagonal and w_ij ∈ [0, 1]; f_i and f_j are the corresponding instance feature vectors in bags i and j, i.e., the brightness-gradient, color-gradient, and texture-gradient features of the image (the color gradient covering the two components a and b) are combined into the 4-dimensional mixed vector Mixvector_i = {BrightnessGradient_i, ColorGradient_i, TextureGradient_i}, and Sim(f_i, f_j) = ||Mixvector_i - Mixvector_j||^2. In the graph-cut framework of formula (4), D is an N-dimensional diagonal matrix with diagonal elements d_i = Σ_j w_ij; U = {U_1, U_2, ..., U_i, ..., U_j, ..., U_N} is the cut-state vector, each component U_i representing the cut state of region i; the numerator of formula (4) expresses the visual similarity between regions i and j, and the denominator the visual similarity within region i.
Step 22: use an agglomerative hierarchical clustering algorithm to solve for the cut-state vector corresponding to the minimum eigenvalue of R(U), which yields the optimal segmentation of the image.
Here the agglomerative hierarchical clustering algorithm refers to steps 2 and 3 of the method of the patent with application number 201210257591.1.
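As a rough illustration of formulas (3) and (4), the sketch below builds W and D and takes the eigenvector of the smallest generalized eigenvalue of (D - W)x = λ(W/2)x as the cut-state vector; the names salien and mixvecs are illustrative, the ridge term is an assumption for numerical stability, and the agglomerative hierarchical clustering of the referenced patent is not reproduced:

```python
import numpy as np
from scipy.linalg import eigh

def cut_states(salien, mixvecs, sigma=15.0):
    """Build the similarity matrix W of formula (3) (sigma in [10, 20]) and
    minimize the generalized Rayleigh quotient R(U) of formula (4)."""
    n = len(salien)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i):
            sim = float(np.sum((mixvecs[i] - mixvecs[j]) ** 2))  # ||.||^2
            W[i, j] = W[j, i] = (0.5 * (salien[i] + salien[j])
                                 * np.exp(-sim / sigma ** 2))
    D = np.diag(W.sum(axis=1))
    B = 0.5 * W + 1e-9 * np.eye(n)          # ridge keeps B positive definite
    vals, vecs = eigh(D - W, B)             # eigenvalues in ascending order
    return vecs[:, 0]                       # cut-state vector U
```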
Experimental verification
To verify the validity of the method, night expressway road images collected by the linear-array CCD cameras of a general road traffic information acquisition and detection system were taken as the research object. 200 road images containing vehicle targets and showing typical nighttime characteristics were chosen; 100 of them served as training images for learning the low-level visual features of vehicles traveling on the expressway at night, and the method of the invention was applied to segment the vehicle targets in the remaining 100 images. Part of the test results are shown in Fig. 2, which presents the segmentation results of the spectral segmentation algorithm based on multiscale graph decomposition alongside those of the method of the invention. They are described as follows:
In Fig. 2, subfigures (a-1) to (a-5) are the original images, subfigures (b-1) to (b-5) are the segmentation results of the spectral segmentation algorithm based on multiscale graph decomposition, and subfigures (c-1) to (c-5) are those of the method of the invention. The experimental comparison shows that the spectral segmentation algorithm based on multiscale graph decomposition obtains fairly complete vehicle targets when the contrast between vehicle and road surface is high, as for the white vehicles in the middle of (a-1) and (a-4), but essentially fails for weak-contrast vehicle targets, whereas the method of the invention segments most vehicle targets in the night expressway images, and its segmentation of weak-contrast vehicle targets is clearly better than that of the spectral algorithm. Because the method obtains the salient-region labels of the image very quickly through multiple-instance learning, and the instance feature vectors in each bag contain both the low-level visual features reflecting the target information and the mid-to-high-level features of the target contour, the coarse stage already accounts for the overall character of the image and provides a fairly accurate basis for subsequent processing; therefore good segmentation results are still obtained when the transition between object and background is slow, the difference is minimal, and the contrast is weak.

Claims (10)

1. A vehicle target segmentation method under weak contrast, characterized in that it specifically comprises the following steps:
Step 1: build a saliency model from the training images using multiple-instance learning; then use the saliency model to predict the bags and instances in the test image, obtaining the saliency map of the test image. Specifically:
Step 11: preprocess the training images and extract brightness-gradient, color-gradient, and texture-gradient features;
Step 12: introduce multiple-instance learning into image saliency detection to obtain the saliency detection result of the test image;
Step 2: introduce the saliency of the test image into a graph-cut framework, optimize the graph-cut framework according to the instance feature vectors and the bag labels, and solve a suboptimal solution of the graph-cut optimization to obtain an accurate segmentation of the target.
2. The vehicle target segmentation method under weak contrast of claim 1, characterized in that in step 11 the training images are preprocessed and brightness-gradient, color-gradient, and texture-gradient features are extracted, specifically comprising steps 111 to 114:
Step 111: convert the color space of the training image and quantize its components, obtaining the normalized luminance component L and color components a, b;
Step 112: compute the brightness gradient of each pixel in the matrix of the luminance component L;
Step 113: compute the color gradient of each pixel in the matrices of color components a and b respectively;
Step 114: compute the texture gradient of each pixel.
3. The vehicle target segmentation method under weak contrast of claim 2, characterized in that step 111 is as follows:
First, gamma-correct the training image to apply a nonlinear adjustment to its color components, and convert the image from the RGB color space to the Lab color space; then normalize the luminance component L and the two color components a, b in the Lab space, obtaining the normalized luminance component L and color components a, b.
4. The vehicle target segmentation method under weak contrast of claim 2, characterized in that step 112 specifically comprises steps A to D:
A. Construct the weight matrices Wights<> at 3 scales;
B. Construct the index-map matrices Slice_map<> at 3 scales. The Slice_map<> of each scale has the same dimensions as the weight matrix Wights<> of that scale, i.e., each Slice_map<> is also a square matrix with 2r+1 rows and columns. Eight directions (0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°) divide the matrix into 16 sectors, and the elements within each sector take the sector's number, 0 to 15;
C. Multiply each Slice_map<> element-wise with the weight matrix Wights<> of the corresponding scale to obtain the matrix of that scale, i.e., the neighborhood gradient operator;
D. Use the neighborhood gradient operator to compute the brightness gradient of each pixel in the matrix of the luminance component L.
5. The vehicle target segmentation method under weak contrast of claim 4, characterized in that step A is as follows:
Construct a weight matrix Wights<> for each of the 3 scales. Each Wights<> is a square matrix with 2r+1 rows and columns; its nonzero elements equal 1 and are distributed within the disk of radius r centered on the central element (r+1, r+1), forming the inscribed circle of the matrix, and all remaining elements are 0. The 3 scales are r=3, r=5, and r=10.
6. The vehicle target segmentation method under weak contrast of claim 4, characterized in that step D is as follows:
① For a given scale, center the neighborhood gradient operator of that scale on the pixel in question in the matrix of the luminance component L obtained in step 111, and take the element-wise product with the luminance values in the pixel's neighborhood, obtaining the matrix Neibor<> over the neighborhood. Taking the vertical (90°) line as the dividing line, the disk of the neighborhood gradient operator is split into a left semicircle comprising sectors 0 to 7 and a right semicircle comprising sectors 8 to 15. The elements of Neibor<> in each semicircle form a histogram, which is normalized; the two histograms are denoted Slice_hist1<> and Slice_hist2<>. H1 denotes the histogram of the left semicircle, H2 the histogram of the right semicircle, and i is the histogram bin index, defined on [0, 24], i.e., the brightness range.
② The chi-square distance of formula (1) measures the difference between the two normalized histograms, giving the brightness gradient of the pixel in the vertical direction at that scale:

d_{Chi\_squared}(H_1, H_2) = \frac{1}{2} \sum_i \frac{(H_1(i) - H_2(i))^2}{H_1(i) + H_2(i)}    (1)

After the vertical-direction brightness gradient at one scale has been computed, the dividing line is rotated in turn to each of the other directions to obtain the brightness gradients of the pixel in all other directions at that scale, and the same procedure is repeated at the remaining scales. Once the brightness gradients at all scales and in all directions have been computed, the final brightness gradient of the pixel is given by formula (2):

f(x, y, r, n_ori; r = 3, 5, 10; n_ori = 1, 2, ..., 8) -> BrightnessGradient(x, y)    (2)

where f is a mapping, (x, y) is the pixel in question, r is the chosen scale, and n_ori is the chosen direction; BrightnessGradient(x, y) is the final brightness gradient of pixel (x, y). The rule of f is: for each direction, select the maximum brightness-gradient value over the 3 scales as the gradient in that direction, then sum the gradients over the 8 directions to obtain the final brightness gradient of pixel (x, y).
7. The vehicle target segmentation method under weak contrast of claim 2, characterized in that step 114 is as follows:
A. Construct the multiscale texture filter bank Filters_(x,y)[n_f, filter, r, θ], where n_f is the number of filters, filter is the set of filter types, r is the scale, and θ is the chosen direction;
B. Compute the texture filter response vector of each pixel in the training image, i.e., Tex(x, y) = (fil_1, fil_2, fil_3, ..., fil_{n_f}), as follows:
Convolve the gray image I_gray(x, y) with the constructed filter bank Filters_(x,y)[n_f, filter, r, θ] over the neighborhood of the corresponding scale centered on pixel (x, y), obtaining the texture filter response vector of pixel (x, y). For example, at scale r=5 the convolution is taken over the 11*11 neighborhood centered on the pixel, i.e., I_gray(x, y) * Filters(n_f, filter, r, θ) with n_f=17, filter=(fil_cs, fil_1, fil_2), r=5, θ=0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°, yielding the response vector Tex(x, y) = (fil_1, fil_2, fil_3, ..., fil_17).
Computing the texture feature vectors of the neighborhoods at r=5, r=10, and r=20 in the same way gives the pixel's full texture filter response vector Tex(x, y) = (fil_1, fil_2, fil_3, ..., fil_51).
C. Construct the texton histogram, as follows:
Cluster the texture filter response vectors of all pixels (x, y) in the training images with the K-means method, taking K=32 as the initial value to obtain 32 cluster centers; the texture filter response vectors of the 32 cluster centers are taken as the texture primitives (textons) and used as the 32 bin labels of the texture feature histogram;
D. Compute the texture gradient of each pixel, as follows:
First obtain the neighborhood gradient operators at the 3 scales by steps A to C of step 112. For a given scale, center the neighborhood gradient operator of that scale on the pixel in question (x, y) and multiply each of its elements with the corresponding texture filter response vector, obtaining the neighborhood matrix group Neibor[<>] of the pixel. Taking the vertical (90°) line as the dividing line, the disk of the scale neighborhood is split into a left semicircle comprising sectors 0 to 7 and a right semicircle comprising sectors 8 to 15. The elements of Neibor[<>] in each semicircle form a texton histogram: H1 denotes the histogram of the left semicircle, H2 that of the right semicircle, with the bin labels given by step C. Proceeding exactly as in substep ② of step D of step 112, the final texture gradient of each pixel in the training image is obtained and denoted TextureGradient(x, y).
8. The vehicle target segmentation method under weak contrast of claim 7, characterized in that step A is as follows:
Convert the training image to a gray image, denoted I_gray(x, y), and normalize the gray value of each pixel (x, y) of I_gray(x, y). Choose three kinds of filters: the Gaussian second-order partial derivative filter, its Hilbert transform, and the center-surround filter. Construct the multiscale texture filter bank over 8 directions and 3 scales, denoted Filters_(x,y)[n_f, filter, r, θ], where n_f is the number of filters, filter is the set of filter types, r is the scale, and θ is the chosen direction; n_f=51, filter=(fil_cs, fil_1, fil_2), r=5, 10, 20, θ=0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°. The filter bank Filters_(x,y)[n_f, filter, r, θ] is given by formulas (5), (6), and (7):
The Gaussian second-order partial derivative filter at 3 scales and 8 directions:

f_1(x, y) = \frac{d^2}{dy^2}\left(\frac{1}{C}\exp\left(-\frac{y^2}{\sigma^2}\right)\exp\left(-\frac{x^2}{l^2\sigma^2}\right)\right)    (5)

The Hilbert transform of the Gaussian second-order partial derivative filter at 3 scales and 8 directions:

f_2(x, y) = Hilbert(f_1(x, y))    (6)

The center-surround filter at 3 scales:

Gaussian_cs<> = m_surround<> - m_center<>    (7)

The standard deviations σ of the surround filter, the center filter, the Gaussian second-order partial derivative filter, and its Hilbert-transformed counterpart are respectively 2 and
9. The vehicle target segmentation method under weak contrast of claim 1, characterized in that in step 12 multiple-instance learning is introduced into image saliency detection to obtain the saliency detection result of the test image, specifically comprising steps 121 and 122:
Step 121: using the brightness-, color-, and texture-gradient features obtained by the method of step 11, learn from the training set with the multiple-instance learning EMDD algorithm, obtaining a trained saliency detection model;
Step 122: feed the test image into the trained saliency detection model to obtain the saliency detection result of the test image.
10. the vehicle target dividing method under weak contrast as claimed in claim 1, it is characterized in that, described step 2 specifically comprises the steps:
Step 21, the conspicuousness testing result of image step 1 obtained cuts the input of algorithm as figure, the conspicuousness mark according to bag builds such as formula the weight function shown in (3) with exemplary characteristics vector; And obtain the figure after such as formula the optimization shown in (4) and cut cost function;
w i j = 1 2 &lsqb; S a l i e n ( i ) + S a l i e n ( j ) &rsqb; exp ( - S i m ( f i , f j ) / &delta; 2 ) i &NotEqual; j 0 i = j - - - ( 3 )
R ( U ) = &Sigma; i > j w i j ( U i - U j ) 2 &Sigma; i > j w i j U i - U j = U T ( D - W ) U 1 2 U T W U - - - ( 4 )
In formula (3), w ijrepresent the visual signature similarity of i example bag and j example bag corresponding region, Salien (i) and Salien (j) represent the remarkable angle value after region i and region j normalization respectively, σ is the sensitive parameter regulating visual signature difference, and value is 10 ~ 20; Region i is 0 to the similar weights of himself; Similarity matrix W={w ijto be diagonal line be 0 symmetric matrix, and w ij∈ [0,1]; f i, f jrepresent the exemplary characteristics vector of correspondence respectively in i and j example bag respectively, namely the brightness step feature of image, color gradient feature and texture gradient proper vector synthesize the 4 mix vector Mixvector tieed up i={ BrightnessGradient i, ColorGradient i, TextureGradient i, then Sim (f i, f j)=|| Mixvector i-Mixvector j|| 2; Figure represented by formula (4) cuts in framework, and D is that N ties up diagonal matrix, element on its diagonal line u={U 1, U 2..., U i..., U j... U nbe cutting state vector, each component of a vector U irepresent the cutting state of region i; The visual similarity divided between subrepresentation region i and region j of formula (4), denominator represents the visual similarity in the i of region;
Step 22: adopt an agglomerative hierarchical clustering algorithm to solve for the cutting-state vector corresponding to the minimum eigenvalue of R(U), thereby obtaining the optimal segmentation result of the image.
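One way to realize step 22 under these definitions: since U^T W U = U^T D U − U^T(D − W)U, minimizing R(U) is equivalent to minimizing the generalized Rayleigh quotient U^T(D − W)U / U^T D U, whose minimizers are eigenvectors of (D − W)U = λDU. The sketch below takes the smallest nontrivial eigenvector (the constant eigenvector at λ = 0 carries no cut information) and then groups its components; the two-way average-linkage grouping stands in for the claim's agglomerative hierarchical clustering, whose exact form the patent does not spell out.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.linalg import eigh

def segment(W):
    """Minimize R(U) and split the regions into two segments."""
    D = np.diag(W.sum(axis=1))          # assumes every region has nonzero degree
    # Generalized eigenproblem (D - W) v = lambda * D * v; eigenvalues ascending.
    vals, vecs = eigh(D - W, D)
    U = vecs[:, 1]                      # skip the trivial constant eigenvector
    # Agglomerative (average-linkage) grouping of the components of U into 2 clusters.
    Z = linkage(U.reshape(-1, 1), method="average")
    labels = fcluster(Z, t=2, criterion="maxclust")
    return U, labels                    # labels assign each region to a segment
```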
CN201510374899.8A 2015-06-30 2015-06-30 A kind of vehicle target dividing method under weak contrast Active CN105005989B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510374899.8A CN105005989B (en) 2015-06-30 2015-06-30 A kind of vehicle target dividing method under weak contrast

Publications (2)

Publication Number Publication Date
CN105005989A true CN105005989A (en) 2015-10-28
CN105005989B CN105005989B (en) 2018-02-13

Family

ID=54378646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510374899.8A Active CN105005989B (en) 2015-06-30 2015-06-30 A kind of vehicle target dividing method under weak contrast

Country Status (1)

Country Link
CN (1) CN105005989B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070189602A1 (en) * 2006-02-07 2007-08-16 Siemens Medical Solutions Usa, Inc. System and Method for Multiple Instance Learning for Computer Aided Detection
CN102509084A (en) * 2011-11-18 2012-06-20 中国科学院自动化研究所 Multi-examples-learning-based method for identifying horror video scene
CN102831600A (en) * 2012-07-24 2012-12-19 长安大学 Image layer cutting method based on weighting cut combination

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Liu Hongcai: "Research on Image Segmentation Methods Based on Visual Saliency Models and Graph Cuts", China Master's Theses Full-text Database, Information Science and Technology *
Xu Ke: "Research on Multi-Feature-Based Image Contour Detection Algorithms", China Master's Theses Full-text Database, Information Science and Technology *
Xie Yantao: "Image Saliency Analysis Based on Multiple-Instance Learning", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107871321B (en) * 2016-09-23 2021-08-27 南开大学 Image segmentation method and device
CN107871321A (en) * 2016-09-23 2018-04-03 南开大学 Image partition method and device
CN106600550A (en) * 2016-11-29 2017-04-26 深圳开立生物医疗科技股份有限公司 Ultrasonic image processing method and system
CN108625844A (en) * 2017-03-17 2018-10-09 中石化石油工程技术服务有限公司 A kind of calibration of gamma and test device
CN108090511A (en) * 2017-12-15 2018-05-29 泰康保险集团股份有限公司 Image classification method, device, electronic equipment and readable storage medium storing program for executing
CN109960984A (en) * 2017-12-25 2019-07-02 大连楼兰科技股份有限公司 Vehicle checking method based on contrast and significance analysis
CN109961637A (en) * 2017-12-25 2019-07-02 大连楼兰科技股份有限公司 Vehicle detection apparatus and system based on more subgraphs fusion and significance analysis
CN109241865A (en) * 2018-08-14 2019-01-18 长安大学 A kind of vehicle detection partitioning algorithm under weak contrast's traffic scene
CN109241865B (en) * 2018-08-14 2022-05-31 长安大学 Vehicle detection segmentation algorithm under weak contrast traffic scene
CN110866460B (en) * 2019-10-28 2020-11-27 衢州学院 Method and device for detecting specific target area in complex scene video
CN110866460A (en) * 2019-10-28 2020-03-06 衢州学院 Method and device for detecting specific target area in complex scene video
CN112284287A (en) * 2020-09-24 2021-01-29 哈尔滨工业大学 Stereoscopic vision three-dimensional displacement measurement method based on structural surface gray scale characteristics
CN112284287B (en) * 2020-09-24 2022-02-11 哈尔滨工业大学 Stereoscopic vision three-dimensional displacement measurement method based on structural surface gray scale characteristics
CN113239964A (en) * 2021-04-13 2021-08-10 联合汽车电子有限公司 Vehicle data processing method, device, equipment and storage medium
CN113239964B (en) * 2021-04-13 2024-03-01 联合汽车电子有限公司 Method, device, equipment and storage medium for processing vehicle data
CN113688670A (en) * 2021-07-14 2021-11-23 南京四维向量科技有限公司 Method for monitoring street lamp brightness based on image recognition technology

Similar Documents

Publication Publication Date Title
CN105005989A (en) Vehicle target segmentation method under weak contrast
CN105160309B (en) Three lanes detection method based on morphological image segmentation and region growing
Sakhare et al. Review of vehicle detection systems in advanced driver assistant systems
CN103258432B (en) Traffic accident automatic identification processing method and system based on videos
CN105354568A (en) Convolutional neural network based vehicle logo identification method
CN104778721A (en) Distance measuring method of significant target in binocular image
CN102354457B (en) General Hough transformation-based method for detecting position of traffic signal lamp
Lin et al. Vaid: An aerial image dataset for vehicle detection and classification
CN104200228B (en) Recognizing method and system for safety belt
CN103279759A (en) Vehicle front trafficability analyzing method based on convolution nerve network
CN103605977A (en) Extracting method of lane line and device thereof
Ding et al. Fast lane detection based on bird’s eye view and improved random sample consensus algorithm
CN104050447A (en) Traffic light identification method and device
CN202134079U (en) Unmanned vehicle lane marker line identification and alarm device
CN103679191A (en) An automatic fake-licensed vehicle detection method based on static state pictures
CN105069774A (en) Object segmentation method based on multiple-instance learning and graph cuts optimization
Sugiharto et al. Traffic sign detection based on HOG and PHOG using binary SVM and k-NN
CN103544488B (en) A kind of face identification method and device
CN105989334A (en) Monocular vision-based road detection method
Zang et al. Traffic lane detection using fully convolutional neural network
Mammeri et al. North-American speed limit sign detection and recognition for smart cars
CN106919939A (en) A kind of traffic signboard Tracking Recognition method and system
Kühnl et al. Visual ego-vehicle lane assignment using spatial ray features
Chen et al. Contrast limited adaptive histogram equalization for recognizing road marking at night based on YOLO models
Al-Shemarry et al. Developing learning-based preprocessing methods for detecting complicated vehicle licence plates

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Liu Zhanwen, Chen Ting, Lin Shan, Hao Ruru, Zhou Zhou, Zhao Xiangmo, Shen Chao, Duan Zongtao, Gao Tao, Fan Xing, Wang Runmin, Xu Jiang, Zhou Jingmei

Inventor before: Liu Zhanwen, Lin Shan, Kang Junmin, Wang Jiaojiao, Xu Jiang, Zhao Xiangmo, Fang Jianwu, Duan Zongtao, Wang Runmin, Hao Ruru, Qi Xiuzhen, Zhou Zhou, Zhou Jingmei

GR01 Patent grant