CN105005989B - Vehicle target segmentation method under weak contrast - Google Patents

Vehicle target segmentation method under weak contrast

Info

Publication number
CN105005989B
Authority
CN
China
Prior art keywords
pixel
matrix
Prior art date
Legal status
Active
Application number
CN201510374899.8A
Other languages
Chinese (zh)
Other versions
CN105005989A (en)
Inventor
刘占文
赵祥模
沈超
段宗涛
高涛
樊星
王润民
徐江
周经美
陈婷
林杉
郝茹茹
周洲
Current Assignee
Changan University
Original Assignee
Changan University
Priority date
Filing date
Publication date
Application filed by Changan University
Priority to CN201510374899.8A
Publication of CN105005989A
Application granted
Publication of CN105005989B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a vehicle target segmentation method under weak contrast. Step 1: a saliency model is built from training images using multi-instance learning; the model is then used to predict the bags and instances of a test image, yielding the saliency map of the test image. Step 2: the saliency of the test image is introduced into a graph-cut framework; the framework is optimized according to the instance feature vectors and the labels of the instance bags, and the suboptimal solution of the optimized graph cut is solved to obtain an accurate segmentation of the target. Drawing on the human visual attention mechanism and combining it with graph-theory-based image segmentation, the invention establishes a vehicle target segmentation model based on visual saliency features. It not only segments complete vehicles accurately under good environmental conditions, but also shows a degree of adaptability and robustness, segmenting weak-contrast vehicle targets in traffic scenes relatively accurately at night and under shadow occlusion.

Description

Vehicle target segmentation method under weak contrast
Technical field
The invention belongs to the field of image processing and relates to an image segmentation method, specifically a vehicle target segmentation method under weak contrast.
Background art
With the progress of science and technology, intelligent transportation systems (ITS) have become an important means for improving the intelligence of traffic and the level of traffic management. In particular, with the rapid development of computer and sensor technology, machine-vision-based vehicle detection and monitoring has become an important component of ITS, playing an important role in traffic management, traffic information collection, emergency rescue and so on. Segmenting and extracting vehicle targets of interest from traffic surveillance images is a key technology in machine-vision-based vehicle detection and monitoring systems: the precision of the segmentation directly affects the accuracy of vehicle detection and underpins subsequent processing such as vehicle classification, identification and tracking. Many conventional image segmentation methods are currently used for vehicle target segmentation, such as region-based, edge-based, feature-space-based and threshold-based methods; all of them share the drawbacks that different threshold settings can produce widely varying segmentation results and quality, and that robustness is poor. In recent years, graph-theory-based image segmentation has attracted wide attention from scholars and become a relatively new research hotspot in the image segmentation field. It maps image elements onto a graph and performs segmentation on the graph to extract the region of interest; examples include spectral segmentation methods based on multi-scale graph decomposition and graph-cut methods using brightness and colour features directly (such as the method of the patent with application No. 201210257591.1). For vehicle target segmentation, these methods can segment vehicles well in daytime, even under sunlight and shadow, but their environmental adaptability remains poor; in particular, for weak-contrast targets in complex traffic scenes, night scenes or bad weather (such as dense fog, rain or snow), it is difficult to obtain satisfactory segmentation results.
Summary of the invention
In view of the above shortcomings of the prior art, the object of the invention is to draw on the human visual attention mechanism and combine it with graph-theory-based image segmentation, establishing a vehicle target segmentation model based on visual saliency features. The model not only segments complete vehicles accurately under good environmental conditions, with adaptability and robustness, but also segments weak-contrast vehicle targets in traffic scenes relatively accurately at night and under shadow occlusion.
A vehicle target segmentation method under weak contrast specifically comprises the following steps:
Step 1: a saliency model is built from training images using the method of multi-instance learning; the saliency model is then used to predict the bags and instances of a test image, yielding the saliency map of the test image; specifically including:
Step 11, preprocessing the training images and extracting image brightness gradient, colour gradient and texture gradient features;
Step 12, introducing multi-instance learning into image saliency detection to obtain the saliency detection result of the test image;
Step 2: the saliency of the test image is introduced into a graph-cut framework; the framework is optimized according to the instance feature vectors and the labels of the instance bags, and the suboptimal solution of the optimized graph cut is solved to obtain an accurate segmentation of the target.
Further, in step 11 the training images are preprocessed and the brightness gradient, colour gradient and texture gradient features are extracted, specifically including steps 111 to 114:
Step 111, performing colour-space conversion and per-component quantization preprocessing on the training image to obtain the normalized luminance component L and colour components a, b;
Step 112, calculating the brightness gradient of each pixel in the matrix of the luminance component L;
Step 113, calculating the colour gradient of each pixel in the matrices of colour component a and colour component b, respectively;
Step 114, calculating the texture gradient of each pixel.
Further, step 111 is as follows:
First, gamma correction is applied to the training image to realize a non-linear adjustment of its colour components, and the image is converted from the RGB colour space to the Lab colour space; the luminance component L and the two colour components a, b of the training image in the Lab colour space are then normalized, giving the normalized luminance component L and colour components a, b.
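For illustration, a minimal Python sketch of this preprocessing, assuming OpenCV is available; the gamma value and the per-component min–max normalization are assumptions, since the patent does not fix them:

```python
import cv2
import numpy as np

def preprocess(bgr, gamma=2.2):
    # Gamma correction: non-linear adjustment of the colour components.
    img = (bgr.astype(np.float32) / 255.0) ** (1.0 / gamma)
    # RGB (BGR as loaded by OpenCV) -> Lab colour space.
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2Lab)
    L, a, b = cv2.split(lab)
    # Normalize each component to [0, 1].
    norm = lambda c: (c - c.min()) / (c.max() - c.min() + 1e-12)
    return norm(L), norm(a), norm(b)
```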
Further, step 112 specifically includes steps A–D:
A, building the weight matrices Wights< > of the 3 scales;
B, building the index map matrices Slice_map< > of the 3 scales; the index map matrix Slice_map< > of each scale has the same dimensions as the weight matrix Wights< > of that scale, i.e. each Slice_map< > matrix is likewise a square matrix of 2r+1 rows and columns; 8 directions (0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°) are chosen to divide the matrix into 16 regions, the elements of each region taking the value of that region's number, 0 to 15;
C, multiplying element-wise each index map matrix Slice_map< > with the weight matrix Wights< > of the corresponding scale to obtain the matrix of that scale, i.e. the neighbourhood gradient operator;
D, using the neighbourhood gradient operator to calculate the brightness gradient of a pixel under consideration in the matrix of the luminance component L.
Further, step A is as follows:
The weight matrices Wights< > of the 3 scales are built separately; each weight matrix Wights< > is a square matrix whose numbers of rows and columns both equal 2r+1; its elements are either 0 or 1, the elements equal to 1 being distributed within the disk of radius r centred on the central element (r+1, r+1) of the matrix, forming the inscribed circle of the square matrix, while the remaining elements are 0; the 3 scales are r=3, r=5 and r=10.
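A minimal NumPy sketch of step A; the disk test x²+y²≤r² is the natural reading of "inscribed circle", and nothing else is assumed:

```python
import numpy as np

def weight_matrix(r):
    # (2r+1) x (2r+1) square matrix; entries are 1 inside the inscribed
    # disk of radius r centred on element (r+1, r+1), and 0 elsewhere.
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    return (x * x + y * y <= r * r).astype(np.uint8)

weights = {r: weight_matrix(r) for r in (3, 5, 10)}  # the 3 scales
```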
Further, step D is as follows:
1. For a given scale, taking a pixel under consideration as the centre in the matrix of the luminance component L obtained in step 111, the neighbourhood gradient operator of that scale is multiplied point-wise with the luminance components within the pixel's neighbourhood, giving the matrix Neibor< > over the neighbourhood; the straight line of the vertical direction (90°) is chosen as the dividing line, splitting the disk of the neighbourhood gradient operator into a left and a right half-disk, the left half-disk comprising sectors 0 to 7 and the right half-disk sectors 8 to 15; the elements of Neibor< > belonging to each half-disk form a histogram, which is normalized; the two histograms are denoted Slice_hist1< > and Slice_hist2< >; H1 denotes the histogram of the left half-disk region and H2 that of the right half-disk region, and i is the bin index of the histograms, defined on [0, 24], i.e. the brightness range.
2. The difference between the two normalized histograms is computed with the chi-square distance of formula (1), giving the brightness gradient of the pixel in the vertical direction at that scale:

$$d_{\chi^2}(H_1,H_2)=\frac{1}{2}\sum_i\frac{(H_1(i)-H_2(i))^2}{H_1(i)+H_2(i)} \qquad (1)$$

After the brightness gradient in the vertical direction at one scale has been computed, the straight lines of the other directions are chosen in turn as dividing lines, giving the brightness gradients of the pixel in all other directions at that scale; the brightness gradients in all directions at the remaining scales are then computed in the same manner as in step D. Once the brightness gradients of the pixel in all directions at all scales have been computed, the final brightness gradient of the pixel is obtained by formula (2):
f(x, y, r, n_ori; r = 3, 5, 10; n_ori = 1, 2, …, 8) → BrightnessGradient(x, y)   (2)
In the formula, f is a mapping function, (x, y) is any pixel under consideration, r denotes the chosen scale and n_ori the chosen direction; BrightnessGradient(x, y) is the final brightness gradient of pixel (x, y); the rule of f is to select, for each direction, the maximum brightness gradient value over the 3 scales as the brightness gradient in that direction, and to sum the brightness gradients over the 8 directions to obtain the final brightness gradient of pixel (x, y).
Further, step 114 is as follows:
A, building the multi-scale texture filter bank set Filters_(x,y)(nf, filter, r, θ), where nf denotes the number of filters, filter the set of filter types, r the scale and θ the chosen direction;
B, calculating the texture filter response vector of each pixel in the training image, i.e. Tex(x, y) = (fil1, fil2, fil3, …, fil_nf), as follows:
The grey image Igray(x, y) is convolved with the constructed multi-scale texture filter set Filters_(x,y)[nf, filter, r, θ] over the neighbourhood of the corresponding scale centred on pixel (x, y), giving the texture filter response vector of pixel (x, y). For scale r=5, for example, the convolution is performed over the 11×11 neighbourhood centred on the pixel, i.e. Igray(x, y)*Filters(nf, filter, r, θ) with nf=17, filter=(filcs, fil1, fil2), r=5 and θ = 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°, giving the texture filter response vector Tex(x, y) = (fil1, fil2, fil3, …, fil17) of pixel (x, y).
With the above method, the texture feature vectors of the neighbourhoods of the corresponding scales are computed for r=5, r=10 and r=20, centred on the pixel (x, y), giving the full texture filter response vector Tex(x, y) = (fil1, fil2, fil3, …, fil51) of the pixel.
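A sketch of the per-pixel response computation with SciPy; filter_bank(r) stands for a helper returning the 17 kernels of one scale (a version is sketched under step A below), and whole-image convolution rather than explicit per-pixel windowing is an implementation shortcut:

```python
import numpy as np
from scipy.ndimage import convolve

def texture_responses(gray, filter_bank, radii=(5, 10, 20)):
    # One convolution per kernel; stacking the 3 x 17 = 51 filtered images
    # gives each pixel its 51-dimensional response vector Tex(x, y).
    maps = [convolve(gray, k, mode='nearest')
            for r in radii for k in filter_bank(r)]
    return np.stack(maps, axis=-1)  # shape H x W x 51
```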
C, building the texton histogram, as follows:
The texture filter response vectors of all pixels (x, y) in the training images are clustered with the K-means method; in the clustering, K=32 is taken as the initial value, giving 32 cluster centres; the texture filter response vectors corresponding to the 32 cluster centres are taken out as textons and used as the 32 bins of the texture feature statistical histogram, building the texton histogram;
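A sketch of the texton construction with scikit-learn (an assumed library choice; the patent names only K-means), mapping every pixel to one of the K = 32 bins:

```python
import numpy as np
from sklearn.cluster import KMeans

def texton_labels(responses, k=32, seed=0):
    # responses: H x W x 51 array from texture_responses().
    h, w, d = responses.shape
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    labels = km.fit_predict(responses.reshape(-1, d))
    return labels.reshape(h, w)  # per-pixel texton index 0..31
```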
D, calculating the texture gradient of each pixel, as follows:
First, steps A–C of step 112 are used to obtain the neighbourhood gradient operators at the 3 scales. For a given scale, centred on a pixel (x, y) under consideration, each element of the neighbourhood gradient operator of that scale is multiplied with the corresponding texture filter response vector, giving the neighbourhood matrix group Neibor[< >] of the pixel; the straight line of the vertical direction (90°) is chosen as the dividing line, splitting the disk of the scale's neighbourhood into a left and a right half-disk, the left half-disk comprising sectors 0 to 7 and the right half-disk sectors 8 to 15; the elements of Neibor[< >] belonging to each half-disk form a texton histogram; H1 denotes the histogram of the left half-disk region and H2 that of the right half-disk region, the histogram bins being those provided by step C; in the same way as sub-step 2 of step D of step 112, the final texture gradient of each pixel under consideration in the training image is obtained and denoted TextureGradient(x, y).
Further, step A is as follows:
The training image is converted to a grey image, denoted Igray(x, y), and the grey component of each pixel (x, y) of Igray(x, y) is normalized; three kinds of filters are chosen: the Gaussian second-order derivative filter, the filter obtained from it by Hilbert transform, and the centre-surround filter; the multi-scale texture filter set is built from 8 directions and 3 scales and denoted Filters_(x,y)[nf, filter, r, θ], where nf denotes the number of filters, filter the set of filter types, r the scale and θ the chosen direction; nf=51, filter=(filcs, fil1, fil2), r = 5, 10, 20, θ = 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°; the multi-scale texture filter set Filters_(x,y)[nf, filter, r, θ] is as shown in formulas (5), (6) and (7):
The Gaussian second-order derivative filter over the 8 directions and 3 scales:

$$f_1(x,y)=\frac{d^2}{dy^2}\left(\frac{1}{C}\exp\left(-\frac{y^2}{\sigma^2}\right)\exp\left(-\frac{x^2}{l^2\sigma^2}\right)\right) \qquad (5)$$

The filter obtained by Hilbert transform of the Gaussian second-order derivative filter over the 8 directions and 3 scales:
f2(x, y) = Hilbert(f1(x, y))   (6)
The centre-surround filter of the 3 scales:
Gaussian_cs< > = m_surround< > − m_center< >   (7)
The standard deviation σ values corresponding to the surround filter, the centre filter, the Gaussian second-order derivative filter and its Hilbert-transform filter are, respectively, …, 2 and ….
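A sketch of one plausible realization of formulas (5)–(7); the elongation l, the scale-to-σ mapping, the use of a 1-D Hilbert transform along the derivative axis, the zero-mean normalization, and the difference-of-Gaussians form of the centre-surround kernel are all assumptions filling details the text leaves to the formulas:

```python
import numpy as np
from scipy.signal import hilbert

def oriented_pair(r, theta, l=3.0, sigma=None):
    # Gaussian second-order derivative kernel (formula (5)) at orientation
    # theta, plus its Hilbert-transform pair (formula (6)).
    sigma = sigma or r / 2.0                     # assumed scale -> sigma map
    y, x = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-yr ** 2 / sigma ** 2) * np.exp(-xr ** 2 / (l ** 2 * sigma ** 2))
    f1 = (4 * yr ** 2 / sigma ** 4 - 2 / sigma ** 2) * g   # d^2/dyr^2 of g
    f2 = np.imag(hilbert(f1, axis=0))                      # quadrature pair
    return f1 - f1.mean(), f2 - f2.mean()                  # zero-mean kernels

def center_surround(r):
    # Formula (7): surround minus centre, here as a difference of Gaussians.
    y, x = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    gauss = lambda s: np.exp(-(x ** 2 + y ** 2) / (2 * s ** 2))
    return gauss(r / 2.0) - gauss(r / 4.0)

def filter_bank(r):
    # 8 orientations x 2 oriented kernels + 1 centre-surround = 17 per scale.
    kernels = []
    for k in range(8):
        f1, f2 = oriented_pair(r, k * np.pi / 8)
        kernels += [f1, f2]
    return kernels + [center_surround(r)]
```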
Further, in step 12 multi-instance learning is introduced into image saliency detection to obtain the saliency detection result of the test image, specifically including steps 121 and 122:
Step 121, using the brightness, colour and texture gradient features obtained with the method of step 11, and realizing the learning of the training set with the multi-instance learning EM-DD algorithm, to obtain the learned saliency detection model;
Step 122, substituting the test image into the learned saliency detection model to obtain the saliency detection result of the test image.
Further, step 2 specifically comprises the following steps:
Step 21, the saliency detection result of the image obtained in step 1 is taken as the input of the graph-cut algorithm; the weight function shown in formula (3) is built from the saliency labels of the bags and the instance feature vectors, and the optimized graph-cut cost function shown in formula (4) is obtained:

$$w_{ij}=\begin{cases}\dfrac{1}{2}\left[Salien(i)+Salien(j)\right]\exp\left(-Sim(f_i,f_j)/\delta^2\right) & i\neq j\\[4pt] 0 & i=j\end{cases} \qquad (3)$$

$$R(U)=\frac{\sum_{i>j}w_{ij}(U_i-U_j)^2}{\sum_{i>j}w_{ij}U_iU_j}=\frac{U^{T}(D-W)U}{\frac{1}{2}U^{T}WU} \qquad (4)$$

In formula (3), wij denotes the visual feature similarity of the regions corresponding to instance bags i and j; Salien(i) and Salien(j) denote the normalized saliency values of regions i and j, respectively; δ is a sensitivity parameter regulating the visual feature difference, with a value of 10 to 20; the weight of a region i with itself is 0; the similarity matrix W = {wij} is a symmetric matrix with zero diagonal, and wij ∈ [0, 1]; fi, fj denote the corresponding instance feature vectors of instance bags i and j, respectively, i.e. the brightness gradient, colour gradient and texture gradient features of the image are combined into the 3-dimensional mixed vector Mixvector_i = {BrightnessGradient_i, ColorGradient_i, TextureGradient_i}, and Sim(fi, fj) = ||Mixvector_i − Mixvector_j||²; in the graph-cut framework of formula (4), D is the N-dimensional diagonal matrix whose diagonal elements are d_ii = Σ_j w_ij; U = {U1, U2, …, Ui, …, Uj, …, UN} is the cut-state vector, each component Ui denoting the cut state of region i; the numerator of formula (4) represents the visual similarity between regions i and j, and the denominator represents the visual similarity within region i;
Step 22, using the agglomerative hierarchical clustering algorithm, the cut-state vector corresponding to the eigenvalue of the minimum of R(U) is solved, giving the optimal segmentation result of the image.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention.
Fig. 2 is a comparison of the segmentation results of the method of the patent with application No. 201210257591.1, i.e. the spectral segmentation method based on multi-scale graph decomposition, and the target segmentation method of the invention based on multi-instance learning and graph-cut optimization.
Fig. 3 is a schematic diagram of the left/right partition of the disk.
Fig. 4 is a schematic diagram of the brightness histograms H1, H2.
Fig. 5 is a schematic diagram of changing the direction of the dividing line of the disk.
Fig. 6 is a schematic diagram of the formation of the texton histogram.
Fig. 7 is a schematic diagram of the texture histograms H1, H2.
The invention is further explained below with reference to the accompanying drawings and an embodiment.
Embodiment
As shown in Fig. 1, the vehicle target segmentation method under weak contrast provided by the invention specifically comprises the following steps:
Step 1: night expressway road images are chosen as training images, and a saliency model is built from them using the method of multi-instance learning; the saliency model is then used to predict the bags and instances of a test image, yielding the saliency map of the test image;
Step 2: the saliency of the test image is introduced into a graph-cut framework; the framework is optimized according to the instance feature vectors and the labels of the instance bags, the suboptimal solution of the optimized graph cut is solved with the agglomerative hierarchical clustering algorithm, and the accurate segmentation of the target is obtained.
Further, step 1 specifically includes steps 11 and 12:
Step 11, preprocessing the training images and extracting image brightness gradient, colour gradient and texture gradient features;
Step 12, introducing multi-instance learning into image saliency detection to obtain the saliency detection result of the test image.
Further, in step 11 the training images are preprocessed and the brightness gradient, colour gradient and texture gradient features are extracted, specifically including steps 111 to 114:
Step 111, performing colour-space conversion and per-component quantization preprocessing on the training image to obtain the normalized luminance component L and colour components a, b, as follows:
First, gamma correction is applied to the training image to realize a non-linear adjustment of its colour components, and the image is converted from the RGB colour space to the Lab colour space; the luminance component L and the two colour components a, b of the training image in the Lab colour space are then normalized, giving the normalized luminance component L and colour components a, b;
After the preprocessing of the training images is completed, the invention analyses the vehicle shadow features in the training images, providing a theoretical basis for the choice of the gradient features that follow. The training images are all road images with typical night characteristics. Because of night driving, the vehicle targets in every training image suffer shadow interference; vehicle shadows enlarge and deform the vehicle body region and can even make several vehicles appear connected, severely affecting the accurate segmentation of the vehicle body and the extraction of body information, and at night the range and intensity of headlight illumination also affect target segmentation to a certain extent. To obtain a good segmentation, the shadows formed by illumination must be eliminated.
A shadow is a physical phenomenon caused by an object in the scene blocking the light emitted by a light source, and includes self-shadow and cast shadow. Self-shadow is the part of the object itself that appears dark because the object blocks the light source and the illumination is uneven; cast shadow is the shadow of the object on other surfaces (such as the road). From a large number of training images containing night expressway vehicles and their shadows, the features distinguishing shadows from vehicle targets are mainly:
(1) The colour and texture of the road surface covered by a shadow do not change in saliency.
(2) The brightness of a cast shadow is generally lower than the background brightness, and its luminance gain relative to the background region is a value smaller than 1; under the interference of vehicle high-beam headlights, however, the opposite holds.
(3) The grey-value variation inside a shadow region is not severe, appearing flat, or locally flat, in the gradient.
Summing up this analysis, the invention uses the brightness gradient, colour gradient and texture gradient features of the training images to learn the saliency model.
Step 112, calculating the brightness gradient of each pixel in the matrix of the luminance component L, specifically including steps A–D:
A, building the weight matrices Wights< > of the 3 scales, as follows:
The weight matrices Wights< > of the 3 scales are built separately; each weight matrix Wights< > is a square matrix whose numbers of rows and columns both equal 2r+1; its elements are either 0 or 1, the elements equal to 1 being distributed within the disk of radius r centred on the central element (r+1, r+1) of the matrix, forming the inscribed circle of the square matrix, while the remaining elements are 0; in the invention the 3 scales are r=3, r=5 and r=10, and the corresponding weight matrices Wights< > are built accordingly;
B, building the index map matrices Slice_map< > of the 3 scales; the index map matrix Slice_map< > of each scale has the same dimensions as the weight matrix Wights< > of that scale, i.e. each Slice_map< > matrix is likewise a square matrix of 2r+1 rows and columns; 8 directions (0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°) are chosen to divide the matrix into 16 regions, the elements of each region taking the value of that region's number, 0 to 15; the purpose of the index map matrices Slice_map< > is to realize fast positioning of the subregions; the 3 index map matrices Slice_map< > of the invention are built accordingly;
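A sketch of the index map construction; which sector receives label 0 and the direction of numbering are assumptions, since the patent fixes only the 16 sectors and the labels 0–15:

```python
import numpy as np

def slice_map(r):
    # 16 sectors of 22.5 degrees each, labelled 0..15; elements outside
    # the inscribed disk are marked -1.
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    angle = np.arctan2(y, x)                         # (-pi, pi]
    angle = np.where(angle < 0, angle + 2 * np.pi, angle)
    sectors = (angle / (np.pi / 8)).astype(int) % 16
    return np.where(x * x + y * y <= r * r, sectors, -1)
```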
C, multiplying element-wise each index map matrix Slice_map< > with the weight matrix Wights< > of the corresponding scale to obtain the matrix of that scale, i.e. the neighbourhood gradient operator; the neighbourhood gradient operators at the 3 scales are obtained accordingly.
D, using the neighbourhood gradient operator to calculate the brightness gradient of a pixel under consideration in the matrix of the luminance component L, as follows:
1. For a given scale, taking a pixel under consideration as the centre in the matrix of the luminance component L obtained in step 111, the neighbourhood gradient operator of that scale is multiplied point-wise with the luminance components within the pixel's neighbourhood, giving the matrix Neibor< > over the neighbourhood; the straight line of the vertical direction (90°) is chosen as the dividing line, splitting the disk of the neighbourhood gradient operator into a left and a right half-disk, the left half-disk comprising sectors 0 to 7 and the right half-disk sectors 8 to 15; the elements of Neibor< > belonging to each half-disk form a histogram, which is normalized; the two histograms are denoted Slice_hist1< > and Slice_hist2< >, as shown in Fig. 4; H1 denotes the histogram of the left half-disk region and H2 that of the right half-disk region, and i is the bin index of the histograms, defined on [0, 24], i.e. the brightness range.
2. The difference between the two normalized histograms is computed with the chi-square distance of formula (1), giving the brightness gradient of the pixel in the vertical direction at that scale;
After the brightness gradient in the vertical direction at one scale has been computed, as shown in Fig. 5, the straight lines of the other directions are chosen in turn as dividing lines, giving the brightness gradients of the pixel in all other directions at that scale; the brightness gradients in all directions at the remaining scales are then computed in the same manner as in step D. Once the brightness gradients of the pixel in all directions at all scales have been computed, the final brightness gradient of the pixel is obtained by formula (2):
f(x, y, r, n_ori; r = 3, 5, 10; n_ori = 1, 2, …, 8) → BrightnessGradient(x, y)   (2)
In the formula, f is a mapping function, (x, y) is any pixel under consideration, r denotes the chosen scale and n_ori the chosen direction; BrightnessGradient(x, y) is the final brightness gradient of pixel (x, y); the rule of f is to select, for each direction, the maximum brightness gradient value over the 3 scales as the brightness gradient in that direction, and to sum the brightness gradients over the 8 directions to obtain the final brightness gradient of pixel (x, y);
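Putting steps A, B and D together, a sketch of the per-pixel brightness gradient; weight_matrix() and slice_map() are the helpers sketched above, the pixel is assumed far enough from the border for the window to fit, the 25-bin quantization follows the bin range [0, 24], and the realignment of sector labels to rotate the dividing line is an assumed implementation of Fig. 5:

```python
import numpy as np

def chi_square(h1, h2):
    # Formula (1): chi-square distance between two normalized histograms.
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-12))

def brightness_gradient(L, px, py, radii=(3, 5, 10), n_bins=25):
    total = 0.0
    for n_ori in range(8):                     # 8 dividing-line directions
        best = 0.0
        for r in radii:
            w, smap = weight_matrix(r), slice_map(r)
            patch = L[py - r:py + r + 1, px - r:px + r + 1] * w  # Neibor< >
            bins = np.clip((patch * (n_bins - 1)).astype(int), 0, n_bins - 1)
            # Rotate sector labels so the dividing line points along n_ori;
            # one half-disk is sectors 0..7, the other 8..15.
            left = ((smap - 2 * n_ori) % 16) < 8
            inside = smap >= 0
            h1 = np.bincount(bins[inside & left], minlength=n_bins).astype(float)
            h2 = np.bincount(bins[inside & ~left], minlength=n_bins).astype(float)
            h1 /= max(h1.sum(), 1.0)
            h2 /= max(h2.sum(), 1.0)
            best = max(best, chi_square(h1, h2))  # max over the 3 scales
        total += best                             # sum over the 8 directions
    return total                                  # formula (2)
```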
Step 113, calculating the colour gradient of each pixel in the matrices of colour component a and colour component b, respectively, as follows:
The calculation of the colour gradient is similar to that of the brightness gradient, the difference being that the colour gradient feature is computed for the two colour components, i.e. the colour components a and b of the Lab colour space; the calculation differs from that of the brightness gradient in that the 3 chosen scales are r=5, r=10 and r=20, so that the sizes of the corresponding weight matrices and index map matrices are 11×11, 21×21 and 41×41, respectively; the colour gradients of the two colour components are computed with the same method as the brightness gradient, giving the final colour gradient of each pixel under consideration in the matrices of colour components a and b.
Step 114, the texture gradient of each pixel is calculated.It is specific as follows:
A, multi-dimension texture wave filter group set Filters is built(x,y)(nf, filter, r, θ), nfRepresent wave filter Number, filter represent the set of wave filter species, and r represents yardstick, and θ represents the direction chosen.It is specific as follows:
Training image is converted into gray level image, is designated as Igray(x, y), and to gray level image IgrayEach picture of (x, y) The gray component of vegetarian refreshments (x, y) is normalized;Three kinds of wave filters are chosen, respectively Gauss second order derviation wave filter (is designated as fil1< >) and its Hilbert transform after wave filter (be designated as fil2< >) (it is designated as around wave filter with center ring Gaussian_cs < >);From 8 directions and 3 yardstick structure multi-dimension texture filter sets, Filters is designated as(x,y) [nf, filter, r, θ], wherein, nfThe number of wave filter is represented, filter represents the set of wave filter species, and r represents yardstick, θ Represent the direction chosen;nf=51, filter=(filcs,fil1,fil2), r=5,10,20, θ=0 °, 22.5 °, 45 °, 67.5°、90°、112.5°、135°、157.5°;Multi-dimension texture filter set Filters(x,y)[nf,filter,r,θ].Such as Shown in formula 5,6,7:
The Gauss second order derviation wave filter of 83, direction yardsticks:
Wave filter after the Gauss second order derviation Hilbert transform of 83, direction yardsticks:
f2(x, y)=Hilbert (f1(x,y)) (6)
The center ring of 3 yardsticks is around wave filter:
Gaussian_cs < >=m_surround < >-m_center < > (7)
Wave filter group set Filters(x,y)[nf, filter, r, θ] in center ring there is no directionality around wave filter, be Around wave filter and the difference of center-filter.All it is Gauss second order derviation wave filter around wave filter and center-filter.It surround Wave filter, center-filter, Gauss second order derviation wave filter and its standard deviation sigma value corresponding to hilbert-transform filter are distinguished For 2 and
B, calculating the texture filter response vector of each pixel in the training image, i.e. Tex(x, y) = (fil1, fil2, fil3, …, fil_nf), as follows:
The grey image Igray(x, y) is convolved with the constructed multi-scale texture filter set Filters_(x,y)[nf, filter, r, θ] over the neighbourhood of the corresponding scale centred on pixel (x, y), giving the texture filter response vector of pixel (x, y). For scale r=5, for example, the convolution is performed over the 11×11 neighbourhood centred on the pixel, i.e. Igray(x, y)*Filters(nf, filter, r, θ) with nf=17, filter=(filcs, fil1, fil2), r=5 and θ = 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°, giving the texture filter response vector Tex(x, y) = (fil1, fil2, fil3, …, fil17) of pixel (x, y).
With the above method, the texture feature vectors of the neighbourhoods of the corresponding scales are computed for r=5, r=10 and r=20, centred on the pixel (x, y), giving the full texture filter response vector Tex(x, y) = (fil1, fil2, fil3, …, fil51) of the pixel.
C, building the texton histogram, as follows:
The texture filter response vectors of all pixels (x, y) in the training images are clustered with the K-means method; in the clustering, K=32 is taken as the initial value, giving 32 cluster centres; the texture filter response vectors corresponding to the 32 cluster centres are taken out as textons and used as the 32 bins of the texture feature statistical histogram, building the texton histogram, as shown in Fig. 6.
D, calculating the texture gradient of each pixel, as follows:
First, steps A–C of step 112 are used to obtain the neighbourhood gradient operators at the 3 scales. For a given scale, centred on a pixel (x, y) under consideration, each element of the neighbourhood gradient operator of that scale is multiplied with the corresponding texture filter response vector, giving the neighbourhood matrix group Neibor[< >] of the pixel; the straight line of the vertical direction (90°) is chosen as the dividing line, splitting the disk of the scale's neighbourhood into a left and a right half-disk, the left half-disk comprising sectors 0 to 7 and the right half-disk sectors 8 to 15; the elements of Neibor[< >] belonging to each half-disk form a texton histogram, as shown in Fig. 7; H1 denotes the histogram of the left half-disk region and H2 that of the right half-disk region, the histogram bins being those provided by step C; in the same way as sub-step 2 of step D of step 112, the final texture gradient of each pixel under consideration in the training image is obtained and denoted TextureGradient(x, y).
Further, in step 12 multi-instance learning is introduced into image saliency detection to obtain the saliency detection result of the test image, specifically including steps 121 and 122:
Step 121, using the brightness, colour and texture gradient features obtained with the method of step 11, and realizing the learning of the training set with the multi-instance learning EM-DD algorithm, to obtain the learned saliency detection model. The specific steps are as follows:
First, the training image is region-segmented with an over-segmentation method, the minimum number of pixels contained in each region being 200; each region is taken as a bag and randomly sampled, the sampled pixels in the region being taken as instances, and the corresponding brightness gradient and colour gradient feature vectors are extracted as the sampled-instance feature vectors; from the sampled-instance feature vectors, the classifier is trained with the multi-instance learning EM-DD algorithm, giving the learned saliency detection model;
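A sketch of this bag construction; SLIC superpixels stand in for the unspecified over-segmentation method (an assumption), while the 200-pixel minimum and the random instance sampling follow the text:

```python
import numpy as np
from skimage.segmentation import slic

def build_bags(rgb, feature_map, n_samples=20, seed=0):
    # feature_map: H x W x d array of per-pixel instance feature vectors.
    rng = np.random.default_rng(seed)
    regions = slic(rgb, n_segments=300)        # stand-in over-segmentation
    bags = []
    for label in np.unique(regions):
        ys, xs = np.nonzero(regions == label)
        if len(ys) < 200:                      # minimum region size
            continue
        pick = rng.choice(len(ys), size=min(n_samples, len(ys)), replace=False)
        bags.append(feature_map[ys[pick], xs[pick]])  # sampled instances
    return bags
```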
Step 122, substituting the test image into the learned saliency detection model to obtain the saliency detection result of the test image.
Each test image is preprocessed with the same process as step 11, giving the brightness gradient and colour gradient features; the test image is then region-segmented with the over-segmentation method, the minimum number of pixels contained in each region being 200; each region is taken as a bag and randomly sampled, the sampled pixels in each region being taken as instances, and the corresponding brightness gradient and colour gradient feature vectors are extracted as the sampled-instance feature vectors; with the learned saliency detection model obtained in step 121, the saliency of each bag containing salient instance feature vectors is obtained, giving the saliency detection result of the test image.
Further, step 2 specifically comprises the following steps:
Step 21, the saliency detection result of the image obtained in step 1 is taken as the input of the graph-cut algorithm; the weight function shown in formula (3) is built from the saliency labels of the bags and the instance feature vectors, and the optimized graph-cut cost function shown in formula (4) is obtained;
In formula (3), wij denotes the visual feature similarity of the regions corresponding to instance bags i and j; Salien(i) and Salien(j) denote the normalized saliency values of regions i and j, respectively; δ is a sensitivity parameter regulating the visual feature difference, with a value of 10 to 20; the weight of a region i with itself is 0; the similarity matrix W = {wij} is a symmetric matrix with zero diagonal, and wij ∈ [0, 1]; fi, fj denote the corresponding instance feature vectors of instance bags i and j, respectively, i.e. the brightness gradient, colour gradient and texture gradient features of the image are combined into the 3-dimensional mixed vector Mixvector_i = {BrightnessGradient_i, ColorGradient_i, TextureGradient_i}, and Sim(fi, fj) = ||Mixvector_i − Mixvector_j||²; in the graph-cut framework of formula (4), D is the N-dimensional diagonal matrix whose diagonal elements are d_ii = Σ_j w_ij; U = {U1, U2, …, Ui, …, Uj, …, UN} is the cut-state vector, each component Ui denoting the cut state of region i; the numerator of formula (4) represents the visual similarity between regions i and j, and the denominator represents the visual similarity within region i;
Step 22, using the agglomerative hierarchical clustering algorithm, the cut-state vector corresponding to the eigenvalue of the minimum of R(U) is solved, giving the optimal segmentation result of the image.
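A sketch of step 21 and a spectral stand-in for step 22: the affinity of formula (3) and the generalized eigenproblem (D − W)u = λ(W/2)u that relaxes R(U) of formula (4). The patent itself extracts the cut with the agglomerative hierarchical clustering of patent 201210257591.1; the eigenvector-plus-threshold route below is only an illustrative substitute:

```python
import numpy as np
from scipy.linalg import eig

def segment(salience, mix_vectors, delta=15.0):
    # salience: N normalized region saliency values; mix_vectors: N x 3
    # mixed feature vectors (brightness, colour, texture gradients).
    diff = mix_vectors[:, None, :] - mix_vectors[None, :, :]
    sim = np.sum(diff ** 2, axis=-1)                       # Sim(f_i, f_j)
    W = 0.5 * (salience[:, None] + salience[None, :]) * np.exp(-sim / delta ** 2)
    np.fill_diagonal(W, 0.0)                               # formula (3)
    D = np.diag(W.sum(axis=1))
    vals, vecs = eig(D - W, 0.5 * W)                       # relax formula (4)
    vals = np.where(np.isfinite(vals.real), vals.real, np.inf)
    u = vecs[:, np.argmin(vals)].real                      # cut-state vector
    return u > np.median(u)                                # two-way cut
```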
Here, the agglomerative hierarchical clustering algorithm refers to the method of steps 2 and 3 of the patent with application No. 201210257591.1.
Experimental verification
To verify the validity of the method of the invention, night expressway road image data collected by the line-scan CCD camera of a general road traffic information acquisition and detection system were taken as the research object. 200 road images containing vehicle targets and with typical night characteristics were chosen; 100 of them were used as training images to learn the low-level visual features of vehicles travelling on the expressway at night, and the method of the invention was used to segment the vehicle targets in the remaining 100 images. Part of the experimental results is shown in Fig. 2, which gives the segmentation results of the spectral segmentation algorithm based on multi-scale graph decomposition and of the method of the invention on the test images, described as follows:
In Fig. 2, subfigures (a-1) to (a-5) are the original images, subfigures (b-1) to (b-5) are the segmentation results of the spectral segmentation algorithm based on multi-scale graph decomposition, and subfigures (c-1) to (c-5) are those of the method of the invention. The experimental comparison shows that the spectral segmentation algorithm based on multi-scale graph decomposition can obtain relatively complete vehicle targets for vehicles with higher contrast against the road surface, such as the white vehicle in the middle of image (a-1) and the white vehicle in the middle of image (a-4), but for weak-contrast vehicle targets it essentially fails; the method of the invention, in contrast, segments most vehicle targets in night expressway road images, and its segmentation of weak-contrast vehicle targets is clearly better than that of the spectral segmentation algorithm based on multi-scale graph decomposition. This is because the method of the invention, combined with multi-instance learning, can quickly obtain the salient region labels in the image, and the instance feature vectors in each instance bag contain both the low-level visual features reflecting the target information and the mid- and high-level features of the target contour; the comprehensive character of the image is already considered at the very start of the coarsening, providing an accurate segmentation basis for subsequent processing, so that even when the transition between target and background boundary is slow and their difference minimal, i.e. when the contrast is weak, a good segmentation result can still be obtained.

Claims (9)

1. A vehicle target segmentation method under weak contrast, characterized by specifically comprising the following steps:
Step 1: a saliency model is built from training images using the method of multi-instance learning; the saliency model is then used to predict the bags and instances of a test image, yielding the saliency map of the test image; specifically including:
Step 11, preprocessing the training images and extracting image brightness gradient, colour gradient and texture gradient features;
Step 12, introducing multi-instance learning into image saliency detection to obtain the saliency detection result of the test image;
Step 2: the saliency of the test image is introduced into a graph-cut framework; the framework is optimized according to the instance feature vectors and the labels of the instance bags, and the suboptimal solution of the optimized graph cut is solved to obtain an accurate segmentation of the target;
Step 2 specifically comprises the following steps:
Step 21, the saliency detection result of the image obtained in step 1 is taken as the input of the graph-cut algorithm; the weight function shown in formula (3) is built from the saliency labels of the bags and the instance feature vectors, and the optimized graph-cut cost function shown in formula (4) is obtained:

$$w_{ij}=\begin{cases}\dfrac{1}{2}\left[Salien(i)+Salien(j)\right]\exp\left(-Sim(f_i,f_j)/\delta^2\right) & i\neq j\\[4pt] 0 & i=j\end{cases} \qquad (3)$$

$$R(U)=\frac{\sum_{i>j}w_{ij}(U_i-U_j)^2}{\sum_{i>j}w_{ij}U_iU_j}=\frac{U^{T}(D-W)U}{\frac{1}{2}U^{T}WU} \qquad (4)$$

In formula (3), wij denotes the visual feature similarity of the regions corresponding to instance bags i and j; Salien(i) and Salien(j) denote the normalized saliency values of regions i and j, respectively; δ is a sensitivity parameter regulating the visual feature difference, with a value of 10 to 20; the weight of a region i with itself is 0; the similarity matrix W = {wij} is a symmetric matrix with zero diagonal, and wij ∈ [0, 1]; fi, fj denote the corresponding instance feature vectors of instance bags i and j, respectively, i.e. the brightness gradient, colour gradient and texture gradient features of the image are combined into the 3-dimensional mixed vector Mixvector_i = {BrightnessGradient_i, ColorGradient_i, TextureGradient_i}, and Sim(fi, fj) = ||Mixvector_i − Mixvector_j||²; in the graph-cut framework of formula (4), D is the N-dimensional diagonal matrix whose diagonal elements are d_ii = Σ_j w_ij; U = {U1, U2, …, Ui, …, Uj, …, UN} is the cut-state vector, each component Ui denoting the cut state of region i; the numerator of formula (4) represents the visual similarity between regions i and j, and the denominator represents the visual similarity within region i;
Step 22, using the agglomerative hierarchical clustering algorithm, the cut-state vector corresponding to the eigenvalue of the minimum of R(U) is solved, giving the optimal segmentation result of the image.
2. The vehicle target segmentation method under weak contrast of claim 1, characterized in that in step 11 the training images are preprocessed and the brightness gradient, colour gradient and texture gradient features are extracted, specifically including steps 111 to 114:
Step 111, performing colour-space conversion and per-component quantization preprocessing on the training image to obtain the normalized luminance component L and colour components a, b;
Step 112, calculating the brightness gradient of each pixel in the matrix of the luminance component L;
Step 113, calculating the colour gradient of each pixel in the matrices of colour component a and colour component b, respectively;
Step 114, calculating the texture gradient of each pixel.
3. The vehicle target segmentation method under weak contrast of claim 2, characterized in that step 111 is as follows:
First, gamma correction is applied to the training image to realize a non-linear adjustment of its colour components, and the image is converted from the RGB colour space to the Lab colour space; the luminance component L and the two colour components a, b of the training image in the Lab colour space are then normalized, giving the normalized luminance component L and colour components a, b.
4. The vehicle target segmentation method under weak contrast of claim 2, characterized in that step 113 specifically includes steps A–D:
A, building the weight matrices Wights< > of the 3 scales;
B, building the index map matrices Slice_map< > of the 3 scales; the index map matrix Slice_map< > of each scale has the same dimensions as the weight matrix Wights< > of that scale, i.e. each Slice_map< > matrix is likewise a square matrix of 2r+1 rows and columns; 8 directions (0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°) are chosen to divide the matrix into 16 regions, the elements of each region taking the value of that region's number, 0 to 15;
C, multiplying element-wise each index map matrix Slice_map< > with the weight matrix Wights< > of the corresponding scale to obtain the matrix of that scale, i.e. the neighbourhood gradient operator;
D, using the neighbourhood gradient operator to calculate the colour gradient of each pixel under consideration in the matrices of colour component a and colour component b.
5. The vehicle target segmentation method under weak contrast of claim 4, characterized in that step A is as follows:
The weight matrices Wights< > of the 3 scales are built separately; each weight matrix Wights< > is a square matrix whose numbers of rows and columns both equal 2r+1; its elements are either 0 or 1, the elements equal to 1 being distributed within the disk of radius r centred on the central element (r+1, r+1) of the matrix, forming the inscribed circle of the square matrix, while the remaining elements are 0; the 3 scales are r=3, r=5 and r=10.
6. The vehicle target segmentation method under weak contrast of claim 4, characterized in that step D is as follows:
1. For a given scale, taking a pixel under consideration as the centre in the matrix of the luminance component L obtained in step 111, the neighbourhood gradient operator of that scale is multiplied point-wise with the luminance components within the pixel's neighbourhood, giving the matrix Neibor< > over the neighbourhood; the straight line of the vertical direction (90°) is chosen as the dividing line, splitting the disk of the neighbourhood gradient operator into a left and a right half-disk, the left half-disk comprising sectors 0 to 7 and the right half-disk sectors 8 to 15; the elements of Neibor< > belonging to each half-disk form a histogram, which is normalized; the two histograms are denoted Slice_hist1< > and Slice_hist2< >; H1 denotes the histogram of the left half-disk region and H2 that of the right half-disk region, and i is the bin index of the histograms, defined on [0, 24], i.e. the brightness range;
2. The difference between the two normalized histograms is computed with the chi-square distance of formula (1), giving the brightness gradient of the pixel in the vertical direction at that scale:

$$d_{\chi^2}(H_1,H_2)=\frac{1}{2}\sum_i\frac{(H_1(i)-H_2(i))^2}{H_1(i)+H_2(i)} \qquad (1)$$

After the brightness gradient in the vertical direction at one scale has been computed, the straight lines of the other directions are chosen in turn as dividing lines, giving the brightness gradients of the pixel in all other directions at that scale; the brightness gradients in all directions at the remaining scales are then computed in the same manner as in step D; once the brightness gradients of the pixel in all directions at all scales have been computed, the final brightness gradient of the pixel is obtained by formula (2):
f(x, y, r, n_ori; r = 3, 5, 10; n_ori = 1, 2, …, 8) → BrightnessGradient(x, y)   (2)
In the formula, f is a mapping function, (x, y) is any pixel under consideration, r denotes the chosen scale and n_ori the chosen direction; BrightnessGradient(x, y) is the final brightness gradient of pixel (x, y); the rule of f is to select, for each direction, the maximum brightness gradient value over the 3 scales as the brightness gradient in that direction, and to sum the brightness gradients over the 8 directions to obtain the final brightness gradient of pixel (x, y).
7. The vehicle target segmentation method under weak contrast of claim 2, characterized in that step 114 is as follows:
A, building the multi-scale texture filter bank set Filters_(x,y)(nf, filter, r, θ), where nf denotes the number of filters, filter the set of filter types, r the scale and θ the chosen direction;
B, calculating the texture filter response vector of each pixel in the training image, i.e. Tex(x, y) = (fil1, fil2, fil3, …, fil_nf), as follows:
The grey image Igray(x, y) is convolved with the constructed multi-scale texture filter set Filters_(x,y)[nf, filter, r, θ] over the neighbourhood of the corresponding scale centred on pixel (x, y), giving the texture filter response vector of pixel (x, y); for scale r=5, for example, the convolution is performed over the 11×11 neighbourhood centred on the pixel, i.e. Igray(x, y)*Filters(nf, filter, r, θ) with nf=17, filter=(filcs, fil1, fil2), r=5 and θ = 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°, giving the texture filter response vector Tex(x, y) = (fil1, fil2, fil3, …, fil17) of pixel (x, y);
With the above method, the texture feature vectors of the neighbourhoods of the corresponding scales are computed for r=5, r=10 and r=20, centred on the pixel (x, y), giving the full texture filter response vector Tex(x, y) = (fil1, fil2, fil3, …, fil51) of the pixel;
C, building the texton histogram, as follows:
The texture filter response vectors of all pixels (x, y) in the training images are clustered with the K-means method; in the clustering, K=32 is taken as the initial value, giving 32 cluster centres; the texture filter response vectors corresponding to the 32 cluster centres are taken out as textons and used as the 32 bins of the texture feature statistical histogram, building the texton histogram;
D, calculating the texture gradient of each pixel, as follows:
First, steps A–C of step 112 are used to obtain the neighbourhood gradient operators at the 3 scales; for a given scale, centred on a pixel (x, y) under consideration, each element of the neighbourhood gradient operator of that scale is multiplied with the corresponding texture filter response vector, giving the neighbourhood matrix group Neibor[< >] of the pixel; the straight line of the vertical direction (90°) is chosen as the dividing line, splitting the disk of the scale's neighbourhood into a left and a right half-disk, the left half-disk comprising sectors 0 to 7 and the right half-disk sectors 8 to 15; the elements of Neibor[< >] belonging to each half-disk form a texton histogram; H1 denotes the histogram of the left half-disk region and H2 that of the right half-disk region, the histogram bins being those provided by step C; in the same way as sub-step 2 of step D of step 112, the final texture gradient of each pixel under consideration in the training image is obtained and denoted TextureGradient(x, y).
8. the vehicle target dividing method under weak contrast as claimed in claim 7, it is characterised in that the step A is specific It is as follows:
Convert the training image into a gray-level image, denoted Igray(x, y), and normalize the gray component of each pixel (x, y) of Igray(x, y). Choose three kinds of filters: the Gaussian second-order derivative filter, the filter obtained from it by the Hilbert transform, and the center-surround filter. Build the multi-scale texture filter bank from 8 directions and 3 scales, denoted Filters(x,y)[nf, filter, r, θ], where nf denotes the number of filters, filter denotes the set of filter types, r denotes the scale, and θ denotes the chosen direction; nf = 51, filter = (filcs, fil1, fil2), r = 5, 10, 20, θ = 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°. The multi-scale texture filter bank Filters(x,y)[nf, filter, r, θ] is given by formulas 5, 6 and 7:
The Gaussian second-order derivative filter over the 8 directions and 3 scales:
f1(x, y) = d²/dy² [ (1/C) · exp(−y²/σ²) · exp(−x²/(l²σ²)) ]   (5)
The filter obtained by the Hilbert transform of the Gaussian second-order derivative filter, over the 8 directions and 3 scales:
f2(x, y)=Hilbert (f1(x,y)) (6)
The center-surround filter at the 3 scales:
Gaussian_cs⟨⟩ = m_surround⟨⟩ − m_center⟨⟩   (7)
The standard deviation σ values corresponding to the surround filter, the center filter, and the Gaussian second-order derivative filter and its Hilbert-transform filter are, respectively, 2 and …
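The following sketch builds the three filter types of formulas 5–7. It assumes the standard negative-exponent Gaussian form (the minus signs appear to have been lost from formula 5 in extraction), an elongation l = 3, and a 2:1 surround-to-center σ ratio; none of these constants are fixed by the claim as reproduced here, and the σ values themselves do not survive in this text.

```python
import numpy as np
from scipy.signal import hilbert

def gaussian_second_deriv(size, sigma, theta, elong=3.0):
    """Oriented Gaussian second-derivative kernel (formula 5), rotated
    to direction theta; 'elong' plays the role of l (assumed value)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates so the derivative is taken across direction theta.
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    yr = -xs * np.sin(theta) + ys * np.cos(theta)
    g = np.exp(-(yr ** 2) / sigma ** 2) * np.exp(-(xr ** 2) / (elong ** 2 * sigma ** 2))
    k = (4 * yr ** 2 / sigma ** 4 - 2 / sigma ** 2) * g  # d^2/dy^2 of the Gaussian
    return k / np.abs(k).sum()                           # L1 normalization (the 1/C)

def hilbert_pair(kernel):
    """Formula 6: Hilbert transform of f1, taken along the kernel rows."""
    return np.imag(hilbert(kernel, axis=1))

def center_surround(size, sigma):
    """Formula 7: surround Gaussian minus center Gaussian (2:1 sigma
    ratio is an assumption; the claim's sigma values are garbled)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    r2 = xs ** 2 + ys ** 2
    center = np.exp(-r2 / (2 * sigma ** 2))
    surround = np.exp(-r2 / (2 * (2 * sigma) ** 2))
    return surround / surround.sum() - center / center.sum()
```

Generating the kernels at r = 5, 10, 20 and θ = 0°, 22.5°, …, 157.5° as per claim 8 gives the 51-kernel list consumed by the texture_responses sketch above.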
9. The vehicle target segmentation method under weak contrast according to claim 1, characterized in that in step 12 multi-instance learning is introduced into saliency detection to obtain the saliency detection result of the test image, specifically comprising step 121 and step 122:
Step 121: using the brightness, color and texture gradient features obtained by the method described in step 11, learn the training set with the multi-instance learning EMDD algorithm to obtain a trained saliency detection model;
Step 122: substitute the test image into the trained saliency detection model to obtain the saliency detection result of the test image.
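Since claim 9 names the EMDD multi-instance learning algorithm, a heavily simplified sketch of its core loop follows: alternate between picking each bag's most likely instance (E-step) and optimizing a concept point plus per-feature scales against the bag labels (M-step). This is a generic textbook rendering under assumed modeling choices (Gaussian-style instance probability, L-BFGS-B optimizer), not the patent's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def emdd(bags, labels, h0, iters=10):
    """Simplified EMDD: find a concept point h and per-feature scales s
    that best explain bag labels under a most-likely-instance model.

    bags:   list of (n_i, d) arrays of instance feature vectors
            (brightness, color, texture gradient in this setting).
    labels: float array of 0/1 bag labels.  h0: (d,) initial concept.
    """
    d = h0.size
    params = np.concatenate([h0, np.ones(d)])  # concept point + scales

    def instance_probs(h, s, inst):
        # Instance probability: exp(-sum_k s_k^2 (x_k - h_k)^2).
        return np.exp(-np.sum((s ** 2) * (inst - h) ** 2, axis=1))

    for _ in range(iters):
        h, s = params[:d], params[d:]
        # E-step: the most likely instance in each bag represents the bag.
        reps = np.array([bag[np.argmax(instance_probs(h, s, bag))]
                         for bag in bags])

        # M-step: maximize the bag-label likelihood over (h, s).
        def nll(p):
            hh, ss = p[:d], p[d:]
            probs = np.exp(-np.sum((ss ** 2) * (reps - hh) ** 2, axis=1))
            probs = np.clip(probs, 1e-9, 1 - 1e-9)
            return -np.sum(labels * np.log(probs)
                           + (1 - labels) * np.log(1 - probs))

        params = minimize(nll, params, method="L-BFGS-B").x
    return params[:d], params[d:]
```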
CN201510374899.8A 2015-06-30 2015-06-30 A kind of vehicle target dividing method under weak contrast Active CN105005989B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510374899.8A CN105005989B (en) 2015-06-30 2015-06-30 A kind of vehicle target dividing method under weak contrast

Publications (2)

Publication Number Publication Date
CN105005989A CN105005989A (en) 2015-10-28
CN105005989B true CN105005989B (en) 2018-02-13

Family

ID=54378646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510374899.8A Active CN105005989B (en) 2015-06-30 2015-06-30 A kind of vehicle target dividing method under weak contrast

Country Status (1)

Country Link
CN (1) CN105005989B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107871321B (en) * 2016-09-23 2021-08-27 南开大学 Image segmentation method and device
CN106600550B (en) * 2016-11-29 2020-08-11 深圳开立生物医疗科技股份有限公司 Ultrasonic image processing method and system
CN108625844B (en) * 2017-03-17 2024-06-25 中国石油化工集团有限公司 Gamma calibration and testing device
CN108090511B (en) * 2017-12-15 2020-09-01 泰康保险集团股份有限公司 Image classification method and device, electronic equipment and readable storage medium
CN109961637A (en) * 2017-12-25 2019-07-02 大连楼兰科技股份有限公司 Vehicle detection apparatus and system based on more subgraphs fusion and significance analysis
CN109960984A (en) * 2017-12-25 2019-07-02 大连楼兰科技股份有限公司 Vehicle checking method based on contrast and significance analysis
CN109241865B (en) * 2018-08-14 2022-05-31 长安大学 Vehicle detection segmentation algorithm under weak contrast traffic scene
CN110866460B (en) * 2019-10-28 2020-11-27 衢州学院 Method and device for detecting specific target area in complex scene video
CN112284287B (en) * 2020-09-24 2022-02-11 哈尔滨工业大学 Stereoscopic vision three-dimensional displacement measurement method based on structural surface gray scale characteristics
CN113239964B (en) * 2021-04-13 2024-03-01 联合汽车电子有限公司 Method, device, equipment and storage medium for processing vehicle data
CN113688670A (en) * 2021-07-14 2021-11-23 南京四维向量科技有限公司 Method for monitoring street lamp brightness based on image recognition technology

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7986827B2 (en) * 2006-02-07 2011-07-26 Siemens Medical Solutions Usa, Inc. System and method for multiple instance learning for computer aided detection

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509084A (en) * 2011-11-18 2012-06-20 中国科学院自动化研究所 Multi-examples-learning-based method for identifying horror video scene
CN102831600A (en) * 2012-07-24 2012-12-19 长安大学 Image layer cutting method based on weighting cut combination

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on Multi-Feature Image Contour Detection Algorithms; Xu Ke; China Master's Theses Full-text Database, Information Science and Technology; 2015-01-15; Section 3 *
Image Saliency Analysis Based on Multiple-Instance Learning; Xie Yantao; China Master's Theses Full-text Database, Information Science and Technology; 2015-02-15; Section 3.2 *
Research on Image Segmentation Methods Based on Visual Saliency Models and Graph Cuts; Liu Hongcai; China Master's Theses Full-text Database, Information Science and Technology; 2014-06-15; Section 3.1, Figure 3.1 *

Also Published As

Publication number Publication date
CN105005989A (en) 2015-10-28

Similar Documents

Publication Publication Date Title
CN105005989B (en) A kind of vehicle target dividing method under weak contrast
CN105160309B (en) Three lanes detection method based on morphological image segmentation and region growing
CN103605977B (en) Extracting method of lane line and device thereof
CN102509098B (en) Fisheye image vehicle identification method
CN107729801A (en) A kind of vehicle color identifying system based on multitask depth convolutional neural networks
CN103034863B (en) The remote sensing image road acquisition methods of a kind of syncaryon Fisher and multiple dimensioned extraction
CN103824081B (en) Method for detecting rapid robustness traffic signs on outdoor bad illumination condition
CN107103317A (en) Fuzzy license plate image recognition algorithm based on image co-registration and blind deconvolution
CN105354568A (en) Convolutional neural network based vehicle logo identification method
Zhang et al. Study on traffic sign recognition by optimized Lenet-5 algorithm
CN105069774B (en) The Target Segmentation method of optimization is cut based on multi-instance learning and figure
CN103942546B (en) Traffic marking identifying system and method are oriented in a kind of urban environment
CN106650731A (en) Robust license plate and logo recognition method
CN107491756B (en) Lane direction information recognition methods based on traffic sign and surface mark
CN104050447A (en) Traffic light identification method and device
Le et al. Real time traffic sign detection using color and shape-based features
CN104573685A (en) Natural scene text detecting method based on extraction of linear structures
CN105205489A (en) License plate detection method based on color texture analyzer and machine learning
CN104835175A (en) Visual attention mechanism-based method for detecting target in nuclear environment
CN104200228A (en) Recognizing method and system for safety belt
CN107704853A (en) A kind of recognition methods of the traffic lights based on multi-categorizer
CN106529461A (en) Vehicle model identifying algorithm based on integral characteristic channel and SVM training device
CN107330365A (en) Traffic sign recognition method based on maximum stable extremal region and SVM
CN104282008A (en) Method for performing texture segmentation on image and device thereof
CN109993806A (en) A kind of color identification method, device and electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Liu Zhanwen

Inventor after: Chen Ting

Inventor after: Lin Shan

Inventor after: Hao Ruru

Inventor after: Zhou Zhou

Inventor after: Zhao Xiangmo

Inventor after: Shen Chao

Inventor after: Duan Zongtao

Inventor after: Gao Tao

Inventor after: Fan Xing

Inventor after: Wang Runmin

Inventor after: Xu Jiang

Inventor after: Zhou Jingmei

Inventor before: Liu Zhanwen

Inventor before: Lin Shan

Inventor before: Kang Junmin

Inventor before: Wang Jiaojiao

Inventor before: Xu Jiang

Inventor before: Zhao Xiangmo

Inventor before: Fang Jianwu

Inventor before: Duan Zongtao

Inventor before: Wang Runmin

Inventor before: Hao Ruru

Inventor before: Qi Xiuzhen

Inventor before: Zhou Zhou

Inventor before: Zhou Jingmei

GR01 Patent grant
GR01 Patent grant