CN104778721A - Distance measuring method of significant target in binocular image - Google Patents

Distance measuring method of significant target in binocular image

Info

Publication number
CN104778721A
CN104778721A (application CN201510233157.3A)
Authority
CN
China
Prior art keywords
point
image
formula
key point
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510233157.3A
Other languages
Chinese (zh)
Other versions
CN104778721B (en)
Inventor
王进祥
杜奥博
石金进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xiaopeng Automobile Technology Co Ltd
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN201510233157.3A priority Critical patent/CN104778721B/en
Publication of CN104778721A publication Critical patent/CN104778721A/en
Application granted granted Critical
Publication of CN104778721B publication Critical patent/CN104778721B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a distance measuring method for a salient target in a binocular image, and aims to solve the problem that existing target distance measuring methods are slow in processing speed. The method includes: step 1, salient-feature extraction is performed on the binocular image with a visual saliency model, and a seed point and a background point are marked; step 2, a weighted graph is built for the binocular image; step 3, the salient target in the binocular image is segmented by a random-walk image segmentation algorithm using the seed point and background point from step 1 and the weighted graph from step 2; step 4, key-point matching is performed on the salient target with the SIFT algorithm; step 5, the disparity matrix K' obtained in step 4 is substituted into a binocular ranging model to compute the salient-target distance. The method can be applied to distance measurement of salient targets in the forward-view image while an intelligent vehicle is driving.

Description

Distance measurement method for a salient target in a binocular image
Technical field
The present invention relates to a method for measuring the distance of a target in a binocular image, and in particular to a method for measuring the distance of a salient target in a binocular image, belonging to the technical field of image processing.
Background technology
Range information is mainly used in control systems, for example to provide analysis for traffic-image processing in automobiles. In intelligent-vehicle research, traditional target measurement methods use radar or lasers of specific wavelengths to range targets. Compared with radar and laser, a vision sensor has a price advantage and a wider viewing angle; moreover, while measuring the target distance, a vision sensor can also judge the particular content of the target.
However, current traffic-image information is cluttered, and traditional target-ranging algorithms struggle to obtain the desired result in complex images: because they cannot find the salient target in the image and instead detect globally, processing is slow and much irrelevant data is introduced, so the algorithms cannot meet application requirements.
Summary of the invention
The object of the invention is to propose a distance measurement method for a salient target in a binocular image, to solve the slow processing speed of existing target-ranging methods.
The distance measurement method for a salient target in a binocular image of the present invention is realized by the following steps:
Step 1: use a visual saliency model to extract salient features from the binocular image, and mark a seed point and background points. Specifically:
Step 1.1: preprocess. First perform edge detection on the binocular image to generate the edge map of the binocular image. Step 1.2: use the visual saliency model to extract salient features from the binocular image and generate a salient-feature map;
Step 1.3: according to the salient-feature map, find the pixel with the maximum gray value in the map and label it the seed point; then traverse the pixels in a 25 × 25 window centered on the seed point, and label as background points the pixels whose gray value is less than 0.1 and that are farthest from the seed point;
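For illustration only (not language from the patent), a minimal Python sketch of step 1.3; the function name is hypothetical and the salient-feature map is assumed to be a float image normalized to [0, 1]:

```python
import numpy as np

def mark_seed_and_background(saliency, win=25, bg_thresh=0.1):
    """Mark the seed point and a background point on a saliency map.

    saliency: 2-D float array assumed normalized to [0, 1].
    Returns (seed_yx, background_yx) pixel coordinates.
    """
    # Seed point: the pixel with the maximum saliency (gray) value.
    seed = np.unravel_index(np.argmax(saliency), saliency.shape)

    # Traverse the win x win window centered on the seed point.
    half = win // 2
    y0, y1 = max(seed[0] - half, 0), min(seed[0] + half + 1, saliency.shape[0])
    x0, x1 = max(seed[1] - half, 0), min(seed[1] + half + 1, saliency.shape[1])

    best, background = -1.0, None
    for y in range(y0, y1):
        for x in range(x0, x1):
            # Candidate background pixels: gray value below the 0.1 threshold.
            if saliency[y, x] < bg_thresh:
                d = (y - seed[0]) ** 2 + (x - seed[1]) ** 2
                if d > best:   # keep the candidate farthest from the seed
                    best, background = d, (y, x)
    return seed, background
```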
Step 2: build a weighted graph for the binocular image;
A classical Gaussian weight function is used to build the weighted graph for the binocular image:
$W_{ij} = e^{-\beta (g_i - g_j)^2}$   (1)
where W_ij is the weight of the edge between vertex i and vertex j, g_i and g_j are the brightness of vertices i and j, β is a free parameter, and e is the base of the natural logarithm;
The Laplacian matrix L of the weighted graph is obtained by the following formula:
$L_{ij} = \begin{cases} d_i & i = j \\ -W_{ij} & \text{vertices } i \text{ and } j \text{ adjacent} \\ 0 & \text{otherwise} \end{cases}$   (2)
where L_ij is the element of the Laplacian matrix L for vertices i and j, and d_i is the sum of the weights between vertex i and its surrounding points, d_i = Σ_j W_ij;
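As a compact illustration of formulas (1) and (2) (not the patent's reference code), a sketch that builds the Gaussian-weighted graph and its Laplacian for a grayscale image with 4-connected pixels, assuming SciPy sparse matrices; the value of the free parameter β is an arbitrary placeholder:

```python
import numpy as np
import scipy.sparse as sp

def graph_laplacian(gray, beta=90.0):
    """Gaussian-weighted graph Laplacian of an image, eqs. (1)-(2).

    gray: 2-D float array of pixel brightness; vertices are pixels in
    row-major order, edges connect 4-neighbors.
    """
    h, w = gray.shape
    idx = np.arange(h * w).reshape(h, w)
    flat = gray.ravel()
    rows, cols, vals = [], [], []
    for i, j in [(idx[:, :-1], idx[:, 1:]),    # horizontal edges
                 (idx[:-1, :], idx[1:, :])]:   # vertical edges
        gi, gj = flat[i.ravel()], flat[j.ravel()]
        wgt = np.exp(-beta * (gi - gj) ** 2)   # eq. (1)
        rows += [i.ravel(), j.ravel()]
        cols += [j.ravel(), i.ravel()]
        vals += [wgt, wgt]
    W = sp.csr_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))),
                      shape=(h * w, h * w))
    D = sp.diags(np.asarray(W.sum(axis=1)).ravel())  # d_i = sum_j W_ij
    return D - W                                     # eq. (2)
```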
Step 3: use the seed point and background points from step 1 and the weighted graph from step 2 to segment the salient target out of the binocular image with a random-walk image segmentation algorithm;
Step 3.1: according to the seed point and background points marked in step 1, divide the pixels of the binocular image into two sets, the marked set V_M and the unmarked set V_U; the Laplacian matrix L is reordered according to V_M and V_U, the marked points first and then the unmarked points. L is thus divided into four blocks L_M, L_U, B, and B^T, and the Laplacian matrix is expressed as:
$L = \begin{bmatrix} L_M & B \\ B^T & L_U \end{bmatrix}$   (3)
where L_M is the Laplacian block from marked points to marked points, L_U the block from unmarked points to unmarked points, and B and B^T the blocks from marked to unmarked points and from unmarked to marked points, respectively;
Step 3.2: solve the combinatorial Dirichlet integral D[x] from the Laplacian matrix and the marked points;
The combinatorial Dirichlet integral is:
$D[x] = \frac{1}{2} \sum_{ij} w_{ij} (x_i - x_j)^2 = \frac{1}{2} x^T L x$   (4)
where x is the matrix of probabilities that the vertices of the weighted graph reach a marked point, and x_i and x_j are the probabilities that vertices i and j reach a marked point;
According to the marked set V_M and the unmarked set V_U, x is divided into two parts x_M and x_U, where x_M is the probability matrix corresponding to V_M and x_U the one corresponding to V_U; formula (4) decomposes into:
$D[x_U] = \frac{1}{2} \begin{bmatrix} x_M^T & x_U^T \end{bmatrix} \begin{bmatrix} L_M & B \\ B^T & L_U \end{bmatrix} \begin{bmatrix} x_M \\ x_U \end{bmatrix} = \frac{1}{2} \left( x_M^T L_M x_M + 2 x_U^T B^T x_M + x_U^T L_U x_U \right)$   (5)
For a marked point s, define the indicator vector m^s: for any vertex i, m_i^s = 1 if i is s, and m_i^s = 0 otherwise. Differentiating D[x_U] with respect to x_U, the minimizing solution of formula (5) gives the Dirichlet probability values of the marked point s:
$L_U x^s = -B^T m^s$   (6)
where x_i^s denotes the probability that vertex i reaches the marked point s first;
According to the x_i^s obtained from the combinatorial Dirichlet integral, perform threshold segmentation according to formula (7) to generate the segmentation map:
$s_i = \begin{cases} 1 & x_i^s \geq 0.5 \\ 0 & \text{otherwise} \end{cases}$   (7)
where s_i is the pixel value at the position corresponding to vertex i in the segmentation map; pixels of value 1 in the segmentation map represent the salient target in the image, and pixels of value 0 the background;
Step 3.3: multiply the segmentation map pixel-wise with the original image to generate the target map, i.e. extract the segmented salient target, by the formula:
$t_i = s_i \cdot I_i$   (8)
where t_i is the gray value of a vertex i of the target map T, and I_i is the gray value at the corresponding position i of the input image I(σ);
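Tying formulas (3)–(8) together, a hedged sketch of the random-walk segmentation step in the style of Grady's random walker, reusing the graph_laplacian helper sketched above; the 0.5 threshold mirrors formula (7) as reconstructed:

```python
import numpy as np
import scipy.sparse.linalg as spla

def random_walk_segment(gray, seed, background, beta=90.0):
    """Two-label random-walker segmentation, eqs. (3)-(7).

    seed, background: (y, x) coordinates marked in step 1.3.
    Returns a binary segmentation map s with the shape of gray.
    """
    h, w = gray.shape
    L = graph_laplacian(gray, beta).tocsr()
    marked = np.array([seed[0] * w + seed[1],
                       background[0] * w + background[1]])
    unmarked = np.setdiff1d(np.arange(h * w), marked)

    L_U = L[unmarked][:, unmarked]        # block partition as in eq. (3)
    B_T = L[unmarked][:, marked]          # this block is B^T
    m_s = np.array([1.0, 0.0])            # indicator of the seed label, m^s

    # Solve eq. (6): L_U x^s = -B^T m^s.
    x_s = spla.spsolve(L_U.tocsc(), -(B_T @ m_s))

    prob = np.empty(h * w)
    prob[marked] = m_s
    prob[unmarked] = x_s
    return (prob.reshape(h, w) >= 0.5).astype(np.uint8)   # eq. (7)
```

The target map of formula (8) is then simply `gray * random_walk_segment(gray, seed, background)`.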
Step 4: perform key-point matching on the salient targets with the SIFT algorithm;
Step 4.1: build a Gaussian pyramid from the target map, and take pairwise differences of the filtered images to obtain DOG (difference-of-Gaussians) images. A DOG image is defined as D(x, y, σ):
$D(x, y, \sigma) = (G(x, y, k\sigma) - G(x, y, \sigma)) * T(x, y) = C(x, y, k\sigma) - C(x, y, \sigma)$   (9)
where $G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-((x - p/2)^2 + (y - q/2)^2)/(2\sigma^2)}$ is a variable-scale Gaussian, p and q are the dimensions of the Gaussian template, (x, y) is the pixel position in the Gaussian-pyramid image, σ is the scale-space factor of the image, k denotes a specific scale value, and C(x, y, σ) is the convolution of G(x, y, σ) with the target map T(x, y), i.e. C(x, y, σ) = G(x, y, σ) * T(x, y);
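An illustrative sketch of formula (9) for one pyramid octave, assuming scipy.ndimage's Gaussian filter stands in for the convolution G(x, y, σ) * T(x, y); σ = 1.6 and k = √2 are conventional SIFT choices, not values quoted by the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_octave(target, sigma=1.6, k=2 ** 0.5, levels=5):
    """One octave of difference-of-Gaussians images, eq. (9).

    target: 2-D float target map T(x, y) from step 3.3.
    Returns (gaussians, dogs): C(x, y, k^i sigma) and their differences.
    """
    gaussians = [gaussian_filter(target, sigma * k ** i)   # C(x, y, k^i sigma)
                 for i in range(levels)]
    dogs = [g2 - g1                                        # C(k sigma) - C(sigma)
            for g1, g2 in zip(gaussians, gaussians[1:])]
    return gaussians, dogs
```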
Step 4.2: find extreme points in adjacent DOG images, determine the position and scale of each extreme point as a key point by fitting a three-dimensional quadratic function, and perform stability detection on the key points using the Hessian matrix to eliminate edge responses, specifically as follows:
(1) Take the Taylor expansion of the scale-space DOG function to obtain its curve D(X):
$D(X) = D + \frac{\partial D^T}{\partial X} X + \frac{1}{2} X^T \frac{\partial^2 D}{\partial X^2} X$   (10)
where X = (x, y, σ)^T and D is the value of the DOG function at the expansion point; differentiating formula (10) and setting the derivative to zero gives the offset of the extreme point, formula (11):
$\hat{X} = -\left(\frac{\partial^2 D}{\partial X^2}\right)^{-1} \frac{\partial D}{\partial X}$   (11)
To remove low-contrast extreme points, substitute formula (11) into formula (10) to obtain formula (12):
$D(\hat{X}) = D + \frac{1}{2} \frac{\partial D^T}{\partial X} \hat{X}$   (12)
If the value of formula (12) is greater than 0.03, the extreme point is retained and its exact position and scale are obtained; otherwise it is discarded;
(2) Unstable key points are eliminated by screening with the Hessian matrix at each key point: the curvature is computed from the ratio of the eigenvalues of the Hessian matrix, and edge points are judged by the curvature of the key-point neighborhood. The curvature ratio threshold is set to 10: key points with a ratio greater than 10 are deleted, the rest are retained, and what remains are the stable key points;
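A sketch of the two stability screens of step 4.2, under the common SIFT convention that the edge test bounds trace²/determinant of the 2 × 2 spatial Hessian (the contrast threshold 0.03 and curvature ratio 10 are the values quoted above; the sub-pixel refinement of formula (11) is omitted for brevity):

```python
import numpy as np

def is_stable_keypoint(dog, y, x, contrast_thresh=0.03, r=10.0):
    """Contrast and edge-response screens for a DOG extremum at (y, x)."""
    # Contrast test: |D| at the extremum must exceed the threshold.
    if abs(dog[y, x]) <= contrast_thresh:
        return False

    # 2x2 spatial Hessian from finite differences.
    dxx = dog[y, x + 1] + dog[y, x - 1] - 2 * dog[y, x]
    dyy = dog[y + 1, x] + dog[y - 1, x] - 2 * dog[y, x]
    dxy = (dog[y + 1, x + 1] - dog[y + 1, x - 1]
           - dog[y - 1, x + 1] + dog[y - 1, x - 1]) / 4.0

    tr, det = dxx + dyy, dxx * dyy - dxy ** 2
    if det <= 0:
        return False    # curvatures of opposite sign: edge-like, reject
    # tr^2/det grows with the eigenvalue ratio; bound it by the ratio r = 10.
    return tr ** 2 / det < (r + 1) ** 2 / r
```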
Step 4.3: use the pixels in the 16 × 16 window of each key point's neighborhood to assign a direction parameter to the key point;
For a key point detected in a DOG image, the gradient magnitude and direction are computed as:
$m(x, y) = \sqrt{(C(x+1, y) - C(x-1, y))^2 + (C(x, y+1) - C(x, y-1))^2}$
$\theta(x, y) = \tan^{-1}\left((C(x, y+1) - C(x, y-1)) / (C(x+1, y) - C(x-1, y))\right)$   (13)
where C is the scale space in which the key point lies, m is the gradient magnitude of the key point, and θ is the gradient direction of the point. A 16 × 16 neighborhood is assigned around the key point, the gradient magnitude and direction of the surrounding pixels are obtained, and a histogram is used to accumulate the gradients of the points in this neighborhood. The abscissa of the histogram is direction: 360 degrees are divided into 36 bins, each bin covering 10 degrees. The ordinate is gradient magnitude: the magnitudes of the points falling into a given direction bin are summed, and the sum is the height of that bin. The principal direction is defined as the direction of the bin with the maximum gradient magnitude hm; bins whose magnitude exceeds 0.8·hm are kept as auxiliary directions, to strengthen the stability of matching;
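A minimal sketch of the 36-bin orientation histogram of step 4.3 (the 16 × 16 window and the 0.8·hm auxiliary-direction rule as stated above; border handling is simplified and Gaussian weighting of the samples is omitted):

```python
import numpy as np

def keypoint_orientations(C, y, x, win=16, n_bins=36, aux_ratio=0.8):
    """Principal and auxiliary orientations from a win x win neighborhood.

    C: 2-D Gaussian-smoothed image at the key point's scale.
    Returns orientations in degrees, principal direction first.
    """
    half, bin_width = win // 2, 360 // n_bins
    hist = np.zeros(n_bins)
    for i in range(max(y - half, 1), min(y + half, C.shape[0] - 1)):
        for j in range(max(x - half, 1), min(x + half, C.shape[1] - 1)):
            dx = C[i, j + 1] - C[i, j - 1]
            dy = C[i + 1, j] - C[i - 1, j]
            mag = np.hypot(dx, dy)                        # eq. (13), magnitude
            ang = np.degrees(np.arctan2(dy, dx)) % 360.0  # eq. (13), direction
            hist[int(ang // bin_width) % n_bins] += mag

    hm = hist.max()
    peaks = [b for b in range(n_bins) if hist[b] >= aux_ratio * hm]
    peaks.sort(key=lambda b: -hist[b])     # principal direction first
    return [b * bin_width + bin_width / 2 for b in peaks]  # bin centers
```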
Step 4.4: build a descriptor to state the local feature information of each key point.
First rotate the coordinates around the key point to the key point's direction;
Then choose the 16 × 16 window around the key point and divide the neighborhood into 16 small 4 × 4 windows. In each 4 × 4 window compute the magnitude and direction of the corresponding gradients, and accumulate the gradient information of each small window into an 8-bin histogram. The descriptor is computed over the 16 × 16 window around the key point by Gaussian weighting as follows:
$h = m_g(a + x, b + y) \cdot e^{-\frac{(x')^2 + (y')^2}{2 (0.5 d)^2}}$   (14)
where h is the descriptor, (a, b) is the key point's position in the Gaussian-pyramid image, m_g is the gradient magnitude of the key point in the histogram principal direction of step 4.3, d = 16 is the side length of the window, (x, y) is the pixel position in the Gaussian-pyramid image, and (x', y') are the new coordinates of a pixel in the neighborhood after rotating the coordinates to the key point's direction, computed as:
$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta_g & -\sin\theta_g \\ \sin\theta_g & \cos\theta_g \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$   (15)
where θ_g is the gradient direction of the key point;
Computing over the 16 × 16 window yields a 128-dimensional feature vector for the key point, denoted H = (h_1, h_2, h_3, …, h_128). The feature vector is normalized; after normalization it is denoted L_g, with the normalization formula:
$l_i = h_i \Big/ \sum_{j=1}^{128} h_j, \quad i = 1, 2, 3, \ldots, 128$   (16)
where L_g = (l_1, l_2, …, l_i, …, l_128) is the normalized feature vector of the key point and l_i is one of its normalized components;
The Euclidean distance between key-point feature vectors is adopted as the decision metric for key-point similarity in the binocular image; the key points in the two images are matched, and the pixel coordinates of each matched pair are kept as one group of key information;
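A short sketch of the Euclidean-distance matching step; the brute-force nearest-neighbor search and the acceptance threshold are illustrative assumptions, since the patent text specifies only the distance metric:

```python
import numpy as np

def match_keypoints(desc_left, desc_right, max_dist=0.8):
    """Match key-point descriptors between the two binocular views.

    desc_left, desc_right: (n, 128) arrays of normalized descriptors L_g.
    Returns a list of (left_index, right_index) matched pairs.
    """
    # Pairwise Euclidean distances between all descriptor pairs.
    d = np.linalg.norm(desc_left[:, None, :] - desc_right[None, :, :], axis=2)
    matches = []
    for i in range(d.shape[0]):
        j = int(np.argmin(d[i]))       # nearest right-image descriptor
        if d[i, j] < max_dist:         # illustrative acceptance threshold
            matches.append((i, j))
    return matches
```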
Step 4.5: screen the generated key-point matches;
Obtain the horizontal coordinate disparity of each matched pair of key points and generate the disparity matrix, defined as K_n = {k_1, k_2, …, k_n}, where n is the number of matched pairs and k_1, k_2, …, k_n are the disparities of the individual matched points;
Obtain the median k_m of the disparity matrix and form the reference disparity matrix, denoted K_n':
$K_n' = \{k_1 - k_m, k_2 - k_m, \ldots, k_n - k_m\}$   (17)
Set the disparity threshold to 3 and delete the disparities corresponding to entries of K_n' that exceed the threshold, finally obtaining the screened matrix K', where k_1', k_2', …, k_{n'}' are the disparities of the correct matches after screening and n' is the number of finally correct matched pairs:
$K' = \{k_1', k_2', \ldots, k_{n'}'\}$   (18)
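A sketch of the median-based screening of step 4.5, with the threshold 3 as stated; the absolute value of the deviation is used, which is the natural reading of "greater than the threshold" for deviations that may be negative:

```python
import numpy as np

def screen_disparities(disparities, thresh=3.0):
    """Keep matches whose disparity is close to the median, eqs. (17)-(18).

    disparities: 1-D array K_n of per-match horizontal disparities.
    Returns the screened disparity array K'.
    """
    k = np.asarray(disparities, dtype=float)
    k_m = np.median(k)                   # median of the disparity matrix
    reference = k - k_m                  # eq. (17): reference matrix K_n'
    keep = np.abs(reference) <= thresh   # delete deviations greater than 3
    return k[keep]                       # eq. (18): K'
```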
Step 5: substitute the disparity matrix K' obtained in step 4 into the binocular ranging model to obtain the salient-target distance;
The two identical imaging systems are separated horizontally by a baseline J, both optical axes are parallel to the horizontal plane, and the image planes are parallel to the vertical plane;
Suppose a target point M(X, Y, Z) in the scene, with left and right imaging points Pl(x_1, y_1) and Pr(x_2, y_2), where x_1, y_1 and x_2, y_2 are the coordinates of Pl and Pr in the imaging plane. In the binocular model the disparity is defined as k = |pl − pr| = |x_2 − x_1|, and the range formula is obtained from the similar-triangle relation, X, Y, Z being the coordinates along the horizontal, vertical, and longitudinal axes of the spatial coordinate system:
$z = \frac{Jf}{k} = \frac{Jf}{|x_2 - x_1| \, dx'}$   (19)
where dx' is the physical width of each pixel on the imaging sensor along the horizontal axis, f is the focal length of the imaging system, and z is the distance from the target point M to the line joining the two imaging centers. Substituting the disparity matrix obtained in step 4 into formula (19) and using the physical parameters of the binocular model gives the corresponding distance matrix Z' = {z_1, z_2, …, z_{n'}}, where z_1, z_2, …, z_{n'} are the salient-target distances obtained from the individual match disparities; finally, the mean value of the distance matrix is the distance Z_f of the salient target in the binocular image:
$Z_f = \frac{1}{n'} \sum_{k=1}^{n'} z_k$   (20).
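Finally, a sketch tying formulas (19) and (20) together; the baseline J, focal length f, and pixel pitch dx' below are hypothetical calibration values, not numbers from the patent:

```python
import numpy as np

def target_distance(screened_disparities, J=0.12, f=0.008, dx=6e-6):
    """Mean salient-target distance, eqs. (19)-(20).

    screened_disparities: array K' in pixels.
    J: baseline (m), f: focal length (m), dx: pixel pitch (m) --
    illustrative calibration values only.
    """
    k = np.asarray(screened_disparities, dtype=float)
    z = J * f / (k * dx)       # eq. (19) applied per matched pair
    return float(z.mean())     # eq. (20): Z_f
```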
The beneficial effects of the invention are as follows:
1. The invention simulates the human visual system to extract the regions of interest to the human eye; the salient targets extracted by the algorithm are essentially consistent with human detection results, so the invention can automatically identify salient targets just as the human eye does.
2. The invention completes salient-target distance measurement automatically, without manual selection of the salient target.
3. The invention matches the same target in both views, which ensures that the disparity results of the key-point matches are close and that wrong matches can be effectively screened out; the matching accuracy is close to 100% and the relative error of the disparity is below 2%, which increases the accuracy of the ranging.
4. The invention uses less matching information, effectively reducing extra irrelevant computation (the matching computation is reduced by at least 75%) and the introduction of irrelevant data (the matched-data utilization rate is above 90%), so that salient-target ranging can be realized in complex image environments and image-processing efficiency is improved.
5. The invention measures the distance of salient targets in the forward-view image while an intelligent vehicle is driving, thereby providing key information for safe driving; it overcomes the shortcoming of traditional image ranging, which can only perform depth detection on the whole picture, and well avoids the problems of large error and excessive noise.
6. By extracting the salient features of the binocular image and segmenting the salient target, the invention narrows the target range, reduces the time used for matching, and raises efficiency; the salient-target key points are matched to obtain the disparity, and range measurement is then realized. Because the target lies on one vertical plane, wrong key-point matches can be filtered out well and precision is improved; the method can quickly identify a salient target and accurately measure its distance.
Description of the drawings
Fig. 1 is the flow chart of the method of the invention;
Fig. 2 is the flow chart of the visual saliency analysis;
Fig. 3 is the flow chart of the random-walk algorithm;
Fig. 4 is the flow chart of the SIFT algorithm;
Fig. 5 is the binocular measuring system, where X, Y, Z are the defined spatial coordinate axes, M is a point in space, Pl and Pr are the imaging points of M on the imaging planes, and f is the focal length of the imaging system.
Embodiment
The specific embodiments of the present invention are further described below with reference to the accompanying drawings.
Embodiment one: this embodiment is described with reference to Figs. 1 to 5. The method of this embodiment comprises the following steps:
Step 1: use a visual saliency model to extract salient features from the binocular image, and mark a seed point and background points. Specifically:
Use the visual saliency model to extract saliency from the binocular image: compute the brightness, color, and direction salient features of each pixel, and normalize and weight the three salient features to obtain the weighted saliency map of the image. Each pixel of the saliency map represents the saliency of the corresponding position in the image. The point with the maximum pixel value in the map, i.e. the most salient point, is recorded as the seed point; the scope is then progressively expanded around the seed point to find the point of weakest saliency, which is recorded as the background point. The flow of extracting image saliency with the visual saliency model is shown in Fig. 2.
Step 1.1: preprocess. First perform edge detection on the binocular image to generate the edge map; edge information is important saliency information of the image;
Step 1.2: use the visual saliency model to extract salient features from the binocular image and generate a salient-feature map;
Step 1.3: according to the salient-feature map, find the pixel with the maximum brightness in the map and label it the seed point; then traverse the pixels in a 25 × 25 window centered on the seed point, and label as background points the pixels whose gray value is less than 0.1 and that are farthest from the seed point;
Step 2: build a weighted graph for the binocular image;
Use the classical Gaussian weight function to build the weighted graph for the binocular image: a weight derived from the gray difference of the pixels is given to the edge between each pixel and its surrounding pixels, while each pixel serves as a vertex, building a weighted graph comprising the vertices and edges;
Using graph theory, the entire image is regarded as an undirected weighted graph and each pixel as a vertex of the graph, where the edges of the weighted graph are weighted by the gray values of the pixels; the classical Gaussian weight function adopted is:
$W_{ij} = e^{-\beta (g_i - g_j)^2}$   (1)
where W_ij is the weight of the edge between vertex i and vertex j, g_i and g_j are the brightness of pixels i and j, β is a free parameter, and e is the base of the natural logarithm;
The Laplacian matrix L of the weighted graph is obtained by the following formula:
$L_{ij} = \begin{cases} d_i & i = j \\ -W_{ij} & \text{vertices } i \text{ and } j \text{ adjacent} \\ 0 & \text{otherwise} \end{cases}$   (2)
where L_ij is the element of the Laplacian matrix L for vertices i and j, and d_i is the sum of the weights between vertex i and its surrounding points, d_i = Σ_j W_ij;
Step 3: use the seed point and background points from step 1 and the weighted graph from step 2 to segment the salient target out of the binocular image with a random-walk image segmentation algorithm;
Step 3.1: according to the seed point and background points marked in step 1, divide the pixels of the binocular image into two sets, the marked set V_M and the unmarked set V_U; the Laplacian matrix L is reordered according to V_M and V_U, the marked points first and then the unmarked points. L is thus divided into four blocks L_M, L_U, B, and B^T, and the Laplacian matrix is expressed as:
$L = \begin{bmatrix} L_M & B \\ B^T & L_U \end{bmatrix}$   (3)
where L_M is the Laplacian block from marked points to marked points, L_U the block from unmarked points to unmarked points, and B and B^T the blocks from marked to unmarked points and from unmarked to marked points, respectively;
Step 3.2: solve the combinatorial Dirichlet integral D[x] from the Laplacian matrix and the marked points;
The combinatorial Dirichlet integral is:
$D[x] = \frac{1}{2} \sum_{ij} w_{ij} (x_i - x_j)^2 = \frac{1}{2} x^T L x$   (4)
where x is the matrix of probabilities that the vertices of the weighted graph reach a marked point, and x_i and x_j are the probabilities that vertices i and j reach a marked point;
According to the marked set V_M and the unmarked set V_U, x is divided into two parts x_M and x_U, where x_M is the probability matrix corresponding to V_M and x_U the one corresponding to V_U; formula (4) decomposes into:
$D[x_U] = \frac{1}{2} \begin{bmatrix} x_M^T & x_U^T \end{bmatrix} \begin{bmatrix} L_M & B \\ B^T & L_U \end{bmatrix} \begin{bmatrix} x_M \\ x_U \end{bmatrix} = \frac{1}{2} \left( x_M^T L_M x_M + 2 x_U^T B^T x_M + x_U^T L_U x_U \right)$   (5)
For a marked point s, define the indicator vector m^s: for any vertex i, m_i^s = 1 if i is s, and m_i^s = 0 otherwise. Differentiating D[x_U] with respect to x_U, the minimizing solution of formula (5) gives the Dirichlet probability values of the marked point s:
$L_U x^s = -B^T m^s$   (6)
where x_i^s denotes the probability that vertex i reaches the marked point s first;
According to the x_i^s obtained from the combinatorial Dirichlet integral, perform threshold segmentation according to formula (7) to generate the segmentation map:
$s_i = \begin{cases} 1 & x_i^s \geq 0.5 \\ 0 & \text{otherwise} \end{cases}$   (7)
where s_i is the pixel value at the position corresponding to vertex i in the segmentation map; pixels of value 1 in the segmentation map represent the salient target in the image, and pixels of value 0 the background;
Step 3.3: multiply the segmentation map pixel-wise with the original image to generate the target map, i.e. extract the segmented salient target, by the formula:
$t_i = s_i \cdot I_i$   (8)
where t_i is the gray value at the corresponding position i of the target map T, and I_i is the gray value at the corresponding position i of the input image I(σ);
Step 4: perform key-point matching on the salient targets with the SIFT algorithm;
Use the SIFT algorithm to detect and match key points on the segmented salient targets separately, screen the matched coordinates, remove the wrong matching results, and keep the correct ones.
The flow of SIFT matching on the binocular image is shown in Fig. 4.
Step 4.1: build a Gaussian pyramid from the target map, and take pairwise differences of the filtered images to obtain DOG images. A DOG image is defined as D(x, y, σ):
$D(x, y, \sigma) = (G(x, y, k\sigma) - G(x, y, \sigma)) * T(x, y) = C(x, y, k\sigma) - C(x, y, \sigma)$   (9)
where $G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-((x - p/2)^2 + (y - q/2)^2)/(2\sigma^2)}$ is a variable-scale Gaussian, p and q are the dimensions of the Gaussian template, (x, y) is the pixel position in the Gaussian-pyramid image, σ is the scale-space factor of the image, k denotes a specific scale value, and C(x, y, σ) is the convolution of G(x, y, σ) with the target map T(x, y), i.e. C(x, y, σ) = G(x, y, σ) * T(x, y);
Step 4.2: find extreme points in adjacent DOG images, determine the position and scale of each extreme point as a key point by fitting a three-dimensional quadratic function, and perform stability detection on the key points using the Hessian matrix to eliminate edge responses, specifically as follows:
Key points are composed of the local extreme points of the DOG images. Traverse each point of a DOG image and compare its gray value with its 8 neighbors at the same scale and the 2 × 9 points at the adjacent scales above and below, 26 points in total; if it is larger or smaller than all of them, it is an extreme point.
The extreme points so obtained are not yet true key points. To improve stability, it is necessary (1) to take the Taylor expansion of the scale-space DOG function to obtain its curve D(X):
$D(X) = D + \frac{\partial D^T}{\partial X} X + \frac{1}{2} X^T \frac{\partial^2 D}{\partial X^2} X$   (10)
where X = (x, y, σ)^T and D is the value of the DOG function at the expansion point; differentiating formula (10) and setting the derivative to zero gives the offset of the extreme point, formula (11):
$\hat{X} = -\left(\frac{\partial^2 D}{\partial X^2}\right)^{-1} \frac{\partial D}{\partial X}$   (11)
To remove low-contrast extreme points, substitute formula (11) into formula (10) to obtain formula (12):
$D(\hat{X}) = D + \frac{1}{2} \frac{\partial D^T}{\partial X} \hat{X}$   (12)
If the value of formula (12) is greater than 0.03, the extreme point is retained and its exact position (the original position plus the offset obtained by fitting) and scale are obtained; otherwise it is discarded.
(2) Unstable key points are eliminated by screening with the Hessian matrix at each key point: the curvature is computed from the ratio of the eigenvalues of the Hessian matrix, and edge points are judged by the curvature of the key-point neighborhood. The curvature ratio threshold is set to 10: key points with a ratio greater than 10 are deleted, the rest are retained, and what remains are the stable key points.
Step 4.3: after the position and scale of a key point are determined, a direction must be assigned to it; the key-point descriptor is defined relative to this direction. The pixels in the 16 × 16 window of the key point's neighborhood are used to assign the direction parameter to each key point.
For a key point detected in a DOG image, the gradient magnitude and direction are computed as:
$m(x, y) = \sqrt{(C(x+1, y) - C(x-1, y))^2 + (C(x, y+1) - C(x, y-1))^2}$
$\theta(x, y) = \tan^{-1}\left((C(x, y+1) - C(x, y-1)) / (C(x+1, y) - C(x-1, y))\right)$   (13)
where C is the scale space in which the key point lies, m is the gradient magnitude of the key point, and θ is the gradient direction of the key point. A neighborhood is assigned around the key point, and a histogram is used to accumulate the gradients of the points in this neighborhood.
The abscissa of the histogram is direction: 360 degrees are divided into 36 bins, each bin covering 10 degrees. The ordinate is gradient magnitude: the magnitudes of the points falling into a given direction bin are summed, and the sum is the height of that bin. The principal direction is defined as the direction of the bin with the maximum gradient magnitude hm; bins whose height exceeds 0.8·hm are kept as auxiliary directions, to strengthen the stability of matching.
Step 4.4: after the preceding stages, each detected key point has three kinds of information: position, direction, and the scale at which it lies. A descriptor is built for each key point to state its local feature information.
First rotate the coordinates around the key point to the key point's direction. Then choose the 16 × 16 window around the key point and divide the neighborhood into 16 small 4 × 4 windows. In each 4 × 4 window compute the magnitude and direction of the corresponding gradients, and accumulate the gradient information of each small window into an 8-bin histogram. The descriptor is computed over the 16 × 16 window around the key point by Gaussian weighting as follows:
$h = m(a + x, b + y) \cdot e^{-\frac{(x')^2 + (y')^2}{2 (0.5 d)^2}}$   (14)
where h is the descriptor, (a, b) is the key point's position in the Gaussian-pyramid image, d = 16 is the side length of the window, (x, y) is the pixel position in the Gaussian-pyramid image, and (x', y') are the new coordinates of a pixel in the neighborhood after rotating the coordinates to the key point's direction, computed as:
$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$   (15)
where θ is the direction of the key point.
Computing over the 16 × 16 window yields a 128-dimensional feature vector for the key point, denoted H = (h_1, h_2, h_3, …, h_128). To reduce the influence of illumination, the feature vector is normalized; after normalization it is denoted L_g, with the normalization formula:
$l_i = h_i \Big/ \sum_{j=1}^{128} h_j, \quad i = 1, 2, 3, \ldots, 128$   (16)
where L_g = (l_1, l_2, l_3, …, l_128) is the normalized feature vector of the key point;
After the key-point descriptors of both images of the binocular pair have been generated, the Euclidean distance between key-point feature vectors is adopted as the decision metric for key-point similarity in the binocular image; the key points in the two images are matched, and the pixel coordinates of each matched pair are kept as one group of key information;
Step 4.5: to avoid errors as far as possible, screen the generated key-point matches.
Because the measuring system is a binocular model, the key points of the salient target lie at the same horizontal level in the two images, and in theory the horizontal difference of every matched pair of key points is equal. Therefore obtain the horizontal coordinate disparity of each matched pair and generate the disparity matrix, defined as K_n = {k_1, k_2, …, k_n}, where n is the number of matched pairs and k_1, k_2, …, k_n are the disparities of the individual matched points;
Obtain the median k_m of the disparity matrix and form the reference disparity matrix, denoted K_n':
$K_n' = \{k_1 - k_m, k_2 - k_m, \ldots, k_n - k_m\}$
Set the disparity threshold to 3 and delete the disparities corresponding to entries of K_n' that exceed the threshold, finally obtaining the screened matrix K', so as to avoid the interference brought by wrongly matched key points; k_1', k_2', …, k_{n'}' are the disparities of the correct matches after screening and n' is the number of finally correct matched pairs:
$K' = \{k_1', k_2', \ldots, k_{n'}'\}$
Step 5: substitute the disparity matrix K' obtained in step 4 into the binocular ranging model to obtain the salient-target distance;
Subtracting the coordinates of the key points matched on the salient target gives the disparity of the salient target in the binocular image; the disparity is then substituted into the binocular ranging model to obtain the salient-target distance.
Binocular imaging obtains two images of the same scene from different viewing angles; the binocular model is shown in Fig. 5.
The two identical imaging systems are separated horizontally by a baseline B, both optical axes are parallel to the horizontal plane, and the image planes are parallel to the vertical plane;
Suppose a point M(X, Y, Z) in the scene, with left and right imaging points Pl(x_1, y_1) and Pr(x_2, y_2), where x_1, y_1 and x_2, y_2 are the coordinates of Pl and Pr in the imaging plane. In the binocular model the disparity is defined as k = |pl − pr| = |x_2 − x_1|, and the range formula is obtained from the similar-triangle relation, X, Y, Z being the coordinates along the horizontal, vertical, and longitudinal axes of the spatial coordinate system:
$z = \frac{Bf}{k} = \frac{Bf}{|x_2 - x_1| \, dx}$   (17)
where dx is the physical width of each pixel on the imaging sensor along the horizontal axis, f is the focal length of the imaging system, and z is the distance from the target point M to the line joining the two imaging centers. Substituting the disparity matrix obtained in step 4 into formula (17) and using the physical parameters of the binocular model gives the corresponding distance matrix Z' = {z_1, z_2, …, z_{n'}}, where z_1, z_2, …, z_{n'} are the salient-target distances obtained from the individual match disparities; finally, the mean value of the distance matrix is the distance Z_f of the salient target in the binocular image:
$Z_f = \frac{1}{n'} \sum_{k=1}^{n'} z_k$   (18).
Embodiment two: this embodiment is described with reference to the drawings. It differs from embodiment one in the detailed process of performing edge detection on the image in step 1.1, which is:
Step 1.1.1: use a 2D Gaussian filter template to convolve the binocular image to remove image noise;
Step 1.1.2: use the differences of the first-order partial derivatives in the horizontal and vertical directions to compute the gradient magnitude and gradient direction of the pixels of the filtered binocular image I(x, y), where the partial derivatives dx and dy in the x and y directions are respectively:
dx = [I(x+1, y) − I(x−1, y)]/2   (21)
dy = [I(x, y+1) − I(x, y−1)]/2   (22)
The gradient magnitude is then:
$D' = (dx^2 + dy^2)^{1/2}$   (23)
and the gradient direction is:
θ' = arctan(dy/dx)   (24)
where D' and θ' are the gradient magnitude and gradient direction of a pixel of the filtered binocular image I(x, y), respectively;
Step 1.1.3: perform non-maximum suppression on the gradient, then apply dual-threshold processing to the image to generate the edge image, in which the gray value of edge points is 255 and the gray value of non-edge points is 0.
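An illustrative sketch of steps 1.1.1 and 1.1.2 (the gradient stage of this Canny-style detector); non-maximum suppression and the dual threshold of step 1.1.3 are left out, and the border wrap-around of np.roll is ignored for brevity:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_magnitude_direction(image, sigma=1.4):
    """Gradient magnitude D' and direction theta', eqs. (21)-(24).

    image: 2-D float array of the binocular image.
    """
    smoothed = gaussian_filter(image, sigma)   # step 1.1.1: 2D Gaussian filtering
    # Central differences, eqs. (21)-(22).
    dx = (np.roll(smoothed, -1, axis=1) - np.roll(smoothed, 1, axis=1)) / 2.0
    dy = (np.roll(smoothed, -1, axis=0) - np.roll(smoothed, 1, axis=0)) / 2.0
    magnitude = np.hypot(dx, dy)               # eq. (23): D'
    direction = np.arctan2(dy, dx)             # eq. (24): theta'
    return magnitude, direction
```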
Embodiment three: this embodiment is described with reference to the drawings. It differs from embodiments one and two in the detailed process of using the visual saliency model to extract salient features from the binocular image and generate the salient-feature map in step 1.2, which is:
Step 1.2.1: after edge detection of the binocular image, superpose the original image and the edge image:
$I_1(\sigma) = 0.7\,I(\sigma) + 0.3\,C(\sigma)$   (25)
where I(σ) is the original input binocular image, C(σ) is the edge image, and I_1(σ) is the superposed image;
Step 1.2.2: use a Gaussian difference function to compute a nine-layer Gaussian pyramid of the superposed image, where layer 0 is the input superposed image and layers 1 to 8 are each formed by Gaussian filtering and downsampling of the previous layer, with sizes corresponding to 1/2 to 1/256 of the input image. Extract brightness, color, and direction features from every layer of the Gaussian pyramid and generate the corresponding brightness pyramid, color pyramid, and direction pyramid;
The brightness is extracted as:
$I_n = (r + g + b)/3$   (26)
where r, g, b are the red, green, and blue components of the input binocular image color, and I_n is the brightness;
The color features are extracted as:
R=r-(g+b)/2 (27)
G=g-(r+b)/2 (28)
B=b-(r+g)/2 (29)
Y=r+g-2(|r-g|+b) (30)
where R, G, B, Y are the color components of the superposed image;
O(σ, ω) is the direction feature obtained by Gabor filtering of the brightness I_n at scale σ, where σ ∈ [0, 1, 2, …, 8] is the Gaussian-pyramid layer and ω ∈ [0°, 45°, 90°, 135°] is the direction of the Gabor function;
Step 1.2.3: perform center-surround difference contrasts on the brightness, color, and direction features at the different scales of the obtained Gaussian pyramids, specifically:
Let scale c (c ∈ {2, 3, 4}) be the center scale and scale u (u = c + δ, δ ∈ {3, 4}) the surround scale; there are 6 combinations (2-5, 2-6, 3-6, 3-7, 4-7, 4-8) of center scale c and surround scale u in the 9-layer Gaussian pyramid;
The center-surround difference contrast is represented by the difference of the feature maps at scale c and scale u, as follows:
$I_n(c, u) = |I_n(c) - I_n(u)|$   (31)
RG(c,u)=|(R(c)-G(c))-(G(u)-R(u))| (32)
BY(c,u)=|(B(c)-Y(c))-(Y(u)-B(u))| (33)
O(c,u,ω)=|O(c,ω)-O(u,ω)| (34)
where, before taking the difference, the two maps must be made the same size by interpolation;
Step 1.2.4: fuse the feature maps generated by the differencing through normalization, generating the salient-feature map of the input binocular image, specifically:
First, the scale-contrast feature maps of each feature are normalized and merged to generate the conspicuity map of that feature: Ī_n for the brightness feature, C̄ for the color feature, and Ō for the direction feature; the computation is as follows:
$\bar{I}_n = \oplus_{c=2}^{4} \oplus_{s=c+3}^{c+4} N(I_n(c, s))$   (35)
$\bar{C} = \oplus_{c=2}^{4} \oplus_{s=c+3}^{c+4} \left[ N(RG(c, s)) + N(BY(c, s)) \right]$   (36)
$\bar{O} = \sum_{\omega \in \{0°, 45°, 90°, 135°\}} N\!\left( \oplus_{c=2}^{4} \oplus_{s=c+3}^{c+4} N(O(c, s, \omega)) \right)$   (37)
where N(·) is the normalization operator: first, for each feature map to be computed, the feature values of its pixels are normalized into a closed range [0, 255]; then the global maximum saliency value A is found in each normalized feature map and the mean value a of its local maxima is obtained; finally, each pixel's feature value is multiplied by (A − a)²;
The conspicuity maps of the three features are then normalized again and combined to obtain the final salient-feature map S, computed as:
$S = \frac{1}{3} \left( N(\bar{I}_n) + N(\bar{C}) + N(\bar{O}) \right)$   (38).
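A condensed sketch of the normalization operator N(·) and the final fusion of formula (38); the 3 × 3 local-maximum window is an illustrative choice, and the (A − a)² weighting follows the reading of the text above:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def normalize_map(feature_map):
    """Itti-style normalization operator N(.) used in eqs. (35)-(38)."""
    fm = feature_map.astype(float)
    # Normalize feature values into the closed range [0, 255].
    fm = 255.0 * (fm - fm.min()) / (np.ptp(fm) + 1e-12)
    A = fm.max()                                      # global maximum saliency
    # Mean value a of the local maxima (3x3 peaks, excluding the global one).
    peaks = fm[(fm == maximum_filter(fm, size=3)) & (fm < A)]
    a = peaks.mean() if peaks.size else 0.0
    return fm * (A - a) ** 2                          # weight by (A - a)^2

def fuse_saliency(I_bar, C_bar, O_bar):
    """Final salient-feature map S, eq. (38)."""
    return (normalize_map(I_bar) + normalize_map(C_bar)
            + normalize_map(O_bar)) / 3.0
```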

Claims (3)

1. A distance measurement method for a salient target in a binocular image, characterized in that the method comprises the following steps:
Step 1: use a visual saliency model to extract salient features from the binocular image, and mark a seed point and background points. Specifically:
Step 1.1: preprocess. First perform edge detection on the binocular image to generate the edge map of the binocular image;
Step 1.2: use the visual saliency model to extract salient features from the binocular image and generate a salient-feature map;
Step 1.3: according to the salient-feature map, find the pixel with the maximum gray value in the map and label it the seed point; then traverse the pixels in a 25 × 25 window centered on the seed point, and label as background points the pixels whose gray value is less than 0.1 and that are farthest from the seed point;
Step 2: build a weighted graph for the binocular image;
A classical Gaussian weight function is used to build the weighted graph for the binocular image:
$W_{ij} = e^{-\beta (g_i - g_j)^2}$   (1)
where W_ij is the weight of the edge between vertex i and vertex j, g_i and g_j are the brightness of vertices i and j, β is a free parameter, and e is the base of the natural logarithm;
The Laplacian matrix L of the weighted graph is obtained by the following formula:
$L_{ij} = \begin{cases} d_i & i = j \\ -W_{ij} & \text{vertices } i \text{ and } j \text{ adjacent} \\ 0 & \text{otherwise} \end{cases}$   (2)
where L_ij is the element of the Laplacian matrix L for vertices i and j, and d_i is the sum of the weights between vertex i and its surrounding points, d_i = Σ_j W_ij;
Step 3: use the seed point and background points from step 1 and the weighted graph from step 2 to segment the salient target out of the binocular image with a random-walk image segmentation algorithm;
Step 3.1: according to the seed point and background points marked in step 1, divide the pixels of the binocular image into two sets, the marked set V_M and the unmarked set V_U; the Laplacian matrix L is reordered according to V_M and V_U, the marked points first and then the unmarked points. L is thus divided into four blocks L_M, L_U, B, and B^T, and the Laplacian matrix is expressed as:
$L = \begin{bmatrix} L_M & B \\ B^T & L_U \end{bmatrix}$   (3)
where L_M is the Laplacian block from marked points to marked points, L_U the block from unmarked points to unmarked points, and B and B^T the blocks from marked to unmarked points and from unmarked to marked points, respectively;
Step 3.2: solve the combinatorial Dirichlet integral D[x] from the Laplacian matrix and the marked points;
The combinatorial Dirichlet integral is:
$D[x] = \frac{1}{2} \sum_{ij} w_{ij} (x_i - x_j)^2 = \frac{1}{2} x^T L x$   (4)
where x is the matrix of probabilities that the vertices of the weighted graph reach a marked point, and x_i and x_j are the probabilities that vertices i and j reach a marked point;
According to the marked set V_M and the unmarked set V_U, x is divided into two parts x_M and x_U, where x_M is the probability matrix corresponding to V_M and x_U the one corresponding to V_U; formula (4) decomposes into:
$D[x_U] = \frac{1}{2} \begin{bmatrix} x_M^T & x_U^T \end{bmatrix} \begin{bmatrix} L_M & B \\ B^T & L_U \end{bmatrix} \begin{bmatrix} x_M \\ x_U \end{bmatrix} = \frac{1}{2} \left( x_M^T L_M x_M + 2 x_U^T B^T x_M + x_U^T L_U x_U \right)$   (5)
For a marked point s, define the indicator vector m^s: for any vertex i, m_i^s = 1 if i is s, and m_i^s = 0 otherwise. Differentiating D[x_U] with respect to x_U, the minimizing solution of formula (5) gives the Dirichlet probability values of the marked point s:
$L_U x^s = -B^T m^s$   (6)
where x_i^s denotes the probability that vertex i reaches the marked point s first;
According to the x_i^s obtained from the combinatorial Dirichlet integral, perform threshold segmentation according to formula (7) to generate the segmentation map:
$s_i = \begin{cases} 1 & x_i^s \geq 0.5 \\ 0 & \text{otherwise} \end{cases}$   (7)
where s_i is the pixel value at the position corresponding to vertex i in the segmentation map; pixels of value 1 in the segmentation map represent the salient target in the image, and pixels of value 0 the background;
Step 3.3: multiply the segmentation map pixel-wise with the original image to generate the target map, i.e. extract the segmented salient target, by the formula:
$t_i = s_i \cdot I_i$   (8)
where t_i is the gray value of a vertex i of the target map T, and I_i is the gray value at the corresponding position i of the input image I(σ);
Step 4: perform key-point matching on the salient targets with the SIFT algorithm;
Step 4.1: build a Gaussian pyramid from the target map, and take pairwise differences of the filtered images to obtain DOG images. A DOG image is defined as D(x, y, σ):
$D(x, y, \sigma) = (G(x, y, k\sigma) - G(x, y, \sigma)) * T(x, y) = C(x, y, k\sigma) - C(x, y, \sigma)$   (9)
where $G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-((x - p/2)^2 + (y - q/2)^2)/(2\sigma^2)}$ is a variable-scale Gaussian, p and q are the dimensions of the Gaussian template, (x, y) is the pixel position in the Gaussian-pyramid image, σ is the scale-space factor of the image, k denotes a specific scale value, and C(x, y, σ) is the convolution of G(x, y, σ) with the target map T(x, y), i.e. C(x, y, σ) = G(x, y, σ) * T(x, y);
Step 4.2: find extreme points in adjacent DOG images, determine the position and scale of each extreme point as a key point by fitting a three-dimensional quadratic function, and perform stability detection on the key points using the Hessian matrix to eliminate edge responses, specifically as follows:
(1) Take the Taylor expansion of the scale-space DOG function to obtain its curve D(X):
$D(X) = D + \frac{\partial D^T}{\partial X} X + \frac{1}{2} X^T \frac{\partial^2 D}{\partial X^2} X$   (10)
where X = (x, y, σ)^T and D is the value of the DOG function at the expansion point; differentiating formula (10) and setting the derivative to zero gives the offset of the extreme point, formula (11):
$\hat{X} = -\left(\frac{\partial^2 D}{\partial X^2}\right)^{-1} \frac{\partial D}{\partial X}$   (11)
To remove low-contrast extreme points, substitute formula (11) into formula (10) to obtain formula (12):
$D(\hat{X}) = D + \frac{1}{2} \frac{\partial D^T}{\partial X} \hat{X}$   (12)
If the value of formula (12) is greater than 0.03, the extreme point is retained and its exact position and scale are obtained; otherwise it is discarded;
(2) Unstable key points are eliminated by screening with the Hessian matrix at each key point: the curvature is computed from the ratio of the eigenvalues of the Hessian matrix, and edge points are judged by the curvature of the key-point neighborhood. The curvature ratio threshold is set to 10: key points with a ratio greater than 10 are deleted, the rest are retained, and what remains are the stable key points;
Step 4.3: use the pixels in the 16 × 16 window of each key point's neighborhood to assign a direction parameter to the key point;
For a key point detected in a DOG image, the gradient magnitude and direction are computed as:
$m(x, y) = \sqrt{(C(x+1, y) - C(x-1, y))^2 + (C(x, y+1) - C(x, y-1))^2}$
$\theta(x, y) = \tan^{-1}\left((C(x, y+1) - C(x, y-1)) / (C(x+1, y) - C(x-1, y))\right)$   (13)
where C is the scale space in which the key point lies, m is the gradient magnitude of the key point, and θ is the gradient direction of the point. A 16 × 16 neighborhood is assigned around the key point, the gradient magnitude and direction of the surrounding pixels are obtained, and a histogram is used to accumulate the gradients of the points in this neighborhood. The abscissa of the histogram is direction: 360 degrees are divided into 36 bins, each bin covering 10 degrees. The ordinate is gradient magnitude: the magnitudes of the points falling into a given direction bin are summed, and the sum is the height of that bin. The principal direction is defined as the direction of the bin with the maximum gradient magnitude hm; bins whose magnitude exceeds 0.8·hm are kept as auxiliary directions, to strengthen the stability of matching;
Step 4.4: build a descriptor to state the local feature information of each key point.
First rotate the coordinates around the key point to the key point's direction;
Then choose the 16 × 16 window around the key point and divide the neighborhood into 16 small 4 × 4 windows. In each 4 × 4 window compute the magnitude and direction of the corresponding gradients, and accumulate the gradient information of each small window into an 8-bin histogram. The descriptor is computed over the 16 × 16 window around the key point by Gaussian weighting as follows:
$h = m_g(a + x, b + y) \cdot e^{-\frac{(x')^2 + (y')^2}{2 (0.5 d)^2}}$   (14)
where h is the descriptor, (a, b) is the key point's position in the Gaussian-pyramid image, m_g is the gradient magnitude of the key point in the histogram principal direction of step 4.3, d = 16 is the side length of the window, (x, y) is the pixel position in the Gaussian-pyramid image, and (x', y') are the new coordinates of a pixel in the neighborhood after rotating the coordinates to the key point's direction, computed as:
$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta_g & -\sin\theta_g \\ \sin\theta_g & \cos\theta_g \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$   (15)
where θ_g is the gradient direction of the key point;
Computing over the 16 × 16 window yields a 128-dimensional feature vector for the key point, denoted H = (h_1, h_2, h_3, …, h_128). The feature vector is normalized; after normalization it is denoted L_g, with the normalization formula:
$l_i = h_i \Big/ \sum_{j=1}^{128} h_j, \quad i = 1, 2, 3, \ldots, 128$   (16)
where L_g = (l_1, l_2, …, l_i, …, l_128) is the normalized feature vector of the key point and l_i is one of its normalized components;
The Euclidean distance between key-point feature vectors is adopted as the decision metric for key-point similarity in the binocular image; the key points in the two images are matched, and the pixel coordinates of each matched pair are kept as one group of key information;
Step 4.5: screen the generated key-point matches;
Obtain the horizontal coordinate disparity of each matched pair of key points and generate the disparity matrix, defined as K_n = {k_1, k_2, …, k_n}, where n is the number of matched pairs and k_1, k_2, …, k_n are the disparities of the individual matched points;
Obtain the median k_m of the disparity matrix and form the reference disparity matrix, denoted K_n':
$K_n' = \{k_1 - k_m, k_2 - k_m, \ldots, k_n - k_m\}$   (17)
Set the disparity threshold to 3 and delete the disparities corresponding to entries of K_n' that exceed the threshold, finally obtaining the screened matrix K', where k_1', k_2', …, k_{n'}' are the disparities of the correct matches after screening and n' is the number of finally correct matched pairs:
$K' = \{k_1', k_2', \ldots, k_{n'}'\}$   (18)
Step 5: substitute the disparity matrix K' obtained in step 4 into the binocular ranging model to obtain the salient-target distance;
The two identical imaging systems are separated horizontally by a baseline J, both optical axes are parallel to the horizontal plane, and the image planes are parallel to the vertical plane;
Suppose a target point M(X, Y, Z) in the scene, with left and right imaging points Pl(x_1, y_1) and Pr(x_2, y_2), where x_1, y_1 and x_2, y_2 are the coordinates of Pl and Pr in the imaging plane. In the binocular model the disparity is defined as k = |pl − pr| = |x_2 − x_1|, and the range formula is obtained from the similar-triangle relation, X, Y, Z being the coordinates along the horizontal, vertical, and longitudinal axes of the spatial coordinate system:
$z = \frac{Jf}{k} = \frac{Jf}{|x_2 - x_1| \, dx'}$   (19)
where dx' is the physical width of each pixel on the imaging sensor along the horizontal axis, f is the focal length of the imaging system, and z is the distance from the target point M to the line joining the two imaging centers. Substituting the disparity matrix obtained in step 4 into formula (19) and using the physical parameters of the binocular model gives the corresponding distance matrix Z' = {z_1, z_2, …, z_{n'}}, where z_1, z_2, …, z_{n'} are the salient-target distances obtained from the individual match disparities; finally, the mean value of the distance matrix is the distance Z_f of the salient target in the binocular image:
$Z_f = \frac{1}{n'} \sum_{k=1}^{n'} z_k$   (20).
2. The distance measurement method for a salient target in a binocular image according to claim 1, characterized in that the detailed process of performing edge detection on the image in step 1.1 is:
Step 1.1.1: use a 2D Gaussian filter template to convolve the binocular image to remove image noise;
Step 1.1.2: use the differences of the first-order partial derivatives in the horizontal and vertical directions to compute the gradient magnitude and gradient direction of the pixels of the filtered binocular image I(x, y), where the partial derivatives dx and dy in the x and y directions are respectively:
dx = [I(x+1, y) − I(x−1, y)]/2   (21)
dy = [I(x, y+1) − I(x, y−1)]/2   (22)
The gradient magnitude is then:
$D' = (dx^2 + dy^2)^{1/2}$   (23)
and the gradient direction is:
θ' = arctan(dy/dx)   (24)
where D' and θ' are the gradient magnitude and gradient direction of a pixel of the filtered binocular image I(x, y), respectively;
Step 1.1.3: perform non-maximum suppression on the gradient, then apply dual-threshold processing to the image to generate the edge image, in which the gray value of edge points is 255 and the gray value of non-edge points is 0.
3. the distance measurement method of conspicuousness target in a kind of binocular image according to claim 2, it is characterized in that the vision significance model that utilizes described in step one two carries out significant characteristics extraction to binocular image, the detailed process generating significant characteristics figure is:
After step one 21, binocular image rim detection, original image and edge image are superposed:
I 1(σ)=0.7I(σ)+0.3C(σ) (25)
Wherein, I (σ) is the former figure of input binocular image, and C (σ) is edge image, I 1(σ) be the image after overlap-add procedure;
Step one two or two, employing Gaussian difference function calculate nine layers of gaussian pyramid of the image after overlap-add procedure, wherein the 0th layer is the superimposed image of input, 1 to 8 layers be respectively to last layer adopt gaussian filtering and depression of order sampling form, size correspond to 1/2 to 1/256 of input picture, brightness is extracted to every one deck of gaussian pyramid, color, direction character also generates corresponding brightness pyramid, color pyramid and direction pyramid;
Extract brightness formula as follows:
I n=(r+g+b)/3 (26)
Wherein r, g, b correspond to red, green, blue three components of input binocular image color respectively, I nfor brightness;
Extract color characteristic formula as follows:
R=r-(g+b)/2 (27)
G=g-(r+b)/2 (28)
B=b-(r+g)/2 (29)
Y=r+g-2(|r-g|+b) (30)
R, G, B, Y correspond to superposition after the color component of image;
O (σ, be ω) direction character brightness In being carried out to Gabor function filtering extraction in dimension, ω is direction and the gaussian pyramid number of plies of Gabor function, and σ is total direction quantity of Gabor function, wherein σ ∈ [0,1,2 ..., 8], ω ∈ [0 °, 45 °, 90 °, 135 °];
Step one two or three, brightness to the different scale of the gaussian pyramid obtained, color and three, direction feature carry out central peripheral to being compared to difference, are specially:
If yardstick centered by yardstick c (c ∈ { 2,3,4}), yardstick u (u=c+ δ, δ ∈ 3,4}) be peripheral yardstick; 6 kinds of combinations (2-5,2-6,3-6,3-7,4-7,4-8) are had between center yardstick c in the gaussian pyramid of 9 layers and periphery yardstick u;
By the difference of the characteristic pattern of yardstick c and yardstick s represent central authorities and periphery to be compared to poor local orientation feature contrast as shown in the formula:
I n(c,u)=|I n(c)-I n(u)| (31)
RG(c,u)=|(R(c)-G(c))-(G(u)-R(u))| (32)
BY(c,u)=|(B(c)-Y(c))-(Y(u)-B(u))| (33)
O(c,u,ω)=|O(c,ω)-O(u,ω)| (34)
Wherein, needed before doing difference by interpolation make two width figure in the same size carry out again poor;
Step one two or four, to be merged making the characteristic pattern of different characteristic that difference generates by normalization, generating the significant characteristics figure of input binocular image, being specially:
First the yardstick contrast characteristic figure of each feature is normalized to the comprehensive characteristics figure merging and generate this feature for brightness normalization characteristic figure, for color characteristic normalization characteristic figure, for direction character normalization characteristic figure; Computation process as the following formula shown in:
Īn = ⊕_{c=2..4} ⊕_{s=c+3..c+4} N(In(c,s))   (35)
C̄ = ⊕_{c=2..4} ⊕_{s=c+3..c+4} [N(RG(c,s)) + N(BY(c,s))]   (36)
Ō = Σ_{ω ∈ {0°,45°,90°,135°}} N(⊕_{c=2..4} ⊕_{s=c+3..c+4} N(O(c,s,ω)))   (37)
where N(·) is the normalization operator: first, the feature values of all pixels in a feature map are normalized to the closed interval [0, 255]; then the global maximum saliency value A of the normalized feature map is found; next, the mean value a of the local maxima in the feature map is computed; finally, each pixel's feature value is multiplied by (A − a)²;
Then the comprehensive maps of the individual features are normalized and combined to obtain the final salient feature map S, computed as follows:
S = (1/3)(N(Īn) + N(C̄) + N(Ō))   (38).
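To make the operator N(·) and the fusion of formula (38) concrete, here is a sketch under the interpretation above (global maximum A, mean of local maxima a, weighting by (A − a)²); the 3×3 window used to detect local maxima is an assumed detail not fixed by the claim.

```python
import cv2
import numpy as np

def normalize_map(fmap):
    # N(.): rescale to [0, 255], find the global maximum A and the mean
    # a of the local maxima, then weight the whole map by (A - a)**2.
    fmap = fmap.astype(np.float32)
    fmap -= fmap.min()
    if fmap.max() > 0:
        fmap *= 255.0 / fmap.max()
    A = fmap.max()
    # Local maxima via a 3x3 dilation test; the window size is an assumption.
    peaks = (fmap == cv2.dilate(fmap, np.ones((3, 3), np.uint8))) & (fmap > 0)
    a = fmap[peaks].mean() if peaks.any() else 0.0
    return fmap * (A - a) ** 2

def saliency_map(i_bar, c_bar, o_bar):
    # Formula (38): average of the three normalized comprehensive maps,
    # assumed to share the same size.
    return (normalize_map(i_bar) + normalize_map(c_bar) + normalize_map(o_bar)) / 3.0
```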
CN201510233157.3A 2015-05-08 2015-05-08 The distance measurement method of conspicuousness target in a kind of binocular image Active CN104778721B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510233157.3A CN104778721B (en) 2015-05-08 2015-05-08 The distance measurement method of conspicuousness target in a kind of binocular image

Publications (2)

Publication Number Publication Date
CN104778721A true CN104778721A (en) 2015-07-15
CN104778721B CN104778721B (en) 2017-08-11

Family

ID=53620167

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510233157.3A Active CN104778721B (en) 2015-05-08 2015-05-08 The distance measurement method of conspicuousness target in a kind of binocular image

Country Status (1)

Country Link
CN (1) CN104778721B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110065790B (en) * 2019-04-25 2021-07-06 中国矿业大学 Method for detecting blockage of coal mine belt transfer machine head based on visual algorithm

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008040945A1 (en) * 2006-10-06 2008-04-10 Imperial Innovations Limited A method of identifying a measure of feature saliency in a sequence of images
CN103824284A (en) * 2014-01-26 2014-05-28 中山大学 Key frame extraction method based on visual attention model and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liu Yingzhe et al.: "A Motion Estimation Algorithm Based on Adaptive Search Range Adjustment in H.264", Journal of Electronics & Information Technology *
Jiang Yuwen et al.: "A Saliency Detection Model with Selective Background Priority", Journal of Electronics & Information Technology *

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574928A (en) * 2015-12-11 2016-05-11 深圳易嘉恩科技有限公司 Driving image processing method and first electronic equipment
CN106023198A (en) * 2016-05-16 2016-10-12 天津工业大学 Hessian matrix-based method for extracting aortic dissection of human thoracoabdominal cavity CT image
CN107423739A (en) * 2016-05-23 2017-12-01 北京陌上花科技有限公司 Image characteristic extracting method and device
CN107423739B (en) * 2016-05-23 2020-11-13 北京陌上花科技有限公司 Image feature extraction method and device
CN106094516A (en) * 2016-06-08 2016-11-09 南京大学 A kind of robot self-adapting grasping method based on deeply study
CN108460794A (en) * 2016-12-12 2018-08-28 南京理工大学 A kind of infrared well-marked target detection method of binocular solid and system
CN108460794B (en) * 2016-12-12 2021-12-28 南京理工大学 Binocular three-dimensional infrared salient target detection method and system
CN106780476A (en) * 2016-12-29 2017-05-31 杭州电子科技大学 A kind of stereo-picture conspicuousness detection method based on human-eye stereoscopic vision characteristic
CN106920244A (en) * 2017-01-13 2017-07-04 广州中医药大学 A kind of method of background dot near detection image edges of regions
CN106920244B (en) * 2017-01-13 2019-08-02 广州中医药大学 A kind of method of the neighbouring background dot of detection image edges of regions
CN106918321A (en) * 2017-03-30 2017-07-04 西安邮电大学 A kind of method found range using object parallax on image
CN107730521A (en) * 2017-04-29 2018-02-23 安徽慧视金瞳科技有限公司 The quick determination method of roof edge in a kind of image
CN107730521B (en) * 2017-04-29 2020-11-03 安徽慧视金瞳科技有限公司 Method for rapidly detecting ridge type edge in image
CN107392929A (en) * 2017-07-17 2017-11-24 河海大学常州校区 A kind of intelligent target detection and dimension measurement method based on human vision model
CN107392929B (en) * 2017-07-17 2020-07-10 河海大学常州校区 Intelligent target detection and size measurement method based on human eye vision model
WO2019029099A1 (en) * 2017-08-11 2019-02-14 浙江大学 Image gradient combined optimization-based binocular visual sense mileage calculating method
CN107633498A (en) * 2017-09-22 2018-01-26 成都通甲优博科技有限责任公司 Image dark-state Enhancement Method, device and electronic equipment
CN107644398A (en) * 2017-09-25 2018-01-30 上海兆芯集成电路有限公司 Image interpolation method and its associated picture interpolating device
CN108036730A (en) * 2017-12-22 2018-05-15 福建和盛高科技产业有限公司 A kind of fire point distance measuring method based on thermal imaging
CN108036730B (en) * 2017-12-22 2019-12-10 福建和盛高科技产业有限公司 Fire point distance measuring method based on thermal imaging
CN108665740A (en) * 2018-04-25 2018-10-16 衢州职业技术学院 A kind of classroom instruction control system of feeling and setting happily blended Internet-based
CN109300154A (en) * 2018-11-27 2019-02-01 郑州云海信息技术有限公司 A kind of distance measuring method and device based on binocular solid
CN110060240A (en) * 2019-04-09 2019-07-26 南京链和科技有限公司 A kind of tyre contour outline measurement method based on camera shooting
CN110060240B (en) * 2019-04-09 2023-08-01 南京链和科技有限公司 Tire contour measurement method based on image pickup
CN110889866A (en) * 2019-12-04 2020-03-17 南京美基森信息技术有限公司 Background updating method for depth map
CN112489104A (en) * 2020-12-03 2021-03-12 海宁奕斯伟集成电路设计有限公司 Distance measurement method and device, electronic equipment and readable storage medium
CN112489104B (en) * 2020-12-03 2024-06-18 海宁奕斯伟集成电路设计有限公司 Ranging method, ranging device, electronic equipment and readable storage medium
CN112784814A (en) * 2021-02-10 2021-05-11 中联重科股份有限公司 Posture recognition method for vehicle backing and warehousing and conveying vehicle backing and warehousing guide system
CN112784814B (en) * 2021-02-10 2024-06-07 中联重科股份有限公司 Gesture recognition method for reversing and warehousing of vehicle and reversing and warehousing guiding system of conveying vehicle
CN116523900A (en) * 2023-06-19 2023-08-01 东莞市新通电子设备有限公司 Hardware processing quality detection method
CN116523900B (en) * 2023-06-19 2023-09-08 东莞市新通电子设备有限公司 Hardware processing quality detection method
CN117152144A (en) * 2023-10-30 2023-12-01 潍坊华潍新材料科技有限公司 Guide roller monitoring method and device based on image processing
CN117152144B (en) * 2023-10-30 2024-01-30 潍坊华潍新材料科技有限公司 Guide roller monitoring method and device based on image processing
CN117889867A (en) * 2024-03-18 2024-04-16 南京师范大学 Path planning method based on local self-attention moving window algorithm
CN117889867B (en) * 2024-03-18 2024-05-24 南京师范大学 Path planning method based on local self-attention moving window algorithm

Also Published As

Publication number Publication date
CN104778721B (en) 2017-08-11

Similar Documents

Publication Publication Date Title
CN104778721A (en) Distance measuring method of significant target in binocular image
CN109657632B (en) Lane line detection and identification method
CN103400151B (en) The optical remote sensing image of integration and GIS autoregistration and Clean water withdraw method
EP2811423B1 (en) Method and apparatus for detecting target
KR101856401B1 (en) Method, apparatus, storage medium, and device for processing lane line data
Yin et al. Hot region selection based on selective search and modified fuzzy C-means in remote sensing images
CN104850850A (en) Binocular stereoscopic vision image feature extraction method combining shape and color
CN109635733B (en) Parking lot and vehicle target detection method based on visual saliency and queue correction
CN105678318B (en) The matching process and device of traffic sign
CN111382658B (en) Road traffic sign detection method in natural environment based on image gray gradient consistency
CN111126393A (en) Vehicle appearance refitting judgment method and device, computer equipment and storage medium
Yuan et al. Combining maps and street level images for building height and facade estimation
CN110889840A (en) Effectiveness detection method of high-resolution 6 # remote sensing satellite data for ground object target
Wei et al. Detection of lane line based on Robert operator
CN116279592A (en) Method for dividing travelable area of unmanned logistics vehicle
Liu et al. Object-oriented detection of building shadow in TripleSat-2 remote sensing imagery
Singh et al. A hybrid approach for information extraction from high resolution satellite imagery
CN113219472B (en) Ranging system and method
Jing et al. Island road centerline extraction based on a multiscale united feature
CN110909656A (en) Pedestrian detection method and system with integration of radar and camera
CN110929782A (en) River channel abnormity detection method based on orthophoto map comparison
CN113158954B (en) Automatic detection method for zebra crossing region based on AI technology in traffic offsite
Senthilnath et al. Automatic road extraction using high resolution satellite image based on texture progressive analysis and normalized cut method
Eken et al. An automated technique to determine spatio-temporal changes in satellite island images with vectorization and spatial queries
EP4287137A1 (en) Method, device, equipment, storage media and system for detecting drivable space of road

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20170717

Address after: Room 245, No. 333 Jianshe Road, Jiufo, Sino-Singapore Guangzhou Knowledge City, Guangzhou, Guangdong 510000

Applicant after: Guangzhou Xiaopeng Automobile Technology Co. Ltd.

Address before: No. 92, West Dazhi Street, Nangang District, Harbin 150001

Applicant before: Harbin Institute of Technology

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant