CN104778721B — A distance measurement method for a salient target in a binocular image — Google Patents
 Publication number: CN104778721B (application CN201510233157.3A)
 Authority: CN (China)
 Prior art keywords: point, image, formula, pixel, binocular
 Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
A distance measurement method for a salient target in a binocular image. The invention relates to measuring the distance of a target in a binocular image, and aims to solve the problem that existing target ranging methods process slowly. Step 1: extract salient features from the binocular image with a visual saliency model, and mark a seed point and a background point. Step 2: build a weighted graph over the binocular image. Step 3: using the seed point and background point from step 1 and the weighted graph from step 2, segment the salient target out of the binocular image with the random-walk image segmentation algorithm. Step 4: match key points on the salient target alone using the SIFT algorithm. Step 5: substitute the disparity matrix K' obtained in step 4 into the binocular ranging model to obtain the distance of the salient target. The invention is applicable to measuring the distance of salient targets in the forward-view image while an intelligent vehicle is driving.
Description
Technical field
The present invention relates to a method for measuring the distance of a target in a binocular image, and more particularly to a method for measuring the distance of a salient target in a binocular image. It belongs to the technical field of image processing.
Background technology
Distance information in traffic image processing mainly supplies analysis input to the control system of an automobile. In intelligent-vehicle research, the traditional approach is to range a target with radar or laser of a specific wavelength. Compared with radar and laser, a vision sensor has a price advantage and a wider viewing angle, and it can identify the specific content of a target while measuring its distance.
However, current traffic images are information-dense, and traditional target ranging algorithms struggle to obtain the desired result in complex images: because they cannot find the salient target in the image, they detect globally, which slows processing and introduces much irrelevant data, so the algorithms cannot meet application requirements.
Content of the invention
The purpose of the present invention is to propose a distance measurement method for a salient target in a binocular image, to solve the problem that existing target ranging methods process slowly.
The distance measurement method for a salient target in a binocular image of the present invention is realized according to the following steps:
Step 1: extract salient features from the binocular image with a visual saliency model, and mark a seed point and a background point, specifically:
Step 1.1, preprocessing: perform edge detection on the binocular image and generate the edge map of the binocular image;
Step 1.2, extract salient features from the binocular image with the visual saliency model and generate the salient-feature map;
Step 1.3, find the pixel with the maximum gray value in the salient-feature map and label it the seed point; then traverse the pixels in a 25 × 25 window centered on the seed point, and label as the background point the pixel whose gray value is below 0.1 and which lies farthest from the seed point;
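The marking rule of step 1.3 can be sketched as follows (an illustrative Python sketch; the function name and the toy saliency map are assumptions for illustration, not part of the patent):

```python
import numpy as np

def mark_points(sal, win=25, thresh=0.1):
    """Mark the seed point (global saliency maximum) and the background point:
    the pixel below `thresh` that lies farthest from the seed inside a
    win x win window centred on the seed, as described in step 1.3."""
    sr, sc = np.unravel_index(np.argmax(sal), sal.shape)   # seed point
    h = win // 2
    r0, r1 = max(sr - h, 0), min(sr + h + 1, sal.shape[0])
    c0, c1 = max(sc - h, 0), min(sc + h + 1, sal.shape[1])
    best, bg = -1.0, None
    for r in range(r0, r1):
        for c in range(c0, c1):
            if sal[r, c] < thresh:                         # dark enough
                d = (r - sr) ** 2 + (c - sc) ** 2          # squared distance to seed
                if d > best:
                    best, bg = d, (r, c)
    return (sr, sc), bg
```

The two returned pixels become the two labelled vertices that anchor the random-walk segmentation of step 3.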
Step 2: build a weighted graph over the binocular image;
The weighted graph is built with the classical Gaussian weight function:
W_ij = e^(−β(g_i − g_j)²)   (1)
where W_ij is the weight between vertex i and vertex j, g_i is the brightness of vertex i, g_j is the brightness of vertex j, β is a free parameter, and e is the natural base;
The Laplacian matrix L of the weighted graph is obtained by the following formula:
L_ij = d_i if i = j; −W_ij if i and j are adjacent vertices; 0 otherwise   (2)
where L_ij is the element of the Laplacian matrix L for vertices i and j, and d_i is the sum of the weights between vertex i and its surrounding points, d_i = Σ_j W_ij;
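Step 2 can be sketched for a 4-connected pixel grid as follows (an illustrative sketch; the function name and the choice of 4-connectivity are assumptions, since the patent does not state the neighbourhood system):

```python
import numpy as np

def grid_laplacian(img, beta=90.0):
    """Build the graph Laplacian L = D - W of a 4-connected pixel grid,
    with Gaussian edge weights W_ij = exp(-beta * (g_i - g_j)^2) as in
    formula (1); beta is the free parameter of the weight function."""
    rows, cols = img.shape
    n = rows * cols
    W = np.zeros((n, n))
    g = img.ravel().astype(float)
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((0, 1), (1, 0)):            # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    j = rr * cols + cc
                    w = np.exp(-beta * (g[i] - g[j]) ** 2)
                    W[i, j] = W[j, i] = w
    return np.diag(W.sum(1)) - W                       # L = D - W, rows sum to 0
```

By construction the matrix is symmetric and its rows sum to zero, the two properties the Dirichlet solve of step 3 relies on.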
Step 3: using the seed point and background point from step 1 and the weighted graph from step 2, segment the salient target out of the binocular image with the random-walk image segmentation algorithm;
Step 3.1, divide the pixels of the binocular image into two classes according to the seed point and background point marked in step 1, namely the marked point set V_M and the unmarked point set V_U; reorder the Laplacian matrix L according to V_M and V_U so that the marked points come first and the unmarked points follow; L is then divided into the four blocks L_M, L_U, B and B^T and expressed as follows:
L = [ L_M  B ; B^T  L_U ]   (3)
where L_M is the Laplacian block of marked points to marked points, L_U is the Laplacian block of unmarked points to unmarked points, and B and B^T are respectively the Laplacian blocks of marked points to unmarked points and of unmarked points to marked points;
Step 3.2, solve the combinatorial Dirichlet integral D[x] from the Laplacian matrix and the marked points;
The combinatorial Dirichlet integral formula is as follows:
D[x] = (1/2) xᵀ L x = (1/2) Σ_ij W_ij (x_i − x_j)²   (4)
where x is the probability matrix of the graph vertices reaching the marked points, and x_i and x_j are respectively the probabilities of vertices i and j reaching a marked point;
According to the marked point set V_M and the unmarked point set V_U, x is split into the two parts x_M and x_U, where x_M is the probability matrix corresponding to V_M and x_U the probability matrix corresponding to V_U; formula (4) decomposes into:
D[x_U] = (1/2) (x_Mᵀ L_M x_M + 2 x_Uᵀ Bᵀ x_M + x_Uᵀ L_U x_U)   (5)
For a marked point s, define the vector m^s: m_i^s = 1 if vertex i is s, otherwise m_i^s = 0. Differentiating D[x_U] with respect to x_U, the solution minimizing formula (5) gives the Dirichlet probability values of marked point s:
L_U x^s_U = −Bᵀ m^s   (6)
where x_i^s denotes the probability that vertex i reaches marked point s first;
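The linear system of formula (6) can be sketched as follows (an illustrative sketch; the function name is an assumption, and the indexing trick of slicing the unordered Laplacian stands in for the explicit reordering of step 3.1):

```python
import numpy as np

def random_walker_probs(L, marked, labels):
    """Solve the combinatorial Dirichlet problem of step 3.2: with L split
    into marked/unmarked blocks, the minimiser of D[x_U] satisfies
    L_U x_U = -B^T m^s (formula (6)).  `marked` holds the marked vertex
    indices and `labels` their indicator values (1 = seed, 0 = background)."""
    n = L.shape[0]
    unmarked = np.setdiff1d(np.arange(n), marked)
    L_U = L[np.ix_(unmarked, unmarked)]           # unmarked-to-unmarked block
    B_T = L[np.ix_(unmarked, marked)]             # B^T block
    m_s = np.asarray(labels, float)               # indicator vector of the seed label
    x_U = np.linalg.solve(L_U, -B_T @ m_s)        # seed-label probabilities
    x = np.empty(n)
    x[marked] = m_s                               # marked vertices keep their label
    x[unmarked] = x_U
    return x
```

On a 3-vertex path graph with unit weights and the two endpoints marked, the middle vertex correctly receives probability 0.5 of reaching the seed first.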
Threshold the x^s obtained from the combinatorial Dirichlet integral according to formula (7) to generate the segmentation map:
s_i = 1 if x_i^s ≥ 0.5, otherwise s_i = 0   (7)
where s_i is the pixel value at the position corresponding to vertex i in the segmentation map; pixels of brightness 1 in the segmentation map represent the salient target in the image, and pixels of brightness 0 the background;
Step 3.3, multiply the segmentation map pixel-by-pixel with the original image to generate the target map, i.e. extract the segmented salient target, with the following formula:
t_i = s_i · I_i   (8)
where t_i is the gray value of vertex i in the target map T, and I_i is the gray value at the corresponding position i of the input image I(σ);
Step 4: match key points on the salient target alone using the SIFT algorithm;
Step 4.1, build a Gaussian pyramid from the target map and subtract adjacent filtered images pairwise to obtain the DOG (difference-of-Gaussians) images; a DOG image is defined as D(x, y, σ), given by:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * T(x, y) = C(x, y, kσ) − C(x, y, σ)   (9)
where G(x, y, σ) = (1 / 2πσ²) e^(−((x − p/2)² + (y − q/2)²) / 2σ²) is a variable-scale Gaussian function, p and q are the dimensions of the Gaussian template, (x, y) is the position of a pixel in the Gaussian-pyramid image, σ is the scale-space factor of the image, k denotes a particular fixed scale ratio, and C(x, y, σ) is defined as the convolution of G(x, y, σ) with the target map T(x, y), i.e. C(x, y, σ) = G(x, y, σ) * T(x, y);
Step 4.2, obtain extreme points among adjacent DOG images, determine the position and scale of each extreme point by fitting a three-dimensional quadratic function to use it as a key point, and test the stability of the key point with the Hessian matrix to eliminate edge responses, specifically:
(1) obtain the curve fit D(X) by Taylor expansion of the scale space DOG:
D(X) = D + (∂D/∂X)ᵀ X + (1/2) Xᵀ (∂²D/∂X²) X   (10)
where X = (x, y, σ)ᵀ and D is the curve fit; differentiating formula (10) and setting the derivative to 0 gives the offset of the extreme point, formula (11):
X̂ = −(∂²D/∂X²)⁻¹ (∂D/∂X)   (11)
To remove low-contrast extreme points, substitute formula (11) into formula (10) to obtain formula (12):
D(X̂) = D + (1/2) (∂D/∂X)ᵀ X̂   (12)
If the value of formula (12) exceeds 0.03, retain the extreme point and record its exact position and scale; otherwise discard it;
(2) eliminate unstable key points by screening with the Hessian matrix at each key point:
compute the curvature from the ratio between the eigenvalues of the Hessian matrix, and judge edge points from the curvature of the key-point neighbourhood; the curvature ratio is set to 10: key points whose ratio exceeds 10 are deleted, and those that remain are the stable key points;
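The two stability tests of step 4.2 can be sketched as follows (an illustrative sketch; it uses the raw DOG value as the contrast proxy instead of the full 3-D quadratic fit of formulas (10)-(12), and the function name is an assumption):

```python
import numpy as np

def is_stable(D, r, c, contrast_thresh=0.03, ratio=10.0):
    """Stability tests of step 4.2 on one DOG image D: keep an extremum at
    (r, c) only if its contrast exceeds 0.03 and the 2x2 spatial Hessian
    passes the edge test Tr(H)^2 / Det(H) < (ratio+1)^2 / ratio, ratio = 10."""
    if abs(D[r, c]) <= contrast_thresh:            # low-contrast rejection
        return False
    dxx = D[r, c + 1] - 2 * D[r, c] + D[r, c - 1]  # finite-difference Hessian
    dyy = D[r + 1, c] - 2 * D[r, c] + D[r - 1, c]
    dxy = (D[r + 1, c + 1] - D[r + 1, c - 1]
           - D[r - 1, c + 1] + D[r - 1, c - 1]) / 4.0
    tr, det = dxx + dyy, dxx * dyy - dxy * dxy
    if det <= 0:                                   # curvatures of opposite sign
        return False
    return tr * tr / det < (ratio + 1) ** 2 / ratio
```

An isolated blob passes both tests, while a ridge (an edge response, curved in one direction only) is rejected.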
Step 4.3, assign a direction parameter to each key point using the pixels of a 16 × 16 window in the key-point neighbourhood;
For a key point detected in the DOG images, the gradient magnitude and direction are computed as:
m(x, y) = sqrt((C(x+1, y) − C(x−1, y))² + (C(x, y+1) − C(x, y−1))²)   (13)
θ(x, y) = arctan((C(x, y+1) − C(x, y−1)) / (C(x+1, y) − C(x−1, y)))   (14)
where C is the scale space in which the key point lies, m is the gradient magnitude and θ the gradient direction of the point;
Centered on the key point, a 16 × 16 neighbourhood is delimited, the gradient magnitude and direction of every pixel in it are obtained, and the gradients of the neighbourhood are accumulated into a histogram; the abscissa of the histogram is the direction, dividing 360 degrees into 36 parts of 10 degrees each, and the ordinate is the gradient magnitude, obtained by adding the magnitudes of the points falling into each direction bin; the principal direction is defined as the bin direction whose accumulated magnitude is the maximum hm, and bins whose magnitude reaches 0.8·hm are kept as auxiliary directions of the principal direction to strengthen the stability of matching;
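The orientation histogram described above can be sketched as follows (an illustrative sketch; the function name and the convention of reporting each kept bin by its 5-degree centre are assumptions):

```python
import numpy as np

def orientations(mags, thetas, bins=36):
    """Orientation assignment of step 4.3: accumulate gradient magnitudes
    into a 36-bin (10-degree) histogram over the keypoint neighbourhood;
    the peak bin gives the principal direction, and any bin reaching
    0.8 * hm is kept as an auxiliary direction."""
    hist = np.zeros(bins)
    for m, t in zip(mags, thetas):
        hist[int(t % 360) // (360 // bins)] += m   # 10-degree bins
    hm = hist.max()                                # height of the principal bin
    return [b * 10 + 5 for b in range(bins) if hist[b] >= 0.8 * hm]
```

Two gradients near 13 degrees dominate a weaker one at 200 degrees, so only the 10-20 degree bin survives the 0.8·hm cut.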
Step 4.4, build a descriptor to state the local feature information of each key point.
First rotate the coordinates around the key point to the direction of the key point; then take the 16 × 16 window around the key point and divide the neighbourhood into sixteen 4 × 4 small windows; in each 4 × 4 small window compute the magnitude and direction of the corresponding gradients, and accumulate the gradient information of the small window into a histogram of 8 bins; the descriptor over the 16 × 16 window around the key point is computed with a Gaussian weighting, as follows:
h = m_g(a + x, b + y) · e^(−((x′ − a)² + (y′ − b)²) / (2(0.5d)²))   (15)
where h is the descriptor entry, (a, b) is the position of the key point in the Gaussian-pyramid image, m_g is the gradient magnitude of the key point, i.e. the magnitude of the histogram principal direction of step 4.3, d = 16 is the side length of the window, (x, y) is the position of a pixel in the Gaussian-pyramid image, and (x′, y′) is the new coordinate of the pixel in the neighbourhood after the coordinates are rotated to the direction of the key point; the new coordinates are computed as:
x′ = x·cos θ_g − y·sin θ_g,  y′ = x·sin θ_g + y·cos θ_g   (16)
where θ_g is the gradient direction of the key point;
The computation over the 16 × 16 window yields a 128-dimensional key-point feature vector, denoted H = (h_1, h_2, h_3, ..., h_128); the feature vector is normalized, the normalized vector being denoted L_g:
l_i = h_i / sqrt(Σ_{j=1}^{128} h_j²)
where L_g = (l_1, l_2, ..., l_i, ..., l_128) is the normalized key-point feature vector and l_i, i = 1, 2, 3, ..., 128, is one normalized component;
The Euclidean distance between key-point feature vectors is used as the decision metric of key-point similarity in the binocular image; the key points of the two images are matched, and the pixel coordinates of each pair of mutually matched key points form one group of key information;
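The normalization and Euclidean matching can be sketched as follows (an illustrative sketch; the function name is an assumption, and the descriptors are taken as given rather than computed from the windows of step 4.4):

```python
import numpy as np

def match_keypoints(H_left, H_right):
    """Step 4.4 matching: each descriptor is L2-normalised (the L_g vector)
    and every left key point is paired with the right key point at minimum
    Euclidean distance, the similarity metric of the method."""
    def normalise(H):
        H = np.asarray(H, float)
        return H / np.linalg.norm(H, axis=1, keepdims=True)
    L, R = normalise(H_left), normalise(H_right)
    pairs = []
    for i, v in enumerate(L):
        d = np.linalg.norm(R - v, axis=1)          # Euclidean distances to all right descriptors
        pairs.append((i, int(np.argmin(d))))       # nearest neighbour match
    return pairs
```

Because of the normalisation, a right descriptor that is a scaled copy of a left one still matches it exactly.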
Step 4.5, screen the generated matched key points;
Obtain the horizontal coordinate disparity of each pair of key points and generate the disparity matrix, defined as K_n = {k_1, k_2, ..., k_n}, where n is the number of matched pairs and k_1, k_2, ..., k_n are the disparities of the individual matched points;
Obtain the median k_m of the disparity matrix and form the reference disparity matrix, denoted K_n′:
K_n′ = {k_1 − k_m, k_2 − k_m, ..., k_n − k_m}   (17)
Set the disparity threshold to 3 and delete the disparities whose entry in K_n′ exceeds the threshold, obtaining the final screened matrix K′, where k_{1′}, k_{2′}, ..., k_{n′} are the disparities of the correctly matched points after screening and n′ is the final number of correct matches:
K′ = {k_{1′}, k_{2′}, ..., k_{n′}}   (18)
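The screening of formulas (17) and (18) can be sketched as follows (an illustrative sketch; the function name is an assumption):

```python
import numpy as np

def screen_disparities(K, thresh=3.0):
    """Step 4.5 screening: subtract the median disparity k_m from every
    match (formula (17)) and drop matches whose deviation from the median
    exceeds the threshold 3, leaving the final disparity matrix K'
    (formula (18))."""
    K = np.asarray(K, float)
    km = np.median(K)                              # reference disparity k_m
    return K[np.abs(K - km) <= thresh]             # keep near-median matches
```

A gross outlier such as a disparity of 40 among values near 10 deviates far from the median and is removed, while the consistent matches survive.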
Step 5: substitute the disparity matrix K′ obtained in step 4 into the binocular ranging model to obtain the distance of the salient target;
Two identical imaging systems of equal focal length are set a distance J apart in the horizontal direction, both optical axes parallel to the horizontal plane and the image planes parallel to the vertical plane;
Suppose a target point M(X, Y, Z) in the scene has imaging points Pl(x_1, y_1) and Pr(x_2, y_2) in the left and right images respectively, where x_1, y_1 and x_2, y_2 are the coordinates of Pl and Pr in the vertical imaging plane; the disparity in the binocular model is defined as k = |pl − pr| = x_2 − x_1, and the range formula follows from similar triangles, X, Y, Z being the horizontal-axis, vertical-axis and longitudinal-axis coordinates of the spatial coordinate system:
z = (f · J) / (k · dx′)   (19)
where dx′ is the physical distance in the horizontal-axis direction of one pixel on the film of the imaging system, f is the focal length of the imaging system, and z is the distance from the target point M to the line of the two imaging centers; the disparity matrix obtained in step 4 is substituted into formula (19), and the corresponding distance matrix Z′ = {z_1, z_2, ..., z_n′} is obtained from the physical information of the binocular model, z_1, z_2, ..., z_n′ being the salient-target distances obtained from the individual matched disparities; finally the mean of the distance matrix is the distance Z_f of the salient target in the binocular image:
Z_f = (1/n′) Σ_{i=1}^{n′} z_i   (20)
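Step 5 can be sketched as follows (an illustrative sketch; the function name and the sample focal length, baseline and pixel pitch are assumptions for illustration):

```python
def target_distance(K_prime, f, J, dx):
    """Step 5: z_i = f * J / (k_i * dx) from the similar-triangle relation of
    the binocular model (formula (19)); the salient-target distance Z_f is
    the mean of the distance matrix (formula (20)).  f is the focal length,
    J the baseline, dx the physical pixel size, K_prime the screened
    disparities in pixels."""
    zs = [f * J / (k * dx) for k in K_prime]       # per-match distances z_i
    return sum(zs) / len(zs)                       # Z_f, the mean distance
```

With an 8 mm lens, a 0.3 m baseline and 10 µm pixels, disparities of 2 and 4 pixels give 120 m and 60 m, averaging to 90 m.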
The beneficial effects of the invention are:
1. The present invention extracts the region a human eye is interested in; because the algorithm extracts the salient target by simulating the human visual system, its result is substantially consistent with human detection, so the invention can automatically recognize the same salient target a human eye would.
2. The present invention completes salient-target ranging automatically, with no need to select the salient target by hand.
3. The present invention matches within one and the same target, which guarantees that the disparity results of the key-point matches are close to each other and allows mismatched points to be screened out effectively; matching accuracy approaches 100% and the relative error of the disparity is below 2%, which increases ranging accuracy.
4. The present invention uses less matching information, effectively cutting extra irrelevant matching computation by at least 75% and reducing the introduction of irrelevant data, with matched-data utilization above 90%, so salient-target ranging can be achieved in a complex image environment and image-processing efficiency is improved.
5. The present invention measures the distance of the salient target in the forward-view image while an intelligent vehicle is driving, providing key information for safe driving; it overcomes the shortcoming that traditional image ranging can only perform depth detection on the whole picture, and well avoids the problems of large error and excessive noise.
6. The present invention realizes segmentation of the salient target through salient-feature extraction from the binocular image, which shrinks the target range, shortens the matching time and raises efficiency; matching the key points of the salient target yields the disparity and hence the distance measurement, and because the target lies on one vertical plane, erroneously matched key points can be screened out well and precision improved; the method can quickly recognize a salient target and accurately measure its distance.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention;
Fig. 2 is the flow chart of the visual saliency analysis;
Fig. 3 is the flow chart of the random walk algorithm;
Fig. 4 is the flow chart of the SIFT algorithm;
Fig. 5 is the binocular measuring system: X, Y, Z are the defined spatial coordinates, M is a point in space, Pl and Pr are the imaging points of M in the two image planes, and f is the focal length of the imaging systems.
Embodiment
The embodiment of the present invention is further described below with reference to the accompanying drawings.
Embodiment one: this embodiment is illustrated with reference to Fig. 1 to Fig. 5; the method of this embodiment comprises the following steps:
Step 1: extract salient features from the binocular image with a visual saliency model, and mark a seed point and a background point, specifically:
Salient features are extracted from the binocular image with the visual saliency model: the brightness, color and direction salient features of every pixel of the binocular image are computed separately, and the three salient features are normalized and fused into the weighted saliency map of the image. Each pixel in the saliency map represents the saliency of the corresponding position in the image. The point with the maximum pixel value in the map, i.e. the point of strongest saliency, is found and recorded as the seed point; the range around the seed point is then progressively expanded to find the point of weakest saliency, which is recorded as the background point. The flow of extracting image saliency with the visual saliency model is shown in Fig. 2.
Step 1.1, preprocessing: perform edge detection on the binocular image and generate the edge map of the binocular image; edge information is important saliency information of the image;
Step 1.2, extract salient features from the binocular image with the visual saliency model and generate the salient-feature map;
Step 1.3, find the pixel of maximum brightness in the salient-feature map and label it the seed point; then traverse the pixels in a 25 × 25 window centered on the seed point, and label as the background point the pixel whose gray value is below 0.1 and which lies farthest from the seed point;
Step 2: build a weighted graph over the binocular image;
The weighted graph is built with the classical Gaussian weight function: the gray difference between a pixel and each of its surrounding pixels defines the weight assigned to the edge between them, each pixel is taken as a vertex, and a weighted graph containing the vertices and edges is established;
Following graph theory, the entire image is regarded as an undirected weighted graph whose vertices are the pixels; the edges of the weighted graph are weighted with the gray values of the pixels using the classical Gaussian weight function, specifically:
W_ij = e^(−β(g_i − g_j)²)   (1)
where W_ij is the weight between vertices i and j, g_i is the brightness of pixel i, g_j the brightness of pixel j, β is a free parameter, and e the natural base;
The Laplacian matrix L of the weighted graph is obtained by the following formula:
L_ij = d_i if i = j; −W_ij if i and j are adjacent vertices; 0 otherwise   (2)
where L_ij is the element of the Laplacian matrix L for vertices i and j, and d_i is the sum of the weights between vertex i and its surrounding points, d_i = Σ_j W_ij;
Step 3: using the seed point and background point from step 1 and the weighted graph from step 2, segment the salient target out of the binocular image with the random-walk image segmentation algorithm;
Step 3.1, divide the pixels of the binocular image into two classes according to the seed point and background point marked in step 1, namely the marked point set V_M and the unmarked point set V_U; reorder the Laplacian matrix L according to V_M and V_U so that the marked points come first and the unmarked points follow; L is then divided into the four blocks L_M, L_U, B and B^T and expressed as follows:
L = [ L_M  B ; B^T  L_U ]   (3)
where L_M is the Laplacian block of marked points to marked points, L_U the Laplacian block of unmarked points to unmarked points, and B and B^T respectively the Laplacian blocks of marked points to unmarked points and of unmarked points to marked points;
Step 3.2, solve the combinatorial Dirichlet integral D[x] from the Laplacian matrix and the marked points;
The combinatorial Dirichlet integral formula is as follows:
D[x] = (1/2) xᵀ L x = (1/2) Σ_ij W_ij (x_i − x_j)²   (4)
where x is the probability matrix of the graph vertices reaching the marked points, x_i and x_j being respectively the probabilities of vertices i and j reaching a marked point;
According to the marked point set V_M and the unmarked point set V_U, x is split into the two parts x_M and x_U, x_M being the probability matrix corresponding to V_M and x_U that corresponding to V_U; formula (4) decomposes into:
D[x_U] = (1/2) (x_Mᵀ L_M x_M + 2 x_Uᵀ Bᵀ x_M + x_Uᵀ L_U x_U)   (5)
Define the vector m^s for marked point s: m_i^s = 1 if vertex i is s, otherwise m_i^s = 0; differentiating D[x_U] with respect to x_U, the solution minimizing formula (5) gives the Dirichlet probability values of marked point s:
L_U x^s_U = −Bᵀ m^s   (6)
where x_i^s denotes the probability that vertex i reaches marked point s first;
Threshold the x^s obtained by the combinatorial Dirichlet integral according to formula (7) to generate the segmentation map:
s_i = 1 if x_i^s ≥ 0.5, otherwise s_i = 0   (7)
where s_i is the pixel value at the position of vertex i in the segmentation map; pixels of brightness 1 in the segmentation map represent the salient target in the image, and pixels of brightness 0 the background;
Step 3.3, multiply the segmentation map pixel-by-pixel with the original image to generate the target map, i.e. extract the segmented salient target, with the following formula:
t_i = s_i · I_i   (8)
where t_i is the gray value at position i of the target map T, and I_i the gray value at the corresponding position i of the input image I(σ);
Step 4: match key points on the salient target alone using the SIFT algorithm;
Key-point detection and matching are carried out on the segmented salient target alone by the SIFT algorithm; the obtained matching coordinates are screened, the erroneous matching results are rejected, and the correct matching results are kept. The flow of matching the binocular image with the SIFT algorithm is shown in Fig. 4.
Step 4.1, build a Gaussian pyramid from the target map and subtract adjacent filtered images pairwise to obtain the DOG images; a DOG image is defined as D(x, y, σ), given by:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * T(x, y) = C(x, y, kσ) − C(x, y, σ)   (9)
where G(x, y, σ) = (1 / 2πσ²) e^(−((x − p/2)² + (y − q/2)²) / 2σ²) is a variable-scale Gaussian function, p and q are the dimensions of the Gaussian template, (x, y) is the position of a pixel in the Gaussian-pyramid image, σ is the scale-space factor of the image, k denotes a particular fixed scale ratio, and C(x, y, σ) is defined as the convolution of G(x, y, σ) with the target map T(x, y), i.e. C(x, y, σ) = G(x, y, σ) * T(x, y);
Step 4.2, obtain extreme points among adjacent DOG images, determine the position and scale of each extreme point by fitting a three-dimensional quadratic function to use it as a key point, and test the stability of the key point with the Hessian matrix to eliminate edge responses, specifically:
Key points are composed of the local extreme points of the DOG images. Each point of a DOG image is traversed and compared with its 8 adjacent points at the same scale and the 2 × 9 points of the scales immediately above and below, 26 points in all; a point that is larger than all of these neighbours, or smaller than all of them, is an extreme point.
The extreme points so obtained are not yet true key points; to improve stability:
(1) obtain the curve fit D(X) by Taylor expansion of the scale space DOG:
D(X) = D + (∂D/∂X)ᵀ X + (1/2) Xᵀ (∂²D/∂X²) X   (10)
where X = (x, y, σ)ᵀ and D is the curve fit; differentiating formula (10) and setting the derivative to 0 gives the offset of the extreme point, formula (11):
X̂ = −(∂²D/∂X²)⁻¹ (∂D/∂X)   (11)
To remove low-contrast extreme points, substitute formula (11) into formula (10) to obtain formula (12):
D(X̂) = D + (1/2) (∂D/∂X)ᵀ X̂   (12)
If the value of formula (12) exceeds 0.03, the extreme point is retained together with its exact position (the original position plus the fitted offset) and its scale; otherwise it is discarded.
(2) unstable key points are eliminated by screening with the Hessian matrix at each key point:
the curvature is computed from the ratio between the eigenvalues of the Hessian matrix, and edge points are judged from the curvature of the key-point neighbourhood; the curvature ratio is set to 10: key points whose ratio exceeds 10 are deleted, the remainder being the stable key points.
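The 26-neighbour extremum test described above can be sketched as follows (an illustrative sketch; the function name and the stacked-cube indexing are assumptions):

```python
import numpy as np

def is_extremum(dogs, s, r, c):
    """26-neighbour test of step 4.2: a DOG sample is an extreme point if it
    is strictly larger (or strictly smaller) than its 8 neighbours at the
    same scale and the 9 neighbours in each of the two adjacent DOG images."""
    v = dogs[s][r, c]
    cube = np.stack([d[r - 1:r + 2, c - 1:c + 2] for d in dogs[s - 1:s + 2]])
    others = np.delete(cube.ravel(), 13)           # drop the centre sample itself
    return bool(v > others.max() or v < others.min())
```

A lone peak in the middle DOG image passes the test; a flat region fails, since equality with the neighbours is not a strict extremum.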
Step 4.3, after the position and scale of each key point are determined, a direction must be assigned to it so that the key-point descriptor can be defined relative to this direction. A direction parameter is assigned to each key point using the pixels of a 16 × 16 window in the key-point neighbourhood;
For a key point detected in the DOG images, the gradient magnitude and direction are computed as:
m(x, y) = sqrt((C(x+1, y) − C(x−1, y))² + (C(x, y+1) − C(x, y−1))²)   (13)
θ(x, y) = arctan((C(x, y+1) − C(x, y−1)) / (C(x+1, y) − C(x−1, y)))   (14)
where C is the scale space in which the key point lies, m is the gradient magnitude and θ the gradient direction of the key point;
Centered on the key point, a neighbourhood is delimited in the surrounding region and the gradients of the points in this neighbourhood are accumulated into a histogram. The abscissa of the histogram is the direction, dividing 360 degrees into 36 parts of 10 degrees each; the ordinate is the gradient magnitude, obtained by adding the magnitudes of the points falling into each direction bin. The principal direction is defined as the bin direction whose accumulated magnitude is the maximum hm; other bins whose height reaches 0.8·hm serve as auxiliary directions of the principal direction to strengthen the stability of matching.
Step 4.4, after the stages above, every detected key point carries three pieces of information: position, direction and scale. A descriptor is built for each key point to state its local feature information.
First the coordinates around the key point are rotated to the direction of the key point. Then the 16 × 16 window around the key point is taken and the neighbourhood is divided into sixteen 4 × 4 small windows; in each 4 × 4 small window the magnitude and direction of the corresponding gradients are computed, and the gradient information of each small window is accumulated into a histogram of 8 bins. The descriptor over the 16 × 16 window around the key point is computed with a Gaussian weighting, as follows:
h = m_g(a + x, b + y) · e^(−((x′ − a)² + (y′ − b)²) / (2(0.5d)²))   (15)
where h is the descriptor entry, (a, b) is the position of the key point in the Gaussian-pyramid image, d = 16 is the side length of the window, (x, y) is the position of a pixel in the Gaussian-pyramid image, and (x′, y′) is the new coordinate of the pixel in the neighbourhood after the coordinates are rotated to the direction of the key point; the new coordinates are computed as:
x′ = x·cos θ − y·sin θ,  y′ = x·sin θ + y·cos θ   (16)
where θ is the direction of the key point.
The computation over the 16 × 16 window yields a 128-dimensional key-point feature vector, denoted H = (h_1, h_2, h_3, ..., h_128); to reduce the influence of illumination, the feature vector is normalized, the result being denoted L_g:
l_i = h_i / sqrt(Σ_{j=1}^{128} h_j²)
where L_g = (l_1, l_2, l_3, ..., l_128) is the normalized key-point feature vector;
After the descriptors of the key points of both images of the binocular pair are generated, the Euclidean distance between key-point feature vectors is used as the decision metric of key-point similarity in the binocular image to match the key points of the two images; the pixel coordinates of each pair of mutually matched key points form one group of key information;
Step 4-5: to avoid errors as far as possible, the matched key points are screened;
Because the measuring system follows the binocular model, each pair of matched key points of the salient target lies on the same horizontal plane in the two images, so the height difference of each pair of key points is theoretically equal. The horizontal parallax of the coordinates of each pair of key points is therefore obtained and the parallax matrix is generated, defined as K_n = {k_1, k_2, ..., k_n}, where n is the number of matched pairs and k_1, k_2, ..., k_n are the parallaxes of the individual matched points;
The median k_m of the parallax matrix is taken, and the reference parallax matrix, denoted K_n', is obtained by the following formula:
K_n' = {k_1 - k_m, k_2 - k_m, ..., k_n - k_m}
The parallax threshold is set to 3; the parallaxes whose entries in K_n' exceed the threshold are deleted, giving the final screening result K', which avoids the interference of falsely matched key points. k_1', k_2', ..., k_n' are the parallaxes of the correct matched points after screening, and n' is the final number of correct matches:
K' = {k_1', k_2', ..., k_n'}
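The median-based screening of step 4-5 can be sketched as follows; taking the absolute deviation from the median is an interpretive assumption, since the signs in the patent's formula were lost in extraction.

```python
import statistics

def screen_disparities(disparities, threshold=3.0):
    """Step 4-5 sketch: discard matches whose disparity deviates from the
    median by more than `threshold` pixels (the patent sets it to 3)."""
    k_m = statistics.median(disparities)  # median k_m of the parallax matrix
    # keep k_i whose entry in the reference matrix K_n' is within threshold
    return [k for k in disparities if abs(k - k_m) <= threshold]
```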
Step 5: the parallax matrix K' obtained in step 4 is substituted into the binocular ranging model to obtain the distance of the salient target;
Subtracting the coordinates of the key points matched on the salient target gives the parallax of the salient target in the binocular images; the parallax is substituted into the binocular ranging model to obtain the salient-target distance.
Binocular imaging obtains two images of the same scene from different viewing angles; the binocular model is shown in Fig. 5.
The two identical imaging systems of equal focal length are separated horizontally by the distance B, the two optical axes are each parallel to the horizontal plane, and the image planes are parallel to the vertical plane;
Assume a point M (X, Y, Z) in the scene whose imaging points in the left and right images are Pl (x_1, y_1) and Pr (x_2, y_2) respectively, where x_1, y_1 and x_2, y_2 are the coordinates of Pl and Pr in the imaging plane. The parallax in the binocular model is defined as k = |pl - pr| = x_2 - x_1, and the range formula is obtained from triangle similarity, X, Y, Z being the coordinates of the horizontal, vertical and depth axes of the space coordinate system:
Wherein dx represents the physical distance in the horizontal direction of each pixel on the imaging sensor, f is the focal length of the imaging system, and z is the distance from the target point M to the line joining the two imaging centers. The parallax matrix obtained in step 4 is substituted into formula (17), and the corresponding distance matrix Z' = {z_1, z_2, ..., z_n'} is obtained from the physical parameters of the binocular model, where z_1, z_2, ..., z_n' are the salient-target distances obtained from the individual matched parallaxes. Finally the average of the distance matrix gives the distance Z_f of the salient target in the binocular image, by the following formula:
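The triangulation and averaging of step 5 can be sketched as below; the formula z = f·B/(k·dx), with k in pixels and dx the pixel pitch, follows the standard parallel binocular model the text describes, and all parameter names and values are illustrative.

```python
def ranging_distance(disparities_px, f_mm, baseline_mm, dx_mm):
    """Step 5 sketch: for each screened disparity k (in pixels),
    z = f * B / (k * dx); the target distance Z_f is the average of the
    distance matrix Z' over all correct matches."""
    zs = [f_mm * baseline_mm / (k * dx_mm) for k in disparities_px]
    return sum(zs) / len(zs)
```

For example, with a 4 mm focal length, 60 mm baseline and 6 µm pixels, a 10-pixel disparity corresponds to a range of 4 m.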
Embodiment two: this embodiment is described with reference to the figures. It differs from embodiment one in the detailed process of performing edge detection on the image in step 1-1:
Step 1-1-1: a convolution with a 2D Gaussian filter template is applied to the binocular image to eliminate image noise;
Step 1-1-2: using first-order partial differences in the horizontal and vertical directions, the gradient magnitude and gradient direction of the pixels of the filtered binocular image I (x, y) are computed; the partial derivatives dx and dy in the x and y directions are respectively:
dx = [I (x+1, y) - I (x-1, y)]/2 (21)
dy = [I (x, y+1) - I (x, y-1)]/2 (22)
Then the gradient magnitude is:
D' = (dx^2 + dy^2)^(1/2) (23)
and the gradient direction is:
θ' = arctan (dy/dx) (24);
D' and θ' denote the gradient magnitude and gradient direction of a pixel of the filtered binocular image I (x, y);
Step 1-1-3: non-maxima suppression is applied to the gradient, then dual-threshold processing is applied to the image to generate the edge image; in the edge image, edge points have gray value 255 and non-edge points have gray value 0.
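Formulas (21) to (24) can be sketched directly with central differences; border pixels are left at zero here, a simplification not specified by the patent.

```python
import numpy as np

def gradient_magnitude_direction(I):
    """Central-difference gradients of formulas (21)-(24): first-order
    partials dx, dy, magnitude D' = sqrt(dx^2 + dy^2), direction
    arctan(dy/dx). Border pixels are left at zero for simplicity."""
    I = I.astype(float)
    dx = np.zeros_like(I)
    dy = np.zeros_like(I)
    dx[1:-1, :] = (I[2:, :] - I[:-2, :]) / 2.0   # partial in the x direction
    dy[:, 1:-1] = (I[:, 2:] - I[:, :-2]) / 2.0   # partial in the y direction
    mag = np.hypot(dx, dy)                       # gradient magnitude D'
    direction = np.arctan2(dy, dx)               # gradient direction theta'
    return mag, direction
```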
Embodiment three: this embodiment is described with reference to the figures. It differs from embodiment one or two in the detailed process of step 1-2, performing salient-feature extraction on the binocular image with the visual saliency model and generating the salient-feature map:
Step 1-2-1: after edge detection of the binocular image, the original image and the edge image are superposed:
I_1(σ) = 0.7I (σ) + 0.3C (σ) (25)
Wherein I (σ) is the original input binocular image, C (σ) is the edge image, and I_1(σ) is the superposed image;
Step 1-2-2: a nine-level Gaussian pyramid of the superposed image is computed with the Gaussian difference function, where level 0 is the input superposed image and levels 1 to 8 are each formed from the previous level by Gaussian filtering and down-sampling, their sizes corresponding to 1/2 to 1/256 of the input image; intensity, color and orientation features are extracted from each level of the Gaussian pyramid, generating the corresponding intensity pyramid, color pyramid and orientation pyramid;
The intensity-extraction formula is as follows:
I_n = (r+g+b)/3 (26)
wherein r, g, b are the red, green and blue components of the input binocular image color, and I_n is the intensity;
The color-feature extraction formulas are as follows:
R = r - (g+b)/2 (27)
G = g - (r+b)/2 (28)
B = b - (r+g)/2 (29)
Y = (r+g)/2 - |r-g|/2 - b (30)
R, G, B, Y correspond to the color components of the superposed image;
O (σ, ω) is the orientation feature extracted by Gabor filtering of the intensity I_n at scale σ, where σ is the Gaussian-pyramid level and ω is the direction of the Gabor function, with σ ∈ {0, 1, 2, ..., 8} and ω ∈ {0°, 45°, 90°, 135°};
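The channel extraction of formulas (26) to (30) can be sketched as follows; the yellow channel uses the standard Itti-style definition, which the garbled formula (30) appears to correspond to, so treat it as an interpretive assumption.

```python
import numpy as np

def feature_channels(img):
    """Formulas (26)-(30): intensity and broadly tuned color channels from
    the r, g, b planes of the superposed image (img shape (H, W, 3))."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    In = (r + g + b) / 3.0                        # intensity, formula (26)
    R = r - (g + b) / 2.0                         # red, formula (27)
    G = g - (r + b) / 2.0                         # green, formula (28)
    B = b - (r + g) / 2.0                         # blue, formula (29)
    Y = (r + g) / 2.0 - np.abs(r - g) / 2.0 - b   # yellow, assumed Itti form
    return In, R, G, B, Y
```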
Step 1-2-3: center-surround difference comparisons are performed on the intensity, color and orientation features of the different scales of the obtained Gaussian pyramids, specifically:
Let scale c (c ∈ {2,3,4}) be the center scale and scale u (u = c + δ, δ ∈ {3,4}) the surround scale; in the 9-level Gaussian pyramid there are 6 combinations of center scale c and surround scale u (2-5, 2-6, 3-6, 3-7, 4-7, 4-8);
The center-surround differences are represented by the differences of the feature maps at scales c and u, as in the following formulas:
I_n(c, u) = |I_n(c) - I_n(u)| (31)
RG (c, u) = |(R (c) - G (c)) - (G (u) - R (u))| (32)
BY (c, u) = |(B (c) - Y (c)) - (Y (u) - B (u))| (33)
O (c, u, ω) = |O (c, ω) - O (u, ω)| (34)
Wherein, before taking the difference, the two maps must be brought to the same size by interpolation;
Step 1-2-4: the difference feature maps of the different features are normalized and merged to generate the salient-feature map of the input binocular images, specifically:
First, the scale-contrast feature maps of each feature are normalized and fused into the comprehensive map of that feature: the intensity normalized map, the color normalized map and the orientation normalized map; the calculation is shown in the following formulas:
Wherein N (·) denotes the normalization function: for the feature map to be calculated, the feature value of every pixel is first normalized into a closed range [0, 255]; then the global maximum saliency value A is found in the normalized map and the average a of its other local maxima is computed; finally the feature value of every pixel is multiplied by (A - a)^2;
The comprehensive maps of the features are then normalized again and combined to obtain the final salient-feature map S; the calculation is as follows:
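The N(·) operator of step 1-2-4 can be sketched as below; detecting local maxima over 3 × 3 neighborhoods is a simplifying assumption, as the patent does not specify the neighborhood size.

```python
import numpy as np

def normalize_map(fmap):
    """Sketch of the N(.) operator: scale the map to [0, 255], find the
    global maximum A and the average a of the remaining local maxima
    (3x3-neighborhood peaks, a simplifying assumption), then multiply by
    (A - a)^2 so maps with one strong peak are promoted."""
    m = fmap.astype(float)
    rng = m.max() - m.min()
    if rng == 0:
        return np.zeros_like(m)          # flat map carries no saliency
    m = (m - m.min()) / rng * 255.0      # normalize into [0, 255]
    c = m[1:-1, 1:-1]                    # interior pixels
    neigh = np.stack([m[i:i + c.shape[0], j:j + c.shape[1]]
                      for i in range(3) for j in range(3)])
    peaks = c[c == neigh.max(axis=0)]    # 3x3 local maxima
    A = m.max()
    others = peaks[peaks < A]            # local maxima other than the global one
    a = others.mean() if others.size else 0.0
    return m * (A - a) ** 2
```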
Claims (3)
1. A distance measurement method for a salient target in a binocular image, characterised in that the method comprises the following steps:
Step 1: performing salient-feature extraction on the binocular image with a visual saliency model, and marking seed points and background points, specifically including:
Step 1-1, preprocessing: first, edge detection is performed on the binocular image to generate the edge map of the binocular image;
Step 1-2: salient-feature extraction is performed on the binocular image with the visual saliency model to generate the salient-feature map;
Step 1-3: the pixel with the maximum gray value is found from the salient-feature map and marked as the seed point; the pixels in the 25 × 25 window centered on the seed point are traversed, and the pixel whose gray value is below 0.1 and which is farthest from the seed point is marked as the background point;
Step 2: building a weighted graph on the binocular image;
The weighted graph is built on the binocular image with the classical Gaussian weight function:
W_ij = e^(-β(g_i - g_j)^2)
Wherein W_ij represents the weight between vertex i and vertex j, g_i the brightness of vertex i, g_j the brightness of vertex j, β is a free parameter, and e the natural base;
The Laplacian matrix L of the weighted graph is obtained by the following formula:
L_ij = d_i if i = j; -W_ij if vertices i and j are adjacent; 0 otherwise
Wherein L_ij is the element of L corresponding to vertices i and j, and d_i is the sum of the weights between vertex i and its surrounding points, d_i = ΣW_ij;
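Step 2 can be sketched on a 4-connected pixel lattice as follows; the value of the free parameter β is illustrative only.

```python
import numpy as np

def graph_laplacian(gray, beta=90.0):
    """Step 2 sketch: 4-connected lattice graph over pixel brightness with
    Gaussian weights W_ij = exp(-beta * (g_i - g_j)^2) and Laplacian
    L = D - W, where d_i is the sum of the weights around vertex i."""
    h, w = gray.shape
    n = h * w
    W = np.zeros((n, n))
    idx = lambda x, y: x * w + y          # flatten pixel index
    for x in range(h):
        for y in range(w):
            for du, dv in ((0, 1), (1, 0)):   # right and down neighbors
                u, v = x + du, y + dv
                if u < h and v < w:
                    wt = np.exp(-beta * (gray[x, y] - gray[u, v]) ** 2)
                    W[idx(x, y), idx(u, v)] = W[idx(u, v), idx(x, y)] = wt
    return np.diag(W.sum(axis=1)) - W     # L = D - W
```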
Step 3: using the seed point and background point of step 1 and the weighted graph of step 2, the salient target in the binocular image is segmented out by the random-walk image segmentation algorithm;
Step 3-1: according to the seed and background points marked in step 1, the pixels of the binocular image are separated into two sets, the marked point set V_M and the unmarked point set V_U, and the Laplacian matrix L is reordered accordingly, marked points first and unmarked points after; L is then divided into the four blocks L_M, L_U, B and B^T and expressed as follows:
Wherein L_M is the Laplacian block from marked points to marked points, L_U the block from unmarked points to unmarked points, and B and B^T respectively the blocks from marked to unmarked points and from unmarked to marked points;
Step 3-2: the combinatorial Dirichlet integral D [x] is solved from the Laplacian matrix and the marked points;
The combinatorial Dirichlet integral formula is as follows:
D [x] = (1/2) x^T L x (4)
Wherein x is the matrix of probabilities of the vertices of the weighted graph reaching the marked points, and x_i and x_j are respectively the probabilities of vertices i and j reaching the marked points;
According to the marked set V_M and the unmarked set V_U, x is divided into the two parts x_M and x_U, where x_M is the probability matrix corresponding to V_M and x_U that corresponding to V_U; formula (4) is decomposed into:
D [x_U] = (1/2) (x_M^T L_M x_M + 2 x_U^T B^T x_M + x_U^T L_U x_U) (5)
For a marked point s, set m^s such that m_i^s = 1 if vertex i is s and m_i^s = 0 otherwise; differentiating D [x_U] with respect to x_U, the solution minimizing formula (5) gives the Dirichlet probability values of marked point s:
L_U x_U^s = -B^T m^s (6)
Wherein x_i^s represents the probability that vertex i first reaches marked point s;
The x^s obtained from the combinatorial Dirichlet integral is threshold-segmented according to formula (7) to generate the segmentation map:
Wherein s_i is the pixel value at the position corresponding to vertex i in the segmentation map; in the segmentation map, pixels of brightness 1 represent the salient target in the image and pixels of brightness 0 the background;
Step 3-3: the segmentation map is multiplied pixel-wise with the original image to generate the target map, i.e. the segmented salient target is extracted, by the formula:
t_i = s_i · I_i (8)
Wherein t_i is the gray value of vertex i of the target map T, and I_i the gray value of position i of the input image I (σ);
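The random-walk solve of step 3-2 reduces to the linear system L_U x_U = -B^T m^s over the unmarked vertices; a minimal dense-matrix sketch (real implementations would use sparse solvers):

```python
import numpy as np

def random_walker_prob(L, marked, m_s):
    """Step 3-2 sketch: solve L_U x_U = -B^T m^s (formula (6)) for the
    unmarked vertices; `marked` are marked-vertex indices, `m_s` their
    0/1 labels for seed s. Returns the full probability vector x."""
    n = L.shape[0]
    marked_set = set(marked)
    unmarked = [i for i in range(n) if i not in marked_set]
    L_U = L[np.ix_(unmarked, unmarked)]    # unmarked-to-unmarked block
    B_T = L[np.ix_(unmarked, marked)]      # B^T block of the Laplacian
    x_U = np.linalg.solve(L_U, -B_T @ np.asarray(m_s, float))
    x = np.zeros(n)
    x[marked] = m_s                        # marked vertices keep their labels
    x[unmarked] = x_U
    return x
```

On a 3-vertex path graph with unit weights, the middle vertex ends up halfway between the seed (probability 1) and the background point (probability 0), as expected.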
Step 4: key-point matching is performed on the salient target alone by the SIFT algorithm;
Step 4-1: a Gaussian pyramid is built on the target map, and the differences of pairs of adjacent filtered images give the DOG images, defined as D (x, y, σ) and computed as follows:
D (x, y, σ) = (G (x, y, kσ) - G (x, y, σ)) * T (x, y) = C (x, y, kσ) - C (x, y, σ)
Wherein G (x, y, σ) is a Gaussian function of variable scale, p, q are the dimensions of the Gaussian template, (x, y) is the position of a pixel in the Gaussian-pyramid image, σ is the scale-space factor of the image, k denotes a specific scale multiplier, and C (x, y, σ) is defined as the convolution of G (x, y, σ) with the target map T (x, y), i.e. C (x, y, σ) = G (x, y, σ) * T (x, y);
Step 4-2: extreme points are found in adjacent DOG images; the position and scale of each extreme point are determined by fitting a three-dimensional quadratic function, giving the key points, and a stability detection is applied to the key points with the Hessian matrix to eliminate edge responses, specifically as follows:
(1) The curve fit D (X) is obtained by Taylor expansion of the scale-space DOG:
Wherein X = (x, y, σ)^T and D is the curve fit; setting the derivative of formula (10) to 0 gives the offset formula (11) of the extreme point;
To remove extreme points of low contrast, formula (11) is substituted into formula (10) to obtain formula (12); if the value of formula (12) is greater than 0.03, the extreme point is retained and its exact position and scale are obtained, otherwise it is discarded;
(2) Unstable key points are eliminated by screening with the Hessian matrix at each key point;
The curvature is calculated from the ratio of the eigenvalues of the Hessian matrix, and edge points are judged from the curvature of the key-point neighborhood; the curvature ratio is set to 10: key points whose ratio exceeds 10 are deleted, and those retained are the stable key points;
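The edge screening of (2) can be sketched with the standard SIFT trace/determinant criterion, which the eigenvalue-ratio test described here is equivalent to (an interpretive assumption, since the patent gives only the ratio r = 10):

```python
def is_stable_keypoint(dxx, dyy, dxy, r=10.0):
    """Hessian screening sketch: with H = [[Dxx, Dxy], [Dxy, Dyy]],
    tr(H)^2 / det(H) grows with the principal-curvature ratio; the key
    point is kept only when the ratio stays below r = 10."""
    tr = dxx + dyy
    det = dxx * dyy - dxy * dxy
    if det <= 0:                 # curvatures of opposite sign: discard
        return False
    return tr * tr / det < (r + 1.0) ** 2 / r
```

An isotropic blob (equal curvatures) passes the test, while a strongly elongated edge response is rejected.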
Step 4-3: a direction parameter is assigned to each key point using the pixels of the 16 × 16 window of the key-point neighborhood;
For a key point detected in the DOG images, the gradient magnitude and direction are calculated by the following formulas:
Wherein C is the scale space of the key point, m the gradient magnitude and θ the gradient direction of the point. A 16 × 16 neighborhood is delimited around the key point, the gradient magnitude and direction of its pixels are obtained, and the gradients in this neighborhood are counted with a histogram: the abscissa is the direction, 360 degrees divided into 36 bins of 10 degrees each, and the ordinate is the gradient magnitude, the magnitudes of the points falling into each direction bin being added to give the bin height. The principal direction is defined as the bin direction whose gradient magnitude hm is the largest, and bins whose magnitude is above 0.8·hm serve as auxiliary principal directions to strengthen the stability of the matching;
Step 4-4: a descriptor is built to state the local feature information of each key point;
First the coordinates around the key point are rotated to the direction of the key point; then a 16 × 16 window is chosen around the key point and the neighborhood is divided into 16 4 × 4 sub-windows; in each 4 × 4 sub-window the magnitude and direction of its gradients are calculated, and the gradient information of each sub-window is counted with an 8-bin histogram. The descriptor is computed over the 16 × 16 window around the key point by the Gaussian weighting algorithm, as in the following formula:
Wherein h is the descriptor, (a, b) is the position of the key point in the Gaussian-pyramid image, m_g is the gradient magnitude of the key point, i.e. the magnitude of the histogram principal direction of step 4-3, d = 16 is the side length of the window, (x, y) is the position of a pixel in the Gaussian-pyramid image, and (x', y') is the new coordinate of the pixel in the neighborhood rotated to the direction of the key point, computed by the formula:
θ_g is the gradient direction of the key point;
The 128-dimensional feature vector of the key point is obtained from the calculation over the 16 × 16 window, denoted H = (h_1, h_2, h_3, ..., h_128); the feature vector is normalized and denoted L_g after normalization, by the formula:
Wherein L_g = (l_1, l_2, ..., l_i, ..., l_128) is the feature vector of the key point after normalization and l_i (i = 1, 2, 3, ...) is a normalized component;
The Euclidean distance between key-point feature vectors is used as the decision metric of key-point similarity in the binocular images, the key points in the binocular images are matched, and the pixel coordinates of each mutually matched pair of key points form one group of key information;
Step 4-5: the matched key points are screened;
The horizontal parallax of the coordinates of each pair of key points is obtained to generate the parallax matrix, defined as K_n = {k_1, k_2, ..., k_n}, where n is the number of matched pairs and k_1, k_2, ..., k_n are the parallaxes of the individual matched points;
The median k_m of the parallax matrix is taken, and the reference parallax matrix, denoted K_n', is obtained as follows:
K_n' = {k_1 - k_m, k_2 - k_m, ..., k_n - k_m} (17)
The parallax threshold is set to 3; the parallaxes whose entries in K_n' exceed the threshold are deleted, giving the final parallax matrix K'; k_1', k_2', ..., k_n' are the parallaxes of the correct matched points after screening and n' is the final number of correct matches:
K' = {k_1', k_2', ..., k_n'} (18)
Step 5: the parallax matrix K' obtained in step 4 is substituted into the binocular ranging model to obtain the salient-target distance;
The two identical imaging systems of equal focal length are separated horizontally by the distance J, the two optical axes are each parallel to the horizontal plane, and the image planes are parallel to the vertical plane;
Assume a target point M (X, Y, Z) in the scene whose imaging points in the left and right images are Pl (x_1, y_1) and Pr (x_2, y_2) respectively, where x_1, y_1 and x_2, y_2 are the coordinates of Pl and Pr in the imaging plane; the parallax in the binocular model is defined as k = |pl - pr| = x_2 - x_1, and the range formula is obtained from triangle similarity, X, Y, Z being the coordinates of the horizontal, vertical and depth axes of the space coordinate system:
Wherein dx' represents the physical distance in the horizontal direction of each pixel on the imaging sensor, f is the focal length of the imaging system, and z is the distance from the target point M to the line joining the two imaging centers; the parallax matrix obtained in step 4 is substituted into formula (19), and the corresponding distance matrix Z' = {z_1, z_2, ..., z_n'} is obtained from the physical parameters of the binocular model, where z_1, z_2, ..., z_n' are the salient-target distances obtained from the individual matched parallaxes; finally the average of the distance matrix gives the distance Z_f of the salient target in the binocular image:
2. The distance measurement method for a salient target in a binocular image according to claim 1, characterised in that the detailed process of performing edge detection on the image in step 1-1 is:
Step 1-1-1: a convolution with a 2D Gaussian filter template is applied to the binocular image to eliminate image noise;
Step 1-1-2: using first-order partial differences in the horizontal and vertical directions, the gradient magnitude and gradient direction of the pixels of the filtered binocular image I (x, y) are computed; the partial derivatives dx and dy in the x and y directions are respectively:
dx = [I (x+1, y) - I (x-1, y)]/2 (21)
dy = [I (x, y+1) - I (x, y-1)]/2 (22)
Then the gradient magnitude is:
D' = (dx^2 + dy^2)^(1/2) (23)
and the gradient direction is:
θ' = arctan (dy/dx) (24);
D' and θ' denote the gradient magnitude and gradient direction of a pixel of the filtered binocular image I (x, y);
Step 1-1-3: non-maxima suppression is applied to the gradient, then dual-threshold processing is applied to the image to generate the edge image; in the edge image, edge points have gray value 255 and non-edge points have gray value 0.
3. The distance measurement method for a salient target in a binocular image according to claim 2, characterised in that the detailed process of performing salient-feature extraction on the binocular image with the visual saliency model in step 1-2 and generating the salient-feature map is:
Step 1-2-1: after edge detection of the binocular image, the original image and the edge image are superposed:
I_1(σ) = 0.7I (σ) + 0.3C (σ) (25)
Wherein I (σ) is the original input binocular image, C (σ) is the edge image, and I_1(σ) is the superposed image;
Step 1-2-2: a nine-level Gaussian pyramid of the superposed image is computed with the Gaussian difference function, where level 0 is the input superposed image and levels 1 to 8 are each formed from the previous level by Gaussian filtering and down-sampling, their sizes corresponding to 1/2 to 1/256 of the input image; intensity, color and orientation features are extracted from each level of the Gaussian pyramid, generating the corresponding intensity pyramid, color pyramid and orientation pyramid;
The intensity-extraction formula is as follows:
I_n = (r+g+b)/3 (26)
wherein r, g, b are the red, green and blue components of the input binocular image color, and I_n is the intensity;
The color-feature extraction formulas are as follows:
R = r - (g+b)/2 (27)
G = g - (r+b)/2 (28)
B = b - (r+g)/2 (29)
Y = (r+g)/2 - |r-g|/2 - b (30)
R, G, B, Y correspond to the color components of the superposed image;
O (σ, ω) is the orientation feature extracted by Gabor filtering of the intensity I_n at scale σ, where σ is the Gaussian-pyramid level and ω is the direction of the Gabor function, with σ ∈ {0, 1, 2, ..., 8} and ω ∈ {0°, 45°, 90°, 135°};
Step 1-2-3: center-surround difference comparisons are performed on the intensity, color and orientation features of the different scales of the obtained Gaussian pyramids, specifically:
Let scale c (c ∈ {2,3,4}) be the center scale and scale u (u = c + δ, δ ∈ {3,4}) the surround scale; in the 9-level Gaussian pyramid there are 6 combinations of center scale c and surround scale u, namely 2-5, 2-6, 3-6, 3-7, 4-7, 4-8;
The center-surround differences are represented by the differences of the feature maps at scales c and u, as in the following formulas:
I_n(c, u) = |I_n(c) - I_n(u)| (31)
RG (c, u) = |(R (c) - G (c)) - (G (u) - R (u))| (32)
BY (c, u) = |(B (c) - Y (c)) - (Y (u) - B (u))| (33)
O (c, u, ω) = |O (c, ω) - O (u, ω)| (34)
Wherein, before taking the difference, the two maps must be brought to the same size by interpolation;
Step 1-2-4: the difference feature maps of the different features are normalized and merged to generate the salient-feature map of the input binocular images, specifically:
First, the scale-contrast feature maps of each feature are normalized and fused into the comprehensive map of that feature: the intensity normalized map, the color normalized map and the orientation normalized map; the calculation is shown in the following formulas:
Wherein N (·) denotes the normalization function: for the feature map to be calculated, the feature value of every pixel is first normalized into a closed range [0, 255]; then the global maximum saliency value A is found in the normalized map and the average a of its other local maxima is computed; finally the feature value of every pixel is multiplied by (A - a)^2;
The comprehensive maps of the features are then normalized again and combined to obtain the final salient-feature map S; the calculation is as follows:
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

CN201510233157.3A CN104778721B (en)  20150508  20150508  The distance measurement method of conspicuousness target in a kind of binocular image 
Publications (2)
Publication Number  Publication Date 

CN104778721A CN104778721A (en)  20150715 
CN104778721B true CN104778721B (en)  20170811 
Cited By (1)
Publication number  Priority date  Publication date  Assignee  Title 

CN110065790A (en) *  20190425  20190730  中国矿业大学  A kind of coal mine leather belt transhipment head choke detecting method of viewbased access control model algorithm 
Families Citing this family (16)
Publication number  Priority date  Publication date  Assignee  Title 

CN105574928A (en) *  20151211  20160511  深圳易嘉恩科技有限公司  Driving image processing method and first electronic equipment 
CN106023198A (en) *  20160516  20161012  天津工业大学  Hessian matrixbased method for extracting aortic dissection of human thoracoabdominal cavity CT image 
CN107423739B (en) *  20160523  20201113  北京陌上花科技有限公司  Image feature extraction method and device 
CN106094516A (en) *  20160608  20161109  南京大学  A kind of robot selfadapting grasping method based on deeply study 
CN108460794A (en) *  20161212  20180828  南京理工大学  A kind of infrared wellmarked target detection method of binocular solid and system 
CN106780476A (en) *  20161229  20170531  杭州电子科技大学  A kind of stereopicture conspicuousness detection method based on humaneye stereoscopic vision characteristic 
CN106920244B (en) *  20170113  20190802  广州中医药大学  A kind of method of the neighbouring background dot of detection image edges of regions 
CN106918321A (en) *  20170330  20170704  西安邮电大学  A kind of method found range using object parallax on image 
CN107730521B (en) *  20170429  20201103  安徽慧视金瞳科技有限公司  Method for rapidly detecting ridge type edge in image 
CN107392929B (en) *  20170717  20200710  河海大学常州校区  Intelligent target detection and size measurement method based on human eye vision model 
CN107564061B (en) *  20170811  20201120  浙江大学  Binocular vision mileage calculation method based on image gradient joint optimization 
CN107633498B (en) *  20170922  20200623  成都通甲优博科技有限责任公司  Image dark state enhancement method and device and electronic equipment 
CN107644398B (en) *  20170925  20210126  上海兆芯集成电路有限公司  Image interpolation method and related image interpolation device 
CN108036730B (en) *  20171222  20191210  福建和盛高科技产业有限公司  Fire point distance measuring method based on thermal imaging 
CN108665740A (en) *  20180425  20181016  衢州职业技术学院  A kind of classroom instruction control system of feeling and setting happily blended Internetbased 
CN109300154A (en) *  20181127  20190201  郑州云海信息技术有限公司  A kind of distance measuring method and device based on binocular solid 
Citations (1)
Publication number  Priority date  Publication date  Assignee  Title 

CN103824284A (en) *  20140126  20140528  中山大学  Key frame extraction method based on visual attention model and system 
Family Cites Families (1)
Publication number  Priority date  Publication date  Assignee  Title 

GB0619817D0 (en) *  20061006  20061115  Imp Innovations Ltd  A method of identifying a measure of feature saliency in a sequence of images 

Non-Patent Citations (2)
Title

"A Motion Estimation Algorithm Based on Adaptive Adjustment of the Search Range in H.264"; Liu Yingzhe et al.; Journal of Electronics & Information Technology; 2013-06-30; Vol. 35, No. 6; pp. 1382-1387 *
"A Saliency Detection Model with Selective Background Priority"; Jiang Yuwen et al.; Journal of Electronics & Information Technology; 2015-01-31; Vol. 37, No. 1; pp. 130-136 *
Legal Events
Date | Code | Title | Description

C06, PB01 | Publication
EXSB, SE01 | Decision made by SIPO to initiate substantive examination; entry into force of the request for substantive examination
TA01 | Transfer of patent application right
Effective date of registration: 2017-07-17
Address after: Room 245, 333 Jianshe Road, Jiufo, Guangzhou Knowledge City, Guangzhou, Guangdong, 510000
Applicant after: Guangzhou Xiaopeng Automobile Technology Co. Ltd.
Address before: No. 92, West Dazhi Street, Nangang District, Harbin, 150001
Applicant before: Harbin Institute of Technology
GR01 | Patent grant