CN104778721A - Distance measuring method of significant target in binocular image

Distance measuring method of significant target in binocular image

Info

Publication number
CN104778721A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN201510233157.3A
Other languages
Chinese (zh)
Other versions
CN104778721B (en)
Inventor
王进祥
杜奥博
石金进
Current Assignee
Guangzhou Xiaopeng Motors Technology Co Ltd
Original Assignee
Harbin Institute of Technology Shenzhen
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology Shenzhen
Priority to CN201510233157.3A
Publication of CN104778721A
Application granted
Publication of CN104778721B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

A distance measurement method for a salient target in a binocular image; the invention relates to methods for measuring the distance of targets in binocular images. The purpose of the invention is to propose a distance measurement method for salient targets in binocular images, so as to solve the slow processing speed of existing target ranging methods. Step 1: use a visual saliency model to extract salient features from the binocular image and mark the seed point and background point. Step 2: build a weighted graph for the binocular image. Step 3: using the seed and background points of Step 1 and the weighted graph of Step 2, segment the salient target out of the binocular image with a random-walk image segmentation algorithm. Step 4: match key points on the salient target alone using the SIFT algorithm. Step 5: substitute the disparity matrix K' obtained in Step 4 into the binocular ranging model to obtain the salient-target distance. The invention can be applied to measuring the distance of salient targets in the image ahead of a moving smart car.

Description

Distance measurement method for salient targets in binocular images

Technical Field

The invention relates to a method for measuring the distance of targets in binocular images, and in particular to a method for measuring the distance of salient targets in binocular images; it belongs to the technical field of image processing.

Background Art

In traffic image processing, distance information is mainly used to provide safety judgments for a vehicle's control system. In smart-car research, the traditional approach to target ranging uses radar or laser at a specific wavelength. Compared with radar and laser, vision sensors have a price advantage and a wider field of view, and while measuring the distance of a target they can also determine what the target is.

However, current traffic images are relatively cluttered, and traditional target ranging algorithms struggle to obtain ideal results in complex images: because they cannot locate the salient target in the image and instead run global detection, processing is slow and much irrelevant data is introduced, so the algorithms cannot meet the requirements of practical applications.

Summary of the Invention

The purpose of the present invention is to propose a distance measurement method for salient targets in binocular images, so as to solve the slow processing speed of existing target ranging methods.

The distance measurement method for a salient target in a binocular image according to the present invention is realized by the following steps:

Step 1. Use a visual saliency model to extract salient features from the binocular image and mark the seed point and background point, specifically:

Step 1.1. First, preprocess: perform edge detection on the binocular image to generate its edge map. Step 1.2. Use the visual saliency model to extract salient features from the binocular image and generate the saliency feature map;

Step 1.3. From the saliency feature map, find the pixel with the largest gray value and mark it as the seed point; then traverse the pixels within a 25×25 window centered on the seed point and mark the pixel whose gray value is below 0.1 and which lies farthest from the seed point as the background point;
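A minimal Python sketch of Step 1.3, assuming a saliency map `sal` already normalized to [0, 1]; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def pick_seed_and_background(sal, win=25, thresh=0.1):
    # Seed point: the pixel with the largest saliency value.
    r0, c0 = np.unravel_index(np.argmax(sal), sal.shape)
    h = win // 2
    best, bg = -1.0, None
    # Traverse the 25x25 window centered on the seed; among pixels whose
    # value is below 0.1, keep the one farthest from the seed.
    for r in range(max(r0 - h, 0), min(r0 + h + 1, sal.shape[0])):
        for c in range(max(c0 - h, 0), min(c0 + h + 1, sal.shape[1])):
            if sal[r, c] < thresh:
                d = (r - r0) ** 2 + (c - c0) ** 2
                if d > best:
                    best, bg = d, (r, c)
    return (r0, c0), bg

seed, background = pick_seed_and_background(np.random.rand(64, 64))
```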

Step 2. Build a weighted graph for the binocular image;

A weighted graph is built for the binocular image with the classical Gaussian weight function:

$$W_{ij} = e^{-\beta (g_i - g_j)^2} \qquad (1)$$

where W_ij is the weight between vertices i and j, g_i and g_j are the brightness of vertices i and j, β is a free parameter, and e is the base of the natural logarithm;

The Laplacian matrix L of the weighted graph is obtained by:

$$L_{ij} = \begin{cases} d_i & \text{if } i = j \\ -W_{ij} & \text{if vertices } i \text{ and } j \text{ are adjacent} \\ 0 & \text{otherwise} \end{cases} \qquad (2)$$

where L_ij is the element of L for vertices i and j, and d_i = ΣW_ij is the sum of the weights between vertex i and its surrounding points;
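A sketch of formulas (1)-(2) on a 4-connected pixel grid, assuming intensities in [0, 1]; `beta` is the free parameter β and its value here is illustrative.

```python
import numpy as np
import scipy.sparse as sp

def grid_laplacian(img, beta=90.0):
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    g = img.ravel()
    rows, cols = [], []
    # Horizontal and vertical neighbor pairs of the 4-connected grid.
    for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        rows.append(a.ravel())
        cols.append(b.ravel())
    i, j = np.concatenate(rows), np.concatenate(cols)
    wgt = np.exp(-beta * (g[i] - g[j]) ** 2)      # W_ij = e^{-beta (g_i - g_j)^2}
    W = sp.coo_matrix((np.r_[wgt, wgt], (np.r_[i, j], np.r_[j, i])),
                      shape=(h * w, h * w))
    d = np.asarray(W.sum(axis=1)).ravel()         # d_i = sum_j W_ij
    return sp.diags(d) - W.tocsr()                # L = D - W, formula (2)
```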

Step 3. Using the seed and background points of Step 1 and the weighted graph of Step 2, segment the salient target out of the binocular image with the random-walk image segmentation algorithm;

Step 3.1. Partition the pixels of the binocular image into two sets according to the seed and background points marked in Step 1: the marked set V_M and the unmarked set V_U. The Laplacian matrix L is ordered according to V_M and V_U, marked points first and unmarked points after, and divided into the four blocks L_M, L_U, B, and B^T, so that the Laplacian matrix is expressed as:

$$L = \begin{bmatrix} L_M & B \\ B^T & L_U \end{bmatrix} \qquad (3)$$

where L_M is the Laplacian block from marked points to marked points, L_U the block from unmarked points to unmarked points, and B and B^T the blocks from marked to unmarked and from unmarked to marked points, respectively;

Step 3.2. Solve the combinatorial Dirichlet integral D[x] from the Laplacian matrix and the marked points;

The combinatorial Dirichlet integral is:

$$D[x] = \frac{1}{2} \sum w_{ij} (x_i - x_j)^2 = \frac{1}{2} x^T L x \qquad (4)$$

where x is the matrix of probabilities that the vertices of the weighted graph reach the marked points, and x_i and x_j are the probabilities that vertices i and j reach the marked points;

According to the marked set V_M and the unmarked set V_U, x is split into two parts, x_M and x_U: x_M is the probability matrix corresponding to V_M, and x_U the probability matrix corresponding to V_U. Formula (4) decomposes into:

$$D[x_U] = \frac{1}{2} \begin{bmatrix} x_M^T & x_U^T \end{bmatrix} \begin{bmatrix} L_M & B \\ B^T & L_U \end{bmatrix} \begin{bmatrix} x_M \\ x_U \end{bmatrix} = \frac{1}{2} \left( x_M^T L_M x_M + 2 x_U^T B^T x_M + x_U^T L_U x_U \right) \qquad (5)$$

For a marked point s, define m_s: if any vertex i equals s, then m_s^i = 1, otherwise m_s^i = 0. Differentiating D[x_U] with respect to x_U, the solution of the minimum of formula (5) gives the Dirichlet probabilities for the marked point s:

$$L_U x^s = -B^T m_s \qquad (6)$$

where x_i^s denotes the probability that vertex i first reaches the marked point s;
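A sketch of the solve in formula (6), continuing the Laplacian sketch above; `marked` holds the indices of the seed and background pixels and `m` their labels (1 for the seed, 0 for the background) — the names are illustrative.

```python
import numpy as np
import scipy.sparse.linalg as spla

def random_walk_probs(L, marked, m):
    # L: sparse graph Laplacian; marked: seed/background indices; m: labels.
    L = L.tocsr()
    n = L.shape[0]
    unmarked = np.setdiff1d(np.arange(n), marked)
    LU = L[unmarked][:, unmarked]        # L_U block of formula (3)
    BT = L[unmarked][:, marked]          # B^T block (unmarked-to-marked)
    xU = spla.spsolve(LU.tocsc(), -BT @ np.asarray(m, dtype=float))  # L_U x = -B^T m_s
    x = np.zeros(n)
    x[marked] = m
    x[unmarked] = xU
    return x   # x_i^s: probability that pixel i first reaches the seed
```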

The probabilities x_i^s obtained from the combinatorial Dirichlet integral are thresholded according to formula (7) to generate the segmentation map:

where s_i is the pixel value at the position corresponding to vertex i in the segmentation map;

Pixels with value 1 in the segmentation map represent the salient target in the image, and pixels with value 0 the background;

Step 3.3. Multiply the segmentation map pixel-wise with the original image to generate the target map, i.e., extract the segmented salient target:

$$t_i = s_i \cdot I_i \qquad (8)$$

where t_i is the gray value of vertex i of the target map T, and I_i is the gray value of the input image I(σ) at the corresponding position i;
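Continuing the sketches above (`x` from random_walk_probs, `img` the grayscale image used to build the graph), a two-line rendering of formulas (7)-(8); the 0.5 threshold is an assumption, since formula (7) itself did not survive extraction.

```python
s = (x >= 0.5).astype(float).reshape(img.shape)  # segmentation map: 1 = target, 0 = background
t = s * img                                      # t_i = s_i * I_i, the target map
```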

Step 4. Match key points on the salient target alone using the SIFT algorithm;

Step 4.1. Build a Gaussian pyramid from the target map and take pairwise differences of the filtered images to obtain the DOG images, defined as D(x, y, σ):

$$D(x, y, \sigma) = \left( G(x, y, k\sigma) - G(x, y, \sigma) \right) * T(x, y) = C(x, y, k\sigma) - C(x, y, \sigma) \qquad (9)$$

where $G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-((x - p/2)^2 + (y - q/2)^2)/(2\sigma^2)}$ is a Gaussian function of varying scale, p and q are the dimensions of the Gaussian template, (x, y) is the position of a pixel in the Gaussian pyramid image, σ is the scale-space factor of the image, k denotes a specific scale value, and C(x, y, σ) is defined as the convolution of G(x, y, σ) with the target map T(x, y), i.e., C(x, y, σ) = G(x, y, σ) * T(x, y);
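A sketch of formula (9): difference-of-Gaussian images from successive blurs of the target map T; the values of sigma, k, and the number of levels are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_images(T, sigma=1.6, k=2 ** 0.5, levels=5):
    # C(x, y, sigma) = G(x, y, sigma) * T(x, y) via Gaussian filtering.
    blurred = [gaussian_filter(T, sigma * k ** i) for i in range(levels)]
    # D(x, y, sigma) = C(x, y, k*sigma) - C(x, y, sigma), formula (9).
    return [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]
```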

Step 4.2. Find the extreme points in adjacent DOG images, determine their positions and scales as key points by fitting a three-dimensional quadratic function, and test key-point stability with the Hessian matrix to eliminate edge responses, as follows:

(1) Fit the scale-space DOG with a Taylor expansion to obtain the curve fit D(X):

$$D(X) = D + \frac{\partial D^T}{\partial X} X + \frac{1}{2} X^T \frac{\partial^2 D}{\partial X^2} X \qquad (10)$$

where X = (x, y, σ)^T and D is the fitted value. Differentiating formula (10) and setting the derivative to zero gives the offset of the extreme point, formula (11):

$$\hat{X} = -\left( \frac{\partial^2 D}{\partial X^2} \right)^{-1} \frac{\partial D}{\partial X} \qquad (11)$$

To remove low-contrast extreme points, substitute formula (11) into formula (10) to obtain formula (12):

$$D(\hat{X}) = D + \frac{1}{2} \frac{\partial D^T}{\partial X} \hat{X} \qquad (12)$$

If the value of formula (12) exceeds 0.03, the extreme point is kept and its precise position and scale are obtained; otherwise it is discarded;

(2) Screen out unstable key points with the Hessian matrix at each key point;

The curvature is computed from the ratio between the eigenvalues of the Hessian matrix;

Edge points are judged by the curvature of the key-point neighborhood;

The curvature-ratio threshold is set to 10: key points whose ratio exceeds 10 are deleted and the rest are kept; the kept points are the stable key points;
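A sketch of the edge-response screening, assuming the standard SIFT criterion Tr(H)²/Det(H) < (r+1)²/r with the ratio r = 10 named in the text; `D` is one DOG image and (i, j) an interior candidate key point.

```python
import numpy as np

def is_stable(D, i, j, ratio=10.0):
    # 2x2 Hessian of the DOG image at the candidate point, by finite differences.
    dxx = D[i, j + 1] + D[i, j - 1] - 2 * D[i, j]
    dyy = D[i + 1, j] + D[i - 1, j] - 2 * D[i, j]
    dxy = (D[i + 1, j + 1] - D[i + 1, j - 1]
           - D[i - 1, j + 1] + D[i - 1, j - 1]) / 4
    tr, det = dxx + dyy, dxx * dyy - dxy ** 2
    if det <= 0:
        return False                    # curvatures of opposite sign: reject
    return tr * tr / det < (ratio + 1) ** 2 / ratio   # curvature-ratio test, r = 10
```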

Step 4.3. Assign each key point an orientation parameter from the pixels of its 16×16 neighborhood window;

For a key point detected in the DOG images, the gradient magnitude and direction are computed as:

$$m(x, y) = \sqrt{ \left( C(x+1, y) - C(x-1, y) \right)^2 + \left( C(x, y+1) - C(x, y-1) \right)^2 }$$

$$\theta(x, y) = \tan^{-1} \left( \frac{C(x, y+1) - C(x, y-1)}{C(x+1, y) - C(x-1, y)} \right) \qquad (13)$$

where C is the scale space containing the key point, m is the gradient magnitude, and θ the gradient direction of the point in question. Centered on the key point, a 16×16 neighborhood is marked out in the surrounding area, the gradient magnitude and direction of each pixel in it are computed, and a histogram is used to accumulate the gradients of the points in this neighborhood. The abscissa of the histogram is direction: 360 degrees are divided into 36 bins of 10 degrees, each corresponding to one histogram entry. The ordinate is gradient magnitude: the magnitudes of the points falling into each direction bin are summed, and the sum gives the bin height. The main direction is defined as the direction of the bin with the maximum magnitude hm, and bins whose magnitude exceeds 0.8·hm serve as auxiliary directions of the main direction, to enhance matching stability;
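A sketch of the 36-bin orientation histogram of Step 4.3 over a 16×16 neighborhood; `np.arctan2` is used in place of tan⁻¹ to resolve the quadrant, and border handling is omitted for brevity.

```python
import numpy as np

def orientation(C, i, j, half=8):
    hist = np.zeros(36)
    for r in range(i - half, i + half):
        for c in range(j - half, j + half):
            dx = C[r, c + 1] - C[r, c - 1]
            dy = C[r + 1, c] - C[r - 1, c]
            m = np.hypot(dx, dy)                          # gradient magnitude, formula (13)
            theta = np.degrees(np.arctan2(dy, dx)) % 360  # gradient direction
            hist[int(theta // 10) % 36] += m              # 10 degrees per bin
    hm = hist.max()
    main = int(hist.argmax()) * 10
    aux = [b * 10 for b, v in enumerate(hist)
           if v > 0.8 * hm and b != hist.argmax()]
    return main, aux   # main direction plus auxiliary directions above 0.8*hm
```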

Step 4.4. Build a descriptor to express the local feature information of each key point.

First, the coordinates around the key point are rotated to the key point's orientation;

Then a 16×16 window around the key point is selected and divided into sixteen 4×4 sub-windows. In each 4×4 sub-window the corresponding gradient magnitudes and directions are computed and accumulated into an 8-bin histogram. The descriptor is computed over the 16×16 window around the key point with Gaussian weighting:

$$h = m_g(a + x, b + y) \cdot e^{ -\frac{ (x')^2 + (y')^2 }{ 2 (0.5 d)^2 } } \qquad (14)$$

where h is the descriptor, (a, b) is the key point's position in the Gaussian pyramid image, m_g is the gradient magnitude of the key point, i.e., the magnitude of the main histogram direction from Step 4.3, d = 16 is the side length of the window, (x, y) is the pixel position in the Gaussian pyramid image, and (x', y') are the pixel's new coordinates in the neighborhood after rotating the coordinates to the key point's orientation, computed as:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos \theta_g & -\sin \theta_g \\ \sin \theta_g & \cos \theta_g \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \qquad (15)$$

where θ_g is the gradient direction of the key point;

Computing over the 16×16 window yields the 128-dimensional key-point feature vector H = (h_1, h_2, h_3, ..., h_128). The feature vector is normalized; the normalized vector is denoted L_g:

$$l_i = \frac{h_i}{ \sum_{j=1}^{128} h_j }, \quad i = 1, 2, 3, \ldots \qquad (16)$$

where L_g = (l_1, l_2, ..., l_i, ..., l_128) is the feature vector of the key point after normalization and l_i, i = 1, 2, 3, ..., is one normalized component;

The Euclidean distance between key-point feature vectors is adopted as the similarity measure for key points in the binocular pair; the key points of the two binocular images are matched, and the coordinates of each pair of mutually matched key pixels form one group of key information;
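A sketch of the matching stage, using the Euclidean distance between normalized 128-dimensional descriptors; the mutual nearest-neighbor check is an added assumption to keep matches one-to-one.

```python
import numpy as np

def match_descriptors(descL, descR):
    # descL, descR: (nL, 128) and (nR, 128) arrays of normalized descriptors.
    d = np.linalg.norm(descL[:, None, :] - descR[None, :, :], axis=2)
    nnL, nnR = d.argmin(axis=1), d.argmin(axis=0)
    # Keep only pairs that are each other's nearest neighbor.
    return [(i, j) for i, j in enumerate(nnL) if nnR[j] == i]
```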

Step 4.5. Screen the generated matched key points;

Compute the horizontal coordinate disparity of each matched pair of key points and form the disparity matrix, defined as K_n = {k_1, k_2, ..., k_n}, where n is the number of matched pairs and k_1, k_2, ..., k_n are the disparities of the individual matches;

Find the median k_m of the disparity matrix and form the reference disparity matrix, denoted K_n':

$$K_n' = \{ k_1 - k_m, k_2 - k_m, \ldots, k_n - k_m \} \qquad (17)$$

Set the disparity threshold to 3 and delete from K_n' the corresponding disparities exceeding the threshold, obtaining the final disparity matrix K'; k_1', k_2', ..., k_n' are the disparities of the correctly matched points after screening, and n' is the number of final correct matches:

$$K' = \{ k_{1'}, k_{2'}, \ldots, k_{n'} \} \qquad (18)$$
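A sketch of the screening of Step 4.5, formulas (17)-(18); reading "greater than the threshold" as a deviation of more than 3 from the median is an interpretation.

```python
import numpy as np

def filter_disparities(xL, xR, thresh=3.0):
    K = np.abs(np.asarray(xR) - np.asarray(xL))  # k_i: horizontal disparity per matched pair
    km = np.median(K)                            # median disparity k_m
    return K[np.abs(K - km) <= thresh]           # K': disparities of the correct matches
```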

Step 5. Substitute the disparity matrix K' from Step 4 into the binocular ranging model to obtain the distance of the salient target;

Two identical imaging systems are spaced a distance J apart along the horizontal direction; both optical axes are parallel to the horizontal plane, and the image planes are parallel to the vertical plane;

Suppose a target point M(X, Y, Z) in the scene images at Pl(x_1, y_1) in the left view and Pr(x_2, y_2) in the right view, where (x_1, y_1) and (x_2, y_2) are the coordinates of Pl and Pr in the vertical imaging plane. In the binocular model the disparity is defined as k = |pl − pr| = |x_2 − x_1|, and the distance formula follows from triangle similarity, with X, Y, Z the coordinates along the horizontal, vertical, and depth axes of the spatial coordinate system:

$$z = \frac{J f}{k} = \frac{J f}{ |x_2 - x_1| \, dx' } \qquad (19)$$

where dx' is the physical width of one pixel along the horizontal axis of the sensor, f is the focal length of the imaging system, and z is the distance from the target point M to the line joining the two imaging centers. Substituting the disparity matrix from Step 4 into formula (19) and using the physical parameters of the binocular model gives the corresponding distance matrix Z' = {z_1, z_2, ..., z_n'}, where z_1, z_2, ..., z_n' are the salient-target distances obtained from the individual matched disparities. Finally, the mean of the distance matrix is the distance Z_f of the salient target in the binocular image:

$$Z_f = \frac{1}{n'} \sum_{k=1}^{n'} z_k \qquad (20)$$
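A sketch of Step 5, formulas (19)-(20): converting the screened disparities to metric distances and averaging; J, f, and dx' are physical parameters of the camera rig, and the call below uses placeholder values.

```python
import numpy as np

def target_distance(K_prime, J, f, dx):
    z = J * f / (K_prime * dx)   # z = J f / (k * dx') for each matched pair
    return z.mean()              # Z_f: mean distance of the salient target

# Placeholder parameters: 0.12 m baseline, 8 mm focal length, 6 um pixel pitch.
print(target_distance(np.array([40.0, 41.0, 39.5]), 0.12, 0.008, 6e-6))
```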

The beneficial effects of the present invention are:

1. The invention simulates the human visual system to extract the region the human eye attends to; the salient target extracted by the algorithm is essentially consistent with human visual detection, so the invention recognizes salient targets as automatically as the human eye does.

2. The invention completes salient-target distance measurement automatically, with no manual selection of the salient target.

3. The invention matches points on one and the same target, which keeps the disparities of matched key points close together and allows wrong matches to be screened out effectively; the matching accuracy approaches 100% and the relative disparity error is below 2%, improving ranging accuracy.

4. The invention needs little matching information, which effectively avoids extraneous computation: matching computation is reduced by at least 75%, less irrelevant data is introduced, and matching-data utilization exceeds 90%, so salient-target distance measurement becomes feasible in complex image environments and image-processing efficiency improves.

5. The invention measures the distance of the salient target in the image ahead of a moving smart car, providing key information for safe driving; it overcomes the drawback that traditional image ranging can only run depth detection over the whole picture, and it avoids large errors and excessive noise.

6. The invention extracts the salient features of the binocular images and segments the salient target, which narrows the target region, shortens matching time, and raises efficiency; key points of the salient target are matched to obtain the disparity and hence the distance. Because the target lies in one vertical plane, wrongly matched key points can be screened out well and precision improves. The method quickly identifies the salient target and accurately measures its distance.

Description of Drawings

Fig. 1 is a flowchart of the method of the invention;

Fig. 2 is a flowchart of the visual saliency analysis;

Fig. 3 is a flowchart of the random walk algorithm;

Fig. 4 is a flowchart of the SIFT algorithm;

Fig. 5 shows the binocular measurement system: X, Y, Z define the spatial coordinate system, M is a point in space, Pl and Pr are the imaging points of M on the image planes, and f is the focal length of the imaging systems.

Detailed Description of the Embodiments

The specific embodiments of the present invention are further described in detail below with reference to the accompanying drawings.

Embodiment 1: This embodiment is described with reference to Figs. 1 to 5; the method of this embodiment comprises the following steps:

Step 1. Use a visual saliency model to extract salient features from the binocular image and mark the seed point and background point, specifically:

The visual saliency model extracts saliency from the binocular image: the three salient features of brightness, color, and orientation are computed for every pixel of the binocular image, and the three features are normalized and combined into the weighted saliency map of the image. Each pixel of the saliency map represents the saliency of the corresponding position in the image. The point with the largest pixel value, i.e., the most salient point, is marked as the seed point; the range around the seed point is enlarged step by step to find the least salient point, which is marked as the background point. The saliency-extraction flow of the visual saliency model is shown in Fig. 2.

Step 1.1. First, preprocess: perform edge detection on the binocular image to generate its edge map; edge information is important saliency information of the image;

Step 1.2. Use the visual saliency model to extract salient features from the binocular image and generate the saliency feature map;

Step 1.3. From the saliency feature map, find the pixel with the largest brightness and mark it as the seed point; then traverse the pixels within a 25×25 window centered on the seed point and mark the pixel whose gray value is below 0.1 and which lies farthest from the seed point as the background point;

Step 2. Build a weighted graph for the binocular image;

A weighted graph is built for the binocular image with the classical Gaussian weight function: each pixel of the binocular image is taken as a vertex, and an edge between each pixel and its surrounding pixels is given a weight determined by their difference in gray level, building a weighted graph of vertices and edges;

Following graph theory, the whole image is treated as an undirected weighted graph and every pixel as a vertex of it; the edges of the weighted graph are weighted by the gray values of the pixels, using the classical Gaussian weight function:

$$W_{ij} = e^{-\beta (g_i - g_j)^2} \qquad (1)$$

where W_ij is the weight between vertices i and j, g_i is the brightness of pixel i, g_j the brightness of pixel j, β is a free parameter, and e is the base of the natural logarithm;

The Laplacian matrix L of the weighted graph is obtained by formula (2), where L_ij is the element of L for vertices i and j, and d_i = ΣW_ij is the sum of the weights between vertex i and its surrounding points;

Step 3. Using the seed and background points of Step 1 and the weighted graph of Step 2, segment the salient target out of the binocular image with the random-walk image segmentation algorithm;

Step 3.1. Partition the pixels of the binocular image into two sets according to the seed and background points marked in Step 1: the marked set V_M and the unmarked set V_U. The Laplacian matrix L is ordered according to V_M and V_U, marked points first and unmarked points after, and divided into the four blocks L_M, L_U, B, and B^T, so that the Laplacian matrix is expressed as:

$$L = \begin{bmatrix} L_M & B \\ B^T & L_U \end{bmatrix} \qquad (3)$$

where L_M is the Laplacian block from marked points to marked points, L_U the block from unmarked points to unmarked points, and B and B^T the blocks from marked to unmarked and from unmarked to marked points, respectively;

Step 3.2. Solve the combinatorial Dirichlet integral D[x] from the Laplacian matrix and the marked points;

The combinatorial Dirichlet integral is:

$$D[x] = \frac{1}{2} \sum w_{ij} (x_i - x_j)^2 = \frac{1}{2} x^T L x \qquad (4)$$

where x is the matrix of probabilities that the vertices of the weighted graph reach the marked points, and x_i and x_j are the probabilities that vertices i and j reach the marked points;

According to the marked set V_M and the unmarked set V_U, x is split into two parts, x_M and x_U: x_M is the probability matrix corresponding to V_M, and x_U the probability matrix corresponding to V_U. Formula (4) decomposes into:

$$D[x_U] = \frac{1}{2} \begin{bmatrix} x_M^T & x_U^T \end{bmatrix} \begin{bmatrix} L_M & B \\ B^T & L_U \end{bmatrix} \begin{bmatrix} x_M \\ x_U \end{bmatrix} = \frac{1}{2} \left( x_M^T L_M x_M + 2 x_U^T B^T x_M + x_U^T L_U x_U \right) \qquad (5)$$

Define m_s for a marked point s: if any vertex i equals s, then m_s^i = 1, otherwise m_s^i = 0. Differentiating D[x_U] with respect to x_U, the solution of the minimum of formula (5) gives the Dirichlet probabilities for the marked point s:

$$L_U x^s = -B^T m_s \qquad (6)$$

where x_i^s denotes the probability that vertex i first reaches the marked point s;

The probabilities x_i^s obtained from the combinatorial Dirichlet integral are thresholded according to formula (7) to generate the segmentation map:

where s_i is the pixel value at the position corresponding to vertex i in the segmentation map;

Pixels with value 1 in the segmentation map represent the salient target in the image, and pixels with value 0 the background;

Step 3.3. Multiply the segmentation map pixel-wise with the original image to generate the target map, i.e., extract the segmented salient target:

$$t_i = s_i \cdot I_i \qquad (8)$$

where t_i is the gray value of the target map T at position i, and I_i is the gray value of the input image I(σ) at the corresponding position i;

Step 4. Match key points on the salient target alone using the SIFT algorithm;

The segmented salient target alone undergoes key-point detection and matching with the SIFT algorithm; the resulting match coordinates are screened, wrongly matched results are removed, and the correct matches are kept.

The SIFT matching flow for the binocular images is shown in Fig. 4.

Step 4.1. Build a Gaussian pyramid from the target map and take pairwise differences of the filtered images to obtain the DOG images, defined as D(x, y, σ):

$$D(x, y, \sigma) = \left( G(x, y, k\sigma) - G(x, y, \sigma) \right) * T(x, y) = C(x, y, k\sigma) - C(x, y, \sigma) \qquad (9)$$

where $G(x, y, \sigma) = \frac{1}{2\pi\sigma^2} e^{-((x - p/2)^2 + (y - q/2)^2)/(2\sigma^2)}$ is a Gaussian function of varying scale, p and q are the dimensions of the Gaussian template, (x, y) is the position of a pixel in the Gaussian pyramid image, σ is the scale-space factor of the image, k denotes a specific scale value, and C(x, y, σ) is defined as the convolution of G(x, y, σ) with the target map T(x, y), i.e., C(x, y, σ) = G(x, y, σ) * T(x, y);

Step 4.2. Find the extreme points in adjacent DOG images, determine their positions and scales as key points by fitting a three-dimensional quadratic function, and test key-point stability with the Hessian matrix to eliminate edge responses, as follows:

Key points are composed of the local extreme points of the DOG images. Every point of a DOG image is traversed and its gray value compared with 26 points: its 8 neighbors at the same scale and the 2×9 points at the adjacent scales above and below. If it is larger than all of its surrounding neighbors, or smaller than all of them, it is an extreme point.
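A sketch of the 26-neighbor test described above; `dogs` is a list of DOG images as produced by a sketch like `dog_images` earlier, and ties are counted as extrema for simplicity.

```python
import numpy as np

def is_extremum(dogs, s, r, c):
    # Compare with the 3x3x3 cube spanning the point's own scale and the two
    # adjacent scales (8 + 9 + 9 = 26 neighbors); s, r, c must be interior.
    v = dogs[s][r, c]
    cube = np.stack([d[r - 1:r + 2, c - 1:c + 2] for d in dogs[s - 1:s + 2]])
    return v == cube.max() or v == cube.min()
```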

The extreme points found this way are not yet the true key points. To improve stability, it is necessary (1) to fit the scale-space DOG with a Taylor expansion to obtain the curve fit D(X):

$$D(X) = D + \frac{\partial D^T}{\partial X} X + \frac{1}{2} X^T \frac{\partial^2 D}{\partial X^2} X \qquad (10)$$

where X = (x, y, σ)^T and D is the fitted value. Differentiating formula (10) and setting the derivative to zero gives the offset of the extreme point, formula (11):

$$\hat{X} = -\left( \frac{\partial^2 D}{\partial X^2} \right)^{-1} \frac{\partial D}{\partial X} \qquad (11)$$

To remove low-contrast extreme points, substitute formula (11) into formula (10) to obtain formula (12):

$$D(\hat{X}) = D + \frac{1}{2} \frac{\partial D^T}{\partial X} \hat{X} \qquad (12)$$

If the value of formula (12) exceeds 0.03, the extreme point is kept and its precise position (the original position plus the fitted offset) and scale are obtained; otherwise it is discarded;

(2) To eliminate unstable key points, they are screened with the Hessian matrix at each key point;

The curvature is computed from the ratio between the eigenvalues of the Hessian matrix;

Edge points are judged by the curvature of the key-point neighborhood;

The curvature-ratio threshold is set to 10: key points whose ratio exceeds 10 are deleted and the rest are kept; the kept points are the stable key points;

Step 4.3. After the position and scale of a key point are determined, the key point must be assigned an orientation, and the key-point descriptor is defined relative to this orientation. The pixels of the key point's 16×16 neighborhood window are used to assign each key point its orientation parameter;

For a key point detected in the DOG images, the gradient magnitude and direction are computed as:

$$m(x, y) = \sqrt{ \left( C(x+1, y) - C(x-1, y) \right)^2 + \left( C(x, y+1) - C(x, y-1) \right)^2 }$$

$$\theta(x, y) = \tan^{-1} \left( \frac{C(x, y+1) - C(x, y-1)}{C(x+1, y) - C(x-1, y)} \right) \qquad (13)$$

where C is the scale space containing the key point, m is the gradient magnitude, and θ the gradient direction of the key point; centered on the key point, a neighborhood is marked out in the surrounding area and a histogram is used to accumulate the gradients of the points within it;

The abscissa of the histogram is direction: 360 degrees are divided into 36 bins of 10 degrees, each corresponding to one histogram entry. The ordinate is gradient magnitude: the magnitudes of the points falling into each direction bin are summed, and the sum gives the bin height. The main direction is defined as the direction of the bin whose gradient magnitude is the maximum hm, and the bins whose height lies above 0.8·hm serve as auxiliary directions of the main direction, to enhance matching stability.

Step 4.4. After the stages above, every detected key point carries three pieces of information: position, orientation, and scale. A descriptor is built for each key point to express its local feature information.

First, the coordinates around the key point are rotated to the key point's orientation. Then a 16×16 window around the key point is selected and divided into sixteen 4×4 sub-windows. In each 4×4 sub-window the corresponding gradient magnitudes and directions are computed and accumulated into an 8-bin histogram. The descriptor is computed over the 16×16 window around the key point with Gaussian weighting:

$$h = m(a + x, b + y) \cdot e^{ -\frac{ (x')^2 + (y')^2 }{ 2 (0.5 d)^2 } } \qquad (14)$$

where h is the descriptor, (a, b) is the key point's position in the Gaussian pyramid image, d = 16 is the side length of the window, (x, y) is the pixel position in the Gaussian pyramid image, and (x', y') are the pixel's new coordinates in the neighborhood after rotating the coordinates to the key point's orientation, computed as:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \qquad (15)$$

θ is the orientation of the key point.

Computing over the 16×16 window yields the 128-dimensional key-point feature vector H = (h_1, h_2, h_3, ..., h_128). To reduce the influence of illumination, the feature vector is normalized; the normalized vector is denoted L_g:

$$l_i = \frac{h_i}{ \sum_{j=1}^{128} h_j }, \quad i = 1, 2, 3, \ldots \qquad (16)$$

where L_g = (l_1, l_2, l_3, ..., l_128) is the feature vector of the key point after normalization;

Once the descriptors of the key points of both binocular images have been generated, the Euclidean distance between key-point feature vectors is adopted as the similarity measure; the key points of the two binocular images are matched, and the coordinates of each pair of mutually matched key pixels form one group of key information;

Step 4.5. To avoid errors as far as possible, the generated matched key points are screened;

Because the measurement system is a binocular model, the matched key points of the salient target lie at the same horizontal level in the two images, so the horizontal difference of every matched pair is theoretically equal. The horizontal coordinate disparity of each matched pair is therefore computed and the disparity matrix is formed, defined as K_n = {k_1, k_2, ..., k_n}, where n is the number of matched pairs and k_1, k_2, ..., k_n are the disparities of the individual matches;

Find the median k_m of the disparity matrix and form the reference disparity matrix, denoted K_n':

$$K_n' = \{ k_1 - k_m, k_2 - k_m, \ldots, k_n - k_m \}$$

Set the disparity threshold to 3 and delete from K_n' the corresponding disparities exceeding the threshold, obtaining the final disparity matrix K' and avoiding the interference caused by wrongly matched key points. k_1', k_2', ..., k_n' are the disparities of the correctly matched points after screening, and n' is the number of final correct matches:

$$K' = \{ k_{1'}, k_{2'}, \ldots, k_{n'} \}$$

Step 5. Substitute the disparity matrix K' from Step 4 into the binocular ranging model to obtain the distance of the salient target;

The disparity of the salient target in the binocular pair is found by subtracting the coordinates of the matched key points, and the disparity is fed into the binocular ranging model to obtain the salient-target distance.

Binocular imaging acquires two images of the same scene from different viewpoints; the binocular model is shown in Fig. 5.

Two identical imaging systems are spaced a distance B apart along the horizontal direction; both optical axes are parallel to the horizontal plane, and the image planes are parallel to the vertical plane;

Suppose a point M(X, Y, Z) in the scene images at Pl(x_1, y_1) in the left view and Pr(x_2, y_2) in the right view, where (x_1, y_1) and (x_2, y_2) are the coordinates of Pl and Pr in the vertical imaging plane. In the binocular model the disparity is defined as k = |pl − pr| = |x_2 − x_1|, and the distance formula follows from triangle similarity, with X, Y, Z the coordinates along the horizontal, vertical, and depth axes of the spatial coordinate system:

$$z = \frac{B f}{k} = \frac{B f}{ |x_2 - x_1| \, dx } \qquad (17)$$

where dx is the physical width of one pixel along the horizontal axis of the sensor, f is the focal length of the imaging system, and z is the distance from the target point M to the line joining the two imaging centers. Substituting the disparity matrix from Step 4 into formula (17) and using the physical parameters of the binocular model gives the corresponding distance matrix Z' = {z_1, z_2, ..., z_n'}, where z_1, z_2, ..., z_n' are the salient-target distances obtained from the individual matched disparities. Finally, the mean of the distance matrix is the distance Z_f of the salient target in the binocular image:

$$Z_f = \frac{1}{n'} \sum_{k=1}^{n'} z_k \qquad (18)$$

Embodiment 2: This embodiment is described with reference to the figures. It differs from Embodiment 1 in the specific edge-detection process of Step 1.1, which is:

Step 1.1.1. Convolve the binocular image with a 2D Gaussian filter template to remove image noise;

Step 1.1.2. Compute the gradient magnitude and gradient direction of the pixels of the filtered binocular image I(x, y) from the differences of the first-order partial derivatives in the horizontal and vertical directions, where the partial derivatives dx and dy in the x and y directions are:

$$dx = \left[ I(x+1, y) - I(x-1, y) \right] / 2 \qquad (21)$$

$$dy = \left[ I(x, y+1) - I(x, y-1) \right] / 2 \qquad (22)$$

The gradient magnitude is then:

$$D' = \left( dx^2 + dy^2 \right)^{1/2} \qquad (23)$$

and the gradient direction is:

$$\theta' = \arctan(dy / dx) \qquad (24)$$

D' and θ' denote the gradient magnitude and gradient direction of a pixel of the filtered binocular image I(x, y);

Step 1.1.3. Apply non-maximum suppression to the gradients, then double-threshold the image to generate the edge image, in which edge points have gray value 255 and non-edge points gray value 0.
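A sketch of formulas (21)-(24) from this embodiment: Gaussian pre-filtering and central-difference gradients (borders wrap around via `np.roll`, and the non-maximum suppression and double thresholding of Step 1.1.3 are not shown).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_map(I, sigma=1.0):
    I = gaussian_filter(I.astype(np.float64), sigma)           # 2D Gaussian pre-filter
    dx = (np.roll(I, -1, axis=1) - np.roll(I, 1, axis=1)) / 2  # formula (21)
    dy = (np.roll(I, -1, axis=0) - np.roll(I, 1, axis=0)) / 2  # formula (22)
    D = np.hypot(dx, dy)                                       # formula (23)
    theta = np.arctan2(dy, dx)                                 # formula (24)
    return D, theta
```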

Embodiment 3: This embodiment is described with reference to the figures. It differs from Embodiments 1 and 2 in the specific process, in Step 1.2, of extracting the salient features of the binocular image with the visual saliency model to generate the saliency feature map, which is:

Step 1.2.1. After edge detection of the binocular image, superimpose the original image and the edge image:

$$I_1(\sigma) = 0.7 I(\sigma) + 0.3 C(\sigma) \qquad (25)$$

where I(σ) is the original input binocular image, C(σ) is the edge image, and I_1(σ) is the image after superposition;

Step 1.2.2. Compute a nine-level Gaussian pyramid of the superimposed image with the Gaussian difference function: level 0 is the input superimposed image, and levels 1 to 8 are each produced from the previous level by Gaussian filtering and downsampling, with sizes from 1/2 down to 1/256 of the input image. Brightness, color, and orientation features are extracted from every level of the Gaussian pyramid to form the corresponding brightness, color, and orientation pyramids;

The brightness feature is extracted as:

$$I_n = (r + g + b) / 3 \qquad (26)$$

where r, g, b are the red, green, and blue components of the input binocular image's color and I_n is the brightness feature;

The color features are extracted as:

$$R = r - (g + b)/2 \qquad (27)$$

$$G = g - (r + b)/2 \qquad (28)$$

$$B = b - (r + g)/2 \qquad (29)$$

$$Y = r + g - 2 \left( |r - g| + b \right) \qquad (30)$$

R, G, B, Y correspond to the color components of the superimposed image;

O(σ, ω) is the orientation feature extracted by Gabor filtering of the brightness feature I_n at scale σ, where σ is the Gaussian pyramid level, σ ∈ {0, 1, 2, ..., 8}, and ω is the orientation of the Gabor function, ω ∈ {0°, 45°, 90°, 135°};
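A sketch of formulas (26)-(30), assuming an RGB image with channels already scaled to [0, 1]:

```python
import numpy as np

def features(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    In = (r + g + b) / 3                    # (26) brightness
    R = r - (g + b) / 2                     # (27)
    G = g - (r + b) / 2                     # (28)
    B = b - (r + g) / 2                     # (29)
    Y = r + g - 2 * (np.abs(r - g) + b)     # (30)
    return In, R, G, B, Y
```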

Step 1.2.3. Perform center-surround differencing on the brightness, color, and orientation features at the different scales of the Gaussian pyramid, specifically:

Let scale c (c ∈ {2, 3, 4}) be the center scale and scale u (u = c + δ, δ ∈ {3, 4}) the surround scale; between the center scale c and the surround scale u of the nine-level Gaussian pyramid there are six combinations (2-5, 2-6, 3-6, 3-7, 4-7, 4-8);

The center-surround contrast is expressed by the difference between the feature maps at scale c and scale u:

$$I_n(c, u) = | I_n(c) - I_n(u) | \qquad (31)$$

$$RG(c, u) = | (R(c) - G(c)) - (G(u) - R(u)) | \qquad (32)$$

$$BY(c, u) = | (B(c) - Y(c)) - (Y(u) - B(u)) | \qquad (33)$$

$$O(c, u, \omega) = | O(c, \omega) - O(u, \omega) | \qquad (34)$$

Before the difference is taken, the two maps are brought to the same size by interpolation;

Step 1.2.4. Fuse the feature maps of the different features by normalization to generate the saliency feature map of the input binocular image, specifically:

First, the scale-contrast maps of each feature are normalized and fused into the feature's combined feature map: $\bar{I}_n$ for the brightness feature, $\bar{C}$ for the color feature, and $\bar{O}$ for the orientation feature; the computation is:

$$\bar{I}_n = \bigoplus_{c=2}^{4} \bigoplus_{s=c+3}^{c+4} N(I_n(c, s)) \qquad (35)$$

$$\bar{C} = \bigoplus_{c=2}^{4} \bigoplus_{s=c+3}^{c+4} \left[ N(RG(c, s)) + N(BY(c, s)) \right] \qquad (36)$$

where N(·) denotes the normalization function: for the feature map to be computed, the feature value of every pixel is first normalized into the closed range [0, 255]; then the global maximum saliency value A of the normalized feature map is found and the mean a of its local maxima is computed; finally, the feature value of every pixel of the map is multiplied by 2(A − a);
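A sketch of the normalization operator N(·) as described above; the 3×3 neighborhood used to find local maxima is an assumption, and the final scaling follows the text's factor 2(A − a).

```python
import numpy as np
from scipy.ndimage import maximum_filter

def normalize_map(F):
    F = (F - F.min()) / (np.ptp(F) + 1e-12) * 255.0   # rescale to [0, 255]
    local_max = maximum_filter(F, size=3)             # local maxima (3x3 assumption)
    peaks = F[(F == local_max) & (F > 0)]
    A = F.max()                                       # global maximum saliency A
    a = peaks.mean() if peaks.size else 0.0           # mean of the local maxima a
    return F * 2 * (A - a)                            # scale as described in the text
```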

The combined feature maps of the features are then normalized and combined into the final saliency feature map S:

$$S = \frac{1}{3} \left( N(\bar{I}_n) + N(\bar{C}) + N(\bar{O}) \right) \qquad (38)$$

Claims (3)

1.一种双目图像中显著性目标的距离测量方法,其特征在于所述方法包括以下步骤:1. a distance measurement method of salient target in binocular image, it is characterized in that described method comprises the following steps: 步骤一、利用视觉显著性模型对双目图像进行显著性特征提取,并标出种子点和背景点,具体包括:Step 1. Use the visual saliency model to extract the saliency features of the binocular image, and mark the seed points and background points, specifically including: 步骤一一、首先进行预处理,对双目图像进行边缘检测,生成双目图像的边缘图;Step 11, first perform preprocessing, perform edge detection on the binocular image, and generate an edge map of the binocular image; 步骤一二、利用视觉显著性模型对双目图像进行显著性特征提取,生成显著性特征图;Step 12, using the visual saliency model to extract the saliency feature of the binocular image, and generate a saliency feature map; 步骤一三、根据显著性特征图找出图中灰度值最大像素点,标记为种子点;并以种子点为中心的25×25的窗口内遍历像素,找出像素点的灰度值小于0.1的且距离种子点最远的像素点标记为背景点;Step 13. According to the saliency feature map, find out the pixel point with the largest gray value in the graph, and mark it as a seed point; and traverse the pixels in a window of 25×25 centered on the seed point, and find out that the gray value of the pixel point is less than 0.1 and the pixel farthest from the seed point is marked as the background point; 步骤二、对双目图像建立加权图;Step 2, establishing a weighted map for the binocular image; 利用经典高斯权函数对双目图像建立加权图:Use the classic Gaussian weight function to create a weighted image for binocular images: WW ijij == ee -- ββ (( gg ii -- gg jj )) 22 -- -- -- (( 11 )) 其中,Wij表示顶点i和顶点j之间的权值,gi表示顶点i的亮度,gj表示顶点j的亮度,β是自由参数,e为自然底数;Among them, W ij represents the weight between vertex i and vertex j, g i represents the brightness of vertex i, g j represents the brightness of vertex j, β is a free parameter, and e is a natural base; 通过下式求出加权图的拉普拉斯矩阵L:The Laplacian matrix L of the weighted graph is obtained by the following formula: 其中,Lij为拉普拉斯矩阵L中对应顶点i到j的元素,di为顶点i与周围点权值的和, d i = Σ W ij ; Among them, L ij is the element corresponding to vertex i to j in the Laplacian matrix L, d i is the sum of the weight of vertex i and surrounding points, d i = Σ W ij ; 步骤三、利用步骤一中的种子点和背景点和步骤二中的加权图,通过随机游走图像分割算法将双目图像中的显著性目标分割出来;Step 3, using the seed point and background point in step 1 and the weighted map in step 2, using the random walk image segmentation algorithm to segment the salient target in the binocular image; 步骤三一、将双目图像的像素点根据步骤一标记出的种子点和背景点分出两类集合,即标记点集合VM与未标记点集合VU,拉普拉斯矩阵L根据VM和VU,优先排列标记点然后再排列非标记点;其中,所述L分成LM、LU、B、BT四部分,则将拉普拉斯矩阵表示如下:Step 31. Divide the pixel points of the binocular image into two types of sets according to the seed points and background points marked in step 1, namely, the set of marked points V M and the set of unmarked points V U , and the Laplacian matrix L is based on V M and V U , first arrange the marked points and then arrange the non-marked points; wherein, the L is divided into four parts: L M , L U , B, and B T , and the Laplacian matrix is expressed as follows: LL == LL Mm BB BB TT LL Uu -- -- -- (( 33 )) 其中,LM为标记点到标记点的拉普拉斯矩阵,LU为非标记点到非标记点的拉普拉斯矩阵,B和BT分别为标记点到非标记点和非标记点到标记点的拉普拉斯矩阵;Among them, L M is the Laplacian matrix from the marked point to the marked point, L U is the Laplacian matrix from the unmarked point to the unmarked point, B and B T are the marked point to the non-marked point and the non-marked point respectively to the Laplacian matrix of the marked points; 步骤三二、根据拉普拉斯矩阵和标记点求解组合狄利克雷积分D[x];Step 32, solving the combined Dirichlet integral D[x] according to the Laplace matrix and the marked points; 组合狄利克雷积分公式如下:The combined Dirichlet integral formula is as follows: DD. 
Step 3.2: solve the combinatorial Dirichlet integral D[x] from the Laplacian matrix and the marked points; the combinatorial Dirichlet integral is:

D[x] = (1/2) Σ w_ij (x_i - x_j)² = (1/2) x^T L x   (4)

where x is the matrix of the probabilities that the graph vertices reach a marked point, and x_i and x_j are those probabilities for vertices i and j; according to the marked set V_M and the unmarked set V_U, x is split into two parts x_M and x_U, where x_M is the probability matrix for V_M and x_U that for V_U; equation (4) then decomposes into:

D[x_U] = (1/2) [x_M^T  x_U^T] [ L_M  B ; B^T  L_U ] [x_M ; x_U] = (1/2)(x_M^T L_M x_M + 2 x_U^T B^T x_M + x_U^T L_U x_U)   (5)

for a marked point s, a vector m^s is set with m_i^s = 1 if vertex i is the marked point s and m_i^s = 0 otherwise; differentiating D[x_U] with respect to x_U, the solution of the minimum of (5) gives the Dirichlet probability values for the marked point s:

L_U x^s = -B m^s   (6)

where x_i^s denotes the probability that vertex i first reaches the marked point s;

from the x_i^s obtained through the combinatorial Dirichlet integral, threshold segmentation is performed according to (7) to generate the segmentation map:

s_i = 1 if x_i^s ≥ 0.5, and s_i = 0 otherwise   (7)

where s_i is the pixel value at the position corresponding to vertex i in the segmentation map; pixels of value 1 in the segmentation map represent the salient target in the image, and pixels of value 0 the background;

Step 3.3: multiply the segmentation map pixel-wise with the original image to generate the target map, i.e. extract the segmented salient target:

t_i = s_i · I_i   (8)

where t_i is the gray value of vertex i in the target map T, and I_i the gray value at the corresponding position i of the input image I(σ);
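A sketch of Steps 3.2-3.3 under the same assumptions, with one seed and one background point; the 0.5 cut in formula (7) is an assumption of this sketch, and B is taken as the marked-to-unmarked block so that B.T conforms in formula (6).

import numpy as np
import scipy.sparse.linalg as spla

def random_walk_segment(L, seed, background, image):
    n = L.shape[0]
    marked = np.array([seed, background])
    unmarked = np.setdiff1d(np.arange(n), marked)
    L_U = L[unmarked][:, unmarked].tocsc()   # unmarked-to-unmarked block
    B = L[marked][:, unmarked]               # marked-to-unmarked block, |M| x |U|
    m_s = np.array([1.0, 0.0])               # m_i^s: 1 at the seed s, 0 at the background
    x_s = spla.spsolve(L_U, -(B.T @ m_s))    # formula (6): L_U x^s = -B^T m^s
    prob = np.zeros(n)
    prob[marked] = m_s
    prob[unmarked] = x_s
    s = (prob >= 0.5).astype(np.float64)     # formula (7), threshold assumed at 0.5
    return (s * image.ravel()).reshape(image.shape)   # formula (8): t_i = s_i * I_i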
Step 4: match key points on the salient target alone with the SIFT algorithm;

Step 4.1: build a Gaussian pyramid from the target map and difference the filtered images pairwise to obtain the DOG images, defined as D(x,y,σ):

D(x,y,σ) = (G(x,y,kσ) - G(x,y,σ)) * T(x,y) = C(x,y,kσ) - C(x,y,σ)   (9)

where G(x,y,σ) = (1/(2πσ²)) e^(-((x - p/2)² + (y - q/2)²)/(2σ²)) is a variable-scale Gaussian function, p and q are the dimensions of the Gaussian template, (x,y) is the position of a pixel in the Gaussian pyramid image, σ is the scale-space factor of the image, k denotes a specific scale value, and C(x,y,σ) is defined as the convolution of G(x,y,σ) with the target map T(x,y), i.e. C(x,y,σ) = G(x,y,σ) * T(x,y);

Step 4.2: find the extreme points among adjacent DOG images, determine the position and scale of each extreme point as a key point by fitting a three-dimensional quadratic function, and test the stability of the key points with the Hessian matrix to eliminate edge responses, as follows:

(1) fit the curve D(X) to the scale space DOG by Taylor expansion:

D(X) = D + (∂D^T/∂X) X + (1/2) X^T (∂²D/∂X²) X   (10)

where X = (x,y,σ)^T and D is the curve fit; differentiate (10), set the derivative to zero, and obtain the offset of the extreme point:

X̂ = -(∂²D/∂X²)^(-1) (∂D/∂X)   (11)

to remove low-contrast extreme points, substitute (11) into (10) to obtain:

D(X̂) = D + (1/2)(∂D^T/∂X) X̂   (12)

if the value of (12) is greater than 0.03, retain the extreme point and obtain its precise position and scale; otherwise discard it;

(2) eliminate unstable key points by screening with the Hessian matrix at each key point: compute the curvature from the ratio between the eigenvalues of the Hessian matrix, and judge edge points from the curvature of the key-point neighborhood; the curvature ratio is set to 10: key points whose ratio exceeds 10 are deleted and the rest retained; those retained are the stable key points;
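A sketch of the DOG construction in Step 4.1 for one octave of the target map; the base scale sigma0 = 1.6 and the k = 2^(1/s) progression follow common SIFT practice and are not fixed by the claim.

import numpy as np
from scipy.ndimage import gaussian_filter

def dog_octave(T, sigma0=1.6, intervals=3):
    k = 2.0 ** (1.0 / intervals)
    sigmas = [sigma0 * k ** i for i in range(intervals + 3)]
    # C(x, y, sigma) = G(x, y, sigma) * T(x, y)
    C = [gaussian_filter(T.astype(np.float64), s) for s in sigmas]
    # formula (9): D(x, y, sigma) = C(x, y, k*sigma) - C(x, y, sigma)
    return [C[i + 1] - C[i] for i in range(len(C) - 1)]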
Step 4.3: assign each key point an orientation parameter from the pixels in its 16×16 neighborhood window;

for a key point detected in a DOG image, the gradient magnitude and direction are computed as:

m(x,y) = sqrt( (C(x+1,y) - C(x-1,y))² + (C(x,y+1) - C(x,y-1))² )
θ(x,y) = tan⁻¹( (C(x,y+1) - C(x,y-1)) / (C(x+1,y) - C(x-1,y)) )   (13)

where C is the scale space in which the key point lies, m is the gradient magnitude, and θ the gradient direction of the point in question; a 16×16 neighborhood is delimited around the key point, the gradient magnitude and direction of the pixels inside it are computed, and a histogram is used to accumulate the gradients of the points in this neighborhood; the abscissa of the histogram is the direction, with 360 degrees divided into 36 bins of 10 degrees each, every bin corresponding to one histogram entry; the ordinate is the gradient magnitude, obtained by summing the magnitudes of the points whose directions fall in the corresponding bin; the main orientation is defined as the direction of the bin with the maximum gradient magnitude hm, and the bins whose magnitude exceeds 0.8·hm are kept as auxiliary orientations of the main orientation, to strengthen the stability of matching;

Step 4.4: build descriptors to express the local feature information of the key points;

first, rotate the coordinates around the key point to the key point's orientation; then take a 16×16 window around the key point and divide the neighborhood into sixteen 4×4 sub-windows; in each 4×4 sub-window, compute the corresponding gradient magnitudes and directions and accumulate them in an 8-bin histogram; the descriptor over the 16×16 window around the key point is computed with a Gaussian weighting algorithm:

h = m_g(a+x, b+y) · e^( -((x')² + (y')²) / (2·(0.5d)²) )   (14)

where h is the descriptor, (a,b) is the position of the key point in the Gaussian pyramid image, m_g is the gradient magnitude of the key point, i.e. the magnitude of the main histogram orientation from Step 4.3, d = 16 is the side length of the window, (x,y) is the position of a pixel in the Gaussian pyramid image, and (x',y') are the new coordinates of the pixel in the neighborhood whose axes have been rotated to the key point's orientation; the new coordinates are computed as:

[x' ; y'] = [ cos θ_g  -sin θ_g ; sin θ_g  cos θ_g ] [x ; y]   (15)

with θ_g the gradient direction of the key point;

the computation over the 16×16 window yields a feature vector of 128 entries for the key point, written H = (h_1, h_2, h_3, ..., h_128); the feature vector is normalized, the normalized vector being written L_g:

l_i = h_i / Σ_{j=1}^{128} h_j,   i = 1, 2, 3, ...   (16)

where L_g = (l_1, l_2, ..., l_i, ..., l_128) is the normalized feature vector of the key point and l_i one of its normalized entries;

the Euclidean distance between key-point feature vectors is adopted as the similarity measure for key points of the binocular image pair, the key points of the two images are matched, and the coordinates of each pair of mutually matched key pixels form one group of key information;
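A sketch of the orientation assignment in Step 4.3, with the 36-bin histogram and the 0.8·hm rule; border handling, the bin-center convention, and the use of arctan2 for quadrant-aware angles are our choices.

import numpy as np

def keypoint_orientations(C, x, y, radius=8):
    """C: scale-space image containing the key point at (x, y)."""
    patch = C[y - radius:y + radius, x - radius:x + radius].astype(np.float64)
    dx = patch[1:-1, 2:] - patch[1:-1, :-2]
    dy = patch[2:, 1:-1] - patch[:-2, 1:-1]
    m = np.sqrt(dx ** 2 + dy ** 2)                     # gradient magnitude, formula (13)
    theta = np.degrees(np.arctan2(dy, dx)) % 360.0     # gradient direction, formula (13)
    hist = np.zeros(36)
    bins = (theta // 10).astype(int) % 36              # 36 bins of 10 degrees
    np.add.at(hist, bins, m)                           # sum magnitudes per direction bin
    hm = hist.max()
    # main orientation plus auxiliary orientations above 0.8 * hm
    return [i * 10.0 + 5.0 for i in range(36) if hist[i] >= 0.8 * hm]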
Step 4.5: screen the generated matched key points;

compute the horizontal coordinate disparity of each pair of key points and generate the disparity matrix, defined as K_n = {k_1, k_2, ..., k_n}, where n is the number of matched pairs and k_1, k_2, k_n are the disparities of individual matched points;

find the median k_m of the disparity matrix and obtain the reference disparity matrix, written K_n':

K_n' = {k_1 - k_m, k_2 - k_m, ..., k_n - k_m}   (17)

set the disparity threshold to 3, delete the disparities whose entries in K_n' exceed the threshold, and obtain the final disparity matrix K', where k_1', k_2', k_n' are the disparities of the correct matches remaining after screening and n' is the final number of correctly matched pairs:

K' = {k_1', k_2', ..., k_n'}   (18)

Step 5: substitute the disparity matrix K' obtained in Step 4 into the binocular ranging model to obtain the distance of the salient target;

the optical centers of two identical imaging systems are separated by a baseline J along the horizontal direction, both optical axes are parallel to the horizontal plane, and the image planes are parallel to the vertical plane; assume a target point M(X,Y,Z) in the scene whose left and right imaging points are Pl(x_1,y_1) and Pr(x_2,y_2), (x_1,y_1) and (x_2,y_2) being the coordinates of Pl and Pr in the vertical imaging plane; in the binocular model the disparity is defined as k = |pl - pr| = |x_2 - x_1|, and the distance formula follows from the similar-triangle relation, X, Y and Z being the coordinates along the horizontal, vertical and depth axes of the spatial coordinate system:

z = Jf/k = Jf/(|x_2 - x_1|·dx')   (19)

where dx' is the physical distance per pixel along the horizontal axis of the imaging sensor, f is the focal length of the imaging system, and z is the distance from the target point M to the line joining the two imaging centers; substitute the disparity matrix obtained in Step 4 into (19) and, from the physical parameters of the binocular model, obtain the corresponding distance matrix Z' = {z_1, z_2, ..., z_n'}, where z_1, z_2, z_n' are the salient-target distances computed from the individual matched disparities; finally, the mean of the distance matrix is the distance Z_f of the salient target in the binocular image:

Z_f = (1/n') Σ_{k=1}^{n'} z_k   (20).
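A sketch of Steps 4.5 and 5 together, from raw matched disparities to the final distance Z_f; reading the screening rule of formulas (17)-(18) as a comparison of absolute deviations from the median is our assumption, and J, f, dx' are rig parameters the claim leaves symbolic.

import numpy as np

def target_distance(k_n, J, f, dx_prime, thresh=3.0):
    """k_n: matched pixel disparities; J, f, dx_prime: baseline, focal length,
    physical pixel pitch along the horizontal axis."""
    k_n = np.asarray(k_n, dtype=np.float64)
    k_m = np.median(k_n)                       # median disparity
    keep = np.abs(k_n - k_m) <= thresh         # formula (17) with threshold 3
    k_prime = k_n[keep]                        # formula (18): screened matrix K'
    z = (J * f) / (k_prime * dx_prime)         # formula (19): z = J*f / (k * dx')
    return z.mean()                            # formula (20): Z_f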
2. The distance measurement method for a salient target in a binocular image according to claim 1, characterized in that the specific process of performing edge detection on the image in Step 1.1 is:

Step 1.1.1: convolve the binocular image with a 2D Gaussian filter template to suppress the noise of the image;

Step 1.1.2: compute the gradient magnitude and direction of the pixels of the filtered binocular image I(x,y) from the differences of the first-order partial derivatives in the horizontal and vertical directions, the partial derivatives dx and dy in the x and y directions being:

dx = [I(x+1,y) - I(x-1,y)]/2   (21)
dy = [I(x,y+1) - I(x,y-1)]/2   (22)

the gradient magnitude is then:

D' = (dx² + dy²)^(1/2)   (23)

and the gradient direction:

θ' = arctan(dy/dx)   (24)

where D' and θ' denote the gradient magnitude and direction of a pixel of the filtered binocular image I(x,y);

Step 1.1.3: apply non-maximum suppression to the gradient and then double-threshold processing to the image to generate the edge image, in which edge points have gray value 255 and non-edge points gray value 0.
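A sketch of Steps 1.1.1-1.1.2 of claim 2; the Gaussian sigma is an assumption, as the claim fixes only the use of a 2D Gaussian template, and arctan2 replaces the plain arctangent of formula (24) for quadrant awareness.

import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_maps(image, sigma=1.4):
    I = gaussian_filter(image.astype(np.float64), sigma)   # Step 1.1.1
    dx = np.zeros_like(I)
    dy = np.zeros_like(I)
    dx[:, 1:-1] = (I[:, 2:] - I[:, :-2]) / 2.0   # formula (21)
    dy[1:-1, :] = (I[2:, :] - I[:-2, :]) / 2.0   # formula (22)
    D = np.hypot(dx, dy)                         # formula (23)
    theta = np.arctan2(dy, dx)                   # formula (24), quadrant-aware
    return D, theta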
3. The distance measurement method for a salient target in a binocular image according to claim 2, characterized in that the specific process in Step 1.2 of extracting saliency features from the binocular image with the visual saliency model and generating the saliency feature map is:

Step 1.2.1: after edge detection of the binocular image, superimpose the original image and the edge image:

I_1(σ) = 0.7 I(σ) + 0.3 C(σ)   (25)

where I(σ) is the original input binocular image, C(σ) the edge image, and I_1(σ) the superimposed image;

Step 1.2.2: compute a nine-level Gaussian pyramid of the superimposed image with the difference-of-Gaussian function, level 0 being the input superimposed image and levels 1 to 8 each obtained from the previous level by Gaussian filtering and downsampling, their sizes corresponding to 1/2 down to 1/256 of the input image; extract intensity, color and orientation features at every level of the Gaussian pyramid and generate the corresponding intensity, color and orientation pyramids;

the intensity feature is extracted as:

I_n = (r + g + b)/3   (26)

where r, g and b are the red, green and blue components of the input binocular image and I_n is the intensity feature;

the color features are extracted as:

R = r - (g + b)/2   (27)
G = g - (r + b)/2   (28)
B = b - (r + g)/2   (29)
Y = r + g - 2(|r - g| + b)   (30)

where R, G, B and Y are the color components of the superimposed image;

O(σ,ω) is the orientation feature extracted by Gabor filtering of the intensity feature I_n over the scales, σ being the Gaussian pyramid level and ω the orientation of the Gabor function, with σ ∈ [0,1,2,…,8] and ω ∈ [0°, 45°, 90°, 135°];

Step 1.2.3: perform center-surround contrast differencing on the intensity, color and orientation features of the computed Gaussian pyramid across its different scales, specifically:

let scale c (c ∈ {2,3,4}) be the center scale and scale u (u = c + δ, δ ∈ {3,4}) the surround scale; in the nine-level Gaussian pyramid there are six combinations of center scale c and surround scale u (2-5, 2-6, 3-6, 3-7, 4-7, 4-8);

the local feature contrasts between center and surround are expressed by the differences between the feature maps at scale c and scale u:

I_n(c,u) = |I_n(c) - I_n(u)|   (31)
RG(c,u) = |(R(c) - G(c)) - (G(u) - R(u))|   (32)
BY(c,u) = |(B(c) - Y(c)) - (Y(u) - B(u))|   (33)
O(c,u,ω) = |O(c,ω) - O(u,ω)|   (34)

where, before differencing, the two maps are brought to the same size by interpolation;
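A sketch of the center-surround differencing of Step 1.2.3 for the intensity pyramid (formula (31)); upsampling the surround map to the center scale with bilinear interpolation is our choice of the required interpolation.

import cv2
import numpy as np

def center_surround(pyramid):
    """pyramid: list of 9 intensity maps, level 0 the largest."""
    maps = []
    for c in (2, 3, 4):
        for delta in (3, 4):
            u = c + delta                      # the six (c, u) scale pairs
            surround = cv2.resize(pyramid[u],
                                  pyramid[c].shape[::-1],
                                  interpolation=cv2.INTER_LINEAR)
            # formula (31): I_n(c, u) = |I_n(c) - I_n(u)|
            maps.append(np.abs(pyramid[c] - surround))
    return maps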
Step 1.2.4: fuse the contrast feature maps of the different features by normalization to generate the saliency feature map of the input binocular image, specifically:

first, normalize and fuse the scale-contrast feature maps of each feature into the conspicuity map of that feature, Ī_n being the normalized intensity map, C̄ the normalized color map and Ō the normalized orientation map; the computation is:

Ī_n = ⊕_{c=2}^{4} ⊕_{u=c+3}^{c+4} N(I_n(c,u))   (35)

C̄ = ⊕_{c=2}^{4} ⊕_{u=c+3}^{c+4} [N(RG(c,u)) + N(BY(c,u))]   (36)

Ō = Σ_ω N( ⊕_{c=2}^{4} ⊕_{u=c+3}^{c+4} N(O(c,u,ω)) )   (37)

where N(·) denotes the normalization function: for the feature map to be processed, the feature value of every pixel is first normalized into the closed range [0,255]; the global maximum saliency value A is then found in the normalized feature map and the mean a of the local maxima of the feature map is computed; finally the feature value of every pixel of the feature map is multiplied by (A - a)²;

the conspicuity maps of the features are then normalized once more to give the final saliency feature map S; the computation is:

S = (1/3)(N(Ī_n) + N(C̄) + N(Ō))   (38).
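A sketch of the normalization operator N(·) and the final fusion of Step 1.2.4, shown for formula (38); estimating the local maxima with a 3×3 maximum filter is our assumption, as the claim only asks for the mean of the local maxima.

import numpy as np
from scipy.ndimage import maximum_filter

def N(feature_map):
    fm = feature_map.astype(np.float64)
    fm = 255.0 * (fm - fm.min()) / (np.ptp(fm) + 1e-12)   # scale into [0, 255]
    A = fm.max()                                          # global maximum saliency
    local = (fm == maximum_filter(fm, size=3)) & (fm < A)
    a = fm[local].mean() if local.any() else 0.0          # mean of local maxima
    return fm * (A - a) ** 2

def saliency(I_bar, C_bar, O_bar):
    return (N(I_bar) + N(C_bar) + N(O_bar)) / 3.0         # formula (38)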
CN201510233157.3A 2015-05-08 2015-05-08 The distance measurement method of conspicuousness target in a kind of binocular image Active CN104778721B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510233157.3A CN104778721B (en) 2015-05-08 2015-05-08 The distance measurement method of conspicuousness target in a kind of binocular image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510233157.3A CN104778721B (en) 2015-05-08 2015-05-08 The distance measurement method of conspicuousness target in a kind of binocular image

Publications (2)

Publication Number Publication Date
CN104778721A true CN104778721A (en) 2015-07-15
CN104778721B CN104778721B (en) 2017-08-11

Family

ID=53620167

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510233157.3A Active CN104778721B (en) 2015-05-08 2015-05-08 The distance measurement method of conspicuousness target in a kind of binocular image

Country Status (1)

Country Link
CN (1) CN104778721B (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574928A (en) * 2015-12-11 2016-05-11 深圳易嘉恩科技有限公司 Driving image processing method and first electronic equipment
CN106023198A (en) * 2016-05-16 2016-10-12 天津工业大学 Hessian matrix-based method for extracting aortic dissection of human thoracoabdominal cavity CT image
CN106094516A (en) * 2016-06-08 2016-11-09 南京大学 A kind of robot self-adapting grasping method based on deeply study
CN106780476A (en) * 2016-12-29 2017-05-31 杭州电子科技大学 A kind of stereo-picture conspicuousness detection method based on human-eye stereoscopic vision characteristic
CN106918321A (en) * 2017-03-30 2017-07-04 西安邮电大学 A kind of method found range using object parallax on image
CN106920244A (en) * 2017-01-13 2017-07-04 广州中医药大学 A kind of method of background dot near detection image edges of regions
CN107392929A (en) * 2017-07-17 2017-11-24 河海大学常州校区 A kind of intelligent target detection and dimension measurement method based on human vision model
CN107423739A (en) * 2016-05-23 2017-12-01 北京陌上花科技有限公司 Image characteristic extracting method and device
CN107633498A (en) * 2017-09-22 2018-01-26 成都通甲优博科技有限责任公司 Image dark-state Enhancement Method, device and electronic equipment
CN107644398A (en) * 2017-09-25 2018-01-30 上海兆芯集成电路有限公司 Image interpolation method and its associated picture interpolating device
CN107730521A (en) * 2017-04-29 2018-02-23 安徽慧视金瞳科技有限公司 The quick determination method of roof edge in a kind of image
CN108036730A (en) * 2017-12-22 2018-05-15 福建和盛高科技产业有限公司 A kind of fire point distance measuring method based on thermal imaging
CN108460794A (en) * 2016-12-12 2018-08-28 南京理工大学 A kind of infrared well-marked target detection method of binocular solid and system
CN108665740A (en) * 2018-04-25 2018-10-16 衢州职业技术学院 A kind of classroom instruction control system of feeling and setting happily blended Internet-based
CN109300154A (en) * 2018-11-27 2019-02-01 郑州云海信息技术有限公司 A kind of distance measuring method and device based on binocular solid
WO2019029099A1 (en) * 2017-08-11 2019-02-14 浙江大学 Image gradient combined optimization-based binocular visual sense mileage calculating method
CN110060240A (en) * 2019-04-09 2019-07-26 南京链和科技有限公司 A kind of tyre contour outline measurement method based on camera shooting
CN110889866A (en) * 2019-12-04 2020-03-17 南京美基森信息技术有限公司 Background updating method for depth map
CN112489104A (en) * 2020-12-03 2021-03-12 海宁奕斯伟集成电路设计有限公司 Distance measurement method and device, electronic equipment and readable storage medium
CN112784814A (en) * 2021-02-10 2021-05-11 中联重科股份有限公司 Posture recognition method for vehicle backing and warehousing and conveying vehicle backing and warehousing guide system
CN116523900A (en) * 2023-06-19 2023-08-01 东莞市新通电子设备有限公司 Hardware processing quality detection method
CN117152144A (en) * 2023-10-30 2023-12-01 潍坊华潍新材料科技有限公司 Guide roller monitoring method and device based on image processing
CN117889867A (en) * 2024-03-18 2024-04-16 南京师范大学 Path planning method based on local self-attention moving window algorithm

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110065790B (en) * 2019-04-25 2021-07-06 中国矿业大学 A method for detecting blockage of coal mine belt conveyor head based on vision algorithm

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008040945A1 (en) * 2006-10-06 2008-04-10 Imperial Innovations Limited A method of identifying a measure of feature saliency in a sequence of images
CN103824284A (en) * 2014-01-26 2014-05-28 中山大学 Key frame extraction method based on visual attention model and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008040945A1 (en) * 2006-10-06 2008-04-10 Imperial Innovations Limited A method of identifying a measure of feature saliency in a sequence of images
CN103824284A (en) * 2014-01-26 2014-05-28 中山大学 Key frame extraction method based on visual attention model and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU Yingzhe et al.: "A Motion Estimation Algorithm Based on Adaptive Search Range Adjustment in H.264", Journal of Electronics & Information Technology *
JIANG Yuwen et al.: "Saliency Detection Model Based on Selective Background Priors", Journal of Electronics & Information Technology *

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105574928A (en) * 2015-12-11 2016-05-11 深圳易嘉恩科技有限公司 Driving image processing method and first electronic equipment
CN106023198A (en) * 2016-05-16 2016-10-12 天津工业大学 Hessian matrix-based method for extracting aortic dissection of human thoracoabdominal cavity CT image
CN107423739A (en) * 2016-05-23 2017-12-01 北京陌上花科技有限公司 Image characteristic extracting method and device
CN107423739B (en) * 2016-05-23 2020-11-13 北京陌上花科技有限公司 Image feature extraction method and device
CN106094516A (en) * 2016-06-08 2016-11-09 南京大学 A kind of robot self-adapting grasping method based on deeply study
CN108460794A (en) * 2016-12-12 2018-08-28 南京理工大学 A kind of infrared well-marked target detection method of binocular solid and system
CN108460794B (en) * 2016-12-12 2021-12-28 南京理工大学 Binocular three-dimensional infrared salient target detection method and system
CN106780476A (en) * 2016-12-29 2017-05-31 杭州电子科技大学 A kind of stereo-picture conspicuousness detection method based on human-eye stereoscopic vision characteristic
CN106920244A (en) * 2017-01-13 2017-07-04 广州中医药大学 A kind of method of background dot near detection image edges of regions
CN106920244B (en) * 2017-01-13 2019-08-02 广州中医药大学 A kind of method of the neighbouring background dot of detection image edges of regions
CN106918321A (en) * 2017-03-30 2017-07-04 西安邮电大学 A kind of method found range using object parallax on image
CN107730521A (en) * 2017-04-29 2018-02-23 安徽慧视金瞳科技有限公司 The quick determination method of roof edge in a kind of image
CN107730521B (en) * 2017-04-29 2020-11-03 安徽慧视金瞳科技有限公司 Method for rapidly detecting ridge type edge in image
CN107392929A (en) * 2017-07-17 2017-11-24 河海大学常州校区 A kind of intelligent target detection and dimension measurement method based on human vision model
CN107392929B (en) * 2017-07-17 2020-07-10 河海大学常州校区 An intelligent target detection and size measurement method based on human visual model
WO2019029099A1 (en) * 2017-08-11 2019-02-14 浙江大学 Image gradient combined optimization-based binocular visual sense mileage calculating method
CN107633498A (en) * 2017-09-22 2018-01-26 成都通甲优博科技有限责任公司 Image dark-state Enhancement Method, device and electronic equipment
CN107644398A (en) * 2017-09-25 2018-01-30 上海兆芯集成电路有限公司 Image interpolation method and its associated picture interpolating device
CN108036730A (en) * 2017-12-22 2018-05-15 福建和盛高科技产业有限公司 A kind of fire point distance measuring method based on thermal imaging
CN108036730B (en) * 2017-12-22 2019-12-10 福建和盛高科技产业有限公司 Fire point distance measuring method based on thermal imaging
CN108665740A (en) * 2018-04-25 2018-10-16 衢州职业技术学院 A kind of classroom instruction control system of feeling and setting happily blended Internet-based
CN109300154A (en) * 2018-11-27 2019-02-01 郑州云海信息技术有限公司 A kind of distance measuring method and device based on binocular solid
CN110060240A (en) * 2019-04-09 2019-07-26 南京链和科技有限公司 A kind of tyre contour outline measurement method based on camera shooting
CN110060240B (en) * 2019-04-09 2023-08-01 南京链和科技有限公司 Tire contour measurement method based on image pickup
CN110889866A (en) * 2019-12-04 2020-03-17 南京美基森信息技术有限公司 Background updating method for depth map
CN112489104A (en) * 2020-12-03 2021-03-12 海宁奕斯伟集成电路设计有限公司 Distance measurement method and device, electronic equipment and readable storage medium
CN112489104B (en) * 2020-12-03 2024-06-18 海宁奕斯伟集成电路设计有限公司 Ranging method, ranging device, electronic equipment and readable storage medium
CN112784814A (en) * 2021-02-10 2021-05-11 中联重科股份有限公司 Posture recognition method for vehicle backing and warehousing and conveying vehicle backing and warehousing guide system
CN112784814B (en) * 2021-02-10 2024-06-07 中联重科股份有限公司 Gesture recognition method for reversing and warehousing of vehicle and reversing and warehousing guiding system of conveying vehicle
CN116523900A (en) * 2023-06-19 2023-08-01 东莞市新通电子设备有限公司 Hardware processing quality detection method
CN116523900B (en) * 2023-06-19 2023-09-08 东莞市新通电子设备有限公司 Hardware processing quality detection method
CN117152144A (en) * 2023-10-30 2023-12-01 潍坊华潍新材料科技有限公司 Guide roller monitoring method and device based on image processing
CN117152144B (en) * 2023-10-30 2024-01-30 潍坊华潍新材料科技有限公司 Guide roller monitoring method and device based on image processing
CN117889867A (en) * 2024-03-18 2024-04-16 南京师范大学 Path planning method based on local self-attention moving window algorithm
CN117889867B (en) * 2024-03-18 2024-05-24 南京师范大学 Path planning method based on local self-attention moving window algorithm

Also Published As

Publication number Publication date
CN104778721B (en) 2017-08-11

Similar Documents

Publication Publication Date Title
CN104778721A (en) Distance measuring method of significant target in binocular image
CN110378196B (en) Road visual detection method combining laser point cloud data
CN107578418B (en) Indoor scene contour detection method fusing color and depth information
EP2811423B1 (en) Method and apparatus for detecting target
CN104850850B (en) A kind of binocular stereo vision image characteristic extracting method of combination shape and color
CN102509098B (en) A fisheye image vehicle recognition method
CN105205489B (en) Detection method of license plate based on color and vein analyzer and machine learning
CN107491756B (en) Lane turning information recognition method based on traffic signs and ground signs
CN104517095B (en) A kind of number of people dividing method based on depth image
CN104700414A (en) Rapid distance-measuring method for pedestrian on road ahead on the basis of on-board binocular camera
CN108645375B (en) Rapid vehicle distance measurement optimization method for vehicle-mounted binocular system
CN105678318B (en) The matching process and device of traffic sign
CN107392929A (en) A kind of intelligent target detection and dimension measurement method based on human vision model
CN107369158A (en) The estimation of indoor scene layout and target area extracting method based on RGB D images
CN104933398A (en) vehicle identification system and method
CN110675442B (en) Local stereo matching method and system combined with target recognition technology
CN106557740A (en) The recognition methods of oil depot target in a kind of remote sensing images
CN114581658A (en) Target detection method and device based on computer vision
EP4287137A1 (en) Method, device, equipment, storage media and system for detecting drivable space of road
CN118429524A (en) Binocular stereoscopic vision-based vehicle running environment modeling method and system
Zakharov et al. Automatic building detection from satellite images using spectral graph theory
CN111091071B (en) Underground target detection method and system based on ground penetrating radar hyperbolic wave fitting
CN110910497B (en) Method and system for realizing augmented reality map
US9087381B2 (en) Method and apparatus for building surface representations of 3D objects from stereo images
CN109063564B (en) A target change detection method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20170717

Address after: 510000, Guangdong, Guangzhou, Guangzhou new Guangzhou knowledge city nine Buddha, Jianshe Road 333, room 245

Applicant after: Guangzhou Xiaopeng Automobile Technology Co. Ltd.

Address before: 150001 Harbin, Nangang, West District, large straight street, No. 92

Applicant before: Harbin Institute of Technology

GR01 Patent grant
GR01 Patent grant