CN106971404A - A robust SURF UAV color remote sensing image registration method


Info

Publication number: CN106971404A
Application number: CN201710163760.8A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 王保平, 戈艺萌, 张飞, 顾漪, 张修飞 (the other inventors have requested that their names not be disclosed)
Current and original assignee: Northwestern Polytechnical University
Application filed by Northwestern Polytechnical University
Priority/filing date: 2017-03-20
Publication date: 2017-07-21
Legal status: Pending

Classifications

    • G06T2207/10024 — Physics; Computing; Image data processing or generation; indexing scheme for image analysis or image enhancement; image acquisition modality; color image
    • G06T2207/10028 — Physics; Computing; Image data processing or generation; indexing scheme for image analysis or image enhancement; image acquisition modality; range image, depth image, 3D point clouds

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a robust SURF UAV color remote sensing image registration method: in the feature point description stage, the feature point itself is transformed into a different color space to obtain its own color information, which is then expressed in the descriptor. The matching effect is good, and no crossed match lines appear in the registered images.

Description

A Robust SURF UAV Color Remote Sensing Image Registration Method

Technical Field

The invention relates to UAV image registration technology, and in particular to a remote sensing image registration method.

Background Art

The SURF (Speeded Up Robust Features) algorithm is a robust local feature detection algorithm. It not only has a highly repeatable detector and a highly distinctive descriptor, but also offers strong robustness and comparatively high computing speed. Its accuracy reaches the sub-pixel level, and the feature descriptors it extracts from an image remain well invariant under scale change, brightness change, rotation, occlusion and noise, so the algorithm is now widely used in image registration. However, because UAV remote sensing images are small in frame size and large in number, the requirements on registration accuracy are extremely stringent. How to enhance the saliency of color image feature points and thereby further improve registration accuracy, especially the precise registration of color remote sensing images with high feature similarity, has long been an urgent problem.

Some researchers have proposed normalizing the intensities of the RGB components before applying the SURF algorithm to color remote sensing images, and methods based on SURF color-invariant features. However, the first method is limited in describing color-invariant features and cannot make full use of the color information, while the second method is time-consuming and has poor real-time performance; neither meets practical requirements.

Summary of the Invention

Technical Problem to Be Solved

Aiming at the problem that registration accuracy is reduced when UAV color remote sensing images are registered with the SURF algorithm using only image brightness information while ignoring color information, the present invention proposes a robust SURF UAV color remote sensing image registration method.

Technical Solution

A robust SURF UAV color remote sensing image registration method, characterized in that the steps are as follows:

Step 1: Convert the acquired UAV color remote sensing reference image image1 and the image to be registered image2 into grayscale images, apply Gaussian smoothing to the grayscale images to complete the scale space construction, and extract feature points using the determinant of the Hessian matrix as the feature detection operator; all feature points of image1 and image2 are recorded as the target set {pi}, i = 1, 2, ..., n and the reference set {qj}, j = 1, 2, ..., m respectively;

Step 2: For the obtained target set {pi} and reference set {qj}, determine a square region of side length 20s centered on each feature point, where s denotes the sampling interval at the corresponding scale; divide the surrounding neighborhood into 4×4 sub-regions, each further divided into 5×5 sampling points; finally use Haar wavelets to compute the vertical and horizontal responses and their moduli dx, |dx|, dy, |dy| for each sub-region, and sum them over the 5×5 sampling points to obtain ∑dx, ∑|dx|, ∑dy, ∑|dy|, giving the 64-dimensional SURF feature descriptor of the feature point:

V = (∑dx, ∑|dx|, ∑dy, ∑|dy|)

The SURF descriptor of the feature point in the horizontal or vertical direction is then defined as:

Vgray = (V1, V2, V3, V4, ..., V16)

where Vi, i = 1, 2, 3, ..., 15, 16 denote the description vectors corresponding to the 16 sub-regions; the 64-dimensional SURF feature descriptors of all feature points in the target set {pi} and the reference set {qj} are thus obtained;

Step 3: A feature point with coordinates (x, y) is represented in the RGB color space as R(x,y), G(x,y), B(x,y); after color space conversion, its representation in the YIQ color space is Y(x,y), I(x,y), Q(x,y);

The transformation from the RGB color space to the YIQ color space is expressed as:

    [Y]   [0.229  0.587  0.114] [R]
    [I] = [0.596 -0.274 -0.322] [G]
    [Q]   [0.211 -0.523  0.312] [B]

Adding this to the original algorithm descriptor gives the new color descriptor:

Vcolor = (i1, i2, i3, i4, ..., i64, Y(x,y), I(x,y), Q(x,y))

which is normalized as:

V̄color = (i1/|Vcolor|, i2/|Vcolor|, ..., i64/|Vcolor|, Y(x,y)/|Vcolor|, I(x,y)/|Vcolor|, Q(x,y)/|Vcolor|)

|Vcolor| = sqrt( Σ(k=1..64) ik² + Y²(x,y) + I²(x,y) + Q²(x,y) )

where ik, k = 1, 2, 3, ..., 63, 64 denote the components of the 64-dimensional description vector of a feature point in the classic SURF algorithm; the 67-dimensional description vector of each feature point in the target set {pi} and the reference set {qj} is thus obtained;

Step 4: Bidirectional matching of feature points

Step 4a: Using the Euclidean distance formula, compute for a feature point p1 of the reference image image1 its forward nearest Euclidean distance dnt and second-nearest Euclidean distance dsnt;

The feature points of image2 lying at the forward nearest Euclidean distance d1nt and second-nearest Euclidean distance d1snt from feature point p1 of the reference image image1, computed with the Euclidean distance formula, are denoted p2nt and p2snt respectively;

Step 4b: Compute the ratio T1 of the nearest distance d1nt to the second-nearest distance d1snt, i.e.

T1 = d1nt / d1snt

Compare T1 with the judgment threshold T: if T1 < T, proceed to step 4c; otherwise, exit automatically;

Step 4c: For the feature point corresponding to p2nt, use the Euclidean distance formula to compute the feature points of image1 lying at the reverse nearest distance d2nt and second-nearest distance d2snt, denoted p1nt and p1snt respectively;

Step 4d: Compute the ratio T2 of the nearest distance to the second-nearest distance

T2 = d2nt / d2snt

Compare T2 with the judgment threshold T: if T2 < T and the feature point p1nt found in the reverse search is the same point as the original feature point p1, the pair is considered a correct matching pair; otherwise return to step 4a and judge a new feature point among the other feature points of the reference image image1;

All feature points in the target set of the reference image are traversed in turn, finally achieving bidirectional matching purification;

Step 5: Obtain the transformation matrix parameters with the RANSAC algorithm

Step 5a: Take the feature point pairs mi' ←→ mi, i = 1, 2, ..., n obtained from the bidirectional matching purification of step 4;

Step 5b: From the set of matching point pairs, randomly select 4 pairs and use their coordinate values to compute a transformation matrix H;

Step 5c: Based on the Euclidean distance, find the point pairs in the matching set that satisfy d(Hi·mi, mi') < t, where Hi·mi denotes the position coordinates obtained by mapping the feature point of image2 into image1 through the matrix transformation, mi' denotes the position coordinates of the feature point in image1 corresponding to mi, and d(Hi·mi, mi') is the Euclidean distance between the two coordinate vectors; take the qualifying point pairs as inliers and record the number of inliers satisfying the constraint of Hi;

Step 5d: Repeat steps 5b and 5c n times, recording the number of inliers each time;

Step 5e: Select the Hbest corresponding to the largest number of inliers and find all matching point pairs satisfying d(Hbest·mi, mi') < t; take them as the final inliers, i.e. the correct matching pairs, while the mismatched pairs not satisfying d(Hbest·mi, mi') < t are outliers and are removed;

Step 5f: Obtain N matching point pairs by the random sample consensus algorithm; denote by A the matrix of 2N rows constructed from x1, y1, x2, y2; perform the singular value decomposition A = U D V^T, where U and V contain the singular vectors of A and D holds the singular values on its diagonal in descending order; the last column of V, reshaped into a 3*3 matrix, is the desired estimate of the perspective transformation matrix parameters;

Step 6: Registration by bicubic interpolation

If the coordinates of any point on the reference image image1 are (x, y) with gray value g(x, y), the coordinates of its corresponding point on the image to be registered image2 are (u, v) with gray value f(u, v), and [·] denotes taking the integer part, then the matrix B formed by the 16 pixels in the 4*4 neighborhood of the corresponding point in the image to be registered is:

    B = ( f([u]+i, [v]+j) ),  i = -1, 0, 1, 2 (rows),  j = -1, 0, 1, 2 (columns)

The function f(u+i, v+j), i = -1, 0, 1, 2; j = -1, 0, 1, 2 appearing in the elements of matrix B is defined as the gray value of the image to be registered at the corresponding point with coordinates (u+i, v+j);

The gray value of the point to be interpolated in the reference image is then computed as:

g(x, y) = f(u, v) = ABC

where:

    a = u - [u],  b = v - [v]
    C = [s(1+a)  s(a)  s(1-a)  s(2-a)]^T
    A = [s(1+b)  s(b)  s(1-b)  s(2-b)]
    s(w) = 1 - 2|w|² + |w|³         for |w| < 1
           4 - 8|w| + 5|w|² - |w|³  for 1 ≤ |w| < 2
           0                        for |w| ≥ 2

s(w) is the basis function of bicubic interpolation and approximates sin(x*π)/x; by bicubic interpolation, image2 is interpolated into the reference image image1, achieving the final high-precision registration.

The judgment threshold T in step 4b is 0.4-0.8.

In step 5c, t is chosen to be 4.

In step 5d, 100 ≤ n ≤ 200.

Beneficial Effects

In the robust SURF UAV color remote sensing image registration method proposed by the present invention, the feature point itself is transformed into a different color space in the feature point description stage to obtain its own color information, which is then expressed in the descriptor. The matching effect is good, and no crossed match lines appear in the registered images.

Brief Description of the Drawings

Fig. 1 shows the UAV color remote sensing images image1 and image2 of the present invention, where Fig. 1(a) is the reference image image1 and Fig. 1(b) is the image to be registered, image2.

Fig. 2 shows the registration result of the original SURF algorithm.

Fig. 3 shows the registration result of the robust SURF method of the present invention.

Detailed Description of the Embodiments

The present invention is further described below with reference to the embodiments and the accompanying drawings:

Because the components of the RGB color space are closely tied to brightness, and the color remote sensing images usually acquired are overly sensitive to brightness, the RGB color space is not suitable for color image processing. The YIQ color space (the Y component represents the brightness information of the image, while the I and Q components carry the color information) and the RGB color space can be converted directly into each other by a single matrix multiplication, a linear transformation with high real-time performance. Moreover, YIQ extracts the brightness component while also separating out the color information, requires little computation and has good clustering properties, so it can be used effectively for color image processing. In the feature point description stage, this method transforms the feature points from the RGB space into the YIQ space, represents them with the YIQ model, and adds the feature point's own color information to the descriptor: a three-dimensional color information vector is appended to the original 64-dimensional descriptor of classic SURF to form a 67-dimensional descriptor, enhancing the discriminability of color image feature point descriptors. RANSAC (Random Sample Consensus), bidirectional matching and bicubic interpolation are then combined to achieve precise registration of UAV color remote sensing images. The method mainly comprises the following steps:

Step 1. Feature point detection with the SURF algorithm to obtain the target set and reference set

Let the UAV color remote sensing image image1 be the reference image, whose set of feature point descriptors is the target set {pi}, i = 1, 2, ..., n; the other color remote sensing image image2 is the image to be registered, whose set of feature point descriptors is the reference set {qj}, j = 1, 2, ..., m.

The acquired UAV color remote sensing images image1 and image2 are converted to grayscale, Gaussian smoothing is applied to complete the scale space construction, and feature points are extracted using the determinant of the Hessian matrix as the feature detection operator; all feature points of image1 and image2 are recorded as the target set {pi}, i = 1, 2, ..., n and the reference set {qj}, j = 1, 2, ..., m respectively. The specific description is as follows:

For a point X = (x, y) in the grayscale image I(x, y) obtained after grayscale processing, the Hessian matrix at scale σ is defined as:

    H(X, σ) = [ Lxx(X,σ)  Lxy(X,σ) ]
              [ Lxy(X,σ)  Lyy(X,σ) ]    (1)

where Lxx(X, σ) is the result of convolving the image I(x, y) at point X with the Gaussian second-order derivative ∂²g(σ)/∂x² at scale σ:

    Lxx(X, σ) = ∂²g(σ)/∂x² * I(x, y)    (2)

where * denotes the two-dimensional convolution operation; similarly, Lxy(X, σ) and Lyy(X, σ) are the convolutions of I(x, y) at point X with the Gaussian second-order derivatives ∂²g(σ)/∂x∂y and ∂²g(σ)/∂y² at scale σ.

The determinant of the Hessian matrix is:

Det(H) = Lxx(X,σ)·Lyy(X,σ) - Lxy²(X,σ)    (3)

Different scales σ correspond to different spatial resolutions, and Gaussian smoothing with different σ values yields different images; the scale space is constructed as follows:

Inter-group scale:

σoctave = σ0 * 2^(octave-1)    (4)

Intra-group scale:

σ ----- scale space coordinate, a numerical characterization of the scale space;

s ----- intra-group scale space coordinate, a numerical characterization of the scale space within a group;

σ0 ----- initial scale, determined by the camera characteristics, generally taken as 1.6;

S ----- number of layers per group (4 layers);

octave ----- number of groups (4 groups);

The scale space is built from equations (4) and (5), and the feature points are then detected by searching for local extrema of the determinant response in the image scale space with three-dimensional non-maximum suppression.
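
As an illustration of this detection step, the following is a minimal sketch and not the patented implementation: it computes the Hessian determinant response of equation (3) with true Gaussian second derivatives (rather than SURF's box-filter approximation) and keeps spatial local maxima; the scale schedule, threshold and the omission of cross-scale suppression are assumptions of the example.

import numpy as np
from scipy import ndimage

def hessian_response(gray, sigma):
    """Det(H) = Lxx*Lyy - Lxy^2 at scale sigma, cf. Eq. (3)."""
    gray = gray.astype(np.float64)
    Lxx = ndimage.gaussian_filter(gray, sigma, order=(0, 2))  # d2/dx2
    Lyy = ndimage.gaussian_filter(gray, sigma, order=(2, 0))  # d2/dy2
    Lxy = ndimage.gaussian_filter(gray, sigma, order=(1, 1))  # d2/dxdy
    return Lxx * Lyy - Lxy ** 2

def detect_keypoints(gray, sigma0=1.6, octaves=4, layers=4, threshold=1e-4):
    """Keep spatial local maxima of Det(H) over a simple scale stack."""
    keypoints = []
    for o in range(octaves):
        for layer in range(layers):
            sigma = sigma0 * 2 ** (o + layer / layers)   # assumed scale schedule
            resp = hessian_response(gray, sigma)
            # 3x3 spatial non-maximum suppression (the text also suppresses
            # across neighbouring scales; omitted here for brevity)
            local_max = resp == ndimage.maximum_filter(resp, size=3)
            ys, xs = np.nonzero(local_max & (resp > threshold))
            keypoints.extend((int(x), int(y), sigma) for x, y in zip(xs, ys))
    return keypoints

Here `gray` is a 2-D float array; the returned (x, y, σ) triples play the role of the feature points of the target or reference set.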

Step 2. Feature point description and addition of color information

2.1 Feature point description

For the obtained target set {pi} and reference set {qj}, a square region of side length 20s (where s denotes the sampling interval at the corresponding scale) is determined centered on each feature point; the surrounding neighborhood is divided into 4×4 sub-regions, each further divided into 5×5 sampling points; finally, Haar wavelets are used to compute the vertical and horizontal responses and their moduli dx, |dx|, dy, |dy| for each sub-region, and the totals ∑dx, ∑|dx|, ∑dy, ∑|dy| over the 5×5 sampling points are accumulated, giving the 64 (4×4×4) dimensional SURF feature descriptor of the feature point:

V = (∑dx, ∑|dx|, ∑dy, ∑|dy|)    (6)

Assuming the coordinates of a feature point are (x, y), the SURF descriptor of that feature point in one direction (horizontal or vertical) can be defined as:

Vgray = (V1, V2, V3, V4, ..., V16)    (7)

where Vi (i = 1, 2, 3, ..., 15, 16) denotes the descriptor corresponding to each of the 16 sub-regions. Using the above method, the 64-dimensional SURF feature descriptors of all feature points in the target set {pi} and the reference set {qj} are obtained.
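
The following is an illustrative sketch, with simplifying assumptions, of how the 64-dimensional vector of equations (6)-(7) can be assembled: 4×4 sub-regions with 5×5 samples each and per-region sums of the Haar responses dx, |dx|, dy, |dy|. The orientation assignment and Gaussian weighting of standard SURF are omitted, and the Haar responses are approximated by simple pixel differences.

import numpy as np

def haar_responses(gray, x, y, s):
    """Crude Haar-like responses at (x, y) with support ~2s (an assumption)."""
    k = max(int(round(s)), 1)
    h, w = gray.shape
    x0, x1 = max(x - k, 0), min(x + k, w - 1)
    y0, y1 = max(y - k, 0), min(y + k, h - 1)
    dx = float(gray[y, x1]) - float(gray[y, x0])
    dy = float(gray[y1, x]) - float(gray[y0, x])
    return dx, dy

def surf_descriptor_64(gray, cx, cy, s):
    """64-dimensional descriptor over a 20s x 20s window around (cx, cy)."""
    desc = []
    half = 10 * s
    for gy in range(4):                      # 4x4 sub-regions
        for gx in range(4):
            sdx = sadx = sdy = sady = 0.0
            for iy in range(5):              # 5x5 samples per sub-region
                for ix in range(5):
                    px = int(round(cx - half + (gx * 5 + ix) * s))
                    py = int(round(cy - half + (gy * 5 + iy) * s))
                    px = int(np.clip(px, 0, gray.shape[1] - 1))
                    py = int(np.clip(py, 0, gray.shape[0] - 1))
                    dx, dy = haar_responses(gray, px, py, s)
                    sdx += dx; sadx += abs(dx)
                    sdy += dy; sady += abs(dy)
            desc += [sdx, sadx, sdy, sady]   # V = (sum dx, sum |dx|, sum dy, sum |dy|)
    return np.asarray(desc)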

2.2 Adding color information

The color information of the feature point itself is added to the original descriptor. A feature point with coordinates (x, y) is represented in the RGB color space as R(x,y), G(x,y), B(x,y); after color space conversion, its representation in the YIQ color space is Y(x,y), I(x,y), Q(x,y). The RGB color space is the most commonly used in daily life. However, the color remote sensing images we usually acquire are overly sensitive to brightness, and the components of the RGB color space are closely tied to brightness, so the RGB color space is not suitable for color image processing. YIQ and RGB, on the other hand, can be converted directly into each other by a single matrix multiplication, a linear transformation with high real-time performance. Moreover, the YIQ color space extracts the brightness component while also separating out the color information, requires little computation and has good clustering properties, so it can be used effectively for color image processing; this is the root of our improvement.

The transformation from the RGB color space to the YIQ color space is expressed as:

    [Y]   [0.229  0.587  0.114] [R]
    [I] = [0.596 -0.274 -0.322] [G]    (8)
    [Q]   [0.211 -0.523  0.312] [B]

Adding this to the original algorithm descriptor gives the new color descriptor:

Vcolor = (V1, V2, V3, V4, ..., V16, Y(x,y), I(x,y), Q(x,y))    (9)

To make the expression clearer, equation (6) is rewritten simply as

V = (i1, i2, i3, i4)    (10)

and equation (9) is then redefined as:

Vcolor = (i1, i2, i3, i4, ..., i64, Y(x,y), I(x,y), Q(x,y))    (11)

For better robustness to rotation, scale and illumination, equation (11) needs to be normalized:

V̄color = (i1/|Vcolor|, i2/|Vcolor|, ..., i64/|Vcolor|, Y(x,y)/|Vcolor|, I(x,y)/|Vcolor|, Q(x,y)/|Vcolor|)    (12)

where |Vcolor| is the modulus of expression (11), given by:

|Vcolor| = sqrt( Σ(k=1..64) ik² + Y²(x,y) + I²(x,y) + Q²(x,y) )    (13)

where ik (k = 1, 2, 3, ..., 63, 64) denote the components of the 64-dimensional descriptor of a feature point in the classic SURF algorithm. From equation (11) we can see that, on the basis of the original 64-dimensional feature point descriptor, the color information of the feature point itself is also loaded in, forming a 67-dimensional descriptor; compared with the original algorithm there is little change in running time, but the added color information further improves the accuracy.

In the same way, the 67-dimensional descriptors of all feature points in the target set {pi} and the reference set {qj} are finally obtained.
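
A minimal sketch of the color augmentation of equations (8)-(13), assuming `surf64` is the 64-dimensional vector of equation (11) and `rgb` is the feature point's own RGB value; the conversion matrix uses the coefficients given in the text.

import numpy as np

# RGB -> YIQ matrix with the coefficients of Eq. (8)
RGB2YIQ = np.array([[0.229,  0.587,  0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523,  0.312]])

def color_descriptor_67(surf64, rgb):
    """Append the feature point's (Y, I, Q) to its 64-d SURF vector and normalise."""
    yiq = RGB2YIQ @ np.asarray(rgb, dtype=np.float64)                 # Eq. (8)
    v = np.concatenate([np.asarray(surf64, dtype=np.float64), yiq])   # Eq. (11)
    return v / (np.linalg.norm(v) + 1e-12)                            # Eqs. (12)-(13)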

Step 3. Feature point matching

For each feature point in the target set of the reference image image1, its nearest-neighbor Euclidean distance dnt and second-nearest-neighbor Euclidean distance dsnt are found by querying the reference set; if they satisfy:

dnt / dsnt < T    (14)

then the match formed by that feature point of the target set and its nearest-neighbor feature point found in the reference set is retained; otherwise the matching pair is discarded. T in the formula is the judgment threshold, taking a value between 0.4 and 0.8.

Through the above steps we obtain the matching point pairs under the nearest-neighbor to second-nearest-neighbor Euclidean distance ratio criterion. The probability of mismatches among these pairs is still high, so purification is needed; the bidirectional matching method is used here, with the following operation:

To make the expression clearer, the descriptors of equation (12) in image1 and image2 are written as v1 = (v1_1, v1_2, v1_3, v1_4, ..., v1_67) and v2 = (v2_1, v2_2, v2_3, v2_4, ..., v2_67); the Euclidean distance formula gives:

d(v1, v2) = sqrt( Σ(i=1..67) (v1_i - v2_i)² )    (15)

where the Euclidean distance between the two vectors v1 and v2 is written d(v1, v2), and v1_i, v2_i (i = 1, 2, 3, ..., 66, 67) are the component values of the vectors v1 and v2;

The nearest and second-nearest distances are written dnt and dsnt, and Ti is the ratio of the nearest distance to the second-nearest distance:

Ti = dnt / dsnt    (16)

Bidirectional matching point pair purification is carried out according to equations (15) and (16), with the following specific steps:

(1) For feature point p1 of the reference image image1, the feature points at its nearest and second-nearest Euclidean distances in image2, found with the Euclidean distance formula, are p2nt and p2snt respectively, and d1nt and d1snt describe the forward nearest and second-nearest distances. The search direction from a feature point of the reference image target set into the reference set is defined as the forward direction, i.e. a feature point of the target set is fixed and the reference set is traversed for points satisfying the condition; the search direction from the reference set into the target set is defined as the reverse direction;

(2) Use equation (16) to compute the ratio T1 of the nearest distance to the second-nearest distance, i.e.

T1 = d1nt / d1snt

If T1 < T, proceed to step (3); otherwise, exit automatically;

(3) For the feature point corresponding to p2nt, the feature points at the nearest and second-nearest distances in image1, found with equation (15), are p1nt and p1snt respectively, and d2nt and d2snt describe the reverse nearest and second-nearest distances;

(4) Use equation (16) to compute the ratio T2 of the nearest distance to the second-nearest distance

T2 = d2nt / d2snt

If T2 < T and the feature point p1nt found in the reverse search is the same point as the original feature point p1, the pair is considered a correct matching pair; otherwise return to step (1) and judge a new feature point among the other feature points of the reference image image1;

All feature points in the target set of the reference image are traversed in turn, finally achieving bidirectional matching purification.
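
A minimal sketch of this bidirectional nearest/second-nearest ratio test, assuming `desc1` and `desc2` are (n, 67) and (m, 67) numpy arrays of the normalized descriptors of equation (12) for image1 and image2, and T is the judgment threshold (0.4-0.8):

import numpy as np

def nn_ratio(dist_row):
    """Index of the nearest neighbour and the nearest/second-nearest ratio."""
    order = np.argsort(dist_row)
    nt, snt = order[0], order[1]
    return nt, dist_row[nt] / (dist_row[snt] + 1e-12)

def bidirectional_match(desc1, desc2, T=0.6):
    """Keep pair (i, j) only if the forward and reverse ratio tests both pass
    and the reverse nearest neighbour of j is the original point i."""
    dist = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    pairs = []
    for i in range(desc1.shape[0]):
        j, t1 = nn_ratio(dist[i, :])      # forward: image1 -> image2, Eq. (16)
        if t1 >= T:
            continue
        k, t2 = nn_ratio(dist[:, j])      # reverse: image2 -> image1
        if t2 < T and k == i:
            pairs.append((i, j))
    return pairs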

Step 4. Obtaining the transformation matrix parameters with the RANSAC algorithm

First the transformation matrix is solved. Given two points m' and m on the color remote sensing images image1 and image2, the perspective transformation matrix H gives m' ~ Hm, where ~ denotes equality up to scale. Let the coordinates of m and m' be (x1, y1) and (x2, y2), written in homogeneous form as (x1, y1, z1) and (x2', y2', z2), where x2'/z2 = x2 and y2'/z2 = y2; then

    (x2', y2', z2)^T = H · (x1, y1, z1)^T,   H = [H11 H12 H13; H21 H22 H23; H31 H32 H33]    (19)

On the basis of the homogeneous coordinates, (20) and (21) are then obtained:

    x2 = x2'/z2 = (x1·H11 + y1·H12 + z1·H13) / (x1·H31 + y1·H32 + z1·H33)    (20)

    y2 = y2'/z2 = (x1·H21 + y1·H22 + z1·H23) / (x1·H31 + y1·H32 + z1·H33)    (21)

Without loss of generality and for computational simplicity, let z1 = 1; from (20) and (21) we obtain:

x1·H11 + y1·H12 + H13 - x1·x2·H31 - y1·x2·H32 - x2·H33 = 0    (22)

x1·H21 + y1·H22 + H23 - x1·y2·H31 - y1·y2·H32 - y2·H33 = 0    (23)

From this the linear system in the entries of H is deduced:

    [x1  y1  1  0   0   0  -x1·x2  -y1·x2  -x2] · (H11, H12, H13, H21, H22, H23, H31, H32, H33)^T = 0    (24)
    [0   0   0  x1  y1  1  -x1·y2  -y1·y2  -y2]

Since the last entry H33 of the perspective transformation model is usually set to 1, the transformation model consists of eight parameters, and at least four pairs of non-collinear matching points are needed to determine the values of all parameters of the perspective transformation model. However, errors are always present, both in the actual computation and in the objective conditions. In practice, because of noise interference, localization errors of the feature point detection, errors of the chosen model, mismatches and other factors, estimating the eight parameters from only four pairs of non-collinear matching points gives large errors and cannot guarantee the registration accuracy; the RANSAC algorithm solves exactly this problem.
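
As an illustration, a least-squares sketch of this direct linear solution: each matching pair contributes the two rows of equations (22)-(23) to a 2N×9 matrix, and the homography is taken from the right singular vector associated with the smallest singular value, i.e. the SVD step described in step (6) of the RANSAC procedure below.

import numpy as np

def solve_homography(src, dst):
    """H with H @ [x, y, 1]^T ~ [x', y', 1]^T for src -> dst; src, dst: (N, 2), N >= 4."""
    rows = []
    for (x, y), (xp, yp) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp, -xp])   # Eq. (22)
        rows.append([0, 0, 0, x, y, 1, -x * yp, -y * yp, -yp])   # Eq. (23)
    A = np.asarray(rows, dtype=np.float64)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)   # right singular vector of the smallest singular value
    return H / H[2, 2]         # fix H33 = 1 as in the text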

As a selection mechanism, the RANSAC algorithm not only ensures the number and quality of the extracted matching point pairs as far as possible, but also gives an accurate estimate of the transformation model. The steps for obtaining the transformation matrix parameters with RANSAC are as follows (a sketch of the loop is given after the list):

(1) Given the two color remote sensing images image1 and image2, find the feature point pairs mi' ←→ mi, i = 1, 2, ..., n obtained by bidirectional matching purification;

(2) From the set of matching point pairs, randomly select 4 pairs and use their coordinate values to compute a transformation matrix H;

(3) Based on the Euclidean distance, find the point pairs in the matching set that satisfy d(H·mi, mi') < t, where H·mi denotes the position coordinates obtained by mapping the feature point of image2 into image1 through the matrix transformation, mi' denotes the position coordinates of the feature point in image1 corresponding to mi, and d(H·mi, mi') is the Euclidean distance between the two coordinate vectors; take the qualifying point pairs as inliers and record the number of inliers satisfying the constraint of H; t is chosen to be 4;

(4) Repeat steps (2) and (3) n times (100 ≤ n ≤ 200), recording the number of inliers each time;

(5) Select the Hbest corresponding to the largest number of inliers and find all matching point pairs satisfying d(Hbest·mi, mi') < t; take them as the final inliers, i.e. the correct matching pairs, while the mismatched pairs not satisfying d(Hbest·mi, mi') < t are outliers and are removed;

(6) With the N matching point pairs obtained by the random sample consensus algorithm, denote by A the matrix formed by the 2N equations containing the matching point pairs and the corresponding transformation matrix H; its singular value decomposition gives A = U D V^T, where U and V contain the singular vectors of A and D holds the singular values on its diagonal in descending order; the last column of V, reshaped into a 3*3 matrix, is the desired estimate of the perspective transformation matrix parameters, and the parameters of this matrix minimize the error in the least-squares sense.
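
A minimal sketch of this RANSAC loop, reusing the `solve_homography` sketch above; `pts1` and `pts2` are (N, 2) numpy arrays of matched coordinates in image1 and image2, t is the inlier distance threshold (t = 4 in the text) and n_iter the number of trials (100-200).

import numpy as np

def project(H, pts):
    """Apply homography H to (N, 2) points via homogeneous coordinates."""
    ph = np.hstack([pts, np.ones((pts.shape[0], 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

def ransac_homography(pts2, pts1, t=4.0, n_iter=200, rng=None):
    """Map image2 points onto image1; return the refit H and the inlier mask."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = None
    for _ in range(n_iter):
        idx = rng.choice(len(pts1), 4, replace=False)          # step (2)
        H = solve_homography(pts2[idx], pts1[idx])
        err = np.linalg.norm(project(H, pts2) - pts1, axis=1)   # d(H*mi, mi')
        inliers = err < t                                       # step (3)
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers                              # step (5)
    H_best = solve_homography(pts2[best_inliers], pts1[best_inliers])  # step (6)
    return H_best, best_inliers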

Step 5. Registration by bicubic interpolation

Once the transformation matrix has been obtained, the mapping must be carried out to complete the registration. During mapping, because pixels are stored at integer coordinate values, i.e. at discrete positions, the following situation inevitably arises: a point (x1, y1) originally on the integer grid (with integer coordinates) no longer falls on a grid point after being mapped to (x2, y2), so interpolation becomes particularly important.

The bicubic interpolation method estimates the gray value of the point to be interpolated from the gray information of the sixteen pixels in the 4*4 neighborhood of its corresponding point in the image to be registered; it takes into account not only the influence of the gray values of the four directly adjacent points, but also the influence of the rate of change of the gray values of the adjacent points. This method can be used whenever the sixteen pixels of a 4*4 neighborhood exist around the corresponding point in the image to be registered.

If the coordinates of any point on the reference image are (x, y) with gray value g(x, y), the coordinates of its corresponding point on the image to be registered are (u, v) with gray value f(u, v), and [·] denotes taking the integer part, then the matrix B formed by the 16 pixels in the 4*4 neighborhood of the corresponding point in the image to be registered is:

    B = ( f([u]+i, [v]+j) ),  i = -1, 0, 1, 2 (rows),  j = -1, 0, 1, 2 (columns)

The function f(u+i, v+j) (i = -1, 0, 1, 2; j = -1, 0, 1, 2) appearing in the elements of matrix B is defined as the gray value of the image to be registered at the corresponding point with coordinates (u+i, v+j).

The gray value of the point to be interpolated in the reference image is then computed as:

g(x, y) = f(u, v) = ABC    (26)

where:

    a = u - [u],  b = v - [v]
    C = [s(1+a)  s(a)  s(1-a)  s(2-a)]^T
    A = [s(1+b)  s(b)  s(1-b)  s(2-b)]
    s(w) = 1 - 2|w|² + |w|³         for |w| < 1
           4 - 8|w| + 5|w|² - |w|³  for 1 ≤ |w| < 2
           0                        for |w| ≥ 2

s(w) is the basis function of bicubic interpolation and approximates sin(x*π)/x. By bicubic interpolation, image2 is interpolated into the reference image image1, achieving the final high-precision registration.

The bicubic interpolation method uses the gray information of pixels in a larger neighborhood, further improving the interpolation accuracy; it also preserves the edge details of the image well and gives a more pleasing visual result, and is essentially the most accurate of the existing interpolation algorithms.
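
A minimal sketch of resampling with the kernel s(w) above: the gray value at a non-integer position (u, v) of the image to be registered is a weighted sum over its 4*4 integer neighborhood, matching g(x, y) = f(u, v) = ABC in form. Boundary handling and the row/column indexing convention of `img` are assumptions of the example.

import numpy as np

def s(w):
    """Bicubic basis function s(w) as defined in the text."""
    w = abs(w)
    if w < 1:
        return 1 - 2 * w ** 2 + w ** 3
    if w < 2:
        return 4 - 8 * w + 5 * w ** 2 - w ** 3
    return 0.0

def bicubic_sample(img, u, v):
    """Gray value at non-integer (u, v); img is a 2-D float array indexed img[u, v]."""
    iu, iv = int(np.floor(u)), int(np.floor(v))
    a, b = u - iu, v - iv
    wu = np.array([s(1 + a), s(a), s(1 - a), s(2 - a)])   # weights along u
    wv = np.array([s(1 + b), s(b), s(1 - b), s(2 - b)])   # weights along v
    B = np.array([[img[iu + i, iv + j] for j in (-1, 0, 1, 2)]
                  for i in (-1, 0, 1, 2)], dtype=np.float64)
    return float(wu @ B @ wv)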

In the following example, accuracy simulation experiments are carried out on the same two UAV color remote sensing images image1 and image2 with the original SURF algorithm and with the robust SURF algorithm; the matching effect is good, and no crossed match lines appear for the robust SURF algorithm (see Figs. 2 and 3). The relation between image1 and image2 is in fact an affine transformation with a 2× scaling, a 10-degree rotation, a horizontal offset of 239 pixels and a vertical offset of 28 pixels.

Example:

First, from the geometric transformation relationship between the color remote sensing images image1 and image2 described above, the theoretical value Htrue is easily obtained. Next, the original SURF algorithm is used to register the color remote sensing images image1 and image2, giving the original Hsurf. Finally, the robust SURF algorithm is used to register image1 and image2, giving Hrob.

Without loss of generality, eight groups of UAV color remote sensing images were then selected for separate experiments, and the averages over the eight groups were taken. The statistics of the transformation matrix parameters are given in Table 1, the affine transformation matrix parameter comparison table. The mean maximum deviation of the SURF algorithm, 0.218, is higher than that of the robust algorithm, 0.134; in terms of the closeness of the six parameters, the robust algorithm is clearly superior to the SURF algorithm.

Table 1. Affine transformation matrix parameter comparison table

Claims (4)

1. A robust SURF UAV color remote sensing image registration method, characterized in that the steps are as follows:

Step 1: Convert the acquired UAV color remote sensing reference image image1 and the image to be registered image2 into grayscale images, apply Gaussian smoothing to the grayscale images to complete the scale space construction, and extract feature points using the determinant of the Hessian matrix as the feature detection operator; all feature points of image1 and image2 are recorded as the target set {pi}, i = 1, 2, ..., n and the reference set {qj}, j = 1, 2, ..., m respectively;

Step 2: For the obtained target set {pi} and reference set {qj}, determine a square region of side length 20s centered on each feature point, where s denotes the sampling interval at the corresponding scale; divide the surrounding neighborhood into 4×4 sub-regions, each further divided into 5×5 sampling points; finally use Haar wavelets to compute the vertical and horizontal responses and their moduli dx, |dx|, dy, |dy| for each sub-region, and sum them over the 5×5 sampling points to obtain ∑dx, ∑|dx|, ∑dy, ∑|dy|, giving the 64-dimensional SURF feature descriptor of the feature point:

V = (∑dx, ∑|dx|, ∑dy, ∑|dy|)

The SURF descriptor of the feature point in the horizontal or vertical direction is then defined as:

Vgray = (V1, V2, V3, V4, ..., V16)

where Vi, i = 1, 2, 3, ..., 15, 16 denote the description vectors corresponding to the 16 sub-regions; the 64-dimensional SURF feature descriptors of all feature points in the target set {pi} and the reference set {qj} are thus obtained;

Step 3: A feature point with coordinates (x, y) is represented in the RGB color space as R(x,y), G(x,y), B(x,y); after color space conversion, its representation in the YIQ color space is Y(x,y), I(x,y), Q(x,y);

The transformation from the RGB color space to the YIQ color space is expressed as:

    [Y]   [0.229  0.587  0.114] [R]
    [I] = [0.596 -0.274 -0.322] [G]
    [Q]   [0.211 -0.523  0.312] [B]

Adding this to the original algorithm descriptor gives the new color descriptor:

Vcolor = (i1, i2, i3, i4, ..., i64, Y(x,y), I(x,y), Q(x,y))

which is normalized as:

V̄color = (i1/|Vcolor|, i2/|Vcolor|, ..., i64/|Vcolor|, Y(x,y)/|Vcolor|, I(x,y)/|Vcolor|, Q(x,y)/|Vcolor|)

|Vcolor| = sqrt( Σ(k=1..64) ik² + Y²(x,y) + I²(x,y) + Q²(x,y) )

where ik, k = 1, 2, 3, ..., 63, 64 denote the components of the 64-dimensional description vector of a feature point in the classic SURF algorithm; the 67-dimensional description vector of each feature point in the target set {pi} and the reference set {qj} is thus obtained;

Step 4: Bidirectional matching of feature points

Step 4a: Using the Euclidean distance formula, compute for a feature point p1 of the reference image image1 its forward nearest Euclidean distance dnt and second-nearest Euclidean distance dsnt;

The feature points of image2 lying at the forward nearest Euclidean distance d1nt and second-nearest Euclidean distance d1snt from feature point p1 of the reference image image1, computed with the Euclidean distance formula, are denoted p2nt and p2snt respectively;

Step 4b: Compute the ratio T1 of the nearest distance d1nt to the second-nearest distance d1snt, i.e.

T1 = d1nt / d1snt

Compare T1 with the judgment threshold T: if T1 < T, proceed to step 4c; otherwise, exit automatically;

Step 4c: For the feature point corresponding to p2nt, use the Euclidean distance formula to compute the feature points of image1 lying at the reverse nearest distance d2nt and second-nearest distance d2snt, denoted p1nt and p1snt respectively;

Step 4d: Compute the ratio T2 of the nearest distance to the second-nearest distance

T2 = d2nt / d2snt

Compare T2 with the judgment threshold T: if T2 < T and the feature point p1nt found in the reverse search is the same point as the original feature point p1, the pair is considered a correct matching pair; otherwise return to step 4a and judge a new feature point among the other feature points of the reference image image1;

All feature points in the target set of the reference image are traversed in turn, finally achieving bidirectional matching purification;

Step 5: Obtain the transformation matrix parameters with the RANSAC algorithm

Step 5a: Take the feature point pairs mi' ←→ mi, i = 1, 2, ..., n obtained from the bidirectional matching purification of step 4;

Step 5b: From the set of matching point pairs, randomly select 4 pairs and use their coordinate values to compute a transformation matrix H;

Step 5c: Based on the Euclidean distance, find the point pairs in the matching set that satisfy d(Hi·mi, mi') < t, where Hi·mi denotes the position coordinates obtained by mapping the feature point of image2 into image1 through the matrix transformation, mi' denotes the position coordinates of the feature point in image1 corresponding to mi, and d(Hi·mi, mi') is the Euclidean distance between the two coordinate vectors; take the qualifying point pairs as inliers and record the number of inliers satisfying the constraint of Hi;

Step 5d: Repeat steps 5b and 5c n times, recording the number of inliers each time;

Step 5e: Select the Hbest corresponding to the largest number of inliers and find all matching point pairs satisfying d(Hbest·mi, mi') < t; take them as the final inliers, i.e. the correct matching pairs, while the mismatched pairs not satisfying d(Hbest·mi, mi') < t are outliers and are removed;

Step 5f: Obtain N matching point pairs by the random sample consensus algorithm; denote by A the matrix of 2N rows constructed from x1, y1, x2, y2; perform the singular value decomposition A = U D V^T, where U and V contain the singular vectors of A and D holds the singular values on its diagonal in descending order; the last column of V, reshaped into a 3*3 matrix, is the desired estimate of the perspective transformation matrix parameters;

Step 6: Registration by bicubic interpolation

If the coordinates of any point on the reference image image1 are (x, y) with gray value g(x, y), the coordinates of its corresponding point on the image to be registered image2 are (u, v) with gray value f(u, v), and [·] denotes taking the integer part, then the matrix B formed by the 16 pixels in the 4*4 neighborhood of the corresponding point in the image to be registered is:

    B = ( f([u]+i, [v]+j) ),  i = -1, 0, 1, 2 (rows),  j = -1, 0, 1, 2 (columns)

The function f(u+i, v+j), i = -1, 0, 1, 2; j = -1, 0, 1, 2 appearing in the elements of matrix B is defined as the gray value of the image to be registered at the corresponding point with coordinates (u+i, v+j);

The gray value of the point to be interpolated in the reference image is then computed as:

g(x, y) = f(u, v) = ABC

where:

    a = u - [u],  b = v - [v]
    C = [s(1+a)  s(a)  s(1-a)  s(2-a)]^T
    A = [s(1+b)  s(b)  s(1-b)  s(2-b)]
    s(w) = 1 - 2|w|² + |w|³         for |w| < 1
           4 - 8|w| + 5|w|² - |w|³  for 1 ≤ |w| < 2
           0                        for |w| ≥ 2

s(w) is the basis function of bicubic interpolation and approximates sin(x*π)/x; by bicubic interpolation, image2 is interpolated into the reference image image1, achieving the final high-precision registration.

2. The robust SURF UAV color remote sensing image registration method according to claim 1, characterized in that the judgment threshold T in step 4b is 0.4-0.8.

3. The robust SURF UAV color remote sensing image registration method according to claim 1, characterized in that t in step 5c is chosen to be 4.

4. The robust SURF UAV color remote sensing image registration method according to claim 1, characterized in that in step 5d, 100 ≤ n ≤ 200.
CN201710163760.8A 2017-03-20 2017-03-20 A robust SURF UAV color remote sensing image registration method Pending CN106971404A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710163760.8A CN106971404A (en) 2017-03-20 2017-03-20 A robust SURF UAV color remote sensing image registration method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710163760.8A CN106971404A (en) 2017-03-20 2017-03-20 A robust SURF UAV color remote sensing image registration method

Publications (1)

Publication Number Publication Date
CN106971404A true CN106971404A (en) 2017-07-21

Family

ID=59329507

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710163760.8A Pending CN106971404A (en) 2017-03-20 2017-03-20 A robust SURF UAV color remote sensing image registration method

Country Status (1)

Country Link
CN (1) CN106971404A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103426186A (en) * 2013-09-05 2013-12-04 山东大学 Improved SURF fast matching method
CN104867126A (en) * 2014-02-25 2015-08-26 西安电子科技大学 Method for registering synthetic aperture radar image with change area based on point pair constraint and Delaunay
CN103914847A (en) * 2014-04-10 2014-07-09 西安电子科技大学 SAR image registration method based on phase congruency and SIFT
CN105303567A (en) * 2015-10-16 2016-02-03 浙江工业大学 Image registration method integrating image scale invariant feature transformation and individual entropy correlation coefficient
CN105809693A (en) * 2016-03-10 2016-07-27 西安电子科技大学 SAR image registration method based on deep neural networks
CN106210731A (en) * 2016-07-01 2016-12-07 兰州理工大学 Coloured image reversible data concealing method based on bicubic interpolation extension

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张二磊 et al.: "一种改进的SURF彩色遥感图像配准算法" (An improved SURF color remote sensing image registration algorithm), 《液晶与显示》 (Chinese Journal of Liquid Crystals and Displays) *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11244197B2 (en) 2017-08-31 2022-02-08 Southwest Jiaotong University Fast and robust multimodal remote sensing image matching method and system
WO2019042232A1 (en) * 2017-08-31 2019-03-07 西南交通大学 Fast and robust multimodal remote sensing image matching method and system
CN108037132A (en) * 2017-12-25 2018-05-15 华南理工大学 A kind of visual sensor system and method for the detection of dry cell pulp layer paper winding defect
CN108037132B (en) * 2017-12-25 2023-06-16 华南理工大学 Visual sensor system and method for detecting winding defect of dry battery slurry layer paper
CN109087297A (en) * 2018-08-10 2018-12-25 成都工业职业技术学院 A kind of MR method for registering images based on adaptive neighborhood selection
CN109102534A (en) * 2018-08-29 2018-12-28 长光卫星技术有限公司 Optical remote sensing image registration method and system under the conditions of haze weather
CN109102534B (en) * 2018-08-29 2020-09-01 长光卫星技术有限公司 Optical remote sensing image registration method and system under haze weather condition
CN109711444A (en) * 2018-12-18 2019-05-03 中国科学院遥感与数字地球研究所 A new deep learning-based remote sensing image registration method
CN110009558A (en) * 2019-01-17 2019-07-12 柳州康云互联科技有限公司 A kind of normalized method of easy image color
CN110060285A (en) * 2019-04-29 2019-07-26 中国水利水电科学研究院 A kind of remote sensing image registration method and system based on SURF algorithm
CN111806702A (en) * 2020-06-30 2020-10-23 鑫喆喆 Parachute jumping mechanism pop-up platform and method based on signal detection
CN111914855A (en) * 2020-07-31 2020-11-10 西安电子科技大学 A priori feature point sparse method for large digital image maps
CN111914855B (en) * 2020-07-31 2024-04-05 西安电子科技大学 Priori feature point sparsification method for oversized digital image map
CN113470085A (en) * 2021-05-19 2021-10-01 西安电子科技大学 Image registration method based on improved RANSAC
CN113470085B (en) * 2021-05-19 2023-02-10 西安电子科技大学 An Image Registration Method Based on Improved RANSAC
CN114358148A (en) * 2021-12-20 2022-04-15 中国电子科技集团公司第五十四研究所 A fast image de-occlusion method based on SURF feature matching
CN115457222A (en) * 2022-09-14 2022-12-09 北京建筑大学 A Method of Georeferencing 3D Models in Geographic Information System

Similar Documents

Publication Publication Date Title
CN106971404A (en) A robust SURF UAV color remote sensing image registration method
CN103679702B (en) A kind of matching process based on image border vector
CN102800097B (en) The visible ray of multi-feature multi-level and infrared image high registration accuracy method
CN103310453B (en) A kind of fast image registration method based on subimage Corner Feature
CN105261014B (en) A kind of multisensor Remote Sensing Images Matching Method
CN107292922B (en) A method of it is registrated for optics with diameter radar image
CN102800098B (en) Multi-characteristic multi-level visible light full-color and multi-spectrum high-precision registering method
CN102800099B (en) Multi-feature multi-level visible light and high-spectrum image high-precision registering method
CN102819839B (en) High-precision registration method for multi-characteristic and multilevel infrared and hyperspectral images
CN111709980A (en) Multi-scale image registration method and device based on deep learning
CN107464252A (en) A kind of visible ray based on composite character and infrared heterologous image-recognizing method
CN102722887A (en) Image registration method and device
CN103456022A (en) High-resolution remote sensing image feature matching method
CN101924871A (en) Video Object Tracking Method Based on Mean Shift
CN106919944A (en) A kind of wide-angle image method for quickly identifying based on ORB algorithms
CN103679193A (en) FREAK-based high-speed high-density packaging component rapid location method
CN102938147A (en) Low-altitude unmanned aerial vehicle vision positioning method based on rapid robust feature
CN106023187A (en) Image registration method based on SIFT feature and angle relative distance
CN108229500A (en) A kind of SIFT Mismatching point scalping methods based on Function Fitting
CN110210511A (en) A kind of improvement PCA-SIFT method for registering images based on cosine measure
CN113657225A (en) A target detection method
CN106780294B (en) Circular arc matching method based on feature descriptors
CN109766943A (en) A Template Matching Method and System Based on Global Perceptual Diversity Metrics
CN103208003B (en) Geometric graphic feature point-based method for establishing shape descriptor
CN103325127B (en) A kind of multispectral image SIFT feature is extracted and describing method and system thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170721