CN102968780B - A kind of remote sensing images joining method based on human-eye visual characteristic - Google Patents


Info

Publication number
CN102968780B
CN102968780B (application CN201210510695.9A)
Authority
CN
China
Prior art keywords
image
matrix
coordinates
stitched
remote sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210510695.9A
Other languages
Chinese (zh)
Other versions
CN102968780A (en)
Inventor
陈锦伟
冯华君
徐之海
李奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201210510695.9A priority Critical patent/CN102968780B/en
Publication of CN102968780A publication Critical patent/CN102968780A/en
Application granted granted Critical
Publication of CN102968780B publication Critical patent/CN102968780B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a remote sensing image stitching method based on human visual characteristics, comprising the following steps: 1) extract feature points from the reference image and the image to be stitched, and establish matched feature point pairs; 2) eliminate incorrect matches to obtain the initial transformation matrix H between the reference image and the image to be stitched; 3) divide the reference image into a region of interest and a normal region, and derive a weight factor α from the ratio of the two regions' average gradient values; 4) set an increment matrix P as the optimization increment of the initial transformation matrix H, solve for P iteratively using the coordinates of the matched feature point pairs from step 2) and the weight factor α from step 3), computing the corresponding transformation matrix H from each iteration's P until convergence ends the iteration; 5) apply the final transformation matrix H to projectively transform the image to be stitched, then blend it with the reference image to complete the stitching.

Description

A remote sensing image stitching method based on human visual characteristics

Technical Field

The invention relates to the field of computer image processing, and in particular to a remote sensing image stitching method based on human visual characteristics.

Background

Remote sensing images are acquired from high-altitude platforms such as satellites, aircraft, or balloons. They play an important role in disaster warning, resource exploration, military reconnaissance, map surveying, and other fields, and are an important means of monitoring changes in the Earth's climate, resources, and related information.

When acquiring remote sensing images, one always hopes to obtain high-resolution images with a sufficiently large field of view, so that useful information can be extracted quickly. Under existing technical conditions, however, field of view and resolution are two demands that are difficult to reconcile.

Existing solutions either continue to increase sensor resolution, or image a large area multiple times to obtain several overlapping high-resolution images, which are then restored into a single wide-coverage, high-resolution image through subsequent image processing.

Remote sensing image stitching can be divided into two parts: image registration and image fusion.

Image registration is the core step of image stitching; it is the means by which the transformation relationship between two overlapping images is obtained.

To obtain a more accurate relationship between the images, an optimization step is usually performed during registration, and many optimization methods exist.

Previous optimization algorithms, however, optimize uniformly; that is, they do not distinguish the relative importance of image content. In practice this can yield high stitching accuracy in regions of little practical value while producing very poor results on the most important content, which hinders subsequent applications.

Summary of the Invention

The purpose of the present invention is to provide an optimization method, adaptable to different application environments, that accounts for differences in the importance of remote sensing scene content, so that the optimized result conforms to the visual judgment characteristics of the human eye.

To achieve the above purpose, the present invention provides a remote sensing image stitching method based on human visual characteristics, comprising the following steps:

1) Extract feature points from the reference image and the image to be stitched, establish matched feature point pairs, and obtain the initial matching relationship between the feature points of the two images.

2) Eliminate incorrect matches to obtain correctly matched feature point pairs, and obtain the initial transformation matrix H between the reference image and the image to be stitched.

3) Divide the reference image into a region of interest and a normal region, compute the average gradient values $S_{\mathrm{interest}}(G)$ and $S_{\mathrm{normal}}(G)$ of the two regions, and obtain the weight factor α from their ratio.

4) Set an increment matrix P as the optimization increment of the initial transformation matrix H, iteratively solve for P using the coordinates of the matched feature point pairs from step 2) and the weight factor α from step 3), and compute the corresponding transformation matrix H from each iteration's P, terminating the iteration upon convergence.

5) Apply the final transformation matrix H to projectively transform the image to be stitched, then blend it with the reference image to complete the stitching.

Feature points of the reference image and the image to be stitched are extracted with the SIFT algorithm, and the initial matching relationship between feature points is established using the Euclidean distance criterion. The RANSAC method is then used to eliminate incorrect matches, yielding correctly matched feature point pairs. A transformation matrix obtained from correct matches allows the two stitched images to overlap accurately, whereas incorrect matches would produce misalignment.
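As a rough, NumPy-only sketch of the robust-estimation step (the SIFT extraction itself is omitted here, since it requires a feature library; the function names and synthetic data are illustrative assumptions, not part of the patent), the homography can be estimated from matched pairs with a RANSAC-style loop:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: solve for a 3x3 H (normalized so H[2,2]=1)
    from >= 4 matched point pairs. src, dst are (N, 2) arrays."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)   # null vector of A in the least-squares sense
    return H / H[2, 2]

def reprojection_errors(H, src, dst):
    """Pixel distance between H-projected src points and dst points."""
    ph = np.hstack([src, np.ones((len(src), 1))])
    q = (H @ ph.T).T
    proj = q[:, :2] / q[:, 2:3]
    return np.linalg.norm(proj - dst, axis=1)

def ransac_homography(src, dst, iters=200, thresh=1.0, seed=0):
    """Sample 4-point minimal sets, keep the H with the most inliers,
    then refit on all inliers (a toy stand-in for RANSAC)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = estimate_homography(src[idx], dst[idx])
        inliers = reprojection_errors(H, src, dst) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return estimate_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

With noise-free inliers and one gross outlier, the loop recovers the true H and flags the outlier; a production pipeline would instead call a library routine (e.g. an OpenCV-style `findHomography` with a RANSAC flag).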

Let the homogeneous coordinates of a feature point in the reference image in step 2) be $X_1 = (x_1, y_1, 1)$, the homogeneous coordinates of the corresponding feature point in the image to be stitched be $X_2 = (x_2, y_2, 1)$, and the initial transformation matrix H be:

$$H = \begin{bmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{bmatrix}$$

where $h_1 \sim h_8$ are the elements of the transformation matrix H, characterizing the projective transformation between the reference image and the image to be stitched. The coordinate transformation between the two images corresponds to:

$$\begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} \sim \begin{bmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \\ 1 \end{bmatrix}$$

The gradient value $G(x, y)$ of a pixel in the image is given by:

$$G(x, y) = \sqrt{I_x^2(x, y) + I_y^2(x, y)}$$

$$I_x = \frac{\partial I(x, y)}{\partial x}, \qquad I_y = \frac{\partial I(x, y)}{\partial y}$$

where $(x, y)$ are the coordinates of a pixel, $I_x$ is the image gradient in the x direction, $I_y$ is the gradient in the y direction, and $I(x, y)$ is the grayscale value of the image.
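The gradient magnitude above can be approximated with finite differences in a few lines of NumPy; `np.gradient` uses central differences, one plausible discretization of the partial derivatives (the function name is an assumption for this sketch):

```python
import numpy as np

def gradient_magnitude(img):
    # G(x, y) = sqrt(Ix^2 + Iy^2), with the partial derivatives
    # approximated by central differences.
    img = np.asarray(img, dtype=float)
    Iy, Ix = np.gradient(img)  # np.gradient returns (d/drow, d/dcol)
    return np.hypot(Ix, Iy)
```

On a horizontal ramp $I(x, y) = 3x$, this returns a constant gradient magnitude of 3, as expected.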

The weight factor α is expressed as:

$$\beta = \frac{S_{\mathrm{interest}}(G)}{S_{\mathrm{normal}}(G)}$$

$$\alpha = \frac{1}{\beta + 1}$$

where $S_{\mathrm{interest}}(G)$ is the mean gradient value over all pixels in the region of interest, and $S_{\mathrm{normal}}(G)$ is the mean gradient value over all pixels in the normal region.
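Given a gradient map and a binary mask marking the region of interest, the weight factor follows directly from the two region averages (a sketch with assumed names; the β and α formulas are exactly those stated above):

```python
import numpy as np

def weight_factor(G, roi_mask):
    # beta = S_interest(G) / S_normal(G); alpha = 1 / (beta + 1),
    # as defined in the text.
    s_interest = G[roi_mask].mean()
    s_normal = G[~roi_mask].mean()
    beta = s_interest / s_normal
    return 1.0 / (beta + 1.0)
```

For example, if the region of interest averages a gradient of 3 against 1 in the normal region, then β = 3 and α = 0.25.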

The increment matrix P is expressed as:

$$P = \begin{bmatrix} p_1 & p_2 & p_3 \\ p_4 & p_5 & p_6 \\ p_7 & p_8 & 1 \end{bmatrix}$$

where the elements $p_1 \sim p_8$ of the increment matrix P correspond to small increments of the corresponding elements of the transformation matrix H.

An intermediate coordinate variable $X'_2 = (x'_2, y'_2, 1)$ is introduced to simplify the mathematical expressions:

$$\begin{bmatrix} x'_2 \\ y'_2 \\ 1 \end{bmatrix} \sim \begin{bmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \\ 1 \end{bmatrix}$$

The increment matrix P characterizes the small increment of the projective transformation matrix H at each iteration; its role is to successively refine H toward the optimum. Its update rule is:

$$(I + P)H \rightarrow H$$

where I is the identity matrix. Using the intermediate coordinate variable $X'_2 = (x'_2, y'_2, 1)$ and the formula:

$$(I + P)HX_2 = (I + P)X'_2 = X_3 = (x_3, y_3, 1)$$

the calculation is simplified; $X_3 = (x_3, y_3, 1)$ denotes the coordinates of a feature point of the image to be stitched after projective transformation.
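In code, the multiplicative update and the projection of homogeneous feature coordinates might be sketched as follows (NumPy, with assumed helper names; the update is renormalized so the bottom-right entry stays 1, matching the form of H above):

```python
import numpy as np

def project(H, pts):
    # Apply a 3x3 homography to (N, 2) inhomogeneous points.
    pts = np.asarray(pts, dtype=float)
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    q = (H @ ph.T).T
    return q[:, :2] / q[:, 2:3]

def update_H(H, P):
    # One (I + P)H -> H step, renormalized so H[2, 2] == 1.
    Hn = (np.eye(3) + P) @ H
    return Hn / Hn[2, 2]
```

A small translational increment (here a hypothetical P with only $p_3 = 2$) shifts projected points by 2 pixels in x, as expected.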

Introducing the weight factor α, the image is divided into two parts:

$$E(p) = \sum_{n \in S_1} \alpha \left\| X_1^{(n)} - X_3^{(n)} \right\| + \sum_{m \in S_2} (1 - \alpha) \left\| X_1^{(m)} - X_3^{(m)} \right\|$$

where $X_1^{(n)}$ and $X_3^{(n)}$ denote, respectively, the coordinates of the n-th feature point of the reference image and the projectively transformed coordinates of the n-th feature point of the image to be stitched; $X_1^{(m)}$ and $X_3^{(m)}$ denote the same quantities for the m-th feature point; α is the weight factor; and $S_1$, $S_2$ denote the region of interest and the normal region, respectively.
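The weighted cost E(p) can be evaluated directly from the matched coordinates, a weight factor, and a per-point region label (an illustrative sketch; the names are assumptions):

```python
import numpy as np

def weighted_cost(H, pts_target, pts_ref, in_roi, alpha):
    # E(p) = sum_{n in S1} alpha * ||X1 - X3||
    #      + sum_{m in S2} (1 - alpha) * ||X1 - X3||,
    # where X3 = H * X2 (projected points of the image to be stitched).
    ph = np.hstack([pts_target, np.ones((len(pts_target), 1))])
    q = (H @ ph.T).T
    proj = q[:, :2] / q[:, 2:3]
    residuals = np.linalg.norm(pts_ref - proj, axis=1)
    weights = np.where(in_roi, alpha, 1.0 - alpha)
    return float((weights * residuals).sum())
```

With two points each off by one pixel, one in each region, and α = 0.25, the cost is 0.25·1 + 0.75·1 = 1.0.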

From the increment matrix P obtained in each iteration, the transformation matrix H for the next iteration is obtained according to $(I + P)H \rightarrow H$. The iteration terminates upon convergence or when the iteration limit is exceeded, yielding the final transformation matrix H.

The final transformation matrix H is substituted into the coordinate transformation between the reference image and the image to be stitched:

$$\begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} \sim \begin{bmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \\ 1 \end{bmatrix}$$

The image to be stitched is then projectively transformed, and the stitched image is obtained.
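The projective warp of step 5) can be sketched with inverse mapping and nearest-neighbour sampling (pure NumPy; since the relation above maps coordinates of the image to be stitched to reference coordinates through H, each output pixel on the reference canvas is looked up through H⁻¹; the names and the sampling choice are assumptions for this sketch):

```python
import numpy as np

def warp_nearest(img, H, out_shape):
    # Fill each output (reference-frame) pixel by mapping it back
    # through H^-1 into the image to be stitched.
    h, w = out_shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx, sy, sw = Hinv @ coords
    sx = np.rint(sx / sw).astype(int)
    sy = np.rint(sy / sw).astype(int)
    out = np.zeros((h, w), dtype=img.dtype)
    ok = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out
```

A real pipeline would use bilinear interpolation and then blend the warped image with the reference in the overlap region; this sketch covers only the coordinate mapping.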

The present invention has the following advantages:

The present invention introduces the concept of weighted optimization into remote sensing image registration. Different weight values are assigned according to the importance of the remote sensing image content, so that the registration result better matches human visual judgment. In important regions, the registration error can be made as small as possible and the registration accuracy reaches a relatively high level, so that the final stitched image can serve a wider range of practical needs.

Brief Description of the Drawings

Fig. 1 is the operational flowchart of the remote sensing image stitching method based on human visual characteristics of the present invention;

Figs. 2a, 2b, 2c and 2d show the process of establishing the feature point matching relationship in the present invention;

Fig. 2a is the reference image;

Fig. 2b is the image to be stitched;

Fig. 2c shows the initial matching relationship between the feature points of the two images;

Fig. 2d shows the correct matching relationship obtained after applying the RANSAC method;

Fig. 3a is the stitching result optimized by the ordinary Levenberg-Marquardt algorithm;

Figs. 3b and 3c are the corresponding marked regions in Fig. 3a;

Fig. 4a is the stitching result optimized by the stitching method of the present invention;

Figs. 4b and 4c are the corresponding marked regions in Fig. 4a;

Fig. 5a is the simulated reference image;

Fig. 5b is the simulated image to be stitched;

Fig. 6a is the stitching result optimized by the ordinary LM algorithm;

Figs. 6b, 6c and 6d are the corresponding marked regions in Fig. 6a;

Fig. 7a is the stitching result optimized by the stitching method of the present invention;

Figs. 7b, 7c and 7d are the corresponding marked regions in Fig. 7a;

Fig. 8a is the reference remote sensing image;

Fig. 8b is the remote sensing image to be stitched;

Fig. 8c is the stitching result optimized by the stitching method of the present invention.

Detailed Description

As shown in Fig. 1, the present invention is a remote sensing image stitching method based on human visual characteristics; the following two examples illustrate specific embodiments.

Example 1: Figs. 2a, 2b, 2c and 2d illustrate the overall stitching process.

(1) Figs. 2a and 2b are the reference image and the image to be stitched, respectively. Feature points of both images are extracted using SIFT, and Fig. 2c shows the resulting initial matching relationship between the feature points of the two images.

(2) A projective transformation model is used to characterize the transformation between the two images; that is, the transformation matrix contains 8 parameters. RANSAC is then applied to eliminate incorrect matches, yielding correctly matched feature point pairs and the initial transformation matrix H. Let the homogeneous coordinates of a reference-image feature point be $X_1 = (x_1, y_1, 1)$ and those of a feature point of the image to be stitched be $X_2 = (x_2, y_2, 1)$; H is the 3×3 projective transformation matrix:

$$H = \begin{bmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{bmatrix}$$

where $h_1 \sim h_8$ are the elements of the projective transformation matrix H, characterizing the projective transformation between the reference image and the image to be stitched; the coordinate transformation corresponds to:

$$\begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} \sim \begin{bmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \\ 1 \end{bmatrix}$$

(3) A weighted optimization algorithm that considers the importance of content is adopted. The importance of remote sensing image content can be quantified in two ways: subjectively or objectively. In the subjective approach, the user manually assigns weight values to various contents of the image; for example, airport runways or military targets can be given relatively high weights, with values in the interval [0, 1]. To meet the needs of an automated optimization algorithm, the gradient information of the image is used to characterize the importance of a region. The gradient of an image pixel is determined by:

$$G(x, y) = \sqrt{I_x^2(x, y) + I_y^2(x, y)}$$

$$I_x = \frac{\partial I(x, y)}{\partial x}, \qquad I_y = \frac{\partial I(x, y)}{\partial y}$$

where $(x, y)$ are the coordinates of a pixel, $I_x$ is the image gradient in the x direction, $I_y$ is the gradient in the y direction, and $I(x, y)$ is the grayscale value of the image.

The weight factor α is expressed as:

$$\beta = \frac{S_{\mathrm{interest}}(G)}{S_{\mathrm{normal}}(G)}$$

$$\alpha = \frac{1}{\beta + 1}$$

where $S_{\mathrm{interest}}(G)$ is the mean gradient value over all pixels in the region of interest, and $S_{\mathrm{normal}}(G)$ is the mean gradient value over all pixels in the normal region.

(4) The increment matrix P is set as the optimization increment of the initial transformation matrix H, and P is solved iteratively using the weight factor α and the coordinates of the matched feature point pairs. At each iteration, the obtained increment matrix P and the current transformation matrix H form the initial transformation matrix H for the next iteration, until convergence ends the process.

The increment matrix P is expressed as:

$$P = \begin{bmatrix} p_1 & p_2 & p_3 \\ p_4 & p_5 & p_6 \\ p_7 & p_8 & 1 \end{bmatrix}$$

An intermediate coordinate variable $X'_2 = (x'_2, y'_2, 1)$ is introduced to simplify the mathematical expressions:

$$\begin{bmatrix} x'_2 \\ y'_2 \\ 1 \end{bmatrix} \sim \begin{bmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \\ 1 \end{bmatrix}$$

The increment matrix P characterizes the small increment of the projective transformation matrix H at each iteration; its role is to successively refine H toward the optimum. Its update rule is:

$$(I + P)H \rightarrow H$$

where I is the identity matrix; using the intermediate coordinate variable $X'_2$ and $(I + P)HX_2 = (I + P)X'_2 = X_3 = (x_3, y_3, 1)$, the calculation is simplified.

To obtain the increment matrix P, the following expression is minimized:

$$E(p) = \sum_{n} \left\| X_1^{(n)} - X_3^{(n)} \right\|$$

where $X_1^{(n)}$ and $X_3^{(n)}$ denote, respectively, the coordinates of the n-th feature point of the reference image and the projectively transformed coordinates of the n-th feature point of the image to be stitched.

Introducing the weight factor α, the image is divided into two parts:

$$E(p) = \sum_{n \in S_1} \alpha \left\| X_1^{(n)} - X_3^{(n)} \right\| + \sum_{m \in S_2} (1 - \alpha) \left\| X_1^{(m)} - X_3^{(m)} \right\|$$

where $X_1^{(n)}$ and $X_3^{(n)}$ denote, respectively, the coordinates of the n-th feature point of the reference image and the projectively transformed coordinates of the n-th feature point of the image to be stitched; $X_1^{(m)}$ and $X_3^{(m)}$ denote the same quantities for the m-th feature point; α is the weight factor; and $S_1$, $S_2$ denote the region of interest and the normal region, respectively.

The increment matrix P is solved using the Levenberg-Marquardt algorithm. From the P obtained in each iteration, the transformation matrix H for the next iteration is obtained according to $(I + P)H \rightarrow H$; the iteration terminates upon convergence or when the iteration limit is exceeded. The convergence criterion is that ε is less than 1 pixel:

$$\epsilon = \sqrt{\sum_{n \in S_1} \left[ (x_3^n - x_1^n)^2 + (y_3^n - y_1^n)^2 \right]} \,\Big/\, n$$

where $(x_1^n, y_1^n)$ are the coordinates of the reference-image feature points in the region of interest and $(x_3^n, y_3^n)$ are the transformed coordinates of the corresponding feature points of the image to be stitched. Preferably, the maximum number of iterations is set to 100; in some cases the two acquired images are complicated enough that the average error in the region of interest cannot be reduced below one pixel, in which case the iteration count serves as the termination condition.
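Under one reading of the convergence formula (root of the summed squared errors over the region-of-interest points, divided by the number of such points; the original typography is ambiguous, so this interpretation and the names below are assumptions), the check is a few lines:

```python
import numpy as np

def roi_epsilon(ref_pts, proj_pts):
    # epsilon = sqrt(sum_n [(x3 - x1)^2 + (y3 - y1)^2]) / n over ROI points.
    diff = np.asarray(proj_pts, dtype=float) - np.asarray(ref_pts, dtype=float)
    d2 = (diff ** 2).sum(axis=1)
    return float(np.sqrt(d2.sum()) / len(d2))

def converged(ref_pts, proj_pts, tol=1.0):
    # Terminate when the ROI error drops below one pixel.
    return roi_epsilon(ref_pts, proj_pts) < tol
```

For two points each displaced by a (3, 4) offset, ε = √50 / 2 ≈ 3.54 pixels, so the loop would continue.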

Using the increment matrix P and the transformation matrix H, the final transformation matrix H is obtained by iterating $(I + P)H \rightarrow H$. Then, according to the projective transformation relationship between the reference image and the image to be stitched:

$$\begin{bmatrix} x_1 \\ y_1 \\ 1 \end{bmatrix} \sim \begin{bmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \\ 1 \end{bmatrix}$$

the image to be stitched is projectively transformed, and the final optimized stitched image is obtained.

Comparing the stitching result of the present optimization method with that of the ordinary LM algorithm: Fig. 3a is the result optimized by the ordinary Levenberg-Marquardt algorithm, and Fig. 4a is the result of the present method. Figs. 3b and 3c are enlarged views of the corresponding regions in Fig. 3a, where Fig. 3b comes from the normal region and Fig. 3c from the region of interest; the optimization result in Fig. 3b is fairly good, but that in Fig. 3c is comparatively poor. In Figs. 4b and 4c, by contrast, the region of interest (Fig. 4c) is optimized well, while the normal region (Fig. 4b) is optimized comparatively poorly.

Example 2: The human-visual characteristics of the method are illustrated by comparing optimized stitching results on simulated images and remote sensing images.

(1) Figs. 5a and 5b are the input reference image and the image to be stitched.

(2) Fig. 6a is the optimized stitching result without weight values. Figs. 6b and 6c come from the defined region of interest; their optimized stitching shows relatively large errors. Fig. 6d comes from the defined normal region.

(3) Fig. 7a is the optimized stitching result of the present invention. Compared with Figs. 6b and 6c, the optimization errors in Figs. 7b and 7c are very small, essentially eliminating the visually perceptible differences, while Fig. 7d shows essentially no major visual difference from Fig. 6d.

(4) Figs. 8a and 8b are the reference and to-be-stitched remote sensing images, and Fig. 8c is the stitching result of the present invention, in which the feature point optimization error in the region of interest reaches sub-pixel level.

Claims (8)

1.一种基于人眼视觉特性的遥感图像拼接方法,其特征在于,包括以下步骤:1. a remote sensing image mosaic method based on human visual characteristics, is characterized in that, comprises the following steps: 1)提取参考图像和待拼接图像上的特征点,建立匹配特征点对,得到两幅图像特征点之间的初始的匹配关系;1) Extract the feature points on the reference image and the image to be stitched, set up matching feature point pairs, and obtain the initial matching relationship between the feature points of the two images; 2)剔除错误的匹配关系,得到匹配关系正确的匹配特征点对,并得到参考图像和待拼接图像之间初始的变换矩阵H;2) Eliminate the wrong matching relationship, obtain the matching feature point pair with the correct matching relationship, and obtain the initial transformation matrix H between the reference image and the image to be stitched; 3)将所述参考图像划分为感兴趣的区域和普通的区域,分别计算两个区域的平均梯度值Sinterest(G)和Snormal(G),利用平均梯度值的比得到权重因子α;3) divide the reference image into an area of interest and a common area, calculate the average gradient values S interest (G) and S normal (G) of the two areas respectively, and use the ratio of the average gradient values to obtain the weight factor α; 所述权重因子α表示为: α = 1 β + 1 , β = S i n t e r e s t ( G ) S n o r m a l ( G ) ; The weight factor α is expressed as: α = 1 β + 1 , β = S i no t e r e the s t ( G ) S no o r m a l ( G ) ; 4)设置增量矩阵P作为初始的变换矩阵H的优化增量,利用步骤2)中的匹配特征点对的坐标信息和步骤3)中的权重因子α迭代求解增量矩阵P,且利用迭代过程中每一次得到的增量矩阵P可计算得到对应的变换矩阵H,直到收敛结束迭代过程;4) Set the increment matrix P as the optimization increment of the initial transformation matrix H, use the coordinate information of the matching feature point pairs in step 2) and the weight factor α in step 3) to iteratively solve the increment matrix P, and use the iterative The incremental matrix P obtained each time in the process can be calculated to obtain the corresponding transformation matrix H, until the convergence ends the iterative process; 收敛的标准为ε小于1个像素:The criterion for convergence is that ε is less than 1 pixel: ϵϵ == (( ΣΣ nno ∈∈ SS 11 (( xx 33 nno -- xx 11 nno )) 22 ++ (( ythe y 33 nno -- ythe y 11 nno )) 22 )) // nno 
式中,表示参考图像在感兴趣区域特征点的坐标,表示待拼接图像特征点经过变换后的坐标;In the formula, Indicates the coordinates of the feature points of the reference image in the region of interest, Indicates the transformed coordinates of the feature points of the image to be spliced; 引入公式:Introducing the formula: EE. (( pp )) == ΣΣ nno ∈∈ SS 11 αα || || Xx 11 (( nno )) -- Xx 33 (( nno )) || || ++ ΣΣ mm ∈∈ SS 22 (( 11 -- αα )) || || Xx 11 (( mm )) -- Xx 33 (( mm )) || || 式中,分别表示参考图像第n个特征点的坐标和待拼接图像第n个特征点坐标经过投影变换后的坐标,分别表示参考图像第m个特征点的坐标和待拼接图像第m个特征点坐标经过投影变换后的坐标,α为权重因子,S1,S2分别代表感兴趣的区域和普通的区域;并利用Levenberg-Marquardt算法迭代求解出增量矩阵P;In the formula, Respectively represent the coordinates of the nth feature point of the reference image and the coordinates of the nth feature point coordinates of the image to be stitched after projective transformation, Represent the coordinates of the mth feature point of the reference image and the coordinates of the mth feature point coordinates of the image to be stitched after projective transformation, α is the weight factor, S 1 , S 2 represent the area of interest and the common area respectively; and Use the Levenberg-Marquardt algorithm to iteratively solve the incremental matrix P; 5)应用最终的变换矩阵H对待拼接图像进行投影变换,并与参考图像进行拼接融合,完成图像的拼接。5) Apply the final transformation matrix H to carry out projective transformation on the image to be stitched, and stitch and fuse it with the reference image to complete the stitching of the image. 2.如权利要求1所述的基于人眼视觉特性的遥感图像拼接方法,其特征在于,令所述步骤2)中参考图像特征点的齐次坐标为X1=(x1,y1,1),待拼接图像特征点的齐次坐标为X2=(x2,y2,1),及所述的初始的变换矩阵H为:2. 
The remote sensing image mosaic method based on human visual characteristics as claimed in claim 1, wherein the homogeneous coordinates of the reference image feature points in said step 2) are X 1 =(x 1 , y 1 , 1), the homogeneous coordinates of the image feature points to be spliced are X 2 =(x 2 ,y 2 ,1), and the initial transformation matrix H is: Hh == hh 11 hh 22 hh 33 hh 44 hh 55 hh 66 hh 77 hh 88 11 式中,h1~h8是变换矩阵H中的元素,表征了参考图像和待拼接图像之间的投影变换关系。In the formula, h 1 ~ h 8 are the elements in the transformation matrix H, which represent the projective transformation relationship between the reference image and the image to be stitched. 3.如权利要求2所述的基于人眼视觉特性的遥感图像拼接方法,其特征在于,图像中像素的梯度值G(x,y)由下式表示:3. the remote sensing image mosaic method based on human visual characteristics as claimed in claim 2, is characterized in that, the gradient value G (x, y) of pixel in the image is represented by following formula: GG (( xx ,, ythe y )) == II xx 22 (( xx ,, ythe y )) ++ II ythe y 22 (( xx ,, ythe y )) II xx == ∂∂ II (( xx ,, ythe y )) ∂∂ xx ,, II ythe y == ∂∂ II (( xx ,, ythe y )) ∂∂ ythe y 式中,(x,y)是图像中像素的坐标,Ix表示图像在x方向的梯度,Iy表示图像在y方向上的梯度,I(x,y)表示图像的灰度值。In the formula, (x, y) is the coordinate of the pixel in the image, I x represents the gradient of the image in the x direction, I y represents the gradient of the image in the y direction, and I(x, y) represents the gray value of the image. 4.如权利要求3所述的基于人眼视觉特性的遥感图像拼接方法,其特征在于,所述的增量矩阵P表示为:4. the remote sensing image splicing method based on human visual characteristics as claimed in claim 3, is characterized in that, described incremental matrix P is expressed as: PP == pp 11 pp 22 pp 33 pp 44 pp 55 pp 66 pp 77 pp 88 11 式中,增量矩阵P中的元素p1~p8分别对应变换矩阵H中相应元素的微小增量。In the formula, the elements p 1 to p 8 in the increment matrix P correspond to the tiny increments of the corresponding elements in the transformation matrix H, respectively. 5.如权利要求4所述的基于人眼视觉特性的遥感图像拼接方法,其特征在于,增量矩阵P与变换矩阵H的关系式为:5. 
The remote sensing image stitching method based on human visual characteristics according to claim 4, wherein the relation between the incremental matrix P and the transformation matrix H is:

(I + P)H → H

where I is the identity matrix.

6. The remote sensing image stitching method based on human visual characteristics according to claim 5, wherein the coordinates of a feature point of the image to be stitched after projective transformation are:

X3 = (x3, y3, 1) = (I + P)HX2 = (I + P)X'2

where I is the identity matrix and X'2 is an intermediate coordinate variable, with X'2 = (x'2, y'2, 1).

7. The remote sensing image stitching method based on human visual characteristics according to claim 6, wherein the incremental matrix P obtained in each iteration is used, according to (I + P)H → H, to obtain the transformation matrix H for the next iteration, and this update is applied repeatedly until the final transformation matrix H is obtained.

8. The remote sensing image stitching method based on human visual characteristics according to claim 7, wherein the coordinate transformation relation between the reference image and the image to be stitched is:

(x1, y1, 1)ᵀ ~ H (x2, y2, 1)ᵀ

Substituting the final transformation matrix H into the above formula, the image to be stitched is projectively transformed and the stitched image is obtained.
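The projective mapping of claim 8 is plain homogeneous-coordinate arithmetic: lift a pixel to (x, y, 1), multiply by H, and divide by the third component. A minimal sketch, where the 3×3 matrix is a hypothetical H rather than one estimated from real imagery:

```python
import numpy as np

def apply_homography(H, pts):
    """Map (N, 2) pixel coordinates through a 3x3 homography H:
    (x1, y1, 1)^T ~ H (x2, y2, 1)^T, then divide by the third component."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous coords
    mapped = pts_h @ H.T                              # row-vector convention
    return mapped[:, :2] / mapped[:, 2:3]             # back to inhomogeneous

# Hypothetical H: pure translation by (5, -2), i.e. h3 = 5, h6 = -2
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0,  1.0]])
print(apply_homography(H, np.array([[0.0, 0.0], [3.0, 4.0]])))
# the translation moves the points to (5, -2) and (8, 2)
```

The iterative refinement of claim 7 amounts to replacing H by (I + P)H after each Levenberg-Marquardt step and re-evaluating the cost with the newly mapped X3 until convergence.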
CN201210510695.9A 2012-09-11 2012-12-03 A kind of remote sensing images joining method based on human-eye visual characteristic Expired - Fee Related CN102968780B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210510695.9A CN102968780B (en) 2012-09-11 2012-12-03 A kind of remote sensing images joining method based on human-eye visual characteristic

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN 201210333179 CN102867298A (en) 2012-09-11 2012-09-11 Remote sensing image splicing method based on human eye visual characteristic
CN201210333179.3 2012-09-11
CN201210510695.9A CN102968780B (en) 2012-09-11 2012-12-03 A kind of remote sensing images joining method based on human-eye visual characteristic

Publications (2)

Publication Number Publication Date
CN102968780A CN102968780A (en) 2013-03-13
CN102968780B true CN102968780B (en) 2015-11-25

Family

ID=47446154

Family Applications (2)

Application Number Title Priority Date Filing Date
CN 201210333179 Pending CN102867298A (en) 2012-09-11 2012-09-11 Remote sensing image splicing method based on human eye visual characteristic
CN201210510695.9A Expired - Fee Related CN102968780B (en) 2012-09-11 2012-12-03 A kind of remote sensing images joining method based on human-eye visual characteristic

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN 201210333179 Pending CN102867298A (en) 2012-09-11 2012-09-11 Remote sensing image splicing method based on human eye visual characteristic

Country Status (1)

Country Link
CN (2) CN102867298A (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103514580B (en) * 2013-09-26 2016-06-08 香港应用科技研究院有限公司 Method and system for obtaining super-resolution images with optimized visual experience
US9734599B2 (en) * 2014-10-08 2017-08-15 Microsoft Technology Licensing, Llc Cross-level image blending
CN104599258B (en) * 2014-12-23 2017-09-08 大连理工大学 A kind of image split-joint method based on anisotropic character descriptor
CN105279735B (en) * 2015-11-20 2018-08-21 沈阳东软医疗系统有限公司 A kind of fusion method of image mosaic, device and equipment
CN105931185A (en) * 2016-04-20 2016-09-07 中国矿业大学 Automatic splicing method of multiple view angle image
CN105915804A (en) * 2016-06-16 2016-08-31 恒业智能信息技术(深圳)有限公司 Video stitching method and system
CN107067368B (en) * 2017-01-20 2019-11-26 武汉大学 Streetscape image splicing method and system based on deformation of image
CN107833207B (en) * 2017-10-25 2020-04-03 北京大学 A detection method of false matching between images based on augmented homogeneous coordinate matrix
CN109995993A (en) * 2018-01-02 2019-07-09 广州亿航智能技术有限公司 Aircraft and its filming control method, device and terminal system
CN109829853B (en) * 2019-01-18 2022-12-23 电子科技大学 Unmanned aerial vehicle aerial image splicing method
CN110363179B (en) * 2019-07-23 2022-03-25 联想(北京)有限公司 Map acquisition method, map acquisition device, electronic equipment and storage medium
CN112070775B (en) * 2020-09-29 2021-11-09 成都星时代宇航科技有限公司 Remote sensing image optimization processing method and device, electronic equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200937946A (en) * 2008-02-18 2009-09-01 Univ Nat Taiwan Full-frame video stabilization with a polyline-fitted camcorder path


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Junhong Gao, SeonJoo Kim, Michael S. Brown; Constructing Image Panoramas using Dual-Homography Warping; Proceedings of CVPR; 2011-06-25; Section 3.1 *
Richard Szeliski, Heung-Yeung Shum; Creating Full View Panoramic Image Mosaics and Environment Maps; Computer Graphics (SIGGRAPH '97); 1997-08; Section 3 *

Also Published As

Publication number Publication date
CN102968780A (en) 2013-03-13
CN102867298A (en) 2013-01-09

Similar Documents

Publication Publication Date Title
CN102968780B (en) A kind of remote sensing images joining method based on human-eye visual characteristic
Wang et al. Dynamic fusion module evolves drivable area and road anomaly detection: A benchmark and algorithms
CN105069746A (en) Video real-time human face substitution method and system based on partial affine and color transfer technology
CN105608693B (en) The calibration system and method that vehicle-mounted panoramic is looked around
CN101894366B (en) Method and device for acquiring calibration parameters and video monitoring system
Dufour et al. Shape, displacement and mechanical properties from isogeometric multiview stereocorrelation
CN110956661B (en) Method for calculating dynamic pose of visible light and infrared camera based on bidirectional homography matrix
CN107103277B (en) Gait recognition method based on depth camera and 3D convolutional neural network
CN105139412A (en) Hyperspectral image corner detection method and system
CN110084093B (en) Method and device for detecting and identifying target in remote sensing image based on deep learning
CN104537705A (en) Augmented reality based mobile platform three-dimensional biomolecule display system and method
CN110009674A (en) A real-time calculation method of monocular image depth of field based on unsupervised deep learning
CN114332385A (en) Monocular camera target detection and spatial positioning method based on three-dimensional virtual geographic scene
CN107229920A (en) Based on integrating, depth typical time period is regular and Activity recognition method of related amendment
CN102231844B (en) Video image fusion performance evaluation method based on structure similarity and human vision
CN109887029A (en) A monocular visual odometry method based on image color features
CN110044374A (en) A kind of method and odometer of the monocular vision measurement mileage based on characteristics of image
Alemán-Flores et al. Line detection in images showing significant lens distortion and application to distortion correction
CN110148177A (en) For determining the method, apparatus of the attitude angle of camera, calculating equipment, computer readable storage medium and acquisition entity
CN103900504A (en) Nano-scale real-time three-dimensional visual information feedback method
CN107123094A (en) A kind of mixing Poisson, the video denoising method of gaussian sum impulsive noise
CN110910456A (en) Dynamic Calibration Algorithm of Stereo Camera Based on Harris Corner Mutual Information Matching
CN105139401A (en) Depth credibility assessment method for depth map
Shit et al. An encoder‐decoder based CNN architecture using end to end dehaze and detection network for proper image visualization and detection
CN103700110A (en) Full-automatic image matching method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20151125

Termination date: 20181203