CN111008932A - A Panoramic Image Stitching Method Based on Image Screening - Google Patents
- Publication number
- CN111008932A (application CN201911245280.1A; granted as CN111008932B)
- Authority
- CN
- China
- Prior art keywords
- image
- images
- image group
- group
- similarity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 36
- 238000012216 screening Methods 0.000 title claims abstract description 30
- 239000011159 matrix material Substances 0.000 claims abstract description 48
- 230000009466 transformation Effects 0.000 claims abstract description 28
- 238000004422 calculation algorithm Methods 0.000 claims abstract description 19
- 238000012217 deletion Methods 0.000 claims description 3
- 230000037430 deletion Effects 0.000 claims description 3
- 238000009825 accumulation Methods 0.000 description 8
- 238000005457 optimization Methods 0.000 description 7
- 238000010586 diagram Methods 0.000 description 6
- 230000000694 effects Effects 0.000 description 5
- 230000008569 process Effects 0.000 description 4
- 238000004458 analytical method Methods 0.000 description 3
- 238000004364 calculation method Methods 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 230000001131 transforming effect Effects 0.000 description 2
- 238000013459 approach Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000006073 displacement reaction Methods 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 230000001788 irregular Effects 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a panoramic image stitching method based on image screening. The method comprises the following steps: an image screening algorithm based on the similarity matrix between images removes redundant images from the original image group; based on the weight matrix of the screened image group, the optimal reference image is determined, the stitching sequence is established, and the screened image group is divided into multiple small image groups; similarity transformation is first performed between the small image groups to obtain initialized registration parameters, and then, following the stitching sequence, the initialized registration parameters are refined between adjacent images under perspective constraints using a homography model. The invention screens the multiple images with overlapping regions collected over a target area by an unmanned aerial vehicle (UAV), removes redundant images from the image group, and finally stitches a panoramic image of the target area from the screened image group.
Description
Technical Field
The invention relates to the field of panoramic image stitching, and in particular to a panoramic image stitching technique based on image screening, in which a UAV collects multiple images with overlapping regions over a target area.
Background Art
UAV remote sensing is a new remote sensing technique that has shown strong momentum in recent years thanks to its efficiency, flexibility, speed, low cost, and high resolution. However, because the UAV remote sensing platform is constrained by flight altitude and camera focal length during aerial photography, a single captured image covers only a small area and often cannot cover the entire region of interest. Stitching the many small-view images thus acquired into a single wide-view panoramic image has therefore become an important technique.
Panoramic image stitching combines multiple adjacent small-view images into a wide-view panorama. Image stitching maps all images onto a common coordinate system through a projective warp (e.g., cylindrical, spherical, or perspective). Because of irregular camera motion, a certain amount of parallax exists between images, and the stitched panorama almost inevitably suffers from insufficient local stitching accuracy (Fig. 1(a)) and severe accumulation of global deformation (Fig. 1(b)).
To improve stitching quality, Konolige et al. [1] proposed bundle adjustment for global optimization, which minimizes the global reprojection error. To avoid nonlinear optimization, Kekec et al. [2] use an affine model to initialize the alignment and a homography model for global optimization. As the number of stitched images grows, perspective deformation accumulates more and more. To mitigate this, Caballero et al. [3] proposed registering images with a hierarchical model chosen according to registration quality, in which large-parallax images are registered with fewer degrees of freedom; the essence of this algorithm is a trade-off between improving registration accuracy and reducing deformation accumulation.
For wide-view image stitching, exploiting the topological relationships between images is also a highly efficient way to improve stitching quality. To estimate these relationships efficiently, Elibol et al. [4] combined coarse feature point matching with a minimum spanning tree algorithm to detect overlaps between images. Regarding the choice of reference image, Szeliski [5] showed that the most suitable choice is the image closest to the center of the panorama, since the center image has the smallest average shortest path to all other images, which minimizes deformation accumulation. To realize this, Choe et al. [6] selected the optimal reference image with a graph-theoretic algorithm, but this requires the deformation error between every pair of images to be computed in advance. Xia et al. [7] proposed a registration model in which an affine model first initializes the registration and a homography model then refines the parameters between adjacent images.
The above methods can stitch multiple images into a complete panorama, but as the number of images grows the data become redundant, which causes a great deal of unnecessary computation.
Summary of the Invention
The present invention provides a panoramic image stitching method based on image screening. The invention screens the multiple images with overlapping regions collected over a target area by a UAV, removes redundant images from the image group, and finally stitches a panoramic image of the target area from the screened image group, as described in detail below:
A panoramic image stitching method based on image screening, the method comprising the following steps:
proposing an image screening algorithm based on the similarity matrix between images and removing redundant images from the original image group;
based on the weight matrix of the screened image group, determining the optimal reference image, establishing the stitching sequence, and dividing the screened image group into multiple small image groups;
first performing similarity transformation [9] between the small image groups to obtain initialized registration parameters, and then, following the stitching sequence, refining the initialized registration parameters between adjacent images under perspective constraints using a homography model [10].
Further, removing redundant images from the original image group according to the image screening algorithm and the similarity matrix specifically comprises:
1) setting a similarity threshold based on the similarity matrix of the original image group;
2) judging whether the maximum value in the similarity matrix of the current image group exceeds the similarity threshold; if so, selecting the two images with the highest similarity in the current image group, designating one of them as redundant, and removing it from the image group;
3) repeating step 2) until the maximum value in the similarity matrix of the current image group is smaller than the similarity threshold.
Designating one of the two images as redundant specifically comprises: computing, for each of the two images, the sum of the similarities among all images remaining after its deletion, and designating as redundant the image whose deletion yields the smaller sum of similarities.
Determining the optimal reference image based on the weight matrix of the screened image group specifically comprises:
building a weight matrix among all screened images from the image similarity matrix; running a shortest path algorithm on the weight matrix to compute, for each node, the sum of the weights of the shortest paths to all other nodes; and taking the image represented by the node with the smallest sum of shortest path weights as the optimal reference image.
Determining the stitching sequence specifically comprises:
based on the weight matrix of the image group, running a breadth-first traversal starting from the node representing the optimal reference image, and taking the resulting node sequence as the stitching sequence of the images.
The method further comprises:
first performing similarity transformation between the small image groups to obtain initialized registration parameters; then, following the stitching sequence, refining the initialized registration parameters between adjacent images under perspective constraints using a homography model; and obtaining the optimal solution by minimizing the total energy function.
The beneficial effects of the technical solution provided by the present invention are:
1. On the premise of guaranteeing the completeness and quality of the final stitching result, the method screens out redundant images from the acquired group of small-view images, which greatly increases the stitching speed;
2. The invention overcomes the accumulation of perspective deformation on large data sets: the images are grouped, and the groups are initially registered to one another with a similarity transformation model, yielding good global consistency;
3. Based on the topology between images, the method performs homography registration between adjacent images and thereby obtains good local stitching quality;
4. Experimental results show that the method effectively removes redundant images from the image group, and that the quality and completeness of the resulting wide-view panorama differ little from those obtained by stitching all the images.
Brief Description of the Drawings
Fig. 1 illustrates problems encountered in panoramic image stitching;
where (a) shows low local stitching accuracy with ghosting or misalignment, and (b) shows severe deformation accumulation, so that the final stitching result lacks global consistency.
Fig. 2 is a flow chart of the panoramic image stitching method based on image screening;
Fig. 3 shows, for 61 images, the spatial representation of the stitching sequence obtained by selecting the optimal reference image and running a breadth-first traversal starting from it;
Fig. 4 is a schematic diagram of the topology analysis of the 61 images and of the screened images;
where (a) is the topology of the original image group and (b) is the topology of the screened image group (containing 31 images).
Fig. 5 is a schematic diagram of the stitching results for the 61 images and for the screened images;
where (a) is the panorama stitched from the original image group and (b) is the panorama stitched from the screened image group (containing 31 images).
Fig. 6 is a schematic diagram of the topology analysis of the 744 images and of the screened images;
where (a) is the topology of the original image group and (b) is the topology of the screened image group (containing 375 images).
Fig. 7 is a schematic diagram of the stitching results for the 744 images and for the screened images;
where (a) is the panorama stitched from the original image group and (b) is the panorama stitched from the screened image group (containing 375 images).
Detailed Description of the Embodiments
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below.
Embodiment 1
Referring to Fig. 1, in panoramic image stitching the data become redundant as the number of stitched images grows, which causes unnecessary computation and lowers the stitching speed. To solve this problem, an embodiment of the present invention proposes a panoramic image stitching method based on image screening. Referring to Fig. 2, the method comprises the following steps:
101: quickly obtain the similarity matrix of the image group and apply the proposed image screening algorithm to remove redundant images from the original image group;
102: based on the weight matrix of the screened image group, determine the optimal reference image, establish the stitching sequence, and divide the screened image group into multiple small image groups;
103: first perform similarity transformation between the small image groups to obtain initialized registration parameters; then, following the stitching sequence, refine the initialized registration parameters between adjacent images under perspective constraints using a homography model.
Embodiment 2
The solution of Embodiment 1 is further described below with reference to specific examples:
201: photograph the target area with a UAV and acquire multiple consecutive images with overlapping regions as the original image group;
The image information of the target area is acquired through this step.
202: perform coarse feature point matching on the original image group and compute the similarity matrix between the images;
Step 202 is specifically as follows:
According to the number of matched feature points between two images, a similarity matrix M is built over all images I_i, i = 1, ..., N, where N is the number of original images. The similarity M(i, j) between the i-th image and the j-th image is computed from the feature point matches between them: p denotes the number of feature points in image i, q denotes the number of feature points in image j, m and n index the feature points of image i and image j, and v_mn is recorded as 1 if feature points m and n match and 0 otherwise.
The feature point matching above is computed as follows: feature points are extracted with the SURF (Speeded-Up Robust Features) algorithm [8]; the feature points extracted from each image are then filtered, and each image keeps only the feature points extracted from the first level of the Gaussian pyramid; finally, the filtered feature points of each image are matched one by one against those of each of the remaining images.
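For illustration only, the sketch below computes such a similarity matrix with OpenCV; it is an interpretation of step 202 rather than the patented implementation. It assumes opencv-contrib-python (which provides cv2.xfeatures2d.SURF_create; ORB could be substituted if SURF is unavailable), and keeping keypoints with octave == 0 is an assumption standing in for "only the first level of the Gaussian pyramid". The function name pairwise_similarity and the ratio-test threshold are illustrative choices.

```python
import itertools
import cv2
import numpy as np

def pairwise_similarity(images, ratio=0.7):
    # Extract SURF features once per image, keeping only coarse (octave 0) keypoints.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    feats = []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        kps, des = surf.detectAndCompute(gray, None)
        if des is None:
            feats.append(np.empty((0, 64), np.float32))
            continue
        keep = [i for i, kp in enumerate(kps) if kp.octave == 0]  # coarse-level filter (assumption)
        feats.append(des[keep])

    # Count ratio-test matches for every image pair; this plays the role of summing v_mn.
    n = len(images)
    M = np.zeros((n, n))
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    for i, j in itertools.combinations(range(n), 2):
        if len(feats[i]) < 2 or len(feats[j]) < 2:
            continue
        knn = matcher.knnMatch(feats[i], feats[j], k=2)
        good = sum(1 for pair in knn
                   if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance)
        M[i, j] = M[j, i] = good  # symmetric similarity in matched-point count
    return M
```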
203: based on the similarity matrix between images, apply the proposed image screening algorithm to remove redundant images from the original image group;
Step 203 comprises:
Based on the similarity matrix M of the original image group obtained in the previous step: (1) set a similarity threshold; (2) judge whether the maximum value in the similarity matrix of the current image group exceeds the similarity threshold; if it does, select the two images with the highest similarity in the current image group, designate one of them as redundant, and remove it from the image group; (3) repeat step (2) until the maximum value in the similarity matrix of the current image group is smaller than the similarity threshold.
Further, the criterion for deciding which of the two images is redundant is the following: let a and b denote the indices of the two images with the highest similarity and n the number of images in the current image group; for each of the two candidates, compute the sum of the similarities among all images remaining after its deletion, and designate as redundant the candidate whose deletion yields the smaller sum of similarities.
For example, suppose there are five images A–E, of which A and B are the two most similar. Delete image A and compute the sum of pairwise similarities among the remaining four images B–E; then delete image B and compute the sum of pairwise similarities among the four images A, C, D, E. If the sum obtained after deleting A is smaller than the sum obtained after deleting B, image A is designated redundant and removed from the image group while B is kept; otherwise, image B is designated redundant and removed while A is kept.
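A minimal sketch of the screening loop of step 203 is given below. It assumes M is the symmetric similarity matrix computed above and that threshold is the value set in sub-step (1); it is an illustrative reading, not the patented implementation.

```python
import numpy as np

def screen_redundant(M, threshold):
    keep = list(range(M.shape[0]))           # indices of images still in the group
    while True:
        sub = M[np.ix_(keep, keep)]
        np.fill_diagonal(sub, -np.inf)        # ignore self-similarity
        if sub.max() < threshold:             # sub-step (3): stop condition
            break
        a, b = np.unravel_index(np.argmax(sub), sub.shape)

        # Sub-step (2): drop whichever of the two most similar images leaves
        # the smaller total similarity among the remaining images.
        def total_without(k):
            rest = [keep[i] for i in range(len(keep)) if i != k]
            r = M[np.ix_(rest, rest)]
            return np.triu(r, 1).sum()        # sum over each unordered pair once

        drop = a if total_without(a) <= total_without(b) else b
        keep.pop(drop)
    return keep                               # indices of the screened image group
```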
204: based on the weight matrix of the screened image group, find the optimal reference image, determine the stitching sequence, and group the images;
Step 204 comprises:
The optimal reference image is found as follows. First, based on the similarity matrix M of the images, a weight matrix W over all screened images is built; the weight between image i and image j is defined in terms of M(i, j), where ε is a balance weight, inf denotes infinity, and ln denotes the natural logarithm.
Then, based on the weight matrix W, a shortest path algorithm is run to compute, for each node, the sum of the weights of the shortest paths to all other nodes; the image represented by the node with the smallest sum of shortest path weights is taken as the optimal reference image, denoted O.
For example, given five images A–E and their weight matrix, the shortest path algorithm computes the shortest paths from image A to each of the four images B–E and sums the weights of these four paths; in the same way, the sum of the four shortest path weights is computed for every image. If image A has the smallest sum of shortest path weights to the other four images, image A is chosen as the optimal reference image.
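An illustrative sketch of this selection step follows. It assumes the weight matrix W uses np.inf for non-overlapping pairs, and it only shows the shortest-path vote; the exact weight formula (with the balance weight ε and the logarithm) appears as an image in the original and is not reproduced here.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def optimal_reference(W):
    # All-pairs shortest paths over the weight matrix (inf / zero entries are treated as non-edges).
    D = shortest_path(W, directed=False)
    totals = D.sum(axis=1)          # sum of shortest-path weights from each node to all others
    return int(np.argmin(totals))   # node with the smallest sum is the reference image O
```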
The stitching order is determined as follows: based on the weight matrix W of the image group, a breadth-first traversal is run starting from the node O that represents the optimal reference image, and the resulting node sequence is taken as the stitching sequence of the images. As shown in Fig. 3, which plots the spatial relationship of the 61 images, each node represents an image, an edge between two nodes indicates that the two images overlap, and the number on a node gives its stitching order; node 0 represents the optimal reference image, and the numbers on the other nodes give the order in which they are reached by the breadth-first traversal of W starting from node 0, which is then used as the order of panorama stitching.
The images are grouped as follows: set the number of images per group, s (with s smaller than the number n of images in the image group); the 1st to s-th images of the stitching sequence form the first group G_1 = {I_i | i = 1, 2, ..., s}, the (s+1)-th to 2s-th images form the second group G_2 = {I_i | i = s+1, s+2, ..., 2s}, and so on, until fewer than s images remain; the remaining images form the last group G_m = {I_i | i = (m-1)s+1, (m-1)s+2, ..., n}.
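The traversal and grouping can be sketched as follows, again under the assumption that finite entries of W mark overlapping image pairs; the group size s is the user-chosen parameter described above.

```python
from collections import deque
import numpy as np

def stitching_order(W, ref):
    # Breadth-first traversal from the reference node; nodes with no path to it are omitted.
    n = W.shape[0]
    order, seen, queue = [], {ref}, deque([ref])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in range(n):
            if v != u and v not in seen and np.isfinite(W[u, v]):
                seen.add(v)
                queue.append(v)
    return order

def group_images(order, s):
    # Split the stitching sequence into consecutive groups of at most s images.
    return [order[k:k + s] for k in range(0, len(order), s)]
```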
205: image registration combining global similarity transformation with local perspective transformation;
Step 205 is specifically as follows:
To obtain good global consistency and prevent the accumulation of deformation errors, the initial registration between image groups is performed with a similarity transformation model.
Let the parameter set of the similarity transformation model of image group G_m be S_m = {S_i^O, i = 1, ..., n_1}, where S_i^O denotes the similarity transformation matrix mapping the i-th image of group G_m to the optimal reference image O and n_1 is the number of images in G_m; let the parameter set of the similarity transformation model of the adjacent image group G_{m+1} be S_{m+1} = {S_j^O, j = 1, ..., n_2}, where S_j^O denotes the similarity transformation matrix mapping the j-th image of group G_{m+1} to O and n_2 is the number of images in G_{m+1}. The energy function of the initial registration is:
E(S) = E_1(S | G_m, G_{m+1}) + E_2(S_m | G_m)    (4)
where E_1(S | G_m, G_{m+1}) denotes the sum of the registration errors between image group G_m and its adjacent image group G_{m+1}, and S = S_m ∪ S_{m+1} denotes the union of the parameter sets of the similarity transformation models of G_m and G_{m+1}.
E_2(S_m | G_m) denotes the sum of the registration errors between the images within group G_m that have overlapping regions. In its definition, t(x) denotes the transformation of the inhomogeneous coordinate x, S_i^O denotes the similarity transformation matrix mapping image i to the optimal reference image O, x_{i,j}^k is the two-dimensional coordinate on image i of the k-th matching point between image i and image j, and M_{i,j} is the number of matching points between image i and image j.
To obtain the best global consistency, the optimal solution is obtained by minimizing the total energy function. Through the above operations, the set of similarity transformation matrices mapping all images to the optimal reference image O is obtained:
S = {S_i^O, i = 1, ..., n}, where S_i^O is the similarity transformation matrix mapping image i to the optimal reference image O and n is the total number of images in the image group. To improve the registration accuracy in the locally overlapping regions of the panorama, a homography model is next used for optimization.
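The definitions of E_1 and E_2 for this similarity stage appear as formula images in the original and are not reproduced in this text; a plausible reconstruction consistent with the symbol definitions above, offered only as an assumption and not as the patent's exact equations, is:

$$E_1(S \mid G_m, G_{m+1}) = \sum_{I_i \in G_m} \sum_{I_j \in G_{m+1}} \sum_{k=1}^{M_{i,j}} \bigl\| S_i^{O}\, t(x_{i,j}^{k}) - S_j^{O}\, t(x_{j,i}^{k}) \bigr\|^2,$$

$$E_2(S_m \mid G_m) = \sum_{\substack{I_i, I_j \in G_m \\ i < j}} \sum_{k=1}^{M_{i,j}} \bigl\| S_i^{O}\, t(x_{i,j}^{k}) - S_j^{O}\, t(x_{j,i}^{k}) \bigr\|^2.$$

Both terms measure, in the common coordinate frame of the reference image O, the squared distances between corresponding matched points after each image is warped by its similarity transform.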
The homography transformation matrices are optimized as follows: the parameters of the similarity transformation matrix S_i^O are set as the initialization of the homography transformation matrix H_i^O, and the transformation matrices are optimized according to the following formula:
E(H) = E_1(H) + λ E_2(H)    (7)
where λ is a weight coefficient balancing E_1(H) and E_2(H). The purpose of E_1(H) is to minimize the sum of squared registration errors of the feature points between images and thereby obtain a good local registration; it is defined in terms of the following quantities.
H_i^O denotes the homography transformation matrix mapping image i to the optimal reference image O, H_j^O the homography transformation matrix mapping image j to O, x_{i,j}^k the two-dimensional coordinate on image i of the k-th pair of matching points between image i and image j, and x_{j,i}^k the two-dimensional coordinate on image j of the k-th pair of matching points between image j and image i.
The purpose of E_2(H) is to maintain global consistency and prevent a severe accumulation of perspective deformation. Therefore, when optimizing with the homography model, the homography parameters should remain close to the initialized similarity transformation parameters, so that points are not displaced too far during the feature point transformation; E_2(H) is defined accordingly.
By refining the parameters of the homography transformation matrix H_i^O of each image in this way, the optimized result is used as the transformation matrix with which each image is finally stitched.
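The definitions of E_1(H) and E_2(H) likewise appear as formula images in the original; a plausible reconstruction consistent with the surrounding description, offered as an assumption rather than the patent's exact equations, is:

$$E_1(H) = \sum_{(i,j)} \sum_{k=1}^{M_{i,j}} \bigl\| H_i^{O}\, t(x_{i,j}^{k}) - H_j^{O}\, t(x_{j,i}^{k}) \bigr\|^2,$$

$$E_2(H) = \sum_{i=1}^{n} \bigl\| H_i^{O} - S_i^{O} \bigr\|_F^2,$$

where the sum in E_1 runs over overlapping image pairs and the Frobenius-norm term in E_2 penalizes deviation of each homography from its similarity initialization, keeping feature point displacements small during refinement.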
Embodiment 3
To verify the effectiveness of the method, experiments were conducted in this section on two image sets collected by UAVs, comparing the panorama generated from the original image group with the panorama generated from the screened image group. Fig. 4 compares the topology of the 61 images before and after screening: Fig. 4(a) is the topology of the original image group and Fig. 4(b) is the topology of the screened image group. Fig. 5 compares the stitching results of the 61 images before and after screening: Fig. 5(a) is the stitching result of the original image group and Fig. 5(b) is that of the screened image group. Fig. 6 compares the topology of the 744 images before and after screening: Fig. 6(a) is the topology of the original image group and Fig. 6(b) is that of the screened image group. Fig. 7 compares the stitching results of the 744 images before and after screening: Fig. 7(a) is the stitching result of the original image group and Fig. 7(b) is that of the screened image group.
The experimental results demonstrate the effectiveness of the method: on the premise of guaranteeing the completeness and quality of the final stitching result, redundant images are removed from the image group and the stitching speed is improved.
References
[1] K. Konolige, "Sparse sparse bundle adjustment," in: British Machine Vision Conference, 2010, pp. 1–10.
[2] A. Y. Taygun Kekec, M. Unel, "A new approach to real-time mosaicing of aerial images," Robot. Auton. Syst. 62(12) (2014) 1755–1767.
[3] F. Caballero, L. Merino, J. Ferruz, A. Ollero, "Homography based Kalman filter for mosaic building. Applications to UAV position estimation," in: Proceedings of the IEEE International Conference on Robotics and Automation, 2007, pp. 2004–2009.
[4] A. Elibol, N. Gracias, R. Garcia, "Fast topology estimation for image mosaicing using adaptive information thresholding," Robot. Auton. Syst. 61(2) (2013) 125–136.
[5] R. Szeliski, "Image alignment and stitching: a tutorial," Found. Trends Comput. Graph. Vis. 2(1) (2006) 1–104.
[6] T. E. Choe, I. Cohen, M. Lee, G. Medioni, "Optimal global mosaic generation from retinal images," in: Proceedings of the IEEE International Conference on Pattern Recognition, Vol. 3, 2006, pp. 681–684.
[7] M. Xia, J. Yao, R. Xie, L. Li, W. Zhang, "Globally consistent alignment for planar mosaicking via topology analysis," Pattern Recognition 66 (2017) 239–252.
[8] H. Bay, T. Tuytelaars, L. Van Gool, "SURF: Speeded up robust features," in: European Conference on Computer Vision, Springer, Berlin, Heidelberg, 2006.
[9] O. Taussky, H. Zassenhaus, "On the similarity transformation between a matrix and its transpose," Pacific Journal of Mathematics 9(3) (1959) 893–896.
[10] E. Dubrofsky, "Homography Estimation," thesis, University of British Columbia, Vancouver, 2009.
Those skilled in the art will understand that the accompanying drawings are only schematic diagrams of a preferred embodiment, and that the serial numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911245280.1A CN111008932B (en) | 2019-12-06 | 2019-12-06 | A Panoramic Image Stitching Method Based on Image Screening |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911245280.1A CN111008932B (en) | 2019-12-06 | 2019-12-06 | A Panoramic Image Stitching Method Based on Image Screening |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111008932A true CN111008932A (en) | 2020-04-14 |
CN111008932B CN111008932B (en) | 2021-05-25 |
Family
ID=70115500
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911245280.1A Expired - Fee Related CN111008932B (en) | 2019-12-06 | 2019-12-06 | A Panoramic Image Stitching Method Based on Image Screening |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111008932B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113222817A (en) * | 2021-05-13 | 2021-08-06 | 哈尔滨工程大学 | Image feature extraction-based 12-channel video image splicing and image registration method |
CN115713700A (en) * | 2022-11-23 | 2023-02-24 | 广东省国土资源测绘院 | Method for collecting typical crop planting samples in cooperation with open space |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101950426A (en) * | 2010-09-29 | 2011-01-19 | 北京航空航天大学 | Vehicle relay tracking method in multi-camera scene |
CN102169576A (en) * | 2011-04-02 | 2011-08-31 | 北京理工大学 | Quantified evaluation method of image mosaic algorithms |
KR101464218B1 (en) * | 2014-04-25 | 2014-11-24 | 주식회사 이오씨 | Apparatus And Method Of Processing An Image Of Panorama Camera |
CN107274346A (en) * | 2017-06-23 | 2017-10-20 | 中国科学技术大学 | Real-time panoramic video splicing system |
CN109658370A (en) * | 2018-11-29 | 2019-04-19 | 天津大学 | Image split-joint method based on mixing transformation |
CN109741240A (en) * | 2018-12-25 | 2019-05-10 | 常熟理工学院 | A Multiplane Image Mosaic Method Based on Hierarchical Clustering |
- 2019-12-06: CN application CN201911245280.1A (granted as CN111008932B; status: not active, Expired - Fee Related)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101950426A (en) * | 2010-09-29 | 2011-01-19 | 北京航空航天大学 | Vehicle relay tracking method in multi-camera scene |
CN102169576A (en) * | 2011-04-02 | 2011-08-31 | 北京理工大学 | Quantified evaluation method of image mosaic algorithms |
KR101464218B1 (en) * | 2014-04-25 | 2014-11-24 | 주식회사 이오씨 | Apparatus And Method Of Processing An Image Of Panorama Camera |
CN107274346A (en) * | 2017-06-23 | 2017-10-20 | 中国科学技术大学 | Real-time panoramic video splicing system |
CN109658370A (en) * | 2018-11-29 | 2019-04-19 | 天津大学 | Image split-joint method based on mixing transformation |
CN109741240A (en) * | 2018-12-25 | 2019-05-10 | 常熟理工学院 | A Multiplane Image Mosaic Method Based on Hierarchical Clustering |
Non-Patent Citations (3)
Title |
---|
TAUSSKY O,ZASSENHAUS H.: "On the similarity transformation between a matrix and its transpose", 《PACIFIC JOURNAL OF MATHEMATICS》 * |
XIA MENGHAN,ET AL: "Globally consistent alignment for planar mosaicking via topology analysis", 《PATTERN RECOGNITION》 * |
常伟等: "一种改进的快速全景图像拼接算法", 《电子测量技术》 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113222817A (en) * | 2021-05-13 | 2021-08-06 | 哈尔滨工程大学 | Image feature extraction-based 12-channel video image splicing and image registration method |
CN115713700A (en) * | 2022-11-23 | 2023-02-24 | 广东省国土资源测绘院 | Method for collecting typical crop planting samples in cooperation with open space |
CN115713700B (en) * | 2022-11-23 | 2023-07-28 | 广东省国土资源测绘院 | Air-ground cooperative typical crop planting sample collection method |
Also Published As
Publication number | Publication date |
---|---|
CN111008932B (en) | 2021-05-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108921781B (en) | Depth-based optical field splicing method | |
JP5778237B2 (en) | Backfill points in point cloud | |
CN112435325A (en) | VI-SLAM and depth estimation network-based unmanned aerial vehicle scene density reconstruction method | |
CN110223383A (en) | A kind of plant three-dimensional reconstruction method and system based on depth map repairing | |
WO2021004416A1 (en) | Method and apparatus for establishing beacon map on basis of visual beacons | |
CN109064404A (en) | It is a kind of based on polyphaser calibration panorama mosaic method, panoramic mosaic system | |
CN101419709B (en) | Plane target drone characteristic point automatic matching method for demarcating video camera | |
CN115719407B (en) | Large-scale aerial image-oriented distributed multi-view three-dimensional reconstruction method | |
CN113838191A (en) | A 3D reconstruction method based on attention mechanism and monocular multi-view | |
CN106846416A (en) | Unit beam splitting bi-eye passiveness stereo vision Accurate Reconstruction and subdivision approximating method | |
CN111008932B (en) | A Panoramic Image Stitching Method Based on Image Screening | |
CN105701770B (en) | A kind of human face super-resolution processing method and system based on context linear model | |
CN110544202A (en) | A parallax image stitching method and system based on template matching and feature clustering | |
CN112243518A (en) | Method and device for acquiring depth map and computer storage medium | |
CN113850293A (en) | Localization method based on joint optimization of multi-source data and direction priors | |
CN111461008B (en) | Unmanned aerial vehicle aerial photographing target detection method combined with scene perspective information | |
Li et al. | MODE: Multi-view omnidirectional depth estimation with 360∘ cameras | |
CN115456870A (en) | Multi-image splicing method based on external parameter estimation | |
Cui et al. | Tracks selection for robust, efficient and scalable large-scale structure from motion | |
CN113379899B (en) | Automatic extraction method for building engineering working face area image | |
CN113298871B (en) | Map generation method, positioning method, system thereof, and computer-readable storage medium | |
CN107256563B (en) | Underwater three-dimensional reconstruction system and method based on difference liquid level image sequence | |
CN114663789A (en) | A method for stitching aerial images of transmission line UAVs | |
CN113808273A (en) | A Disordered Incremental Sparse Point Cloud Reconstruction Method for Numerical Simulation of Ship Traveling Waves | |
Hu et al. | Refractive pose refinement: Generalising the geometric relation between camera and refractive interface |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
CF01 | Termination of patent right due to non-payment of annual fee | | Granted publication date: 20210525; Termination date: 20211206 |