CN114862672B - A fast image stitching method based on vector shape preserving transformation - Google Patents


Info

Publication number
CN114862672B
CN114862672B
Authority
CN
China
Prior art keywords
image
images
feature
parameters
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210340989.5A
Other languages
Chinese (zh)
Other versions
CN114862672A (en)
Inventor
贺霖
贺新国
李军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202210340989.5A priority Critical patent/CN114862672B/en
Publication of CN114862672A publication Critical patent/CN114862672A/en
Application granted granted Critical
Publication of CN114862672B publication Critical patent/CN114862672B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/32 Indexing scheme for image data processing or generation, in general, involving image mosaicing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Computational Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Algebra (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a fast image stitching method based on vector shape-preserving transformation, comprising: reading the original images to be stitched and denoising them; extracting the SIFT features of all images; purifying the matches to obtain the inliers between matching images; judging the matching relationship between images from the quantitative relationship between inliers and original feature points; constructing feature vectors from the image inliers so that the two types of parameters in the image transformation matrix can be computed step by step; computing initial values of the two types of parameters from the matched inliers; iteratively optimizing the rotation parameters of the transformation matrix using the constructed feature vectors; iteratively computing the translation parameters of the transformation matrix from the matched inliers and the optimized rotation parameters; computing the transformation matrix of each image from the step-by-step optimized rotation and translation parameters; and obtaining the final stitched image. The invention significantly improves stitching quality while effectively reducing the time required to stitch multiple images, so that it can meet industrial real-time stitching requirements.

Description

A fast image stitching method based on vector shape-preserving transformation

Technical Field

The present invention relates to the field of image processing, and in particular to a fast image stitching method based on vector shape-preserving transformation.

Background Art

Image stitching is a technique that fuses two or more partially overlapping local observations into a single wide-angle, high-resolution image covering the whole observed area. In practical applications, such as battlefield surveillance, image stitching has two main requirements. The first is speed: after a large number of images is captured, stitching must finish quickly so that a panorama of the target area is available in real time. The second is accuracy: the stitched image must be free of quality defects such as ghosting, look natural, and faithfully reflect the imaged area. Fast and accurate image stitching is therefore an essential requirement in practice. Image stitching consists of four main steps: image acquisition, image preprocessing, image registration, and image fusion. The most critical step is image registration, in which the transformation-matrix parameters between matching images, namely the rotation parameters and the translation parameters, are computed from the positions of the extracted feature points; the Bundle Adjustment method is then used to iteratively optimize all parameters of all images, and the images are geometrically aligned according to the resulting parameters. Using the iteratively optimized parameters improves the stitching result, but because this approach relies only on feature-point positions, all parameters must be computed simultaneously: the matrices involved in the optimization become very large and the process is time-consuming. Moreover, because the two kinds of parameters in the transformation matrix differ greatly in magnitude, they interfere with each other during optimization, and the stitched images can still be poorly registered.

It is therefore necessary to design a multi-image stitching method that is both computationally fast and accurate. The present invention is based on vector shape-preserving transformation: it uses vectors to separate the two types of parameters in the transformation matrix, which reduces the size of the optimization matrices and eliminates the mutual interference between parameters during optimization, so that an excellent stitching result is obtained while the requirement of fast stitching is met.

Summary of the Invention

To overcome the shortcomings and deficiencies of the prior art, the present invention provides a fast image stitching method based on vector shape-preserving transformation.

The invention speeds up the stitching of multiple images while obtaining a good stitching result, so that it meets the needs of practical industrial applications.

To achieve the above purpose, the present invention adopts the following technical solution:

A fast image stitching method based on vector shape-preserving transformation, comprising:

reading multiple images to be stitched and preprocessing them;

extracting and saving the SIFT feature points of each image;

purifying the SIFT feature points extracted from any two images, obtaining the feature inliers between the image pair, and computing the matching relationship of the image pair;

constructing feature vectors from the feature inliers;

computing, from the matching relationship of the matched images and the purified feature inliers, the transformation matrix between the two matched images, and further obtaining the initial values for the iterative optimization of the rotation and translation parameters of each image;

iteratively optimizing the rotation parameters of each image according to the feature vectors;

iteratively optimizing the translation parameters of each image according to the optimized rotation parameters and the inlier matching relationship;

computing the final transformation matrix from the optimized rotation and translation parameters;

obtaining the relative positions of all images from the computed transformation matrix of each image, and obtaining the final stitched image through the image fusion step.

Further, the preprocessing step is denoising.

Further, the SIFT feature points extracted from any two images are purified using the RANSAC algorithm, which removes the points without a match.

Further, obtaining the feature inliers between an image pair and computing the matching relationship of the pair is specifically as follows:

let the total number of extracted SIFT feature matching pairs be $n_f$, and let the number of inlier pairs obtained after purification with the RANSAC algorithm be $n_i$; if $n_i > 8 + 0.3\,n_f$, the two images match.
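The inequality above is a simple count-based predicate. As a minimal sketch (the function name `images_match` is ours, not from the patent), it can be written as:

```python
def images_match(n_f: int, n_i: int) -> bool:
    """Return True when the RANSAC inlier count n_i exceeds the
    threshold 8 + 0.3 * n_f, where n_f is the raw SIFT match count."""
    return n_i > 8 + 0.3 * n_f
```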

Further, constructing feature vectors from the feature inliers is specifically as follows:

within a single image, following the order in which the feature inliers are saved, vectors are constructed in sequence with the $k$-th point as the start and the $(k+1)$-th point as the end:

$$\vec{v}_k = p_{k+1} - p_k$$

where $p_k$ and $p_{k+1}$ are the $k$-th and $(k+1)$-th saved feature inliers, and $\vec{v}_k$ is the $k$-th constructed vector.
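As a sketch of this construction (the array layout and function name are our assumptions), consecutive differences of the saved inlier coordinates give the feature vectors:

```python
import numpy as np

def build_feature_vectors(inliers: np.ndarray) -> np.ndarray:
    """Given saved feature inliers as an (N, 2) array in storage order,
    return the (N-1, 2) array of vectors v_k = p_{k+1} - p_k."""
    return inliers[1:] - inliers[:-1]
```

Note that these difference vectors are invariant to the image's translation, which is what allows the rotation to be estimated separately from the translation later on.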

Further, obtaining the initial values for the iterative optimization of the rotation and translation parameters of each image is specifically as follows:

assume that a pair of feature inliers in any two matching images is $p_k = [x_k\ y_k]^T$ and $p'_k = [x'_k\ y'_k]^T$. The initial values of the rotation parameter $\theta$ and the translation parameter $T = [t_x\ t_y]^T$ of the transformation matrix are obtained by stacking the per-inlier blocks $a_k$, $b_k$ into the linear system

$$A\,[\cos\theta\ \ \sin\theta\ \ t_x\ \ t_y]^T = B$$

and solving it in the least-squares sense, where $A \in \mathbb{R}^{2N\times 4}$, $B \in \mathbb{R}^{2N\times 1}$, $N$ is the number of extracted inlier features, and the matrices $A$ and $B$ are assembled from the $a_k$ and $b_k$ computed from all feature inliers.
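A minimal sketch of this initialisation, assuming the stacked system uses the unknown vector [cos θ, sin θ, t_x, t_y] (our reconstruction; the patent's exact a_k, b_k layout is not shown in the source text):

```python
import numpy as np

def initial_rigid_params(src, dst):
    """Least-squares initial estimate of the rotation angle theta and
    translation T = [tx, ty] from matched inliers.

    src, dst: (N, 2) arrays of matched inlier coordinates, N >= 2.
    Each pair contributes two rows a_k of A and two entries b_k of B."""
    n = src.shape[0]
    A = np.zeros((2 * n, 4))
    B = np.zeros(2 * n)
    for k in range(n):
        x, y = src[k]
        A[2 * k] = [x, -y, 1.0, 0.0]
        A[2 * k + 1] = [y, x, 0.0, 1.0]
        B[2 * k], B[2 * k + 1] = dst[k]
    c, s, tx, ty = np.linalg.lstsq(A, B, rcond=None)[0]
    return np.arctan2(s, c), np.array([tx, ty])
```

For noise-free inlier pairs related by a pure rotation and translation, the least-squares solution recovers the parameters exactly.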

Further, iteratively optimizing the rotation parameters of each image according to the feature vectors is specifically as follows:

the error between two matching images $i$ and $j$ is defined as the sum of the norms of the differences between the images' internal feature vectors after the rotation transformation:

$$e_{ij} = \sum_{k \in V_{ij}} \left\| \vec{v}^{\,i}_k - R_{ij}\,\vec{v}^{\,j}_k \right\|$$

where $\vec{v}^{\,i}_k$ and $\vec{v}^{\,j}_k$ are the $k$-th matched feature vectors in images $i$ and $j$, $V_{ij}$ denotes all feature vectors constructed in images $i$ and $j$, and $R_{ij}$ is the rotation matrix between images $i$ and $j$.

The cumulative error over all images is the sum of the distances between the corresponding feature vectors of every image and its matching images after rotation:

$$E = \sum_{i=1}^{n} \sum_{j \in I(i)} e_{ij}$$

where $n$ is the number of images to be stitched and $I(i)$ denotes all images matching image $i$. All rotation matrices $R_{ij}$ are then computed by iterative optimization, yielding the rotation parameters $\theta_{ij}$.
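The patent optimises all $R_{ij}$ jointly. As a single-pair illustration only, the sketch below uses plain gradient descent in place of the joint Levenberg-Marquardt step, and the squared norm in place of the norm for a smooth objective (both our simplifications):

```python
import numpy as np

def refine_rotation(v_i, v_j, theta0, iters=50, lr=0.1):
    """Refine the rotation angle between images i and j by minimising
    sum_k || v_i[k] - R(theta) v_j[k] ||^2 over matched feature vectors.

    v_i, v_j: (N, 2) arrays of matched feature vectors."""
    theta = theta0
    n = len(v_i)
    for _ in range(iters):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        dR = np.array([[-s, -c], [c, -s]])        # derivative of R wrt theta
        r = v_i - v_j @ R.T                       # residuals, shape (N, 2)
        grad = -2.0 * np.sum(r * (v_j @ dR.T))    # d/dtheta of the squared error
        theta -= lr * grad / n
    return theta
```

Because the objective depends only on the difference vectors, the estimate is unaffected by any translation between the two images, which is the point of the vector construction.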

Further, iteratively optimizing the translation parameters of each image according to the optimized rotation parameters and the inlier matching relationship is specifically as follows:

the error between two matching images $i$ and $j$ is defined as the sum, after the optimized rotation, of the distances of all feature inliers after translation:

$$e_{ij} = \sum_{k \in P_{ij}} \left\| p^{\,i}_k - \left(\hat{R}_{ij}\,p^{\,j}_k + T_{ij}\right) \right\|$$

where $p^{\,i}_k$ and $p^{\,j}_k$ are the $k$-th matched inliers in images $i$ and $j$, $P_{ij}$ denotes all feature inliers of images $i$ and $j$, $\hat{R}_{ij}$ is the optimized rotation matrix between images $i$ and $j$, and $T_{ij}$ is the translation parameter between them. The cumulative error over all images is the sum of the distances, after translation, between the feature inliers of every image and those of its matching images:

$$E = \sum_{i=1}^{n} \sum_{j \in I(i)} e_{ij}$$

where $n$ is the number of images to be stitched and $I(i)$ denotes all images matching image $i$. All translation parameters $T_{ij}$ are then computed by iterative optimization.
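With the rotation already fixed, the single-pair translation that minimises the squared error has a closed form, T = mean(p_i − R̂ p_j). The patent instead refines all T_ij jointly by iterative optimisation, so the following is only an illustrative sketch:

```python
import numpy as np

def refine_translation(p_i, p_j, theta):
    """Closed-form translation minimising
    sum_k || p_i[k] - (R p_j[k] + T) ||^2 for one matched image pair,
    given the already-optimised rotation angle.

    p_i, p_j: (N, 2) arrays of matched feature inliers."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return np.mean(p_i - p_j @ R.T, axis=0)
```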

Further, the final transformation matrix is expressed as follows:

$$H_{ij} = \begin{bmatrix} \cos\theta_{ij} & -\sin\theta_{ij} & t_x \\ \sin\theta_{ij} & \cos\theta_{ij} & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

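Composing the two optimised parameter sets into a 3×3 homogeneous matrix can be sketched as follows (the rotation-plus-translation layout is our reconstruction of the matrix shown in the patent figure, which did not survive extraction):

```python
import numpy as np

def final_transform(theta, tx, ty):
    """Build the 3x3 homogeneous shape-preserving transform from the
    optimised rotation angle and translation parameters."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])
```

Applying the matrix to a homogeneous point [x, y, 1] first rotates it by theta and then translates it by [tx, ty].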
Further, the matrices $A$ and $B$ are assembled from the $a_k$ and $b_k$ computed from all feature inliers, specifically:

$$a_k = \begin{bmatrix} x_k & -y_k & 1 & 0 \\ y_k & x_k & 0 & 1 \end{bmatrix}, \qquad b_k = \begin{bmatrix} x'_k \\ y'_k \end{bmatrix}$$

Beneficial effects of the present invention:

(1) The method proposed by the present invention, which uses vectors instead of traditional feature points to compute the transformation matrix of each image during registration, allows the two kinds of parameters in the transformation matrix to be computed step by step and hence optimized step by step. The two resulting iterative-optimization matrices are significantly smaller than the traditional single matrix that optimizes all parameters simultaneously, which greatly reduces the amount of computation and markedly increases stitching speed;

(2) The same vector-based computation of each image's transformation matrix separates the two kinds of parameters in the matrix, eliminating the mutual interference that their large difference in magnitude causes during optimization. This improves the accuracy of the optimized transformation parameters and markedly improves the stitching result.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the stitching method based on vector shape-preserving transformation according to the present invention;

Figure 2(a) shows the original images to be stitched; Figure 2(b) shows the stitching result without optimization; Figure 2(c) shows the stitching result when the transformation matrix is optimized using traditional point features; and Figure 2(d) shows the stitching result of the method described in this embodiment;

Figure 3 is a flowchart of the present invention.

Detailed Description of the Embodiments

The present invention is described in further detail below with reference to the embodiment and the drawings, but the embodiments of the present invention are not limited thereto.

Embodiment

This embodiment provides a fast image stitching method based on vector shape-preserving transformation. Feature points are extracted from all images to be stitched, vector features are constructed from the feature points inside each image, and the two kinds of parameters in the image transformation matrix are computed separately, so that during iterative optimization they do not affect each other because of their large difference in magnitude. The matrix computation is therefore more accurate and the image stitching result is greatly improved. At the same time, because the parameter computations are separated, the iterative optimization can also be separated; the two resulting optimization matrices are much smaller than the original optimization matrix containing all parameters, and stitching is significantly faster.

As shown in Figures 1 and 3, the method includes the following steps:

S1: read all original images to be stitched, preprocess them, and remove noise;

S2: directly extract and save the SIFT feature points of each image;

S3: purify the directly extracted SIFT feature points with the RANSAC (random sample consensus) algorithm, removing mismatched feature points and keeping and saving the correctly matched inliers, and then judge the image matching relationship. Let the total number of directly extracted SIFT feature matching pairs be $n_f$ and the number of inliers obtained after RANSAC purification be $n_i$. If $n_i > 8 + 0.3\,n_f$, the two images are judged to match;

S4: construct feature vectors from the purified feature inliers. Within a single image, following the saved inlier order, each vector is constructed in sequence with the previous point as its start and the next point as its end:

$$\vec{v}_k = p_{k+1} - p_k$$

where $p_k$ and $p_{k+1}$ are the $k$-th and $(k+1)$-th saved feature inliers, and $\vec{v}_k$ is the $k$-th constructed vector;

S5: compute the transformation matrix between the two matched images from their matching relationship and the purified inlier information, providing initial values for the Bundle Adjustment iterative optimization. Assume that a pair of inliers in the two matched images is $p_k = [x_k\ y_k]^T$ and $p'_k = [x'_k\ y'_k]^T$. The initial values of the rotation parameter $\theta$ and the translation parameter $T = [t_x\ t_y]^T$ are obtained by solving, in the least-squares sense, the stacked linear system

$$A\,[\cos\theta\ \ \sin\theta\ \ t_x\ \ t_y]^T = B$$

where $A \in \mathbb{R}^{2N\times 4}$, $B \in \mathbb{R}^{2N\times 1}$, $N$ is the number of extracted inlier features, and the matrices $A$ and $B$ are assembled from the $a_k$ and $b_k$ computed from all feature inliers;

S6: iteratively optimize the rotation parameters according to the matching relationship of the constructed feature vectors. The specific computation is as follows.

The error between two matching images $i$ and $j$ is defined as the sum of the norms of the differences between the images' internal feature vectors after rotation:

$$e_{ij} = \sum_{k \in V_{ij}} \left\| \vec{v}^{\,i}_k - R_{ij}\,\vec{v}^{\,j}_k \right\|$$

where $\vec{v}^{\,i}_k$ and $\vec{v}^{\,j}_k$ are the $k$-th matched feature vectors in images $i$ and $j$, $V_{ij}$ denotes all feature vectors constructed in images $i$ and $j$, and $R_{ij}$ is the rotation matrix between images $i$ and $j$.

The cumulative error over all images is the sum of the distances between the corresponding feature vectors of every image and its matching images after rotation:

$$E = \sum_{i=1}^{n} \sum_{j \in I(i)} e_{ij}$$

where $n$ is the number of images to be stitched and $I(i)$ denotes all images matching image $i$. The Levenberg-Marquardt algorithm is then used to iteratively compute all rotation matrices $R_{ij}$, yielding the rotation parameters $\theta_{ij}$;

S7: based on the optimized rotation parameters and the inlier matching relationship between images, jointly and iteratively optimize the translation parameters of each image over the inliers of all matching images. The specific computation is as follows.

The error between two correctly matched images $i$ and $j$ is defined as the sum, after the optimized rotation, of the distances of all feature inliers after translation:

$$e_{ij} = \sum_{k \in P_{ij}} \left\| p^{\,i}_k - \left(\hat{R}_{ij}\,p^{\,j}_k + T_{ij}\right) \right\|$$

where $p^{\,i}_k$ and $p^{\,j}_k$ are the $k$-th matched inliers in images $i$ and $j$, $P_{ij}$ denotes all feature inliers of images $i$ and $j$, $\hat{R}_{ij}$ is the optimized rotation matrix between images $i$ and $j$, and $T_{ij}$ is the translation parameter between them. The cumulative error over all images is the sum of the distances, after translation, between the feature inliers of every image and those of its matching images:

$$E = \sum_{i=1}^{n} \sum_{j \in I(i)} e_{ij}$$

where $n$ is the number of images to be stitched and $I(i)$ denotes all images matching image $i$. The Levenberg-Marquardt algorithm is then used to iteratively compute all translation parameters $T_{ij}$;

S8: construct the final transformation matrix from the optimized rotation and translation parameters; its specific form is

$$H_{ij} = \begin{bmatrix} \cos\theta_{ij} & -\sin\theta_{ij} & t_x \\ \sin\theta_{ij} & \cos\theta_{ij} & t_y \\ 0 & 0 & 1 \end{bmatrix};$$

S9: compute the relative positions of all images from the computed transformation matrices, fuse the images with the average-value fusion algorithm, and obtain the final stitched image.
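The average-value fusion in S9 can be sketched as follows (grayscale canvases with 0 marking "no pixel" is an assumption of this sketch, not stated in the patent):

```python
import numpy as np

def average_blend(canvas_a, canvas_b):
    """Fuse two warped images placed on a shared canvas: average where
    both contribute a pixel, otherwise keep whichever one is present."""
    both = (canvas_a > 0) & (canvas_b > 0)
    return np.where(both, (canvas_a + canvas_b) / 2.0, canvas_a + canvas_b)
```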

Figure 2(a) shows the original images to be stitched, Figure 2(b) the stitching result without optimization, Figure 2(c) the stitching result when the transformation matrix is optimized using traditional feature points, and Figure 2(d) the stitching result of the method described in this embodiment. The method that iteratively computes the transformation matrix from traditional feature points has a large registration error, shows obvious ghosting and blur, produces poor stitching quality, and runs for a long time, so it cannot meet the needs of actual industrial use. In contrast, the step-by-step iterative parameter optimization based on feature vectors proposed in this embodiment yields a good stitching result with a small registration error, and the time required is markedly lower than that of the method that simultaneously optimizes all parameters, showing that this embodiment fits practical application needs better than existing algorithms.

The above embodiment is a preferred implementation of the present invention, but the implementations of the present invention are not limited to it. Any other change, modification, substitution, combination, or simplification made without departing from the spirit and principles of the present invention shall be an equivalent replacement and falls within the protection scope of the present invention.

Claims (7)

1. A fast image stitching method based on vector shape-preserving transformation, characterized by comprising:

reading a plurality of images to be stitched and preprocessing them;

extracting the SIFT feature points of each image separately and saving them;

purifying the SIFT feature points extracted between any two images to obtain the feature inliers of the image pair, and computing the matching relationship between the image pair;

constructing feature vectors from the feature inliers;

computing, from the matching relationship of the matched images and the purified feature inliers, the transformation matrix between the two matched images, and further obtaining the initial values for the iterative optimization of the rotation parameters and translation parameters of each image;

iteratively optimizing the rotation transformation parameters of each image according to the feature vectors;

iteratively optimizing the translation transformation parameters of each image according to the optimized rotation transformation parameters and the feature-inlier matching relationships;

computing the final transformation matrix from the optimized rotation and translation transformation parameters;

obtaining the relative positions of all images from the computed transformation matrix of each image, and obtaining the final stitched image through an image fusion step;

wherein the feature vectors are constructed from the feature inliers as follows: within a single image, following the order in which the feature inliers are saved, vectors are constructed in sequence with the k-th point as the start point and the (k+1)-th point as the end point:

v_k = p_{k+1} - p_k,

where p_k and p_{k+1} are the k-th and (k+1)-th saved feature inliers and v_k is the k-th constructed vector;

wherein the initial values for the iterative optimization of the rotation and translation parameters of each image are obtained as follows: let a pair of feature inliers in any two matched images be p_k = [x_k, y_k]^T and p'_k = [x'_k, y'_k]^T; the initial values of the rotation parameter θ and the translation parameter T = [t_x, t_y]^T of the transformation matrix are computed by solving the linear least-squares system A [a, b, t_x, t_y]^T = B with a = cos θ and b = sin θ, where A ∈ R^(2N×4), B ∈ R^(2N×1), N is the number of extracted feature inliers, and the matrices A and B are assembled from the blocks a_k, b_k computed from all feature inliers;

wherein the rotation transformation parameters of each image are iteratively optimized according to the feature vectors as follows: the error between two matched images i, j is defined as the sum of the norms of the differences between the images' internal feature vectors after the rotation transformation:

e_ij = Σ_{k ∈ V(i,j)} || R_ij v_k^(i) - v_k^(j) ||,

where v_k^(i) and v_k^(j) are the k-th matched feature vectors in images i and j, V(i,j) denotes all feature vectors constructed between images i and j, and R_ij is the rotation transformation matrix between images i and j; the cumulative error over all images is the sum of the distances between the corresponding feature vectors of every image and its matching images after the rotation transformation:

E = Σ_{i=1}^{n} Σ_{j ∈ I(i)} e_ij,

where n is the number of images to be stitched and I(i) denotes all images matching image i; all rotation transformation matrices R_ij are then iteratively optimized to obtain the rotation parameters θ_ij.

2. The fast image stitching method according to claim 1, characterized in that the preprocessing step is denoising.

3. The fast image stitching method according to claim 1, characterized in that the purification of the SIFT feature points extracted between any two images uses the RANSAC algorithm to remove unmatched points.

4. The fast image stitching method according to claim 3, characterized in that the feature inliers of an image pair are obtained and the matching relationship between the image pair is computed as follows: let the total number of extracted SIFT feature matching pairs be n_f and the number of feature inlier pairs obtained after RANSAC purification be n_i; if n_i > 8 + 0.3·n_f, the two images match.

5. The fast image stitching method according to claim 1, characterized in that the final transformation matrix is computed from the optimized rotation and translation transformation parameters as follows: the error between two matched images i, j is defined as the sum of the distances of all feature inliers after the optimized rotation transformation followed by the translation transformation:

e_ij = Σ_{k ∈ P(i,j)} || R*_ij p_k^(i) + T_ij - p_k^(j) ||,

where p_k^(i) and p_k^(j) are the k-th matched inliers in images i and j, P(i,j) denotes all feature inliers of images i and j, R*_ij is the optimized rotation transformation matrix between images i and j, and T_ij is the translation transformation parameter between images i and j; the cumulative error over all images is the sum of the distances between the feature inliers of every image and its matching images after the translation transformation:

E = Σ_{i=1}^{n} Σ_{j ∈ I(i)} e_ij,

where n is the number of images to be stitched and I(i) denotes all images matching image i; all translation transformation parameters T_ij are then computed by iterative optimization.

6. The fast image stitching method according to claim 5, characterized in that the final transformation matrix is expressed as follows:

H_ij = [ cos θ_ij   -sin θ_ij   t_x
         sin θ_ij    cos θ_ij   t_y
         0           0          1   ].

7. The fast image stitching method according to claim 1, characterized in that the matrices A and B are assembled from the blocks a_k, b_k computed from all feature inliers, specifically:

a_k = [ x_k   -y_k   1   0
        y_k    x_k   0   1 ],   b_k = [ x'_k
                                        y'_k ],

with A = [a_1; ...; a_N], B = [b_1; ...; b_N], and θ = arctan(b / a).
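As a concrete reading of the matching criterion in claim 4 (an image pair is accepted as a match when n_i > 8 + 0.3·n_f), the decision rule can be sketched in Python; the function name is illustrative, not from the patent.

```python
def images_match(n_inliers: int, n_matches: int) -> bool:
    """Claim 4 decision rule: a pair of images is declared a match when the
    RANSAC inlier count exceeds 8 + 0.3 * (total SIFT matching pairs)."""
    return n_inliers > 8 + 0.3 * n_matches

# A pair with 100 raw SIFT matches needs at least 39 inliers:
print(images_match(39, 100))  # -> True
print(images_match(38, 100))  # -> False
```

The constant offset of 8 guards against pairs with very few raw matches, where even a high inlier ratio is statistically meaningless.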
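The vector construction v_k = p_{k+1} - p_k and the least-squares initialization of θ and T = [t_x, t_y]^T from claims 1 and 7 can be sketched as follows. The parameterization a = cos θ, b = sin θ matches the stated dimensions A ∈ R^(2N×4), B ∈ R^(2N×1), but the exact block layout of a_k, b_k is an assumption, since the patent's formula images are not reproduced in the text.

```python
import numpy as np

def build_vectors(points):
    """Claim 1: chain consecutive saved inliers into vectors v_k = p_{k+1} - p_k."""
    pts = np.asarray(points, dtype=float)
    return pts[1:] - pts[:-1]

def initial_rigid_params(src, dst):
    """Least-squares initial values for rotation theta and translation [tx, ty]
    from N matched inlier pairs: stack per-point blocks a_k, b_k into A (2N x 4)
    and B (2N x 1), then solve A [a, b, tx, ty]^T = B with a = cos(theta),
    b = sin(theta) (assumed parameterization)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    N = len(src)
    A = np.zeros((2 * N, 4))
    # x-equation per point: a*x - b*y + tx = x'
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(N), np.zeros(N)])
    # y-equation per point: b*x + a*y + ty = y'
    A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(N), np.ones(N)])
    B = dst.reshape(-1)  # interleaved x'_1, y'_1, x'_2, y'_2, ...
    a, b, tx, ty = np.linalg.lstsq(A, B, rcond=None)[0]
    return np.arctan2(b, a), np.array([tx, ty])
```

For a point set related by a pure rotation plus translation, the recovered θ and T reproduce the generating motion exactly, which is why they serve well as initial values for the subsequent iterative refinement.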
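The rotation objective of claim 1 (e_ij = Σ_k ||R_ij v_k^(i) - v_k^(j)||, accumulated into E over all matching pairs) can be sketched with a toy refiner. The patent does not name a specific optimizer, so the bracketed search below is a stand-in, and the closed-form translation step assumes squared distances rather than the claim's sum of norms; both are labeled assumptions.

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix R(theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def rotation_error(theta, v_i, v_j):
    """Claim 1 pairwise error: sum_k || R(theta) v_k^(i) - v_k^(j) ||."""
    return np.linalg.norm(v_i @ rot(theta).T - v_j, axis=1).sum()

def refine_theta(v_i, v_j, theta0, span=0.1, iters=30):
    """Bracketed grid search around the least-squares initial value theta0
    (a stand-in for the patent's unspecified iterative optimization)."""
    lo, hi = theta0 - span, theta0 + span
    best = theta0
    for _ in range(iters):
        cand = np.linspace(lo, hi, 9)
        best = cand[np.argmin([rotation_error(t, v_i, v_j) for t in cand])]
        half = (hi - lo) / 8          # shrink the bracket around the best grid point
        lo, hi = best - half, best + half
    return best

def translation_estimate(theta, p_i, p_j):
    """Closed-form translation for a fixed rotation, assuming squared distances
    (the claim minimizes a sum of norms; the mean residual is a sketch)."""
    return (p_j - p_i @ rot(theta).T).mean(axis=0)
```

Because the vectors v_k are differences of inlier positions, the translation cancels out of the rotation objective, which is what lets the two parameter groups be optimized in separate stages as the claims describe.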
CN202210340989.5A 2022-04-02 2022-04-02 A fast image stitching method based on vector shape preserving transformation Active CN114862672B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210340989.5A CN114862672B (en) 2022-04-02 2022-04-02 A fast image stitching method based on vector shape preserving transformation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210340989.5A CN114862672B (en) 2022-04-02 2022-04-02 A fast image stitching method based on vector shape preserving transformation

Publications (2)

Publication Number Publication Date
CN114862672A CN114862672A (en) 2022-08-05
CN114862672B true CN114862672B (en) 2024-04-02

Family

ID=82628688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210340989.5A Active CN114862672B (en) 2022-04-02 2022-04-02 A fast image stitching method based on vector shape preserving transformation

Country Status (1)

Country Link
CN (1) CN114862672B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215770A (en) * 2020-10-10 2021-01-12 成都数之联科技有限公司 Image processing method, system, device and medium
CN113658041A (en) * 2021-07-23 2021-11-16 华南理工大学 Image fast splicing method based on multi-image feature joint matching
CN114219706A (en) * 2021-11-08 2022-03-22 华南理工大学 A fast image stitching method based on grid partition feature point reduction

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11379688B2 (en) * 2017-03-16 2022-07-05 Packsize Llc Systems and methods for keypoint detection with convolutional neural networks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215770A (en) * 2020-10-10 2021-01-12 成都数之联科技有限公司 Image processing method, system, device and medium
CN113658041A (en) * 2021-07-23 2021-11-16 华南理工大学 Image fast splicing method based on multi-image feature joint matching
CN114219706A (en) * 2021-11-08 2022-03-22 华南理工大学 A fast image stitching method based on grid partition feature point reduction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Adaptive evolutionary point cloud stitching algorithm based on color information; Zou Li; Application Research of Computers; 2019-01-31; Vol. 36, No. 1; pp. 303-307 *

Also Published As

Publication number Publication date
CN114862672A (en) 2022-08-05

Similar Documents

Publication Publication Date Title
CN111080724B (en) Fusion method of infrared light and visible light
CN110211043B (en) A Registration Method Based on Grid Optimization for Panoramic Image Stitching
CN107918927B (en) A fast image stitching method with matching strategy fusion and low error
CN105205781B (en) Transmission line of electricity Aerial Images joining method
CN104966270B (en) A kind of more image split-joint methods
WO2021098081A1 (en) Trajectory feature alignment-based multispectral stereo camera self-calibration algorithm
CN110969667B (en) Multispectral Camera Extrinsic Self-Correction Algorithm Based on Edge Feature
WO2021098083A1 (en) Multispectral camera dynamic stereo calibration algorithm based on salient feature
CN106447601B (en) Unmanned aerial vehicle remote sensing image splicing method based on projection-similarity transformation
CN111553939B (en) An Image Registration Algorithm for Multi-camera Cameras
CN103595981B (en) Based on the color filter array image demosaicing method of non-local low rank
CN101540046A (en) Panoramagram montage method and device based on image characteristics
CN105761233A (en) FPGA-based real-time panoramic image mosaic method
CN111553841B (en) Real-time video splicing method based on optimal suture line updating
CN106910208A (en) A kind of scene image joining method that there is moving target
CN108257089A (en) A kind of method of the big visual field video panorama splicing based on iteration closest approach
CN104240229A (en) Self-adaptation polarline correcting method based on infrared binocular camera
CN110136090A (en) Robust Elastic Model UAV Image Stitching Method with Local Preserving Registration
CN111127353A (en) A high-dynamic image de-ghosting method based on block registration and matching
CN114972022A (en) A fusion hyperspectral super-resolution method and system based on unaligned RGB images
CN117391942A (en) A hyperspectral image stitching method based on new spectral SIFT features
CN110580715B (en) Image alignment method based on illumination constraint and grid deformation
CN109598675B (en) Splicing method of multiple repeated texture images
CN114862672B (en) A fast image stitching method based on vector shape preserving transformation
CN115631094A (en) Real-time image mosaic method of UAV based on spherical correction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant