CN114862672A - Image fast splicing method based on vector shape preserving transformation - Google Patents

Image fast splicing method based on vector shape preserving transformation Download PDF

Info

Publication number
CN114862672A
CN114862672A (application CN202210340989.5A)
Authority
CN
China
Prior art keywords
image
images
points
feature
transformation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210340989.5A
Other languages
Chinese (zh)
Other versions
CN114862672B (en
Inventor
贺霖 (He Lin)
贺新国 (He Xinguo)
李军 (Li Jun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202210340989.5A priority Critical patent/CN114862672B/en
Publication of CN114862672A publication Critical patent/CN114862672A/en
Application granted granted Critical
Publication of CN114862672B publication Critical patent/CN114862672B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/32Indexing scheme for image data processing or generation, in general involving image mosaicing


Abstract

The invention discloses an image fast splicing method based on vector shape-preserving transformation. The method reads the original images to be spliced and denoises them as preprocessing; extracts SIFT features from all images; purifies the matches to obtain inlier points between matched images; judges the image matching relation from the relation between the number of inliers and the number of original feature points; constructs feature vectors from the image inliers and calculates the two types of parameters in the image transformation matrix step by step; calculates initial values of the two types of parameters from the matched inliers; iteratively optimizes the rotation parameters of the transformation matrix using the constructed feature vectors; iteratively calculates the translation parameters of the transformation matrix from the matched inliers and the optimized rotation parameters; calculates the transformation matrix of each image from the step-optimized rotation and translation parameters; and obtains the final splicing result. The invention significantly improves splicing quality while effectively reducing the time required to splice multiple pictures, so that it can meet industrial real-time splicing requirements.

Description

Image fast splicing method based on vector shape preserving transformation
Technical Field
The invention relates to the field of image processing, in particular to an image fast splicing method based on vector shape-preserving transformation.
Background
Image stitching is a technology that fuses two or more local observation pictures with overlapping regions into a wide-view-angle, high-resolution picture covering the whole observation region. In practical application scenes, such as battlefield monitoring tasks, image splicing has two specific requirements. The first is rapidity: after a large number of pictures are captured, splicing must complete quickly, producing a panoramic image of the target area in real time. The second is accuracy: the spliced image must be free of imaging-quality problems such as double images and ghosting, look natural, and truly reflect the imaged area. Rapid and accurate image stitching is therefore an extremely important requirement in practical applications. Image stitching mainly comprises four steps: image acquisition, image preprocessing, image registration, and image fusion. The most critical step is image registration, in which the transformation-matrix parameters between matched images, namely the rotation parameters and translation parameters, are calculated from the position information of the extracted image feature points; all parameters of all images are then iteratively optimized with the Bundle Adjustment method, and the geometric alignment of the images is achieved according to these parameters.
Adopting the iteratively optimized parameters improves the splicing effect, but because only feature-point position information is used, all parameters must be calculated simultaneously, so the optimization matrix becomes excessively large and the computation is seriously time-consuming; moreover, because the two types of parameters in the transformation matrix differ greatly in magnitude, they interfere with each other during optimization, and the spliced images may still be poorly registered.
Therefore, a multi-image stitching method that is both fast and accurate is needed. The invention is based on a vector shape-preserving transformation: it uses vectors to separate the two types of parameters in the transformation matrix, reduces the scale of the optimization matrices, and eliminates the mutual interference between the parameters during optimization, so that an excellent splicing effect is obtained while meeting the requirement of quick splicing.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a vector shape retaining transformation-based image fast splicing method.
The invention can accelerate the splicing speed of a plurality of images on the premise of obtaining better splicing effect, so that the invention meets the requirement of practical industrial application.
In order to achieve the purpose, the invention adopts the following technical scheme:
a fast image splicing method based on vector shape preserving transformation comprises the following steps:
reading a plurality of images to be spliced and preprocessing the images;
respectively extracting and storing SIFT feature points of each image;
extracting SIFT feature points between any two images, purifying the SIFT feature points to obtain feature interior points between image pairs, and calculating the matching relation between the image pairs;
constructing a feature vector according to the feature interior points;
calculating a transformation matrix between the two matched images according to the matching relationship of the matched images and the purified characteristic interior points, and further obtaining an iterative optimization initial value of a rotation parameter and a translation parameter of each image;
iteratively optimizing the rotation transformation parameters of each image according to the feature vectors;
iteratively optimizing the translation transformation parameters of each image according to the optimized rotation transformation parameters and the feature interior point matching relation;
calculating a final transformation matrix according to the optimized rotation transformation parameters and translation transformation parameters;
and obtaining the relative positions of all the images according to the calculated transformation matrix of each image, and obtaining the final spliced image through an image fusion step.
Further, the preprocessing step is denoising processing.
Further, the SIFT feature points extracted between any two images are purified by using the RANSAC algorithm, and unmatched points are removed.
Further, obtaining feature interior points between the image pairs, and calculating a matching relationship between the image pairs, specifically:
the total number of SIFT feature matching pairs extracted by the method is n f The number of characteristic interior point pairs obtained after the characteristic interior point pairs are purified by the RANSAC algorithm is n i If n is i >8+0.3·n f Then the two images match.
Further, the constructing a feature vector according to the feature interior points specifically includes:
in a single image, according to the storage order of the feature inliers, taking the k-th point as the start point and the (k+1)-th point as the end point, vectors are constructed in sequence:

v_k = p_{k+1} − p_k

where p_k and p_{k+1} are the k-th and (k+1)-th inliers among the saved feature inliers, and v_k is the k-th constructed vector;
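The vector construction is a difference of consecutive saved inliers; a minimal numpy sketch (function name hypothetical):

```python
import numpy as np

def build_feature_vectors(inliers: np.ndarray) -> np.ndarray:
    """Given a (K, 2) array of feature inliers in their saved order,
    return the K-1 vectors v_k = p_{k+1} - p_k."""
    return inliers[1:] - inliers[:-1]
```

Because each vector is a difference of two points, any common translation of the image cancels out, which is what lets the rotation parameters be optimized independently later.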
Further, obtaining the initial values for iterative optimization of the rotation parameter and translation parameter of each image specifically comprises:
suppose a pair of matched feature inliers in any two matched images is p_k^i = [x_k^i, y_k^i]^T and p_k^j = [x_k^j, y_k^j]^T; the initial values of the rotation parameter θ and translation parameter T = [t_x, t_y]^T of the transformation matrix are calculated as follows:
p_k^i = R·p_k^j + T, with R = [[cos θ, −sin θ], [sin θ, cos θ]]

a_k·ξ = b_k, where ξ = [cos θ, sin θ, t_x, t_y]^T

A·ξ = B

ξ = (A^T A)^{−1} A^T B

θ = arctan2(sin θ, cos θ), T = [t_x, t_y]^T

wherein A ∈ R^{2N×4}, B ∈ R^{2N×1}, N is the number of extracted image feature inliers, and the matrices A and B are assembled from the a_k, b_k calculated for all feature inliers.
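A minimal numpy sketch of this linear least-squares initialization, under the assumption (suggested by A ∈ R^{2N×4}) that the unknown vector stacks cos θ, sin θ, t_x, t_y (function name hypothetical):

```python
import numpy as np

def initial_params(p_j: np.ndarray, p_i: np.ndarray):
    """Estimate initial rotation theta and translation T mapping points p_j
    (N, 2) onto matched points p_i (N, 2) via linear least squares."""
    N = len(p_j)
    A = np.zeros((2 * N, 4))
    B = np.zeros(2 * N)
    # Each inlier pair contributes two rows: x_i = c*x_j - s*y_j + tx,
    #                                        y_i = s*x_j + c*y_j + ty.
    A[0::2] = np.c_[p_j[:, 0], -p_j[:, 1], np.ones(N), np.zeros(N)]
    A[1::2] = np.c_[p_j[:, 1],  p_j[:, 0], np.zeros(N), np.ones(N)]
    B[0::2] = p_i[:, 0]
    B[1::2] = p_i[:, 1]
    c, s, tx, ty = np.linalg.lstsq(A, B, rcond=None)[0]
    return np.arctan2(s, c), np.array([tx, ty])
```

These values only seed the subsequent iterative optimization; they are not the final transformation parameters.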
Further, the iteratively optimizing the rotation transformation parameters of each image according to the feature vector specifically includes:
defining the error between two matched pictures i, j as the sum of the norms of the differences between the feature vectors of the images after rotation transformation, calculated as:

e_ij = Σ_{k=1}^{K_ij} ‖ v_k^i − R_ij · v_k^j ‖

where v_k^i and v_k^j are the k-th pair of matched feature vectors in pictures i and j, K_ij is the number of matched feature vectors constructed in pictures i and j, and R_ij is the rotation transformation matrix between pictures i and j;
the accumulated error of the whole image set is the sum, over all images and the images matched to them, of the distances between corresponding feature vectors after rotation transformation, calculated as:

E = Σ_{i=1}^{n} Σ_{j∈I(i)} e_ij

where n is the number of images to be spliced and I(i) denotes all images matched with image i; all rotation transformation matrices R_ij are then calculated by iterative optimization, giving the rotation parameters θ_ij.
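The patent optimizes all rotation matrices jointly by iteration; to illustrate why vectors decouple rotation from translation, the rotation minimizing the alignment error for a single image pair has a closed form (a sketch with a hypothetical helper, not the patent's joint optimizer):

```python
import numpy as np

def pairwise_rotation(v_j: np.ndarray, v_i: np.ndarray) -> float:
    """Angle theta minimizing sum_k ||v_k^i - R(theta) v_k^j||^2 for
    matched 2-D vector sets v_j, v_i of shape (K, 2)."""
    dots = np.sum(v_j * v_i)                                  # sum of dot products
    crosses = np.sum(v_j[:, 0] * v_i[:, 1] - v_j[:, 1] * v_i[:, 0])  # 2-D cross
    return np.arctan2(crosses, dots)
```

Because the vectors contain no translation component, this subproblem involves only θ, which is the source of the reduced optimization scale claimed by the invention.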
Further, calculating a final transformation matrix according to the optimized rotation transformation parameters and translation transformation parameters, specifically:
defining the error between two matched pictures i, j as the sum of the distances of all feature inliers after translation transformation following the optimized rotation transformation, calculated as:

e_ij = Σ_{k=1}^{N_ij} ‖ p_k^i − (R*_ij · p_k^j + T_ij) ‖

where p_k^i and p_k^j are the k-th pair of matched inliers in pictures i and j, N_ij is the number of matched feature inliers of pictures i and j, R*_ij is the optimized rotation transformation matrix between pictures i and j, and T_ij is the translation transformation parameter between pictures i and j; the accumulated error of the whole image set is the sum of the distances of the feature inliers between all images and their matched images after translation transformation, calculated as:

E = Σ_{i=1}^{n} Σ_{j∈I(i)} e_ij

where n is the number of images to be spliced and I(i) denotes all images matched with image i; all translation transformation parameters T_ij are then calculated by iterative optimization.
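As with rotation, once R*_ij is fixed, the translation for a single image pair has a closed form, the mean residual (a sketch with a hypothetical helper, not the patent's joint optimizer):

```python
import numpy as np

def pairwise_translation(p_j: np.ndarray, p_i: np.ndarray, theta: float) -> np.ndarray:
    """With rotation theta fixed, the T minimizing
    sum_k ||p_k^i - (R p_k^j + T)||^2 is the mean of the residuals."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return np.mean(p_i - p_j @ R.T, axis=0)
```

This is the second, translation-only subproblem produced by the step-by-step separation; it never touches θ, so the two optimizations cannot interfere.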
Further, the final transformation matrix is represented as follows:
H = | cos θ*  −sin θ*  t_x* |
    | sin θ*   cos θ*  t_y* |
    |   0        0       1  |

where θ* and T* = [t_x*, t_y*]^T are the optimized rotation and translation parameters.
further, the matrix A and the matrix B are a calculated according to all the characteristic interior points k ,b k The combination is as follows:
Figure BDA0003579372950000048
the invention has the beneficial effects that:
(1) By using vectors instead of the traditional feature points to calculate the transformation matrix of each image during registration, the two types of parameters in the transformation matrix can be calculated step by step, and therefore optimized step by step; compared with the traditional iterative optimization that optimizes all parameters simultaneously, the two separated iterative optimization matrices are much smaller in scale, the amount of computation is greatly reduced, and the splicing speed is significantly improved;
(2) the same vector-based calculation separates the two types of parameters in the transformation matrix, eliminating the mutual interference caused by their large difference in magnitude during optimization, improving the accuracy of the optimized transformation parameters, and significantly improving the splicing effect.
Drawings
FIG. 1 is a schematic diagram of the vector shape preserving transform-based stitching method of the present invention;
fig. 2(a) is an original picture to be spliced, fig. 2(b) is an original non-optimized splicing effect diagram, fig. 2(c) is a splicing effect diagram of a transformation matrix calculated by using conventional point feature optimization, and fig. 2(d) is a splicing effect diagram of the method according to the embodiment;
fig. 3 is a flow chart of the operation of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited to these examples.
Examples
The embodiment provides an image fast splicing method based on vector shape-preserving transformation. It extracts the feature points of all images to be spliced, constructs vector features from those feature points, and calculates the two types of parameters in the image transformation matrix separately, so that the large difference between the parameters does not cause them to affect each other during iterative optimization; the matrix calculation result is therefore more accurate, and the image splicing effect is greatly improved. Meanwhile, because the parameter calculations are separated, the iterative optimization can also be separated; the two resulting optimization matrices are much smaller than the original optimization matrix containing all parameters, and the splicing speed is significantly increased.
As shown in fig. 1 and 3, the method comprises the following steps:
s1, reading all original pictures to be spliced, preprocessing the original pictures and removing noise;
s2, directly extracting and storing SIFT feature points of each image;
s3, directly extracting SIFT feature points through RANSAC (random sample consensus) algorithm, purifying, removing mismatching feature points, reserving and storing correctly matched feature interior points, and further judging image matching relationship. The SIFT characteristics obtained by direct extraction of the SIFT are assumedThe total number of matched pairs is n f The number of interior points obtained by purification by RANSAC algorithm is n i . If n is i >8+0.3·n f Then the matching of the two images can be judged;
s4, constructing a feature vector through the purified feature interior points, wherein the construction mode is that the vector is sequentially constructed by taking the front point as a starting point and the rear point as an end point in a single image according to the sequence of the stored interior points:
Figure BDA0003579372950000051
wherein,
Figure BDA0003579372950000052
respectively the k-th and k + 1-th inliers in the saved characteristic inliers,
Figure BDA0003579372950000053
to construct the resulting kth vector;
s5, calculating a transformation matrix between the two matched images according to the matching relationship of the matched images and the feature interior point information obtained after purification, providing an initial value for Bundle Adjustment iteration optimization, and assuming that one pair of interior points in the two matched images are respectively
Figure BDA0003579372950000061
And
Figure BDA0003579372950000062
transforming matrix rotation parameter theta and translation parameter T ═ T x t y ] T The specific calculation steps of the initial value are as follows:
Figure BDA0003579372950000063
Figure BDA0003579372950000064
Figure BDA0003579372950000065
Figure BDA0003579372950000066
Figure BDA0003579372950000067
wherein A ∈ R 2N×4 ,B∈R 2N×1 N is the feature quantity of the extracted image interior points, and the matrixes A and B are a calculated according to all the feature interior points k ,b k And (3) combining the components.
S6, according to the constructed feature vector matching relation, iteratively optimizing the rotation parameters, wherein the specific calculation steps are as follows:
defining the error between two matched pictures i, j as the sum of the norms of the differences between the feature vectors of the images after rotation transformation, calculated as:

e_ij = Σ_{k=1}^{K_ij} ‖ v_k^i − R_ij · v_k^j ‖

where v_k^i and v_k^j are the k-th pair of matched feature vectors in pictures i and j, K_ij is the number of matched feature vectors constructed in pictures i and j, and R_ij is the rotation transformation matrix between pictures i and j. The accumulated error of the whole image set is the sum, over all images and their matched images, of the distances between corresponding feature vectors after the rotation transformation matrix, calculated as:

E = Σ_{i=1}^{n} Σ_{j∈I(i)} e_ij

where n is the number of images to be spliced and I(i) denotes all images matched with image i. All rotation transformation matrices R_ij are then calculated by iterative optimization with the Levenberg-Marquardt algorithm, giving the rotation parameters θ_ij.
S7, iteratively optimizing the translation parameters T_ij of each image, combining the inliers between all matched images, according to the optimized rotation transformation parameters and the inlier matching relations between the images. The specific calculation steps are as follows:
defining the error between two correctly matched pictures i, j as the sum of the distances of all feature inliers after translation transformation following the optimized rotation transformation, calculated as:

e_ij = Σ_{k=1}^{N_ij} ‖ p_k^i − (R*_ij · p_k^j + T_ij) ‖

where p_k^i and p_k^j are the k-th pair of matched inliers in pictures i and j, N_ij is the number of matched feature inliers of pictures i and j, R*_ij is the optimized rotation transformation matrix between pictures i and j, and T_ij is the translation transformation parameter between pictures i and j. The accumulated error of the whole image set is the sum of the distances of the feature inliers between all images and their matched images after translation transformation, calculated as:

E = Σ_{i=1}^{n} Σ_{j∈I(i)} e_ij

where n is the number of images to be spliced and I(i) denotes all images matched with image i. All translation transformation parameters T_ij are then calculated by iterative optimization with the Levenberg-Marquardt algorithm.
S8, constructing the final transformation matrix from the rotation and translation parameters obtained by optimization, in the following form:

H = | cos θ*  −sin θ*  t_x* |
    | sin θ*   cos θ*  t_y* |
    |   0        0       1  |

where θ* and T* = [t_x*, t_y*]^T are the optimized rotation and translation parameters.
and S9, calculating the relative positions of all the images according to the calculated transformation matrix, and realizing image fusion according to an average value fusion algorithm to obtain the final spliced image.
Fig. 2(a) shows the original pictures to be spliced, fig. 2(b) the original non-optimized splicing result, fig. 2(c) the splicing result obtained by optimizing the transformation matrix with conventional feature points, and fig. 2(d) the splicing result of the method of this embodiment. The conventional feature-point-based iterative computation of the transformation matrix has large registration errors, obvious ghosting and blur, poor splicing quality, and long running time, and cannot meet practical industrial requirements. The method of this embodiment, which optimizes the parameters step by step based on feature vectors, achieves a good splicing effect with small image registration errors, and the required time is significantly lower than that of iteratively optimizing all parameters simultaneously; this embodiment therefore better satisfies practical application requirements than existing algorithms.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (10)

1. A method for quickly splicing images based on vector shape preserving transformation is characterized by comprising the following steps:
reading a plurality of images to be spliced and preprocessing the images;
respectively extracting and storing SIFT feature points of each image;
extracting SIFT feature points between any two images, purifying the SIFT feature points to obtain feature interior points between image pairs, and calculating the matching relation between the image pairs;
constructing a feature vector according to the feature interior points;
calculating a transformation matrix between the two matched images according to the matching relationship of the matched images and the purified characteristic interior points, and further obtaining an iterative optimization initial value of a rotation parameter and a translation parameter of each image;
iteratively optimizing the rotation transformation parameters of each image according to the feature vectors;
iteratively optimizing the translation transformation parameters of each image according to the optimized rotation transformation parameters and the feature interior point matching relation;
calculating a final transformation matrix according to the optimized rotation transformation parameters and translation transformation parameters;
and obtaining the relative positions of all the images according to the calculated transformation matrix of each image, and obtaining the final spliced image through an image fusion step.
2. The method for rapidly stitching images according to claim 1, wherein the preprocessing step is a denoising process.
3. The method as claimed in claim 1, wherein the SIFT feature points extracted between any two images are purified by using the RANSAC algorithm, and unmatched points are removed.
4. The method for rapidly stitching images according to claim 3, wherein feature interior points between image pairs are obtained, and the matching relationship between the image pairs is calculated, specifically:
the total number of SIFT feature matching pairs extracted by the method is n f The number of characteristic interior point pairs obtained after the characteristic interior point pairs are purified by the RANSAC algorithm is n i If n is i >8+0.3·n f Then the two images match.
5. The method for rapidly stitching images according to claim 4, wherein the constructing of the feature vector according to the feature interior points specifically comprises:
in a single image, according to the storage order of the feature inliers, taking the k-th point as the start point and the (k+1)-th point as the end point, vectors are constructed in sequence:

v_k = p_{k+1} − p_k

where p_k and p_{k+1} are the k-th and (k+1)-th inliers among the saved feature inliers, and v_k is the k-th constructed vector.
6. The method for rapidly stitching images according to claim 1, wherein the further obtaining of initial values of iterative optimization of rotation parameters and translation parameters of each image specifically comprises:
suppose a pair of matched feature inliers in any two matched images is p_k^i = [x_k^i, y_k^i]^T and p_k^j = [x_k^j, y_k^j]^T; the initial values of the rotation parameter θ and translation parameter T = [t_x, t_y]^T of the transformation matrix are calculated as follows:

p_k^i = R·p_k^j + T, with R = [[cos θ, −sin θ], [sin θ, cos θ]]

a_k·ξ = b_k, where ξ = [cos θ, sin θ, t_x, t_y]^T

A·ξ = B

ξ = (A^T A)^{−1} A^T B

θ = arctan2(sin θ, cos θ), T = [t_x, t_y]^T

wherein A ∈ R^{2N×4}, B ∈ R^{2N×1}, N is the number of extracted image feature inliers, and the matrices A and B are assembled from the a_k, b_k calculated for all feature inliers.
7. The method for rapidly stitching images according to claim 1, wherein the iterative optimization of the rotation transformation parameters of each image according to the feature vector specifically comprises:
defining the error between two matched pictures i, j as the sum of the norms of the differences between the feature vectors of the images after rotation transformation, calculated as:

e_ij = Σ_{k=1}^{K_ij} ‖ v_k^i − R_ij · v_k^j ‖

where v_k^i and v_k^j are the k-th pair of matched feature vectors in pictures i and j, K_ij is the number of matched feature vectors constructed in pictures i and j, and R_ij is the rotation transformation matrix between pictures i and j;
the accumulated error of the whole image set is the sum, over all images and their matched images, of the distances between corresponding feature vectors after rotation transformation, calculated as:

E = Σ_{i=1}^{n} Σ_{j∈I(i)} e_ij

where n is the number of images to be spliced and I(i) denotes all images matched with image i; all rotation transformation matrices R_ij are then calculated by iterative optimization, giving the rotation parameters θ_ij.
8. The method for rapidly stitching images according to claim 1, wherein a final transformation matrix is calculated according to the optimized rotation transformation parameters and translation transformation parameters, and specifically comprises:
defining the error between two matched images i, j as the sum of the distances of all matched feature interior points after the optimized rotation transformation followed by translation transformation, calculated as:

e_ij = Σ_k ‖ R̂_ij p_i^k + T_ij − p_j^k ‖

wherein p_i^k and p_j^k are the k-th pair of matched interior points in images i and j, P_i and P_j represent all feature interior points of images i and j, R̂_ij represents the optimized rotation transformation matrix between images i and j, and T_ij represents the translation transformation parameters between images i and j; the accumulated error over the whole image set is the sum of the distances between matched feature interior points of all images and their matched images after translation transformation, calculated as:

E = Σ_{i=1..n} Σ_{j ∈ I(i)} e_ij

wherein n represents the number of images to be stitched, I(i) represents all images matched with image i, and all translation transformation parameters T_ij are then computed by iterative optimization.
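With the rotation already fixed by claim 7, the per-pair translation sub-problem min_T Σ_k ‖R̂p_i^k + T − p_j^k‖² also has a closed form: the mean of the residuals. A sketch of that step (the function name is illustrative, not from the patent):

```python
import numpy as np

def pairwise_translation(pi, pj, R):
    """Closed-form T minimizing sum_k ||R pi_k + T - pj_k||^2.

    pi, pj: (K, 2) arrays of matched interior points; R: fixed 2x2 rotation.
    The optimum is the mean residual between the rotated source points
    and the target points.
    """
    return np.mean(pj - pi @ R.T, axis=0)
```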
9. The method for fast image stitching according to claim 8, wherein the final transformation matrix is represented as follows:
H_ij = [ R̂_ij  T_ij ]
       [  0ᵀ     1  ]
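Assuming the final transformation is the standard 3×3 homogeneous rigid transform combining the optimized rotation and translation (the original formula image is not recoverable), composing it is straightforward:

```python
import numpy as np

def compose_homogeneous(R, T):
    """Embed a 2x2 rotation R and a length-2 translation T into a
    3x3 homogeneous transformation matrix H = [[R, T], [0, 1]]."""
    H = np.eye(3)
    H[:2, :2] = R
    H[:2, 2] = T
    return H
```

Applying H to a point in homogeneous coordinates (x, y, 1) rotates it by R and shifts it by T.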
10. The method for fast image stitching according to claim 6, wherein the matrices A and B are assembled from the a_k and b_k computed for all feature interior points as follows:

A = [ a_1 ]        B = [ b_1 ]
    [ a_2 ]            [ b_2 ]
    [  ⋮  ]            [  ⋮  ]
    [ a_N ]            [ b_N ]
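The stacking in claim 10 turns the N per-point blocks into one over-determined linear system A x = B, which can be solved in the least-squares sense. The exact contents of each a_k (2×4) and b_k (2×1) come from claim 6 and are not reproduced here, so the sketch below treats them as opaque arrays of those assumed shapes:

```python
import numpy as np

def stack_and_solve(a_blocks, b_blocks):
    """Stack per-interior-point blocks a_k (2x4) and b_k (2x1) into
    A (2N x 4) and B (2N x 1), then solve A x = B by least squares."""
    A = np.vstack(a_blocks)   # (2N, 4)
    B = np.vstack(b_blocks)   # (2N, 1)
    x, *_ = np.linalg.lstsq(A, B, rcond=None)
    return A, B, x
```

With N ≥ 2 well-conditioned blocks the system is over-determined and the four transform parameters are recovered uniquely.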
CN202210340989.5A 2022-04-02 2022-04-02 Image rapid splicing method based on vector shape preserving transformation Active CN114862672B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210340989.5A CN114862672B (en) 2022-04-02 2022-04-02 Image rapid splicing method based on vector shape preserving transformation


Publications (2)

Publication Number Publication Date
CN114862672A true CN114862672A (en) 2022-08-05
CN114862672B CN114862672B (en) 2024-04-02

Family

ID=82628688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210340989.5A Active CN114862672B (en) 2022-04-02 2022-04-02 Image rapid splicing method based on vector shape preserving transformation

Country Status (1)

Country Link
CN (1) CN114862672B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180268256A1 (en) * 2017-03-16 2018-09-20 Aquifi, Inc. Systems and methods for keypoint detection with convolutional neural networks
CN112215770A (en) * 2020-10-10 2021-01-12 成都数之联科技有限公司 Image processing method, system, device and medium
CN113658041A (en) * 2021-07-23 2021-11-16 华南理工大学 Image fast splicing method based on multi-image feature joint matching
CN114219706A (en) * 2021-11-08 2022-03-22 华南理工大学 Image fast splicing method based on reduction of grid partition characteristic points


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZOU LI: "Adaptive evolutionary point cloud registration algorithm based on color information", Application Research of Computers, vol. 36, no. 1, 31 January 2019 (2019-01-31), pages 303-307 *

Also Published As

Publication number Publication date
CN114862672B (en) 2024-04-02

Similar Documents

Publication Publication Date Title
CN105205781B (en) Transmission line of electricity Aerial Images joining method
CN110111250B (en) Robust automatic panoramic unmanned aerial vehicle image splicing method and device
CN110310310B (en) Improved method for aerial image registration
WO2021098083A1 (en) Multispectral camera dynamic stereo calibration algorithm based on salient feature
CN115205489A (en) Three-dimensional reconstruction method, system and device in large scene
CN111553939B (en) Image registration algorithm of multi-view camera
CN108257089B (en) A method of the big visual field video panorama splicing based on iteration closest approach
CN102521816A (en) Real-time wide-scene monitoring synthesis method for cloud data center room
CN113538569B (en) Weak texture object pose estimation method and system
CN111553841B (en) Real-time video splicing method based on optimal suture line updating
CN108460792B (en) Efficient focusing stereo matching method based on image segmentation
CN109859137B (en) Wide-angle camera irregular distortion global correction method
CN110992263A (en) Image splicing method and system
CN111553845B (en) Quick image stitching method based on optimized three-dimensional reconstruction
CN110580715B (en) Image alignment method based on illumination constraint and grid deformation
CN113793266A (en) Multi-view machine vision image splicing method, system and storage medium
CN111127353A (en) High-dynamic image ghost removing method based on block registration and matching
CN116310131A (en) Three-dimensional reconstruction method considering multi-view fusion strategy
CN114463196A (en) Image correction method based on deep learning
CN111681271B (en) Multichannel multispectral camera registration method, system and medium
CN107330856B (en) Panoramic imaging method based on projective transformation and thin plate spline
CN109598675B (en) Splicing method of multiple repeated texture images
CN114862672B (en) Image rapid splicing method based on vector shape preserving transformation
CN115358927B (en) Image super-resolution reconstruction method combining space self-adaption and texture conversion
CN113658041B (en) Image rapid splicing method based on multi-image feature joint matching

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant