CN114862672B - Image rapid splicing method based on vector shape preserving transformation - Google Patents

Image rapid splicing method based on vector shape preserving transformation

Info

Publication number
CN114862672B
CN114862672B (application CN202210340989.5A)
Authority
CN
China
Prior art keywords
image
images
points
feature
transformation
Prior art date
Legal status
Active
Application number
CN202210340989.5A
Other languages
Chinese (zh)
Other versions
CN114862672A (en)
Inventor
He Lin (贺霖)
He Xinguo (贺新国)
Li Jun (李军)
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202210340989.5A priority Critical patent/CN114862672B/en
Publication of CN114862672A publication Critical patent/CN114862672A/en
Application granted granted Critical
Publication of CN114862672B publication Critical patent/CN114862672B/en
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Pure & Applied Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computational Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Algebra (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a rapid image stitching method based on vector shape-preserving transformation. The method reads the original images to be stitched and denoises them; extracts SIFT features from all images; purifies the matches to obtain the inliers between matched images; judges the image matching relationship from the ratio of inliers to original feature points; and computes the two groups of parameters in the image transformation matrix in separate stages by constructing feature vectors from the image inliers: initial values of both parameter groups are computed from the matched inliers, the rotation parameters of the transformation matrix are iteratively optimized using the constructed feature vectors, and the translation parameters are then iteratively computed from the matched inliers and the optimized rotation parameters. The transformation matrix of each image is assembled from the stage-wise optimized rotation and translation parameters, yielding the final stitched result. The invention markedly improves stitching quality and reduces the time needed to stitch multiple pictures, so that it meets industrial real-time stitching requirements.

Description

Image rapid splicing method based on vector shape preserving transformation
Technical Field
The invention relates to the technical field of image processing, in particular to a rapid image stitching method based on vector shape preserving transformation.
Background
Image stitching is a technique that fuses two or more partial observation pictures sharing an overlapping region into a wide-angle, high-resolution picture covering the entire observation area. In practical application scenes such as battlefield monitoring tasks, image stitching must satisfy two requirements. The first is speed: after a large number of pictures are captured, stitching must finish quickly so that a panorama of the target area can be produced in real time. The second is accuracy: the stitched image must be free of imaging-quality problems such as double images and ghosting, look natural, and faithfully reflect the imaged area. Rapid and accurate image stitching is therefore an essential requirement in practical applications. Image stitching comprises four main steps: image acquisition, image preprocessing, image registration, and image fusion. The most critical step is registration, in which the transformation-matrix parameters between matched images, namely the rotation and translation parameters, are computed from the positions of the extracted feature points, after which all parameters of all images are iteratively refined by Bundle Adjustment so that the images can be geometrically aligned from the resulting parameters. Iterative optimization improves the stitching result, but because the method is limited to feature-point position information, all parameters must be computed simultaneously; the optimization matrix becomes very large and time-consuming, and because the two parameter groups in the transformation matrix differ greatly in magnitude, they interfere with each other during optimization, leaving residual registration errors in the stitched images.
It is therefore necessary to design a fast and accurate multi-image stitching method. The vector shape-preserving approach uses vectors to separate the two groups of parameters in the transformation matrix, which shrinks the optimization matrices and eliminates the mutual interference between parameters during optimization, so an excellent stitching result can be obtained while still meeting the requirement of fast stitching.
Disclosure of Invention
In order to overcome the defects and shortcomings in the prior art, the invention provides a rapid image stitching method based on vector shape preserving transformation.
The invention accelerates the stitching of multiple images while achieving a better stitching result, so that it meets practical industrial application requirements.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a rapid image stitching method based on vector shape preserving transformation comprises the following steps:
reading a plurality of images to be spliced, and preprocessing the images;
extracting SIFT feature points of each image respectively and storing the SIFT feature points;
purifying SIFT feature points extracted between any two images, obtaining feature inner points between image pairs, and calculating a matching relationship between the image pairs;
constructing a feature vector according to the feature inner points;
according to the matching relation of the matched images and the purified characteristic inner points, calculating a transformation matrix between the two matched images, and further obtaining a rotation parameter and a translation parameter iteration optimization initial value of each image;
iteratively optimizing rotation transformation parameters of each image according to the feature vectors;
iteratively optimizing translation transformation parameters of each image according to the optimized rotation transformation parameters and the characteristic interior point matching relation;
calculating a final transformation matrix according to the optimized rotation transformation parameters and translation transformation parameters;
and obtaining the relative positions of all the images according to the calculated transformation matrix of each image, and obtaining the final spliced image through an image fusion step.
Further, the preprocessing step is denoising.
Further, the purification of the SIFT feature points extracted between any two images uses the RANSAC algorithm to remove mismatched points.
Further, obtaining the feature inliers between image pairs and determining the matching relationship between image pairs is specifically: let the total number of extracted SIFT feature matching pairs be $n_f$ and the number of inlier pairs retained after RANSAC purification be $n_i$; if $n_i > 8 + 0.3\,n_f$, the two images are judged to match.
Further, the feature vectors are constructed from the feature inliers, specifically: in a single image, following the storage order of the feature inliers, vectors are constructed in sequence with the $k$-th point as start point and the $(k+1)$-th point as end point:

$$\vec{v}_k = p_{k+1} - p_k,$$

where $p_k$ and $p_{k+1}$ are the $k$-th and $(k+1)$-th stored feature inliers and $\vec{v}_k$ is the resulting $k$-th vector.
further, the iterative optimization initial values of the rotation parameter and the translation parameter of each image are further obtained, specifically:
assume that a pair of characteristic inner points in any two matched images are respectivelyAnd-> Transformation matrix rotation parameter θ and translation parameter t= [ T ] x t y ] T The specific calculation steps of the initial value are as follows:
wherein A is E R 2N×4 ,B∈R 2N×1 N is the feature quantity of the extracted image interior points, and the matrix A and the matrix B are a calculated according to all the feature interior points k ,b k Is combined.
Further, the rotation transformation parameters of each image are iteratively optimized from the feature vectors, specifically: the error between two matched pictures $i$ and $j$ is defined as the sum of the norms of the differences between the images' internal feature vectors after rotation transformation:

$$e_{ij} = \sum_{k} \left\| \vec{v}_k^{\,i} - R_{ij}\,\vec{v}_k^{\,j} \right\|,$$

where $\vec{v}_k^{\,i}$ and $\vec{v}_k^{\,j}$ are the $k$-th matched feature vectors in pictures $i$ and $j$, the sum runs over all feature vectors constructed in pictures $i$ and $j$, and $R_{ij}$ is the rotation transformation matrix between pictures $i$ and $j$. The accumulated error of the whole image set is the sum, over all images and their matched images, of these distances after the rotation transformation matrix is applied:

$$E_R = \sum_{i=1}^{n} \sum_{j \in I(i)} e_{ij},$$

where $n$ is the number of images to be stitched and $I(i)$ denotes all images matching image $i$. All rotation transformation matrices $R_{ij}$ are then iteratively optimized to obtain the rotation parameters $\theta_{ij}$.
Further, the final transformation matrix is calculated from the optimized rotation transformation parameters and the translation transformation parameters, specifically: the error between two matched pictures $i$ and $j$ is defined as the sum of the distances between all feature inliers after the optimized rotation transformation followed by the translation transformation:

$$e_{ij} = \sum_{k} \left\| p_k^{\,i} - \left( \hat{R}_{ij}\, p_k^{\,j} + T_{ij} \right) \right\|,$$

where $p_k^{\,i}$ and $p_k^{\,j}$ are the $k$-th matched inliers in pictures $i$ and $j$, the sum runs over all feature inliers of pictures $i$ and $j$, $\hat{R}_{ij}$ is the optimized rotation transformation matrix between pictures $i$ and $j$, and $T_{ij}$ is the translation transformation parameter between pictures $i$ and $j$. The accumulated error of the whole image set is the sum of the inlier distances over all images and their matched images after translation transformation:

$$E_T = \sum_{i=1}^{n} \sum_{j \in I(i)} e_{ij},$$

where $n$ is the number of images to be stitched and $I(i)$ denotes all images matching image $i$. All translation transformation parameters $T_{ij}$ are then computed by iterative optimization.
Further, the final transformation matrix is represented as follows:

$$H_{ij} = \begin{bmatrix} \cos\theta_{ij} & -\sin\theta_{ij} & t_x \\ \sin\theta_{ij} & \cos\theta_{ij} & t_y \\ 0 & 0 & 1 \end{bmatrix}.$$
further, matrix A, matrix B is a calculated from all feature inliers k ,b k The combination is specifically as follows:
the invention has the beneficial effects that:
(1) Using vectors in place of the traditional feature points to compute each image's transformation matrix during registration allows the two groups of parameters in the transformation matrix to be computed, and therefore optimized, in separate stages. Compared with the traditional iterative optimization that handles all parameters simultaneously, the two stage-wise optimization matrices are much smaller, the computation is greatly reduced, and the stitching speed is markedly improved;
(2) The same vector-based computation separates the two groups of parameters in the transformation matrix, eliminating the mutual interference caused by their large difference in magnitude during optimization; this improves the accuracy of the optimized transformation parameters and markedly improves the stitching result.
Drawings
FIG. 1 is a schematic diagram of a vector shape preserving transformation based stitching method of the present invention;
fig. 2 (a) is an original picture to be spliced, fig. 2 (b) is an original non-optimized splicing effect diagram, fig. 2 (c) is a traditional point feature optimization calculation transformation matrix splicing effect diagram, and fig. 2 (d) is a splicing effect diagram adopting the method described in this embodiment;
fig. 3 is a flow chart of the operation of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but embodiments of the present invention are not limited thereto.
Examples
This embodiment provides a rapid image stitching method based on vector shape-preserving transformation. Feature points are extracted from all images to be stitched, vector features are constructed from the in-image feature points, and the two groups of parameters in the image transformation matrix are computed separately, so that they do not disturb each other during iterative optimization despite their large difference in magnitude; the matrix computation is therefore more accurate and the stitching result is greatly improved. At the same time, because the parameter computation is separated, the iterative optimization can also be separated: compared with the original optimization matrix containing all parameters, the two separated optimization matrices are much smaller and the stitching speed increases markedly.
As shown in fig. 1 and 3, the method comprises the following steps:
s1, reading all original pictures to be spliced, preprocessing the original pictures, and removing noise;
s2, directly extracting SIFT feature points of each image and storing the SIFT feature points;
and S3, purifying the directly extracted SIFT feature points with the RANSAC (Random Sample Consensus) algorithm, removing mismatched feature points, retaining and storing the correctly matched feature inliers, and then judging the image matching relationship. Let the total number of directly extracted SIFT feature matching pairs be $n_f$ and the number of inliers retained after RANSAC purification be $n_i$. If $n_i > 8 + 0.3\,n_f$, the two images are judged to match;
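As an illustration of steps S1 to S3, the sketch below uses OpenCV's SIFT and RANSAC routines. The function name, the Gaussian-kernel size, the ratio-test threshold, and the reprojection threshold are assumptions for the sketch, not values from this patent; only the matching criterion $n_i > 8 + 0.3\,n_f$ comes from this embodiment.

```python
# Illustrative sketch of S1-S3 (OpenCV); parameter values are assumptions.
import cv2
import numpy as np

def extract_and_match(path_i, path_j, ratio=0.75):
    # S1: read both images and suppress noise with a small Gaussian filter
    img_i = cv2.GaussianBlur(cv2.imread(path_i), (3, 3), 0)
    img_j = cv2.GaussianBlur(cv2.imread(path_j), (3, 3), 0)

    # S2: extract SIFT keypoints and descriptors
    sift = cv2.SIFT_create()
    kp_i, des_i = sift.detectAndCompute(img_i, None)
    kp_j, des_j = sift.detectAndCompute(img_j, None)

    # tentative matches via Lowe's ratio test
    pairs = cv2.BFMatcher().knnMatch(des_i, des_j, k=2)
    good = [m for m, n2 in pairs if m.distance < ratio * n2.distance]
    pts_i = np.float32([kp_i[m.queryIdx].pt for m in good])
    pts_j = np.float32([kp_j[m.trainIdx].pt for m in good])

    # S3: purify with RANSAC and keep only the inliers
    _, mask = cv2.findHomography(pts_j, pts_i, cv2.RANSAC, 5.0)
    keep = mask.ravel().astype(bool)
    n_f, n_i = len(good), int(keep.sum())

    # the embodiment's criterion: the pair matches if n_i > 8 + 0.3 * n_f
    return pts_i[keep], pts_j[keep], n_i > 8 + 0.3 * n_f
```

The returned inlier arrays keep the two images' points in the same matched order, which the vector construction in S4 relies on.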
s4, constructing a feature vector through the purified feature interior points in a mode that a previous point is used as a starting point and a next point is used as an end point in a single image according to the stored interior point sequence, and sequentially constructing the vector in sequence:
wherein,respectively the kth and the kth in the saved feature inner points+1 interior points, ++>To construct the resulting kth vector;
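A minimal sketch of this vector construction; `build_vectors` is an illustrative name and assumes the inliers are stored in matched order as an (N, 2) array.

```python
import numpy as np

def build_vectors(pts):
    # consecutive differences v_k = p_{k+1} - p_k over the stored order
    return pts[1:] - pts[:-1]
```

Because both images' inliers are stored in the same matched order, `build_vectors(pts_i)[k]` and `build_vectors(pts_j)[k]` form the $k$-th matched vector pair.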
s5, according to the matching relation of the matched images and the characteristic interior point information obtained after purification, calculating a transformation matrix between the two matched images, providing an initial value for Bundle Adjustment iterative optimization, and assuming that one pair of interior points in the two matched images are respectivelyAnd->Transformation matrix rotation parameter θ and translation parameter t= [ T ] x t y ] T The specific calculation steps of the initial value are as follows:
wherein A is E R 2N×4 ,B∈R 2N×1 N is the number of extracted image interior points, and the matrix A, B is a calculated according to all the interior points k ,b k Is combined.
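The least-squares initialization could be sketched as below. The per-point block layout of $a_k$ and $b_k$ is my own assumption, chosen to be consistent with $A \in \mathbb{R}^{2N\times 4}$; the patent itself only states the matrix dimensions.

```python
import numpy as np

def initial_params(pts_j, pts_i):
    # stack one 2x4 block a_k and one 2x1 block b_k per inlier pair,
    # with unknowns ordered as [cos(theta), sin(theta), t_x, t_y]
    rows_a, rows_b = [], []
    for (x, y), (u, v) in zip(pts_j, pts_i):
        rows_a += [[x, -y, 1.0, 0.0], [y, x, 0.0, 1.0]]
        rows_b += [u, v]
    A = np.asarray(rows_a)                       # shape (2N, 4)
    B = np.asarray(rows_b)                       # shape (2N,)
    c, s, tx, ty = np.linalg.lstsq(A, B, rcond=None)[0]
    return np.arctan2(s, c), np.array([tx, ty])  # theta, T
```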
S6, iteratively optimizing the rotation parameters from the constructed feature-vector matching relationship. The error between two matched pictures $i$ and $j$ is defined as the sum of the norms of the differences between the images' internal feature vectors after rotation transformation:

$$e_{ij} = \sum_{k} \left\| \vec{v}_k^{\,i} - R_{ij}\,\vec{v}_k^{\,j} \right\|,$$

where $\vec{v}_k^{\,i}$ and $\vec{v}_k^{\,j}$ are the $k$-th matched feature vectors in pictures $i$ and $j$, the sum runs over all feature vectors constructed in pictures $i$ and $j$, and $R_{ij}$ is the rotation transformation matrix between pictures $i$ and $j$. The accumulated error of the whole image set is the sum, over all images and their matched images, of these distances after the rotation transformation matrix is applied:

$$E_R = \sum_{i=1}^{n} \sum_{j \in I(i)} e_{ij},$$

where $n$ is the number of images to be stitched and $I(i)$ denotes all images matching image $i$. The rotation transformation matrices $R_{ij}$ are then iteratively optimized according to the Levenberg-Marquardt algorithm to obtain the rotation parameters $\theta_{ij}$;
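For a single matched pair, the rotation refinement could look like the sketch below, using SciPy's Levenberg-Marquardt solver. Note that the embodiment optimizes all $R_{ij}$ over all images jointly; this sketch shows only one pairwise term of that objective, with illustrative names.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_rotation(theta0, vecs_i, vecs_j):
    # residual of one pairwise term: v_i - R(theta) v_j for every vector pair
    def residuals(params):
        c, s = np.cos(params[0]), np.sin(params[0])
        R = np.array([[c, -s], [s, c]])
        return (vecs_i - vecs_j @ R.T).ravel()
    # Levenberg-Marquardt refinement starting from the least-squares value
    return least_squares(residuals, x0=[theta0], method='lm').x[0]
```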
S7, according to the optimized rotation transformation parameters and the inlier matching relationship between images, iteratively optimizing the translation parameters of each image using the inliers of all matched image pairs. The error between two correctly matched pictures $i$ and $j$ is defined as the sum of the distances between all feature inliers after the optimized rotation transformation followed by the translation transformation:

$$e_{ij} = \sum_{k} \left\| p_k^{\,i} - \left( \hat{R}_{ij}\, p_k^{\,j} + T_{ij} \right) \right\|,$$

where $p_k^{\,i}$ and $p_k^{\,j}$ are the $k$-th matched inliers in pictures $i$ and $j$, the sum runs over all feature inliers of pictures $i$ and $j$, $\hat{R}_{ij}$ is the optimized rotation transformation matrix between pictures $i$ and $j$, and $T_{ij}$ is the translation transformation parameter between pictures $i$ and $j$. The accumulated error of the whole image set is the sum of the inlier distances over all images and their matched images after translation transformation:

$$E_T = \sum_{i=1}^{n} \sum_{j \in I(i)} e_{ij},$$

where $n$ is the number of images to be stitched and $I(i)$ denotes all images matching image $i$. All translation transformation parameters $T_{ij}$ are then computed by iterative optimization according to the Levenberg-Marquardt algorithm;
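Similarly, a sketch of one pairwise term of the translation refinement, with the optimized rotation held fixed; the same caveat applies, since the embodiment optimizes all $T_{ij}$ jointly. Each pairwise term here is actually linear in $T$, so the iterative call mirrors the embodiment's scheme rather than being strictly necessary for a single pair.

```python
import numpy as np
from scipy.optimize import least_squares

def refine_translation(theta, pts_i, pts_j, t0=(0.0, 0.0)):
    c, s = np.cos(theta), np.sin(theta)
    rotated = pts_j @ np.array([[c, -s], [s, c]]).T  # R_hat applied to p_j
    def residuals(t):
        # residual: p_i - (R_hat p_j + T) for every matched inlier pair
        return (pts_i - (rotated + t)).ravel()
    return least_squares(residuals, x0=list(t0), method='lm').x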
S8, constructing the final transformation matrix from the rotation and translation parameters obtained by optimization, in the following specific form:

$$H_{ij} = \begin{bmatrix} \cos\theta_{ij} & -\sin\theta_{ij} & t_x \\ \sin\theta_{ij} & \cos\theta_{ij} & t_y \\ 0 & 0 & 1 \end{bmatrix};$$
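Assembling this 3x3 homogeneous transform from the two separately optimized parameter groups is a one-liner; `final_matrix` is an illustrative name.

```python
import numpy as np

def final_matrix(theta, t):
    # rigid transform: rotation by theta followed by translation t
    return np.array([[np.cos(theta), -np.sin(theta), t[0]],
                     [np.sin(theta),  np.cos(theta), t[1]],
                     [0.0,            0.0,           1.0]])
```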
and S9, computing the relative positions of all images from the computed transformation matrices and fusing the images with the mean-value fusion algorithm to obtain the final stitched image.
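A two-image sketch of the warp-and-fuse step; the canvas handling (image $i$ placed at the origin) and the per-pixel mean blend are simplified stand-ins for the embodiment's mean-value fusion over all images.

```python
import cv2
import numpy as np

def blend_pair(img_i, img_j, H, canvas_size):
    # place image i on the canvas and warp image j into its frame
    w, h = canvas_size
    base = np.zeros((h, w, 3), np.float32)
    base[:img_i.shape[0], :img_i.shape[1]] = img_i
    warped = cv2.warpPerspective(img_j, H, (w, h)).astype(np.float32)

    # average where the two images overlap, keep whichever exists elsewhere
    overlap = (base.sum(2) > 0) & (warped.sum(2) > 0)
    out = np.where(overlap[..., None], (base + warped) / 2, base + warped)
    return out.astype(np.uint8)
```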
Fig. 2 (a) shows the original pictures to be stitched, Fig. 2 (b) the unoptimized stitching result, Fig. 2 (c) the result of the traditional feature-point optimization of the transformation matrix, and Fig. 2 (d) the result of the method described in this embodiment. The traditional feature-point iterative computation of the transformation matrix yields large registration errors, obvious ghosting and blurring, poor stitching quality, and long running time, and cannot meet practical industrial requirements. The feature-vector stage-wise iterative optimization proposed in this embodiment yields a good stitching result with small registration error, and the required time is markedly lower than when all parameters are optimized simultaneously, showing that this embodiment meets practical application requirements better than existing algorithms.
The embodiments described above are preferred embodiments of the present invention, but the embodiments of the present invention are not limited thereto; any other change, modification, substitution, combination, or simplification that does not depart from the spirit and principle of the present invention is an equivalent replacement and is included within the protection scope of the present invention.

Claims (7)

1. A rapid image stitching method based on vector shape-preserving transformation, characterized by comprising the following steps:
reading the images to be stitched and preprocessing them;
extracting and storing the SIFT feature points of each image;
purifying the SIFT feature points matched between any two images to obtain the feature inliers between image pairs, and determining the matching relationship between image pairs;
constructing feature vectors from the feature inliers;
computing the transformation matrix between each pair of matched images from the matching relationship and the purified feature inliers, thereby obtaining initial values for the iterative optimization of each image's rotation and translation parameters;
iteratively optimizing the rotation transformation parameters of each image using the feature vectors;
iteratively optimizing the translation transformation parameters of each image using the optimized rotation transformation parameters and the inlier matching relationship;
computing the final transformation matrix from the optimized rotation and translation transformation parameters;
obtaining the relative positions of all images from the computed transformation matrix of each image and producing the final stitched image through an image fusion step;
wherein the feature vectors are constructed from the feature inliers, specifically: in a single image, following the storage order of the feature inliers, vectors are constructed in sequence with the $k$-th point as start point and the $(k+1)$-th point as end point:

$$\vec{v}_k = p_{k+1} - p_k,$$

where $p_k$ and $p_{k+1}$ are the $k$-th and $(k+1)$-th stored feature inliers and $\vec{v}_k$ is the resulting $k$-th vector;
the initial values for the iterative optimization of each image's rotation and translation parameters are obtained as follows: assume a pair of feature inliers in any two matched images is $p_k^i = [x_k\ \ y_k]^T$ and $p_k^j = [x_k'\ \ y_k']^T$; initial values of the transformation-matrix rotation parameter $\theta$ and translation parameter $T = [t_x\ \ t_y]^T$ are computed by solving the linear least-squares system

$$A\,[\cos\theta\ \ \sin\theta\ \ t_x\ \ t_y]^T = B,$$

where $A \in \mathbb{R}^{2N \times 4}$, $B \in \mathbb{R}^{2N \times 1}$, $N$ is the number of extracted feature inliers, and the matrices $A$ and $B$ are the combination of the $a_k$, $b_k$ computed from all feature inliers;
the rotation transformation parameters of each image are iteratively optimized from the feature vectors, specifically: the error between two matched pictures $i$ and $j$ is defined as the sum of the norms of the differences between the images' internal feature vectors after rotation transformation:

$$e_{ij} = \sum_{k} \left\| \vec{v}_k^{\,i} - R_{ij}\,\vec{v}_k^{\,j} \right\|,$$

where $\vec{v}_k^{\,i}$ and $\vec{v}_k^{\,j}$ are the $k$-th matched feature vectors in pictures $i$ and $j$, the sum runs over all feature vectors constructed in pictures $i$ and $j$, and $R_{ij}$ is the rotation transformation matrix between pictures $i$ and $j$; the accumulated error of the whole image set is the sum, over all images and their matched images, of these distances after the rotation transformation matrix is applied:

$$E_R = \sum_{i=1}^{n} \sum_{j \in I(i)} e_{ij},$$

where $n$ is the number of images to be stitched and $I(i)$ denotes all images matching image $i$; all rotation transformation matrices $R_{ij}$ are then iteratively optimized to obtain the rotation parameters $\theta_{ij}$.
2. The method of claim 1, wherein the preprocessing step is denoising.
3. The rapid image stitching method according to claim 1, wherein the purification of the SIFT feature points extracted between any two images uses the RANSAC algorithm to remove mismatched points.
4. The rapid image stitching method according to claim 3, wherein the feature inliers between image pairs are obtained and the matching relationship between image pairs is determined specifically as follows: let the total number of extracted SIFT feature matching pairs be $n_f$ and the number of inlier pairs retained after RANSAC purification be $n_i$; if $n_i > 8 + 0.3\,n_f$, the two images match.
5. The rapid image stitching method according to claim 1, wherein the final transformation matrix is calculated from the optimized rotation transformation parameters and translation transformation parameters, specifically: the error between two matched pictures $i$ and $j$ is defined as the sum of the distances between all feature inliers after the optimized rotation transformation followed by the translation transformation:

$$e_{ij} = \sum_{k} \left\| p_k^{\,i} - \left( \hat{R}_{ij}\, p_k^{\,j} + T_{ij} \right) \right\|,$$

where $p_k^{\,i}$ and $p_k^{\,j}$ are the $k$-th matched inliers in pictures $i$ and $j$, the sum runs over all feature inliers of pictures $i$ and $j$, $\hat{R}_{ij}$ is the optimized rotation transformation matrix between pictures $i$ and $j$, and $T_{ij}$ is the translation transformation parameter between pictures $i$ and $j$; the accumulated error of the whole image set is the sum of the inlier distances over all images and their matched images after translation transformation:

$$E_T = \sum_{i=1}^{n} \sum_{j \in I(i)} e_{ij},$$

where $n$ is the number of images to be stitched and $I(i)$ denotes all images matching image $i$; all translation transformation parameters $T_{ij}$ are then computed by iterative optimization.
6. The rapid image stitching method according to claim 5, wherein the final transformation matrix is represented as follows:

$$H_{ij} = \begin{bmatrix} \cos\theta_{ij} & -\sin\theta_{ij} & t_x \\ \sin\theta_{ij} & \cos\theta_{ij} & t_y \\ 0 & 0 & 1 \end{bmatrix}.$$
7. The rapid image stitching method according to claim 1, wherein the matrices $A$ and $B$ are the combination of the $a_k$, $b_k$ computed from all feature inliers, specifically the row-wise stacking

$$A = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{bmatrix}, \qquad B = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_N \end{bmatrix}.$$
CN202210340989.5A 2022-04-02 2022-04-02 Image rapid splicing method based on vector shape preserving transformation Active CN114862672B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210340989.5A CN114862672B (en) 2022-04-02 2022-04-02 Image rapid splicing method based on vector shape preserving transformation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210340989.5A CN114862672B (en) 2022-04-02 2022-04-02 Image rapid splicing method based on vector shape preserving transformation

Publications (2)

Publication Number Publication Date
CN114862672A CN114862672A (en) 2022-08-05
CN114862672B true CN114862672B (en) 2024-04-02

Family

ID=82628688

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210340989.5A Active CN114862672B (en) 2022-04-02 2022-04-02 Image rapid splicing method based on vector shape preserving transformation

Country Status (1)

Country Link
CN (1) CN114862672B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215770A (en) * 2020-10-10 2021-01-12 成都数之联科技有限公司 Image processing method, system, device and medium
CN113658041A (en) * 2021-07-23 2021-11-16 华南理工大学 Image fast splicing method based on multi-image feature joint matching
CN114219706A (en) * 2021-11-08 2022-03-22 华南理工大学 Image fast splicing method based on reduction of grid partition characteristic points

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11379688B2 (en) * 2017-03-16 2022-07-05 Packsize Llc Systems and methods for keypoint detection with convolutional neural networks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215770A (en) * 2020-10-10 2021-01-12 成都数之联科技有限公司 Image processing method, system, device and medium
CN113658041A (en) * 2021-07-23 2021-11-16 华南理工大学 Image fast splicing method based on multi-image feature joint matching
CN114219706A (en) * 2021-11-08 2022-03-22 华南理工大学 Image fast splicing method based on reduction of grid partition characteristic points

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Adaptive evolutionary point cloud stitching algorithm based on color information (基于色彩信息的自适应进化点云拼接算法); Zou Li (邹力); Application Research of Computers (计算机应用研究); 2019-01-31; Vol. 36, No. 1; pp. 303-307 *

Also Published As

Publication number Publication date
CN114862672A (en) 2022-08-05

Similar Documents

Publication Publication Date Title
CN105205781B (en) Transmission line of electricity Aerial Images joining method
WO2021098083A1 (en) Multispectral camera dynamic stereo calibration algorithm based on salient feature
CN113538569B (en) Weak texture object pose estimation method and system
CN115205489A (en) Three-dimensional reconstruction method, system and device in large scene
CN108257089B (en) A method of the big visual field video panorama splicing based on iteration closest approach
CN109376641B (en) Moving vehicle detection method based on unmanned aerial vehicle aerial video
CN113221665A (en) Video fusion algorithm based on dynamic optimal suture line and improved gradual-in and gradual-out method
CN101540046A (en) Panoramagram montage method and device based on image characteristics
CN111553841B (en) Real-time video splicing method based on optimal suture line updating
CN106447601A (en) Unmanned aerial vehicle remote image mosaicing method based on projection-similarity transformation
CN109859137B (en) Wide-angle camera irregular distortion global correction method
CN111553939A (en) Image registration algorithm of multi-view camera
CN114331835A (en) Panoramic image splicing method and device based on optimal mapping matrix
CN110580715B (en) Image alignment method based on illumination constraint and grid deformation
CN111127353A (en) High-dynamic image ghost removing method based on block registration and matching
CN112465702B (en) Synchronous self-adaptive splicing display processing method for multi-channel ultrahigh-definition video
CN111681271B (en) Multichannel multispectral camera registration method, system and medium
CN114862672B (en) Image rapid splicing method based on vector shape preserving transformation
CN107330856B (en) Panoramic imaging method based on projective transformation and thin plate spline
CN107067368B (en) Streetscape image splicing method and system based on deformation of image
CN109598675B (en) Splicing method of multiple repeated texture images
CN116758121A (en) Infrared image and visible light image registration fusion method based on wearable helmet
CN113658041B (en) Image rapid splicing method based on multi-image feature joint matching
CN116823895A (en) Variable template-based RGB-D camera multi-view matching digital image calculation method and system
CN112700504B (en) Parallax measurement method of multi-view telecentric camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant