CN114219706A - Image fast splicing method based on reduction of grid partition characteristic points - Google Patents

Image fast splicing method based on reduction of grid partition characteristic points

Info

Publication number
CN114219706A
CN114219706A (application CN202111315117.5A)
Authority
CN
China
Prior art keywords
points
feature
image
images
interior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111315117.5A
Other languages
Chinese (zh)
Inventor
贺霖
贺新国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202111315117.5A priority Critical patent/CN114219706A/en
Publication of CN114219706A publication Critical patent/CN114219706A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155 Bayesian classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fast image stitching method based on grid-partition feature point reduction. The method comprises: reading the original images to be stitched and denoising them as preprocessing; extracting SIFT features from all images; purifying the matches with the RANSAC algorithm to obtain the interior points between matched images; judging the image matching relationship, following Bayesian estimation, from the quantitative relationship between the interior points and the original feature points; dividing each image into dense grids by a grid partitioning method and reducing the interior points within each grid; calculating the transformation matrix between any two matched images from the reduced feature interior points as the initial value for optimization; iteratively optimizing the transformation matrix of every image according to the reduced interior point matching relationships; and obtaining the final stitched image by a mean-value fusion method. The invention preserves stitching quality while effectively reducing the time required to stitch multiple pictures, thereby meeting industrial real-time stitching requirements.

Description

Image fast splicing method based on reduction of grid partition characteristic points
Technical Field
The invention relates to the technical field of image processing, and in particular to a fast image stitching method based on grid-partition feature point reduction.
Background
Image stitching fuses two or more narrow-field, low-resolution pictures that share overlapping regions into a single wide-field, high-resolution picture. It is widely applied in practical scenarios such as unmanned aerial vehicle (UAV) aerial photography, virtual reality, and medical image processing. In many of these scenarios stitching must be completed in real time: during battlefield monitoring, for example, images captured by a UAV must be stitched during the flight and the resulting scene returned immediately to support battlefield decisions. Fast image stitching is therefore an essential requirement for practical applications. The technology can be summarized in four stages: image acquisition, image preprocessing, image registration, and image fusion. In the registration stage, feature information such as SIFT, SURF, ORB, or Harris corners is first extracted from all images; then, from the positions of all feature points, a bundle adjustment method iteratively optimizes the registration matrices of all images, completing the geometric alignment of the images. However, when the extracted feature points are used directly, their number on each image is too large, the optimization matrix becomes too large during iterative optimization, the computation takes very long, and the real-time stitching requirement of practical applications cannot be met.
It is therefore necessary to design a multi-image stitching method with strong real-time performance and low computational cost. The invention reduces the number of directly extracted feature points by grid partitioning, shrinking the scale of the optimization matrix, and obtains a good stitching result while satisfying the requirement of fast stitching.
Disclosure of Invention
In order to overcome the defects and shortcomings of the prior art, the invention provides a fast image stitching method based on grid-partition feature point reduction.
In order to achieve this purpose, the invention adopts the following technical scheme:
a fast image stitching method based on grid-partition feature point reduction, comprising the following steps:
obtaining the images to be stitched;
extracting the feature points of each image;
obtaining the feature interior points between matched images from the extracted feature points;
obtaining the image matching pairs from the feature interior points;
reducing the matched feature interior points by the grid partitioning method;
calculating the transformation matrix between every two matched images from the reduced feature interior points;
calculating the transformation matrix of each picture from the reduced interior point matching relationships;
and obtaining the relative positions of all images from the calculated transformation matrices and producing the final stitched image through an image fusion step.
Further, the step of obtaining the images to be stitched further comprises preprocessing them, wherein the preprocessing comprises noise removal.
Further, the obtaining of the feature interior points between matched images from the extracted feature points is specifically:
directly matching the feature points between the two images and purifying the direct matches with the RANSAC algorithm to obtain the correctly matched feature interior points; letting V be the set of directly matched feature point pairs, containing m matching pairs;
randomly selecting q pairs of matched feature points from the set V and calculating the homography transformation matrix H from them;
counting the number k of the remaining (m - q) feature point pairs that satisfy the transformation matrix H;
repeating the above steps n times and selecting the feature matches of the iteration with the largest k as the final feature interior points.
Further, the image matching pairs are obtained from the feature interior points: whether any two images match is judged by Bayesian estimation, yielding all image matching pairs. The judgment process is as follows:
the total number of feature matching pairs is m, and the number of feature interior points remaining after purification by the RANSAC algorithm is n_i; if n_i > 8 + 0.3·m, the two images are judged to match.
Further, the matched feature interior points are reduced by the grid partitioning method, specifically:
partitioning the matched image into C1 × C2 grids;
counting the feature points in each grid;
clustering the feature interior points within each grid with the k-means clustering algorithm: if the number of interior points in the grid exceeds 1/C1 of the total feature points, clustering them into 2 classes; if it is less than 1/C1, clustering them into 4 classes;
sampling the clustered interior points in each grid by randomly selecting 60% of the interior points of each class;
and deleting the remaining feature interior points.
Further, the calculation of the transformation matrix between every two matched images from the reduced feature interior points is specifically:
suppose a pair of feature interior points in two matched images is u1 = [x1 y1]^T and u2 = [x2 y2]^T. The transformation matrix is a projective transformation and is computed as follows. Each pair contributes a 2 × 9 block

A_i = [ x1  y1  1   0   0   0  -x2·x1  -x2·y1  -x2
         0   0  0  x1  y1   1  -y2·x1  -y2·y1  -y2 ]

and the blocks are stacked into the homogeneous system

A·h = 0,  A = [A_1; A_2; ...; A_N],

where N is the number of extracted SIFT interior points of the image and the matrix A, of size 2N × 9, is assembled from the blocks A_i computed from all feature interior points. The system is solved by singular value decomposition: the vector h to be solved, reshaped into the 3 × 3 homography, is the right singular vector of A corresponding to its smallest singular value.
Further, the calculation of the transformation matrix of each picture from the reduced interior point matching relationships is specifically:
the error between two matched pictures i, j is defined as the sum of distances of all reduced feature interior points after transformation by the transformation matrix, computed as:

e_ij = Σ_{k ∈ F_ij} || u_k^i - H_ij · u_k^j ||^2

where u_k^i and u_k^j are the k-th matched feature interior points in pictures i and j, F_ij is the set of all SIFT interior points of pictures i and j purified by the RANSAC algorithm, and H_ij is the homography transformation matrix between pictures i and j. The accumulated error over the whole image set is the sum, over all images and their matched images, of the distances of the reduced feature interior points after the transformation:

e = Σ_i Σ_{j ∈ I(i)} e_ij

where I(i) denotes all images that match correctly with image i. All transformation matrices H_ij are then computed by iterative optimization with the Levenberg-Marquardt algorithm.
Further, the image fusion is realized by adopting an average value fusion method.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the image fast stitching method when executing the computer program.
A storage medium having stored thereon a computer program which, when executed by a processor, implements the image rapid stitching method of any one of the above.
The invention has the beneficial effects that:
(1) Compared with calculating the transformation matrices directly from the original feature points, calculating each image's transformation matrix after reducing the feature points by grid partitioning shrinks the scale of the iterative optimization matrix, greatly reduces the computation, accelerates the stitching process, shortens the stitching time, and enables real-time stitching.
(2) The invention reduces the number of feature points by grid partitioning rather than by randomly removing feature points, which avoids the increased registration error that random removal can cause and keeps the stitching quality close to that obtained with the original feature points.
Drawings
FIG. 1 is a flow chart of the operation of the present invention;
fig. 2(a) is an original picture to be stitched, fig. 2(b) is a stitching effect diagram of a transformation matrix calculated by using directly extracted feature points, fig. 2(c) is a stitching effect diagram of a transformation matrix calculated by using randomly removed feature points, and fig. 2(d) is a stitching effect diagram processed by using the method of the embodiment.
FIG. 3 is a schematic diagram of the characteristic point reduction method of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited to these examples.
Example 1
A fast image stitching method based on grid-partition feature point reduction aims to increase the stitching speed of multiple images while preserving a good stitching result, so that it meets practical industrial requirements. All feature points in the images to be stitched are extracted; the feature points of each image are divided into several regions by grid partitioning; 60% of the feature points in each region are randomly selected and retained while the rest are removed, so that the scale of the optimization matrix in the iterative optimization is greatly reduced and the stitching speed is greatly increased. Meanwhile, because grid partitioning is used, the reduced feature points preserve the uniform spatial distribution of the original feature points, and the registration error is not increased by the abnormal point distribution that random removal can cause. The method thus maintains the stitching quality while greatly reducing the time required for multi-image stitching.
The flow chart is shown in fig. 1, and specifically includes:
s1, reading all original pictures to be spliced, preprocessing the original pictures and removing noise in the original pictures;
s2, extracting and storing SIFT feature points of each image respectively;
S3, directly matching the feature points between every two images, purifying the direct matches with the RANSAC (random sample consensus) algorithm, removing mismatched feature points, and keeping the correctly matched feature interior points. Let the directly matched feature point pairs form the set V, containing m matching pairs. The specific steps are:
S3.1, randomly selecting q pairs of matched feature points from the set V;
S3.2, calculating the homography transformation matrix H from the selected q pairs of matched feature points;
S3.3, counting the number k of the remaining (m - q) feature point pairs that satisfy the transformation matrix H;
and S3.4, repeating the above steps n times and selecting the feature matches of the iteration with the largest k as the final interior points.
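The S3.1 to S3.4 loop can be sketched in NumPy as below. This is an illustrative reconstruction rather than the patent's implementation; the sample size q = 4, the iteration count, the pixel tolerance, and the function names are our assumptions:

```python
import numpy as np

def fit_homography(src, dst):
    """DLT: stack one 2x9 block per point pair and take the right singular
    vector of the smallest singular value as the 3x3 homography."""
    rows = []
    for (x1, y1), (x2, y2) in zip(src, dst):
        rows.append([x1, y1, 1, 0, 0, 0, -x2 * x1, -x2 * y1, -x2])
        rows.append([0, 0, 0, x1, y1, 1, -y2 * x1, -y2 * y1, -y2])
    return np.linalg.svd(np.asarray(rows, float))[2][-1].reshape(3, 3)

def ransac_inliers(src, dst, q=4, n_iters=200, tol=3.0, seed=0):
    """S3.1-S3.4: sample q pairs, fit H, count the pairs within tol of H;
    after n_iters rounds keep the largest consensus set."""
    rng = np.random.default_rng(seed)
    m, best = len(src), np.zeros(len(src), bool)
    for _ in range(n_iters):
        sample = rng.choice(m, size=q, replace=False)       # S3.1
        H = fit_homography(src[sample], dst[sample])        # S3.2
        proj = np.c_[src, np.ones(m)] @ H.T
        with np.errstate(divide="ignore", invalid="ignore"):
            err = np.linalg.norm(proj[:, :2] / proj[:, 2:3] - dst, axis=1)
        ok = err < tol                                      # S3.3: satisfied pairs
        if ok.sum() > best.sum():                           # S3.4: keep the best run
            best = ok
    return best
```

The returned boolean mask marks the feature interior points that survive the purification.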
S4, obtaining the image matching pairs from the feature interior points, specifically:
whether any two images match is judged by Bayesian estimation from the extracted feature point information, yielding all image matching pairs. The judgment is as follows:
let the total number of directly extracted SIFT feature matching pairs be m, and let the number of interior points remaining after purification by the RANSAC algorithm be n_i. If n_i > 8 + 0.3·m, the two images are judged to match;
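As a concrete reading of this criterion (a one-line sketch; the function name is ours, not the patent's):

```python
def images_match(n_inliers: int, n_matches: int) -> bool:
    """Declare two images a match when the RANSAC inlier count exceeds
    8 + 0.3*m, m being the number of putative feature matches."""
    return n_inliers > 8 + 0.3 * n_matches
```

For example, 100 putative matches require more than 38 inliers before the pair is accepted.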
S5, reducing the matched feature interior points by the grid partitioning method, as shown in fig. 3. Specifically, the matched image is divided into C1 × C2 grids;
the number of feature points in each grid is counted;
the feature interior points within each grid are clustered with the k-means clustering algorithm: if the number of interior points in the grid exceeds 1/C1 of the total feature points, they are clustered into 2 classes; if it is less than 1/C1, they are clustered into 4 classes;
the clustered interior points in each grid are sampled by randomly selecting 60% of the interior points of each class;
and the remaining feature interior points are deleted.
Retaining 60% of the interior points in this step reduces the computation while keeping the stitching quality optimal.
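The grid-partition reduction of S5 can be sketched as below. The grid sizes, the minimal k-means routine, and the default parameters are assumptions for the illustration; the 2-vs-4 class rule and the 60% sampling follow the text:

```python
import numpy as np

def kmeans(pts, k, iters=10, seed=0):
    """Minimal k-means: returns a cluster label per point."""
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), size=k, replace=False)].copy()
    labels = np.zeros(len(pts), dtype=int)
    for _ in range(iters):
        labels = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = pts[labels == c].mean(axis=0)
    return labels

def reduce_inliers(pts, w, h, c1=8, c2=8, keep=0.6, seed=0):
    """S5 sketch: bucket inliers into a c1 x c2 grid over a w x h image,
    k-means each cell into 2 or 4 classes depending on its share of the
    points, then keep a random `keep` fraction of every class."""
    pts = np.asarray(pts, float)
    rng = np.random.default_rng(seed)
    gx = np.clip((pts[:, 0] / w * c1).astype(int), 0, c1 - 1)
    gy = np.clip((pts[:, 1] / h * c2).astype(int), 0, c2 - 1)
    cell_id = gx * c2 + gy
    kept = []
    for cell in np.unique(cell_id):
        idx = np.where(cell_id == cell)[0]
        k = 2 if len(idx) > len(pts) / c1 else 4   # density threshold from the text
        labels = kmeans(pts[idx], min(k, len(idx)), seed=seed)
        for c in np.unique(labels):
            members = idx[labels == c]
            n_keep = max(1, int(round(keep * len(members))))
            kept.extend(rng.choice(members, size=n_keep, replace=False).tolist())
    return np.sort(np.array(kept))
```

The function returns the indices of the retained interior points; because the sampling is per cluster and per cell, the retained points preserve the spatial distribution of the original set.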
S6, calculating the transformation matrix between every two matched images from the reduced feature interior points, providing the initial value for the bundle adjustment iterative optimization. The specific calculation is as follows:
suppose a pair of interior points in two matched images is u1 = [x1 y1]^T and u2 = [x2 y2]^T. The transformation matrix is a projective transformation. Each pair contributes a 2 × 9 block

A_i = [ x1  y1  1   0   0   0  -x2·x1  -x2·y1  -x2
         0   0  0  x1  y1   1  -y2·x1  -y2·y1  -y2 ]

and the blocks are stacked into the homogeneous system

A·h = 0,  A = [A_1; A_2; ...; A_N],

where N is the number of extracted SIFT interior points of the image and the matrix A, of size 2N × 9, is assembled from the blocks A_i computed from all feature interior points. The system is solved by singular value decomposition: the vector h to be solved, reshaped into the 3 × 3 homography, is the right singular vector of A corresponding to its smallest singular value;
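The 2N × 9 system of S6 can be checked numerically. Below, a vectorized construction of A recovers a known homography from exact correspondences; the ground-truth matrix is synthetic and chosen only for the demonstration:

```python
import numpy as np

def dlt_matrix(u1, u2):
    """Stack the 2x9 blocks A_i for all N correspondences into the 2N x 9 matrix A."""
    x1, y1 = u1[:, 0], u1[:, 1]
    x2, y2 = u2[:, 0], u2[:, 1]
    o, z = np.ones_like(x1), np.zeros_like(x1)
    top = np.stack([x1, y1, o, z, z, z, -x2 * x1, -x2 * y1, -x2], axis=1)
    bot = np.stack([z, z, z, x1, y1, o, -y2 * x1, -y2 * y1, -y2], axis=1)
    return np.concatenate([top, bot], axis=0)

# Recover a known (synthetic) homography from exact correspondences.
rng = np.random.default_rng(0)
H_true = np.array([[1.0, 0.02, 5.0], [-0.01, 1.0, -3.0], [1e-4, 2e-4, 1.0]])
u1 = rng.uniform(0, 100, (12, 2))
p = np.c_[u1, np.ones(12)] @ H_true.T
u2 = p[:, :2] / p[:, 2:3]
A = dlt_matrix(u1, u2)                  # 24 x 9
h = np.linalg.svd(A)[2][-1]             # right singular vector, smallest singular value
H = h.reshape(3, 3)
H = H / H[2, 2]                         # homographies are scale-free; fix the scale
```

With noiseless data the recovered H matches H_true up to numerical precision, confirming that the smallest-singular-vector solution solves A·h = 0.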
S7, iteratively optimizing the transformation matrix of each image from the reduced interior point matching relationships, combining the interior point pairs among all matched images. The specific implementation is as follows:
the error between two correctly matched pictures i, j is defined as the sum of distances of all reduced feature interior points after transformation by the transformation matrix, computed as:

e_ij = Σ_{k ∈ F_ij} || u_k^i - H_ij · u_k^j ||^2

where u_k^i and u_k^j are the k-th matched feature interior points in pictures i and j, F_ij is the set of all SIFT interior points of pictures i and j purified by the RANSAC algorithm, and H_ij is the homography transformation matrix between pictures i and j. The accumulated error over the whole image set is the sum, over all images and their matched images, of the distances of the reduced feature interior points after the transformation:

e = Σ_i Σ_{j ∈ I(i)} e_ij

where I(i) denotes all images that match correctly with image i. All transformation matrices H_ij are then computed by iterative optimization with the Levenberg-Marquardt algorithm;
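The pairwise error e_ij and the accumulated error e can be sketched as below. A Levenberg-Marquardt solver (for instance scipy.optimize.least_squares) would minimize this objective over all H_ij; only the objective is reproduced here, and the function names and data layout are our assumptions:

```python
import numpy as np

def pair_error(H_ij, ui, uj):
    """e_ij: summed squared distance between the inliers of image i and the
    H_ij-transformed inliers of image j."""
    p = np.c_[uj, np.ones(len(uj))] @ H_ij.T
    proj = p[:, :2] / p[:, 2:3]
    return float(((ui - proj) ** 2).sum())

def total_error(H, matches):
    """Accumulated error e: sum of e_ij over every matched pair (i, j).
    `H` maps a pair key (i, j) to its homography; `matches` maps the same
    key to the reduced inlier arrays (ui, uj)."""
    return sum(pair_error(H[ij], ui, uj) for ij, (ui, uj) in matches.items())
```

A correct H_ij drives e_ij to zero on noiseless inliers, which is the condition the iterative optimization pushes toward.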
And S8, calculating the relative positions of all images from the calculated transformation matrices, and realizing image fusion by the mean-value fusion algorithm to obtain the final stitched image.
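Mean-value fusion in S8 can be sketched as follows, assuming the images have already been warped into the common panorama frame and each carries a coverage mask (a single-channel illustration; the function name is ours):

```python
import numpy as np

def average_fuse(warped, masks):
    """Mean-value fusion of already-warped single-channel images: each output
    pixel is the average of the images whose mask covers it."""
    acc = np.zeros(warped[0].shape, dtype=float)
    cnt = np.zeros(warped[0].shape, dtype=float)
    for img, m in zip(warped, masks):
        acc[m] += img[m]
        cnt[m] += 1.0
    out = np.zeros_like(acc)
    np.divide(acc, cnt, out=out, where=cnt > 0)   # leave uncovered pixels at 0
    return out
```

Pixels covered by one image are copied through; pixels in the overlap receive the average, which smooths the seam between matched images.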
Fig. 2(a) is an original picture to be stitched, fig. 2(b) is the stitching result of calculating the transformation matrices from the directly extracted feature points, fig. 2(c) is the stitching result of calculating the transformation matrices after randomly removing feature points, and fig. 2(d) is the stitching result of the method of this embodiment. The method that directly uses the extracted feature points has the smallest registration error and the best stitching quality, but its running time is long and cannot meet practical industrial requirements; the method that randomly removes feature points runs quickly but has a large registration error, obvious ghosting and blur, and poor stitching quality; the method of this embodiment achieves good stitching quality with a small registration error, and its running time is clearly lower than calculating with all feature points, so it fits practical application requirements better than the existing algorithms.
Example 2
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the image fast stitching method when executing the computer program.
The image fast stitching method comprises the following steps:
obtaining the images to be stitched;
extracting the feature points of each image;
obtaining the feature interior points between matched images from the extracted feature points;
obtaining the image matching pairs from the feature interior points;
reducing the matched feature interior points by the grid partitioning method;
calculating the transformation matrix between every two matched images from the reduced feature interior points;
calculating the transformation matrix of each picture from the reduced interior point matching relationships;
and obtaining the relative positions of all images from the calculated transformation matrices and producing the final stitched image through an image fusion step.
Example 3
A storage medium having stored thereon a computer program which, when executed by a processor, implements the image rapid stitching method of any one of the above.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (10)

1. A method for rapidly splicing images based on reduction of grid partition feature points is characterized by comprising the following steps:
obtaining the images to be stitched;
extracting the feature points of each image;
obtaining the feature interior points between matched images from the extracted feature points;
obtaining the image matching pairs from the feature interior points;
reducing the matched feature interior points by the grid partitioning method;
calculating the transformation matrix between every two matched images from the reduced feature interior points;
calculating the transformation matrix of each picture from the reduced interior point matching relationships;
and obtaining the relative positions of all images from the calculated transformation matrices and producing the final stitched image through an image fusion step.
2. The method for rapidly splicing images according to claim 1, wherein the step of obtaining the images to be spliced further comprises preprocessing the images to be spliced, wherein the preprocessing comprises removing noise.
3. The method for rapidly stitching images according to claim 1, wherein the obtaining of feature interior points between the matched images according to the extracted feature points specifically comprises:
directly matching the feature points between the two images and purifying the direct matches with the RANSAC algorithm to obtain the correctly matched feature interior points; letting V be the set of directly matched feature point pairs, containing m matching pairs;
randomly selecting q pairs of matched feature points from the set V and calculating the homography transformation matrix H from them;
counting the number k of the remaining (m - q) feature point pairs that satisfy the transformation matrix H;
and repeating the above steps n times and selecting the feature matches of the iteration with the largest k as the final feature interior points.
4. The image fast stitching method according to claim 1, wherein the image matching pairs are obtained from the feature interior points by judging, through Bayesian estimation, whether any two images match, thereby obtaining all image matching pairs; the judgment process is as follows:
the total number of feature matching pairs is m, and the number of feature interior points remaining after purification by the RANSAC algorithm is n_i; if n_i > 8 + 0.3·m, the two images are judged to match.
5. The image fast stitching method according to claim 1, wherein the matched feature interior points are reduced by the grid partitioning method, specifically:
partitioning the matched image into C1 × C2 grids;
counting the feature points in each grid;
clustering the feature interior points within each grid with the k-means clustering algorithm: if the number of interior points in the grid exceeds 1/C1 of the total feature points, clustering them into 2 classes; if it is less than 1/C1, clustering them into 4 classes;
sampling the clustered interior points in each grid by randomly selecting 60% of the interior points of each class;
and deleting the remaining feature interior points.
6. The image fast stitching method according to claim 1, wherein the transformation matrix between every two matched images is calculated from the reduced feature interior points, specifically:
suppose a pair of feature interior points in two matched images is u1 = [x1 y1]^T and u2 = [x2 y2]^T. The transformation matrix is a projective transformation. Each pair contributes a 2 × 9 block

A_i = [ x1  y1  1   0   0   0  -x2·x1  -x2·y1  -x2
         0   0  0  x1  y1   1  -y2·x1  -y2·y1  -y2 ]

and the blocks are stacked into the homogeneous system

A·h = 0,  A = [A_1; A_2; ...; A_N],

where N is the number of extracted SIFT interior points of the image and the matrix A, of size 2N × 9, is assembled from the blocks A_i computed from all feature interior points. The system is solved by singular value decomposition: the vector h to be solved, reshaped into the 3 × 3 homography, is the right singular vector of A corresponding to its smallest singular value.
7. The image fast stitching method according to any one of claims 1 to 6, wherein the transformation matrix of each picture is calculated from the reduced interior point matching relationships, specifically:
the error between the two matched pictures i, j is defined as the sum of distances of all reduced feature interior points after transformation by the transformation matrix, computed as:

e_ij = Σ_{k ∈ F_ij} || u_k^i - H_ij · u_k^j ||^2

where u_k^i and u_k^j are the k-th matched feature interior points in pictures i and j, F_ij is the set of all SIFT interior points of pictures i and j purified by the RANSAC algorithm, and H_ij is the homography transformation matrix between pictures i and j. The accumulated error over the whole image set is the sum, over all images and their matched images, of the distances of the reduced feature interior points after the transformation:

e = Σ_i Σ_{j ∈ I(i)} e_ij

where I(i) denotes all images that match correctly with image i. All transformation matrices H_ij are then computed by iterative optimization with the Levenberg-Marquardt algorithm.
8. The method for rapidly splicing the images according to claim 7, wherein the fusion of the images is realized by adopting an average value fusion method.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the image fast stitching method according to any one of claims 1 to 8 when executing the computer program.
10. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the image rapid stitching method according to any one of claims 1 to 8.
CN202111315117.5A 2021-11-08 2021-11-08 Image fast splicing method based on reduction of grid partition characteristic points Pending CN114219706A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111315117.5A CN114219706A (en) 2021-11-08 2021-11-08 Image fast splicing method based on reduction of grid partition characteristic points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111315117.5A CN114219706A (en) 2021-11-08 2021-11-08 Image fast splicing method based on reduction of grid partition characteristic points

Publications (1)

Publication Number Publication Date
CN114219706A true CN114219706A (en) 2022-03-22

Family

ID=80696588

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111315117.5A Pending CN114219706A (en) 2021-11-08 2021-11-08 Image fast splicing method based on reduction of grid partition characteristic points

Country Status (1)

Country Link
CN (1) CN114219706A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114862672A (en) * 2022-04-02 2022-08-05 华南理工大学 Image fast splicing method based on vector shape preserving transformation
CN114862672B (en) * 2022-04-02 2024-04-02 华南理工大学 Image rapid splicing method based on vector shape preserving transformation

Similar Documents

Publication Publication Date Title
US8306366B2 (en) Method and apparatus for extracting feature points from digital image
CN107578376B (en) Image splicing method based on feature point clustering four-way division and local transformation matrix
CN108229290B (en) Video object segmentation method and device, electronic equipment and storage medium
Tang et al. Single image dehazing via lightweight multi-scale networks
CN112288628B (en) Aerial image splicing acceleration method and system based on optical flow tracking and frame extraction mapping
CN111553845B (en) Quick image stitching method based on optimized three-dimensional reconstruction
CN102096915B (en) Camera lens cleaning method based on precise image splicing
CN103426190A (en) Image reconstruction method and system
CN114998773B (en) Characteristic mismatching elimination method and system suitable for aerial image of unmanned aerial vehicle system
CN114219706A (en) Image fast splicing method based on reduction of grid partition characteristic points
CN113421210B (en) Surface point Yun Chong construction method based on binocular stereoscopic vision
Sun et al. Uni6Dv2: Noise elimination for 6D pose estimation
CN117196954A (en) Weak texture curved surface image stitching method and device for aircraft skin
CN113298187A (en) Image processing method and device, and computer readable storage medium
CN117132503A (en) Method, system, equipment and storage medium for repairing local highlight region of image
CN109785367B (en) Method and device for filtering foreign points in three-dimensional model tracking
CN113658041B (en) Image rapid splicing method based on multi-image feature joint matching
CN114078096A (en) Image deblurring method, device and equipment
CN114608558A (en) SLAM method, system, device and storage medium based on feature matching network
CN115100444A (en) Image mismatching filtering method and image matching device thereof
CN108426566B (en) Mobile robot positioning method based on multiple cameras
CN111985535A (en) Method and device for optimizing human body depth map through neural network
CN112967398B (en) Three-dimensional data reconstruction method and device and electronic equipment
CN114862672B (en) Image rapid splicing method based on vector shape preserving transformation
CN111524161A (en) Method and device for extracting track

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination