CN112529019A - Image splicing method and system based on fusion of linear features and key point features - Google Patents

Image splicing method and system based on fusion of linear features and key point features Download PDF

Info

Publication number
CN112529019A
Authority
CN
China
Prior art keywords
image
features
point
linear
transformation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011548924.7A
Other languages
Chinese (zh)
Other versions
CN112529019B (en)
Inventor
孙志刚
张凯
张楠
肖力
王卓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN202011548924.7A priority Critical patent/CN112529019B/en
Publication of CN112529019A publication Critical patent/CN112529019A/en
Application granted granted Critical
Publication of CN112529019B publication Critical patent/CN112529019B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image stitching method and system based on the fusion of linear features and key point features, relating to the technical fields of image stitching and pattern recognition. To extract as many features as possible from low-texture images, the invention introduces linear features: linear features and key point features are detected on the image simultaneously, which enriches the feature extraction of the images to be stitched, and a moving direct linear transformation algorithm that combines the linear features and the key point features is introduced so that the two kinds of features jointly guide the local homography transformation of each grid. To prevent the linear features from weakening the alignment capability of the key point features, the degree to which the linear features are fused is balanced by a constant coefficient λ. Verification shows that the moving direct linear transformation algorithm with this balance factor has better alignment capability on low-texture images, effectively solving the problem of misalignment of low-texture images caused by insufficient SIFT feature points.

Description

Image stitching method and system based on fusion of linear features and key point features
Technical Field
The invention belongs to the fields of image stitching and pattern recognition, and particularly relates to an image stitching method and system based on the fusion of linear features and key point features.
Background
Image stitching is the process of combining a plurality of images into a larger image with a wider field of view; the quality of the stitching is judged from three aspects: image alignment accuracy, degree of shape distortion, and overall naturalness.
To improve the alignment capability and flexibility of the model, a mesh-deformation-based method (the APAP algorithm) was proposed: the target image is divided into an R × C grid, and the pixels in each cell independently compute a corresponding local homography matrix, which makes the stitching finer-grained and greatly improves the alignment accuracy. To further improve the alignment accuracy of the stitched image and reduce perspective distortion in the non-overlapping region, a shape-preserving half-projective scheme (the SPHP algorithm) was proposed, which spatially combines the projective transformation with a similarity transformation and smoothly extrapolates the projective transformation of the overlapping region into the similarity transformation of the non-overlapping region, maintaining the alignment accuracy of the overlapping region while significantly reducing image distortion in the non-overlapping region. However, because the similarity transformation is derived from the global homography matrix, when the two images contain multiple feature planes the rotation angle of the global similarity transformation is not optimal, so the stitched image rotates unnaturally in the overlapping region. To avoid extrapolating the global similarity transformation directly from the global homography matrix, the AANAP algorithm was developed: it computes the similarity transformations of the image's several feature planes, takes the one with the smallest rotation angle as the optimal global similarity transformation, linearizes the local homography transformations of the non-overlapping region with a matrix linearization method, and combines the local homography transformation with the global similarity transformation spatially using a distance-based weight update strategy, which effectively improves the alignment accuracy of the overlapping region while reducing perspective distortion in the non-overlapping region.
However, because the AANAP algorithm relies only on SIFT feature points for alignment, its alignment accuracy and the naturalness of its deformation depend heavily on the number of feature points and the registration accuracy. When a feature plane of an image contains little texture, the number of feature points drops sharply, so the alignment capability of the local homography model computed by the moving direct linear transformation (Moving DLT) algorithm in the image overlapping region becomes weak, the linear structures of the overlapping region are visibly misaligned, the non-overlapping region is noticeably distorted in shape, and the overall stitching quality suffers. The AANAP algorithm therefore has limited ability to stitch certain weak-texture images, and its robustness and generality for image stitching are insufficient.
Disclosure of Invention
In view of the above defects and improvement needs of the prior art, namely the poor stitching quality and poor alignment accuracy of the original AANAP algorithm on low-texture images and on weak-texture regions of natural images, the invention provides an image stitching method and system based on the fusion of linear features and key point features. The aim is to enhance the feature extraction capability of the AANAP algorithm, improve its alignment capability in weak-texture regions of the image, further reduce the perspective distortion and deformation of the non-overlapping regions of the stitched image, improve the overall stitching quality, and thereby strengthen the robustness and generality of the algorithm when stitching low-texture images and natural images.
To achieve the above object, according to a first aspect of the present invention, there is provided an image stitching method based on fusion of a straight line feature and a key point feature, the method comprising the steps of:
S1, calculating key point features and linear features of a reference image, calculating key point features and linear features of a target image, and matching to obtain key point feature pairs and linear feature pairs of the reference image and the target image;
S2, dividing the target image into a plurality of grids, calculating for each grid point a local homography model coefficient matrix H_l that fuses the linear features and the key point features, and using H_l to guide the pixel points of the target image to project to the reference coordinate system, realizing image pre-stitching;
S3, linearizing the local homography transformations of the grid points in the non-overlapping region of the pre-stitched image at selected anchor points on the reference image to obtain a linearized local homography model H_t;
S4, grouping the feature point pairs, calculating the similarity transformation of each group of feature point pairs, and taking the average similarity transformation as the optimal global similarity transformation S;
S5, linearly integrating the global similarity transformation S with the local homography transformation H_t to obtain an integrated local transformation model H_s;
S6, using H_l to adjust and correct the reference image in the reference coordinate system to obtain the final local homography transformation model H_s' = H_l^(-1) H_s;
S7, projecting the target image to the reference coordinate system through the local homography model H_s', realizing the stitching of the target image and the reference image.
Preferably, step S1 is specifically as follows:
S11, calculating the SIFT key point features of the reference image and the target image respectively, detecting the straight-line structures on both images with the LSD straight-line segment detection algorithm, and expressing the features of each straight-line segment with an LBD descriptor;
S12, matching the SIFT key point features of the reference image and the target image, and eliminating the mismatched SIFT key point feature pairs;
S13, matching the linear features using the LBD descriptors to obtain the SIFT key point feature pairs and LSD linear feature pairs of the reference image and the target image.
Has the advantages that: according to the method, the SIFT algorithm is used for extracting the feature points on the reference image and the target image, the LSD line detection algorithm is used for extracting the feature straight line segments on the reference image and the target image, and the line features of the two images are matched through the LBD descriptor.
Preferably, in step S12, a global matching method based on Euclidean distance is used to match the SIFT key point features of the reference image and the target image, and the random sample consensus (RANSAC) algorithm is used to eliminate the mismatched SIFT key point feature pairs.
Has the advantages that: according to the method, SIFT mismatching feature points are removed by using a random sample consensus (RANSAC), and the RANSAC can further remove mismatching SIFT feature points generated by global matching by minimizing a reprojection error among the SIFT feature points, so that the subsequent registration precision can be improved.
Preferably, in step S13, the LBD descriptors of the straight-line segments of the reference image and the target image are matched using the nearest neighbor distance ratio method.
Has the advantages that: the invention carries out registration of the characteristic straight-line segments by using a nearest neighbor distance ratio method (NNDR algorithm), and because the nearest neighbor distance ratio method accurately screens the real matched straight-line segments by calculating the ratio of the nearest distance from the characteristic straight-line segments to the rest straight-line segments and the distance of the next nearest neighbor, mismatching of the characteristic straight-line can be reduced, and the subsequent registration precision is improved.
Preferably, the calculation of the local homography model H_l that fuses the linear features and the key point features at each grid point comprises:
linearly fusing the coefficient matrix constructed from the SIFT feature matching point pairs of the reference image and the target image with the coefficient matrix constructed from the LBD straight-line segment matching pairs, the local homography estimation fusing the key point features and the straight-line features at each grid point being
[ W_p A ; λ W_l B ] h = 0,
and estimating the grid-based local homography transformation matrix by the moving direct linear transformation method:
h = argmin_h || diag(W_p, λ W_l) [A; B] h ||^2, subject to ||h||_2 = 1,
wherein h = (h1 h2 h3 h4 h5 h6 h7 h8 h9)^T and ||h||_2 = 1, A denotes the coefficient matrix composed of the homogeneous coordinates of all SIFT feature point pairs, B denotes the coefficient matrix composed of the homogeneous coordinates of all straight-line segment matching pairs, W_p denotes the distance weight coefficients from the grid point to all SIFT feature points on the target image, W_l denotes the distance weight coefficients from the grid point on the target image to all feature straight-line segments, and λ is the weight balance coefficient.
Has the advantages that: the invention provides an improved Moving DLT algorithm, which integrates Gaussian weight W from a grid point on a target image to a distance of a characteristic straight line segment by fusing LBD straight line segment characteristic and key point characteristiclAnd Gaussian weight W of distance from grid point to feature point of target imagepAnd obtaining a final weight coefficient W of the Moving direct linear transformation (Moving DLT). Since the splicing accuracy of the original AANAP algorithm completely depends on the detection number and matching accuracy of the SIFT feature points, when the original image overlapping region is a low-texture image, the SIFT feature points sharply decrease, and at this time, the matching accuracy of the AANAP algorithm becomes poor. The invention introduces the linear characteristic and leads the moving direct conversion algorithm which fuses the linear characteristic and the key point characteristic
Figure BDA0002857254160000054
Figure BDA0002857254160000052
The local homographic transformation of each grid point is estimated, and is guided by the straight line characteristic and the key point characteristic together. Meanwhile, in order to prevent the alignment capability of the linear features weakening the features of the key points, the fusion degree of the linear features is balanced through the lambda constant coefficient, and the problem of mis-alignment of the low-texture image caused by insufficient SIFT feature points can be effectively solvedTo give a title.
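A minimal numerical sketch of the fused estimation above (an illustration, not the patented implementation): assuming the point coefficient matrix A (2N x 9), the line coefficient matrix B (2K x 9), the per-grid-point weight vectors w_p and w_l, and the balance coefficient λ have already been built as defined in the text, the local homography of one grid point is the right singular vector of the weighted, stacked system associated with the smallest singular value.

    import numpy as np

    def fused_moving_dlt(A, B, w_p, w_l, lam):
        # Solve  h = argmin || [W_p A ; lam * W_l B] h ||^2  subject to ||h||_2 = 1
        # A  : (2N, 9) coefficient matrix from the SIFT point pairs
        # B  : (2K, 9) coefficient matrix from the matched line segments
        # w_p: (2N,)   Gaussian distance weights, grid point -> feature points
        # w_l: (2K,)   Gaussian distance weights, grid point -> feature segments
        # lam: scalar balance coefficient of the line features
        C = np.vstack([w_p[:, None] * A, lam * w_l[:, None] * B])
        _, _, Vt = np.linalg.svd(C)
        h = Vt[-1]                 # right singular vector of the smallest singular value
        return h.reshape(3, 3)     # local homography H_l of this grid point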
Preferably, in step S3, the linearization method in AANAP is used, the linearization formula being
h_L(q) = Σ_{i=1}^{R} α_i ( h(p_i) + J_h(p_i)(q - p_i) ),
wherein h_L(q) represents the linearized homography of each grid point, R represents the total number of anchor points in the selected anchor point set P, α_i represents the weight coefficient, q represents any grid point in the non-overlapping region of the pre-stitched image, p_i denotes the i-th selected anchor point, h(p_i) represents the homography at the selected anchor point, and J_h(p_i) represents the Jacobian matrix of the homography at the anchor point p_i;
h_L(q) and H_l are then integrated to obtain the linearized model H_t, the integration formula being
H_t = δ H_l + (1 - δ) h_L,
wherein δ is the t-distribution weight coefficient of the distance from the grid point on the target image to the anchor points.
Has the advantages that: the invention uses the linearization method in AANAP, linearizes the grid points of the non-overlapping area of the target image at the position of the selected anchor point set P on the reference image, groups the reference image and the target image feature points through RANSAC algorithm, calculates the similar transformation corresponding to each group of feature points, and uses all similar transformations to obtain the average similar transformation as the global similar transformation S of the image to be spliced.
Preferably, in step S4, the SIFT feature points on the target image and the reference image are grouped by a random sampling consistency algorithm to form a plurality of feature planes;
each group of SIFT feature points estimates local similarity transformation, and the similarity transformation formula is as follows:
(x', y', 1)^T = [β1 -β2 β3; β2 β1 β4; 0 0 1] (x, y, 1)^T,
the similarity transformation parameter estimation formula using each pair of feature points being
[x -y 1 0; y x 0 1] (β1, β2, β3, β4)^T = (x', y')^T,
with
β1 = s cosθ
β2 = s sinθ
β3 = tx
β4 = ty
θ = tan⁻¹(β2/β1),
wherein (x', y', 1) and (x, y, 1) respectively correspond to the homogeneous coordinates of SIFT feature points on the target image and the reference image, tx and ty respectively represent the x-axis and y-axis translation amounts of the similarity transformation, s represents the scale factor, and θ represents the rotation angle of the similarity transformation;
the average similarity transformation of all local similarity transformations is taken as the optimal global similarity transformation S.
Has the advantages that: the method uses the global average similarity transformation as the global similarity transformation, and can minimize the deviation of the rotation angle of the similarity transformation caused by insufficient feature points due to the average similarity transformation, so that the alignment of the images in the weak texture area is more accurate.
Preferably, in step S5, the global similarity transformation S and the linearized model H_t are combined in a weighted manner to obtain the final local homography transformation model H_s, the combination being
H_s^i = μ_h H_t^i + μ_s S,
wherein H_s^i is the integrated local homography matrix of grid point i, H_t^i represents the linearized local homography model of grid point i, and μ_h, μ_s are weight coefficients.
Has the advantages that: the invention transforms global similarity S and local homography H in a weighting waysThe linear combination increases the alignment capability of the image in the non-overlapping area on one hand, obviously reduces the perspective distortion and deformation of the non-overlapping area on the other hand, and improves the naturalness of the spliced image.
Preferably, the method further comprises:
S8, fusing the stitched image in the overlapping region through a gradual-in gradual-out fusion algorithm to obtain the final stitched image.
Has the advantages that: the invention fuses the spliced image in the overlapping area by a gradual-in gradual-out fusion method, which is a linear weighting fusion method of pixel values, namely, the pixel value of each point of the image overlapping area is obtained by carrying out coefficient weighting summation according to the distance ratio of the pixel of the reference image and the target image at the point to the left and right boundaries of the spliced image overlapping area.
To achieve the above object, according to a second aspect of the present invention, there is provided an image stitching system based on fusion of a straight line feature and a key point feature, including: a computer-readable storage medium and a processor;
the computer-readable storage medium is used for storing executable instructions;
the processor is configured to read executable instructions stored in the computer-readable storage medium, and execute the image stitching method based on the fusion of the straight-line feature and the key point feature according to the first aspect.
Generally, by the above technical solution conceived by the present invention, the following beneficial effects can be obtained:
to as much as possible in order toThe invention introduces linear features, namely simultaneously detecting the linear features and key point features on the image, enriches the feature extraction of the image to be spliced, and introduces a moving direct conversion algorithm of the linear features and the key point features
Figure BDA0002857254160000074
Figure BDA0002857254160000081
The formula is used for estimating local homography transformation of each grid point, the local homography transformation of each grid is guided by the linear features and the key point features together, in order to prevent the linear features from weakening the alignment capability of the key point features, the fusion degree of the linear features is balanced by the lambda constant coefficient, and the verification proves that the moving direct linear transformation algorithm with the balance factors has better low-texture image alignment capability, so that the problem of mis-alignment of the low-texture image caused by insufficient SIFT feature points is effectively solved.
Drawings
Fig. 1 is a flowchart of an embodiment of an image stitching method based on a straight line feature and a key point feature according to the present invention;
FIG. 2 is a feature detection diagram of a fusion of straight-line features and keypoint features provided by an embodiment of the invention;
fig. 3 is a schematic diagram of a distance calculation manner from a grid point to a feature straight line on a target image according to an embodiment of the present invention;
fig. 4 is a flowchart of RANSAC iterative packet SIFT feature points according to an embodiment of the present invention;
FIGS. 5(a) and 5(b) are graphs comparing the final stitching effect of the method provided by the embodiment of the present invention on two weak texture images with the original AANAP algorithm, respectively;
fig. 5(c) and 5(d) are graphs comparing the final stitching effect on the natural image by the method provided by the embodiment of the present invention with the original AANAP algorithm.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in fig. 1, the present invention provides an image stitching method based on the fusion of a straight line feature and a key point feature, including:
S1, extracting the feature points of the reference image and the target image with the SIFT algorithm, and normalizing the coordinates of the feature points on the images so that the average distance from the feature points to the image origin is √2; performing coarse matching of the feature points with a nearest-neighbor matching method based on Euclidean distance and eliminating mismatched feature points with the RANSAC algorithm; detecting the feature straight lines on both images with the LSD algorithm, matching the feature straight lines of the two images using LBD descriptors, normalizing the endpoint coordinates of the feature straight-line segments, and finding the best matching pairs of corresponding straight-line segments with the nearest neighbor distance ratio (NNDR) method. The detection results of the feature straight-line segments and the key point features on the target image and the reference image are shown in FIG. 2, where the left side is the detection result of the two kinds of features on the reference image and the right side is the detection result on the target image; both kinds of features are detected on both images and matched separately. FIG. 2 shows the detection results of the two kinds of features in the overlapping region of the two images (only the overlapping region has matched feature points and feature straight lines).
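The normalization mentioned above (mean distance √2 from the origin) is the usual conditioning step before DLT; a small sketch, illustrative and not quoted from the patent, is:

    import numpy as np

    def normalize_points(pts):
        # translate and scale 2-D points so that their centroid is the origin and
        # the mean distance to the origin is sqrt(2); returns the points and the
        # 3x3 transform T such that  p_norm = T @ p_homogeneous
        pts = np.asarray(pts, dtype=np.float64)
        centroid = pts.mean(axis=0)
        scale = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - centroid, axis=1))
        T = np.array([[scale, 0.0, -scale * centroid[0]],
                      [0.0, scale, -scale * centroid[1]],
                      [0.0, 0.0, 1.0]])
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])
        return (T @ pts_h.T).T[:, :2], T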
S2, constructing the weight coefficient matrix based on the fusion of the LBD straight-line features and the SIFT key point features: the target image is divided into R × C unit grids, and the grid-based local homography H_l is calculated with the moving direct linear transformation (Moving DLT) algorithm.
First, the weight coefficient matrix for homography estimation based on the SIFT feature points is calculated. Let p' = (x', y', 1)^T and p = (x, y, 1)^T respectively correspond to the homogeneous coordinates of SIFT feature points on the target image I' and the reference image I; according to the projection relation between I' and I, the global homography matrix H_g is estimated with the direct linear transformation (DLT) algorithm.
Let
H_g = [h1 h2 h3; h4 h5 h6; h7 h8 h9],
then the homography estimation equation based on the SIFT feature points is
p' × (H_g p) = 0, i.e., A_i h = 0,
wherein h = (h1 h2 h3 h4 h5 h6 h7 h8 h9)^T satisfies ||h||_2 = 1. Each row of the coefficient matrix is a 1 × 9 vector, and because the coordinates of the two-dimensional feature points are homogeneous, only the first two rows are linearly independent, so each pair of SIFT matching points generates a 2 × 9 block A_i. The N pairs of SIFT matching points of the two images then form a coefficient matrix A of size 2N × 9 (the A_i stacked vertically), and the estimation formula for h is:
h = argmin_h || A h ||^2, subject to ||h||_2 = 1.
The invention divides the target image into 100 × 100 grids and calculates, for each grid point p_*, the Gaussian weight of its distance to the i-th matching point p_i on the target image:
w_*^i = max( exp(-||p_* - p_i||^2 / σ^2), τ ),
wherein the Gaussian weight constant σ is 12.5 and the minimum value τ of the Gaussian weight is 0.025. The weight coefficient matrix formed from the N pairs of SIFT feature points is:
W_p = diag( w_*^1, w_*^1, w_*^2, w_*^2, ..., w_*^N, w_*^N ).
This grid-point weight matrix is then multiplied with the coefficient matrix A, and the local homography transformation is estimated with it as follows:
h_* = argmin_h || W_p A h ||^2, subject to ||h||_2 = 1,
wherein A_i is the coefficient matrix formed by each pair of matching points,
A_i = [ 0_{1×3}  -p_i^T  y'_i p_i^T ;  p_i^T  0_{1×3}  -x'_i p_i^T ],
and the matrix A, formed by vertically stacking the A_i of the N feature point pairs, is the 2N × 9 estimation coefficient matrix. The set of h_* obtained over all grid points is the local homography model of all grid points estimated from the SIFT feature points.
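For illustration, the per-pair 2 x 9 block and the grid-point Gaussian weights described above can be built as follows (a sketch; the sign convention of A_i and the duplication of each weight across the two rows of its block are standard choices, not quoted from the patent):

    import numpy as np

    def point_pair_block(p, p_prime):
        # 2 x 9 DLT coefficient block A_i for one match p = (x, y) -> p' = (x', y')
        x, y = p
        xp, yp = p_prime
        return np.array([[0, 0, 0, -x, -y, -1,  yp * x,  yp * y,  yp],
                         [x, y, 1,  0,  0,  0, -xp * x, -xp * y, -xp]], dtype=np.float64)

    def gaussian_point_weights(grid_pt, feat_pts, sigma=12.5, tau=0.025):
        # w_i = max(exp(-||p_* - p_i||^2 / sigma^2), tau), one weight per feature point,
        # repeated so that both rows of each A_i share the same weight
        d2 = np.sum((np.asarray(feat_pts) - np.asarray(grid_pt)) ** 2, axis=1)
        w = np.maximum(np.exp(-d2 / sigma ** 2), tau)
        return np.repeat(w, 2)      # length 2N, matching the 2N rows of A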
Next, the weight coefficient matrix based on the straight-line segment features is calculated. The two endpoint coordinates p_0, p_1 of each matched straight-line segment are obtained through the LBD algorithm, giving the homogeneous coordinates l = (x, y, 1) of the normalized feature straight line on the reference image and l' = (u, v, 1) of the corresponding feature straight line on the target image. After homography transformation, the two endpoints of the feature straight line on the reference image must fall on the corresponding feature straight line of the target image, i.e. l'_j^T H p_k = 0 for k = 0, 1; each of these constraints is linear in h, so the coefficient matrix corresponding to the straight-line segment (with l'_j = (u_j, v_j, 1)) is constructed as
B_j = [ u_j p_0^T  v_j p_0^T  p_0^T ;  u_j p_1^T  v_j p_1^T  p_1^T ].
From the K matched straight-line segment pairs, the following equation can then be constructed:
B h = 0,
wherein K is the number of matched straight-line segment pairs, B_j is the 2 × 9 coefficient matrix formed by each pair of feature line segments, and B is the 2K × 9 linear feature coefficient matrix obtained by vertically stacking the B_j. Each coefficient matrix B_j is then weighted according to the distance from each grid point on the target image to the corresponding feature straight-line segment, the distance from a grid point to a straight-line segment being calculated as
(a)  D_l(p_*, l_j) = min( ||p_* - p_0^j||, ||p_* - p_1^j|| )    or    (b)  D_l(p_*, l_j) = |a_j x_* + b_j y_* + c_j| / sqrt(a_j^2 + b_j^2),
wherein p_* represents the grid point, p_0^j and p_1^j represent the two endpoints of the feature straight-line segment, and (a_j, b_j, c_j) are the homogeneous coordinates of the straight line l_j. FIG. 3 illustrates how the distance from a grid point p_* to a feature straight-line segment is calculated: when the foot of the perpendicular from the grid point does not fall on the segment (points P2 and P3 in the figure), the distance is calculated with equation (a); otherwise it is calculated with equation (b) (point P1 in the figure). From this distance the Gaussian weight W_l^j is calculated as
W_l^j = max( exp(-D_l(p_*, l_j)^2 / σ^2), τ ),
wherein τ ∈ [0, 1] denotes a constant threshold and D_l(p_*, l_j) denotes the shortest distance from the grid point p_* to the straight line l_j.
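A small sketch of the point-to-segment distance and the resulting line weight (illustrative; reusing the σ and τ values of the point weights here is an assumption, the patent only states τ ∈ [0, 1]):

    import numpy as np

    def point_to_segment_distance(p, a, b):
        # shortest distance D_l from point p to the segment with endpoints a, b:
        # perpendicular distance if the foot falls on the segment (case (b)),
        # otherwise the distance to the nearer endpoint (case (a))
        p, a, b = (np.asarray(v, dtype=float) for v in (p, a, b))
        ab = b - a
        t = np.dot(p - a, ab) / np.dot(ab, ab)
        if 0.0 <= t <= 1.0:
            return np.linalg.norm(p - (a + t * ab))
        return min(np.linalg.norm(p - a), np.linalg.norm(p - b))

    def line_gaussian_weight(p, a, b, sigma=12.5, tau=0.025):
        # W_l^j = max(exp(-D_l(p, l_j)^2 / sigma^2), tau) for one feature segment
        d = point_to_segment_distance(p, a, b)
        return max(np.exp(-d ** 2 / sigma ** 2), tau)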
The 2K × 2K weight matrix obtained from the K matched line pairs is then:
W_l = diag( W_l^1, W_l^1, W_l^2, W_l^2, ..., W_l^K, W_l^K ).
The linear feature coefficient matrix B is weighted by the Gaussian weights W_l in the form
λ W_l B,
wherein λ is the weight balance constant, whose value is the average projection error of the matched feature point pairs and straight-line matching pairs, calculated as
λ = (E_p + E_l) / (2N + 4K),
wherein E_p and E_l are the projection error of the feature point pairs and the projection error of the feature straight lines, respectively.
The coefficient matrix A of the weighted SIFT feature points and the coefficient matrix B of the weighted LBD straight-line features are then combined as
h_* = argmin_h || W C h ||^2, subject to ||h||_2 = 1,
wherein the weight matrix is W = diag[ W_p, λ W_l ] and the coefficient matrix C is generated by vertically stacking the coefficient matrices A and B, i.e. C = [A; B]. The local homography model H_l fusing the SIFT feature points and the LBD straight-line features can therefore be estimated from this final estimation equation.
Step S3: the local homography transformations of the grid points in the non-overlapping region of the pre-stitched image are linearized at the selected anchor points by the linearization method in AANAP, in order to reduce the distortion caused by the perspective distortion generated after stitching in the non-overlapping region. Specifically, 20 anchor points are uniformly selected on each edge of the reference image, and the local homography at each anchor point p_i is expanded with a Taylor series according to the formula:
h_L(q) = Σ_{i=1}^{R} α_i ( h(p_i) + J_h(p_i)(q - p_i) ),
wherein R = 20, the weight α_i is the distance-based weight coefficient of the grid point q with respect to the anchor point p_i, q is the grid vertex, and J_h(p_i) is the Jacobian matrix of the homography at the anchor point p_i. On the basis of the above formula, the linearized local homography matrix H_t is obtained with the following formula:
H_t = δ H_l + (1 - δ) h_L,
wherein δ represents the projection value of the vector formed by the grid point of the target image and the center point of the reference image onto the vector formed by the center points of the reference image and the target image.
S4: the RANSAC algorithm with reprojection error threshold t_l iteratively groups the matched SIFT feature point pairs, the similarity transformation of each group of feature point pairs is calculated, and the average of all similarity transformations is then taken as the optimal global similarity transformation S, which further reduces the perspective distortion caused by the local homography H_l in the non-overlapping region. The local homography model H_t and the global similarity transformation S are combined by linear weighting to obtain the integrated grid deformation model H_s.
The integration formula is as follows:
H_s^i = η H_t^i + (1 - η) S,
wherein η is the projection value of the vector formed by the center point of the non-overlapping region on the pre-stitched image and the center point of the reference image onto the vector formed by the center points of the reference image and the target image.
Further, fig. 4 shows a step of iteratively selecting an optimal global similarity transformation S by using a RANSAC algorithm, and the specific implementation method is as follows:
On the initially determined feature points, the RANSAC algorithm is run with a reprojection error threshold t_l = 0.001; the points with reprojection error e ≤ t_l are taken as a set of feature inliers, and the points with e > t_l are taken as outliers. A similarity transformation can be estimated for each set of feature inliers:
(x', y', 1)^T = [β1 -β2 β3; β2 β1 β4; 0 0 1] (x, y, 1)^T,
the local similarity transformation being estimated from each pair of feature points through
[x -y 1 0; y x 0 1] (β1, β2, β3, β4)^T = (x', y')^T,
wherein β1 = s cosθ, β2 = s sinθ, β3 = tx, β4 = ty, the rotation angle of the similarity transformation is θ = tan⁻¹(β2/β1), and s represents the scale factor; each group of feature points thus yields the optimal similarity transformation of its feature point pairs.
The outliers of the previous step are then taken as the initial points of the next RANSAC iteration and the previous step is repeated, until the number of outliers separated under the reprojection error threshold t_l is less than 30, at which point the iteration exits. The similarity transformations obtained from the iterative calculation are averaged, and this average similarity transformation is taken as the optimal global similarity transformation.
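The iterative grouping can be sketched as below (illustrative; the two-point minimal sample, the trial count and the group cap are assumptions, while the threshold t_l = 0.001 and the fewer-than-30-outliers stopping rule follow the text; estimate_similarity() is the sketch given earlier):

    import numpy as np

    def group_and_average_similarity(pts_ref, pts_tgt, t_l=0.001, min_outliers=30, max_groups=10):
        # iteratively split the matches into feature planes with RANSAC on a similarity
        # model, estimate one similarity per group, and average them into S
        S_list = []
        remaining = np.arange(len(pts_ref))
        for _ in range(max_groups):
            if len(remaining) < min_outliers:
                break
            best = None
            for _ in range(500):                               # RANSAC trials
                idx = np.random.choice(remaining, 2, replace=False)
                S, _, _ = estimate_similarity(pts_ref[idx], pts_tgt[idx])
                proj = (S @ np.c_[pts_ref[remaining], np.ones(len(remaining))].T).T[:, :2]
                err = np.linalg.norm(proj - pts_tgt[remaining], axis=1)
                inliers = remaining[err <= t_l]
                if best is None or len(inliers) > len(best):
                    best = inliers
            if best is None or len(best) < 2:
                break
            S, _, _ = estimate_similarity(pts_ref[best], pts_tgt[best])
            S_list.append(S)
            remaining = np.setdiff1d(remaining, best)          # outliers seed the next group
        return np.mean(S_list, axis=0) if S_list else np.eye(3)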
Further, according to the above process, the global similarity transformation S and the linearized local homography matrix H_t^i of each grid point are obtained; the two
are combined linearly to obtain an image stitching model that aligns the low-texture regions of the stitched image accurately and significantly reduces perspective distortion and deformation in the non-overlapping region. The formula of the linear combination is
H_s^i = μ H_t^i + (1 - μ) S,
wherein μ is the projection value of the vector formed by the center point of the reference image and the center point of the non-overlapping region of the target image onto the vector formed by the center points of the target image and the reference image.
Because the pre-stitched result image becomes misaligned in the overlapping region when the global similarity transformation S applies an affine transformation to the target image, the pre-transformation matrix H_l is used to adjust and correct the reference image in the reference coordinate system to obtain the final local homography transformation model, the correction formula being:
H_s' = H_l^(-1) H_s
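Finally, projecting the target image through the per-cell models can be sketched as a forward mapping of every target pixel through the final local model of its mesh cell (illustrative only; a practical implementation resamples with an inverse map, e.g. cv2.remap, to avoid holes):

    import numpy as np

    def project_target_pixels(tgt_shape, cell_H, cell_size):
        # forward-project every target-image pixel through the final local model
        # H_s' of the mesh cell that contains it; returns canvas coordinates (x, y)
        h, w = tgt_shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        out = np.zeros((h, w, 2), np.float32)
        for r in range(len(cell_H)):
            for c in range(len(cell_H[0])):
                sl = (slice(r * cell_size, min((r + 1) * cell_size, h)),
                      slice(c * cell_size, min((c + 1) * cell_size, w)))
                pts = np.stack([xs[sl].ravel(), ys[sl].ravel(), np.ones(xs[sl].size)])
                q = cell_H[r][c] @ pts
                out[sl] = np.stack([q[0] / q[2], q[1] / q[2]], axis=1).reshape(*xs[sl].shape, 2)
        return out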
furthermore, a gradual-in and gradual-out method is used for fusion in the overlapped area of the spliced images, so that ghost images in the overlapped area of the spliced images and splicing traces caused by exposure difference are eliminated.
In practical application, compared with the AANAP algorithm, the method can more effectively splice the low-texture image and the natural image, improve the alignment precision of the spliced image in an overlapping area, also can obviously reduce the perspective distortion and deformation of the spliced image in a non-overlapping area, and has better robustness and universality. FIGS. 5(a) and 5(b) show the comparison of the stitching effect of the algorithm of the present invention on a weak texture image and the original AANAP algorithm; fig. 5(c) and 5(d) are comparison results of the final stitching effect of the method provided by the embodiment of the invention on the natural image and the original AANAP algorithm, and it can be seen that the stitching effect of the method on the natural image and the weak texture image is better than that of the original AANAP algorithm.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. An image stitching method based on the fusion of linear features and key point features is characterized by comprising the following steps:
S1, calculating key point features and linear features of a reference image, calculating key point features and linear features of a target image, and matching to obtain key point feature pairs and linear feature pairs of the reference image and the target image;
S2, dividing the target image into a plurality of grids, calculating for each grid point a local homography model coefficient matrix H_l that fuses the linear features and the key point features, and using H_l to guide the pixel points of the target image to project to the reference coordinate system, realizing image pre-stitching;
S3, linearizing the local homography transformations of the grid points in the non-overlapping region of the pre-stitched image at selected anchor points on the reference image to obtain a linearized local homography model H_t;
S4, grouping the feature point pairs, calculating the similarity transformation of each group of feature point pairs, and taking the average similarity transformation as the optimal global similarity transformation S;
S5, linearly integrating the global similarity transformation S with the local homography transformation H_t to obtain an integrated local transformation model H_s;
S6, using H_l to adjust and correct the reference image in the reference coordinate system to obtain the final local homography transformation model H_s' = H_l^(-1) H_s;
S7, projecting the target image to the reference coordinate system through the local homography model H_s', realizing the stitching of the target image and the reference image.
2. The method of claim 1, wherein step S1 is specifically as follows:
S11, calculating the SIFT key point features of the reference image and the target image respectively, detecting the straight-line structures on both images with the LSD straight-line segment detection algorithm, and expressing the features of each straight-line segment with an LBD descriptor;
S12, matching the SIFT key point features of the reference image and the target image, and eliminating the mismatched SIFT key point feature pairs;
S13, matching the linear features using the LBD descriptors to obtain the SIFT key point feature pairs and LSD linear feature pairs of the reference image and the target image.
3. The method of claim 2, wherein in step S12, the SIFT key point features of the reference image and the target image are matched using a global matching method based on Euclidean distance, and mismatched SIFT key point feature pairs are eliminated using a random sample consensus algorithm.
4. The method as claimed in claim 2 or 3, wherein in step S13, the LBD descriptors of the straight-line segments of the reference image and the target image are matched using the nearest neighbor distance ratio method.
5. The method according to any of claims 2 to 4, wherein the calculation of the local homography model H_l fusing straight-line features and key point features at each grid point comprises:
linearly fusing the coefficient matrix constructed from the SIFT feature matching point pairs of the reference image and the target image with the coefficient matrix constructed from the LBD straight-line segment matching pairs, the local homography estimation fusing the key point features and the straight-line features at each grid point being
[ W_p A ; λ W_l B ] h = 0,
and estimating the grid-based local homography transformation matrix by the moving direct linear transformation method:
h = argmin_h || diag(W_p, λ W_l) [A; B] h ||^2, subject to ||h||_2 = 1,
wherein h = (h1 h2 h3 h4 h5 h6 h7 h8 h9)^T and ||h||_2 = 1, A denotes the coefficient matrix composed of the homogeneous coordinates of all SIFT feature point pairs, B denotes the coefficient matrix composed of the homogeneous coordinates of all straight-line segment matching pairs, W_p denotes the distance weight coefficients from the grid point to all SIFT feature points on the target image, W_l denotes the distance weight coefficients from the grid point on the target image to all feature straight-line segments, and λ is the weight balance coefficient.
6. The method according to any of claims 2 to 5, wherein in step S3 the linearization method in AANAP is used, the linearization formula being
h_L(q) = Σ_{i=1}^{R} α_i ( h(p_i) + J_h(p_i)(q - p_i) ),
wherein h_L(q) represents the linearized homography of each grid point, R represents the total number of anchor points in the selected anchor point set P, α_i represents the weight coefficient, q represents any grid point in the non-overlapping region of the pre-stitched image, p_i denotes the i-th selected anchor point, h(p_i) represents the homography at the selected anchor point, and J_h(p_i) represents the Jacobian matrix of the homography at the anchor point p_i;
h_L(q) and H_l are then integrated to obtain the linearized model H_t, the integration formula being
H_t = δ H_l + (1 - δ) h_L,
wherein δ is the t-distribution weight coefficient of the distance from the grid point on the target image to the anchor points.
7. The method according to any one of claims 2 to 6, wherein in step S4, SIFT feature points on the target image and the reference image are grouped by a random sampling consistency algorithm to form a plurality of feature planes;
each group of SIFT feature points estimates local similarity transformation, and the similarity transformation formula is as follows:
(x', y', 1)^T = [β1 -β2 β3; β2 β1 β4; 0 0 1] (x, y, 1)^T,
the similarity transformation parameter estimation formula using each pair of feature points being
[x -y 1 0; y x 0 1] (β1, β2, β3, β4)^T = (x', y')^T,
with
β1 = s cosθ
β2 = s sinθ
β3 = tx
β4 = ty
θ = tan⁻¹(β2/β1),
wherein (x', y', 1) and (x, y, 1) respectively correspond to the homogeneous coordinates of SIFT feature points on the target image and the reference image, tx and ty respectively represent the x-axis translation amount and the y-axis translation amount in the similarity transformation, s represents the scale factor, θ represents the rotation angle of the similarity transformation, and β1, β2, β3, β4 are intermediate variables;
the average similarity transformation of all local similarity transformations is taken as the optimal global similarity transformation S.
8. The method according to any of claims 1 to 7, characterized in that in step S5, the global similarity transformation S and the linearized model H_t are combined in a weighted manner to obtain the final local homography transformation model H_s, the combination being
H_s^i = μ_h H_t^i + μ_s S,
wherein H_s^i is the integrated local homography matrix of grid point i, H_t^i represents the linearized local homography model of grid point i, and μ_h, μ_s are weight coefficients.
9. The method of any of claims 1 to 8, further comprising:
S8, fusing the stitched image in the overlapping region through a gradual-in gradual-out fusion algorithm to obtain the final stitched image.
10. An image stitching system based on the fusion of straight line features and key point features is characterized by comprising the following steps: a computer-readable storage medium and a processor;
the computer-readable storage medium is used for storing executable instructions;
the processor is used for reading executable instructions stored in the computer-readable storage medium and executing the image stitching method based on the fusion of the straight line feature and the key point feature according to any one of claims 1 to 9.
CN202011548924.7A 2020-12-24 2020-12-24 Image stitching method and system based on fusion of linear features and key point features Active CN112529019B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011548924.7A CN112529019B (en) 2020-12-24 2020-12-24 Image stitching method and system based on fusion of linear features and key point features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011548924.7A CN112529019B (en) 2020-12-24 2020-12-24 Image stitching method and system based on fusion of linear features and key point features

Publications (2)

Publication Number Publication Date
CN112529019A true CN112529019A (en) 2021-03-19
CN112529019B CN112529019B (en) 2024-02-09

Family

ID=74976213

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011548924.7A Active CN112529019B (en) 2020-12-24 2020-12-24 Image stitching method and system based on fusion of linear features and key point features

Country Status (1)

Country Link
CN (1) CN112529019B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113066010A (en) * 2021-04-06 2021-07-02 无锡安科迪智能技术有限公司 Secondary adjustment method and device for panoramic stitching image, electronic equipment and storage medium
CN113205457A (en) * 2021-05-11 2021-08-03 华中科技大学 Microscopic image splicing method and system
CN113253968A (en) * 2021-06-01 2021-08-13 卡莱特云科技股份有限公司 Abnormal slice image judgment method and device for special-shaped LED display screen
CN114693562A (en) * 2022-04-15 2022-07-01 黄淮学院 Image enhancement method based on artificial intelligence
CN114708439A (en) * 2022-03-22 2022-07-05 重庆大学 Improved EDLines linear extraction method based on PROSAC and screening combination
CN114708439B (en) * 2022-03-22 2024-05-24 重庆大学 PROSAC and screening combination-based improved EDLines linear extraction method


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160358355A1 (en) * 2015-06-05 2016-12-08 International Business Machines Corporation System and method for perspective preserving stitching and summarizing views
CN109658370A (en) * 2018-11-29 2019-04-19 天津大学 Image split-joint method based on mixing transformation
CN109961398A (en) * 2019-02-18 2019-07-02 鲁能新能源(集团)有限公司 Fan blade image segmentation and grid optimization joining method
CN111899164A (en) * 2020-06-01 2020-11-06 东南大学 Image splicing method for multi-focal-zone scene

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
宋佳乾; 汪西原: "Image stitching algorithm based on improved SIFT feature point matching" (基于改进SIFT特征点匹配的图像拼接算法), Computer Measurement & Control (计算机测量与控制), no. 02, pages 182-185 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113066010A (en) * 2021-04-06 2021-07-02 无锡安科迪智能技术有限公司 Secondary adjustment method and device for panoramic stitching image, electronic equipment and storage medium
CN113066010B (en) * 2021-04-06 2022-11-15 无锡安科迪智能技术有限公司 Secondary adjustment method and device for panoramic stitching image, electronic equipment and storage medium
CN113205457A (en) * 2021-05-11 2021-08-03 华中科技大学 Microscopic image splicing method and system
CN113253968A (en) * 2021-06-01 2021-08-13 卡莱特云科技股份有限公司 Abnormal slice image judgment method and device for special-shaped LED display screen
CN113253968B (en) * 2021-06-01 2021-11-02 卡莱特云科技股份有限公司 Abnormal slice image judgment method and device for special-shaped LED display screen
CN114708439A (en) * 2022-03-22 2022-07-05 重庆大学 Improved EDLines linear extraction method based on PROSAC and screening combination
CN114708439B (en) * 2022-03-22 2024-05-24 重庆大学 PROSAC and screening combination-based improved EDLines linear extraction method
CN114693562A (en) * 2022-04-15 2022-07-01 黄淮学院 Image enhancement method based on artificial intelligence
CN114693562B (en) * 2022-04-15 2022-11-25 黄淮学院 Image enhancement method based on artificial intelligence

Also Published As

Publication number Publication date
CN112529019B (en) 2024-02-09

Similar Documents

Publication Publication Date Title
CN112529019A (en) Image splicing method and system based on fusion of linear features and key point features
CN109544447B (en) Image splicing method and device and storage medium
CN110211043B (en) Registration method based on grid optimization for panoramic image stitching
CN109272570B (en) Space point three-dimensional coordinate solving method based on stereoscopic vision mathematical model
Moulon et al. Adaptive structure from motion with a contrario model estimation
CN110111250B (en) Robust automatic panoramic unmanned aerial vehicle image splicing method and device
US8019703B2 (en) Bayesian approach for sensor super-resolution
CN110992263B (en) Image stitching method and system
CN110349086B (en) Image splicing method under non-concentric imaging condition
WO2007015374A2 (en) Image processing apparatus and image processing program
CN108447022B (en) Moving target joining method based on single fixing camera image sequence
CN110223222B (en) Image stitching method, image stitching device, and computer-readable storage medium
CN108759788B (en) Unmanned aerial vehicle image positioning and attitude determining method and unmanned aerial vehicle
CN109859137B (en) Wide-angle camera irregular distortion global correction method
CN110084743B (en) Image splicing and positioning method based on multi-flight-zone initial flight path constraint
CN107492080B (en) Calibration-free convenient monocular head image radial distortion correction method
CN110580715B (en) Image alignment method based on illumination constraint and grid deformation
CN109064392A (en) Determine the method and its system and image conversion method and its system of homography matrix
CN107240077B (en) Visual measurement method based on elliptic conformation deviation iterative correction
Jin A three-point minimal solution for panoramic stitching with lens distortion
KR101938067B1 (en) Method and Apparatus for Stereo Matching of Wide-Angle Images using SIFT Flow
CN116402904A (en) Combined calibration method based on laser radar inter-camera and monocular camera
CN110290395A (en) A kind of image processing method, device and computer readable storage medium
CN111739158B (en) Three-dimensional scene image recovery method
CN112419172A (en) Remote sensing image processing method for correcting and deblurring inclined image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant