CN109767388B - Method for improving image stitching quality based on superpixels, mobile terminal and camera

Method for improving image stitching quality based on superpixels, mobile terminal and camera

Info

Publication number
CN109767388B
CN109767388B (application CN201811625249.6A)
Authority
CN
China
Prior art keywords
image
matrix
points
superpixel
feature
Prior art date
Legal status
Active
Application number
CN201811625249.6A
Other languages
Chinese (zh)
Other versions
CN109767388A (en)
Inventor
宋锐 (Song Rui)
王锦锦 (Wang Jinjin)
李娇娇 (Li Jiaojiao)
贾媛 (Jia Yuan)
任海蕾 (Ren Hailei)
王养利 (Wang Yangli)
李云松 (Li Yunsong)
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority to CN201811625249.6A
Publication of CN109767388A
Application granted
Publication of CN109767388B
Legal status: Active

Landscapes

  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of image stitching, and discloses a method for improving image stitching quality based on superpixels, a mobile terminal and a camera. SIFT feature extraction is performed on the images, and feature matching is performed with the descriptors to obtain a matching pair set; interior points of the matching pair set are screened, and a coefficient matrix A is constructed from the screened interior point set according to the direct linear transformation (DLT) algorithm; the image is divided into superpixels, and the center coordinates of the superpixels are calculated; the weight matrix W of each superpixel is calculated from the superpixel center coordinates and the matching pair set; the weight matrix W of the superpixel is multiplied by the coefficient matrix A, and singular value decomposition of the product yields the mapping transformation matrix of the superpixel; the superpixels are mapped with their mapping matrices to obtain the stitched image. The invention aligns the images to be stitched efficiently, and solves the ghosting and misalignment problems in image stitching.

Description

Method for improving image stitching quality based on superpixels, mobile terminal and camera
Technical Field
The invention belongs to the technical field of image stitching, and particularly relates to a method for improving image stitching quality based on superpixels, a mobile terminal and a camera.
Background
Currently, the state of the art commonly used in the industry is as follows. With the continuous development and iteration of mobile terminals, users' demands on cameras keep rising; capturing a single image is no longer enough, and users want to stitch the captured images together, so image stitching has become a research hotspot in computer vision. Feature-point-based stitching is still the mainstream approach, with AutoStitch as the most representative system; it integrates global homography alignment, spherical projection, bundle adjustment (BA) optimization, multi-band fusion and other algorithms. However, AutoStitch places two requirements on the images to be stitched: first, the overlapping region must be approximately planar; second, the camera optical centers must approximately coincide during capture. Images that violate either requirement exhibit ghosting and misalignment after stitching. To address the insufficient alignment capability of a single homography matrix, Dual-Homography Warping (DHW) was proposed, i.e. aligning the images with two mapping transformation matrices: the scene is divided into a background plane and a foreground plane, and two homography matrices align the two scenes, which handles most real-world stitching cases. Stitching methods were optimized further: Smoothly Varying Affine (SVA) aligns images with multiple affine transformations and can stitch images with larger parallax. As-Projective-As-Possible (APAP) warping then pushed the alignment capability close to its maximum by dividing the image into a dense grid, with one mapping transformation matrix per cell; it aligns real scenes well. However, that alignment capability rests on the dense grid division of the image, and the number of grid cells is on the order of ten thousand according to the experimental data, so the algorithm is unsatisfactory in both time complexity and space complexity.
In summary, the problems of the prior art are as follows: when the overlapping region does not lie on one plane, or a large parallax exists between the images, existing algorithms either cannot align the images well, or align them too inefficiently and process them too slowly.
The difficulty and significance of solving these technical problems lie in ensuring that, during stitching, the images can still be aligned when their overlapping regions do not lie on the same depth plane or a large parallax exists between them, so that the stitched image is free of ghosting, misalignment and similar defects, and that stitching completes quickly, satisfying both high alignment capability and high stitching speed.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a method for improving image stitching quality based on superpixels, a mobile terminal and a camera.
The invention is realized as a method for improving image stitching quality based on superpixels, which comprises the following steps:
(1) performing SIFT feature extraction on the images, and performing feature matching with the extracted feature descriptors to obtain a matching pair set;
(2) screening interior points of the matching pair set with the RANSAC algorithm, and constructing a coefficient matrix A from the screened interior point set according to the direct linear transformation (DLT) algorithm;
(3) dividing the image into superpixels, and calculating the center coordinates of the superpixels;
(4) calculating the weight matrix W of each superpixel from the superpixel center coordinates and the screened interior points;
(5) calculating the mapping transformation matrix H of each superpixel from its weight matrix W and the global coefficient matrix A by the singular value decomposition technique;
(6) mapping the image with the superpixels' mapping matrices, and interpolating the mapped image to obtain the final stitching result.
Further, in step one, extracting the feature points in the image, calculating the feature descriptors corresponding to the feature points, and performing feature matching with the feature descriptors to obtain the corresponding matching pair set specifically includes: extracting feature points from the image with the SIFT algorithm, which extracts from the image features invariant to position, scale and rotation, called feature points; calculating gradient information around each feature point and building a feature descriptor from it; and calculating the distances between feature descriptors across the images to obtain the corresponding feature matching pair set.
Further, in step two, screening the feature matching pair set with the RANSAC algorithm to obtain interior points and constructing the global coefficient matrix A from the interior point set specifically includes: screening the feature matching pair set to obtain an interior point set meeting the conditions. Two matched points (u, v) and (x, y) between different images are related by the following mapping:

c·(u, v, 1)^T = H·(x, y, 1)^T

in the above equation, c is a scale factor and H is a 3 × 3 matrix with 8 degrees of freedom, of the specific form:

H = [h_1 h_2 h_3; h_4 h_5 h_6; h_7 h_8 h_9]

substituting H into the above gives:

cu = h_1·x + h_2·y + h_3
cv = h_4·x + h_5·y + h_6
c = h_7·x + h_8·y + h_9

eliminating c, the three equations are simplified and arranged into:

h_1·x + h_2·y + h_3 + h_4·0 + h_5·0 + h_6·0 - h_7·xu - h_8·yu - h_9·u = 0
h_1·0 + h_2·0 + h_3·0 + h_4·x + h_5·y + h_6 - h_7·xv - h_8·yv - h_9·v = 0

the RANSAC algorithm screens interior points as follows: first, 4 matched feature point pairs are randomly selected from the feature matching pair set, and an H matrix is calculated from them; the remaining feature points are transformed with this H matrix, and each transformed point is checked against the interior point criterion; this process is repeated many times, and the iteration with the largest number of qualifying interior points yields the screened interior point set;

the global coefficient matrix A is constructed from the screened interior point set according to the above equations; let the i-th matched interior point pair be (x_i, y_i) and (u_i, v_i); then rows 2i and 2i+1 of the coefficient matrix A are:

A[2i, :] = (x_i, y_i, 1, 0, 0, 0, -x_i·u_i, -y_i·u_i, -u_i);
A[2i+1, :] = (0, 0, 0, x_i, y_i, 1, -x_i·v_i, -y_i·v_i, -v_i);

with the N interior point pairs obtained by screening, a coefficient matrix A of size (2N, 9) is constructed.
Further, in step three, the image is divided into superpixels, and the center coordinate position is calculated from the divided superpixels: first, a descriptor containing color and brightness description information is built for each pixel; following the clustering idea, the distances between the descriptors of the image pixels are calculated, pixels with small descriptor distances are grouped into one class, and such a class of pixels is called a superpixel; the coordinates of the pixels within a superpixel are averaged to obtain the superpixel's center coordinates.
Further, in step four, the weight matrix W of each superpixel is calculated from the superpixel centers of the image and the matched interior point set, specifically as follows: first, the distance d_i between interior point (x_i, y_i) and the superpixel center (z_x, z_y) is calculated:

d_i = sqrt((x_i - z_x)^2 + (y_i - z_y)^2)

then the weight w_i corresponding to interior point (x_i, y_i) is computed from d_i by:

w_i = max(exp(-d_i^2 / σ^2), γ)

from the N matched interior point pairs, the weight matrix W_k corresponding to the k-th superpixel is obtained as:

W_k = diag([w_{k,1}, w_{k,1}, w_{k,2}, w_{k,2}, ..., w_{k,N}, w_{k,N}]);

the weight matrix W_k is a diagonal matrix of size (2N, 2N); all elements except the diagonal are 0.
Further, in step five, the weight matrix W of the superpixel is multiplied by the coefficient matrix A, and singular value decomposition of the product yields the superpixel's mapping transformation matrix: M is defined as the product of W and A, i.e. M = W × A; performing singular value decomposition on the matrix M according to the direct linear algorithm yields the mapping transformation matrix corresponding to the superpixel.
Further, in step six, the superpixels are mapped with the mapping transformation matrices, and the mapping results are interpolated to obtain the final stitching result: each superpixel is mapped onto the other image according to its corresponding mapping transformation matrix; because the mapping stretches the image to some degree, some pixels inside or between the mapped superpixels end up with a pixel value of zero, and these pixels with missing values are interpolated to obtain the final stitching result.
Another object of the present invention is to provide a mobile terminal applying the method for improving image stitching quality based on superpixels.
Another object of the present invention is to provide a camera applying the method for improving image stitching quality based on superpixels.
In summary, the advantages and positive effects of the invention are as follows. The superpixel-based stitching method divides the images to be stitched into superpixels and uses the divided superpixels to align and stitch the images, avoiding the dense grid division method and increasing stitching speed. When dividing superpixels, different parameters produce different degrees of superpixel division, so the partition of image regions is more reasonable and objects at the same depth of field are not cut apart as they can be by grid division, which improves the alignment capability. Even with few superpixels, the invention obtains a good stitching effect. The invention realizes an image stitching method with high speed and strong alignment capability.
Drawings
Fig. 1 is a flowchart of a method for improving image stitching quality based on superpixels according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention addresses the prior-art problems that, when the overlapping region does not lie on one plane or a large parallax exists between the images, existing algorithms either cannot align the images well or align them too inefficiently and too slowly. Even with few superpixels, the invention obtains a good stitching effect, realizing an image stitching method with high speed and strong alignment capability.
The following detailed description of the principles of the invention is provided in connection with the accompanying drawings.
As shown in fig. 1, the method for improving image stitching quality based on superpixels provided by the embodiment of the present invention includes the following steps:
S101: performing SIFT feature extraction on the images, and performing feature matching with the feature descriptors to obtain the corresponding matching pair set;
S102: performing interior point screening on the obtained matching pair set with the RANSAC algorithm, and constructing the coefficient matrix A from the screened matching pairs according to the direct linear algorithm;
S103: performing superpixel division on the images to be stitched, and calculating the coordinates of the superpixel center points;
S104: calculating the weight matrix W of each superpixel from the superpixel center point coordinates and the matching pair set of the image;
S105: multiplying the weight matrix W by the coefficient matrix A, and performing singular value decomposition on their product to obtain the mapping transformation matrix of the superpixel;
S106: mapping the superpixels with their mapping matrices, and interpolating the mapping result to obtain the final stitching result.
In a preferred embodiment of the present invention, in step S101, SIFT feature extraction is performed on the images, and feature matching is performed with the feature descriptors to obtain the corresponding matching pair set, specifically as follows: features of image I_A and image I_B are extracted and matched with the SIFT algorithm. The SIFT algorithm has three main steps. First, the feature points of image I_A and image I_B are extracted; each feature point mainly carries three pieces of information: position, scale and orientation. Second, a descriptor is built for each feature point: the image region around the feature point is divided into blocks and a gradient histogram is computed per block; typically, gradient information in 8 directions is computed over a 4 × 4 block window in the feature point's scale space, giving a 128-dimensional vector that represents the feature point, called the feature descriptor. Third, the high-dimensional descriptors of the feature points are matched quickly with the Fast Library for Approximate Nearest Neighbors (FLANN) to obtain the corresponding feature matching pair set between image I_A and image I_B.
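As an illustration of step S101, the following is a minimal Python sketch using OpenCV; the helper name match_sift_features, the ratio-test value 0.7 and the FLANN index parameters are illustrative assumptions, not part of the invention:

```python
import cv2
import numpy as np

def match_sift_features(img_a, img_b, ratio=0.7):
    """Extract SIFT features from I_A and I_B and match them with FLANN."""
    sift = cv2.SIFT_create()
    # Keypoints carry position, scale and orientation; descriptors are the
    # 128-dimensional vectors (4 x 4 blocks x 8 gradient directions).
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # FLANN matching of the high-dimensional descriptors (KD-tree index).
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    knn = flann.knnMatch(des_a, des_b, k=2)

    # Lowe's ratio test (a common filtering step, assumed here, not taken
    # from the patent text) keeps only distinctive matches.
    pts_a, pts_b = [], []
    for pair in knn:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            pts_a.append(kp_a[pair[0].queryIdx].pt)
            pts_b.append(kp_b[pair[0].trainIdx].pt)
    return np.float32(pts_a), np.float32(pts_b)
```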
In a preferred embodiment of the present invention, in step S102, interior point screening is performed on the obtained matching pair set with the RANSAC algorithm, and the coefficient matrix A is constructed from the screened matching pairs according to the direct linear algorithm, specifically as follows: for the interior point set matched between the two images obtained by RANSAC screening, a mapping transformation relation is first calculated with the direct linear algorithm; the mapping relation is expressed as:

c·(u, v, 1)^T = H·(x, y, 1)^T

wherein (u, v) represents the coordinates of a feature point in image I_A, (x, y) represents the coordinates of the feature point in image I_B matched with the point (u, v), c is a scale factor, and H is a 3 × 3 matrix with 8 degrees of freedom, of the specific form:

H = [h_1 h_2 h_3; h_4 h_5 h_6; h_7 h_8 h_9]

given a set of matching pairs, the H matrix can be solved from the above equation; to solve it, the equation is expanded:

cu = h_1·x + h_2·y + h_3
cv = h_4·x + h_5·y + h_6
c = h_7·x + h_8·y + h_9

eliminating c, the above equations are simplified and arranged to obtain:

h_1·x + h_2·y + h_3 + h_4·0 + h_5·0 + h_6·0 - h_7·xu - h_8·yu - h_9·u = 0;
h_1·0 + h_2·0 + h_3·0 + h_4·x + h_5·y + h_6 - h_7·xv - h_8·yv - h_9·v = 0;

as can be seen from the above equations, each matched feature point pair yields two equations; to calculate the H matrix with 8 degrees of freedom, 4 matched feature point pairs are needed to form 8 equations, from which the mapping transformation matrix H can be calculated.
The interior point screening with the RANSAC algorithm builds on the above calculation and proceeds as follows: a threshold d_max is set, used to decide whether a feature point is an interior or exterior point. From the feature matching pair set of images I_A and I_B computed in step S101, 4 matched feature point pairs are randomly selected, and the mapping transformation matrix H is calculated from these 4 pairs according to the above equations. The matched feature points of image I_B other than these 4 are then mapped onto image I_A with the mapping transformation matrix H, and the distance d between each mapped point and its corresponding matched feature point in image I_A is calculated; if d is smaller than the threshold d_max, the feature point is considered to satisfy the mapping relation, otherwise not. The feature points satisfying the mapping relation are counted and called interior points. The above steps are repeated several times; whenever the counted number of interior points increases, the interior point count and the interior point set are updated. After multiple iterations, the qualifying interior points are obtained.
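The same screening can be sketched with OpenCV's RANSAC-based homography estimation, which internally performs the random-sampling loop described above; a minimal sketch, assuming the matched points are given as N × 2 float arrays and using an illustrative threshold d_max = 3.0:

```python
import cv2
import numpy as np

def screen_interior_points(pts_b, pts_a, d_max=3.0):
    """Keep only matches whose reprojection error under the best
    RANSAC-estimated homography H (mapping I_B onto I_A) is below d_max."""
    H, mask = cv2.findHomography(pts_b, pts_a, cv2.RANSAC, d_max)
    inlier = mask.ravel().astype(bool)
    return pts_b[inlier], pts_a[inlier], H
```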
Further, the screened interior points are used to align the images. The goal of aligning image I_A and image I_B is that, after the mapping transformation, an interior point of one image coincides as closely as possible with its matched interior point on the corresponding image, i.e. the distance between them is as small as possible. An error function e(h) is defined here to represent this relationship:

e(h) = Σ_i ||a_i·h||^2

wherein ||a_i·h|| denotes the distance between the point obtained by mapping the interior point a_i of image I_A onto image I_B and the interior point of image I_B matched with a_i; the h minimizing the above error function e(h) is used as the homography matrix aligning image I_A and image I_B.
Solving the homography matrix h requires constructing the coefficient matrix A according to the direct linear algorithm and then performing singular value decomposition on A. The coefficient matrix A is constructed as follows: take the i-th matched pair as an example, i.e. mutually matched interior points (x_i, y_i) and (u_i, v_i), where (u_i, v_i) ∈ I_A and (x_i, y_i) ∈ I_B; the corresponding rows of the coefficient matrix A are:

A[2i, :] = (x_i, y_i, 1, 0, 0, 0, -x_i·u_i, -y_i·u_i, -u_i);
A[2i+1, :] = (0, 0, 0, x_i, y_i, 1, -x_i·v_i, -y_i·v_i, -v_i);

after the coefficient matrix A is constructed, the error function e(h) is expressed as:

e(h) = ||A·h||^2, subject to ||h|| = 1

assuming image I_A and image I_B have N matched feature point pairs, the coefficient matrix A is a (2N, 9) matrix; singular value decomposition of A yields the homography matrix h for aligning the images, namely the right singular vector associated with the smallest singular value.
In a preferred embodiment of the present invention, in step S103, the image is divided into superpixels and the center pixel coordinates are calculated from the divided superpixels, specifically as follows: since image I_B is aligned to image I_A, the superpixel division is performed on image I_B. Superpixel division mainly uses the clustering idea: first, image I_B is uniformly segmented according to a preset number of superpixels, and the seed points are initialized; then, within the 3 × 3 neighborhood of each seed point, the pixel with the minimum gradient value is found and the seed point is moved to it; the distance from each pixel to the nearby seed points is calculated, each pixel is assigned to the superpixel of its closest seed point, and the seed points of the superpixels are updated; this process is repeated iteratively until the superpixel center points no longer change, completing the superpixel division of the image.
Suppose a superpixel divided from image I_B contains M pixels; the coordinates of its center point z are calculated as:

z_x = (1/M)·Σ_{i=1..M} x_i
z_y = (1/M)·Σ_{i=1..M} y_i

wherein (z_x, z_y) represents the center point coordinates of the superpixel and (x_i, y_i) represents the coordinates of the i-th pixel within the superpixel.
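For illustration, the division and the center computation can be sketched with the SLIC implementation in scikit-image; n_segments and compactness are illustrative settings, not values prescribed by the invention:

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_centers(img_b, n_segments=500, compactness=10.0):
    """Cluster I_B (an H x W x 3 image) into superpixels and return each
    superpixel's center, i.e. the mean coordinate (z_x, z_y) of its pixels."""
    labels = slic(img_b, n_segments=n_segments, compactness=compactness)
    centers = []
    for k in np.unique(labels):
        ys, xs = np.nonzero(labels == k)
        centers.append((xs.mean(), ys.mean()))  # (z_x, z_y)
    return labels, np.float32(centers)
```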
In a preferred embodiment of the present invention, in step S104, the weight matrix W of each superpixel is calculated from the matching pair set obtained in step S102 and the superpixels obtained in step S103, specifically as follows: suppose a superpixel center of image I_B is z = (z_x, z_y); the distance d_i from the i-th matched feature point (x_i, y_i) of image I_B to the superpixel center z is calculated as:

d_i = sqrt((x_i - z_x)^2 + (y_i - z_y)^2)

then the weight w_i in the weight matrix W is computed from d_i by:

w_i = max(exp(-d_i^2 / σ^2), γ)

σ and γ in the above expression are parameters to be tuned; their ranges, determined by experiment, are σ ∈ [8, 12] and γ ∈ [0.0025, 0.025]. Since N matched feature point pairs were obtained by screening in step S102, to match the dimensions of the global coefficient matrix A, the weight matrix W_k of the k-th superpixel is expressed as:

W_k = diag([w_{k,1}, w_{k,1}, w_{k,2}, w_{k,2}, ..., w_{k,N}, w_{k,N}])

from the above calculation, the weight matrix W_k is a 2N × 2N diagonal matrix; all elements except the diagonal are 0.
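A minimal NumPy sketch of this weighting; σ = 10 and γ = 0.01 are illustrative choices taken from the ranges above:

```python
import numpy as np

def weight_matrix(center, pts_b, sigma=10.0, gamma=0.01):
    """W_k for one superpixel center (z_x, z_y):
    w_i = max(exp(-d_i^2 / sigma^2), gamma), each weight repeated twice
    so the (2N, 2N) diagonal matches the 2N rows of A."""
    d2 = np.sum((np.asarray(pts_b) - np.asarray(center)) ** 2, axis=1)
    w = np.maximum(np.exp(-d2 / sigma ** 2), gamma)
    return np.diag(np.repeat(w, 2))
```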
In a preferred embodiment of the present invention, in step S105, the weight matrix of each superpixel from step S104 is multiplied by the global coefficient matrix A from step S102, and singular value decomposition of the resulting product yields the mapping transformation matrix corresponding to the superpixel, specifically as follows: taking the j-th superpixel of image I_B as an example, let its weight matrix be W_j and let the global coefficient matrix corresponding to image I_B be A; according to the error function defined in step S102, the error function of superpixel j is:

e_j(h) = ||W_j·A·h||^2

to solve the homography matrix of the j-th superpixel with the direct linear algorithm, a matrix M_j is defined as W_j multiplied by A, i.e. M_j = W_j·A; singular value decomposition of the matrix M_j then yields the homography matrix h corresponding to the j-th superpixel of image I_B.
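Continuing the sketches above, the per-superpixel solution is one weighted SVD; normalizing by the last entry is a common convention, assumed here:

```python
import numpy as np

def superpixel_homography(W_j, A):
    """Homography of superpixel j: SVD of M_j = W_j * A, taking the right
    singular vector of the smallest singular value (direct linear algorithm)."""
    M = W_j @ A
    _, _, vt = np.linalg.svd(M)
    h = vt[-1]
    return (h / h[-1]).reshape(3, 3)  # scale so that h_9 = 1
```

For example, the homographies of all superpixels of I_B could then be collected as {k: superpixel_homography(weight_matrix(c, pts_b), A) for k, c in enumerate(centers)}.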
In a preferred embodiment of the present invention, in step S106, all pixels of image I_B are mapped with the homography matrices corresponding to the superpixels, and the mapping results are interpolated, specifically as follows: image I_B is divided into several superpixels, and the homography matrix h of each superpixel is calculated as in step S105; each superpixel of image I_B is then mapped onto image I_A, and the image obtained by the mapping transformation is interpolated to obtain the final stitching result.
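A simplified forward-mapping sketch of step S106; this is an illustrative, unoptimized assumption of how the per-superpixel warp and hole interpolation could be realized (real implementations typically use inverse mapping), and the morphological footprint restriction is likewise an assumption:

```python
import cv2
import numpy as np

def warp_superpixels(img_b, labels, homographies, out_shape):
    """Map every pixel of I_B (H x W x 3, uint8) with its superpixel's
    homography, then fill the zero-valued holes left by the stretch."""
    out = np.zeros(out_shape, dtype=np.uint8)
    holes = np.ones(out_shape[:2], dtype=np.uint8)
    ys, xs = np.indices(labels.shape)
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])  # homogeneous
    flat = img_b.reshape(-1, img_b.shape[-1])
    lab = labels.ravel()
    for k, H in homographies.items():
        sel = lab == k
        p = H @ pts[:, sel]
        u = np.round(p[0] / p[2]).astype(int)  # projective division
        v = np.round(p[1] / p[2]).astype(int)
        ok = (u >= 0) & (u < out_shape[1]) & (v >= 0) & (v < out_shape[0])
        out[v[ok], u[ok]] = flat[sel][ok]
        holes[v[ok], u[ok]] = 0
    # Interpolate missing pixels, restricted to the warped footprint
    # (morphological closing of the written area approximates it).
    written = (holes == 0).astype(np.uint8)
    footprint = cv2.morphologyEx(written, cv2.MORPH_CLOSE,
                                 np.ones((5, 5), np.uint8))
    mask = ((footprint == 1) & (holes == 1)).astype(np.uint8)
    return cv2.inpaint(out, mask, 3, cv2.INPAINT_TELEA)
```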
The effect of the present invention is described in detail below with reference to experiments.
The invention aims to improve stitching efficiency while solving the ghosting and misalignment caused when the overlapping regions of the images to be stitched do not lie on the same plane or a large parallax exists between them. Stitching experiments were carried out in different scenes; Table 1 lists the time required by APAP and by the invention, both reaching the same stitching effect as the APAP algorithm, whose alignment capability is extremely strong.
Table 1: Stitching times of APAP and of the invention

Stitching time (s)   rail    temple  house   site
APAP                 12.23   11.45   14.69   13.98
Proposed             1.65    1.61    2.19    1.97
From the results in Table 1, the running time of the invention is greatly reduced compared with the APAP method. The invention solves the ghosting and misalignment problems in image stitching, and the experimental results demonstrate its practicality.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (9)

1. A method for improving image stitching quality based on superpixels, characterized in that the method comprises the following steps:
step one, performing SIFT feature extraction on the images, and performing feature matching with the extracted feature descriptors to obtain a matching pair set;
step two, screening interior points of the matching pair set with the RANSAC algorithm, and constructing a coefficient matrix A from the screened interior point set according to the direct linear transformation (DLT) algorithm;
step three, dividing the image into superpixels, and calculating the center coordinates of the superpixels;
step four, calculating the weight matrix W of each superpixel from the superpixel center coordinates and the screened interior points;
step five, calculating the mapping transformation matrix H of each superpixel from its weight matrix W and the global coefficient matrix A by the singular value decomposition technique;
step six, mapping the image with the superpixels' mapping transformation matrices, and interpolating the mapped image to obtain the final stitching result.
2. The method according to claim 1, wherein step one extracts the feature points in the image, calculates the feature descriptors corresponding to the feature points, and performs feature matching with the feature descriptors to obtain the corresponding matching pair set, specifically comprising: extracting feature points from the image with the SIFT algorithm, which extracts from the image features invariant to position, scale and rotation, called feature points; calculating gradient information around each feature point and building a feature descriptor from it; and calculating the distances between feature descriptors across the images to obtain the corresponding feature matching pair set.
3. The method for improving image stitching quality based on superpixels according to claim 1, wherein, in step two, screening the feature matching pair set with the RANSAC algorithm to obtain interior points and constructing the global coefficient matrix A from the interior point set specifically comprises: screening the feature matching pair set to obtain an interior point set meeting the conditions; two matched points (u, v) and (x, y) between different images are related by the following mapping:

c·(u, v, 1)^T = H·(x, y, 1)^T

in the above equation, c is a scale factor and H is a 3 × 3 matrix with 8 degrees of freedom, of the specific form:

H = [h_1 h_2 h_3; h_4 h_5 h_6; h_7 h_8 h_9]

substituting H into the above gives:

cu = h_1·x + h_2·y + h_3
cv = h_4·x + h_5·y + h_6
c = h_7·x + h_8·y + h_9

eliminating c, the three equations are simplified and arranged into:

h_1·x + h_2·y + h_3 + h_4·0 + h_5·0 + h_6·0 - h_7·xu - h_8·yu - h_9·u = 0
h_1·0 + h_2·0 + h_3·0 + h_4·x + h_5·y + h_6 - h_7·xv - h_8·yv - h_9·v = 0

the RANSAC algorithm screens interior points as follows: first, 4 matched feature point pairs are randomly selected from the feature matching pair set, and an H matrix is calculated from them; the remaining feature points are transformed with this H matrix, and each transformed point is checked against the interior point criterion; this process is repeated many times, and the iteration with the largest number of qualifying interior points yields the screened interior point set;

the global coefficient matrix A is constructed from the screened interior point set according to the above equations; let the i-th matched interior point pair be (x_i, y_i) and (u_i, v_i); then rows 2i and 2i+1 of the coefficient matrix A are:

A[2i, :] = (x_i, y_i, 1, 0, 0, 0, -x_i·u_i, -y_i·u_i, -u_i);
A[2i+1, :] = (0, 0, 0, x_i, y_i, 1, -x_i·v_i, -y_i·v_i, -v_i);

with the N interior point pairs obtained by screening, a coefficient matrix A of size (2N, 9) is constructed.
4. The method for improving image stitching quality based on superpixels according to claim 1, wherein, in step three, the image is divided into superpixels and the center coordinate position is calculated from the divided superpixels: first, a descriptor containing color and brightness description information is built for each pixel; following the clustering idea, the distances between the descriptors of the image pixels are calculated, pixels with small descriptor distances are grouped into one class, and such a class of pixels is called a superpixel; the coordinates of the pixels within the superpixel are averaged to obtain the superpixel's center coordinates.
5. The method for improving image stitching quality based on superpixels according to claim 1, wherein, in step four, the weight matrix W of each superpixel is calculated from the superpixel centers of the image and the matched interior point set, specifically as follows: first, the distance d_i between interior point (x_i, y_i) and the superpixel center (z_x, z_y) is calculated:

d_i = sqrt((x_i - z_x)^2 + (y_i - z_y)^2)

then the weight w_i corresponding to interior point (x_i, y_i) is computed from d_i by:

w_i = max(exp(-d_i^2 / σ^2), γ)

from the N matched interior point pairs, the weight matrix W_k corresponding to the k-th superpixel is obtained as:

W_k = diag([w_{k,1}, w_{k,1}, w_{k,2}, w_{k,2}, ..., w_{k,N}, w_{k,N}]);

the weight matrix W_k is a diagonal matrix of size (2N, 2N); all elements except the diagonal are 0.
6. The method for improving image stitching quality based on superpixels according to claim 1, wherein, in step five, the weight matrix W of the superpixel is multiplied by the coefficient matrix A, and singular value decomposition of the product yields the superpixel's mapping transformation matrix: M is defined as the product of W and A, i.e. M = W × A; performing singular value decomposition on the matrix M according to the direct linear algorithm yields the mapping transformation matrix corresponding to the superpixel.
7. The method for improving image stitching quality based on superpixels according to claim 1, wherein, in step six, the superpixels are mapped with the mapping transformation matrices and the mapping results are interpolated to obtain the final stitching result: each superpixel is mapped onto the other image according to its corresponding mapping transformation matrix; because the mapping stretches the image to some degree, some pixels inside or between the mapped superpixels end up with a pixel value of zero, and these pixels with missing values are interpolated to obtain the final stitching result.
8. A mobile terminal applying the method for improving image stitching quality based on superpixels according to any one of claims 1 to 7.
9. A camera applying the method for improving image stitching quality based on superpixels according to any one of claims 1 to 7.
CN201811625249.6A 2018-12-28 2018-12-28 Method for improving image stitching quality based on superpixels, mobile terminal and camera Active CN109767388B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811625249.6A CN109767388B (en) 2018-12-28 2018-12-28 Method for improving image stitching quality based on superpixels, mobile terminal and camera


Publications (2)

Publication Number   Publication Date
CN109767388A (en)    2019-05-17
CN109767388B (en)    2023-03-24

Family

ID=66451703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811625249.6A Active CN109767388B (en) 2018-12-28 2018-12-28 Method for improving image stitching quality based on superpixels, mobile terminal and camera

Country Status (1)

Country Link
CN (1) CN109767388B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110717936B (en) * 2019-10-15 2023-04-28 哈尔滨工业大学 Image stitching method based on camera attitude estimation
CN111429358B (en) * 2020-05-09 2022-04-22 南京大学 Image splicing method based on planar area consistency
CN111914049A (en) * 2020-07-29 2020-11-10 北京天睿空间科技股份有限公司 Method for mapping longitude and latitude coordinates and image coordinates
CN112907633B (en) * 2021-03-17 2023-12-01 中国科学院空天信息创新研究院 Dynamic feature point identification method and application thereof
CN113034365B (en) * 2021-03-19 2023-09-22 西安电子科技大学 Multi-picture splicing method and system based on super pixels
CN113450252A (en) * 2021-05-11 2021-09-28 点智芯科技(北京)有限公司 Super-pixel segmentation single mapping matrix clustering image splicing method
CN113222817A (en) * 2021-05-13 2021-08-06 哈尔滨工程大学 Image feature extraction-based 12-channel video image splicing and image registration method
CN113781299B (en) * 2021-08-16 2023-08-25 兰州大学 Multi-image collaborative stitching method based on improved RANSAC algorithm
CN116363031B (en) * 2023-02-28 2023-11-17 锋睿领创(珠海)科技有限公司 Imaging method, device, equipment and medium based on multidimensional optical information fusion

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104616247A (en) * 2015-02-10 2015-05-13 天津大学 Method for aerial photography map splicing based on super-pixels and SIFT
EP2958077A1 (en) * 2014-06-20 2015-12-23 Thomson Licensing Method and apparatus for generating temporally consistent superpixels
CN109064410A (en) * 2018-10-24 2018-12-21 清华大学深圳研究生院 A kind of light field image joining method based on super-pixel

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101984463A (en) * 2010-11-02 2011-03-09 中兴通讯股份有限公司 Method and device for synthesizing panoramic image


Also Published As

Publication number Publication date
CN109767388A (en) 2019-05-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Song Rui, Wang Jinjin, Li Jiaojiao, Jia Yuan, Ren Hailei, Wang Yangli, Li Yunsong
Inventor before: Song Rui, Wang Jinjin, Jia Yuan, Ren Hailei, Wang Yangli, Li Yunsong
GR01 Patent grant