CN110717936A - Image stitching method based on camera attitude estimation - Google Patents
- Publication number: CN110717936A
- Application number: CN201910978286.3A
- Authority
- CN
- China
- Prior art keywords
- image
- matrix
- camera
- updated
- characteristic point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
An image stitching method based on camera pose estimation, belonging to the technical field of computer vision and image processing. The invention addresses the severe distortion and low efficiency of existing image stitching methods. The method comprises: 1. extracting and matching feature points of the images; 2. classifying and screening the feature point pairs; 3. estimating the camera focal length, translation matrix and rotation matrix; 4. performing a compromise update of the camera poses; 5. calculating the normal vector of the plane on which each class of point pairs lies; 6. performing image transformation and stitching. The method can be applied to stitching images of the same scene captured from different viewing angles.
Description
Technical Field
The invention belongs to the technical field of computer vision and image processing, and particularly relates to an image stitching method based on camera attitude estimation.
Background
Feature-based image stitching methods solve for the transformation between images by establishing correspondences between image feature points. Compared with intensity-based (gray-value) stitching methods, they require less computation and give more stable results, and are therefore the mainstream approach. However, most image stitching software estimates a single global transformation between images, which yields good results only when the camera motion is a pure rotation or the scene is planar; under the more general conditions encountered in practice, artifacts such as ghosting appear. In recent years, local transformation models and mesh optimization methods have been proposed in the academic community to address this problem, but these suffer from severe distortion of the stitched result and low stitching efficiency. It is therefore important to develop an image stitching method that is efficient and produces an accurate, natural panorama.
Disclosure of Invention
The invention aims to solve the problems of severe distortion and low efficiency of existing image stitching methods, and provides an image stitching method based on camera pose estimation.
The technical scheme adopted by the invention to solve the above technical problem is as follows: an image stitching method based on camera pose estimation, comprising the following steps:
step one, capture two images I_p and I_q of the same scene from different viewing angles with a camera, and extract feature points from I_p and I_q respectively;
then match the feature points extracted from I_p and I_q to obtain the initial feature point pair set S = {(p_i, q_i), i = 1, ..., N}, wherein: p_i is a feature point of image I_p, q_i is the corresponding feature point of image I_q, and N is the number of feature point pairs in the set S;
step two, screen the feature point pairs contained in the set S and obtain the class of each screened feature point pair; the set of screened feature point pairs is S_1 = {(p_i', q_i'), i' = 1, ..., N_1}, wherein: N_1 is the number of screened feature point pairs, the class label of the i'-th pair is c_i', c_i' = 1, ..., n, and n is the number of classes;
then obtain the homography transformation matrix between the feature point pairs within each class of the set S_1, wherein: the homography transformation matrix between the feature point pairs in class k is H_k, k = 1, ..., n;
step three, according to the homography transformation matrices H_k obtained in step two, estimate the camera focal length value f_k corresponding to the feature point pairs of each class, and then select an initial camera focal length value f_0 according to the values f_k;
step four, according to the set S_1 obtained in step two and the initial camera focal length value f_0 obtained in step three, estimate the camera focal length f and the essential matrix E;
then decompose the essential matrix E to obtain the rotation matrix R and the translation matrix t between the cameras at the two viewing angles;
step five, for the camera that captured image I_p, set its rotation matrix R_p relative to the world coordinate system to the identity matrix and its translation matrix t_p to the zero vector; for the camera that captured image I_q, the rotation matrix relative to the world coordinate system is then R_q = R R_p = R and the translation matrix is t_q = R t_p + t = t;
convert the rotation matrices R_p and R_q into rotation vectors r_p and r_q respectively, calculate the average r̄ of r_p and r_q, and then convert r̄ back into a rotation matrix R̂; R̂ is the compromise rotation matrix;
update the rotation matrices R_p and R_q to R̂_p and R̂_q, and the translation matrices t_p and t_q to t̂_p and t̂_q, obtaining the updated camera poses;
step six, according to the homography transformation matrices H_k obtained in step two, the rotation matrix R and translation matrix t between the cameras obtained in step four, and the compromise rotation matrix R̂ obtained in step five, calculate the normal vector of the plane on which the feature point pairs of each class lie;
then transform the normal vectors into the updated camera poses obtained in step five to obtain the updated normal vector of the plane of each class's feature point pairs;
step seven, according to the updated camera poses obtained in step five and the updated normal vectors calculated in step six, transform images I_p and I_q to obtain the transformed images I'_p and I'_q;
then stitch and fuse the transformed images I'_p and I'_q: in the region where I'_p and I'_q overlap, calculate the mean of the pixel values at each pixel position and take it as the pixel value of the corresponding point; in the non-overlapping regions of I'_p and I'_q, keep the original pixel values; this yields the stitched image I_pq.
The beneficial effects of the invention are as follows: the invention proposes an image stitching method based on camera pose estimation. It first extracts and matches feature points between two images of the same scene taken from different viewing angles, then classifies and screens the resulting feature point pairs; it processes the screened pairs to estimate the camera focal length, translation matrix and rotation matrix, and performs a compromise update of the camera poses; finally, it transforms and stitches the images according to the normal vector of the plane of each class of screened feature point pairs. Compared with existing image stitching methods, the overlap regions of the images are aligned more accurately (i.e., with less ghosting), the stitched panorama shows less distortion, and in particular planar regions of the scene remain unbent; the computation is light, so the stitching efficiency is significantly improved.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is the first image to be stitched according to the present invention;
FIG. 3 is the second image to be stitched according to the present invention;
fig. 4 is the stitching result of the present invention for image 1 and image 2.
Detailed Description
The first embodiment: this embodiment is described with reference to fig. 1. The image stitching method based on camera pose estimation of this embodiment comprises the following steps:
step one, capture two images I_p and I_q of the same scene from different viewing angles with a camera, and extract feature points from I_p and I_q respectively using the SIFT (Scale-Invariant Feature Transform) feature point extraction method;
then match the feature points extracted from I_p and I_q using FLANN (Fast Library for Approximate Nearest Neighbors) to obtain the initial feature point pair set S = {(p_i, q_i), i = 1, ..., N}, wherein: p_i is a feature point of image I_p, q_i is the corresponding feature point of image I_q, and N is the number of feature point pairs in the set S;
images I_p and I_q are two images of the same scene under different viewing angles;
step two, screen the feature point pairs contained in the set S using the random sample consensus (RANSAC) method and obtain the class of each screened feature point pair; the set of screened feature point pairs is S_1 = {(p_i', q_i'), i' = 1, ..., N_1}, wherein: N_1 is the number of screened feature point pairs, the class label of the i'-th pair is c_i', c_i' = 1, ..., n, and n is the number of classes;
then obtain the homography transformation matrix between the feature point pairs within each class of the set S_1, wherein: the homography transformation matrix between the feature point pairs in class k is H_k, k = 1, ..., n;
step three, according to the homography transformation matrices H_k obtained in step two, estimate the camera focal length value f_k corresponding to the feature point pairs of each class, and then select an initial camera focal length value f_0 according to the values f_k;
step four, according to the set S_1 obtained in step two and the initial camera focal length value f_0 obtained in step three, estimate the camera focal length f and the essential matrix E;
then decompose the essential matrix E using SVD (Singular Value Decomposition) to obtain the rotation matrix R and the translation matrix t between the cameras at the two viewing angles;
step five, for the camera that captured image I_p, set its rotation matrix R_p relative to the world coordinate system to the identity matrix and its translation matrix t_p to the zero vector; for the camera that captured image I_q, the rotation matrix relative to the world coordinate system is then R_q = R R_p = R and the translation matrix is t_q = R t_p + t = t;
convert the rotation matrices R_p and R_q into rotation vectors r_p and r_q respectively using the Rodrigues rotation formula, calculate the average r̄ of r_p and r_q, and then use the Rodrigues formula again to convert r̄ into a rotation matrix R̂; R̂ is the compromise rotation matrix;
update the rotation matrices R_p and R_q to R̂_p and R̂_q, and the translation matrices t_p and t_q to t̂_p and t̂_q, obtaining the updated camera poses;
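The compromise-rotation computation of step five (matrix-to-vector conversion, averaging, conversion back) can be sketched in NumPy. This is an illustrative reconstruction, not the patent's implementation; the helper names are my own, and the direct axis-angle formulas below stand in for a library Rodrigues routine:

```python
import numpy as np

def rotation_to_vector(R):
    """Rotation matrix -> rotation vector (axis * angle), the inverse Rodrigues map."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return axis * theta

def vector_to_rotation(r):
    """Rotation vector -> rotation matrix via the Rodrigues formula."""
    theta = np.linalg.norm(r)
    if np.isclose(theta, 0.0):
        return np.eye(3)
    k = r / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def compromise_rotation(R_p, R_q):
    """Step five: average the two camera rotations in axis-angle space."""
    r_bar = 0.5 * (rotation_to_vector(R_p) + rotation_to_vector(R_q))
    return vector_to_rotation(r_bar)
```

With R_p the identity and R_q a rotation of 0.8 rad about z, the compromise rotation is a rotation of 0.4 rad about z, i.e. the pose "halfway" between the two cameras.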
step six, according to the homography transformation matrices H_k obtained in step two, the rotation matrix R and translation matrix t between the cameras obtained in step four, and the compromise rotation matrix R̂ obtained in step five, calculate the normal vector of the plane on which the feature point pairs of each class lie;
then transform the normal vectors into the updated camera poses obtained in step five to obtain the updated normal vector of the plane of each class's feature point pairs;
step seven, according to the updated camera poses obtained in step five and the updated normal vectors calculated in step six, transform images I_p and I_q to obtain the transformed images I'_p and I'_q;
then stitch and fuse the transformed images I'_p and I'_q: in the region where I'_p and I'_q overlap, calculate the mean of the pixel values at each pixel position and take it as the pixel value of the corresponding point; in the non-overlapping regions of I'_p and I'_q, keep the original pixel values; this yields the stitched image I_pq.
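The fusion rule of step seven — average in the overlap, originals elsewhere — can be sketched as follows. This assumes the warped images have been placed on a common canvas with boolean masks marking where each image is defined (the masks are my own device, not stated in the patent):

```python
import numpy as np

def blend_average(img_p, img_q, mask_p, mask_q):
    """Step seven fusion: average pixel values where both warped images
    overlap, keep the original values where only one image is defined."""
    out = np.zeros_like(img_p, dtype=np.float64)
    both = mask_p & mask_q          # overlap region
    only_p = mask_p & ~mask_q       # pixels covered only by I'_p
    only_q = mask_q & ~mask_p       # pixels covered only by I'_q
    out[both] = (img_p[both].astype(np.float64) + img_q[both]) / 2.0
    out[only_p] = img_p[only_p]
    out[only_q] = img_q[only_q]
    return out
```

Simple averaging keeps the cost low, consistent with the patent's emphasis on stitching efficiency, at the price of possible visible seams that feathered blending would soften.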
During feature point extraction and matching, feature correspondences can be established quickly using the SIFT feature extraction algorithm and the FLANN fast nearest-neighbor search library.
This embodiment is applicable to the common real-world situation in which the change in camera pose between shots involves both rotation and translation.
The second embodiment: this embodiment differs from the first in the following: in step two, the feature point pairs contained in the set S are screened and the classes of the screened pairs are obtained by the following specific process:
step two-one, let S' be the set of feature point pairs remaining to be screened, initialized to the initial feature point pair set S of step one; initialize the set S_1 of screened feature point pairs to the empty set, and initialize the number n of classes of feature point pairs contained in S_1 to 0;
step two-two, extract inliers from the set S' using the RANSAC method; the set of extracted inlier feature point pairs is s_{n+1}, and the homography transformation matrix between the inlier pairs is H_{n+1};
after removing the extracted inlier pairs from S', the set of remaining feature point pairs is the updated set S';
the model used by the RANSAC algorithm is a homography transformation model, the inlier distance threshold is 3 pixels, and the number of iterations is 500;
step two-three, if the set s_{n+1} contains 15 or more feature point pairs, assign class label n+1 to the pairs contained in s_{n+1}, add the pairs of s_{n+1} to the set S_1 to obtain the updated S_1, clear s_{n+1}, and add 1 to the class count n;
step two-four, repeat the process of step two-two to step two-three on the updated set S' until s_{n+1} contains fewer than 15 feature point pairs, obtaining the final set S_1 and the class labels of the feature point pairs it contains.
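The sequential peel-off structure of steps two-one through two-four can be sketched as follows. This is only a structural illustration: a toy displacement-consensus model stands in for the patent's homography RANSAC, and the demonstration uses a minimum class size of 3 instead of the patent's 15 so the example stays tiny; all function names are my own:

```python
import numpy as np

def displacement_consensus(pairs, tol=1e-6):
    """Toy stand-in for RANSAC: take the most common displacement q - p
    as the model and return the indices of pairs agreeing with it.
    A real implementation would fit a homography with 500 iterations
    and a 3-pixel inlier threshold, as the patent specifies."""
    disp = np.array([q - p for p, q in pairs])
    best = max(range(len(pairs)),
               key=lambda i: int(np.sum(np.all(np.abs(disp - disp[i]) < tol, axis=1))))
    inliers = [i for i in range(len(pairs))
               if np.all(np.abs(disp[i] - disp[best]) < tol)]
    return inliers, disp[best]

def classify_pairs(pairs, consensus_fn, min_inliers=3):
    """Sequential consensus extraction (step two): repeatedly run the
    consensus step on the remaining pairs, peel off each inlier set as
    a new class, and stop once a set falls below min_inliers."""
    remaining = list(range(len(pairs)))
    labels = {}   # original pair index -> class label 1..n
    n = 0
    while remaining:
        subset = [pairs[i] for i in remaining]
        inliers, _ = consensus_fn(subset)
        if len(inliers) < min_inliers:
            break
        n += 1
        for j in inliers:
            labels[remaining[j]] = n
        keep = set(inliers)
        remaining = [remaining[j] for j in range(len(subset)) if j not in keep]
    return labels, n
```

Pairs that never reach a large-enough consensus set remain unlabeled, which is exactly how the screening discards unreliable matches.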
The third embodiment: this embodiment differs from the second in the following: the specific process of step three is:
the camera focal length value f_k corresponding to the feature point pairs in the k-th class is calculated from the elements of the homography transformation matrix H_k.
The fourth embodiment: this embodiment differs from the third in the following: in step four, the camera focal length f and the essential matrix E are estimated according to the set S_1 obtained in step two and the initial camera focal length value f_0 obtained in step three by the following specific process:
step four-one, sample focal length values in the range [0.5f_0, 2f_0] around the initial value f_0 at intervals of 0.01f_0, obtaining the set of candidate camera focal length values F = {f_m = 0.5f_0 + m × 0.01f_0, m = 0, 1, ..., 150}, wherein: f_m represents the camera focal length value of the m-th sample;
step four-two, for each camera focal length value f_m in the set F, estimate an essential matrix E_m from the set S_1 using the five-point algorithm together with the RANSAC algorithm, and obtain the number of inliers n_m corresponding to f_m;
the epipolar geometry constraint is then: q̃_i'^T K^{-T} E_m K^{-1} p̃_i' = 0,
wherein: E_m is the essential matrix corresponding to f_m, and the intermediate variable matrix is K = [f_m, 0, c_x; 0, f_m, c_y; 0, 0, 1]; c_x and c_y are respectively half the width and half the height of image I_p, and images I_p and I_q have the same width and height; the superscript T denotes the matrix transpose and the superscript -1 denotes the matrix inverse; taking the bottom-left corner of image I_p as the coordinate origin O, the width direction of I_p as the x-axis and the height direction of I_p as the y-axis establishes a rectangular coordinate system xOy, and p̃_i' denotes the homogeneous coordinates of feature point p_i' in xOy; taking the bottom-left corner of image I_q as the origin O', the width direction of I_q as the x'-axis and the height direction of I_q as the y'-axis establishes a rectangular coordinate system x'O'y', and q̃_i' denotes the homogeneous coordinates of feature point q_i' in x'O'y';
step four-three, select the camera focal length value f_m with the largest number of inliers as the camera focal length f, and take the E_m corresponding to that f_m as the essential matrix E of the camera.
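The focal-length sweep of steps four-one and four-three can be sketched as follows. The inlier-counting function is deliberately left as a parameter: in the patent it is five-point essential-matrix estimation with RANSAC, for which the mock scoring function in the usage below is only a stand-in; the function names are my own:

```python
import numpy as np

def candidate_focals(f0):
    """Step four-one: candidate focal lengths in [0.5*f0, 2*f0]
    sampled at intervals of 0.01*f0 (151 values, m = 0..150)."""
    return np.array([0.5 * f0 + m * 0.01 * f0 for m in range(151)])

def select_focal(f0, inlier_count_fn):
    """Step four-three: score every candidate focal length with the
    supplied inlier-counting function and keep the best-scoring one."""
    F = candidate_focals(f0)
    counts = np.array([inlier_count_fn(f) for f in F])
    return F[int(np.argmax(counts))]
```

Exhaustive evaluation over 151 candidates is cheap relative to feature matching, which is why a simple grid search suffices here instead of a continuous optimization of f.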
The fifth embodiment: this embodiment differs from the fourth in the following: in step four-two, the number of inliers n_m corresponding to f_m is obtained by the following specific process:
traverse all feature point pairs in the set S_1; if, for a pair (p_i', q_i') contained in S_1, the distance from the point q_i' to the epipolar line is less than 3 pixels, the pair (p_i', q_i') is an inlier of f_m; otherwise the pair (p_i', q_i') is not an inlier of f_m;
the epipolar line equation is: a x + b y + c = 0, with (a, b, c)^T = K^{-T} E_m K^{-1} p̃_i',
wherein: x and y are the variables of the epipolar line equation.
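The inlier test of the fifth embodiment can be sketched as follows. It assumes the reconstructed line formula above, (a, b, c)^T = K^{-T} E_m K^{-1} p̃, which is the standard fundamental-matrix form of the epipolar line; the function names are my own:

```python
import numpy as np

def intrinsics(f, cx, cy):
    """Intermediate camera matrix K of step four, principal point at (cx, cy)."""
    return np.array([[f, 0.0, cx],
                     [0.0, f, cy],
                     [0.0, 0.0, 1.0]])

def epipolar_distance(p, q, E, K):
    """Distance in pixels from point q of image I_q to the epipolar line
    induced by point p of image I_p; the fifth embodiment counts the
    pair as an inlier when this distance is below 3 pixels."""
    p_h = np.array([p[0], p[1], 1.0])
    K_inv = np.linalg.inv(K)
    a, b, c = K_inv.T @ E @ K_inv @ p_h      # line coefficients (a, b, c)
    return abs(a * q[0] + b * q[1] + c) / np.hypot(a, b)
```

For a pure horizontal translation (E = [t]_x with t along x), epipolar lines are horizontal, so a correspondence with the same vertical pixel coordinate lies exactly on its line.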
The sixth embodiment: this embodiment differs from the fifth in the specific calculation formulas used to obtain the updated matrices R̂_p, R̂_q, t̂_p and t̂_q.
The seventh embodiment: the sixth embodiment is different from the sixth embodiment in that: the concrete process of the sixth step is as follows:
the normal vector of the plane where the characteristic point pairs in the kth class are located is nk,k=1,...,n;
Hk=K(R+tnk)K-1
A normal vector nkAnd 4, under the updated camera posture obtained in the step five, obtaining an updated normal vector of the plane where the characteristic point pair in the kth class is located
And similarly, obtaining the updated normal vector of the plane where the characteristic point pair in other classes is located.
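Given K, R and t, the relation H_k = K(R + t n_k^T) K^{-1} can be inverted in closed form: K^{-1} H_k K - R is the rank-one matrix t n_k^T, so n_k is recovered by projecting onto t. The sketch below assumes, as in the relation above, that the plane distance is absorbed into the scale of n_k; the function name is my own:

```python
import numpy as np

def plane_normal_from_homography(H, K, R, t):
    """Step six: recover the scaled plane normal n_k from the
    plane-induced homography H_k = K (R + t n_k^T) K^{-1}.
    M = K^{-1} H K - R equals t n_k^T, so n_k = M^T t / (t . t)."""
    M = np.linalg.inv(K) @ H @ K - R
    return M.T @ t / (t @ t)
```

Round-tripping a synthetic homography built from known K, R, t and n recovers the same normal, which is a convenient self-check for the implementation.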
The eighth embodiment: this embodiment differs from the seventh in the following: in step seven, images I_p and I_q are transformed according to the updated camera poses obtained in step five and the updated normal vectors calculated in step six to obtain the transformed images I'_p and I'_q by the following specific process:
step seven-one, select grid points in image I_p with an interval of 40 pixels between adjacent grid points, obtaining the grid point set V = {p'_i'', i'' = 1, ..., N_2}, wherein: N_2 is the number of grid points in V and p'_i'' is the i''-th grid point in V;
step seven-two, for each grid point p'_i'' in V, calculate the Euclidean distance between p'_i'' and each feature point of image I_p in the set P_1 = {p_i', i' = 1, ..., N_1} contained in S_1; extract from P_1 the 5 feature points closest to p'_i'', and take the mean of the normal vectors of the planes of the selected 5 points as the normal vector of grid point p'_i'';
step seven-three, from the normal vector of grid point p'_i'', calculate the transformation matrix of p'_i'';
step seven-four, for a pixel point p of image I_p that maps to pixel p' in the transformed image I'_p, the transformation matrix from p to p' is the transformation matrix of the grid point in image I_p nearest to the pixel point p; based on the obtained transformation matrices, transform every pixel point of I_p into I'_p to obtain the transformed image I'_p;
step seven-five, in the same way, transform every pixel point of image I_q into I'_q to obtain the transformed image I'_q.
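The per-grid-point normal assignment of step seven-two can be sketched as follows. The k-nearest count defaults to the patent's 5 but is exposed as a parameter so the tiny usage example can use fewer feature points; the function name is my own:

```python
import numpy as np

def grid_normals(grid_pts, feat_pts, feat_normals, k=5):
    """Step seven-two: assign each grid point the mean of the plane
    normals of its k nearest feature points (Euclidean distance
    measured in image coordinates)."""
    out = np.empty((len(grid_pts), feat_normals.shape[1]))
    for i, g in enumerate(grid_pts):
        d = np.linalg.norm(feat_pts - g, axis=1)   # distances to all features
        nearest = np.argsort(d)[:k]                # indices of k closest
        out[i] = feat_normals[nearest].mean(axis=0)
    return out
```

Averaging over the nearest neighbors makes the per-grid-point normals, and hence the local warps of step seven-three, vary smoothly across the image, which is what keeps planar regions from bending in the final panorama.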
Fig. 4 shows the result of stitching fig. 2 and fig. 3 with the method of the present invention.
The above calculation examples of the present invention merely explain its calculation model and calculation flow in detail and are not intended to limit its embodiments. Those skilled in the art can make other variations and modifications on the basis of the above description; it is neither possible nor necessary to enumerate all embodiments here, and all obvious variations and modifications derived from the above remain within the protection scope of the present invention.
Claims (8)
1. An image stitching method based on camera pose estimation is characterized by comprising the following steps:
step one, capture two images I_p and I_q of the same scene from different viewing angles with a camera, and extract feature points from I_p and I_q respectively;
then match the feature points extracted from I_p and I_q to obtain the initial feature point pair set S = {(p_i, q_i), i = 1, ..., N}, wherein: p_i is a feature point of image I_p, q_i is the corresponding feature point of image I_q, and N is the number of feature point pairs in the set S;
step two, screen the feature point pairs contained in the set S and obtain the class of each screened feature point pair; the set of screened feature point pairs is S_1 = {(p_i', q_i'), i' = 1, ..., N_1}, wherein: N_1 is the number of screened feature point pairs, the class label of the i'-th pair is c_i', c_i' = 1, ..., n, and n is the number of classes;
then obtain the homography transformation matrix between the feature point pairs within each class of the set S_1, wherein: the homography transformation matrix between the feature point pairs in class k is H_k, k = 1, ..., n;
step three, according to the homography transformation matrices H_k obtained in step two, estimate the camera focal length value f_k corresponding to the feature point pairs of each class, and then select an initial camera focal length value f_0 according to the values f_k;
step four, according to the set S_1 obtained in step two and the initial camera focal length value f_0 obtained in step three, estimate the camera focal length f and the essential matrix E;
then decompose the essential matrix E to obtain the rotation matrix R and the translation matrix t between the cameras at the two viewing angles;
step five, for the camera that captured image I_p, set its rotation matrix R_p relative to the world coordinate system to the identity matrix and its translation matrix t_p to the zero vector; for the camera that captured image I_q, the rotation matrix relative to the world coordinate system is then R_q = R R_p = R and the translation matrix is t_q = R t_p + t = t;
convert the rotation matrices R_p and R_q into rotation vectors r_p and r_q respectively, calculate the average r̄ of r_p and r_q, and then convert r̄ back into a rotation matrix R̂; R̂ is the compromise rotation matrix;
update the rotation matrices R_p and R_q to R̂_p and R̂_q, and the translation matrices t_p and t_q to t̂_p and t̂_q, obtaining the updated camera poses;
step six, according to the homography transformation matrices H_k obtained in step two, the rotation matrix R and translation matrix t between the cameras obtained in step four, and the compromise rotation matrix R̂ obtained in step five, calculate the normal vector of the plane on which the feature point pairs of each class lie;
then transform the normal vectors into the updated camera poses obtained in step five to obtain the updated normal vector of the plane of each class's feature point pairs;
step seven, according to the updated camera poses obtained in step five and the updated normal vectors calculated in step six, transform images I_p and I_q to obtain the transformed images I'_p and I'_q;
then stitch and fuse the transformed images I'_p and I'_q: in the region where I'_p and I'_q overlap, calculate the mean of the pixel values at each pixel position and take it as the pixel value of the corresponding point; in the non-overlapping regions of I'_p and I'_q, keep the original pixel values; this yields the stitched image I_pq.
2. The image stitching method based on camera pose estimation according to claim 1, wherein in step two the feature point pairs contained in the set S are screened and the classes of the screened feature point pairs are obtained by the following specific process:
step two-one, let S' be the set of feature point pairs remaining to be screened, initialized to the initial feature point pair set S of step one; initialize the set S_1 of screened feature point pairs to the empty set, and initialize the number n of classes of feature point pairs contained in S_1 to 0;
step two-two, extract inliers from the set S' using the RANSAC method; the set of extracted inlier feature point pairs is s_{n+1}, and the homography transformation matrix between the inlier pairs is H_{n+1};
after removing the extracted inlier pairs from S', the set of remaining feature point pairs is the updated set S';
step two-three, if the set s_{n+1} contains 15 or more feature point pairs, assign class label n+1 to the pairs contained in s_{n+1}, add the pairs of s_{n+1} to the set S_1 to obtain the updated S_1, clear s_{n+1}, and add 1 to the class count n;
step two-four, repeat the process of step two-two to step two-three on the updated set S' until s_{n+1} contains fewer than 15 feature point pairs, obtaining the final set S_1 and the class labels of the feature point pairs it contains.
3. The image stitching method based on camera pose estimation according to claim 2, wherein the specific process of the third step is as follows:
is provided withThe camera focal length value f corresponding to the characteristic point pair in the kth classkComprises the following steps:
wherein:andare all HkThe element in (1) is the corresponding camera focal length value f of each type1,f2,...,fk,...,fnAs the initial camera focal length value f0。
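The formula image for f_k did not survive this extraction. As a hedged illustration of the idea only (estimating a focal length directly from homography elements), the standard identity for a rotation-only homography H = K·R·K^{-1} with K = diag(f, f, 1) and the principal point subtracted can be used; whether the patent uses this exact expression is not verifiable from the text:

```python
import numpy as np

def focal_from_rotation_homography(H):
    """Estimate f from H = K R K^-1, K = diag(f, f, 1), principal point at origin.

    Orthogonality of the first two columns of R gives
    h0*h1 + h3*h4 = -r20*r21 and h6*h7 = r20*r21 / f^2, so the ratio below
    is scale-invariant in H and equals f^2 exactly for a noise-free H.
    """
    h = H.ravel()
    denom = h[6] * h[7]
    if abs(denom) < 1e-12:
        return None  # no out-of-plane rotation: f is unobservable from this ratio
    v = -(h[0] * h[1] + h[3] * h[4]) / denom
    return float(np.sqrt(v)) if v > 0 else None
```

For a noise-free rotation homography the estimate recovers f exactly, independent of the arbitrary scale of H.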
4. The image stitching method based on camera pose estimation according to claim 3, wherein in step four the camera focal length f and the essential matrix E are estimated from the set S_1 obtained in step two and the initial camera focal length value f_0 obtained in step three; the specific process is as follows:
Step 4-1: around the initial camera focal length value f_0, sample once every 0.01·f_0 within the range [0.5·f_0, 2·f_0], obtaining the set of camera focal length values F = {f_m = 0.5·f_0 + m × 0.01·f_0, m = 0, 1, ..., 150}, where f_m denotes the camera focal length value of the m-th sample.
Step 4-2: for each camera focal length value f_m in the set F, estimate an essential matrix E_m for the set S_1 and obtain the number of interior points n_m corresponding to f_m.
The epipolar geometry constraint is:

[x_{q_i′}  y_{q_i′}  1] · (K^{-1})^T · E_m · K^{-1} · [x_{p_i′}  y_{p_i′}  1]^T = 0

wherein: E_m is the essential matrix corresponding to f_m; the intermediate variable matrix K = [[f_m, 0, c_x], [0, f_m, c_y], [0, 0, 1]]; c_x and c_y are respectively half the width and half the height of image I_p, and images I_p and I_q have the same width and height; the superscript T denotes the matrix transpose and the superscript −1 denotes the matrix inverse. Taking the lower-left vertex of image I_p as the coordinate origin O, the width direction of I_p as the x-axis, and the height direction of I_p as the y-axis establishes the rectangular coordinate system xOy; (x_{p_i′}, y_{p_i′}) are the coordinates of feature point p_i′ in xOy. Taking the lower-left vertex of image I_q as the origin O′, the width direction of I_q as the x′-axis, and the height direction of I_q as the y′-axis establishes the rectangular coordinate system x′O′y′; (x_{q_i′}, y_{q_i′}) are the coordinates of feature point q_i′ in x′O′y′;
Step 4-3: select the camera focal length value f_m with the largest number of interior points as the camera focal length f, and take the E_m corresponding to that f_m as the essential matrix E of the camera.
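Steps 4-1 and 4-3 amount to a simple grid search over focal length candidates. A minimal sketch follows; the `count_inliers` callback is a hypothetical parameter standing in for the per-candidate essential matrix estimation of step 4-2, and is not part of the patent text:

```python
import numpy as np

def focal_candidates(f0):
    """F = {f_m = 0.5*f0 + m*0.01*f0, m = 0..150}: [0.5*f0, 2*f0] in 0.01*f0 steps."""
    return [0.5 * f0 + m * 0.01 * f0 for m in range(151)]

def select_focal(f0, count_inliers):
    """Pick the candidate with the most inliers.

    count_inliers(f_m) is supplied by the caller; in the claim it would run
    essential matrix estimation over S_1 and count epipolar inliers.
    """
    F = focal_candidates(f0)
    counts = [count_inliers(fm) for fm in F]
    m_best = int(np.argmax(counts))
    return F[m_best], counts[m_best]
```

The sweep is exhaustive over 151 candidates, so the selection is deterministic given the inlier counts.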
5. The image stitching method based on camera pose estimation according to claim 4, wherein the specific process of obtaining the number of interior points n_m corresponding to f_m in step 4-2 is as follows:
Traverse all feature point pairs in the set S_1: if, for a feature point pair (p_i′, q_i′) contained in S_1, the straight-line distance from the point q_i′ to the epipolar line is less than 3 pixels, the pair (p_i′, q_i′) is an interior point of f_m; otherwise, the pair (p_i′, q_i′) is not an interior point of f_m;
The epipolar line equation is:

a·x + b·y + c = 0, with [a, b, c]^T = (K^{-1})^T · E_m · K^{-1} · [x_{p_i′}, y_{p_i′}, 1]^T

wherein: x and y are the variables of the epipolar line equation.
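A sketch of the inlier test in this claim, assuming E_m and the intrinsic parameters are already given: the epipolar line for p_i′ is obtained through the fundamental matrix (K^{-1})^T · E_m · K^{-1}, and a pair counts as an interior point when q_i′ lies within 3 pixels of that line.

```python
import numpy as np

def count_epipolar_inliers(E, f, cx, cy, P, Q, tol=3.0):
    """Count pairs (p_i, q_i) whose q_i lies within tol pixels of the epipolar
    line a*x + b*y + c = 0 given by [a, b, c]^T = K^-T E K^-1 [x_p, y_p, 1]^T."""
    K = np.array([[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]])
    Kinv = np.linalg.inv(K)
    Fm = Kinv.T @ E @ Kinv          # fundamental matrix in pixel coordinates
    n = 0
    for (xp, yp), (xq, yq) in zip(P, Q):
        a, b, c = Fm @ np.array([xp, yp, 1.0])
        d = abs(a * xq + b * yq + c) / np.hypot(a, b)   # point-to-line distance
        n += d < tol
    return int(n)
```

For synthetic correspondences generated from a consistent (R, t), every pair satisfies the constraint exactly and is counted.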
6. The image stitching method based on camera pose estimation according to claim 5, wherein the specific calculation formulas for the updated rotation matrices and updated translation vectors are as follows:
[formulas not reproduced in this extraction]
wherein: R̃_p denotes the updated rotation matrix corresponding to R_p, R̃_q denotes the updated rotation matrix corresponding to R_q, t̃_p denotes the updated translation vector corresponding to t_p, and t̃_q denotes the updated translation vector corresponding to t_q.
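The update formula images for this claim did not survive the extraction, so no attempt is made to reconstruct them here. As related background only (standard multiple-view geometry, not the patent's formula), the rotation and translation candidates encoded by the essential matrix of claim 4 can be recovered by SVD:

```python
import numpy as np

def decompose_essential(E):
    """Return the four (R, t) candidates of an essential matrix.

    The physically valid candidate is normally chosen by a cheirality
    (points-in-front-of-both-cameras) check; that step is omitted here.
    """
    U, _, Vt = np.linalg.svd(E)
    # enforce proper rotations (det = +1)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]                       # translation direction, up to sign
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```

For an exact E = [t]× R with unit t, the true (R, ±t) appears among the four candidates.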
7. The image stitching method based on camera pose estimation according to claim 6, wherein the specific process of the sixth step is as follows:
Let n_k be the normal vector of the plane on which the feature point pairs of the k-th class lie, k = 1, ..., n, satisfying
H_k = K(R + t·n_k^T)K^{-1}
Solve this relation for the normal vector n_k; substituting the updated camera pose obtained in step five then yields the updated normal vector ñ_k of the plane on which the feature point pairs of the k-th class lie.
The updated normal vectors of the planes of the feature point pairs in the other classes are obtained in the same way.
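Given the relation H_k = K(R + t·n_k^T)K^{-1} stated in this claim, the normal can be isolated in closed form, under the simplifying assumption that H_k is available at the correct scale (in practice a homography is known only up to scale and must be normalized first):

```python
import numpy as np

def normal_from_homography(H, K, R, t):
    """Recover the plane normal n from H = K (R + t n^T) K^-1.

    Rearranging: M = K^-1 H K - R = t n^T, an outer product, so
    n = M^T t / ||t||^2. Assumes H carries the correct scale.
    """
    Kinv = np.linalg.inv(K)
    M = Kinv @ H @ K - R
    return M.T @ t / float(t @ t)
```

Re-evaluating this with the updated pose (R̃, t̃) from step five gives the updated normal ñ_k.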
8. The image stitching method based on camera pose estimation according to claim 7, wherein in step seven the images I_p and I_q are transformed according to the updated camera pose obtained in step five and the updated normal vectors calculated in step six, obtaining the transformed images I′_p and I′_q; the specific process is as follows:
Step 7-1: select grid points in the image I_p, with an interval of 40 pixels between adjacent grid points, obtaining the grid point set V = {p′_i″, i″ = 1, ..., N_2}, where N_2 is the number of grid points in V and p′_i″ is the i″-th grid point in V.
Step 7-2: for each grid point p′_i″ in V, calculate the Euclidean distance between p′_i″ and each feature point of image I_p in the feature point set P_1 = {p_i′, i′ = 1, ..., N_1} drawn from S_1; select from P_1 the 5 points closest to p′_i″, and take the mean of the normal vectors of the planes of the 5 selected points as the normal vector of the grid point p′_i″.
Step 7-3: from the normal vector of grid point p′_i″ obtained in step 7-2, calculate the transformation matrix of grid point p′_i″.
Step 7-4: for a pixel point p of image I_p that maps to pixel point p′ in the transformed image I′_p, the transformation matrix from p to p′ is the transformation matrix at the grid point of I_p nearest to the pixel point p. Using the obtained transformation matrices, each pixel point of I_p is transformed into I′_p, yielding the transformed image I′_p.
Step 7-5: in the same way, each pixel point of I_q is transformed into I′_q, yielding the transformed image I′_q.
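Steps 7-1 through 7-5 amount to a piecewise warp in which every pixel borrows the homography of its nearest grid point. A minimal forward-warp sketch follows; the nearest-neighbor splatting and lack of hole filling are simplifications (a real implementation would backward-warp with interpolation), and the per-grid-point homographies `Hs` are assumed to come from step 7-3:

```python
import numpy as np

def make_grid(h, w, step=40):
    """Grid points at a fixed pixel spacing over an h x w image, in (x, y) order."""
    xs = np.arange(0, w, step)
    ys = np.arange(0, h, step)
    return np.array([(x, y) for y in ys for x in xs], float)

def warp_with_grid(img, grid, Hs):
    """Forward-warp: each pixel uses the homography of its nearest grid point."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            # transformation matrix at the grid point nearest to pixel (x, y)
            g = int(np.argmin(np.sum((grid - (x, y)) ** 2, axis=1)))
            u, v, s = Hs[g] @ (x, y, 1.0)
            u, v = int(round(u / s)), int(round(v / s))
            if 0 <= u < w and 0 <= v < h:
                out[v, u] = img[y, x]
    return out
```

With identity homographies at every grid point the warp is the identity map, which makes the assignment logic easy to verify.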
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910978286.3A CN110717936B (en) | 2019-10-15 | 2019-10-15 | Image stitching method based on camera attitude estimation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110717936A true CN110717936A (en) | 2020-01-21 |
CN110717936B CN110717936B (en) | 2023-04-28 |
Family
ID=69211693
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910978286.3A Active CN110717936B (en) | 2019-10-15 | 2019-10-15 | Image stitching method based on camera attitude estimation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110717936B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111325792A (en) * | 2020-01-23 | 2020-06-23 | 北京字节跳动网络技术有限公司 | Method, apparatus, device, and medium for determining camera pose |
CN111429358A (en) * | 2020-05-09 | 2020-07-17 | 南京大学 | Image splicing method based on planar area consistency |
CN111899174A (en) * | 2020-07-29 | 2020-11-06 | 北京天睿空间科技股份有限公司 | Single-camera rotation splicing method based on deep learning |
CN113034362A (en) * | 2021-03-08 | 2021-06-25 | 桂林电子科技大学 | Expressway tunnel monitoring panoramic image splicing method |
CN113327198A (en) * | 2021-06-04 | 2021-08-31 | 武汉卓目科技有限公司 | Remote binocular video splicing method and system |
Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6009190A (en) * | 1997-08-01 | 1999-12-28 | Microsoft Corporation | Texture map construction method and apparatus for displaying panoramic image mosaics |
US6018349A (en) * | 1997-08-01 | 2000-01-25 | Microsoft Corporation | Patch-based alignment method and apparatus for construction of image mosaics |
WO2012058902A1 (en) * | 2010-11-02 | 2012-05-10 | 中兴通讯股份有限公司 | Method and apparatus for combining panoramic image |
EP2518688A1 (en) * | 2009-12-21 | 2012-10-31 | Huawei Device Co., Ltd. | Method and device for splicing images |
WO2012175029A1 (en) * | 2011-06-22 | 2012-12-27 | 华为终端有限公司 | Multi-projection splicing geometric calibration method and calibration device |
US20130329072A1 (en) * | 2012-06-06 | 2013-12-12 | Apple Inc. | Motion-Based Image Stitching |
JP2015046040A (en) * | 2013-08-28 | 2015-03-12 | Kddi株式会社 | Image conversion device |
CN105069750A (en) * | 2015-08-11 | 2015-11-18 | 电子科技大学 | Determination method for optimal projection cylindrical surface radius based on image feature points |
US20170018086A1 (en) * | 2015-07-16 | 2017-01-19 | Google Inc. | Camera pose estimation for mobile devices |
CN106355550A (en) * | 2016-10-31 | 2017-01-25 | 微景天下(北京)科技有限公司 | Image stitching system and image stitching method |
CN106600592A (en) * | 2016-12-14 | 2017-04-26 | 中南大学 | Track long chord measurement method based on the splicing of continuous frame images |
US20170124680A1 (en) * | 2014-10-31 | 2017-05-04 | Fyusion, Inc. | Stabilizing image sequences based on camera rotation and focal length parameters |
CN108122191A (en) * | 2016-11-29 | 2018-06-05 | 成都观界创宇科技有限公司 | Fish eye images are spliced into the method and device of panoramic picture and panoramic video |
CN108317953A (en) * | 2018-01-19 | 2018-07-24 | 东北电力大学 | A kind of binocular vision target surface 3D detection methods and system based on unmanned plane |
CN109005334A (en) * | 2018-06-15 | 2018-12-14 | 清华-伯克利深圳学院筹备办公室 | A kind of imaging method, device, terminal and storage medium |
CN109064404A (en) * | 2018-08-10 | 2018-12-21 | 西安电子科技大学 | It is a kind of based on polyphaser calibration panorama mosaic method, panoramic mosaic system |
WO2018232518A1 (en) * | 2017-06-21 | 2018-12-27 | Vancouver Computer Vision Ltd. | Determining positions and orientations of objects |
CN109767388A (en) * | 2018-12-28 | 2019-05-17 | 西安电子科技大学 | Method, the mobile terminal, camera of image mosaic quality are promoted based on super-pixel |
CN109840884A (en) * | 2017-11-29 | 2019-06-04 | 杭州海康威视数字技术股份有限公司 | A kind of image split-joint method, device and electronic equipment |
CN110120010A (en) * | 2019-04-12 | 2019-08-13 | 嘉兴恒创电力集团有限公司博创物资分公司 | A kind of stereo storage rack vision checking method and system based on camera image splicing |
Non-Patent Citations (1)
Title |
---|
Zhang Fugui et al.: "Fast stitching method for UAV tobacco plant images based on the SIFT algorithm" * |
Also Published As
Publication number | Publication date |
---|---|
CN110717936B (en) | 2023-04-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| CB03 | Change of inventor or designer information | Inventor after: Yang Xianqiang; Zhang Zhihao; Gao Huijun. Inventor before: Zhang Zhihao; Yang Xianqiang; Gao Huijun |
| GR01 | Patent grant | ||