CN114972536B - Positioning and calibrating method for aviation area array swing scanning type camera - Google Patents

Positioning and calibrating method for aviation area array swing scanning type camera

Info

Publication number
CN114972536B
CN114972536B (application CN202210589639.2A)
Authority
CN
China
Prior art keywords
matrix
camera
image
focus
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210589639.2A
Other languages
Chinese (zh)
Other versions
CN114972536A (en)
Inventor
张艳
王涛
张永生
于英
李磊
李力
张磊
宋亮
谭熊
刘少聪
赵祥
郑迎辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Information Engineering University of PLA Strategic Support Force
Original Assignee
Information Engineering University of PLA Strategic Support Force
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Information Engineering University of PLA Strategic Support Force filed Critical Information Engineering University of PLA Strategic Support Force
Priority to CN202210589639.2A priority Critical patent/CN114972536B/en
Publication of CN114972536A publication Critical patent/CN114972536A/en
Application granted granted Critical
Publication of CN114972536B publication Critical patent/CN114972536B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/11 Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G06F17/12 Simultaneous equations, e.g. systems of linear equations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Operations Research (AREA)
  • Algebra (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of aerial remote sensing and unmanned aerial vehicle data processing, and specifically relates to a positioning and calibration method for an aerial area-array swing-scanning camera. First, the spatial relationship between any two images with overlapping coverage is established through image matching, and the corresponding spatial rotations and translations are converted into a route space coordinate system. Next, a four-linear constraint relation is established for two adjacent groups of images in a four-focus tensor model, and the object-point coordinates of the successfully matched points in each four-focus tensor model are calculated with the model as an independent adjustment unit. Finally, a beam method (bundle) adjustment cost function taking the four-focus tensor model as an independent unit is established, and adjustment and calibration are performed on the basis of the four-focus tensor model. The invention can establish a robust and stable local image geometric relationship, overcoming the defect that the geometric distortion of the image grows as the camera sweep angle increases; taking the four-focus tensor model as the independent unit of the adjustment cost function reduces the amount of computation and improves the convergence speed of the adjustment.

Description

Positioning and calibrating method for aviation area array swing scanning type camera
Technical Field
The invention belongs to the technical field of aerial remote sensing and unmanned aerial vehicle data processing, and specifically relates to a positioning and calibration method for an aerial area-array swing-scanning camera.
Background
An aerial area-array swing-scanning camera has a single lens or multiple lenses and adopts an area-array CCD or CMOS device as its detection element. The camera performs large-angle oblique side-view imaging by swinging perpendicular to the flight line, obtaining ground images of wide coverage on both sides of the flight path. It features instantaneous staring imaging, a stable geometric relationship, a large total field of view, a wide observation range, and multi-angle imaging, and is widely used in reconnaissance imaging, oblique photogrammetry, three-dimensional simulation reconstruction, and related fields, as shown in Fig. 1(a) and Fig. 1(b). Typical area-array swing-scanning aerial cameras include the DB-110 series from Goodrich, the camera carried by Raytheon's Global Hawk, the B2-FO camera from FLIR, the dual-band LOROP camera from Israel's ELOP, and the A3-edge camera from Israel's VisionMap. The A3-edge is a new-generation stepping-frame imaging aerial camera that carries twin 300 mm long-focus lenses whose relative position is fixed. The camera swings about a central axis perpendicular to the flight direction for imaging, the No. 0 and No. 1 lenses expose simultaneously, and the swing angle can reach up to 109 degrees. After a swing is completed, the camera quickly swings back to its original position; no imaging takes place during the return swing.
However, area-array swing-scanning aerial images have a small frame size, come in large numbers, have complex route relationships, and suffer severe panoramic distortion under large-obliquity imaging, all of which increase the difficulty of positioning and calibration. Moreover, because the swing angle of the area-array swing-scanning camera is large, the angles between adjacent images change rapidly and differ greatly. The camera is generally equipped only with GNSS, which provides the camera position at imaging time, but carries no IMU attitude measurement and therefore cannot provide the camera angles at imaging time, so the traditional photogrammetric positioning method is not applicable.
Disclosure of Invention
The invention aims to provide a positioning and calibration method for an aerial area-array swing-scanning camera, in order to solve the problem that traditional photogrammetric calibration and positioning methods are not applicable to such cameras.
In order to solve the above technical problems, the technical scheme provided by the invention and its corresponding beneficial effects are as follows:
the invention provides a positioning and calibrating method of an aviation area array swing scanning camera, which comprises the following steps:
1) Acquiring an aerial area array sweeping camera image, and establishing an image retrieval relationship organized according to an aerial line arrangement relationship;
2) Performing feature extraction and feature matching on each obtained image according to the image retrieval relationship to find successful matching points between each image and all images with overlapping relationship;
3) For each image, an imaging model considering camera distortion and calibration parameters is established, and the geometrical relationship between corresponding object points and image points is as follows:
x=PX=K[R|t]X
where P = K[R|t] is the camera matrix, R is a rotation matrix, t is a translation vector, and K is the camera intrinsic parameter matrix; X is the position of an object point; x is the position of the image point;
4) Setting the camera intrinsic matrix K to the identity matrix, and using the successful matching points obtained in step 2) to calculate the rotation and translation relation between any two images with an overlapping relationship, i.e. the rotation matrix R_ij and translation vector t_ij between image i and image j;
5) Defining a route space coordinate system with the first image of the first sweep period of each flight line as reference, and, according to the rotation matrix R_ij and translation vector t_ij obtained in step 4), converting the rotation matrices of all images into the route space coordinate system, thereby obtaining the initial values of the rotation matrices of all images in that system;
6) Acquiring several groups of four-scene images, where one group consists of the images taken by the two cameras of the aerial area-array swing-scanning camera at one swing angle together with the images taken by the two cameras at another swing angle; determining the camera matrices of the four images from the initial values of the rotation matrices in the route space coordinate system; establishing the corresponding four-focus tensor models; and determining the object-point coordinates corresponding to the successful matching points in each four-focus tensor model;
7) Constructing an adjustment cost function taking the four-focus tensor model as the unit; performing adjustment with the control points, successful matching points, and corresponding three-dimensional coordinates in each four-focus tensor model; updating the rotation matrix R and translation vector t of each image, realizing positioning of the aerial area-array swing-scanning camera; and updating the camera intrinsic matrix K, realizing calibration of the aerial area-array swing-scanning camera.
The beneficial effects are as follows: in view of the small frame size, large number of images, complex route relationships, severe panoramic distortion under large-obliquity imaging, and absence of initial attitude angles in area-array swing-scanning aerial imagery, the invention proposes a four-focus tensor model and uses it as an independent unit for positioning and calibration of the area-array swing-scanning aerial camera. Specifically, treating the four-focus tensor model as an independent adjustment unit and calculating the spatial three-dimensional (object-point) coordinates of the many successfully matched points within each model overcomes the defect that the geometric distortion of the image grows with the camera sweep angle. The constraint relations among the rotation matrices within the four-focus tensor model allow mismatched points to be further removed and the camera matrices to be refined, providing more accurate initial estimates for the beam method (bundle) adjustment. Finally, building the adjustment cost function on the four-focus tensor model reduces the amount of computation and improves the convergence speed of the adjustment, avoiding the slow or non-convergent adjustment caused by numerous images with complex relationships.
Further, in step 2), the SIFT feature extraction algorithm is adopted for feature extraction, and the RANSAC algorithm is used to remove gross errors from the feature matching results.
Further, in step 3), the camera intrinsic matrix K is the intrinsic matrix with the skew (warp) parameter s added:

K = \begin{bmatrix} \alpha_x & s & x_0 \\ 0 & \alpha_y & y_0 \\ 0 & 0 & 1 \end{bmatrix}

where α_x and α_y are the geometric distortion parameters of the camera in the x-axis and y-axis directions, respectively; x_0 and y_0 are the principal-point displacement parameters on the camera focal plane.
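The intrinsic matrix with skew described above can be sketched in a few lines. The following is an illustrative NumPy construction, not part of the patent; the numeric parameter values are arbitrary assumptions:

```python
import numpy as np

def intrinsic_matrix(ax, ay, x0, y0, s=0.0):
    """Camera calibration matrix with optional skew (warp) parameter s.
    ax, ay: scale parameters along the x and y axes;
    x0, y0: principal-point displacements on the focal plane."""
    return np.array([[ax, s,  x0],
                     [0., ay, y0],
                     [0., 0., 1.]])

# Map a normalized image point (u, v, 1) to pixel coordinates.
K = intrinsic_matrix(ax=3000.0, ay=3000.0, x0=2048.0, y0=1536.0, s=0.1)
uv = K @ np.array([0.01, -0.02, 1.0])
pixel = uv[:2] / uv[2]
```

With s = 0 this reduces to the general CCD calibration matrix; a nonzero s couples the y component of a ray into its x pixel coordinate.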
Further, in step 4), the rotation matrix R_ij and translation vector t_ij between image i and image j are determined as follows:
4.1) Using at least 5 successfully matched point pairs together with the epipolar (coplanarity) constraint to solve for the essential matrix E, where the epipolar constraint is:

p_2^T E p_1 = 0

where p_1 and p_2 are the coordinates of a pair of successfully matched points on image i and image j, respectively;
4.2) Performing singular value decomposition on the essential matrix E to obtain the rotation matrix R_ij and translation vector t_ij between image i and image j.
Further, in step 5), the relation between the rotation matrices R_i and R_j of image i and image j in the route space coordinate system is:

R_j = R_ij R_i
further, in step 6), the following method is adopted to build a corresponding quadric focus tensor model:
6.1 The camera matrix of the four-scene image is a camera matrix A, a camera matrix B, a camera matrix C and a camera matrix D respectively, a group of corresponding points, namely image points X, image points X', exist on the four-scene image on the basis of a certain object point X, and four-focus tensors are defined as follows:
Figure BDA0003664579670000033
in which Q pqrs Tensor notation for a four focus tensor; a, a i Is the ith row of camera matrix a; b q Is the q-th row of the camera matrix B; c r Is the r-th row of the camera matrix C; d, d s Is the s-th row of the camera matrix D;
6.2 Deriving a four-linear constraint relation of four-focus tensors according to the projection relation existing among the four-scene images;
the projection relation existing among the four images is as follows:
Figure BDA0003664579670000041
wherein k, k ', k ", and k'" are all uncertain proportionality constants;
the four-wire constraint relationship is:
x i x′ j x″ k x″′ l ε ipw ε jqx ε kry ε lsz Q pqrs =0 wxyz
wherein, subscripts w, x, y and z are free variables; epsilon ipw 、ε jqx 、ε kry And epsilon lsz Is the vector product between different vectors;
6.3 Determining a camera matrix A, a camera matrix B, a camera matrix C and a camera matrix D based on the initial value of the rotation matrix and the initial value of the position information of the image in the step 5) under the space coordinate system of the route so as to establish a four-focus tensor model.
Further, the object-point coordinates corresponding to the successfully matched points in each four-focus tensor model are determined as follows:
6.4) Obtaining the projection relations between the object point X and the image points x, x', x'', x''':

k x = AX,  k' x' = BX,  k'' x'' = CX,  k''' x''' = DX

6.5) Let the plane coordinates of image point x be (x, y), of image point x' be (x', y'), of image point x'' be (x'', y''), and of image point x''' be (x''', y'''). Eliminating the homogeneous scale factor in each equation of step 6.4) by cross multiplication turns each equation into three equations, giving twelve equations in total:

x (A^{3T} X) - (A^{1T} X) = 0
y (A^{3T} X) - (A^{2T} X) = 0
x (A^{2T} X) - y (A^{1T} X) = 0

x' (B^{3T} X) - (B^{1T} X) = 0
y' (B^{3T} X) - (B^{2T} X) = 0
x' (B^{2T} X) - y' (B^{1T} X) = 0

x'' (C^{3T} X) - (C^{1T} X) = 0
y'' (C^{3T} X) - (C^{2T} X) = 0
x'' (C^{2T} X) - y'' (C^{1T} X) = 0

x''' (D^{3T} X) - (D^{1T} X) = 0
y''' (D^{3T} X) - (D^{2T} X) = 0
x''' (D^{2T} X) - y''' (D^{1T} X) = 0

where A^{iT}, B^{iT}, C^{iT} and D^{iT} denote the transposes of the i-th rows of camera matrix A, camera matrix B, camera matrix C and camera matrix D, respectively;
6.6) Taking the first two of every three equations in step 6.5) gives eight equations in total, from which the following system is constructed:

NX - L = 0

where N is the 8×3 coefficient matrix formed from the first three components of the rows x A^{3T} - A^{1T}, y A^{3T} - A^{2T}, x' B^{3T} - B^{1T}, y' B^{3T} - B^{2T}, x'' C^{3T} - C^{1T}, y'' C^{3T} - C^{2T}, x''' D^{3T} - D^{1T} and y''' D^{3T} - D^{2T}, and L is the corresponding 8×1 vector of negated fourth components;
6.7) Solving the system constructed in step 6.6) to obtain the object-point coordinates X.
Further, in step 6.7), the system constructed in step 6.6) is solved using the least squares method.
Further, in step 7), the adjustment is performed with the beam method adjustment and calibration method based on the four-focus tensor model. Suppose a group of corresponding points of a certain object point X on the four images are image point x, image point x', image point x'' and image point x''';
the adjustment cost function based on the four-focus tensor model is:

f(z) = \sum_{i=1}^{4} \left[ (x_i - \hat{x}_i)^2 + (y_i - \hat{y}_i)^2 \right]

where f(z) is the adjustment cost function, i.e. the BA cost function; (x_i, y_i), i ∈ {1, 2, 3, 4}, are the observed image coordinates of the four image points x, x', x'' and x'''; R_i is the corresponding rotation matrix; t_i is the corresponding translation vector; (x̂_i, ŷ_i) are the calculated image-point coordinates obtained by projecting the object point with the current parameters; and z is the vector of unknown parameters, comprising the camera intrinsic matrix, the rotation matrices, the translation vectors and the object-point coordinates;
correspondingly, when positioning and calibration are performed with the beam method adjustment and calibration method, the LM algorithm is used to optimize the BA cost function:

F(z) = \sum_{k=1}^{n} f_k(z)

where n is the total number of control points and successful matching points participating in the adjustment and calibration operation, i.e. the number of listed BA cost functions, and k is the index of an individual listed BA cost function.
Further, the beam method adjustment and calibration method based on the four-focus tensor model is a three-stage beam adjustment positioning and calibration method, whose three stages are as follows: in the first stage, the camera intrinsic matrix K is set to the identity matrix E and the rotation matrices R_i are held fixed; only the translation vectors t_i and the object-point coordinates participate in the beam method adjustment positioning and calibration. In the second stage, the intrinsic matrix K remains the identity matrix E, but the translation vectors t_i, the rotation matrices R_i and the object-point coordinates all participate in the beam method adjustment positioning and calibration. In the third stage, all unknowns, including the camera intrinsic matrix K, the translation vectors t_i, the rotation matrices R_i and the object-point coordinates, participate in the beam method adjustment positioning and calibration.
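Stage one of the three-stage scheme can be illustrated compactly: with K fixed to the identity and the rotations held constant, the cross-multiplied collinearity equations contain no products of unknowns, so each camera's translation can be recovered by linear least squares. The sketch below is an illustrative reading, not the patent's implementation; to stay short it additionally holds the object points fixed at control-point values and uses synthetic data:

```python
import numpy as np

def solve_translation(R, pts3d, obs):
    """Stage-1 linear recovery of one camera's translation t, with K = I and
    the rotation R held fixed: each observed (u, v) of a known object point X
    gives two equations linear in t:
        t_x - u * t_z = u * (r3 . X) - r1 . X
        t_y - v * t_z = v * (r3 . X) - r2 . X
    where r1, r2, r3 are the rows of R."""
    A, b = [], []
    for X, (u, v) in zip(pts3d, obs):
        r1X, r2X, r3X = R @ X
        A.append([1.0, 0.0, -u]); b.append(u * r3X - r1X)
        A.append([0.0, 1.0, -v]); b.append(v * r3X - r2X)
    t, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return t

# Synthetic check: project known points with a known camera, then recover t.
rng = np.random.default_rng(0)
R = np.eye(3)                            # stage 1 keeps rotations fixed anyway
t_true = np.array([0.5, -0.2, 1.0])
pts3d = rng.uniform(-1, 1, (6, 3)) + np.array([0.0, 0.0, 5.0])  # in front
proj = pts3d @ R.T + t_true
obs = proj[:, :2] / proj[:, 2:3]         # ideal image observations (K = I)
t_est = solve_translation(R, pts3d, obs)
```

In the later stages, where rotations and intrinsics also vary, the problem becomes nonlinear and the LM optimization described above takes over.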
Drawings
FIG. 1(a) is a schematic diagram of the swing-scanning imaging relationship of an aerial area-array swing-scanning camera;
FIG. 1(b) is a schematic diagram of the image relationship between adjacent swing periods of an aerial area-array swing-scanning camera;
FIG. 2 is a flow chart of the aerial area-array swing-scanning camera positioning and calibration method of the present invention;
FIG. 3 is a schematic diagram of the epipolar geometry between two images;
FIG. 4 is a schematic diagram of the four-focus tensor model.
Detailed Description
The invention provides a four-focus tensor model and performs the adjustment with this model as an independent adjustment unit, so as to realize positioning and calibration of large-obliquity area-array aerial imagery. The present invention is described in detail below with reference to the accompanying drawings and embodiments.
Embodiment of the aerial area-array swing-scanning camera positioning and calibration method: the overall flow of the embodiment is shown in Fig. 2 and is described in detail below.
Firstly, acquire the aerial area-array swing-scanning camera images, arrange all the large-obliquity area-array aerial images to be processed according to their imaging times and POS data, and establish an image retrieval relationship organized according to the flight-line relationships.
Secondly, extract feature operators and feature descriptors from each image according to the image retrieval relationship using the SIFT feature extraction algorithm, perform iterative feature matching using the Hamming distance, remove the large number of gross errors in the matching results with the RANSAC algorithm, and find the successful matching points between each image and all images with an overlapping relationship. The specific process is as follows:
1. Carry out initial matching, randomly select 4 groups of matching point pairs from the initial matching results as an initial sample, calculate the homography matrix of the initial sample, and denote the parameter model corresponding to this matrix as M.
2. Substitute the other matching points into the parameter model M of step 1 in turn; if a matching point fits the model well, regard it as an inlier under this model, otherwise as an outlier, and count the number of inliers under the model as sum.
3. Repeat the above steps k times, with the number of iterations k given by formula (1); select the model with the largest number of inliers among the k models as the correct estimated model, and take the inliers under this model as the correct matching points, forming the final matching result.
k = log(1 - p) / log(1 - w^n)    (1)

where n is the number of matching point pairs in the initial sample, n = 4 in this embodiment; p is the probability that at least one of the k random n-pair samples consists entirely of correct matching points; w is the probability that a point pair selected from all matching point pairs is a correct matching point.
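As a sanity check on formula (1), a minimal helper (illustrative, not from the patent) computes the required iteration count for typical values:

```python
import math

def ransac_iterations(p, w, n):
    """Number of RANSAC iterations k such that, with probability p, at least
    one of the k random n-point samples is outlier-free, given inlier
    ratio w:  k = log(1 - p) / log(1 - w**n)."""
    return math.ceil(math.log(1 - p) / math.log(1 - w ** n))

# Homography estimation (n = 4 point pairs), 50% inliers, 99% confidence.
k = ransac_iterations(p=0.99, w=0.5, n=4)
```

The count grows quickly with the sample size n, which is why the minimal sample (4 pairs for a homography) is used.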
Thirdly, for each area-array swing-scanned image, establish an imaging model considering camera distortion and calibration parameters based on the pinhole imaging model.
The area-array swing-scanning aerial camera is described by a pinhole camera model: a three-dimensional point P(X, Y, Z) in object space is projected through the pinhole onto the image plane to form an image point p(x, y, -f). The geometric relationship between object point and image point is:
x=PX=K[R|t]X (2)
where R is the attitude matrix, a 3×3 rotation matrix representing the orientation of the camera coordinate system; t is the translation vector; R and t are called the camera's extrinsic parameters; K is the camera intrinsic parameter matrix, also called the camera calibration matrix; P = K[R|t] is the camera matrix. The positioning and calibration of the area-array swing-scanning aerial camera is the process of determining R, t and K.
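The projection x = PX = K[R|t]X of formula (2) can be sketched directly. This is an illustrative NumPy fragment with arbitrary assumed parameter values, not the patent's implementation:

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection x = K [R | t] X for a Euclidean object point X;
    returns inhomogeneous pixel coordinates after perspective division."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

K = np.array([[1000., 0., 500.],
              [0., 1000., 400.],
              [0., 0., 1.]])
R, t = np.eye(3), np.array([0., 0., 10.])
uv = project(K, R, t, np.array([1.0, 2.0, 0.0]))
```

Determining R, t and K from observations is exactly the positioning and calibration task the following steps address.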
The general form of the calibration matrix of a CCD camera is:

K = \begin{bmatrix} \alpha_x & 0 & x_0 \\ 0 & \alpha_y & y_0 \\ 0 & 0 & 1 \end{bmatrix}    (3)

where α_x and α_y are the geometric distortion parameters of the camera in the x-axis and y-axis directions, respectively; x_0 and y_0 are the principal-point displacement parameters on the camera focal plane.
To increase generality, the skew (warp) parameter s can be added; the calibration matrix with s added is:

K = \begin{bmatrix} \alpha_x & s & x_0 \\ 0 & \alpha_y & y_0 \\ 0 & 0 & 1 \end{bmatrix}    (4)
step four, based on the successful matching points obtained in the step two, calculating the spatial rotation and translation relationship between two images with overlapping relationship, namely, the gesture rotation matrix R between the image i and the image j ij And a translation matrix t ij
If there are successfully matched point pairs on two images with overlapping degree, as shown in FIG. 3, image I 1 、I 2 The camera centers are O respectively 1 、O 2 Ray of light O 1 p 1 And light ray O 2 p 2 Intersecting at points P, O in three-dimensional space 1 、O 2 Determining nuclear surface by three points P, O 1 、O 2 Connection line and image plane I 1 、I 2 The intersection points of (a) are respectively the nuclear points e 1 、e 2 The intersection line of the nuclear plane and the two image planes is a nuclear line l 1 、l 2 Image I 1 Intermediate point p 1 And image I 2 Intermediate point p 2 To successfully match the point pairs, the image I can be estimated using the successful matching point pairs 1 To image I 2 Is a rotation matrix R of (2) ij And a translation matrix t ij
The image points p_1 and p_2 satisfy the epipolar constraint:

p_2^T E p_1 = 0    (5)

Its geometric meaning is that O_1, P and O_2 are coplanar. E is called the essential matrix, E = [t]_× R, i.e. the product of the skew-symmetric matrix of t with the rotation R, so the epipolar constraint contains both translation and rotation. The fundamental matrix F can be obtained from the essential matrix E as F = K^{-T} E K^{-1}. The epipolar constraint gives the spatial position relation of two matched points: the matching points can be used to solve for the essential matrix E or the fundamental matrix F according to the epipolar constraint, which is then decomposed to determine R_ij and t_ij, i.e. the relative rotation and translation between the two images.
The specific solving process is as follows:
1. Let the normalized coordinates of the image points p_1 and p_2 be x_1 = (u_1, v_1, 1)^T and x_2 = (u_2, v_2, 1)^T, and write the essential matrix as:

E = \begin{bmatrix} e_1 & e_2 & e_3 \\ e_4 & e_5 & e_6 \\ e_7 & e_8 & e_9 \end{bmatrix}
The corresponding epipolar constraint form is:

x_2^T E x_1 = 0    (6)
2. Expanding formula (6) gives the linear equation:

u_2 u_1 e_1 + u_2 v_1 e_2 + u_2 e_3 + v_2 u_1 e_4 + v_2 v_1 e_5 + v_2 e_6 + u_1 e_7 + v_1 e_8 + e_9 = 0    (7)
3. The essential matrix E describes the translational and rotational variation and nominally has 6 degrees of freedom, but because of scale equivalence its actual degree of freedom is 5, so in principle at least 5 matched point pairs suffice. Here the classical eight-point algorithm (Eight-Point-Algorithm), which uses at least 8 successfully matched point pairs, is adopted to determine the E matrix.
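The linear step of the eight-point method, one row of the expanded constraint x_2^T E x_1 = 0 per correspondence followed by an SVD, can be sketched as follows. This is illustrative NumPy code on synthetic, noise-free correspondences; the pose values are arbitrary assumptions:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]x such that [t]x v = t x v."""
    return np.array([[0., -t[2], t[1]],
                     [t[2], 0., -t[0]],
                     [-t[1], t[0], 0.]])

def essential_linear(x1, x2):
    """Eight-point estimate of E: one coefficient row per correspondence
    (normalized coordinates), then the right singular vector associated
    with the smallest singular value, reshaped to 3x3."""
    rows = [[u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
            for (u1, v1), (u2, v2) in zip(x1, x2)]
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 3)

# Synthetic correspondences from a known relative pose.
rng = np.random.default_rng(1)
th = 0.1
R_true = np.array([[np.cos(th), 0., np.sin(th)],
                   [0., 1., 0.],
                   [-np.sin(th), 0., np.cos(th)]])
t_true = np.array([1.0, 0.2, 0.1])
E_true = skew(t_true) @ R_true
X = rng.uniform(-1, 1, (10, 3)) + np.array([0., 0., 6.])  # points in front
x1 = X[:, :2] / X[:, 2:3]
X2 = X @ R_true.T + t_true
x2 = X2[:, :2] / X2[:, 2:3]
E_est = essential_linear(x1, x2)
```

With exact data the recovered E matches the true essential matrix up to scale and sign; with noisy data the estimate is additionally projected onto the essential-matrix manifold before decomposition.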
4. After the essential matrix E is obtained, perform singular value decomposition (SVD) on it to obtain R_ij and t_ij, determining the relative rotation and translation of the two images.
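The SVD decomposition of E yields four (R, t) candidates. The following is a minimal sketch with an assumed synthetic pose; the cheirality (points-in-front-of-both-cameras) test that selects the physically valid candidate is omitted for brevity:

```python
import numpy as np

def decompose_essential(E):
    """SVD factorization of an essential matrix into its four (R, t)
    candidates: R in {U W V^T, U W^T V^T}, t = +/- third column of U.
    The valid candidate is chosen afterwards by a cheirality test."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.],
                  [1., 0., 0.],
                  [0., 0., 1.]])
    t = U[:, 2]
    return [(U @ W @ Vt, t), (U @ W @ Vt, -t),
            (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]

def skew(t):
    return np.array([[0., -t[2], t[1]], [t[2], 0., -t[0]], [-t[1], t[0], 0.]])

# Self-check on an assumed pose; t is unit length since scale is lost anyway.
th = 0.2
R_true = np.array([[np.cos(th), -np.sin(th), 0.],
                   [np.sin(th),  np.cos(th), 0.],
                   [0., 0., 1.]])
t_true = np.array([0.6, 0.8, 0.0])
candidates = decompose_essential(skew(t_true) @ R_true)
```

The translation is recovered only up to scale, which is why the method later relies on GNSS positions and control points to fix the absolute scale.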
Fifthly, define a route space coordinate system with the first cam0 image of the first swing period of each flight line as reference; based on the calculation results of step four, convert the rotation matrices of all images into the route space coordinate system, obtaining the initial values of the rotation matrices of all images in that system.
Let the rotation matrix of image i in the route space coordinate system be R_i, and the rotation matrix between image j and image i obtained in step four be R_ij; then the rotation matrix R_j of image j in the route space coordinate system is:

R_j = R_ij R_i    (8)
step six, obtaining a plurality of groups of four-scene images, establishing corresponding four-focus tensor models, and calculating object point coordinates (three-dimensional coordinates) corresponding to successful matching points in each four-focus tensor model.
For a VisionMap camera, at a certain swing angle the cam0 and cam1 cameras obtain images I and I', whose projection centres are C and C', corresponding to projection planes Π and Π'. The swing angle then changes, and at the next imaging instant the cam0 and cam1 cameras obtain images I'' and I''', with projection planes Π'' and Π''' and projection centres C'' and C'''. A straight line L in space is imaged on each of the four image views, as shown in Fig. 4. The object-point coordinates are calculated as follows:
1. Let the camera matrices of the four image views be A, B, C and D. A spatial three-dimensional point X (also referred to as object point X) imaged on the four views gives a group of corresponding points x, x', x'' and x''' across the four views. The four views form a four-focus tensor model, and the projection equation relations among them are:

k x = AX,  k' x' = BX,  k'' x'' = CX,  k''' x''' = DX    (9)
where k, k', k'' and k''' are undetermined proportionality constants. Denote the p-th row of matrix A by a^p and the q-th row of matrix B by b^q; the rows of matrices C and D are denoted analogously.
2. The four-dimensional four-focus tensor is defined as:

Q^{pqrs} = \det \begin{bmatrix} a^p \\ b^q \\ c^r \\ d^s \end{bmatrix}    (10)

where a^p is the p-th row of camera matrix A; b^q is the q-th row of camera matrix B; c^r is the r-th row of camera matrix C; d^s is the s-th row of camera matrix D.
3. The four-linear relation of the four-focus tensor is derived from projection relation (9) as follows:

x^i x'^j x''^k x'''^l ε_{ipw} ε_{jqx} ε_{kry} ε_{lsz} Q^{pqrs} = 0_{wxyz}    (11)

where w, x, y and z are free variables, and ε_{ipw}, ε_{jqx}, ε_{kry} and ε_{lsz} are permutation (Levi-Civita) symbols representing the vector products between the different vectors.
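The determinant definition of the four-focus tensor and the four-linear point constraint can be checked numerically. The sketch below is illustrative, using random camera matrices and an assumed object point, and follows the standard quadrifocal-tensor construction from multiple view geometry:

```python
import numpy as np

def quadrifocal_tensor(A, B, C, D):
    """Q[p,q,r,s] = det of the 4x4 matrix stacking row p of A, row q of B,
    row r of C and row s of D (each camera matrix is 3x4)."""
    Q = np.empty((3, 3, 3, 3))
    for p in range(3):
        for q in range(3):
            for r in range(3):
                for s in range(3):
                    Q[p, q, r, s] = np.linalg.det(
                        np.stack([A[p], B[q], C[r], D[s]]))
    return Q

def levi_civita3():
    """3x3x3 permutation symbol epsilon_{ijk}."""
    e = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        e[i, j, k] = 1.0   # even permutations
        e[i, k, j] = -1.0  # odd permutations
    return e

# Check the four-linear constraint on a synthetic configuration.
rng = np.random.default_rng(2)
A = np.hstack([np.eye(3), np.zeros((3, 1))])
B, C, D = (rng.standard_normal((3, 4)) for _ in range(3))
Xh = np.array([0.3, -0.2, 4.0, 1.0])             # homogeneous object point
x0, x1, x2, x3 = (P @ Xh for P in (A, B, C, D))  # its four image points
Q = quadrifocal_tensor(A, B, C, D)
e = levi_civita3()
res = np.einsum('i,j,k,l,ipw,jqx,kry,lsz,pqrs->wxyz',
                x0, x1, x2, x3, e, e, e, e, Q)   # should vanish
```

For image points of a common object point, every component of the contracted result vanishes, which is exactly the constraint used to reject mismatched points within the model.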
4. Based on the initial values of the rotation matrices in the route space coordinate system from step five and the initial position information provided by the POS, establish the camera matrices A, B, C and D, and thereby the four-focus tensor model.
The four-linear constraint relation can then be used to further remove mismatched points within the four-focus tensor model and refine camera matrices A, B, C and D, providing more accurate initial estimates for the beam method adjustment. Using the camera matrices A, B, C, D and the image-point coordinates of the successful matching points, perform triangulation to calculate the three-dimensional model coordinates of the successful matching points in the route space coordinate system.
5. The corresponding points of the spatial three-dimensional point X on the four image views are x, x', x'' and x''', and the following projection relations exist between the three-dimensional point and the image points:

k x = AX,  k' x' = BX,  k'' x'' = CX,  k''' x''' = DX    (12)
6. Let the plane coordinates of image point x be (x, y), of image point x′ be (x′, y′), of image point x″ be (x″, y″), and of image point x‴ be (x‴, y‴). Cross-multiplying each equation in (12) to eliminate the homogeneous scale factor yields three equations per view. Taking the first equation in (12) as an example:

x(A^{3T}X) − A^{1T}X = 0
y(A^{3T}X) − A^{2T}X = 0
x(A^{2T}X) − y(A^{1T}X) = 0    (13)

of which two are linearly independent. Here A^{iT} denotes the transpose of the i-th row of camera matrix A; the rows of camera matrices B, C and D are denoted analogously.
7. Taking the first two equations of (13) and processing the remaining image points in the same manner, the four image points x, x′, x″ and x‴ yield 8 equations in total, assembled as:

N X − L = 0    (14)

where N is the 8×3 coefficient matrix formed from the camera-matrix rows and image coordinates, X is the vector of the three unknown object-space coordinates of the point, and L is the corresponding 8×1 constant vector.

8. Equation (14) is solved by least squares, giving the spatial three-dimensional point coordinates of the successfully matched points in the four-focus tensor model.
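Steps 5–8 above are a linear (DLT-style) triangulation. A minimal sketch, assuming numpy; it stacks the first two cross-product equations of each view and solves the homogeneous system by SVD, an equivalent least-squares formulation of the inhomogeneous NX − L = 0 system:

```python
import numpy as np

def triangulate(cams, pts):
    """Triangulate one 3D point from its projections in several views.

    cams: list of 3x4 camera matrices; pts: list of (x, y) image coordinates.
    Stacks the first two cross-product equations per view,
      x (P3 . X) - (P1 . X) = 0  and  y (P3 . X) - (P2 . X) = 0,
    and solves the homogeneous system by SVD."""
    rows = []
    for P, (x, y) in zip(cams, pts):
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]                    # null-space direction = homogeneous point
    return X[:3] / X[3]           # dehomogenize to (X, Y, Z)
```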
Step seven: an adjustment cost function taking the four-focus tensor model as the unit is constructed, bundle adjustment (BA) processing is performed, the rotation matrix and translation vector of each image are updated, and the camera intrinsic matrix is calibrated, thereby realizing the positioning and calibration of the aviation area array sweeping camera.
The cost function of the bundle adjustment based on the four-focus tensor model is defined as:

f(z) = Σ_{i=1}^{4} [ (x_i − x̂_i)² + (y_i − ŷ_i)² ]    (15)

where (x_i, y_i), i ∈ {1, 2, 3, 4}, are the observed coordinates of the image points corresponding to the four image points x, x′, x″ and x‴; R_i, i ∈ {1, 2, 3, 4}, is the corresponding rotation matrix; t_i, i ∈ {1, 2, 3, 4}, is the corresponding translation vector; (x̂_i, ŷ_i) are the computed image point coordinates; z is the unknown parameter vector, comprising the camera intrinsic matrix, the rotation matrices, the translation vectors and the three-dimensional coordinates of the target spatial points.
The error between the observed and computed image point coordinates is the ground-point reprojection error; it reflects the magnitude of the errors contained in the rotation matrices, translation parameters, camera calibration parameters and ground-point coordinates. Minimizing the cost function yields the optimal solution for the pose parameters, calibration parameters and spatial point three-dimensional coordinates.
The bundle adjustment process is the optimization of the nonlinear least-squares BA cost function of equation (16):

F(z) = min_z Σ_{k=1}^{n} ||f_k(z)||²    (16)

where n is the total number of BA cost terms, i.e. the number of all control points and successfully matched points participating in the adjustment and calibration operation, and k is the index of an individual BA cost term.
In this embodiment, the BA cost function is solved with the Levenberg-Marquardt (LM) algorithm. LM approximates the original nonlinear cost function (16) by a sequence of regularized linear problems; with J(z) the Jacobian of f(z), each iteration updates the solution of the linear least-squares problem:

δ* = argmin_δ ||f(z) + J(z)δ||² + λ||D(z)δ||²    (17)

where δ is the increment of the unknown parameters z in each iteration; D(z) is a square-root matrix of J(z)ᵀJ(z); and λ is a regularization parameter adjusted according to how well J(z) approximates f(z). If ||f(z + δ*)|| < ||f(z)||, the unknown parameters are updated: z ← z + δ*. Solving equation (17) is equivalent to solving the following normal equation:

(JᵀJ + λDᵀD)δ = −Jᵀf    (18)

where (JᵀJ + λDᵀD) is called the augmented (extended) Hessian matrix.
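A toy illustration of one LM update of equation (18) (numpy assumed; `lm_step` is an illustrative helper, with D chosen as the square root of diag(JᵀJ) as described above):

```python
import numpy as np

def lm_step(f, J, z, lam):
    """One Levenberg-Marquardt step: solve (J^T J + lam D^T D) delta = -J^T f
    with D^T D = diag(J^T J), and accept the step only if the cost decreases."""
    r = f(z)
    Jz = J(z)
    JtJ = Jz.T @ Jz
    D2 = np.diag(np.diag(JtJ))                       # D^T D = diag(J^T J)
    delta = np.linalg.solve(JtJ + lam * D2, -Jz.T @ r)
    z_new = z + delta
    if np.linalg.norm(f(z_new)) < np.linalg.norm(r):
        return z_new, True                           # step accepted
    return z, False                                  # step rejected
```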
In the bundle adjustment, the unknown parameter vector z comprises two parts, the camera pose and calibration parameters z_c and the spatial three-dimensional point coordinate parameters z_p, i.e. z = [z_c; z_p]. Similarly, D, δ and J are partitioned into two parts, D = [D_c; D_p], δ = [δ_c; δ_p], J = [J_c, J_p]. Let

U = J_cᵀJ_c + λD_cᵀD_c,  V = J_pᵀJ_p + λD_pᵀD_p,  W = J_cᵀJ_p,

then equation (18) can be written as the block linear system:

[ U   W ] [δ_c]   [ −J_cᵀf ]
[ Wᵀ  V ] [δ_p] = [ −J_pᵀf ]    (19)
Applying Gaussian elimination to equation (19) eliminates the spatial point three-dimensional coordinate parameters, giving a linear system containing only the camera parameters:

(U − W V⁻¹ Wᵀ) δ_c = −J_cᵀf + W V⁻¹ J_pᵀf    (20)

Solving equation (20) yields δ_c; back-substitution then gives:

δ_p = −V⁻¹ (J_pᵀf + Wᵀ δ_c)    (21)
iterative loop solution delta c And delta p Delta after two cycles c And delta p The high precision can be achieved, and the calculation is finished.
In the photogrammetry process, large systematic errors are mainly caused by the pose parameters, while the remaining small systematic errors of both aerial and satellite cameras are caused by the camera intrinsic parameters. The invention therefore provides a three-stage bundle adjustment positioning and calibration process: the rotation matrices and translation vectors are processed first, and the intrinsic matrix last. In the first stage, the intrinsic matrix K is set to the identity matrix E and the rotation matrices R_i are kept unchanged; only the translation vectors t_i and the object point coordinates are processed in the adjustment positioning and calibration. In the second stage, the intrinsic matrix K remains E, but the translation vectors t_i, rotation matrices R_i and object point coordinates all participate in the adjustment positioning and calibration. In the third stage, all unknowns, including the intrinsic matrix K, the translation vectors t_i, the rotation matrices R_i and the object point coordinates, participate in the bundle adjustment positioning and calibration.
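The three-stage schedule can be summarized as a list of free-parameter masks handed to the adjustment at each stage. A schematic sketch (hypothetical data structure, not the patent's implementation):

```python
# Schematic three-stage schedule: each stage lists which parameter groups
# are free during that round of bundle adjustment (True = adjusted).
STAGES = [
    # stage 1: K fixed to identity, R fixed; only t and object points move
    {"K": False, "R": False, "t": True, "points": True},
    # stage 2: K still identity; t, R and object points all adjusted
    {"K": False, "R": True, "t": True, "points": True},
    # stage 3: every unknown, including the intrinsic matrix K, is adjusted
    {"K": True, "R": True, "t": True, "points": True},
]

def free_groups(stage):
    """Return the names of the parameter groups adjusted in a stage."""
    return [name for name, free in stage.items() if free]
```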
This completes the four-focus-tensor-based positioning and calibration processing flow for the area-array sweeping large-inclination aerial camera.
In summary, in view of the problems that the area-array sweeping large-inclination aerial camera produces images of small format and large number, with complex route relationships, severe panoramic distortion and no initial attitude angle under large-inclination imaging, the invention proposes a four-focus tensor model and performs the positioning and calibration processing of the area-array sweeping aerial camera with the four-focus tensor model as an independent unit. Using the four-focus tensor model as an independent adjustment unit and computing the spatial three-dimensional coordinates of the many successfully matched points in each model counteracts the effect that the larger the camera sweep angle, the greater the geometric distortion of the image. By using the constraint relations among the rotation matrices in the four-focus tensor model, mismatched points can be further removed, the camera matrices A, B, C and D refined, and more accurate initial estimates provided for the bundle adjustment. Finally, building the adjustment cost function on the four-focus tensor model reduces the adjustment computation, improves the convergence speed, and avoids problems such as slow or non-convergent adjustment caused by the large number of images and their complex relationships.

Claims (10)

1. The positioning and calibrating method for the aircraft area array swing scanning type camera is characterized by comprising the following steps of:
1) Acquiring an aerial area array sweeping camera image, and establishing an image retrieval relationship organized according to an aerial line arrangement relationship;
2) Performing feature extraction and feature matching on each obtained image according to the image retrieval relationship to find successful matching points between each image and all images with overlapping relationship;
3) For each image, an imaging model considering camera distortion and calibration parameters is established, and the geometrical relationship between corresponding object points and image points is as follows:
x=PX=K[R|t]X
wherein, P=K [ R|t ] is a camera matrix, R is a rotation matrix, t is a translation vector, K is a camera internal parameter matrix; x is the position of an object point; x is the position of the image point;
4) Setting the camera intrinsic matrix K as an identity matrix, and calculating the rotation and translation relation between every two images with an overlapping relationship, namely the rotation matrix R_ij and translation vector t_ij between image i and image j, by using the successfully matched points obtained in step 2);
5) Defining a route space coordinate system by taking the first image of the first sweeping period of each route as the reference, and converting the rotation matrices of all images into the route space coordinate system according to the rotation matrix R_ij and translation vector t_ij between image i and image j obtained in step 4), thereby obtaining the initial values of the rotation matrices of all images in the route space coordinate system;
6) Acquiring a plurality of groups of four-scene images, wherein one group of four-scene images are respectively images shot by two cameras in an aviation area array swaying camera under a certain swaying angle and images shot by two cameras under another swaying angle, determining a camera matrix of the four-scene images according to initial values of rotation matrices of all the images under an air line space coordinate system, further establishing corresponding four-focus tensor models, and determining object point coordinates corresponding to successful matching points in each four-focus tensor model;
7) Constructing an adjustment cost function taking the four-focus tensor model as the unit, performing adjustment processing by using the control points, the successfully matched points in the four-focus tensor models and the corresponding three-dimensional coordinates, updating the rotation matrix R and translation vector t of each image to realize the positioning of the aerial area array sweeping camera, and updating the camera intrinsic matrix K to realize the calibration of the aerial area array sweeping camera.
2. The method for positioning and calibrating an aircraft area array sweeping camera according to claim 1, wherein in step 2), the SIFT feature extraction algorithm is adopted to extract features, and the RANSAC algorithm is used to remove gross errors from the feature matching results.
3. The method for positioning and calibrating an aircraft area array pan-tilt camera according to claim 1, wherein in step 3), the camera intrinsic matrix K is an intrinsic matrix with a distortion (skew) parameter s added:

K = [ α_x   s    x_0 ]
    [  0    α_y  y_0 ]
    [  0    0    1   ]

wherein α_x and α_y are the geometric distortion parameters of the camera in the x-axis and y-axis directions respectively; x_0 and y_0 are the principal point displacement parameters on the camera focal plane respectively.
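For illustration, the intrinsic matrix of this claim can be assembled directly (numpy assumed; the function name and example values are illustrative):

```python
import numpy as np

def intrinsic_matrix(ax, ay, s, x0, y0):
    """Camera intrinsic matrix with skew/distortion parameter s:
    scale parameters ax, ay and principal point offsets x0, y0."""
    return np.array([[ax,  s,   x0],
                     [0.0, ay,  y0],
                     [0.0, 0.0, 1.0]])
```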
4. The method for positioning and calibrating an aircraft area array pan camera according to claim 1, wherein in step 4), the rotation matrix R_ij and translation vector t_ij between image i and image j are determined by the following method:

4.1) obtaining the essential matrix E by using at least 5 pairs of successfully matched points in combination with the epipolar (coplanarity) constraint condition; wherein the epipolar constraint condition is:

p_2ᵀ E p_1 = 0

wherein p_1 and p_2 are the coordinates of a pair of successfully matched points on image i and image j respectively;

4.2) performing singular value decomposition on the essential matrix E to obtain the rotation matrix R_ij and translation vector t_ij between image i and image j.
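Step 4.2) is the standard SVD decomposition of the essential matrix, which yields four candidate (R, t) pairs; in practice the physically valid pair is selected with a cheirality (points-in-front) check, omitted in this sketch (numpy assumed; names illustrative):

```python
import numpy as np

def decompose_essential(E):
    """Decompose an essential matrix E into the four candidate (R, t) pairs
    R in {U W V^T, U W^T V^T}, t = +/- last column of U (up to scale)."""
    U, _, Vt = np.linalg.svd(E)
    # enforce proper rotations (det = +1)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```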
5. The method for positioning and calibrating an aircraft area array pan-tilt camera according to claim 1, wherein in step 4), the relation between the rotation matrices R_i and R_j of image i and image j in the route space coordinate system is:

R_j = R_ij R_i
6. the method for positioning and calibrating an aircraft area array sweeping camera according to claim 1, wherein in step 6), the corresponding four-focus tensor model is established by adopting the following method:
6.1) the camera matrices of the four images are camera matrix A, camera matrix B, camera matrix C and camera matrix D respectively; a certain object point X has a group of corresponding points on the four images, namely image points x, x′, x″ and x‴; the four-focus tensor is defined as:

Q_pqrs = det [ a_p ; b_q ; c_r ; d_s ]

wherein Q_pqrs is the tensor notation of the four-focus tensor; a_p is the p-th row of camera matrix A; b_q is the q-th row of camera matrix B; c_r is the r-th row of camera matrix C; d_s is the s-th row of camera matrix D;
6.2) deriving the four-linear constraint relation of the four-focus tensor from the projection relations existing among the four images;

the projection relations existing among the four images are:

kx = AX,  k′x′ = BX,  k″x″ = CX,  k‴x‴ = DX

wherein k, k′, k″ and k‴ are undetermined scale factors;

the four-linear constraint relation is:

x_i x′_j x″_k x‴_l ε_ipw ε_jqx ε_kry ε_lsz Q_pqrs = 0_wxyz

wherein the subscripts w, x, y and z are free variables; ε_ipw, ε_jqx, ε_kry and ε_lsz are permutation tensors encoding the vector products between different vectors;
6.3) determining camera matrix A, camera matrix B, camera matrix C and camera matrix D based on the initial values of the rotation matrices and position information of the images in the route space coordinate system from step 5), so as to establish the four-focus tensor model.
7. The method for positioning and calibrating the aircraft area array sweeping camera according to claim 6, wherein the object point coordinates corresponding to successful matching points in each four-focus tensor model are determined by adopting the following method:
6.4) obtaining the projection relations between the object point X and the image points x, x′, x″ and x‴:

x ≅ AX,  x′ ≅ BX,  x″ ≅ CX,  x‴ ≅ DX

with ≅ denoting equality up to a homogeneous scale factor;
6.5) setting the plane coordinates of image point x as (x, y), of image point x′ as (x′, y′), of image point x″ as (x″, y″), and of image point x‴ as (x‴, y‴); cross-multiplying each equation in step 6.4) to eliminate the homogeneous scale factor, so that each equation in step 6.4) yields three equations, twelve equations in total:

x(A^{3T}X) − A^{1T}X = 0,  y(A^{3T}X) − A^{2T}X = 0,  x(A^{2T}X) − y(A^{1T}X) = 0
x′(B^{3T}X) − B^{1T}X = 0,  y′(B^{3T}X) − B^{2T}X = 0,  x′(B^{2T}X) − y′(B^{1T}X) = 0
x″(C^{3T}X) − C^{1T}X = 0,  y″(C^{3T}X) − C^{2T}X = 0,  x″(C^{2T}X) − y″(C^{1T}X) = 0
x‴(D^{3T}X) − D^{1T}X = 0,  y‴(D^{3T}X) − D^{2T}X = 0,  x‴(D^{2T}X) − y‴(D^{1T}X) = 0

wherein A^{iT} denotes the transpose of the i-th row of camera matrix A, and B^{iT}, C^{iT} and D^{iT} are defined analogously for camera matrices B, C and D;
6.6) taking the first two of every three equations in step 6.5) to obtain eight equations in total, and constructing the following system:

N X − L = 0

wherein N is the 8×3 coefficient matrix assembled from the camera-matrix rows and image coordinates, X is the vector of the three unknown object point coordinates, and L is the corresponding 8×1 constant vector;

6.7) solving the system constructed in step 6.6), thereby obtaining the coordinates of object point X.
8. The method for positioning and calibrating an aircraft area array pan-tilt camera according to claim 7, wherein the equation constructed in step 6.6) is solved by using a least squares method in step 6.7).
9. The method for positioning and calibrating the aircraft area array sweeping camera according to claim 1, wherein in step 7), the adjustment processing is a bundle adjustment and calibration method based on the four-focus tensor model; a certain object point X has a group of corresponding points on the four images, namely image point x, image point x′, image point x″ and image point x‴;
the adjustment cost function based on the four-focus tensor model is:

f(z) = Σ_{i=1}^{4} [ (x_i − x̂_i)² + (y_i − ŷ_i)² ]

wherein f(z) is the adjustment cost function; (x_i, y_i), i ∈ {1, 2, 3, 4}, are the observed image coordinates corresponding to the four image points x, x′, x″ and x‴; R_i is the corresponding rotation matrix; t_i is the corresponding translation vector; (x̂_i, ŷ_i) are the computed image point coordinates; z is the unknown parameter vector, including the camera intrinsic matrix, the rotation matrices, the translation vectors and the object point coordinates;
and correspondingly, when the bundle adjustment and calibration method is adopted for positioning and calibration, the LM algorithm is adopted to optimally solve the BA cost function, the BA cost function being:

F(z) = min_z Σ_{k=1}^{n} ||f_k(z)||²

wherein n is the total number of BA cost terms, i.e. the number of all control points and successfully matched points participating in the adjustment and calibration operation, and k is the index of an individual BA cost term.
10. The method for positioning and calibrating an aviation area array sweeping camera according to claim 9, wherein the bundle adjustment positioning and calibration based on the four-focus tensor model is a three-stage bundle adjustment, the three stages being: in the first stage, the camera intrinsic matrix K is set to the identity matrix E and the rotation matrices R_i are kept unchanged; only the translation vectors t_i and the object point coordinates are processed in the adjustment positioning and calibration; in the second stage, the intrinsic matrix K remains the identity matrix E, but the translation vectors t_i, rotation matrices R_i and object point coordinates all participate in the adjustment positioning and calibration; in the third stage, all unknowns, including the intrinsic matrix K, the translation vectors t_i, the rotation matrices R_i and the object point coordinates, participate in the bundle adjustment positioning and calibration.
CN202210589639.2A 2022-05-26 2022-05-26 Positioning and calibrating method for aviation area array swing scanning type camera Active CN114972536B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210589639.2A CN114972536B (en) 2022-05-26 2022-05-26 Positioning and calibrating method for aviation area array swing scanning type camera


Publications (2)

Publication Number Publication Date
CN114972536A CN114972536A (en) 2022-08-30
CN114972536B true CN114972536B (en) 2023-05-09

Family

ID=82956238


Country Status (1)

Country Link
CN (1) CN114972536B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6198852B1 (en) * 1998-06-01 2001-03-06 Yeda Research And Development Co., Ltd. View synthesis from plural images using a trifocal tensor data structure in a multi-view parallax geometry
CN101750029A (en) * 2008-12-10 2010-06-23 中国科学院沈阳自动化研究所 Characteristic point three-dimensional reconstruction method based on trifocal tensor
CN107067437A (en) * 2016-12-28 2017-08-18 中国航天电子技术研究院 A kind of unmanned plane alignment system and method based on multiple view geometry and bundle adjustment
CN108280858A (en) * 2018-01-29 2018-07-13 重庆邮电大学 A kind of linear global camera motion method for parameter estimation in multiple view reconstruction
CN108489395A (en) * 2018-04-27 2018-09-04 中国农业大学 Vision measurement system structural parameters calibration and affine coordinate system construction method and system
CN111508029A (en) * 2020-04-09 2020-08-07 武汉大学 Satellite-borne segmented linear array CCD optical camera overall geometric calibration method and system
WO2020206903A1 (en) * 2019-04-08 2020-10-15 平安科技(深圳)有限公司 Image matching method and device, and computer readable storage medium
CN113739767A (en) * 2021-08-24 2021-12-03 武汉大学 Method for producing orthoimage aiming at image acquired by domestic area array swinging imaging system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6996254B2 (en) * 2001-06-18 2006-02-07 Microsoft Corporation Incremental motion estimation through local bundle adjustment
US8855406B2 (en) * 2010-09-10 2014-10-07 Honda Motor Co., Ltd. Egomotion using assorted features
EP2622576A4 (en) * 2010-10-01 2017-11-08 Saab AB Method and apparatus for solving position and orientation from correlated point features in images


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Global Fusion of Relative Motions for Robust, Accurate and Scalable Structure from Motion; Moulon, P., et al.; IEEE International Conference on Computer Vision, IEEE; full text *
Schonberger, J. L., et al. Structure-from-Motion Revisited. IEEE Conference on Computer Vision & Pattern Recognition, IEEE, 2016, full text. *
Thirthala, S., et al. Multi-view geometry of 1D radial cameras and its application to omnidirectional camera calibration. Tenth IEEE International Conference on Computer Vision (ICCV'05), 2005, full text. *
A feature matching method for area-array whisk-broom aerial images; Zhang Kun, et al.; Journal of Geo-information Science; full text *
Research on precise tie-point extraction for small base-height-ratio images from the A3 aerial camera; Wang Yanli; Ning Weiyuan; Journal of Henan University of Urban Construction (No. 03); full text *
Multi-view 3D reconstruction based on the trifocal tensor; Chen Chunxiao; Zhang Juan; Journal of Biomedical Engineering (No. 04); full text *
Stepwise geometric correction method for area-array whisk-broom thermal infrared aerial images; Li Sai; Hu Yong; Gong Cailan; Song Wentao; Journal of Infrared and Millimeter Waves (No. 02); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant