CN116205991A - Construction method of multi-information source event sensor based on multi-view camera array

Info

Publication number: CN116205991A
Application number: CN202310054282.2A
Authority: CN (China)
Prior art keywords: camera, calibration, cam, mirror, parameters
Inventors: 顾平, 孙垚, 张潇
Assignee: Shenzhen Fanlai Intelligent Co ltd
Legal status: Pending
Classifications

    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • Y02T10/40 Engine management systems

Abstract

The invention discloses a method for constructing a multi-information-source event sensor based on a multi-view camera array, comprising the following steps. Step S1: arranging the camera area, with three or more cameras. Step S2: calibrating the camera internal parameters; a checkerboard for internal-parameter calibration is prefabricated, and every camera shoots checkerboard images from different viewing angles. Step S3: calibrating the camera external parameters; a checkerboard for external-parameter calibration is prefabricated, and the Bron-Kerbosch algorithm is used to find the maximal cliques of G_cam, Clique = {C_m}. Step S4: bundle adjustment of the camera parameters; the external-parameter calibration yields an initial external-parameter estimate, from which an objective function for a single setting of a single calibration object is constructed. Step S5: mirror parameter calibration and mirror-camera parameter calculation. By building the multi-information-source event sensor on a multi-view camera array, the invention can effectively sense complex scenes and handle occlusion robustly, thereby completing the event-sensing task.

Description

Construction method of multi-information source event sensor based on multi-view camera array
Technical Field
The invention relates to the technical field of multi-information-source event sensor construction, and in particular to a construction method of a multi-information-source event sensor based on a multi-view camera array.
Background
An array camera replaces one large lens with many small lenses while preserving the shooting effect; its principle is similar to that of an astronomical telescope array and the compound eyes of insects. Compared with a traditional camera, an array camera has a wider field of view, produces larger photographs, and occupies a smaller volume.
Event triggering and recording is the marking and recording of scenes in which activity or anomalies occur, so that those scenes can be accurately located and described during video processing. In currently common uncoupled video scenes, events are typically defined in a single view only and are described by a simple two-dimensional image region, which leads to many missed and false detections and makes complex scenes difficult to handle. Furthermore, complex event definitions are often difficult to express in a single view.
Therefore, we propose a method for constructing a multi-information source event sensor based on a multi-view camera array to solve the above-mentioned problems.
Disclosure of Invention
In order to overcome the above-mentioned drawbacks of the prior art, embodiments of the present invention provide a method for constructing a multi-source event sensor based on a multi-view camera array, so as to solve the above-mentioned problems set forth in the background art.
In order to achieve the above purpose, the present invention provides the following technical solution: a construction method of a multi-information-source event sensor based on a multi-view camera array, comprising the following steps:
step S1: arranging the camera area, where the number of cameras is greater than or equal to 3;
assume that each camera within the camera area individually represents a node, and that any two cameras Cam_i, Cam_j with a common field of view have their nodes connected by one undirected edge, so that any two cameras in the camera area are connected;
step S2: calibrating the camera internal parameters; a checkerboard for internal-parameter calibration is prefabricated, and every camera shoots checkerboard images from different viewing angles;
step S3: calibrating the camera external parameters; a checkerboard for external-parameter calibration is prefabricated;
the Bron-Kerbosch algorithm is used to find the maximal cliques of G_cam, Clique = {C_m};
step S4: bundle adjustment of the camera parameters; during external-parameter calibration, we obtain the initial external-parameter estimation result
Figure SMS_1
constructing an objective function for a single setting of a single calibration object;
step S5: calibrating the mirror parameters and calculating the mirror-camera parameters; if there are several mirrors M = {m_v} in the camera area, the mirror information can be fully utilized by constructing mirror cameras;
an objective function is constructed for a single calibration setting of a mirror.
In a preferred embodiment, in step S1, it is assumed that each camera in the camera area individually represents a node, and that any two cameras Cam_i, Cam_j with a common field of view have their nodes connected by an undirected edge, i.e. all nodes of the camera area form a connected graph G_cam.
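As an illustrative sketch (not from the patent), the connectivity requirement on G_cam can be checked with a standard breadth-first search; the camera ids and the common-field-of-view edge list are hypothetical inputs:

```python
from collections import deque

def is_connected(nodes, edges):
    """Check that the camera graph G_cam is connected.

    nodes: camera ids; edges: (i, j) pairs, one per pair of cameras
    Cam_i, Cam_j that share a common field of view.
    """
    adj = {n: set() for n in nodes}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    if not adj:
        return True
    start = next(iter(adj))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u] - seen:
            seen.add(v)
            queue.append(v)
    return len(seen) == len(adj)
```

A disconnected G_cam would leave some camera that can never be chained back to the ground control point, so such a check is a natural precondition for step S3.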
In a preferred embodiment, in step S2, the product of the internal parameter matrix and the external parameter matrix is solved;
then the internal parameter matrix is solved.
In a preferred embodiment, in step S3, within each maximal clique C_m, the checkerboard is placed stationary and the cameras in the region capture checkerboard images; the PnP algorithm is used to calculate the pose of each camera node in C_m relative to the checkerboard, from which the pose relation between the cameras in the region is obtained
Figure SMS_2
(belonging to the special Euclidean group, expressing the Euclidean transformation from the Cam_i coordinate system to the Cam_j coordinate system);
each maximal clique C_m is simplified into a point, forming a new graph G_clique; if two maximal cliques C_m, C_n have adjacent nodes between them, the corresponding nodes in graph G_clique are also connected by an edge;
the maximal clique C_max with the most nodes in Clique, and its corresponding node in G_clique, are selected, and the checkerboard used by that node serves as the ground control point.
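A minimal sketch of the Bron-Kerbosch enumeration (with pivoting) used here to obtain Clique = {C_m}, from which C_max is the clique with the most nodes; the adjacency-set graph representation is an assumption for illustration:

```python
def bron_kerbosch(adj):
    """Enumerate all maximal cliques of an undirected graph.

    adj: {node: set of neighbours}. Returns a list of node sets,
    one per maximal clique (Bron-Kerbosch with pivoting).
    """
    cliques = []

    def expand(R, P, X):
        if not P and not X:
            cliques.append(R)          # R can no longer be extended
            return
        # pivot with most neighbours in P prunes redundant branches
        pivot = max(P | X, key=lambda u: len(adj[u] & P))
        for v in list(P - adj[pivot]):
            expand(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)

    expand(set(), set(adj), set())
    return cliques
```

Selecting `max(bron_kerbosch(adj), key=len)` then gives the C_max whose checkerboard serves as the ground control point.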
In a preferred embodiment, in step S3, Dijkstra's algorithm is used to find the shortest path from every node of G_clique to C_max, and the edges on these paths are retained to form a new graph G_clique′;
for the actual adjacent nodes of the edges included in G_clique′ (i.e. the adjacent nodes between two maximal cliques), their relative pose relationship is calculated using the checkerboard and PnP;
all nodes of G_cam with a direct relative pose relationship are retained to form G_cam′; every node in G_cam′ has a shortest path to the ground control point, = {p_cami};
for any node Cam_i, the initial external-parameter estimation is calculated through this path and the corresponding relative pose relations
Figure SMS_3
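The shortest-path search and the chaining of relative poses along p_cami can be sketched as follows; the composition direction of the 4x4 transforms (T mapping the child camera's coordinates into its predecessor's coordinates) is an assumption for illustration:

```python
import heapq
import numpy as np

def shortest_paths(adj, target):
    """Dijkstra from `target` outward (adj: {u: {v: weight}}).

    Returns (dist, prev); following prev from any camera node
    reconstructs its shortest path p_cami to the control point.
    """
    dist, prev = {target: 0.0}, {}
    heap = [(0.0, target)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

def initial_extrinsic(cam, prev, rel_pose):
    """Chain relative poses along the path cam -> ... -> control point.

    rel_pose[(a, b)]: 4x4 transform taking b's frame into a's frame
    (an assumed convention for this sketch).
    """
    T = np.eye(4)
    while cam in prev:
        nxt = prev[cam]
        T = rel_pose[(nxt, cam)] @ T
        cam = nxt
    return T
```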
In a preferred embodiment, in step S4, conventional optimization methods generally put all camera parameters and their corresponding data into one objective function. For a large complex camera array this carries an excessive number of parameters, so a single iteration of the Levenberg-Marquardt algorithm is too time-consuming. Considering that cameras without a common field of view are never directly associated in a single calibration setting, we optimize the external parameters of the complex camera array by constructing a multi-stage parallel optimization strategy based on the maximal cliques of the camera array;
in step S3 we constructed the graph G_clique composed of the maximal cliques; for G_clique we first perform individual optimizations within each node, and for different nodes these optimization operations can be performed in parallel;
for a single camera node, we optimize its camera external parameters; the external-parameter matrix Extri_cam is determined by the pose parameters
Figure SMS_4
and obtained by the following formula:
Figure SMS_5
Wherein the rotation vector
Figure SMS_6
and the translation vector
Figure SMS_7
parameterize the pose (the conversion from rotation vector to rotation matrix is the Rodrigues formula).
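The Rodrigues formula mentioned above converts the rotation-vector part of the pose parameters into a rotation matrix; a small NumPy sketch (the 4x4 extrinsic layout is the usual homogeneous convention, assumed here):

```python
import numpy as np

def rodrigues(rvec):
    """Rodrigues formula: rotation vector (rx, ry, rz) -> 3x3 rotation matrix."""
    rvec = np.asarray(rvec, float)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta                      # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])    # cross-product matrix of k
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def extrinsic_matrix(rvec, tvec):
    """Assemble the 4x4 extrinsic Extri_cam from pose parameters (r, t)."""
    T = np.eye(4)
    T[:3, :3] = rodrigues(rvec)
    T[:3, 3] = tvec
    return T
```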
In a preferred embodiment, in step S4, in a single setting of a single calibration object, the relevant parameters are: the calibration object coordinate system
Figure SMS_8
the identification point set Calib of the calibration object, and the camera set Cam_calib able to observe the calibration object;
for any calibration object and its identification points, the identification points form a set Calib = {c_i}, and a node c_root is selected from them; the coordinate system of the calibration object is constructed with this root node as origin;
we connect the remaining nodes in Calib directly with c_root and measure, in reality, their homogeneous coordinates relative to c_root in the calibration object coordinate system
Figure SMS_9
From this we obtain the physical model of the calibration object. Let the pose parameters of the calibration object coordinate system relative to the world coordinate system be
Figure SMS_10
then the homogeneous coordinates of all identification points of the calibration object in the world coordinate system have the parametric expression
Figure SMS_11
the calibration object is placed in the camera array, and the set of cameras able to see it is denoted Cam_calib = {cam_j}; the cameras in Cam_calib shoot pictures of the calibration object, and the image coordinates of its identification points are marked on the pictures
Figure SMS_12
in the previous step we obtained the internal parameters of all cameras
Figure SMS_13
and the preliminary external-parameter estimates
Figure SMS_14
combining the identification-point image coordinates obtained in the previous step
Figure SMS_15
we can obtain preliminary position estimates of the individual identification points of the calibration object by triangulation
Figure SMS_16
using the preliminary position estimates of the identification points
Figure SMS_17
and their parametric expressions in terms of the root-node pose
Figure SMS_18
we can obtain the pose parameters of the calibration object coordinate system
Figure SMS_19
as a preliminary estimate;
thus, we can construct the objective function of the calibration object in this setting as:
Figure SMS_20
Figure SMS_21
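The figures above give the patent's exact objective; as a generic stand-in, the cost of one calibration setting is the summed squared reprojection error of the identification points over the cameras in Cam_calib. A hedged sketch (the dict-based data layout, keyed by camera and point ids, is an assumption for illustration):

```python
import numpy as np

def project(K, T_wc, X_w):
    """Pinhole projection of a homogeneous world point X_w (4,)
    through intrinsics K (3x3) and world-to-camera extrinsic T_wc (4x4)."""
    x = K @ (T_wc @ X_w)[:3]
    return x[:2] / x[2]

def calib_objective(K_by_cam, T_by_cam, X_points, obs):
    """Sum of squared reprojection errors for one calibration setting.

    obs: {(cam_id, point_id): observed (u, v)} over cameras in Cam_calib.
    """
    err = 0.0
    for (j, i), uv in obs.items():
        r = project(K_by_cam[j], T_by_cam[j], X_points[i]) - np.asarray(uv)
        err += float(r @ r)
    return err
```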
In a preferred embodiment, in step S4, inside a maximal clique, we perform multiple calibration settings with multiple calibration objects of different types, Clique_sets = {calib_set_k}; calib_set_k represents the elements of one calibration setting: the calibration object coordinate system
Figure SMS_22
the identification point set Calib of the calibration object, and the camera set Cam_calib able to observe the calibration object. Adding the objective functions of the multiple settings yields the objective function to be optimized
Figure SMS_23
which is optimized using the Levenberg-Marquardt algorithm; if the maximal clique contains ground control points, the pose parameters of the identification points of the corresponding calibration objects are set as constants;
after finishing the external-parameter bundle adjustment within the maximal cliques, we need to perform external-parameter optimization among the maximal cliques;
within each maximal clique C_m we elect a root node
Figure SMS_24
(for the maximal clique containing the ground control points, the root node is the origin of the world coordinate system); the pose parameters of the root node in the world coordinate system are
Figure SMS_25
and all nodes inside the clique are represented by their pose relative to the root node
Figure SMS_26
The relative poses are taken from those obtained during the first-stage optimization within the maximal clique, and the initial pose of the root node is the initially calibrated external-parameter pose;
if two nodes have a connecting edge, i.e. there are adjacent nodes between two maximal cliques, then for all adjacent nodes multiple calibration settings are performed with multiple calibration objects of different types, giving a set of calibration settings about the edges, Edge_sets = {calib_set_e}. For a camera cam_j participating in a setting, its external parameters are parametric expressions in the root node of the maximal clique it belongs to
Figure SMS_27
Calibration-object settings are made for all edges in G_clique, finally forming an objective function about the poses of all maximal-clique root nodes
Figure SMS_28
Through the Levenberg-Marquardt algorithm, the optimized poses of all maximal-clique root nodes are obtained, and hence the optimized poses of all camera nodes.
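Steps S4 and S5 repeatedly invoke the Levenberg-Marquardt algorithm; the following is a minimal damped Gauss-Newton loop with a numeric Jacobian and a fixed iteration budget, both simplifications for illustration rather than the implementation used in the patent:

```python
import numpy as np

def levenberg_marquardt(residual, x0, iters=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop (sketch).

    residual: x -> residual vector; minimizes ||residual(x)||^2.
    """
    x = np.asarray(x0, float)
    for _ in range(iters):
        r = residual(x)
        # forward-difference Jacobian (a simplification)
        J = np.empty((r.size, x.size))
        eps = 1e-6
        for k in range(x.size):
            dx = np.zeros_like(x)
            dx[k] = eps
            J[:, k] = (residual(x + dx) - r) / eps
        # damped normal equations
        A = J.T @ J + lam * np.eye(x.size)
        step = np.linalg.solve(A, -J.T @ r)
        r_new = residual(x + step)
        if r_new @ r_new < r @ r:
            x = x + step       # accept step, relax damping
            lam *= 0.5
        else:
            lam *= 2.0         # reject step, increase damping
    return x
```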
In a preferred embodiment, in step S5, the mirror can be expressed by the mirror-related parameters
Figure SMS_29
defined by the parametric expression:
Figure SMS_30
where |(rn_x, rn_y, rn_z)| denotes the modulus of the vector (rn_x, rn_y, rn_z).
Figure SMS_31
The corresponding reflection matrix is
Figure SMS_32
Under the mirror effect, the mirror camera
Figure SMS_33
is related to the original camera cam_j pose parameters
Figure SMS_34
and the mirror parameters
Figure SMS_35
by the parametric expression
Figure SMS_36
and the internal parameters of
Figure SMS_37
are consistent with those of cam_j.
The mirror camera can participate in every calibration setting like an ordinary camera; it is only necessary to identify, on the original camera image, the coordinates of the identification points in the mirror image
Figure SMS_38
and to add the mirror parameters to the objective function
Figure SMS_39
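With the mirror parameterized by a unit normal n and offset d (plane n·x = d, a common convention assumed here), the homogeneous reflection matrix has linear part I − 2nnᵀ and translation part 2dn; composing it with the original camera's extrinsic yields the mirror camera's pose:

```python
import numpy as np

def reflection_matrix(n, d):
    """Homogeneous 4x4 reflection about the plane n.x = d (n a unit normal)."""
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)
    M = np.eye(4)
    M[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)  # Householder reflection
    M[:3, 3] = 2.0 * d * n
    return M
```

One plausible composition (a sketch, not necessarily the patent's convention) is `mirror_extrinsic = original_extrinsic @ reflection_matrix(n, d)`; the mirror camera keeps the internal parameters of cam_j, as stated above.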
In a preferred embodiment, in step S5, an objective function is constructed for a single calibration setting of a mirror:
if mirror information is to be considered, then in a single calibration setting involving mirrors, the parameters of the mirror set participating in the setting
Figure SMS_40
are added to the objective function as additional optimization terms.
In this calibration setting, a mirror m_v is associated with the cameras CM_v through which its identification points (images in the mirror) can be observed
Figure SMS_41
In the views of the cameras included in CM_v, all image coordinates of the identification points under the mirror effect of m_v are marked
Figure SMS_42
The objective-function term related to mirror m_v is
Figure SMS_43
The mirror-related optimization can be added both to the parameter optimization within each maximal clique C_m ∈ G_clique and to the optimization of the edges of G_clique.
The technical effects and advantages of the invention are:
1. The problems of low interference immunity and low editability of single-information-source event sensors are solved.
2. The problem of real-time triggering and recording of fine-grained events in complex scenes is solved.
3. By constructing the multi-information-source event sensor with a multi-view camera array, complex scenes can be sensed effectively and occlusion can be handled robustly, thereby completing the event-sensing task.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Referring to fig. 1, a method for constructing a multi-information source event sensor based on a multi-view camera array includes the steps of:
calibrating camera internal parameters
The checkerboard for internal-parameter calibration is prefabricated, and each camera shoots checkerboard images (10 or more) from different viewing angles. The world coordinate system is fixed on the checkerboard, so the physical coordinate W = 0 for any point on the checkerboard, and the original single-point undistorted imaging model can be represented as the following formula, where R1, R2 are the first two columns of the rotation matrix R. For simplicity, the internal parameter matrix is denoted A.
Figure SMS_44
We make some remarks on the above formula. For different pictures, the internal parameter matrix A is a fixed value; for the same picture, the internal parameter matrix A and the external parameter matrix (R1 R2 T) are fixed values; for a single point on the same picture, the scale factor Z is also a constant.
Let A(R1 R2 T) be the matrix H; H is the product of the internal parameter matrix and the external parameter matrix, and the three columns of H are (H1, H2, H3). Then:
Figure SMS_45
Using the above formula to eliminate the scale factor Z, we obtain:
Figure SMS_46
Figure SMS_47
At this point the scale factor Z has been eliminated, so the above equations hold for all corner points on the same picture. (u, v) are the coordinates of a calibration-plate corner point in the pixel coordinate system, and (U, V) are its coordinates in the world coordinate system. Through an image-recognition algorithm, the pixel coordinates (u, v) of the calibration-plate corner points can be obtained; since the world coordinate system of the calibration plate is defined artificially and the size of each square on the plate is known, (U, V) in the world coordinate system can be obtained by calculation.
H is a homogeneous matrix with 8 independent unknown elements. Each calibration-plate corner point provides two constraint equations (the correspondence of u with U, V and the correspondence of v with U, V), so when the number of calibration-plate corner points on one picture equals 4, the matrix H corresponding to that picture can be obtained. When the number of corner points on one picture is greater than 4, the optimal matrix H is obtained by least-squares regression.
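The two constraint equations per corner point stack into a homogeneous linear system; with 4 or more corners, H follows from the SVD, which also gives the least-squares regression mentioned above. A sketch:

```python
import numpy as np

def homography_dlt(pts_world, pts_pixel):
    """Estimate H (up to scale) from >= 4 planar correspondences by DLT.

    pts_world: [(U, V), ...] plate coordinates; pts_pixel: [(u, v), ...].
    Each corner contributes two rows of A h = 0; the SVD's smallest
    singular vector is the least-squares H.
    """
    A = []
    for (U, V), (u, v) in zip(pts_world, pts_pixel):
        A.append([U, V, 1, 0, 0, 0, -u * U, -u * V, -u])
        A.append([0, 0, 0, U, V, 1, -v * U, -v * V, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]   # fix the homogeneous scale
```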
Solving the internal parameter matrix
We know the matrix H = A(R1 R2 T); we then need to solve the internal parameter matrix A of the camera.
Since R1, R2 are two columns of the rotation matrix R, they satisfy unit orthogonality, namely:
R1^T R2 = 0
R1^T R1 = R2^T R2 = 1
from the relationship between H and R1, R2, it can be seen that:
R1 = A^-1 H1
R2 = A^-1 H2
Substituting, we obtain:
H1^T A^-T A^-1 H2 = 0
H1^T A^-T A^-1 H1 = H2^T A^-T A^-1 H2 = 1
In addition, we observe that the matrix A^-T A^-1 appears in both constraint equations. Thus, we denote A^-T A^-1 as B; B is a symmetric matrix. We first solve for matrix B, and then solve for the camera's internal parameter matrix A from B.
Meanwhile, for simplicity, we write the camera internal parameter matrix A as:
Figure SMS_48
then:
Figure SMS_49
Then matrix B is expressed in terms of matrix A:
Figure SMS_50
note that: since B is a symmetric array, B12, B13, B23 appear twice in the above equation.
Here, we can use b=a -T A -1 The constraint obtained by orthogonalization of the R1 and R2 units is expressed as:
H1^T B H2 = 0
H1^T B H1 = H2^T B H2 = 1
Therefore, to solve matrix B, we must compute Hi^T B Hj. Then:
Figure SMS_51
At this point, the constraint equations obtained from the unit orthogonality of R1, R2 become:
Figure SMS_52
Figure SMS_53
That is:
Figure SMS_54
wherein the matrix
Figure SMS_55
Since the matrix H is known, and the matrix v is composed entirely of the elements of H, v is also known.
At this point, we can obtain matrix B by solving for the vector b. Each calibration-plate picture provides one constraint relationship, i.e. two constraint equations, but the vector b has 6 unknown elements, so the two equations from a single picture are insufficient to solve for it. Therefore at least 3 calibration-plate photos are taken, giving 3 constraint relationships, i.e. 6 equations, from which the vector can be solved. When the number of calibration-plate pictures is greater than 3 (in practice, typically 15 to 20 pictures are used), the least-squares best-fit vector b is used, yielding matrix B.
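The constraint rows follow Zhang's standard v_ij construction (assumed here, since the patent's own v-matrix figure is not reproduced): each homography contributes v_12^T b = 0 and (v_11 − v_22)^T b = 0, and b is the right singular vector of the smallest singular value:

```python
import numpy as np

def v_ij(H, i, j):
    """Constraint row v_ij built from columns i, j of homography H."""
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0] * hj[0],
                     hi[0] * hj[1] + hi[1] * hj[0],
                     hi[1] * hj[1],
                     hi[2] * hj[0] + hi[0] * hj[2],
                     hi[2] * hj[1] + hi[1] * hj[2],
                     hi[2] * hj[2]])

def solve_B(Hs):
    """Stack two constraints per homography and solve V b = 0 by SVD.

    Returns the symmetric matrix B rebuilt from
    b = (B11, B12, B22, B13, B23, B33), determined up to scale.
    """
    V = []
    for H in Hs:
        V.append(v_ij(H, 0, 1))
        V.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))
    _, _, Vt = np.linalg.svd(np.asarray(V))
    b = Vt[-1]
    return np.array([[b[0], b[1], b[3]],
                     [b[1], b[2], b[4]],
                     [b[3], b[4], b[5]]])
```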
Figure SMS_56
From the correspondence (above equation) between the elements of matrix B and the camera internal parameters α, β, γ, u0, v0, we can obtain:
v0 = (B12·B13 − B11·B23) / (B11·B22 − B12²)
λ = B33 − [B13² + v0·(B12·B13 − B11·B23)] / B11
α = sqrt(λ / B11)
β = sqrt(λ·B11 / (B11·B22 − B12²))
γ = −B12·α²·β / λ
u0 = γ·v0/β − B13·α²/λ
obtaining the internal reference matrix of the camera
Figure SMS_61
The external parameter matrix reflects the positional relationship between the calibration plate and the camera. For different pictures this relationship changes, so the external parameter matrix corresponding to each picture is different.
In the relation A(R1 R2 T) = H, we have solved for matrix H (the same within one picture, different across pictures) and matrix A (the same across pictures). By the formula (R1 R2 T) = A^-1 H, the external parameter matrix (R1 R2 T) corresponding to each picture can be obtained.
It is worth noting here that the complete external parameter matrix is (R T; 0 1). However, since the Zhang Zhengyou calibration method places the origin of the world coordinate system on the checkerboard, the physical coordinate W = 0 for any point on the checkerboard, and the third column R3 of the rotation matrix R is eliminated, so R3 has no effect in the coordinate transformation. But R3 is needed for R to satisfy the properties of a rotation matrix, i.e. unit orthogonality between columns, so R3 can be calculated by the cross product of the vectors R1, R2, i.e. R3 = R1 × R2.
At this point, both the internal and external parameter matrices of the camera have been obtained.
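The closed-form extraction of the intrinsics from B, and of (R1 R2 T) from H (including R3 = R1 × R2), can be sketched directly from the formulas above:

```python
import numpy as np

def intrinsics_from_B(B):
    """Recover A's entries (alpha, beta, gamma, u0, v0) from
    B = lambda * A^-T A^-1, following the closed forms above."""
    v0 = (B[0, 1] * B[0, 2] - B[0, 0] * B[1, 2]) / (B[0, 0] * B[1, 1] - B[0, 1] ** 2)
    lam = B[2, 2] - (B[0, 2] ** 2 + v0 * (B[0, 1] * B[0, 2] - B[0, 0] * B[1, 2])) / B[0, 0]
    alpha = np.sqrt(lam / B[0, 0])
    beta = np.sqrt(lam * B[0, 0] / (B[0, 0] * B[1, 1] - B[0, 1] ** 2))
    gamma = -B[0, 1] * alpha ** 2 * beta / lam
    u0 = gamma * v0 / beta - B[0, 2] * alpha ** 2 / lam
    return np.array([[alpha, gamma, u0],
                     [0.0, beta, v0],
                     [0.0, 0.0, 1.0]])

def extrinsics_from_H(A, H):
    """(R1 R2 T) = A^-1 H up to scale; R3 = R1 x R2 completes R."""
    Ainv = np.linalg.inv(A)
    s = 1.0 / np.linalg.norm(Ainv @ H[:, 0])   # fix the scale so |R1| = 1
    r1 = s * (Ainv @ H[:, 0])
    r2 = s * (Ainv @ H[:, 1])
    t = s * (Ainv @ H[:, 2])
    return np.column_stack([r1, r2, np.cross(r1, r2)]), t
```

Note that the recovery is invariant to the unknown scale of B, since λ absorbs it.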
1) Prepare a checkerboard for the Zhang Zhengyou calibration method, with known square size, and shoot the checkerboard from different angles with the camera to obtain a group of images;
2) detect the feature points in the images, such as the calibration-plate corner points, to obtain their pixel coordinate values, and calculate their physical coordinate values from the known checkerboard size and the origin of the world coordinate system;
3) solve the internal and external parameter matrices:
according to the relation between the physical coordinate values and the pixel coordinate values, obtain the H matrix, construct the v matrix, solve the B matrix, solve the camera internal parameter matrix A from B, and finally solve the camera external parameter matrix (R T; 0 1) corresponding to each picture;
4) solve the distortion parameters:
construct the D matrix from u, v and calculate the radial distortion parameters;
5) optimize the above parameters using the L-M (Levenberg-Marquardt) algorithm.
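For step 4), the radial distortion model commonly used with Zhang's method (assumed here to be the two-parameter k1, k2 polynomial on normalized image coordinates) is:

```python
import numpy as np

def apply_radial_distortion(xy_norm, k1, k2):
    """Radial distortion on normalized coordinates:
    x_d = x * (1 + k1*r^2 + k2*r^4), same for y, with r^2 = x^2 + y^2."""
    x, y = xy_norm
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return np.array([x * f, y * f])
```

Solving the linear D-matrix system gives an initial (k1, k2), which step 5) then refines jointly with A and the external parameters.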
Arranging the camera area
The number of cameras is greater than or equal to 3.
Assume that each camera within the camera area individually represents a node, and that any two cameras Cam_i, Cam_j with a common field of view have their nodes connected by one undirected edge, so that any two cameras in the camera area are connected (i.e. all nodes of the camera area form a connected graph G_cam).
Preliminary external-parameter calibration of the complex camera array
The checkerboard for external-parameter calibration is prefabricated.
The Bron-Kerbosch algorithm is used to find the maximal cliques of G_cam, Clique = {C_m}.
Within each maximal clique C_m, the checkerboard is placed stationary and the cameras in the region capture checkerboard images. The PnP algorithm is used to calculate the pose of each camera node in C_m relative to the checkerboard, from which the pose relation between the cameras in the region is obtained
Figure SMS_62
(belonging to the special Euclidean group, expressing the Euclidean transformation from the Cam_i coordinate system to the Cam_j coordinate system).
Each maximal clique C_m is simplified into a point, forming a new graph G_clique; if two maximal cliques C_m, C_n have adjacent nodes between them, the corresponding nodes in graph G_clique are also connected by an edge.
The maximal clique C_max with the most nodes in Clique, and its corresponding node in G_clique, are selected, and the checkerboard used by that node serves as the ground control point.
Dijkstra's algorithm is used to find the shortest path from every node of G_clique to C_max, and the edges on these paths are retained to form a new graph G_clique′.
For the actual adjacent nodes of the edges included in G_clique′ (i.e. the adjacent nodes between two maximal cliques), their relative pose relationship is calculated using the checkerboard and PnP.
All nodes of G_cam with a direct relative pose relationship are retained to form G_cam′; every node in G_cam′ has a shortest path to the ground control point, = {p_cami}.
For any node Cam_i, the initial external-parameter estimation is calculated through this path and the corresponding relative pose relations
Figure SMS_63
Parallel bundle-adjustment optimization strategy for the external parameters of the complex camera array
In the process of external parameter calibration, we obtain the initial external parameter estimation result
Figure SMS_64
However, this external-parameter estimation result has a large error, so more data is needed for further optimization.
Conventional optimization methods generally put all camera parameters and their corresponding data into one objective function. For a large complex camera array this carries an excessive number of parameters, so a single iteration of the Levenberg-Marquardt algorithm is too time-consuming. Considering that cameras without a common field of view are never directly associated in a single calibration setting, we optimize the external parameters of the complex camera array by constructing a multi-stage parallel optimization strategy based on the maximal cliques of the camera array.
In the above step we constructed the graph G_clique composed of the maximal cliques; for G_clique we first perform individual optimizations within each node, and for different nodes these optimization operations can be performed in parallel.
For a single camera node, we optimize its camera external parameters; the external-parameter matrix Extri_cam is determined by the pose parameters
Figure SMS_65
and obtained by the following formula:
Figure SMS_66
Wherein the rotation vector
Figure SMS_67
and the translation vector
Figure SMS_68
parameterize the pose (the conversion from rotation vector to rotation matrix is the Rodrigues formula).
Construction of the objective function for a single setting of a single calibration object:
in a single setting of a single calibration object, the relevant parameters are: the calibration object coordinate system
Figure SMS_69
the identification point set Calib of the calibration object, and the camera set Cam_calib able to observe the calibration object;
for any calibration object and its identification points, the identification points form a set Calib = {c_i}, and a node c_root is selected from them; the coordinate system of the calibration object is constructed with this root node as origin;
we connect the remaining nodes in Calib directly with c_root and measure, in reality, their homogeneous coordinates relative to c_root in the calibration object coordinate system
Figure SMS_70
From this we obtain the physical model of the calibration object. Let the pose parameters of the calibration object coordinate system relative to the world coordinate system be
Figure SMS_71
then the homogeneous coordinates of all identification points of the calibration object in the world coordinate system have the parametric expression
Figure SMS_72
the calibration object is placed in the camera array, and the set of cameras able to see it is denoted Cam_calib = {cam_j}; the cameras in Cam_calib shoot pictures of the calibration object, and the image coordinates of its identification points are marked on the pictures
Figure SMS_73
In the previous step we obtained the internal parameters of all cameras
Figure SMS_74
and the preliminary external-parameter estimates
Figure SMS_75
combining the identification-point image coordinates obtained in the previous step
Figure SMS_76
we can obtain preliminary position estimates of the individual identification points of the calibration object by triangulation
Figure SMS_77
using the preliminary position estimates of the identification points
Figure SMS_78
and their parametric expressions in terms of the root-node pose
Figure SMS_79
we can obtain the pose parameters of the calibration object coordinate system
Figure SMS_80
as a preliminary estimate;
thus, we can construct the objective function of the calibration object in this setting as:
Figure SMS_81
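The triangulation step above can be sketched in code. The following Python/NumPy fragment (Python, NumPy, and the toy intrinsics are illustrative assumptions, not part of the patent) performs linear DLT triangulation of one marker point from the projection matrices P_j = K_j[R_j | t_j] of the cameras that observe it:

```python
import numpy as np

def triangulate_point(projections, image_points):
    """Linear (DLT) triangulation of one 3-D marker point.

    projections  : list of 3x4 camera projection matrices P_j = K_j [R_j | t_j]
    image_points : list of (u, v) pixel observations of the same marker point
    Returns the dehomogenized 3-D coordinates of the point.
    """
    rows = []
    for P, (u, v) in zip(projections, image_points):
        # Each view contributes two linear constraints on the homogeneous point X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    # Solution: right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic check with two hypothetical cameras looking at a known point.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # 1 m baseline

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.2, -0.1, 4.0])
X_est = triangulate_point([P1, P2], [project(P1, X_true), project(P2, X_true)])
print(np.allclose(X_est, X_true, atol=1e-6))  # True
```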
Within a maximal clique, multiple calibration settings are performed with several different types of calibration objects: Clique_sets = {calib_set k }. Each calib_set k contains the elements of one calibration setting: the calibration-object coordinate system
Figure SMS_82
the marker-point set Calib of the calibration object, and the camera set Cam calib that can observe it. Summing the objective functions of the multiple settings gives the objective function to be optimized
Figure SMS_83
This is optimized with the Levenberg-Marquardt algorithm; if the maximal clique contains ground control points, the pose parameters of the marker points of the corresponding calibration object are fixed as constants.
After finishing the external parameter adjustment optimization in the maximum clusters, we need to perform external parameter optimization among the maximum clusters;
at each maximum group C m Internally we elect a root node
Figure SMS_84
(for a maximum clique containing ground control points, the root node is the origin of the world coordinate system), the pose parameters of the world coordinate system of the root node are +.>
Figure SMS_85
All nodes inside the root node are represented by relative pose +.>
Figure SMS_86
The result of the relative pose is derived from the obtained relative pose in the process of optimizing the interior of the maximum mass in the first stage, and the initial pose of the root node is the initial calibrated external reference pose;
if there are connected edges between two nodes, i.e. there are adjacent nodes between two biggest clusters. For all adjacent nodes, multiple calibration settings are performed by using multiple calibration objects of different types to obtain the related edgesSet of calibration settings edge_sets= { calib_set e }. For camera cam participating in settings j Its external parameters are root node parameter expressions about its belonging biggest group
Figure SMS_87
For all G clique The object setting is carried out on all edges in the model, and finally, an objective function of all the maximum root node positions is formed.
Figure SMS_88
Through the Leveberg-marquadt algorithm, the optimized pose of all the maximum group root nodes can be obtained, and the optimized pose of all the camera nodes can be obtained.
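The Levenberg-Marquardt optimizer invoked in both stages can be illustrated with a minimal, generic implementation; the toy curve-fitting residual below is a hypothetical stand-in for the reprojection objective (Python and all names here are illustrative assumptions, not the patent's formulation):

```python
import numpy as np

def levenberg_marquardt(residual, x0, n_iter=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop with forward-difference Jacobians.

    residual : maps a parameter vector x to a residual vector r(x)
    x0       : initial parameter estimate
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        eps = 1e-6
        # Numeric Jacobian, one column per parameter.
        J = np.column_stack([
            (residual(x + eps * np.eye(len(x))[i]) - r) / eps
            for i in range(len(x))
        ])
        A = J.T @ J
        g = J.T @ r
        # Marquardt damping: blend Gauss-Newton with gradient descent.
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -g)
        if np.linalg.norm(residual(x + step)) < np.linalg.norm(r):
            x = x + step      # step accepted: trust the quadratic model more
            lam *= 0.5
        else:
            lam *= 2.0        # step rejected: damp harder
    return x

# Toy problem: recover a=0.5, b=2.0 from samples of y = exp(a*t) + b.
t = np.linspace(0, 2, 20)
y = np.exp(0.5 * t) + 2.0
fit = levenberg_marquardt(lambda p: np.exp(p[0] * t) + p[1] - y, [0.0, 0.0])
print(np.round(fit, 3))
```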
Mirror parameter calibration and mirror camera parameter calculation
If there are several mirrors M = {m v } in the camera area, the mirror information can be fully exploited by constructing mirror cameras.
The mirror normal vector can be expressed as a parametric expression of the mirror parameters
Figure SMS_89
namely
Figure SMS_90
Figure SMS_91
where modulo(rn x , rn y , rn z ) denotes the norm of the vector (rn x , rn y , rn z ).
Figure SMS_92
The corresponding reflection matrix is
Figure SMS_93
Under the mirror effect, the pose parameters of the mirror camera
Figure SMS_94
are a parametric expression of the original camera cam j 's pose parameters
Figure SMS_95
and the mirror parameters
Figure SMS_96
namely
Figure SMS_97
and the intrinsic parameters of
Figure SMS_98
are identical to those of cam j .
A mirror camera can participate in each calibration setting just like an ordinary camera; it is only necessary to annotate, on the original camera image, the coordinates of the marker points seen in the mirror image
Figure SMS_99
and to add the mirror parameters to the objective function
Figure SMS_100
Objective-function construction for a single calibration setting with mirrors:
If mirror information is to be used, then in a single calibration setting in which mirrors participate, the parameters of the set of mirrors involved in that setting
Figure SMS_101
are added to the objective function as additional optimization terms;
In this calibration setting, mirror m v is associated with the set of cameras through which its marker points (as in-mirror images) can be observed
Figure SMS_102
In the views of the cameras contained in CM v , the image coordinates of all marker points under the mirror effect of plane m v are annotated:
Figure SMS_103
The objective-function term associated with mirror m v is
Figure SMS_104
Mirror-related optimization terms can be added both to the intra-clique parameter optimization of each maximal clique C m ∈ G clique and to the optimization of the edges in G clique .
Finally: the foregoing describes only preferred embodiments of the invention and is not intended to limit it; any modification, equivalent substitution, or improvement made within the spirit and principles of the invention is intended to be included within its scope.

Claims (10)

1. A construction method of a multi-information-source event sensor based on a multi-view camera array, characterized by comprising the following steps:
Step S1: arranging a camera area, the number of cameras being greater than or equal to 3;
each camera within the camera area is represented by a node, and for any two cameras Cam i , Cam j with a common field of view, the represented nodes are connected by an undirected edge, so that any two cameras in the camera area are connected;
Step S2: calibrating the internal parameters of the cameras; an intrinsic-calibration checkerboard is prefabricated, and all cameras capture checkerboard images from different viewing angles;
Step S3: calibrating the external parameters of the cameras; an extrinsic-calibration checkerboard is prefabricated;
the maximal cliques of G cam are found using the Bron-Kerbosch algorithm: Clique = {C m };
Step S4: camera parametersBundle set adjustment; in the process of external parameter calibration, we obtain the initial external parameter estimation result
Figure QLYQS_1
Constructing an objective function of single setting of a single calibration object;
step S5: calibrating mirror parameters and calculating mirror camera parameters; if there are several mirrors m= { M in the camera area v We can make full use of the mirror information by constructing a mirror camera;
there is an objective function construction of a single calibration setting of the mirror.
2. The method for constructing a multi-source event sensor based on a multi-view camera array according to claim 1, wherein: in step S1, each camera in the camera area is represented by a node, and for any two cameras Cam i , Cam j with a common field of view the represented nodes are connected by an undirected edge, i.e. all nodes of the camera area form a connected graph G cam .
3. The method for constructing a multi-source event sensor based on a multi-view camera array according to claim 1, wherein: in step S2, the product of the intrinsic matrix and the extrinsic matrix is solved;
and the intrinsic matrix is solved.
4. The method for constructing a multi-source event sensor based on a multi-view camera array according to claim 1, wherein: in step S3, within each maximal clique C m , the checkerboard is placed stationary and the cameras in the clique capture checkerboard images; the PnP algorithm is used to compute the pose of each camera node in C m relative to the checkerboard, from which the pose relations between the cameras in the clique are obtained
Figure QLYQS_2
(belonging to the special Euclidean group, expressing the Euclidean transformation from the Cam i coordinate system to the Cam j coordinate system);
each maximal clique C m is simplified to a point, forming a new graph G clique ; if two maximal cliques C m , C n share adjacent nodes, the corresponding nodes in G clique are also connected by an edge;
the maximal clique C max with the most nodes in Clique, together with its corresponding node in G clique , is selected, and the checkerboard used by that node serves as the ground control points.
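The pose chaining in claim 4 can be illustrated as follows: once PnP yields each camera's pose relative to the same stationary checkerboard, the camera-to-camera Euclidean transform follows by composition (the 4x4 transforms below are hypothetical stand-ins for PnP outputs, and Python is an illustrative choice):

```python
import numpy as np

def se3(angle_z, t):
    """Toy SE(3) builder: rotation about z by angle_z, then translation t."""
    c, s = np.cos(angle_z), np.sin(angle_z)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = t
    return T

# PnP would give each camera's board-to-camera transform:
T_board_to_i = se3(0.3, [0.1, 0.0, 1.0])
T_board_to_j = se3(-0.2, [-0.4, 0.2, 1.5])

# Relative pose from Cam_i's frame to Cam_j's frame (a special Euclidean
# transform, as the claim states): go back to the board frame, then to j.
T_i_to_j = T_board_to_j @ np.linalg.inv(T_board_to_i)

# Sanity check: mapping a board point via i and then i->j equals mapping via j.
X_board = np.array([0.2, -0.3, 0.0, 1.0])
print(np.allclose(T_i_to_j @ (T_board_to_i @ X_board),
                  T_board_to_j @ X_board))  # True
```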
5. The method for constructing a multi-view camera array-based multi-source event sensor as recited in claim 4, wherein: in step S3, Dijkstra's algorithm is used to find the shortest path from every node of G clique to C max , and the edges on these paths are retained to form a new graph G clique ′;
for the actual adjacent nodes of the edges contained in G clique ′ (i.e. the adjacent nodes between two maximal cliques), their relative pose relations are computed using the checkerboard and PnP;
all nodes of G cam with direct relative pose relations are retained to form G cam ′, and each node in G cam ′ has a shortest path to the ground control points = {p cami };
for any node Cam i , the initial extrinsic-parameter estimate is computed from this path and the corresponding relative pose relations
Figure QLYQS_3
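A sketch of the initialization in claim 5: Dijkstra's algorithm gives each node's shortest path toward the ground-control clique, and the initial extrinsic is the composition of relative poses along that path (the graph, edge weights, and poses below are illustrative assumptions, as is the use of Python):

```python
import heapq
import numpy as np

def dijkstra_paths(adj, source):
    """Shortest paths from `source` to every reachable node.

    adj : dict node -> list of (neighbour, weight) pairs
    Returns dict node -> predecessor on its shortest path.
    """
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue          # stale heap entry
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (d + w, v))
    return prev

def chain_extrinsic(node, prev, rel_pose, root_pose):
    """Compose relative poses along the shortest path back to the root clique."""
    T = np.eye(4)
    while node in prev:
        T = rel_pose[(prev[node], node)] @ T   # parent -> node transform
        node = prev[node]
    return root_pose @ T

adj = {"Cmax": [("A", 1.0)], "A": [("Cmax", 1.0), ("B", 1.0)], "B": [("A", 1.0)]}
prev = dijkstra_paths(adj, "Cmax")
Tx = np.eye(4); Tx[0, 3] = 1.0                 # each hop: shift 1 m along x
rel_pose = {("Cmax", "A"): Tx, ("A", "B"): Tx}
T_B = chain_extrinsic("B", prev, rel_pose, np.eye(4))
print(T_B[0, 3])  # 2.0
```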
6. The method for constructing a multi-source event sensor based on a multi-view camera array according to claim 1, wherein: in step S4, conventional optimization methods usually place all camera parameters and their corresponding data into one objective function; for a large complex camera array, the excessive number of parameters makes a single iteration of the Levenberg-Marquardt algorithm too time-consuming. Considering that cameras without a common field of view cannot be directly associated in a single calibration setting, the extrinsic parameters of the complex camera array are optimized by constructing a multi-stage parallel optimization strategy based on the maximal cliques of the camera array;
in step S3, a graph G clique composed of the maximal cliques was constructed; for G clique , optimization is first performed individually within each node, and for different nodes these optimizations can run in parallel;
for a single camera node, its camera extrinsic parameters are optimized; the extrinsic matrix Extri cam is controlled by the pose parameters
Figure QLYQS_4
and obtained by the following formula
Figure QLYQS_5
where the rotation vector
Figure QLYQS_6
and translation vector
Figure QLYQS_7
are converted via the Rodrigues formula.
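The rotation-vector-to-rotation-matrix conversion referenced in claim 6 is the classical Rodrigues formula; a minimal sketch, assuming pose parameters ordered as (rx, ry, rz, tx, ty, tz) — the ordering and Python itself are illustrative assumptions:

```python
import numpy as np

def rodrigues(rvec):
    """Rodrigues formula: rotation vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)                       # near-zero rotation
    k = np.asarray(rvec, dtype=float) / theta  # unit rotation axis
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])           # cross-product matrix of k
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def extrinsic_matrix(pose):
    """Build the 4x4 extrinsic matrix from pose parameters (rx, ry, rz, tx, ty, tz)."""
    T = np.eye(4)
    T[:3, :3] = rodrigues(pose[:3])
    T[:3, 3] = pose[3:]
    return T

# A 90-degree rotation about z maps the x axis onto the y axis.
T = extrinsic_matrix([0.0, 0.0, np.pi / 2, 0.0, 0.0, 0.0])
print(np.allclose(T[:3, :3] @ [1, 0, 0], [0, 1, 0]))  # True
```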
7. The method for constructing a multi-source event sensor based on a multi-view camera array according to claim 1, wherein: in step S4, in a single setting of a single calibration object, the relevant parameters include the calibration-object coordinate system
Figure QLYQS_8
the set Calib of marker points on the calibration object, and the set Cam calib of cameras that can observe the calibration object;
for any calibration object and its marker points, the marker points form a set Calib = {c i }; a node c root is selected from them as the root node, and the calibration-object coordinate system is constructed with the root node as its origin;
the remaining nodes in Calib are connected directly to c root , and their homogeneous coordinates relative to c root in the calibration-object coordinate system are measured in reality
Figure QLYQS_9
this yields the physical model of the calibration object; the pose parameters of the calibration-object coordinate system relative to the world coordinate system are set as
Figure QLYQS_10
then the homogeneous world coordinates of all marker points on the calibration object can be expressed parametrically as
Figure QLYQS_11
the calibration object is placed in the camera array, and the set of cameras that can see it is denoted Cam calib = {cam j }; the cameras in Cam calib capture pictures of the calibration object, and the image coordinates of its marker points are annotated on each picture
Figure QLYQS_12
in the previous step, the intrinsic parameters K camj of all cameras and preliminary extrinsic-parameter estimates were obtained
Figure QLYQS_13
combining these with the annotated marker-point coordinates obtained above
Figure QLYQS_14
preliminary position estimates of each marker point of the calibration object can be obtained by triangulation
Figure QLYQS_15
using the preliminary position estimates of the marker points
Figure QLYQS_16
and their parametric expressions with respect to the root-node pose
Figure QLYQS_17
a preliminary estimate of the pose parameters of the calibration-object coordinate system
Figure QLYQS_18
is obtained;
thus, the objective function of the calibration object in this setting can be constructed as:
Figure QLYQS_19
8. The method for constructing a multi-source event sensor based on a multi-view camera array according to claim 1, wherein: in step S4, within a maximal clique, multiple calibration settings are performed with several different types of calibration objects: Clique_sets = {calib_set k }; each calib_set k contains the elements of one calibration setting: the calibration-object coordinate system
Figure QLYQS_20
the marker-point set Calib of the calibration object, and the camera set Cam calib that can observe it; summing the objective functions of the multiple settings gives the objective function to be optimized
Figure QLYQS_21
this is optimized with the Levenberg-Marquardt algorithm; if the maximal clique contains ground control points, the pose parameters of the marker points of the corresponding calibration object are fixed as constants;
after the bundle-adjustment optimization inside each maximal clique is finished, extrinsic-parameter optimization between the maximal cliques is performed;
within each maximal clique C m , a root node is elected
Figure QLYQS_22
(for a maximal clique containing ground control points, the root node is the origin of the world coordinate system); the world-coordinate pose parameters of the root node are
Figure QLYQS_23
and all other nodes inside the clique are expressed by their pose relative to the root node
Figure QLYQS_24
the relative poses come from the first-stage optimization inside each maximal clique, and the initial pose of the root node is the initially calibrated extrinsic pose;
if there are connecting edges between two nodes, i.e. adjacent nodes exist between two maximal cliques, then for all adjacent nodes multiple calibration settings are performed with several different types of calibration objects, giving the set of calibration settings on the edges: edge_sets = {calib_set e }; for a camera cam j participating in a setting, its extrinsic parameters are expressed parametrically in terms of the root node of the maximal clique it belongs to
Figure QLYQS_25
calibration-object settings are performed on all edges of G clique , finally forming an objective function over the poses of all maximal-clique nodes
Figure QLYQS_26
via the Levenberg-Marquardt algorithm, the optimized poses of all maximal-clique root nodes, and with them the optimized poses of all camera nodes, are obtained.
9. The method for constructing a multi-source event sensor based on a multi-view camera array according to claim 1, wherein: in step S5, the mirror normal vector can be expressed as a parametric expression of the mirror parameters
Figure QLYQS_27
namely:
Figure QLYQS_28
where modulo(rn x , rn y , rn z ) denotes the norm of the vector (rn x , rn y , rn z );
Figure QLYQS_29
the corresponding reflection matrix is
Figure QLYQS_30
under the mirror effect, the pose parameters of the mirror camera
Figure QLYQS_31
are a parametric expression of the original camera cam j 's pose parameters
Figure QLYQS_32
and the mirror parameters
Figure QLYQS_33
namely
Figure QLYQS_34
and the intrinsic parameters of
Figure QLYQS_35
are identical to those of cam j ;
a mirror camera can participate in each calibration setting just like an ordinary camera; it is only necessary to annotate, on the original camera image, the coordinates of the marker points seen in the mirror image
Figure QLYQS_36
and to add the mirror parameters to the objective function
Figure QLYQS_37
10. The method for constructing a multi-source event sensor based on a multi-view camera array according to claim 1, wherein: in step S5, an objective function of a single calibration setting with mirrors is constructed:
if mirror information is to be used, then in a single calibration setting in which mirrors participate, the parameters of the set of mirrors involved in that setting
Figure QLYQS_38
are added to the objective function as additional optimization terms;
in this calibration setting, mirror m v is associated with the set of cameras through which its marker points (as in-mirror images) can be observed
Figure QLYQS_39
in the views of the cameras contained in CM v , the image coordinates of all marker points under the mirror effect of plane m v are annotated:
Figure QLYQS_40
the objective-function term associated with mirror m v is
Figure QLYQS_41
mirror-related optimization terms can be added both to the intra-clique parameter optimization of each maximal clique C m ∈ G clique and to the optimization of the edges in G clique .
CN202310054282.2A 2023-02-03 2023-02-03 Construction method of multi-information source event sensor based on multi-view camera array Pending CN116205991A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310054282.2A CN116205991A (en) 2023-02-03 2023-02-03 Construction method of multi-information source event sensor based on multi-view camera array

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310054282.2A CN116205991A (en) 2023-02-03 2023-02-03 Construction method of multi-information source event sensor based on multi-view camera array

Publications (1)

Publication Number Publication Date
CN116205991A true CN116205991A (en) 2023-06-02

Family

ID=86507122

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310054282.2A Pending CN116205991A (en) 2023-02-03 2023-02-03 Construction method of multi-information source event sensor based on multi-view camera array

Country Status (1)

Country Link
CN (1) CN116205991A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116681732A (en) * 2023-08-03 2023-09-01 Nanchang Institute of Technology Target motion recognition method and system based on compound eye morphological vision
CN116681732B (en) * 2023-08-03 2023-10-20 Nanchang Institute of Technology Target motion recognition method and system based on compound eye morphological vision
CN117036448A (en) * 2023-10-10 2023-11-10 Shenzhen Fanlai Intelligent Co ltd Scene construction method and system of multi-view camera
CN117036448B (en) * 2023-10-10 2024-04-02 Shenzhen Fanlai Intelligent Co ltd Scene construction method and system of multi-view camera

Similar Documents

Publication Publication Date Title
CN106803273B (en) A kind of panoramic camera scaling method
CN116205991A (en) Construction method of multi-information source event sensor based on multi-view camera array
CN108765328B (en) High-precision multi-feature plane template and distortion optimization and calibration method thereof
US10290119B2 (en) Multi view camera registration
CN103033132B (en) Plane survey method and device based on monocular vision
CN105243637B (en) One kind carrying out full-view image joining method based on three-dimensional laser point cloud
CN107633536A (en) A kind of camera calibration method and system based on two-dimensional planar template
US20200334842A1 (en) Methods, devices and computer program products for global bundle adjustment of 3d images
CN108629829B (en) Three-dimensional modeling method and system of the one bulb curtain camera in conjunction with depth camera
CN109102537A (en) A kind of three-dimensional modeling method and system of laser radar and the combination of ball curtain camera
CN110345921B (en) Stereo visual field vision measurement and vertical axis aberration and axial aberration correction method and system
CN111369659B (en) Texture mapping method, device and equipment based on three-dimensional model
WO2020038386A1 (en) Determination of scale factor in monocular vision-based reconstruction
CN105160663A (en) Method and system for acquiring depth image
CN105654547B (en) Three-dimensional rebuilding method
CN105488766B (en) Fisheye image bearing calibration and device
CN109559349A (en) A kind of method and apparatus for calibration
CN110675456B (en) Method and device for calibrating external parameters of multi-depth camera and storage medium
JP2009284188A (en) Color imaging apparatus
CN103729839B (en) A kind of method and system of sensor-based outdoor camera tracking
US20190082173A1 (en) Apparatus and method for generating a camera model for an imaging system
CN106500729B (en) A kind of smart phone self-test calibration method without controlling information
CN108759788A (en) Unmanned plane image positioning and orientation method and unmanned plane
CN113450416B (en) TCSC method applied to three-dimensional calibration of three-dimensional camera
CN115797468B (en) Automatic correction method, device and equipment for installation height of fish-eye camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination