CN113592954B - Multi-camera calibration method and related equipment in large space environment based on optical dynamic capturing - Google Patents

Multi-camera calibration method and related equipment in large space environment based on optical dynamic capturing

Info

Publication number
CN113592954B
CN113592954B (granted from application CN202110828055.1A)
Authority
CN
China
Prior art keywords
camera
data
optical
target
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110828055.1A
Other languages
Chinese (zh)
Other versions
CN113592954A (en)
Inventor
王越 (Wang Yue)
许秋子 (Xu Qiuzi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Realis Multimedia Technology Co Ltd
Original Assignee
Shenzhen Realis Multimedia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Realis Multimedia Technology Co Ltd
Priority claimed from CN202110828055.1A
Publication of CN113592954A
Application granted
Publication of CN113592954B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The invention relates to the technical field of computer vision, and in particular to a multi-camera calibration method, and related equipment, for a large-space environment based on optical dynamic capturing. The method comprises the following steps: collecting multi-frame data captured by each optical camera observing a calibration rod, classifying the multi-frame data by frame, each frame yielding a plurality of corresponding initial data; filtering and validating the coordinate data of each frame, each frame yielding a plurality of corresponding effective data; initializing each optical camera according to the plurality of effective data to obtain the target internal parameters of each optical camera; and determining the camera whose serial number has the most effective data as the main camera, and obtaining the target external parameters of all the optical cameras from the internal and external parameters of the main camera. The invention finally obtains high-precision camera internal and external parameters, provides the necessary and sufficient conditions for the subsequent conversion of two-dimensional coordinates into three-dimensional coordinates, and thereby lays the foundation for high-precision positioning and tracking in the whole optical dynamic capturing system.

Description

Multi-camera calibration method and related equipment in large space environment based on optical dynamic capturing
Technical Field
The invention relates to the technical field of computer vision, in particular to a multi-camera calibration method and related equipment in a large-space environment based on optical dynamic capturing.
Background
With the increasing popularity of machine vision applications, the demand for multi-camera vision systems in large-space environments keeps growing, mainly for high-precision positioning and tracking in large spaces. In machine vision applications, to determine the correlation between the three-dimensional geometric position of a point on the surface of a spatial object and its corresponding point in the image, a geometric model of camera imaging must be established; the parameters of this geometric model are the camera parameters. These parameters must be obtained through experiments and calculations, and the process of solving for them is called camera calibration. The traditional camera calibration method requires a calibration object of known size, such as a chessboard calibration plate, and obtains the internal and external parameters of the camera model with a certain algorithm by establishing the correspondence between known coordinate points on the calibration object and their image points. In a multi-camera environment, in order to locate and track an object, not only the parameters of each camera but also the positional relationships between the cameras need to be determined.
The optical dynamic capturing system emits infrared light through an ultra-high-power near-infrared light source in the dynamic capturing camera; the light is reflected by passive marking points, a photosensitive element converts the optical signal into an image signal and outputs it to a control circuit, an image processing unit in the control circuit preprocesses the image signal in hardware using an FPGA (field-programmable gate array), and finally the 2D coordinate information of the marking points flows out to the tracking software. These characteristics make optical dynamic capturing unsuitable for traditional camera calibration methods, for three reasons: first, the optical camera cannot obtain the coordinate information of a chessboard calibration board; second, the number of cameras in a dynamic capturing system is large and the overlap relationships between camera areas are complex, so calibrating with a chessboard calibration board would cost a great deal of manpower and material resources; third, the chessboard calibration algorithm performs poorly under such conditions and takes far too long, which is very inconvenient.
Disclosure of Invention
The invention mainly aims to provide a multi-camera calibration method, and related equipment, for a large-space environment based on optical dynamic capturing, so as to solve the technical problem of calibrating a plurality of optical cameras in a large-space environment.
To achieve the above purpose, the invention provides a multi-camera calibration method in a large-space environment based on optical dynamic capturing, comprising the following steps:
acquiring the camera serial numbers of a plurality of optical cameras, collecting multi-frame data captured by each optical camera observing a swinging calibration rod, and classifying the multi-frame data containing coordinate data by frame, each frame yielding a plurality of corresponding initial data, wherein each initial data item comprises a camera serial number and the corresponding coordinate data;
removing initial data whose coordinate data contain fewer coordinate points than a preset number in each frame, then detecting, in the remaining initial data, whether the coordinate data contain coordinate points belonging to the plurality of marking points of the calibration rod; if so, recording those coordinate points and the corresponding camera serial number to form effective data, otherwise removing the initial data, each frame yielding a plurality of corresponding effective data;
initializing each optical camera according to the plurality of effective data to obtain the target internal parameters of each optical camera;
determining the camera whose serial number has the most effective data as the main camera, defining the rotation information of the main camera as an identity matrix and its translation information as a zero matrix, and taking the identity matrix and the zero matrix as the target external parameters of the main camera;
matching the other optical cameras with the main camera according to the effective data of each frame, marking an optical camera containing matching data as a target camera, and deriving the rotation and translation information of the target camera from the rotation and translation information of the main camera, this rotation and translation information being the target external parameters of the target camera;
and marking each target camera whose target external parameters have been obtained as a main camera, and repeating the previous operation with the remaining unmatched optical cameras until the target external parameters of all the optical cameras are obtained.
Optionally, removing the initial data whose coordinate data contain fewer coordinate points than the preset number in each frame, and detecting, in the remaining initial data, whether the coordinate data contain coordinate points belonging to the plurality of marking points of the calibration rod, includes:
acquiring the number of coordinate points in each coordinate data item of a frame, judging whether the number of coordinate points is less than the number of marking points on the calibration rod, and if so, eliminating the coordinate data from the corresponding frame;
if the number of coordinate points is not less than the preset number, further judging whether it is greater than a preset maximum number, and if so, eliminating the coordinate data from the corresponding frame, each frame yielding post-elimination initial data;
and acquiring the positional-relationship data of the plurality of marking points on the calibration rod, and detecting, in the post-elimination initial data, whether the coordinate data contain coordinate points satisfying the positional-relationship data.
Optionally, initializing each optical camera according to the plurality of effective data to obtain the target internal parameters of each optical camera includes:
the target internal parameters of the optical camera comprise the imaging length, imaging width and focal length; among all effective data corresponding to the optical camera, the maximum abscissa and maximum ordinate of the coordinate data are found, the maximum abscissa is recorded as the imaging length of the optical camera, and the maximum ordinate is recorded as its imaging width;
the focal length of the optical camera is obtained by the following calculation formula:
let the imaging length be W and the imaging width be H; the imaging length ratio alpha and imaging width ratio beta are:
alpha = W / (W + H)
beta = H / (W + H);
the value fx of the focal length along the imaging length direction and the value fy along the imaging width direction are:
fx = W * 0.5 / alpha
fy = H * 0.5 / beta;
wherein fx and fy are the focal lengths of the optical camera.
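The initialization just described can be sketched in Python (an illustrative sketch; the function name and the point-list input format are assumptions, not from the patent):

```python
def init_intrinsics(points):
    """Initialize one optical camera's target internal parameters from all
    of its valid 2D coordinate points, as described above: the imaging
    length/width are the largest observed abscissa/ordinate, and the focal
    lengths follow from the length and width ratios."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    W = max(xs)            # imaging length: maximum abscissa
    H = max(ys)            # imaging width: maximum ordinate
    alpha = W / (W + H)    # imaging length ratio
    beta = H / (W + H)     # imaging width ratio
    fx = W * 0.5 / alpha   # focal length along the imaging length
    fy = H * 0.5 / beta    # focal length along the imaging width
    return W, H, fx, fy
```

Note that both fx and fy reduce algebraically to 0.5 * (W + H), so this initialization effectively assumes square pixels; it is only a starting value to be refined by the later iterative optimization.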
Optionally, matching the other optical cameras with the main camera according to the initial data and the effective data of each frame, and marking an optical camera containing matching data as a target camera, includes:
searching frame by frame whether the effective data contain the camera serial number of the main camera; if not, continuing to search the next frame;
if the camera serial number of the main camera is contained, searching camera by camera whether the initial data or the coordinate data of the other optical cameras in the effective data contain enough matching data; if more than a preset number of frames of effective data simultaneously contain coordinate data of both the main camera and the current optical camera, the main camera and the current optical camera are considered to contain enough matching data;
if no matching data is contained, continuing to the next optical camera; if matching data is contained, marking the current optical camera as a target camera, finally obtaining a plurality of target cameras and corresponding coordinate data in each frame.
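The frame-counting criterion above might be sketched as follows (an illustrative Python sketch; the per-frame dict format and the name `min_frames`, standing in for the patent's preset number of frames, are assumptions):

```python
def find_target_cameras(frames, main_id, min_frames=10):
    """Return the cameras that share enough matching data with the main
    camera: for each candidate, count the frames in which both the main
    camera and the candidate contributed coordinate data, and keep the
    candidates whose count exceeds `min_frames`. Each frame is modeled
    as a dict mapping camera serial number -> list of 2D points."""
    counts = {}
    for frame in frames:
        if main_id not in frame:
            continue  # main camera absent in this frame: search the next
        for cam_id in frame:
            if cam_id != main_id:
                counts[cam_id] = counts.get(cam_id, 0) + 1
    return [cam for cam, n in counts.items() if n > min_frames]
```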
Optionally, obtaining the rotation information and translation information of the target camera through the rotation information and translation information of the main camera, this rotation and translation information being the target external parameters of the target camera, includes:
in any one frame of the matching data, respectively acquiring the coordinate data of the main camera and the target camera, acquiring the positional-relationship data of the plurality of marking points on the calibration rod, matching the coordinate data of the main camera with that of the target camera according to the positional-relationship data to obtain a plurality of groups of two-dimensional feature pairs, constructing a linear equation system from these feature pairs and the parameters of the two optical cameras, and solving for the essential matrix;
and decomposing the essential matrix through a singular value decomposition (SVD) algorithm to obtain the rotation information and translation information of the target camera.
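The SVD decomposition step can be sketched with NumPy (a sketch of the standard textbook decomposition; the patent only states that SVD is used. The decomposition yields two candidate rotations and a translation direction; selecting the physically valid pair via a cheirality check is standard practice and omitted here):

```python
import numpy as np

def decompose_essential(E):
    """Decompose an essential matrix E into its two candidate rotations
    and the translation direction (up to scale and sign) via SVD."""
    U, _, Vt = np.linalg.svd(E)
    # enforce proper rotations (determinant +1) on both orthogonal factors
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt      # first candidate rotation
    R2 = U @ W.T @ Vt    # second candidate rotation
    t = U[:, 2]          # translation direction, defined up to scale
    return R1, R2, t
```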
Optionally, performing iterative optimization on the target internal and external parameters of the main camera, the target internal and external parameters of the target camera, and all matching data of the main camera and the target camera, wherein the cost function in the iterative optimization is the reprojection error, so as to obtain the optimized target internal and external parameters of the main camera and of the target camera; the iterative optimization process is as follows:
converting the world coordinate p to camera coordinates:
P' = R * p + T = {X, Y, Z}
wherein R and T are the external parameters of the optical camera;
projecting P' onto the normalized plane to obtain the normalized coordinates:
Pc = {u, v, 1} = {X/Z, Y/Z, 1}
applying radial distortion, with r^2 = u^2 + v^2:
u' = u * (1 + k1*r^2 + k2*r^4)
v' = v * (1 + k1*r^2 + k2*r^4)
calculating the pixel coordinates M(Us, Vs):
Us = fx * u' + cx
Vs = fy * v' + cy
wherein fx, fy, cx and cy are internal parameters of the optical camera;
letting N(U0, V0) be the pixel coordinates detected by the optical camera, the reprojection error e of the world coordinate p is:
e = ||N - M||^2
substituting all matching data of the main camera and the target camera, the overall cost function is:
E = Σ_i ||N_i - M_i||^2
summed over all matched observations i;
in the iterative process, when the error is reduced to be within a preset threshold range, stopping calculation, and outputting all the internal parameters and external parameters of the optical camera after iterative optimization.
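The projection chain and reprojection error derived above can be written out directly (an illustrative NumPy sketch; the function names are not from the patent):

```python
import numpy as np

def reproject(p, R, T, fx, fy, cx, cy, k1=0.0, k2=0.0):
    """Project a world point p through the pinhole-plus-radial-distortion
    model of the derivation above, returning pixel coordinates M(Us, Vs)."""
    X, Y, Z = R @ p + T           # world -> camera coordinates
    u, v = X / Z, Y / Z           # normalized image plane
    r2 = u * u + v * v            # squared radial distance
    d = 1.0 + k1 * r2 + k2 * r2 * r2
    return np.array([fx * u * d + cx, fy * v * d + cy])

def reprojection_error(p, detected, R, T, fx, fy, cx, cy, k1=0.0, k2=0.0):
    """Squared distance between detected pixels N and the reprojection M."""
    M = reproject(p, R, T, fx, fy, cx, cy, k1, k2)
    return float(np.sum((np.asarray(detected) - M) ** 2))
```

Summing `reprojection_error` over all matched observations gives the overall cost, which an iterative solver (e.g. nonlinear least squares) then minimizes over the camera parameters until the error drops below the preset threshold.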
Optionally, inputting the target internal parameters, target external parameters and all acquired coordinate data of all the optical cameras into a preset bundle adjustment model, the output of which is the optimized target internal parameters of all the optical cameras.
Further, in order to achieve the above object, the present invention also provides a multi-camera calibration device in a large-space environment based on optical dynamic capturing, comprising:
an initial data acquisition module, used for acquiring the camera serial numbers of a plurality of optical cameras, collecting multi-frame data captured by each optical camera observing a swinging calibration rod, and classifying the multi-frame data containing coordinate data by frame, each frame yielding a plurality of corresponding initial data, wherein each initial data item comprises a camera serial number and the corresponding coordinate data;
an effective data recording module, used for eliminating initial data whose coordinate data contain fewer coordinate points than the preset number in each frame, detecting in the remaining initial data whether the coordinate data contain coordinate points belonging to the plurality of marking points of the calibration rod, and if so, recording those coordinate points and the corresponding camera serial number to form effective data, otherwise eliminating the initial data, each frame yielding a plurality of corresponding effective data;
a target internal parameter obtaining module, used for initializing each optical camera according to the plurality of effective data to obtain the target internal parameters of each optical camera;
a main camera determining module, used for determining the camera whose serial number has the most effective data as the main camera, defining the rotation information of the main camera as an identity matrix and its translation information as a zero matrix, the identity matrix and zero matrix being the target external parameters of the main camera;
a calibration calculation module, used for matching the other optical cameras with the main camera according to the effective data of each frame, marking an optical camera containing matching data as a target camera, and deriving the rotation and translation information of the target camera from the rotation and translation information of the main camera, this rotation and translation information being the target external parameters of the target camera;
and a target external parameter obtaining module, used for marking each target camera whose target external parameters have been obtained as a main camera, and repeating the previous operation with the remaining unmatched optical cameras until the target external parameters of all the optical cameras are obtained.
In order to achieve the above object, the present invention further provides multi-camera calibration equipment in a large-space environment based on optical dynamic capturing, the equipment comprising: a memory, a processor, and a multi-camera calibration program, stored in the memory and runnable on the processor, for a large-space environment based on optical dynamic capturing; when the multi-camera calibration program is executed by the processor, the multi-camera calibration method in a large-space environment based on optical dynamic capturing is realized.
In order to achieve the above object, the present invention further provides a computer-readable storage medium storing a multi-camera calibration program for a large-space environment based on optical dynamic capturing; when this program is executed by a processor, the steps of the multi-camera calibration method in a large-space environment based on optical dynamic capturing are implemented.
The multi-camera calibration method in a large-space environment based on optical dynamic capturing is used to calibrate multiple cameras in a large-space environment, namely the scanning field of an optical dynamic capturing system, using a two-dimensional calibration rod. The internal and external parameters of each optical camera are calculated with a certain algorithm from the many coordinate data captured by the plurality of optical cameras. This calibration method not only dispenses with the complex calibration plates and other device structures of traditional calibration methods, but also reduces the algorithmic complexity of the matching and parameter-calculation processes, so calibration is faster and saves manpower and material resources. The invention finally obtains high-precision camera internal and external parameters, provides the necessary and sufficient conditions for the subsequent conversion of two-dimensional coordinates into three-dimensional coordinates, and thereby lays the foundation for high-precision positioning and tracking in the whole optical dynamic capturing system.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention.
FIG. 1 is a schematic structural diagram of an operating environment of a multi-camera calibration device in a large-space environment based on optical dynamic capturing according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for calibrating multiple cameras in a large-space environment based on optical dynamic capturing in an embodiment of the invention;
FIG. 3 is a schematic view of a calibration rod according to an embodiment of the present invention;
FIG. 4 is a block diagram of a multi-camera calibration device in a large space environment based on optical dynamic capture in one embodiment of the invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Referring to fig. 1, a schematic structural diagram of an operating environment of a multi-camera calibration device in a large-space environment based on optical dynamic capturing according to an embodiment of the present invention is shown.
As shown in fig. 1, the multi-camera calibration device in a large space environment based on optical dynamic capture comprises: a processor 1001, such as a CPU, a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable memory (non-volatile memory), such as a disk memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
Those skilled in the art will appreciate that the hardware configuration of the multi-camera calibration apparatus in the large-space optical-dynamic-capture-based environment shown in fig. 1 does not constitute a limitation of the multi-camera calibration apparatus in the large-space optical-dynamic-capture-based environment, and may include more or fewer components than shown, or may combine certain components, or may have a different arrangement of components.
As shown in fig. 1, the memory 1005, as a computer-readable storage medium, may include an operating system, a network communication module, a user interface module, and a multi-camera calibration program for a large-space environment based on optical dynamic capturing. The operating system is a program for managing and controlling the multi-camera calibration equipment and software resources in the large-space environment based on optical dynamic capturing, and supports the operation of the multi-camera calibration program and other software and/or programs.
In the hardware structure of the multi-camera calibration device in the large-space environment based on optical dynamic capturing shown in fig. 1, the network interface 1004 is mainly used for accessing a network; the user interface 1003 is mainly used for detecting a confirm command, an edit command, etc., and the processor 1001 may be used for calling a multi-camera calibration program under a large space environment based on optical dynamic capturing stored in the memory 1005 and performing the following operations of the embodiments of the multi-camera calibration method under the large space environment based on optical dynamic capturing.
Referring to fig. 2, a flowchart of a method for calibrating multiple cameras in a large-space environment based on optical dynamic capturing according to an embodiment of the present invention is shown. As shown in fig. 2, the method includes the following steps:
Step S1, acquiring initial data: acquiring the camera serial numbers of a plurality of optical cameras, collecting multi-frame data captured by each optical camera observing the swinging calibration rod, classifying the multi-frame data containing coordinate data by frame, each frame yielding a plurality of corresponding initial data, wherein each initial data item comprises a camera serial number and the corresponding coordinate data.
The calibration rod in this step is a two-dimensional calibration rod provided with a plurality of marking points, which are coated with a highly reflective material and can be identified by the optical cameras. The positional relationship of the marking points is preset, so the positional-relationship data among the marking points can be obtained directly. As shown in fig. 3, the calibration rod 2 is provided with five marking points 21. During use, the calibration rod is swung in the large-space multi-camera environment; the optical cameras identify the marking points on the calibration rod, the two-dimensional coordinate data of each frame are obtained, and the coordinate data are recorded and stored.
Since the calibration algorithm requires a large amount of data, the data need to be clearly sorted and stored in a canonical data structure. In this step's data structure, the coordinate data collected by each optical camera form the bottom layer; these coordinate data form the current frame's data for each optical camera, and finally the current-frame data of all optical cameras are integrated into one complete frame. First, while the calibration rod is being swung, the complete per-frame data of all optical cameras is recorded as a Frame; then each complete Frame comprises the current-frame initial data of each optical camera, recorded as a View; finally, each View includes a Camera serial number camera_id and coordinate data Points.
Not every optical camera captures the calibration rod in every frame, i.e. not every optical camera has coordinate data in every frame, so a Frame does not contain a View for every optical camera, but only for those optical cameras that produced coordinate data. Clearly, the advantage of this design is that a large amount of storage space is saved. With this data structure, the finally acquired data are multiple Frames recorded while the calibration rod was swung, and each Frame comprises, for each optical camera camera_id that captured the current frame, a View with its two-dimensional coordinate data Points.
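The Frame/View structure described above might be modeled like this (an illustrative Python sketch; only the names Frame, View, camera_id and Points come from the text, the rest is an assumed layout):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class View:
    """Per-camera data for one frame: the camera serial number plus the
    2D coordinate points that camera captured in the frame."""
    camera_id: int
    points: List[Tuple[float, float]]

@dataclass
class Frame:
    """One complete frame: only the cameras that actually produced
    coordinate data contribute a View, which saves storage space."""
    views: List[View] = field(default_factory=list)
```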
Step S2, recording effective data: eliminating initial data whose coordinate data contain fewer coordinate points than the preset number in each frame, detecting in the remaining initial data whether the coordinate data contain coordinate points belonging to the plurality of marking points of the calibration rod, and if so, recording those coordinate points and the corresponding camera serial number to form effective data, otherwise eliminating the initial data, each frame yielding a plurality of corresponding effective data.
Because the calibration rod is swung continuously during the data collection of step S1, not every frame of data is complete, that is, not every frame contains the coordinates of all the marking points on the calibration rod; and even when enough coordinates are present, it is not yet certain that they belong to the marking points on the calibration rod. This step therefore checks the acquired coordinate data of every frame. The preset number here equals the number of marking points on the calibration rod. For example, with five marking points on the calibration rod the preset number is 5: data whose coordinate data contain fewer than 5 coordinate points are first excluded in each frame; then, since the positional relationship of the five marking points on the calibration rod is known, it can be detected in the remaining coordinate data whether each frame contains five coordinate points belonging to the calibration rod; if so, these five coordinate data are recorded, and if not, the incomplete coordinate data are removed from the frame.
In one embodiment, in detecting whether the coordinate data includes coordinate points belonging to a plurality of marker points of the calibration rod, the following manner may be adopted:
step S201, first round of rejection: and acquiring the number of coordinate points in each coordinate data in one frame, judging whether the number of the coordinate points is less than the number of a plurality of marking points of the calibration rod, and if so, eliminating the coordinate data from the corresponding frame.
In this step, the number of coordinate points of the preset number is the same as the number of the marking points of the calibration rod, that is, if the number of the marking points of the calibration rod is 5 as shown in fig. 3, it is determined whether the number of the coordinate points is less than 5, and the first round of elimination is performed on the coordinate data less than 5.
Step S202, second round of elimination: if the number of coordinate points is not less than the preset number, further judging whether it is greater than a preset maximum number, and if so, eliminating the coordinate data from the corresponding frame, each frame yielding post-elimination initial data.
In the coordinate data remaining after the first round of elimination, it is judged whether the number of coordinate points exceeds a certain maximum threshold, which may for example be 500. If so, the current optical camera is considered to have captured far too many coordinate points in this frame, and because of the excessive useless data the coordinate data undergo the second round of elimination.
Step S203, position detection: acquire the positional relation data of the mark points on the calibration rod, and detect, in the initial data remaining after the rejection, whether the coordinate data contains coordinate points matching the positional relation data.
Because the positions of the mark points on the calibration rod are known and fixed, the positional relation data allows a position check to be performed on the coordinate points in the coordinate data of each optical camera in each frame, determining whether they match the positional relation data. For example, the 5 mark points in fig. 3 have a fixed positional relation: it is checked whether the coordinate points contain two line segments, each formed by connecting 3 coordinate points, such that the middle coordinate point of the two segments coincides and the two segments are perpendicular. If 5 coordinate points satisfying the positional relation data are found, the coordinate data is considered to contain the mark points of the calibration rod.
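A sketch of the position check for the cross-shaped 5-marker layout described above; the function name is illustrative, the tolerance would be tuned for real data, and the check that the shared point lies strictly between the endpoints is omitted for brevity:

```python
import itertools
import numpy as np

def cross2(u, v):
    """z-component of the 2D cross product (zero for collinear vectors)."""
    return u[0] * v[1] - u[1] * v[0]

def contains_rod(points, tol=1e-9):
    """Look for two perpendicular 3-point segments sharing a middle point,
    i.e. the cross layout described for the 5-marker calibration rod."""
    pts = [np.asarray(p, float) for p in points]
    idx = range(len(pts))
    for a, b in itertools.combinations(idx, 2):            # segment 1 endpoints
        for c, d in itertools.combinations(set(idx) - {a, b}, 2):
            for m in set(idx) - {a, b, c, d}:              # shared middle point
                s1, s2 = pts[b] - pts[a], pts[d] - pts[c]
                if (abs(cross2(s1, pts[m] - pts[a])) < tol      # m on segment 1
                        and abs(cross2(s2, pts[m] - pts[c])) < tol  # m on segment 2
                        and abs(np.dot(s1, s2)) < tol):         # perpendicular
                    return True
    return False
```

For 5 points the exhaustive search is only a few dozen combinations, so the brute force is harmless.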
With this detection scheme, whether the coordinate data of each frame is complete can be determined accurately from the known positional information of the mark points, providing accurate and complete input for the subsequent calculation of each optical camera's internal and external parameters.
Step S3, obtaining target internal parameters: and initializing each optical camera according to the plurality of effective data to obtain the target internal parameters of each optical camera.
The internal parameters of an optical camera include information such as the focal length and the lens distortion parameters. To obtain accurate internal parameters, this step determines a set of initialization parameters for each optical camera, and these initialization parameters are the target internal parameters of each camera.
In one embodiment, step S3 further comprises:
Step S301, calculating the imaging length and width: the target internal parameters of the optical camera include the imaging length, imaging width and focal length; in all effective data corresponding to the optical camera, search for the maximum abscissa and the maximum ordinate among the coordinate data, record the maximum abscissa as the imaging length of the optical camera and the maximum ordinate as the imaging width of the optical camera.
In step S2, the effective data of each frame of each optical camera is recorded, the maximum values in the x and y directions in the effective data are analyzed, and the two maximum values are considered as the imaging length and imaging width of the optical camera.
Step S302, calculating a focal length: the focal length of the optical camera is obtained by the following calculation formula:
Let the imaging length be W and the imaging width be H; then the imaging length ratio alpha and the imaging width ratio beta are respectively:
alpha=W/(W+H)
beta=H/(W+H);
the value fx of the focal length of the optical camera in the imaging length direction and the value fy of the focal length of the optical camera in the imaging width direction are as follows:
fx=W*0.5/alpha
fy=H*0.5/beta;
wherein fx and fy are focal lengths of the optical camera.
After the imaging length and the imaging width of the optical camera are obtained, the focal length of the optical camera can be obtained through the calculation formula.
According to the embodiment, the accurate internal parameters of the optical cameras can be finally determined through the effective data of each frame of each optical camera and the calculation mode of the two steps.
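The two sub-steps above can be sketched as follows; the dictionary packaging of the result is illustrative:

```python
def init_intrinsics(valid_points):
    """Initial target internal parameters from one camera's valid 2D data
    (steps S301-S302): imaging length/width from coordinate maxima, then
    the focal length values fx and fy."""
    W = max(x for x, _ in valid_points)   # maximum abscissa -> imaging length
    H = max(y for _, y in valid_points)   # maximum ordinate -> imaging width
    alpha = W / (W + H)                   # imaging length ratio
    beta = H / (W + H)                    # imaging width ratio
    fx = W * 0.5 / alpha                  # reduces to 0.5 * (W + H)
    fy = H * 0.5 / beta                   # reduces to 0.5 * (W + H)
    return {"W": W, "H": H, "fx": fx, "fy": fy}
```

Note that both focal length expressions reduce to (W + H) / 2, so this initialization starts fx and fy from a common value that the later iterative optimization refines.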
Step S4, determining a main camera: the optical camera whose serial number appears in the most effective data is determined as the main camera; its rotation information is defined as an identity matrix and its translation information as a zero matrix, and the identity matrix and the zero matrix are the target external parameters of the main camera.
In addition to the internal parameters, the external parameters of each optical camera must also be determined. To do so, a main camera and its external parameters are determined first, and the external parameters of the other associated optical cameras are calculated from it. When determining the main camera, the camera serial number appearing most frequently in the coordinate data of all effective data of all frames is analyzed; the optical camera with that serial number is recorded as the main camera, its rotation information is set to the identity matrix and its translation information to a zero matrix. This camera has the highest degree of correlation with the other optical cameras, and the external parameters of the other cameras are rotations and translations relative to it.
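A sketch of the selection, assuming each frame's effective data is represented as a mapping from camera serial number to its recorded coordinate points (an illustrative representation):

```python
from collections import Counter

def pick_main_camera(valid_frames):
    """Choose as main camera the serial number occurring in the most
    effective data records across all frames (step S4); its extrinsics
    are then the identity rotation and the zero translation."""
    counts = Counter(cam for frame in valid_frames for cam in frame)
    return counts.most_common(1)[0][0]
```

Usage: `pick_main_camera([{0: [], 2: []}, {2: [], 3: []}, {2: [], 0: []}])` returns camera 2, which appears in all three frames.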
Step S5, calibration calculation: according to the plurality of effective data of each frame, match the other optical cameras with the main camera, record each optical camera that has matching data as a target camera, and obtain the rotation information and translation information of the target camera from the rotation information and translation information of the main camera; this rotation and translation information constitutes the target external parameters of the target camera.
Before calculating the external parameters of the other optical cameras, each of them must be data-matched with the main camera: enough matching data is found among the effective data, an essential matrix is obtained by the eight-point method from the matching data and the external parameters of the main camera, singular value decomposition (Singular Value Decomposition, SVD) is then performed, and finally the external parameters of the target camera are obtained.
In one embodiment, step S5 further comprises:
Step S501, search camera serial numbers frame by frame: search frame by frame whether the effective data contains the camera serial number of the main camera; if not, continue with the next frame.
Step S502, matching data: if the serial number of the main camera is contained, continue searching one by one whether the coordinate data of the other optical cameras in the effective data provides enough matching data; if more than a preset number of frames of effective data simultaneously contain the main camera and the current optical camera, the two are considered to have enough matching data.
The preset frame number is 50 frames; that is, if more than 50 frames of effective data contain both the coordinate data of the main camera and that of the current optical camera, the effective data of those frames is taken as the matching data of the main camera and the current optical camera.
Step S503, determining a target camera: if no matching data is contained, continue with the next optical camera; if matching data is contained, record the optical camera as a target camera; finally several target cameras and their corresponding coordinate data are obtained in each frame.
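Steps S501 to S503 can be sketched as a co-visibility count; the frame representation (camera serial number mapped to coordinate data) is an assumption for illustration:

```python
PRESET_FRAMES = 50  # frame-count threshold from the description

def find_target_cameras(valid_frames, main_cam):
    """Cameras sharing more than PRESET_FRAMES frames of effective data
    with the main camera become target cameras (steps S501-S503)."""
    shared = {}
    for frame in valid_frames:
        if main_cam not in frame:
            continue                  # step S501: skip frames without the main camera
        for cam in frame:
            if cam != main_cam:
                shared[cam] = shared.get(cam, 0) + 1
    return [cam for cam, n in shared.items() if n > PRESET_FRAMES]
```

Cameras that fall short of the threshold are left for the repeated pass of step S6, where an already-calibrated target camera takes over the main camera role.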
Step S504, solving the essential matrix: in any frame of the matching data, acquire the coordinate data of the main camera and of the target camera respectively, acquire the positional relation data of the mark points on the calibration rod, match the coordinate data of the main camera with that of the target camera according to the positional relation data to obtain several groups of two-dimensional space feature pairs, construct a linear equation set from these feature pairs and the parameters of the two optical cameras, and solve the essential matrix.
In this step, the essential matrix is obtained by the eight-point method. Before solving it, the coordinate data of the mark points must be matched: since the positional relation data of the mark points is fixed, and the coordinate data of the main camera and of the target camera in the matching data necessarily contain coordinate points with the same positional relation as the mark points, several groups of two-dimensional space feature pairs can be obtained from each frame of matching data according to the positional relation data. If the calibration rod carries 5 mark points, each frame of matching data yields five groups of two-dimensional space feature pairs.
A linear equation set is constructed from the groups of two-dimensional space feature pairs and the optical camera parameters, from which the essential matrix is solved. To solve the essential matrix, a fundamental matrix F is first calculated from the epipolar constraint
p2^T * F * p1 = 0
where p1 and p2 are the homogeneous pixel coordinates of a matched feature pair in the two cameras and p2^T is the transpose of p2. The fundamental matrix F is obtained from the groups of two-dimensional space feature pairs; then, according to F = M^(-T) * E * M^(-1), since the matrix M corresponding to the camera internal parameters is known, the essential matrix E can be obtained.
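As a sketch of this step, the linear system of the eight-point method and the conversion from F to E can be written as follows; the helper names are illustrative, the intrinsic matrix M is assumed shared by both cameras, and normalization and rank-2 enforcement, which a production implementation would add, are omitted:

```python
import numpy as np

def fundamental_eight_point(x1, x2):
    """Linear eight-point estimate of F from N >= 8 pixel correspondences.
    Each row of A encodes one epipolar constraint p2^T F p1 = 0; the
    solution is the right singular vector of the smallest singular value."""
    A = np.array([[u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)

def essential_from_fundamental(F, M):
    """From F = M^(-T) E M^(-1) it follows that E = M^T F M."""
    return M.T @ F @ M
```

With five feature pairs per frame, at least two frames of matching data are needed to reach the eight correspondences the method requires.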
Step S505, decomposing the essential matrix: decompose the essential matrix by a singular value decomposition algorithm to obtain the rotation information and translation information of the target camera.
The essential matrix E is a 3*3 matrix. According to the formula E=U*W*VT, E can be decomposed into three 3*3 matrices U, W and VT, where U is called the left singular matrix, V the right singular matrix, VT is the transpose of V, and W is called the singular value matrix; W has non-zero values (the singular values) only on its diagonal, and all its other elements are 0. Two auxiliary matrices M and N (the standard auxiliary matrices of the essential matrix decomposition) are defined, wherein:
M = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
N = [[0, 1, 0], [-1, 0, 0], [0, 0, 0]];
there are two possible cases for the rotation matrix of the target camera relative to the main camera: Ra=U*MT*VT or Rb=U*M*VT, and two possible cases for the translation matrix: Ta=U*N*UT or Tb=-U*N*UT, where MT is the transpose of matrix M and UT is the transpose of matrix U. Combining them pairwise gives four possibilities, but only one combination makes the depth of the three-dimensional points triangulated from the two-dimensional matching feature pairs positive, and that combination gives the rotation matrix and translation matrix of the target camera.
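A minimal sketch of the decomposition, using the auxiliary matrices above; recovering the translation as the third column of U is equivalent to reading off the skew matrix U*N*UT, and since the translation scale is unrecoverable it is returned as a unit vector:

```python
import numpy as np

def decompose_essential(E):
    """Return the four candidate (R, t) pairs from the SVD of E.
    In practice only the pair that puts the triangulated points at
    positive depth in front of both cameras is kept."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:   # force proper rotations (det = +1)
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    M = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    Ra = U @ M.T @ Vt          # Ra = U * MT * VT
    Rb = U @ M @ Vt            # Rb = U * M * VT
    t = U[:, 2]                # translation direction, up to scale and sign
    return [(Ra, t), (Ra, -t), (Rb, t), (Rb, -t)]
```

The depth test that selects among the four candidates (triangulate one feature pair and check Z > 0 in both cameras) is omitted here.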
By matching the optical cameras with the main camera frame by frame in this way, enough matching data can be obtained, and after solving and then decomposing the essential matrix from that matching data, accurate external parameters of the target camera are finally obtained.
In one embodiment, after step S5, the method further includes:
Iterative optimization is performed on the target internal and external parameters of the main camera, the target internal and external parameters of the target camera, and all the matching data of the two cameras; the cost function of the iterative optimization is the reprojection error, and the optimization yields the optimized target internal and external parameters of the main camera and of the target camera. The iterative optimization process is as follows:
converting world coordinates p to camera coordinates:
P’=R*p+T={X,Y,Z}
wherein R and T are optical camera external parameters;
P' is projected onto the normalized plane to obtain the normalized coordinates:
Pc={u,v,1}={X/Z,Y/Z,1}
considering lens distortion, apply the distortion model to the normalized coordinates:
u’=u*(1+k1*r*r+k2*r*r*r*r)
v’=v*(1+k1*r*r+k2*r*r*r*r)
wherein k1 and k2 are distortion coefficients, and r is the radial distance with r*r = u*u + v*v;
calculate the pixel coordinates M(Us, Vs):
Us=fx*u’+cx
Vs=fy*v’+cy
wherein fx, fy, cx and cy are internal parameters of the optical camera;
let N(U0, V0) be the pixel coordinates detected by the optical camera; then the reprojection error e of the world coordinate p is:
e = ||N-M||^2
Substituting all the matching data of the main camera and the target camera, the overall cost function is:
cost = sum_{i=1}^{n} ||Ni - Mi||^2
where Ni and Mi are the detected and reprojected pixel coordinates of the i-th matched point, and n is the total number of matched points.
during iteration, when the error falls within a preset threshold range, the calculation stops, and the iteratively optimized internal and external parameters of the optical cameras are output.
Solving this least-squares problem is equivalent to adjusting the internal parameters, the external parameters and the world coordinate points of the optical cameras simultaneously, which yields very high calibration accuracy. The overall error keeps decreasing as the number of iterations grows; when it falls within the preset threshold range, the calculation stops and the optimized calibration information, namely the internal and external parameters of the cameras, is output, completing the iterative optimization process.
To obtain accurate internal and external parameter data, this embodiment substitutes all matching data of the main camera and the target camera, together with the target internal and external parameters of both cameras, into an optimization whose cost function is the reprojection error; iterative optimization finally yields accurate camera internal and external parameters.
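The reprojection pipeline that defines the cost function can be sketched directly from the formulas above; the function names and the keyword-argument packaging are illustrative:

```python
import numpy as np

def reproject(p, R, T, k1, k2, fx, fy, cx, cy):
    """Map a world point p to pixel coordinates M(Us, Vs), following the
    steps of the cost-function derivation."""
    X, Y, Z = R @ p + T                 # world -> camera coordinates
    u, v = X / Z, Y / Z                 # projection onto the normalized plane
    r2 = u * u + v * v                  # r*r, r being the radial distance
    d = 1.0 + k1 * r2 + k2 * r2 * r2    # two-term radial distortion factor
    return np.array([fx * u * d + cx, fy * v * d + cy])

def reprojection_error(p, N, **cam):
    """Squared reprojection error e = ||N - M||^2 for one observation."""
    return float(np.linalg.norm(N - reproject(p, **cam)) ** 2)
```

Summing `reprojection_error` over all matched observations gives the overall cost that the iterative optimizer minimizes.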
Step S6, obtaining all target external parameters: and (3) marking the target camera with the obtained target external parameters as a main camera, and repeating the previous operation with other optical cameras which are not matched with the matching data until the target external parameters of all the optical cameras are obtained.
In step S5, when the other optical cameras are matched with the main camera, some optical camera may not have enough matching data with the main camera. In that case a new main camera must be defined, and the matching and external parameter calculation must be repeated for the cameras that failed to match. This step defines an optical camera whose external parameters have already been calculated as the new main camera; this camera and the optical cameras that could not be matched repeat the operation of step S5 to match and calculate external parameters, until all the optical cameras have obtained their external parameters.
Step S7, optimizing: input the target internal parameters, the target external parameters and all the acquired coordinate data of all the optical cameras into a preset bundle adjustment model; the output of the bundle adjustment model is the optimized target internal parameters of all the optical cameras.
After steps S1 to S6, relatively accurate internal and external parameters of all the optical cameras have been obtained; but because these parameters were calculated by matching cameras in pairs, the overall relation among all the cameras has not been considered, so the parameters need a global optimization. The method adopts the bundle adjustment (BA) model of the Ceres nonlinear optimization library. The goal of BA is to minimize the reprojection error; its input is the coordinate data acquired by all optical cameras, the matches among them, and the internal and external parameters of all cameras, and its output is high-precision camera internal parameter information.
Step S8, calibrating the center point: define the height of the mark points of the calibration rod as zero and obtain the three-dimensional space coordinates of the mark points from their position coordinate information; calculate the three-dimensional space coordinates under the camera parameters according to the target external parameters of the main camera, the main camera being the one whose serial number has the most effective data; substitute the three-dimensional space coordinates of the mark points and the three-dimensional space coordinates under the camera parameters into an equation, and solve the Euclidean transformation rotation matrix and translation matrix by iterative closest point. The equation is as follows:
P=RP′+T
wherein P is the three-dimensional space coordinates of the mark points, P' is the three-dimensional space coordinates under the camera parameters, R is the Euclidean transformation rotation matrix, and T is the translation matrix;
This yields the pose information of the calibration rod, namely the Euclidean transformation rotation matrix and translation matrix; after this pose is applied to the target external parameters of the main camera, the target external parameters of all cameras relative to the calibration center point are obtained. Specifically, let the rotation matrix of the calibration rod be R and its translation matrix be T, and let the current external parameters of the main camera and the other optical cameras be R0 and T0; after the pose information of the calibration rod is applied to all the cameras, the rotation matrix in each optical camera's external parameters becomes R*R0 and the translation matrix becomes R*T0+T.
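Applying the rod pose to one camera's extrinsics is a single composition; a minimal sketch:

```python
import numpy as np

def apply_rod_pose(R, T, R0, T0):
    """Re-express one camera's extrinsics (R0, T0) relative to the field
    center using the calibration rod pose (R, T): R*R0 and R*T0 + T."""
    return R @ R0, R @ T0 + T
```

Applied to the main camera itself (R0 = identity, T0 = zero), the result is simply the rod pose, as expected.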
After steps S1 to S7, when the high-precision internal and external parameters of all cameras have been obtained, the external parameters of each optical camera are still rotations and translations relative to the main camera; that is, the center of the space lies at the optical center of the main camera, which is inconvenient for subsequent applications. In practice, the external parameters should be relative to the center point of the field, so the two-dimensional calibration rod is placed at the field's center point. In step S8 the calibration rod is treated as a rigid body whose mark point coordinate positions are known; with the height defined as 0, the three-dimensional space coordinates of the mark points are obtained. If the calibration rod carries five mark points, these coordinates are P = {P1, ..., P5}. The three-dimensional space coordinates under the current camera parameters can be computed from the data acquired by the optical cameras and the external parameter data optimized in step S7, denoted P' = {P'1, ..., P'5}. R and T can then be solved by iterative closest point (Iterative Closest Point, ICP), with the ICP solution carried out by SVD decomposition; this gives the pose information of the current calibration rod, and applying it to the current camera external parameter data yields the external parameter data of all cameras relative to the field center point.
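Since the marker identities fix the correspondences, the ICP solution reduces to the single SVD-based alignment step that ICP would otherwise iterate; a sketch, with points stored as rows:

```python
import numpy as np

def rigid_align(P, P_prime):
    """Solve P = R @ P' + T for known correspondences via SVD (the
    alignment step of ICP; marker identities are assumed known here)."""
    P, Pp = np.asarray(P, float), np.asarray(P_prime, float)
    cP, cPp = P.mean(axis=0), Pp.mean(axis=0)         # centroids
    H = (Pp - cPp).T @ (P - cP)                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = cP - R @ cPp
    return R, T
```

With noisy real data this same solve is simply repeated inside the ICP loop after each correspondence update.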
In the multi-camera calibration method based on optical dynamic capturing in a large space environment described above, a plurality of optical cameras capture the mark points of the moving calibration rod to obtain a large amount of coordinate data, from which the internal and external parameters of each camera are calculated. Both the matching process and the parameter calculation keep the algorithmic complexity low, so calibration is faster and manpower and material resources are saved. The internal and external parameters of all cameras then undergo several stages of optimization to reach high precision, providing the sufficient and necessary conditions for the subsequent conversion of two-dimensional space coordinates into three-dimensional space coordinates, and laying the foundation for high-precision positioning and tracking in the whole optical dynamic capturing system.
In one embodiment, a multi-camera calibration device based on optical dynamic capturing in a large space environment is provided, as shown in fig. 4, the device includes:
the initial data acquisition module is used for acquiring camera serial numbers of a plurality of optical cameras, acquiring multi-frame data captured by each optical camera on a calibration rod in swinging, classifying the multi-frame data containing coordinate data according to frames, and acquiring a plurality of corresponding initial data from each frame, wherein each initial data comprises the camera serial numbers and the corresponding coordinate data;
The effective data recording module is used for eliminating initial data with coordinate points less than the preset number in the coordinate data of each frame, detecting whether the coordinate data contain coordinate points belonging to a plurality of marking points of the calibration rod in the rest initial data, if so, recording the plurality of coordinate points and corresponding camera serial numbers to form effective data, otherwise, eliminating the initial data, and obtaining a plurality of corresponding effective data in each frame;
the target internal parameter obtaining module is used for initializing each optical camera according to the plurality of effective data to obtain the target internal parameters of each optical camera;
the main camera determination module is used for determining the camera serial number with the most effective data as the main camera, defining the rotation information of the main camera as an identity matrix and the translation information of the main camera as a zero matrix, the identity matrix and the zero matrix being the target external parameters of the main camera;
the calibration calculation module is used for matching other optical cameras with the main camera according to a plurality of effective data of each frame, marking the optical camera containing the matched data as a target camera, obtaining rotation information and translation information of the target camera through rotation information and translation information of the main camera, wherein the rotation information and the translation information are target external parameters of the target camera;
the all-target-external-parameter obtaining module is used for recording the target camera whose target external parameters have been obtained as a main camera, and repeating the previous operation with the other optical cameras not yet matched until the target external parameters of all the optical cameras are obtained.
This device embodiment is based on the same concept as the multi-camera calibration method in a large space environment based on optical dynamic capturing described above, and is therefore not described again.
In one embodiment, a multi-camera calibration device in a large space environment based on optical dynamic capturing is provided, the device comprising: a memory, a processor, and a multi-camera calibration program based on optical dynamic capturing in a large space environment that is stored in the memory and can run on the processor; when the program is executed by the processor, the steps of the multi-camera calibration method in a large space environment based on optical dynamic capturing described above are implemented.
In one embodiment, a computer readable storage medium stores a multi-camera calibration program in a large space environment based on optical dynamic capturing, where the steps in the multi-camera calibration method in the large space environment based on optical dynamic capturing in the above embodiments are implemented when the multi-camera calibration program in the large space environment based on optical dynamic capturing is executed by a processor. Wherein the storage medium may be a non-volatile storage medium.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program to instruct related hardware, the program may be stored in a computer readable storage medium, and the storage medium may include: read Only Memory (ROM), random access Memory (RAM, random Access Memory), magnetic or optical disk, and the like.
The technical features of the above embodiments may be combined arbitrarily; for brevity, not all possible combinations are described, but as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this description.
The above embodiments represent only some exemplary embodiments of the invention, described in considerable detail, but they are not therefore to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the invention, all of which fall within the scope of protection of the invention. Accordingly, the scope of protection of the present invention shall be determined by the appended claims.

Claims (10)

1. A multi-camera calibration method in a large space environment based on optical dynamic capturing, characterized by comprising the following steps:
acquiring camera serial numbers of a plurality of optical cameras, acquiring multi-frame data captured by each optical camera on a calibration rod in swinging, classifying the multi-frame data containing coordinate data according to frames, and obtaining a plurality of corresponding initial data by each frame, wherein each initial data comprises the camera serial numbers and the corresponding coordinate data;
removing initial data with coordinate points less than the preset number in the coordinate data of each frame, detecting whether the coordinate data contain coordinate points belonging to a plurality of marking points of the calibration rod in the rest initial data, if so, recording the coordinate points and corresponding camera serial numbers to form effective data, otherwise, removing the initial data, and obtaining a plurality of corresponding effective data in each frame;
initializing each optical camera according to the plurality of effective data to obtain target internal parameters of each optical camera;
according to all effective data of all frames, analyzing the camera serial number with the largest occurrence frequency in the coordinate data, recording the optical camera corresponding to that camera serial number as a main camera, defining the rotation information of the main camera as an identity matrix and the translation information of the main camera as a zero matrix, the identity matrix and the zero matrix being the target external parameters of the main camera;
According to the effective data of each frame, matching other optical cameras with the main camera, marking the optical camera containing the matched data as a target camera, and obtaining rotation information and translation information of the target camera through the rotation information and the translation information of the main camera, wherein the rotation information and the translation information are target external parameters of the target camera;
the target camera with the obtained target external parameters is recorded as a main camera, and the previous operation is repeated with other optical cameras which are not matched with the matching data until the target external parameters of all the optical cameras are obtained;
and inputting the target internal parameters, the target external parameters and all acquired coordinate data of all the optical cameras into a preset bundle adjustment model so as to minimize the reprojection error, wherein the output result of the bundle adjustment model is the optimized target internal parameters of all the optical cameras.
2. The method for calibrating a multi-camera in a large space environment based on optical dynamic capturing according to claim 1, wherein the step of eliminating initial data having less than a preset number of coordinate points in the coordinate data of each frame, and detecting whether the coordinate data contains coordinate points belonging to a plurality of mark points of the calibration rod in the remaining initial data comprises:
Acquiring the number of coordinate points in each coordinate data in a frame, judging whether the number of the coordinate points is less than the number of a plurality of marking points of the calibration rod, and if so, eliminating the coordinate data from the corresponding frame;
if the number of the coordinate points is not less than the preset number, continuously judging whether the number of the coordinate points is greater than a preset maximum number, and if so, eliminating the coordinate data from the corresponding frame, each frame obtaining its initial data after the elimination;
and acquiring position relation data of a plurality of mark points on the calibration rod, and detecting whether the coordinate data contains a plurality of coordinate points of the position relation data or not in the initial data after the elimination.
3. The method for calibrating multiple cameras in a large space environment based on optical dynamic capturing as claimed in claim 1, wherein initializing each optical camera according to the effective data to obtain the target internal reference of each optical camera comprises:
the target internal parameters of the optical camera comprise imaging length, imaging width and focal length, and in all effective data corresponding to the optical camera, the maximum value of the abscissa and the maximum value of the ordinate of the coordinate data are searched, the maximum value of the abscissa is recorded as the imaging length of the optical camera, and the maximum value of the ordinate is recorded as the imaging width of the optical camera;
The focal length of the optical camera is obtained by the following calculation formula:
let the imaging length be W, the imaging width be H, then imaging length ratio alpha, imaging width beta ratio are respectively:
alpha=W/(W+H)
beta=H/(W+H);
the value fx of the focal length of the optical camera in the imaging length direction and the value fy of the focal length of the optical camera in the imaging width direction are as follows:
fx=W*0.5/alpha
fy=H*0.5/beta;
wherein fx and fy are focal lengths of the optical camera.
4. The method for calibrating multiple cameras in a large space environment based on optical dynamic capturing as set forth in claim 1, wherein the matching of the other optical cameras with the main camera according to the effective data of each frame, and recording an optical camera containing matching data as a target camera, comprises:
searching frame by frame whether the effective data contains the camera serial number of the main camera, and if not, continuing to search the next frame;
if the camera serial number of the main camera is contained, searching one by one whether the coordinate data of the other optical cameras in the effective data contains sufficient matching data; the main camera and a current optical camera are considered to contain sufficient matching data when the effective data of more than a preset number of frames simultaneously contains the coordinate data of both the main camera and the current optical camera;
if sufficient matching data is not contained, continuing to search the next optical camera; if it is contained, recording the current optical camera as a target camera; finally obtaining a plurality of target cameras and corresponding coordinate data in each frame.
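The frame-by-frame matching above can be sketched as follows, assuming each frame is a mapping from camera serial number to that camera's coordinate data, and an assumed frame-count threshold (both are illustrative assumptions, not taken from the patent):

```python
from collections import Counter

def find_target_cameras(frames, main_serial, min_frames=30):
    """frames: list of dicts {camera_serial: coords} (hypothetical layout).
    A camera is a target camera when more than `min_frames` frames contain
    coordinate data from both it and the main camera."""
    shared = Counter()
    for frame in frames:
        if main_serial not in frame:    # frame lacks the main camera: skip it
            continue
        for serial in frame:
            if serial != main_serial:   # count co-visible frames per camera
                shared[serial] += 1
    return [s for s, n in shared.items() if n > min_frames]
```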
5. The method for calibrating a multi-camera in a large space environment based on optical dynamic capturing according to claim 1, wherein the obtaining of the rotation information and the translation information of the target camera through the rotation information and the translation information of the main camera, the rotation information and the translation information being the target external parameters of the target camera, comprises:
in any one frame of the matching data, respectively acquiring the coordinate data of the main camera and of the target camera; acquiring the position relation data of the plurality of marking points on the calibration rod; matching the coordinate data of the main camera with the coordinate data of the target camera according to the position relation data to obtain a plurality of groups of two-dimensional space feature pairs; constructing a linear equation set from the plurality of groups of two-dimensional space feature pairs and the parameters of the two optical cameras; and solving an essential matrix;
and decomposing the essential matrix through a singular value decomposition algorithm to obtain rotation information and translation information of the target camera.
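The decomposition referred to above is the standard SVD recovery of relative pose from an essential matrix (a textbook result of multi-view geometry, stated here as a sketch; the claim itself does not spell out the formulas):

```latex
E = U \,\mathrm{diag}(1,1,0)\, V^{\top}, \qquad
W = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}
```

```latex
R \in \{\, U W V^{\top},\; U W^{\top} V^{\top} \,\}, \qquad
t = \pm\, u_3 \ \text{(the last column of } U\text{)}
```

Of the four candidate (R, t) pairs, the physically valid one places the triangulated marker points in front of both cameras.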
6. The method for calibrating a multi-camera in a large space environment based on optical dynamic capturing according to claim 1, wherein after the rotation information and the translation information of the target camera are obtained through the rotation information and the translation information of the main camera and taken as the target external parameters of the target camera, the method further comprises:
performing iterative optimization jointly on the target internal parameters and the target external parameters of the main camera and of the target camera, together with all matching data of the main camera and the target camera, a cost function of the reprojection error being used in the iterative optimization, so as to obtain the optimized target internal parameters and target external parameters of the main camera and of the target camera, wherein the iterative optimization process comprises:
converting the world coordinate p to the camera coordinate P':
P’=R*p+T={X,Y,Z}
wherein R and T are optical camera external parameters;
p' is projected onto the normalized plane to obtain normalized coordinates:
Pc={u,v,1}={X/Z,Y/Z,1}
applying the radial distortion model, wherein r is the radial distance of the normalized coordinates:
u’=u*(1+k1*r*r+k2*r*r*r*r)
v’=v*(1+k1*r*r+k2*r*r*r*r)
calculating the pixel coordinates M(Us, Vs):
Us=fx*u’+cx
Vs=fy*v’+cy
wherein fx, fy, cx and cy are internal parameters of the optical camera;
letting N(U0, V0) be the pixel coordinates detected by the optical camera, the reprojection error e of the world coordinate p is:
e = ||N - M||^2
substituting all matching data of the main camera and the target camera, the overall cost function is:
e_total = sum_{i=1}^{n} ||N_i - M_i||^2, where n is the total number of matching data points;
in the iterative process, when the error falls within a preset threshold range, stopping the calculation and outputting the iteratively optimized internal parameters and external parameters of all the optical cameras.
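The projection steps of claim 6 can be sketched in pure Python as follows (a sketch, not the patented implementation; r is taken as the radial distance of the normalized coordinates, an assumption since the claim does not define r):

```python
def reproject(p, R, T, k1, k2, fx, fy, cx, cy):
    """Project the world point p through the extrinsics (R, T), a two-term
    radial distortion model and the intrinsics (fx, fy, cx, cy)."""
    # world -> camera: P' = R*p + T = {X, Y, Z}
    X, Y, Z = (sum(R[i][j] * p[j] for j in range(3)) + T[i] for i in range(3))
    u, v = X / Z, Y / Z                  # normalized plane Pc = {u, v, 1}
    r2 = u * u + v * v                   # squared radial distance r*r
    d = 1 + k1 * r2 + k2 * r2 * r2      # radial distortion factor
    u2, v2 = u * d, v * d                # distorted normalized coordinates
    return fx * u2 + cx, fy * v2 + cy    # pixel coordinates M(Us, Vs)

def reprojection_error(p, N, R, T, k1, k2, fx, fy, cx, cy):
    """e = ||N - M||^2 for one detected observation N(U0, V0)."""
    Us, Vs = reproject(p, R, T, k1, k2, fx, fy, cx, cy)
    return (N[0] - Us) ** 2 + (N[1] - Vs) ** 2
```

The overall cost function is then the sum of `reprojection_error` over all matching data, which a nonlinear least-squares solver would minimize over the camera parameters.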
7. The method for calibrating a multi-camera in a large space environment based on optical dynamic capturing as claimed in claim 1, further comprising:
defining the heights of the plurality of marking points of the calibration rod as zero; obtaining the three-dimensional space coordinates of the plurality of marking points according to the position coordinate information of the plurality of marking points; calculating the three-dimensional space coordinates under the camera parameters according to the target external parameters of the main camera, the main camera being the camera corresponding to the camera serial number with the most effective data; solving according to the three-dimensional space coordinates of the plurality of marking points and the three-dimensional space coordinates under the camera parameters to obtain the pose information of the calibration rod; and obtaining the target external parameters of all cameras relative to a calibration center point according to the pose information.
8. A multi-camera calibration device in a large space environment based on optical dynamic capturing, the device comprising:
an initial data acquisition module, used for acquiring the camera serial numbers of a plurality of optical cameras, acquiring multi-frame data captured by each optical camera of the swinging calibration rod, classifying the multi-frame data containing coordinate data by frame, and obtaining a plurality of corresponding initial data from each frame, each initial data comprising a camera serial number and corresponding coordinate data;
an effective data recording module, used for eliminating, in each frame, the initial data whose coordinate data contains fewer coordinate points than the preset number, detecting whether the coordinate data of the remaining initial data contains coordinate points belonging to the plurality of marking points of the calibration rod, and if so, recording the coordinate points and the corresponding camera serial number as effective data, otherwise eliminating the initial data, so as to obtain a plurality of corresponding effective data in each frame;
a target internal parameter obtaining module, used for initializing each optical camera according to the plurality of effective data to obtain the target internal parameters of each optical camera;
a main camera determining module, used for determining the camera whose serial number has the most effective data as the main camera, and defining the rotation information of the main camera as a unit matrix and the translation information of the main camera as a zero matrix, the unit matrix and the zero matrix being the target external parameters of the main camera;
a calibration calculation module, used for matching the other optical cameras with the main camera according to the plurality of effective data of each frame, recording an optical camera containing matching data as a target camera, and obtaining the rotation information and the translation information of the target camera through the rotation information and the translation information of the main camera, the rotation information and the translation information being the target external parameters of the target camera;
and a target external parameter obtaining module, used for recording the target camera whose target external parameters have been obtained as a main camera, and repeating the preceding operation with the remaining unmatched optical cameras until the target external parameters of all the optical cameras are obtained.
9. A multi-camera calibration device in a large space environment based on optical dynamic capturing, the device comprising:
a memory, a processor, and a multi-camera calibration program based on optical dynamic capturing that is stored on the memory and operable on the processor, wherein the program, when executed by the processor, implements the steps of the multi-camera calibration method in a large space environment based on optical dynamic capturing according to any one of claims 1 to 7.
10. A computer readable storage medium, wherein the computer readable storage medium stores a multi-camera calibration program in a large space environment based on optical dynamic capturing, and the steps of the multi-camera calibration method in a large space environment based on optical dynamic capturing according to any one of claims 1 to 7 are implemented when the program is executed by a processor.
CN202110828055.1A 2019-12-27 2019-12-27 Multi-camera calibration method and related equipment in large space environment based on optical dynamic capturing Active CN113592954B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110828055.1A CN113592954B (en) 2019-12-27 2019-12-27 Multi-camera calibration method and related equipment in large space environment based on optical dynamic capturing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110828055.1A CN113592954B (en) 2019-12-27 2019-12-27 Multi-camera calibration method and related equipment in large space environment based on optical dynamic capturing
CN201911379253.3A CN111145270B (en) 2019-12-27 2019-12-27 Multi-camera calibration method based on optical dynamic capture in large space environment and related equipment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201911379253.3A Division CN111145270B (en) 2019-12-27 2019-12-27 Multi-camera calibration method based on optical dynamic capture in large space environment and related equipment

Publications (2)

Publication Number Publication Date
CN113592954A CN113592954A (en) 2021-11-02
CN113592954B true CN113592954B (en) 2023-06-09

Family

ID=70521082

Family Applications (3)

Application Number Title Priority Date Filing Date
CN202110828055.1A Active CN113592954B (en) 2019-12-27 2019-12-27 Multi-camera calibration method and related equipment in large space environment based on optical dynamic capturing
CN201911379253.3A Active CN111145270B (en) 2019-12-27 2019-12-27 Multi-camera calibration method based on optical dynamic capture in large space environment and related equipment
CN202110792868.XA Active CN113592950B (en) 2019-12-27 2019-12-27 Multi-camera calibration method and related equipment in large space environment based on optical dynamic capturing

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN201911379253.3A Active CN111145270B (en) 2019-12-27 2019-12-27 Multi-camera calibration method based on optical dynamic capture in large space environment and related equipment
CN202110792868.XA Active CN113592950B (en) 2019-12-27 2019-12-27 Multi-camera calibration method and related equipment in large space environment based on optical dynamic capturing

Country Status (2)

Country Link
CN (3) CN113592954B (en)
WO (1) WO2021129791A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113592954B (en) * 2019-12-27 2023-06-09 深圳市瑞立视多媒体科技有限公司 Multi-camera calibration method and related equipment in large space environment based on optical dynamic capturing
CN111784778B (en) * 2020-06-04 2022-04-12 华中科技大学 Binocular camera external parameter calibration method and system based on linear solving and nonlinear optimization
CN112085800B (en) * 2020-08-14 2024-07-12 深圳市瑞立视多媒体科技有限公司 Calibration rod data screening method and device and computer equipment
CN112215896B (en) * 2020-09-01 2024-01-30 深圳市瑞立视多媒体科技有限公司 Multi-camera calibrated camera frame data processing method and device and computer equipment
CN113487726B (en) * 2021-07-12 2024-05-14 未来元宇数字科技(北京)有限公司 Motion capture system and method
CN113744349A (en) * 2021-08-31 2021-12-03 湖南航天远望科技有限公司 Infrared spectrum image measurement alignment method, device and medium
CN114092564B (en) * 2021-10-29 2024-04-09 上海科技大学 External parameter calibration method, system, terminal and medium for non-overlapping vision multi-camera system
CN114004901B (en) * 2022-01-04 2022-03-18 南昌虚拟现实研究院股份有限公司 Multi-camera calibration method and device, terminal equipment and readable storage medium
CN114596341B (en) * 2022-02-16 2024-07-23 合肥工业大学 Multi-camera high-precision three-dimensional pose tracking method for large-view-field moving target
CN114742905B (en) * 2022-06-13 2022-09-27 魔视智能科技(武汉)有限公司 Multi-camera parameter calibration method, device, equipment and storage medium
CN115375772B (en) * 2022-08-10 2024-01-19 北京英智数联科技有限公司 Camera calibration method, device, equipment and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
KR20100007506A (en) * 2008-07-14 2010-01-22 성균관대학교산학협력단 New calibration method of multi-view camera for a optical motion capture system
JP2016006415A (en) * 2014-05-29 2016-01-14 アニマ株式会社 Method and apparatus for estimating position of optical marker in optical motion capture
JP2016011951A (en) * 2014-06-04 2016-01-21 アニマ株式会社 Method and device of acquiring positional information of virtual marker, and motion measurement method
CN107767424A (en) * 2017-10-31 2018-03-06 深圳市瑞立视多媒体科技有限公司 Scaling method, multicamera system and the terminal device of multicamera system
CN107808402A (en) * 2017-10-31 2018-03-16 深圳市瑞立视多媒体科技有限公司 Scaling method, multicamera system and the terminal device of multicamera system

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
US9723272B2 (en) * 2012-10-05 2017-08-01 Magna Electronics Inc. Multi-camera image stitching calibration system
CN103035008B (en) * 2012-12-15 2015-08-12 北京工业大学 A kind of weighted demarcating method of multicamera system
US9916660B2 (en) * 2015-01-16 2018-03-13 Magna Electronics Inc. Vehicle vision system with calibration algorithm
US10692262B2 (en) * 2017-01-12 2020-06-23 Electronics And Telecommunications Research Institute Apparatus and method for processing information of multiple cameras
CN107358633A (en) * 2017-07-12 2017-11-17 北京轻威科技有限责任公司 Join scaling method inside and outside a kind of polyphaser based on 3 points of demarcation things
CN107767420B (en) * 2017-08-16 2021-07-23 华中科技大学无锡研究院 Calibration method of underwater stereoscopic vision system
CN108288294A (en) * 2018-01-17 2018-07-17 视缘(上海)智能科技有限公司 A kind of outer ginseng scaling method of a 3D phases group of planes
CN109029433B (en) * 2018-06-28 2020-12-11 东南大学 Method for calibrating external parameters and time sequence based on vision and inertial navigation fusion SLAM on mobile platform
CN109767474B (en) * 2018-12-31 2021-07-27 深圳积木易搭科技技术有限公司 Multi-view camera calibration method and device and storage medium
CN113592954B (en) * 2019-12-27 2023-06-09 深圳市瑞立视多媒体科技有限公司 Multi-camera calibration method and related equipment in large space environment based on optical dynamic capturing

Non-Patent Citations (3)

Title
A multi-camera calibration method using a 3-axis frame and wand; Ki-Young Shin et al.; International Journal of Precision Engineering and Manufacturing; Vol. 13, No. 2; pp. 283-289 *
Calibration of the Multi-camera Registration System for Visual Navigation Benchmarking; Adam Schmidt et al.; International Journal of Advanced Robotic Systems; Vol. 11, No. 6; pp. 1-12 *
Multi-camera Calibration Method for Optical Motion; Ki Young Shin et al.; Journal of the Korea Society of Computer and Information; Vol. 14, No. 6; pp. 41-49 *

Also Published As

Publication number Publication date
CN113592950B (en) 2023-06-16
WO2021129791A1 (en) 2021-07-01
CN113592950A (en) 2021-11-02
CN111145270B (en) 2021-08-10
CN111145270A (en) 2020-05-12
CN113592954A (en) 2021-11-02

Similar Documents

Publication Publication Date Title
CN113592954B (en) Multi-camera calibration method and related equipment in large space environment based on optical dynamic capturing
CN113744347B (en) Method, device, equipment and storage medium for calibrating sweeping field and simultaneously calibrating field in large space environment
CN113643378B (en) Active rigid body pose positioning method in multi-camera environment and related equipment
Zhang et al. 3D dynamic scene analysis: a stereo based approach
US8897539B2 (en) Using images to create measurements of structures through the videogrammetric process
CN111815707B (en) Point cloud determining method, point cloud screening method, point cloud determining device, point cloud screening device and computer equipment
CN112184824B (en) Camera external parameter calibration method and device
CN111127559B (en) Calibration rod detection method, device, equipment and storage medium in optical dynamic capture system
Wang et al. Recognition and location of the internal corners of planar checkerboard calibration pattern image
CN111179433A (en) Three-dimensional modeling method and device for target object, electronic device and storage medium
KR20200023211A (en) A method for rectifying a sequence of stereo images and a system thereof
Li et al. Cross-ratio–based line scan camera calibration using a planar pattern
CN113706635B (en) Long-focus camera calibration method based on point feature and line feature fusion
CN117848234A (en) Object scanning mechanism, method and related equipment
KR101673144B1 (en) Stereoscopic image registration method based on a partial linear method
CN114782556B (en) Camera and laser radar registration method and system and storage medium
Lv et al. Three-dimensional laser scanning under the pinhole camera with lens distortion
Hörster et al. Calibrating and optimizing poses of visual sensors in distributed platforms
de Lima et al. Toward a smart camera for fast high-level structure extraction
Yu et al. Multi-view 2D–3D alignment with hybrid bundle adjustment for visual metrology
De Boi et al. How to turn your camera into a perfect pinhole model
Alves et al. Automatic 3D shape recovery for rapid prototyping
Msallam et al. Construction of a 3D point cloud from a pair of 2D images of a calibrated stereo camera
Thisse et al. 3D Dense & Scaled Reconstruction Pipeline with Smartphone Acquisition
CN116630806A (en) Object detection method and device under target size, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant