CN116188591A - Multi-camera global calibration method and device and electronic equipment - Google Patents

Multi-camera global calibration method and device and electronic equipment

Info

Publication number
CN116188591A
Authority
CN
China
Prior art keywords
camera
calibrated
coordinate system
auxiliary
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211692103.XA
Other languages
Chinese (zh)
Inventor
Yu Liandong (于连栋)
Wang Shuaidong (王帅东)
Wan Maosen (万茂森)
Zhao Huining (赵会宁)
Current Assignee
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN202211692103.XA priority Critical patent/CN116188591A/en
Publication of CN116188591A publication Critical patent/CN116188591A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75: Determining position or orientation of objects or cameras using feature-based methods involving models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30244: Camera pose
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Abstract

The embodiments of the disclosure relate to the field of multi-camera global calibration, and provide a multi-camera global calibration method and device and electronic equipment, wherein the method comprises the following steps: obtaining an internal reference matrix and a distortion coefficient of each camera to be calibrated and of the auxiliary camera; acquiring a rotation matrix and a translation vector between each camera to be calibrated and the auxiliary camera based on the internal reference matrix and the distortion coefficient; acquiring three-dimensional point cloud data for the same target area under each camera-to-be-calibrated coordinate system; converting the three-dimensional point cloud data into the auxiliary camera coordinate system to obtain auxiliary point cloud data; registering the auxiliary point cloud data to obtain a transformation matrix from each set of auxiliary point cloud data to a global coordinate system; and determining a final transformation matrix from each camera to be calibrated to the global coordinate system based on the rotation matrix, the translation vector and the transformation matrix, and performing global calibration of each camera to be calibrated based on the final transformation matrix. The embodiments of the disclosure can effectively solve the problems of complex practical operation and low calibration precision in multi-camera global calibration with small or non-overlapping fields of view.

Description

Multi-camera global calibration method and device and electronic equipment
Technical Field
The disclosure relates to the technical field of multi-camera global calibration, in particular to a multi-camera global calibration method and device and electronic equipment.
Background
Multi-camera global calibration is of great significance to multi-vision measurement systems. In recent years, with the development of modern industrial manufacturing technology, vision measurement has become attractive for being non-contact and highly efficient, and high-precision industrial vision measurement equipment is expected to offer both high accuracy and coverage without blind areas, which generally requires the use of multiple cameras and requires unifying those cameras under the same coordinate system.
In the prior art, multi-camera calibration is generally completed using a calibration plate: the cameras are taken in pairs, each pair is treated as a binocular (stereo) rig to calculate the corresponding transformation relation, and finally one camera is selected as the reference camera, into whose coordinate system all other camera coordinate systems are converted. However, this method has the problem that when the common field of view between two cameras is small, a complete calibration plate cannot fit inside the common field of view, the binocular calibration condition cannot be satisfied, and calibration becomes difficult or even impossible. When there is no common field of view between two cameras, existing multi-camera calibration methods can arrange a number of control points over the whole measurement area and use precision measurement equipment such as a theodolite pair or a laser tracker to unify the control points under the same coordinate system; however, because a large number of control points must be arranged, this approach suffers from high labour intensity, low working efficiency and other drawbacks.
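The pairwise binocular approach described above amounts to composing rigid transforms along a chain until every camera is expressed in the reference camera's coordinate system. A minimal sketch of that bookkeeping with homogeneous 4x4 matrices (illustrative code, not from the patent):

```python
import numpy as np

def homogeneous(R, t):
    """Pack a rotation matrix and translation vector into a 4x4 transform."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = t
    return H

def chain_to_reference(pairwise):
    """Compose pairwise transforms so every camera is expressed in the
    coordinate system of camera 0 (the reference).
    pairwise[i] maps camera-(i+1) coordinates into camera-i coordinates."""
    to_ref = [np.eye(4)]
    for H in pairwise:
        to_ref.append(to_ref[-1] @ H)
    return to_ref

# Example: camera 1 is translated 1 m along x from camera 0,
# camera 2 a further 1 m from camera 1.
H01 = homogeneous(np.eye(3), [1.0, 0.0, 0.0])
H12 = homogeneous(np.eye(3), [1.0, 0.0, 0.0])
to_ref = chain_to_reference([H01, H12])
```

Each pairwise calibration carries its own error, and chaining multiplies those errors toward the far end of the chain; this accumulation is one reason the disclosure instead references every camera against a single auxiliary camera.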
Disclosure of Invention
The present disclosure aims to solve at least one of the problems existing in the prior art, and provides a multi-camera global calibration method and device and electronic equipment.
In one aspect of the present disclosure, a multi-camera global calibration method is provided, including:
respectively acquiring internal reference matrixes and distortion coefficients of each camera to be calibrated and each auxiliary camera;
based on the internal reference matrix and the distortion coefficient, respectively acquiring a rotation matrix and a translation vector between each camera to be calibrated and the auxiliary camera;
respectively acquiring three-dimensional point cloud data for the same target area under each camera-to-be-calibrated coordinate system;
based on the rotation matrix and the translation vector, respectively converting the three-dimensional point cloud data of each camera to be calibrated into the auxiliary camera coordinate system to obtain auxiliary point cloud data of each camera to be calibrated;
registering the auxiliary point cloud data based on a preset point cloud registration algorithm to obtain a transformation matrix from each auxiliary point cloud data to a global coordinate system;
and respectively determining a final transformation matrix from each camera to be calibrated to the global coordinate system based on the rotation matrix, the translation vector and the transformation matrix, and performing global calibration on each camera to be calibrated based on the final transformation matrix.
Optionally, the obtaining the internal reference matrix and the distortion coefficient of each camera to be calibrated and each auxiliary camera respectively includes:
placing the auxiliary camera at a target position, so that the field of view of the auxiliary camera at the target position can cover the fields of view of all cameras to be calibrated;
based on preset targets, shooting a group of calibration pictures with each camera to be calibrated and with the auxiliary camera, wherein the preset target occupies a different position of the corresponding camera's field of view in each calibration picture of a group, and the union of the target positions in each group of calibration pictures covers the field of view of the corresponding camera; each preset target comprises a plurality of feature points arranged according to a preset arrangement rule, and the feature points form a non-centrosymmetric pattern;
based on the calibration pictures, obtaining an internal reference matrix and a distortion coefficient of each camera to be calibrated and each auxiliary camera respectively by using a Zhang Zhengyou camera calibration method.
Optionally, obtaining the internal reference matrix and the distortion coefficient of each camera to be calibrated and the auxiliary camera based on the calibration pictures by using the Zhang Zhengyou camera calibration method includes:
determining a world coordinate system based on the preset target, and respectively determining world coordinates of each feature point in the calibration picture based on the world coordinate system;
based on the calibration picture and the world coordinates of each feature point, respectively determining the image coordinates of each feature point in the calibration picture under a corresponding camera coordinate system;
based on the world coordinates and the image coordinates, the internal reference matrix and the distortion coefficients of each camera to be calibrated and the auxiliary camera are respectively obtained by using a Zhang Zhengyou camera calibration method.
Optionally, the determining a world coordinate system based on the preset target, and determining world coordinates of each feature point in the calibration picture based on the world coordinate system respectively includes:
taking a target feature point in the preset target as an origin of the world coordinate system, and establishing the world coordinate system;
and respectively determining the world coordinates of the feature points in the calibration picture according to the preset arrangement rule based on the world coordinate system.
Optionally, the acquiring rotation matrices and translation vectors between the cameras to be calibrated and the auxiliary camera based on the internal reference matrix and the distortion coefficients respectively includes:
shooting the preset targets in different postures by using the cameras to be calibrated and the auxiliary camera respectively to obtain a plurality of image groups corresponding to each camera to be calibrated;
and respectively determining the rotation matrix and the translation vector between each camera to be calibrated and the auxiliary camera based on a plurality of image groups corresponding to each camera to be calibrated, the internal reference matrix and the distortion coefficient.
Optionally, the obtaining three-dimensional point cloud data for the same target area under each camera coordinate system to be calibrated includes:
forming a three-dimensional scanning system by each camera to be calibrated and a laser respectively;
sequentially carrying out three-dimensional scanning on a preset step gauge block by utilizing each three-dimensional scanning system to obtain three-dimensional image data corresponding to each camera to be calibrated;
and carrying out three-dimensional reconstruction of the preset step gauge block based on the three-dimensional image data to obtain the three-dimensional point cloud data of the preset step gauge block under each camera-to-be-calibrated coordinate system.
Optionally, converting the three-dimensional point cloud data in each camera-to-be-calibrated coordinate system into the auxiliary camera coordinate system based on the rotation matrix and the translation vector to obtain the auxiliary point cloud data of each camera to be calibrated includes:
determining the auxiliary point cloud data for each camera-to-be-calibrated coordinate system according to the following formula (1):
orientation_f = R_i * orientation_i + T_i    (1)
wherein orientation_i represents the three-dimensional point cloud data in the i-th camera-to-be-calibrated coordinate system, R_i represents the rotation matrix between the i-th camera to be calibrated and the auxiliary camera, T_i represents the translation vector between the i-th camera to be calibrated and the auxiliary camera, and orientation_f represents the auxiliary point cloud data corresponding to orientation_i.
Optionally, the determining, based on the rotation matrix, the translation vector and the transformation matrix, a final transformation matrix from each camera to be calibrated to the global coordinate system includes:
determining the final transformation matrix of each camera to be calibrated to the global coordinate system according to the following formula (2):
SRT_i = Tran_iq * [R_i, T_i; 0, 1]    (2)
wherein [R_i, T_i; 0, 1] denotes the 4 x 4 homogeneous transformation matrix assembled from the rotation matrix R_i and the translation vector T_i between the i-th camera to be calibrated and the auxiliary camera, Tran_iq represents the transformation matrix from the auxiliary point cloud data corresponding to the i-th camera to be calibrated to the global coordinate system, and SRT_i represents the final transformation matrix from the i-th camera to be calibrated to the global coordinate system.
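Assuming formula (2) composes the camera-to-auxiliary extrinsics with the auxiliary-cloud-to-global transform in homogeneous form (the published formula is only an image in the original record, so this reading is inferred from the surrounding definitions), the final transform can be sketched as:

```python
import numpy as np

def final_transform(Tran_iq, R_i, T_i):
    """Compose the camera-i -> auxiliary transform (R_i, T_i) with the
    auxiliary-cloud-i -> global transform Tran_iq, giving the final
    camera-i -> global transform SRT_i as a 4x4 homogeneous matrix."""
    H_i = np.eye(4)
    H_i[:3, :3] = R_i
    H_i[:3, 3] = T_i
    return Tran_iq @ H_i

# Toy check: camera->auxiliary is a pure translation along z,
# auxiliary->global a pure translation along x; the composition
# should carry both translations.
R_i = np.eye(3)
T_i = np.array([0.0, 0.0, 2.0])
Tran_iq = np.eye(4)
Tran_iq[:3, 3] = [1.0, 0.0, 0.0]
SRT_i = final_transform(Tran_iq, R_i, T_i)
```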
In another aspect of the present disclosure, there is provided a multi-camera global calibration apparatus including:
the first acquisition module is used for respectively acquiring an internal reference matrix and a distortion coefficient of each camera to be calibrated and the auxiliary camera;
the second acquisition module is used for respectively acquiring a rotation matrix and a translation vector between each camera to be calibrated and the auxiliary camera based on the internal reference matrix and the distortion coefficient;
the third acquisition module is used for respectively acquiring three-dimensional point cloud data aiming at the same target area under each camera coordinate system to be calibrated;
the conversion module is used for respectively converting the three-dimensional point cloud data under the coordinate system of each camera to be calibrated into the coordinate system of the auxiliary camera based on the rotation matrix and the translation vector to obtain the auxiliary point cloud data of each camera to be calibrated;
the registration module is used for registering the auxiliary point cloud data based on a preset point cloud registration algorithm to obtain a transformation matrix from each auxiliary point cloud data to a global coordinate system;
and the global calibration module is used for respectively determining a final transformation matrix from each camera to be calibrated to the global coordinate system based on the rotation matrix, the translation vector and the transformation matrix, and carrying out global calibration on each camera to be calibrated based on the final transformation matrix.
In another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the multi-camera global calibration method described above.
In another aspect of the disclosure, a computer readable storage medium is provided, storing a computer program which, when executed by a processor, implements the multi-camera global calibration method described above.
Compared with the prior art, the embodiments of the present disclosure use an auxiliary camera, so that the coordinate systems of the cameras to be calibrated can be unified on the basis of the auxiliary camera coordinate system. The operation process is simple and practicable, and the problems of complex practical operation and low calibration precision in multi-camera global calibration with small or non-overlapping fields of view can be effectively solved.
Drawings
One or more embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals indicate similar elements, and in which the figures are not drawn to scale unless expressly stated otherwise.
FIG. 1 is a flow chart of a method for global calibration of multiple cameras according to an embodiment of the present disclosure;
FIG. 2 is a graphical schematic of a target provided by another embodiment of the present disclosure;
fig. 3 is a schematic diagram of a pose conversion relationship between a camera to be calibrated and an auxiliary camera according to another embodiment of the present disclosure;
FIG. 4 is a schematic view of a step block provided by another embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a multi-camera global calibration device according to another embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to another embodiment of the present disclosure.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will understand, however, that although numerous technical details are set forth in the various embodiments in order to provide a better understanding of the present disclosure, the technical solutions claimed in the present disclosure can be implemented without these technical details and with various changes and modifications based on the following embodiments. The division into embodiments below is for convenience of description only and should not be construed as limiting the specific implementations of the disclosure; the embodiments may be combined with and referred to each other where not contradictory.
One embodiment of the present disclosure relates to a method for global calibration of multiple cameras, the flow of which is shown in fig. 1, including:
step S110, obtaining an internal reference matrix and a distortion coefficient of each camera to be calibrated and each auxiliary camera respectively.
Illustratively, step S110 includes: placing the auxiliary camera at a target position such that the field of view of the auxiliary camera at the target position can cover the fields of view of all cameras to be calibrated; based on the preset targets, shooting a group of calibration pictures with each camera to be calibrated and with the auxiliary camera, wherein the preset target occupies a different position of the corresponding camera's field of view in each calibration picture of a group, and the union of the target positions in each group of calibration pictures covers the field of view of the corresponding camera, the preset targets comprising a plurality of feature points arranged according to a preset arrangement rule, the feature points forming a non-centrosymmetric pattern; and, based on the calibration pictures, obtaining the internal reference matrix and distortion coefficient of each camera to be calibrated and of the auxiliary camera by using the Zhang Zhengyou camera calibration method.
Specifically, as shown in fig. 2, the preset target may use a plurality of dots arranged in an array as feature points. That the feature points form a non-centrosymmetric pattern means that the pattern formed by the feature points does not coincide with itself after being rotated by 180 degrees. It should be noted that the preset target may be a target with a pattern other than that shown in fig. 2, as long as it comprises a plurality of feature points arranged according to a preset arrangement rule and the pattern formed by those feature points is non-centrosymmetric.
When obtaining the internal reference matrix and distortion coefficient of each camera to be calibrated and the auxiliary camera, the auxiliary camera may be placed at a position from which its field of view covers the fields of view of all cameras to be calibrated, and the aperture and focal length of the auxiliary camera and of each camera to be calibrated adjusted until the imaging is sharp. A group of 15 to 20 calibration pictures of the preset target shown in fig. 2 is then shot with the auxiliary camera and with each camera to be calibrated, the preset target occupying a different position of the corresponding camera's field of view in each picture of a group, with the union of those positions covering the field of view of the corresponding camera. Finally, each group of calibration pictures is used to obtain the internal reference matrix and distortion coefficient of the corresponding camera by means of the Zhang Zhengyou camera calibration method.
According to the method and the device, the internal reference matrix and the distortion coefficient of each camera to be calibrated and the auxiliary camera are obtained by using the same preset target, so that the accuracy of internal reference calibration of each camera to be calibrated and the auxiliary camera can be improved.
Illustratively, obtaining the internal reference matrix and distortion coefficient of each camera to be calibrated and the auxiliary camera based on the calibration pictures by using the Zhang Zhengyou camera calibration method includes: determining a world coordinate system based on the preset target, and determining the world coordinates of each feature point in the calibration pictures based on the world coordinate system; determining, based on the calibration pictures and the world coordinates of each feature point, the image coordinates of each feature point in the calibration pictures under the corresponding camera coordinate system; and obtaining, based on the world coordinates and the image coordinates, the internal reference matrix and distortion coefficient of each camera to be calibrated and the auxiliary camera by using the Zhang Zhengyou camera calibration method.
Specifically, for example, when the preset target is as shown in fig. 2, this embodiment may detect each feature point in a calibration picture by using the detectCircleGridPoints function in MATLAB, which returns the image coordinates of the detected circle centres; each detected centre is then associated with the world coordinates of the corresponding feature point.
According to the method, the internal reference matrix and the distortion coefficient of each camera to be calibrated and the auxiliary camera are obtained by utilizing the world coordinates and the image coordinates of the feature points, so that the internal reference calibration accuracy of each camera to be calibrated and the auxiliary camera can be further improved.
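For intuition, the Zhang Zhengyou method works from planar homographies: for a target plane at Z = 0, each view's homography is H = K[r1 r2 t], and the orthonormality of r1 and r2 yields two linear constraints per view on B = K^-T K^-1, from which at least three views determine the internal reference matrix K. A synthetic check of those two constraints (the intrinsic values below are illustrative assumptions, not from the patent):

```python
import numpy as np

# Synthetic internal reference (intrinsic) matrix -- illustrative values.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# One target pose: rotation about the y-axis plus a translation.
theta = 0.3
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.1, -0.05, 1.5])

# For a planar target (Z = 0) the homography is H = K [r1 r2 t].
H = K @ np.column_stack((R[:, 0], R[:, 1], t))

# Zhang's constraints: with B = K^-T K^-1, h1^T B h2 = 0 and
# h1^T B h1 = h2^T B h2. Each view gives two such equations.
B = np.linalg.inv(K).T @ np.linalg.inv(K)
h1, h2 = H[:, 0], H[:, 1]
c1 = h1 @ B @ h2                 # orthogonality constraint, ~0
c2 = h1 @ B @ h1 - h2 @ B @ h2   # equal-norm constraint, ~0
```

In a real calibration B is unknown; the per-view constraints are stacked into a linear system, solved for B, and K is recovered from it before refining with the distortion coefficients.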
Illustratively, determining a world coordinate system based on a preset target, and determining world coordinates of each feature point in the calibration picture based on the world coordinate system, respectively, includes: and taking the target feature point in the preset target as an origin of a world coordinate system, and establishing the world coordinate system. And respectively determining the world coordinates of each feature point in the calibration picture according to a preset arrangement rule based on the world coordinate system.
Specifically, for example, when the preset target is as shown in fig. 2, the first dot at the upper left corner of the preset target, that is, the first dot in the first row, may be taken as the target feature point: the centre of that dot is taken as the origin of the world coordinate system, and the Z-axis is taken perpendicular to the target plane and pointing upwards, so that the Z coordinate of every feature point on the plane is 0. The centre coordinates of each feature point are then calculated according to the arrangement rule of the feature points in the preset target; these centre coordinates are the world coordinates of the corresponding feature points.
In the embodiment, the world coordinate system is established by taking the target feature point in the preset target as the origin, so that the calculation by using the world coordinate system in the subsequent step can be facilitated.
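Following the convention just described (top-left dot centre as origin, target plane at Z = 0), generating the world coordinates from the arrangement rule can be sketched as follows; the grid size and dot pitch are illustrative assumptions, not values from the patent:

```python
import numpy as np

def target_world_coordinates(rows, cols, spacing):
    """World coordinates of a rows x cols dot grid: the top-left dot
    centre is the origin, the target plane is Z = 0, and dot centres
    are `spacing` apart along the X and Y axes."""
    pts = [(c * spacing, r * spacing, 0.0)
           for r in range(rows) for c in range(cols)]
    return np.array(pts)

# Hypothetical 4x5 grid with a 25 mm pitch.
pts = target_world_coordinates(4, 5, 25.0)
```

These coordinates, paired with the detected image coordinates of the circle centres, are the input to the Zhang Zhengyou calibration in the following step.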
And step S120, based on the internal reference matrix and the distortion coefficient, respectively acquiring a rotation matrix and a translation vector between each camera to be calibrated and the auxiliary camera.
Illustratively, step S120 includes: and shooting preset targets in different postures by utilizing the cameras to be calibrated and the auxiliary camera respectively, so as to obtain a plurality of image groups corresponding to each camera to be calibrated.
Specifically, for example, as shown in fig. 3, when the cameras to be calibrated are the first camera to be calibrated 101, the second camera to be calibrated 102 and the third camera to be calibrated 103, the auxiliary camera is 120, and the first target 131, the second target 132 and the third target 133 are the preset targets corresponding to the first camera to be calibrated 101, the second camera to be calibrated 102 and the third camera to be calibrated 103 respectively, the first target 131 in different postures can be shot simultaneously by the first camera to be calibrated 101 and the auxiliary camera 120 to obtain a first image group {(I_1^t, I_a^t)} corresponding to the first camera to be calibrated 101, where t = 1, 2, …, N denotes the shooting order, N denotes the number of shots and N is not less than 3, and the t-th pair (I_1^t, I_a^t) comprises the image of the first target 131 captured the t-th time by the first camera to be calibrated 101 and the image of the first target 131 captured the t-th time by the auxiliary camera 120. Then, the second target 132 in different postures is shot simultaneously by the second camera to be calibrated 102 and the auxiliary camera 120 to obtain a second image group {(I_2^t, I_a^t)} corresponding to the second camera to be calibrated 102, each pair comprising the image of the second target 132 captured the t-th time by the second camera to be calibrated 102 and by the auxiliary camera 120. Likewise, the third target 133 in different postures is shot simultaneously by the third camera to be calibrated 103 and the auxiliary camera 120 to obtain a third image group {(I_3^t, I_a^t)} corresponding to the third camera to be calibrated 103, each pair comprising the image of the third target 133 captured the t-th time by the third camera to be calibrated 103 and by the auxiliary camera 120.
And respectively determining a rotation matrix and a translation vector between each camera to be calibrated and the auxiliary camera based on a plurality of image groups corresponding to each camera to be calibrated, the internal reference matrix and the distortion coefficient.
Specifically, for example, in this step the first image group obtained in the previous step may be input into the stereo (binocular) camera calibration toolbox in MATLAB, and the internal reference matrix and distortion coefficients of the first camera to be calibrated 101 imported using the toolbox's fixed-intrinsics estimation option, so that the toolbox obtains the rotation matrix R_1 and the translation vector T_1 between the first camera to be calibrated 101 and the auxiliary camera 120 based on the first image group and on the internal reference matrix and distortion coefficients of the first camera to be calibrated 101. Then, in the same way, the stereo calibration toolbox in MATLAB is used, based on the second image group together with the internal reference matrix and distortion coefficients of the second camera to be calibrated 102, and on the third image group together with the internal reference matrix and distortion coefficients of the third camera to be calibrated 103, to obtain the rotation matrix R_2 and translation vector T_2 between the second camera to be calibrated 102 and the auxiliary camera 120, and the rotation matrix R_3 and translation vector T_3 between the third camera to be calibrated 103 and the auxiliary camera 120.
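Internally, a stereo calibration of this kind rests on a simple relation: if one view gives the target pose in the camera-to-be-calibrated frame and in the auxiliary-camera frame, the camera-to-auxiliary extrinsics follow directly; in practice the toolbox optimises this over all N views. A hedged sketch of the single-view relation (names are illustrative):

```python
import numpy as np

def relative_pose(R_cam, t_cam, R_aux, t_aux):
    """Given the target pose in the camera-to-be-calibrated frame
    (X_cam = R_cam X_w + t_cam) and in the auxiliary-camera frame
    (X_aux = R_aux X_w + t_aux), return (R, T) with X_aux = R X_cam + T."""
    R = R_aux @ R_cam.T
    T = t_aux - R @ t_cam
    return R, T

# Toy check: same orientation, auxiliary camera sees the target 1 m
# further along z than the camera to be calibrated.
R_cam, t_cam = np.eye(3), np.array([0.0, 0.0, 1.0])
R_aux, t_aux = np.eye(3), np.array([0.0, 0.0, 2.0])
R, T = relative_pose(R_cam, t_cam, R_aux, t_aux)
```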
Step S130, three-dimensional point cloud data aiming at the same target area under each camera coordinate system to be calibrated are respectively obtained.
Illustratively, step S130 includes: forming a three-dimensional scanning system from each camera to be calibrated together with a laser; sequentially carrying out three-dimensional scanning of a preset step gauge block with each three-dimensional scanning system to obtain the three-dimensional image data corresponding to each camera to be calibrated; and carrying out three-dimensional reconstruction of the preset step gauge block based on the three-dimensional image data to obtain the three-dimensional point cloud data of the preset step gauge block under each camera-to-be-calibrated coordinate system.
Specifically, for example, in conjunction with fig. 3, when the cameras to be calibrated include the first camera to be calibrated 101, the second camera to be calibrated 102 and the third camera to be calibrated 103, each of them may be combined with a line laser to form a three-dimensional scanning system, and each three-dimensional scanning system may be used in turn to perform three-dimensional scanning of the step gauge block shown in fig. 4, so as to obtain the three-dimensional scanning data corresponding to the first camera to be calibrated 101, the second camera to be calibrated 102 and the third camera to be calibrated 103 respectively. Three-dimensional reconstruction of the step gauge block based on these scanning data then yields the three-dimensional point cloud data PC_1 of the step gauge block in the coordinate system of the first camera to be calibrated 101, PC_2 in the coordinate system of the second camera to be calibrated 102, and PC_3 in the coordinate system of the third camera to be calibrated 103.
It should be noted that the preset step gauge block may be a step gauge block with other heights or other step shapes besides the one shown in fig. 4, so long as each three-dimensional scanning system performs three-dimensional scanning on the same step gauge block. When each three-dimensional scanning system performs three-dimensional scanning on the preset step gauge block, the position and the posture of the step gauge block must be kept unchanged throughout, so as not to degrade the final calibration accuracy.
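The step-gauge-block scan above can be mimicked in software for experimentation. The following sketch (all names and dimensions are hypothetical, not part of the patented method) generates a synthetic staircase-shaped point cloud standing in for one camera's scan of the shared target area:

```python
# Hypothetical sketch: build a synthetic "step gauge block" point cloud so the
# later registration steps can be exercised without real scanner hardware.
import numpy as np

def make_step_block(step_heights=(0.0, 10.0, 20.0), step_depth=30.0,
                    width=50.0, points_per_step=200, seed=0):
    """Sample points on the top faces of a staircase-shaped block (units: mm)."""
    rng = np.random.default_rng(seed)
    points = []
    for i, h in enumerate(step_heights):
        x = rng.uniform(i * step_depth, (i + 1) * step_depth, points_per_step)
        y = rng.uniform(0.0, width, points_per_step)
        z = np.full(points_per_step, h)  # each step's top face is a plane
        points.append(np.column_stack([x, y, z]))
    return np.vstack(points)  # (N, 3) point cloud in the block's own frame

cloud = make_step_block()
print(cloud.shape)  # (600, 3)
```

Keeping the block's simulated pose fixed while "scanning" it from several camera frames mirrors the requirement above that the real gauge block must not move between scans.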
Step S140, based on the rotation matrix and the translation vector, converting the three-dimensional point cloud data of each camera to be calibrated under the coordinate system of the auxiliary camera to obtain the auxiliary point cloud data of each camera to be calibrated.
Illustratively, step S140 includes: according to the following formula (1), auxiliary point cloud data of each camera to be calibrated are respectively determined:
orientation_f = R_i * orientation_i + T_i    (1)
wherein orientation_i represents the three-dimensional point cloud data under the i-th camera coordinate system to be calibrated, and indicates the position and the pose of that three-dimensional point cloud data. R_i represents the rotation matrix between the i-th camera to be calibrated and the auxiliary camera. T_i represents the translation vector between the i-th camera to be calibrated and the auxiliary camera. orientation_f represents the auxiliary point cloud data corresponding to orientation_i, and indicates the position and the pose of that auxiliary point cloud data.
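Formula (1) is a plain rigid-body transform applied point-wise. A minimal numpy sketch (the concrete values of R_i and T_i here are illustrative, not from the patent):

```python
# Minimal sketch of formula (1): map camera i's point cloud into the
# auxiliary camera's frame with its rotation matrix R_i and translation T_i.
import numpy as np

def to_auxiliary_frame(points, R, T):
    """orientation_f = R_i * orientation_i + T_i, applied row-wise to (N, 3) points."""
    points = np.asarray(points, dtype=float)
    return points @ R.T + T  # equivalent to (R @ p + T) for each point p

# Example: camera i is rotated 90 degrees about z and shifted relative to
# the auxiliary camera (illustrative numbers).
R_i = np.array([[0.0, -1.0, 0.0],
                [1.0,  0.0, 0.0],
                [0.0,  0.0, 1.0]])
T_i = np.array([100.0, 0.0, 50.0])
aux_cloud = to_auxiliary_frame([[1.0, 2.0, 3.0]], R_i, T_i)
print(aux_cloud)  # [[98.  1. 53.]]
```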
Step S150, registering the auxiliary point cloud data based on a preset point cloud registration algorithm to obtain a transformation matrix from each auxiliary point cloud data to the global coordinate system.
Specifically, the global coordinate system herein refers to a preset coordinate system, applicable to all cameras to be calibrated, that is chosen from among the camera coordinate systems to be calibrated. For example, in conjunction with fig. 3, the global coordinate system may be the coordinate system of the first camera to be calibrated 101, the coordinate system of the second camera to be calibrated 102, or the coordinate system of the third camera to be calibrated 103.
When registering the auxiliary point cloud data, the auxiliary point cloud data corresponding to one camera to be calibrated may be selected as the target point cloud, and the auxiliary point cloud data corresponding to the other cameras to be calibrated are used as registration point clouds. A preset point cloud registration algorithm, such as the scale-aware iterative closest point (Iterative Closest Point, ICP) algorithm or the normal distributions transform (Normal Distributions Transform, NDT) algorithm, is then used to complete the registration among the auxiliary point cloud data corresponding to each camera to be calibrated, and the registration results can be used as the transformation matrices from each auxiliary point cloud data to the global coordinate system.
For example, after all three-dimensional point cloud data are converted into the coordinate system of the auxiliary camera to obtain each auxiliary point cloud data, the auxiliary point cloud data PC_m (1 ≤ m ≤ I) of one camera to be calibrated is selected as the target point cloud, the coordinate system of that camera to be calibrated is taken as the global coordinate system, and the other auxiliary point cloud data PC_n (1 ≤ n ≤ I, n ≠ m) are taken as registration point clouds. The registration from each registration point cloud to the target point cloud is completed with the preset point cloud registration algorithm, and the registration result can be used as the transformation matrix from the auxiliary point cloud data to the global coordinate system, wherein I represents the total number of cameras to be calibrated, and m and n are positive integers. Specifically, in this embodiment, the auxiliary point cloud data PC_1 for the target area under the first camera coordinate system to be calibrated is selected as the target point cloud, the auxiliary point cloud data PC_2 for the same target area under the second camera coordinate system to be calibrated and the auxiliary point cloud data PC_3 for the same target area under the third camera coordinate system to be calibrated are the registration point clouds, and each point cloud is registered using the scale-aware iterative closest point algorithm to obtain the transformation matrix Tran_iq of each auxiliary point cloud data, wherein Tran_iq represents the transformation matrix from the i-th registration point cloud, i.e. the i-th auxiliary point cloud data, to the target point cloud, i = 1, 2, 3.
In particular, the transformation matrix from the auxiliary point cloud data under the first camera coordinate system to be calibrated to the target point cloud can be expressed as

Tran_1q = [ E  0
            0  1 ]

wherein E is the 3*3 identity matrix; that is, Tran_1q is the 4*4 identity matrix, since the target point cloud is itself the auxiliary point cloud data under the first camera coordinate system to be calibrated.
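The alignment step underlying the registration described above can be sketched as follows. With known correspondences, a single closed-form Umeyama step recovers the scale, rotation, and translation that a scale-aware ICP converges to by iterating this after a nearest-neighbour matching step; the function name and the synthetic data are illustrative, not the patent's implementation:

```python
# Hedged sketch: one closed-form scaled alignment step (Umeyama) between two
# auxiliary point clouds with known row-to-row correspondences.
import numpy as np

def umeyama_alignment(src, dst):
    """Return scale s, rotation R, translation t with dst ≈ s * R @ src + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # cross-covariance of the clouds
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                        # guard against reflections
    R = U @ S @ Vt
    var_s = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t

# Target point cloud pc_m and registration point cloud pc_n (synthetic example).
rng = np.random.default_rng(1)
pc_m = rng.uniform(-10, 10, size=(100, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
pc_n = (pc_m - np.array([5.0, -2.0, 1.0])) @ R_true  # pc_n is a moved copy of pc_m
s, R, t = umeyama_alignment(pc_n, pc_m)
print(round(s, 6))  # no scale difference in this example, so s ≈ 1.0
```

With exact correspondences and no noise, the recovered (s, R, t) reproduces the simulated displacement exactly; real scans would additionally need the iterative matching loop.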
Step S160, based on the rotation matrix, the translation vector and the transformation matrix, determining a final transformation matrix from each camera to be calibrated to the global coordinate system, and based on the final transformation matrix, performing global calibration on each camera to be calibrated.
Illustratively, determining a final transformation matrix of each camera to be calibrated to the global coordinate system based on the rotation matrix, the translation vector, and the transformation matrix, respectively, includes:
according to the following formula (2), determining a final transformation matrix from each camera to be calibrated to a global coordinate system respectively:
SRT_i = Tran_iq * [ R_i  T_i
                    0    1  ]    (2)
wherein Tran_iq represents the transformation matrix from the auxiliary point cloud data corresponding to the i-th camera to be calibrated to the global coordinate system, and SRT_i represents the final transformation matrix from the i-th camera to be calibrated to the global coordinate system. At this point, based on the final transformation matrix from each camera to be calibrated to the global coordinate system, the global calibration of all cameras to be calibrated can be completed.
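The composition in formula (2) can be written out with homogeneous 4x4 matrices. A minimal sketch with illustrative values (in practice the R_i, T_i come from the stereo calibration step and Tran_iq from the point cloud registration step):

```python
# Sketch of formula (2): compose the camera-to-auxiliary extrinsics (R_i, T_i)
# with the registration result Tran_iq into one homogeneous 4x4 matrix SRT_i
# that maps camera i's coordinates into the global frame.
import numpy as np

def homogeneous(R, T):
    """Pack a rotation matrix and translation vector into a 4x4 matrix."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = T
    return M

R_i = np.eye(3)                       # camera i -> auxiliary camera rotation
T_i = np.array([0.0, 0.0, 100.0])     # camera i -> auxiliary camera translation
Tran_iq = homogeneous(np.eye(3), np.array([10.0, 0.0, 0.0]))  # registration result

SRT_i = Tran_iq @ homogeneous(R_i, T_i)  # final camera-i -> global transform

p_cam = np.array([1.0, 2.0, 3.0, 1.0])   # a point in camera i's frame (homogeneous)
print(SRT_i @ p_cam)  # [ 11.   2. 103.   1.]
```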
Compared with the prior art, the method and the device have the advantages that the auxiliary cameras are used, the coordinate systems of the cameras to be calibrated can be unified based on the coordinate systems of the auxiliary cameras, the operation process is simple and feasible, and the problems that the actual operation of multi-camera global calibration is complex and the calibration accuracy is low under the condition of small overlapping view fields or non-overlapping view fields can be effectively solved.
Another embodiment of the present disclosure relates to a multi-camera global calibration apparatus, as shown in fig. 5, comprising:
the first obtaining module 501 is configured to obtain an internal reference matrix and a distortion coefficient of each camera to be calibrated and each auxiliary camera respectively;
the second obtaining module 502 is configured to obtain a rotation matrix and a translation vector between each camera to be calibrated and the auxiliary camera based on the internal reference matrix and the distortion coefficient;
a third obtaining module 503, configured to obtain three-dimensional point cloud data for the same target area in each camera coordinate system to be calibrated;
the conversion module 504 is configured to convert the three-dimensional point cloud data under the coordinate system of each camera to be calibrated to the coordinate system of the auxiliary camera based on the rotation matrix and the translation vector, so as to obtain auxiliary point cloud data of each camera to be calibrated;
the registration module 505 is configured to register the auxiliary point cloud data based on a preset point cloud registration algorithm, so as to obtain a transformation matrix from each auxiliary point cloud data to the global coordinate system;
the global calibration module 506 is configured to determine a final transformation matrix from each camera to be calibrated to the global coordinate system based on the rotation matrix, the translation vector and the transformation matrix, and perform global calibration on each camera to be calibrated based on the final transformation matrix.
The specific implementation method of the multi-camera global calibration device provided by the embodiment of the present disclosure may be described with reference to the multi-camera global calibration method provided by the embodiment of the present disclosure, which is not repeated herein.
Compared with the prior art, the method and the device for calibrating the multi-camera global calibration can unify the coordinate systems of the cameras to be calibrated based on the coordinate systems of the auxiliary cameras by using the auxiliary cameras, and can effectively solve the problems that the actual operation of multi-camera global calibration is complex and the calibration precision is low under the condition of small overlapping fields of view or no overlapping fields of view.
Another embodiment of the present disclosure relates to an electronic device, as shown in fig. 6, comprising:
at least one processor 601; the method comprises the steps of,
a memory 602 communicatively coupled to the at least one processor 601; wherein,
the memory 602 stores instructions executable by the at least one processor 601, the instructions being executable by the at least one processor 601 to enable the at least one processor 601 to perform the multi-camera global calibration method described in the above embodiments.
Where the memory and the processor are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the buses connecting the various circuits of the one or more processors and the memory together. The bus may also connect various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art, and therefore, will not be described any further herein. The bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or may be a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor is transmitted over the wireless medium via the antenna, which further receives the data and transmits the data to the processor.
The processor is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And memory may be used to store data used by the processor in performing operations.
Another embodiment of the present disclosure relates to a computer readable storage medium storing a computer program which, when executed by a processor, implements the multi-camera global calibration method described in the above embodiment.
That is, it will be understood by those skilled in the art that all or part of the steps of the method described in the above embodiments may be implemented by a program stored in a storage medium, including several instructions for causing a device (which may be a single-chip microcomputer, a chip or the like) or a processor (processor) to perform all or part of the steps of the method described in the various embodiments of the disclosure. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific embodiments for carrying out the present disclosure, and that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure.

Claims (10)

1. The multi-camera global calibration method is characterized by comprising the following steps of:
respectively acquiring internal reference matrixes and distortion coefficients of each camera to be calibrated and each auxiliary camera;
based on the internal reference matrix and the distortion coefficient, respectively acquiring a rotation matrix and a translation vector between each camera to be calibrated and the auxiliary camera;
respectively acquiring three-dimensional point cloud data aiming at the same target area under each camera coordinate system to be calibrated;
based on the rotation matrix and the translation vector, respectively converting three-dimensional point cloud data of each camera to be calibrated under the coordinate system of the auxiliary camera to obtain auxiliary point cloud data of each camera to be calibrated;
registering the auxiliary point cloud data based on a preset point cloud registration algorithm to obtain a transformation matrix from each auxiliary point cloud data to a global coordinate system;
and respectively determining a final transformation matrix from each camera to be calibrated to the global coordinate system based on the rotation matrix, the translation vector and the transformation matrix, and performing global calibration on each camera to be calibrated based on the final transformation matrix.
2. The method for global calibration of multiple cameras according to claim 1, wherein the obtaining the internal reference matrix and the distortion coefficient of each camera to be calibrated and each auxiliary camera respectively comprises:
placing the auxiliary camera at a target position, so that the view field of the auxiliary camera at the target position can cover the view field of all cameras to be calibrated;
based on preset targets, shooting a group of calibration pictures by using each camera to be calibrated and each auxiliary camera, wherein each preset target in each group of calibration pictures is positioned at different positions corresponding to the field of view of the camera, and the sum of the positions of each preset target in each group of calibration pictures covers the field of view of the corresponding camera; the preset targets comprise a plurality of characteristic points which are arranged according to a preset arrangement rule, and the characteristic points form an asymmetric center pattern;
based on the calibration pictures, obtaining an internal reference matrix and a distortion coefficient of each camera to be calibrated and each auxiliary camera respectively by using a Zhang Zhengyou camera calibration method.
3. The method for global calibration of a multi-camera according to claim 2, wherein the obtaining the reference matrix and the distortion coefficient of each camera to be calibrated and the auxiliary camera by using a Zhang Zhengyou camera calibration method based on the calibration picture comprises:
determining a world coordinate system based on the preset target, and respectively determining world coordinates of each feature point in the calibration picture based on the world coordinate system;
based on the calibration picture and the world coordinates of each feature point, respectively determining the image coordinates of each feature point in the calibration picture under a corresponding camera coordinate system;
based on the world coordinates and the image coordinates, the internal reference matrix and the distortion coefficients of each camera to be calibrated and the auxiliary camera are respectively obtained by using a Zhang Zhengyou camera calibration method.
4. A multi-camera global calibration method according to claim 3, wherein determining a world coordinate system based on the preset target and determining world coordinates of each feature point in the calibration picture based on the world coordinate system respectively includes:
taking a target feature point in the preset target as an origin of the world coordinate system, and establishing the world coordinate system;
and respectively determining the world coordinates of the feature points in the calibration picture according to the preset arrangement rule based on the world coordinate system.
5. The method according to claim 4, wherein the obtaining rotation matrices and translation vectors between the cameras to be calibrated and the auxiliary camera based on the internal reference matrix and the distortion coefficients, respectively, includes:
shooting the preset targets in different postures by using the cameras to be calibrated and the auxiliary camera respectively to obtain a plurality of image groups corresponding to each camera to be calibrated;
and respectively determining the rotation matrix and the translation vector between each camera to be calibrated and the auxiliary camera based on a plurality of image groups corresponding to each camera to be calibrated, the internal reference matrix and the distortion coefficient.
6. The method for global calibration of multiple cameras according to any one of claims 1 to 5, wherein the respectively obtaining three-dimensional point cloud data for the same target area in each camera coordinate system to be calibrated includes:
forming a three-dimensional scanning system by each camera to be calibrated and a laser respectively;
sequentially carrying out three-dimensional scanning on a preset step gauge block by utilizing each three-dimensional scanning system to obtain three-dimensional image data corresponding to each camera to be calibrated;
and carrying out three-dimensional reconstruction on the preset step block based on the three-dimensional image data to obtain the three-dimensional point cloud data of the preset step block under each camera coordinate system to be calibrated.
7. The method for global calibration of multiple cameras according to any one of claims 1 to 5, wherein the converting three-dimensional point cloud data under the coordinate system of each camera to be calibrated to the coordinate system of the auxiliary camera based on the rotation matrix and the translation vector to obtain auxiliary point cloud data of each camera to be calibrated includes:
according to the following formula (1), determining the auxiliary point cloud data under each camera coordinate system to be calibrated respectively:
orientation_f = R_i * orientation_i + T_i    (1)
wherein orientation_i represents the three-dimensional point cloud data under the i-th camera coordinate system to be calibrated, R_i represents the rotation matrix between the i-th camera to be calibrated and the auxiliary camera, T_i represents the translation vector between the i-th camera to be calibrated and the auxiliary camera, and orientation_f represents the auxiliary point cloud data corresponding to orientation_i.
8. The method according to claim 7, wherein determining a final transformation matrix of each camera to be calibrated to the global coordinate system based on the rotation matrix, the translation vector, and the transformation matrix, respectively, comprises:
determining a final transformation matrix of each camera to be calibrated to the global coordinate system according to the following formula (2):
SRT_i = Tran_iq * [ R_i  T_i
                    0    1  ]    (2)
wherein Tran_iq represents the transformation matrix from the auxiliary point cloud data corresponding to the i-th camera to be calibrated to the global coordinate system, and SRT_i represents the final transformation matrix from the i-th camera to be calibrated to the global coordinate system.
9. A multi-camera global calibration device, characterized in that the multi-camera global calibration device comprises:
the first acquisition module is used for respectively acquiring an internal reference matrix and a distortion coefficient of each camera to be calibrated and the auxiliary camera;
the second acquisition module is used for respectively acquiring a rotation matrix and a translation vector between each camera to be calibrated and the auxiliary camera based on the internal reference matrix and the distortion coefficient;
the third acquisition module is used for respectively acquiring three-dimensional point cloud data aiming at the same target area under each camera coordinate system to be calibrated;
the conversion module is used for respectively converting the three-dimensional point cloud data under the coordinate system of each camera to be calibrated into the coordinate system of the auxiliary camera based on the rotation matrix and the translation vector to obtain the auxiliary point cloud data of each camera to be calibrated;
the registration module is used for registering the auxiliary point cloud data based on a preset point cloud registration algorithm to obtain a transformation matrix from each auxiliary point cloud data to a global coordinate system;
and the global calibration module is used for respectively determining a final transformation matrix from each camera to be calibrated to the global coordinate system based on the rotation matrix, the translation vector and the transformation matrix, and carrying out global calibration on each camera to be calibrated based on the final transformation matrix.
10. An electronic device, comprising:
at least one processor; the method comprises the steps of,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the multi-camera global calibration method of any one of claims 1 to 8.
CN202211692103.XA 2022-12-28 2022-12-28 Multi-camera global calibration method and device and electronic equipment Pending CN116188591A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211692103.XA CN116188591A (en) 2022-12-28 2022-12-28 Multi-camera global calibration method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN116188591A true CN116188591A (en) 2023-05-30

Family

ID=86443441

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211692103.XA Pending CN116188591A (en) 2022-12-28 2022-12-28 Multi-camera global calibration method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN116188591A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116962649A (en) * 2023-09-19 2023-10-27 安徽送变电工程有限公司 Image monitoring and adjusting system and line construction model
CN116962649B (en) * 2023-09-19 2024-01-09 安徽送变电工程有限公司 Image monitoring and adjusting system and line construction model

Similar Documents

Publication Publication Date Title
CN107633536B (en) Camera calibration method and system based on two-dimensional plane template
CN109035320B (en) Monocular vision-based depth extraction method
CN110689579A (en) Rapid monocular vision pose measurement method and measurement system based on cooperative target
CN107194974B (en) Method for improving multi-view camera external parameter calibration precision based on multiple recognition of calibration plate images
CN109272555B (en) External parameter obtaining and calibrating method for RGB-D camera
CN109255818B (en) Novel target and extraction method of sub-pixel level angular points thereof
CN112229323B (en) Six-degree-of-freedom measurement method of checkerboard cooperative target based on monocular vision of mobile phone and application of six-degree-of-freedom measurement method
CN115457147A (en) Camera calibration method, electronic device and storage medium
CN116188591A (en) Multi-camera global calibration method and device and electronic equipment
JP2016218815A (en) Calibration device and method for line sensor camera
CN113822920B (en) Method for acquiring depth information by structured light camera, electronic equipment and storage medium
CN110310309B (en) Image registration method, image registration device and terminal
CN114998447A (en) Multi-view vision calibration method and system
CN113963067A (en) Calibration method for calibrating large-view-field visual sensor by using small target
CN111563936A (en) Camera external parameter automatic calibration method and automobile data recorder
CN111336938A (en) Robot and object distance detection method and device thereof
CN112581544B (en) Camera calibration method without public view field based on parameter optimization
CN115018922A (en) Distortion parameter calibration method, electronic device and computer readable storage medium
CN112927299B (en) Calibration method and device and electronic equipment
CN110310312B (en) Image registration method, image registration device and terminal
CN114332247A (en) Calibration method and device for multi-view vision measurement, storage medium and camera equipment
CN112750165B (en) Parameter calibration method, intelligent driving method, device, equipment and storage medium thereof
CN109919998B (en) Satellite attitude determination method and device and terminal equipment
CN112163519A (en) Image mapping processing method, device, storage medium and electronic device
CN112684250B (en) Calibration method for high-power millimeter wave intensity measurement system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination