CN114092569B - Binocular camera online calibration method and system based on multi-sensor fusion - Google Patents


Info

Publication number
CN114092569B
Authority
CN
China
Prior art keywords
camera
binocular camera
coordinate system
calibration
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210057189.2A
Other languages
Chinese (zh)
Other versions
CN114092569A (en)
Inventor
范柘 (Fan Zhe)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Aware Information Technology Co., Ltd.
Original Assignee
Anville Information Technology (Tianjin) Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anville Information Technology (Tianjin) Co., Ltd.
Priority to CN202210057189.2A
Publication of CN114092569A
Application granted
Publication of CN114092569B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85: Stereo camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures

Abstract

The invention provides a binocular camera online calibration method and system based on multi-sensor fusion, belonging to the technical field of computer vision. The method comprises the following steps: performing a first self-calibration of the mutual pose of the left and right cameras of the binocular camera; acquiring real-time detection data of the multiple sensors, and performing a second self-calibration of the pose of the binocular camera in the carrier coordinate system according to the real-time detection data. The online calibration of the invention thus comprises two aspects: self-calibration of the mutual pose of the left and right cameras, and self-calibration of the binocular camera relative to the ground. The scheme of the invention can cope with binocular camera deviation caused by various interference factors, and quickly restores self-calibration after a deviation occurs.

Description

Binocular camera online calibration method and system based on multi-sensor fusion
Technical Field
The invention relates to the technical field of computer vision, in particular to a binocular camera online calibration method and system based on multi-sensor fusion.
Background
The binocular camera is a common device in computer vision processing: using the parallax principle, it computes and analyzes two images of an object captured from different positions to obtain the object's spatial geometric information. By mounting a binocular camera system on a carrier, the carrier can move automatically through a spatial scene or avoid obstacles. Before a binocular camera is actually put into use, its parameters need to be calibrated offline using a standard calibration board or reference object. Typically, this calibration covers data such as the camera lens parameters (intrinsics), the mutual pose of the binocular left and right eyes, and the pose of the binocular camera within the carrier coordinate system (as shown in fig. 1). The camera lens parameters (including the distortion model, focal length, optical center, etc.) are fixed when the lens leaves the factory and generally do not change during use, so offline calibration suffices for them. The mutual pose of the binocular left and right eyes is the pose of each eye's coordinate system expressed in the coordinate system of the other eye; sufficiently accurate pose parameters can be obtained for it during offline calibration.
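For orientation, the parallax principle reduces, for an ideally rectified binocular pair, to the triangulation relation Z = f · B / d (depth from focal length, baseline, and disparity). The following minimal sketch illustrates this relation; the focal length and baseline values are illustrative assumptions, not parameters of the invention:

```python
def depth_from_disparity(disparity_px: float, focal_px: float = 700.0,
                         baseline_m: float = 0.12) -> float:
    """Depth Z = f * B / d for an ideally rectified binocular pair.
    focal_px and baseline_m are hypothetical example values."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(35.0))  # a 35 px disparity maps to 2.4 m
```

The same relation also explains why calibration drift matters: an error in the assumed geometry corrupts the disparity-to-depth conversion for every pixel.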
However, during use, under the action of various factors (impact, vibration, etc.), the mutual pose of the left and right eyes may change, degrading the performance of the binocular system. The same applies to the pose of the binocular camera in the carrier coordinate system: after impact or vibration, the equipment is likely to shift (as shown in fig. 2). Such an offset of the binocular camera device may degrade the performance of the binocular system or even prevent it from working properly. Moreover, because binocular camera equipment is used in diverse scenes, offline recalibration of a camera already in service is very cumbersome in some special scenes, and its cost is extremely high.
This analysis shows that the binocular camera calibration techniques of the prior art struggle to cope with the various interference factors of real scenes, incur a high calibration cost, and thus have difficulty meeting practical requirements.
Disclosure of Invention
In order to at least solve the technical problems in the background art, the invention provides a binocular camera online calibration method and system based on multi-sensor fusion, electronic equipment and a storage medium, so that self-calibration of the binocular camera is realized when the posture of the binocular camera deviates, calibration procedures are reduced, and the use cost is reduced.
The invention provides a binocular camera online calibration method based on multi-sensor fusion, which comprises the following steps: s10, performing first self-calibration on the mutual postures of the left camera and the right camera of the binocular camera;
and S20, acquiring real-time detection data of the multiple sensors, and performing second self-calibration on the posture of the binocular camera under the carrier coordinate system according to the real-time detection data.
Optionally, in step S10, the performing a first self-calibration on the mutual postures of the left camera and the right camera of the binocular camera includes:
s110, acquiring images shot by a left camera and a right camera at the same time aiming at the same scene, and performing feature extraction and matching on the images to obtain a first matching feature point set;
s111, performing parallel correction on each first matching feature point in the first matching feature point set by using a parallel correction algorithm;
s112, judging whether the first matched feature points after parallel correction meet a first condition, if so, judging that the first attitude parameters of the binocular camera are qualified, and outputting calibration parameters; if not, turning to S113;
and S113, correcting the posture parameters of the binocular camera.
Optionally, the determining whether the corrected first matching feature point meets a first condition includes:
if the first matched feature points after parallel correction are distributed in the same row/column, judging that the first attitude parameter of the binocular camera is qualified; and if the matched feature points after parallel correction are distributed in different rows/columns, judging that the first attitude parameter of the binocular camera is unqualified.
Optionally, in step S113, the correcting the pose parameters of the binocular camera includes:
Rotation correction: constructing an error function, and performing rotation correction on the rotation angle by nonlinear optimization so that the matched feature points after parallel correction are distributed in the same row;
Translation correction: obtaining a scale factor by using the epipolar constraint, scaling all translation matrices solved from the epipolar constraint by this same scale factor to obtain the translation matrix, and performing translation correction according to the translation matrix.
Optionally, the constructing an error function includes the following steps:
S1130, taking the left camera as the reference, the rotation matrix of the left eye is taken as the identity matrix and its translation matrix as the zero matrix; letting the rotation matrix of the right eye be R and its translation matrix be T, R is decomposed into two rotation matrices r_l and r_r, each eye being rotated halfway toward the other;
S1131, solving according to the Bouguet algorithm to obtain the left and right correction matrices, which are respectively:
R_l' = R_rect · r_l and R_r' = R_rect · r_r,
wherein R_rect is related only to T;
then, for one 3D point, its imaging points on the left and right imaging planes are:
p_l = K_l · R_l' · P_l and p_r = K_r · R_r' · P_r (in homogeneous coordinates, up to projective scale),
wherein K_l represents the 3 × 3 internal reference matrix of the left camera, K_r represents the 3 × 3 internal reference matrix of the right camera, P_w is the three-dimensional coordinate of a certain point in the world coordinate system, P_l and P_r are respectively the three-dimensional coordinates of that same world point in the left and right camera coordinate systems, and T is the translation matrix of the right-eye camera relative to the left-eye camera;
S1132, constructing the error function as the included angle between the line connecting the two imaging points and the positive X-axis direction of the image coordinate system:
E(R) = Σ_i θ_i²,
wherein θ_i represents the included angle between the line connecting the imaging points of the i-th 3D point of the world coordinate system on the left and right camera imaging planes and the positive X-axis direction of the image coordinate system;
the rotation matrix R is optimized to minimize this error, thereby obtaining the optimal solution for R.
Optionally, in step S20, the acquiring real-time detection data of the multiple sensors, and performing a second self-calibration on the pose of the binocular camera in the vehicle coordinate system according to the real-time detection data includes:
s200, defining a carrier coordinate system and a ground coordinate system;
s201, obtaining mileage information of the carrier, solving an offset value of the binocular camera relative to a carrier coordinate system according to the mileage information, and calibrating the binocular camera on line according to the offset value.
Optionally, in step S201, the solving an offset value of the binocular camera relative to the carrier coordinate system according to the mileage information includes:
S2010, generating a V-disparity map from the disparity map, extracting the ground pixels in the V-disparity map to obtain the coordinates of ground points in the left camera coordinate system, and reconstructing the ground;
S2011, solving a ground equation under the carrier coordinate system according to the calibration parameters;
S2012, performing feature matching on the front and rear frame images input by the left camera to obtain second matching feature points of the front and rear frame images;
S2013, performing three-dimensional reconstruction on the second matching feature points of the front and rear frames respectively according to the ground equation to obtain the 3D coordinates of the second matching feature points at different moments;
S2014, obtaining a first motion parameter Hc between the front and rear frame images by solving the over-determined equation P' = R × P + T, wherein the first motion parameter Hc comprises the rotation matrix R and the translation matrix T;
S2015, performing segmented integration on the detection data of the IMU to obtain a second motion parameter Hv of the carrier between the front and rear frame images;
S2016, according to the first motion parameter Hc of the binocular camera and the second motion parameter Hv of the carrier, solving the equation Hv · Hvc = Hvc · Hc to obtain a second attitude parameter of the binocular camera in the carrier coordinate system, wherein Hvc represents the transformation matrix converting the camera coordinate system to the carrier coordinate system;
S2017, determining an offset value of the binocular camera relative to the carrier coordinate system according to the second attitude parameter and the calibration parameters.
The invention provides a binocular camera online calibration system based on multi-sensor fusion, which comprises a processing module, a storage module and a communication module, the processing module being connected to the storage module and the communication module respectively; wherein:
the storage module is used for storing executable computer program codes;
the communication module is used for acquiring detection data of the binocular camera and the multi-sensor and transmitting the detection data to the processing module;
the processing module is configured to execute the method according to any one of the preceding claims by calling the executable computer program code in the storage module.
A third aspect of the present invention provides an electronic device comprising:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to perform the method of any of the preceding claims.
A fourth aspect of the invention provides a computer storage medium having stored thereon a computer program which, when executed by a processor, performs a method as set forth in any one of the preceding claims.
According to the scheme of the invention, a first self-calibration is performed on the mutual pose of the left and right cameras of the binocular camera; real-time detection data of the multiple sensors is then acquired, and a second self-calibration is performed on the pose of the binocular camera in the carrier coordinate system according to that data. The online calibration of the invention thus comprises two aspects: self-calibration of the mutual pose of the left and right cameras, and self-calibration of the binocular camera relative to the ground. The scheme of the invention can cope with binocular camera deviation caused by various interference factors, and quickly restores self-calibration after a deviation occurs.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a schematic view of a binocular camera calibration in the prior art.
Fig. 2 is a schematic view of a binocular camera offset of the prior art.
Fig. 3 is a schematic flow chart of a binocular camera online calibration method based on multi-sensor fusion, which is disclosed by the embodiment of the invention.
Fig. 4 is a schematic diagram of a mutual posture self-calibration process of the left and right cameras of the binocular camera.
Fig. 5 is a schematic diagram of parallel correction of the left and right cameras of the binocular camera of the present invention.
Fig. 6 is a schematic view of the translation correction of the left and right binocular cameras of the present invention.
Fig. 7 is a flow chart of pose self-calibration of the left and right cameras of the binocular camera in the vehicle coordinate system.
Fig. 8 is a schematic view of the calibration of the left and right cameras of the binocular camera in the vehicle coordinate system.
Fig. 9 is a schematic structural diagram of a binocular camera online calibration system based on multi-sensor fusion disclosed by the embodiment of the invention.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged under appropriate circumstances in order to facilitate the description of the embodiments of the invention herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In the present invention, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "center", "vertical", "horizontal", "lateral", "longitudinal", and the like indicate an orientation or positional relationship based on the orientation or positional relationship shown in the drawings. These terms are used primarily to better describe the invention and its embodiments and are not intended to limit the indicated devices, elements or components to a particular orientation or to be constructed and operated in a particular orientation.
Moreover, some of the above terms may be used to indicate other meanings besides the orientation or positional relationship, for example, the term "on" may also be used to indicate some kind of attachment or connection relationship in some cases. The specific meanings of these terms in the present invention can be understood by those skilled in the art as appropriate.
Furthermore, the terms "mounted," "disposed," "provided," "connected," and "sleeved" are to be construed broadly. For example, it may be a fixed connection, a removable connection, or a unitary construction; can be a mechanical connection, or an electrical connection; may be directly connected, or indirectly connected through intervening media, or may be in internal communication between two devices, elements or components. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to specific situations.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
Example one
Fig. 3 is a schematic flow chart of a binocular camera online calibration method based on multi-sensor fusion according to an embodiment of the present invention. As shown in fig. 3, the binocular camera online calibration method based on multi-sensor fusion according to the embodiment of the present invention includes the following steps: s10, performing first self-calibration on the mutual postures of the left camera and the right camera of the binocular camera;
and S20, acquiring real-time detection data of the multiple sensors, and performing second self-calibration on the posture of the binocular camera under the carrier coordinate system according to the real-time detection data.
In the embodiment of the invention, in contrast to the offline calibration of the prior art, an online calibration method is designed. Specifically, the online calibration of the invention comprises two aspects: self-calibration of the mutual pose of the left and right cameras, and self-calibration of the binocular camera relative to the ground. The scheme of the invention can cope with binocular camera deviation caused by various interference factors, and quickly restores self-calibration after a deviation occurs.
It should be noted that the method of the present invention can be implemented in various ways. On one hand, the method is implemented by a processing device located on the carrier together with the binocular camera; that is, the processing device is electrically connected to the binocular camera, can also acquire the carrier coordinate system, and then completes the first self-calibration and the second self-calibration through a series of processing calculations. The processing device may be any conventional processor, controller, microcontroller, or state machine, or may be implemented by a combination of computing devices, such as a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other similar configuration, and may take the form of a smartphone, PC, tablet, wearable device, or other smart hardware such as a smart robot. On the other hand, the method can be realized through a server side: the server communicates with the carrier, thereby communicating indirectly with the binocular camera, obtains a calibration result through a series of processing calculations, and transmits the calibration result to the binocular camera of the carrier, thus realizing the first self-calibration and the second self-calibration. The server side can be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
Optionally, in step S10, the performing a first self-calibration on the mutual postures of the left camera and the right camera of the binocular camera includes:
s110, acquiring images shot by a left camera and a right camera at the same time aiming at the same scene, and performing feature extraction and matching on the images to obtain a first matching feature point set;
s111, performing parallel correction on each first matching feature point in the first matching feature point set by using a parallel correction algorithm;
s112, judging whether the first matched feature points after parallel correction meet a first condition, if so, judging that the first attitude parameters of the binocular camera are qualified, and outputting calibration parameters; if not, turning to S113;
and S113, correcting the posture parameters of the binocular camera.
In the embodiment of the present invention, referring to fig. 4, self-calibration is first performed on the left and right cameras of the binocular camera. The left and right cameras are controlled to shoot the same scene at the same time; feature extraction and matching are then performed on the images obtained from the two cameras to obtain a first matching feature point set. After parallel correction (referring to fig. 5) is applied to each first matching feature point in the set, if the points meet the predetermined condition, the first pose parameters of the binocular camera are judged to be qualified; otherwise there is a problem and further optimization is required. In this way, the scheme of the invention realizes self-calibration of the left and right cameras of the binocular camera and avoids mutual deviation between them.
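As a concrete illustration of steps S110 to S112, the sketch below uses OpenCV's ORB detector, a brute-force Hamming matcher, and a 1-pixel row tolerance; none of these specific choices is mandated by the patent, and the input images are assumed to have already undergone parallel correction:

```python
import cv2
import numpy as np

def rectified_row_check(img_l, img_r, row_tol_px=1.0):
    """S110: extract and match features between the left and right images;
    S112: after parallel correction, matched points should share image rows."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_l, des_l = orb.detectAndCompute(img_l, None)
    kp_r, des_r = orb.detectAndCompute(img_r, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_l, des_r)
    row_err = np.array([abs(kp_l[m.queryIdx].pt[1] - kp_r[m.trainIdx].pt[1])
                        for m in matches])
    ok = np.median(row_err) <= row_tol_px  # qualified: output calibration parameters
    return ok, row_err                      # not qualified: go to S113 (correction)
```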
Optionally, the determining whether the corrected first matching feature point meets a first condition includes:
if the first matched feature points after parallel correction are distributed in the same row/column, judging that the first attitude parameter of the binocular camera is qualified; and if the matched feature points after parallel correction are distributed in different rows/columns, judging that the first attitude parameter of the binocular camera is unqualified.
In the embodiment of the present invention, a horizontal or vertical calibration object may be arranged in the scene in advance. For a binocular camera with qualified pose parameters, the first matching feature points should therefore lie in the same row or the same column; otherwise they will not. The invention uses this principle to set the first condition for detecting whether a deviation exists between the left and right cameras, and thereby obtains the judgment of whether the first pose parameter of the binocular camera is qualified.
It should be noted that the parallel correction algorithm adopted here is the Bouguet algorithm; other parallel correction algorithms are of course not excluded by the present invention, and they are not described again here.
Optionally, in step S113, the correcting the pose parameters of the binocular camera includes:
Rotation correction: constructing an error function, and performing rotation correction on the rotation angle by nonlinear optimization so that the matched feature points after parallel correction are distributed in the same row;
Translation correction: obtaining a scale factor by using the epipolar constraint, scaling all translation matrices solved from the epipolar constraint by this same scale factor to obtain the translation matrix, and performing translation correction according to the translation matrix.
In the embodiment of the present invention, the correction of the binocular camera's pose parameters comprises two aspects, rotation correction and translation correction (refer to fig. 6); after correction, the pose parameters of the binocular camera are qualified.
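The sketch below illustrates the translation-correction idea under stated assumptions: the essential matrix recovered from the epipolar constraint fixes the translation only up to scale, so all of its components are rescaled by one common scale factor, here anchored to a known baseline length (the patent leaves the source of the scale factor open, so the known-baseline anchor is our assumption):

```python
import cv2
import numpy as np

def translation_from_epipolar(pts_l, pts_r, K, baseline_m):
    """Recover R and T from matched points via the epipolar constraint, then
    rescale the unit-norm translation by a single common scale factor."""
    E, _ = cv2.findEssentialMat(pts_l, pts_r, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_l, pts_r, K)
    scale = baseline_m / np.linalg.norm(t)  # one scale factor for all components
    return R, t * scale
```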
Optionally, the constructing an error function includes the following steps:
S1130, taking the left camera as the reference, the rotation matrix of the left eye is taken as the identity matrix and its translation matrix as the zero matrix; letting the rotation matrix of the right eye be R and its translation matrix be T, R is decomposed into two rotation matrices r_l and r_r, each eye being rotated halfway toward the other;
S1131, solving according to the Bouguet algorithm to obtain the left and right correction matrices, which are respectively:
R_l' = R_rect · r_l and R_r' = R_rect · r_r,
wherein R_rect is related only to T;
then, for one 3D point, its imaging points on the left and right imaging planes are:
p_l = K_l · R_l' · P_l and p_r = K_r · R_r' · P_r (in homogeneous coordinates, up to projective scale),
wherein K_l represents the 3 × 3 internal reference matrix of the left camera, K_r represents the 3 × 3 internal reference matrix of the right camera, P_w is the three-dimensional coordinate of a certain point in the world coordinate system, P_l and P_r are respectively the three-dimensional coordinates of that same world point in the left and right camera coordinate systems, and T is the translation matrix of the right-eye camera relative to the left-eye camera;
S1132, constructing the error function as the included angle between the line connecting the two imaging points and the positive X-axis direction of the image coordinate system:
E(R) = Σ_i θ_i²,
wherein θ_i represents the included angle between the line connecting the imaging points of the i-th 3D point of the world coordinate system on the left and right camera imaging planes and the positive X-axis direction of the image coordinate system;
the rotation matrix R is optimized to minimize this error, thereby obtaining the optimal solution for R.
In the embodiment of the present invention, the above series of processing yields an accurate error function, which in turn makes the rotation correction practical to implement.
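A hedged sketch of how this error function might be minimized numerically follows. The Rodrigues parametrization of R, the use of cv2.stereoRectify for the Bouguet solution, the side-by-side horizontal offset when measuring the connecting-line angle, and scipy's least-squares solver are all implementation assumptions rather than details fixed by the patent:

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

def row_angle_residuals(rvec, pts_l, pts_r, K_l, d_l, K_r, d_r, size, T):
    """theta_i of S1132: angle between the line joining each rectified matched
    pair (right image placed beside the left) and the positive image X axis."""
    R, _ = cv2.Rodrigues(np.asarray(rvec, dtype=np.float64))
    R1, R2, P1, P2, _, _, _ = cv2.stereoRectify(K_l, d_l, K_r, d_r, size, R, T)
    rl = cv2.undistortPoints(pts_l.reshape(-1, 1, 2), K_l, d_l, R=R1, P=P1)[:, 0]
    rr = cv2.undistortPoints(pts_r.reshape(-1, 1, 2), K_r, d_r, R=R2, P=P2)[:, 0]
    w = size[0]  # horizontal offset keeps the connecting line pointing rightward
    return np.arctan2(rr[:, 1] - rl[:, 1], rr[:, 0] + w - rl[:, 0])

# Usage (pts_l, pts_r: (N, 2) matched points; R0, T: current extrinsics):
# res = least_squares(row_angle_residuals, cv2.Rodrigues(R0)[0].ravel(),
#                     args=(pts_l, pts_r, K_l, d_l, K_r, d_r, (w, h), T))
# R_opt = cv2.Rodrigues(res.x)[0]  # rotation minimizing sum(theta_i ** 2)
```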
Optionally, in step S20, the acquiring real-time detection data of the multiple sensors, and performing a second self-calibration on the pose of the binocular camera in the vehicle coordinate system according to the real-time detection data includes:
s200, defining a carrier coordinate system and a ground coordinate system;
s201, obtaining mileage information of the carrier, solving an offset value of the binocular camera relative to a carrier coordinate system according to the mileage information, and calibrating the binocular camera on line according to the offset value.
In the embodiment of the present invention, referring to fig. 7, since the transformation relationship between the carrier coordinate system and the ground coordinate system is fixed, and the pose of the binocular camera in the carrier coordinate system is fixed, the ground "seen" by the binocular camera should also be fixed. In view of this, when the second self-calibration is performed during operation of the carrier, the invention further draws on the real-time detection data of the multiple sensors carried by the carrier: mileage (odometry) information, obtainable from the real-time detection data of a wheel speed meter, an IMU (inertial measurement unit), and the like, is introduced, and the offset of the binocular camera relative to the carrier coordinate system is solved by fusing this mileage information, thereby completing the online calibration of the binocular camera.
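The segmented integration of the IMU data mentioned here (step S2015 below) can be pictured as integrating only the inertial samples that fall between two camera frames. The following sketch assumes a constant sample interval and gravity-compensated accelerations, neither of which the patent specifies:

```python
import numpy as np

def integrate_imu_segment(gyro, accel, dt):
    """Dead-reckon one IMU segment (between two camera frames) into a relative
    rotation R and translation p of the carrier. gyro/accel: (N, 3) arrays of
    angular rate [rad/s] and gravity-compensated acceleration [m/s^2]."""
    R, v, p = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        p += v * dt + 0.5 * (R @ a) * dt * dt
        v += (R @ a) * dt
        theta = w * dt
        angle = np.linalg.norm(theta)
        if angle > 1e-12:  # Rodrigues formula for the incremental rotation
            k = theta / angle
            K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
            R = R @ (np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K))
    return R, p  # one building block of the second motion parameter Hv
```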
Optionally, referring to fig. 8, in step S201, the solving an offset value of the binocular camera with respect to the carrier coordinate system according to the mileage information includes:
S2010, generating a V-disparity map from the disparity map, extracting the ground pixels in the V-disparity map to obtain the coordinates of ground points in the left camera coordinate system, and reconstructing the ground;
S2011, solving a ground equation under the carrier coordinate system according to the calibration parameters;
S2012, performing feature matching on the front and rear frame images input by the left camera to obtain second matching feature points of the front and rear frame images;
S2013, performing three-dimensional reconstruction on the second matching feature points of the front and rear frames respectively according to the ground equation to obtain the 3D coordinates of the second matching feature points at different moments;
S2014, obtaining a first motion parameter Hc between the front and rear frame images by solving the over-determined equation P' = R × P + T, wherein the first motion parameter Hc comprises the rotation matrix R and the translation matrix T;
S2015, performing segmented integration on the detection data of the IMU to obtain a second motion parameter Hv of the carrier between the front and rear frame images;
S2016, according to the first motion parameter Hc of the binocular camera and the second motion parameter Hv of the carrier, solving the equation Hv · Hvc = Hvc · Hc to obtain a second attitude parameter of the binocular camera in the carrier coordinate system, wherein Hvc represents the transformation matrix converting the camera coordinate system to the carrier coordinate system;
S2017, determining an offset value of the binocular camera relative to the carrier coordinate system according to the second attitude parameter and the calibration parameters.
In the embodiment of the invention, the ground is first reconstructed to obtain the ground equation in the carrier coordinate system, from which the 3D coordinates of the second matching feature points in each image frame are obtained. From the images captured by the binocular camera, the first motion parameter Hc can thus be identified. The data of the inertial measurement unit (IMU) is then integrated in segments, which yields the actual second motion parameter Hv of the carrier between the two frames. From Hc and Hv, the offset value of the binocular camera's attitude parameters in the carrier coordinate system can finally be calculated.
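A minimal sketch of step S2014 follows, assuming the classic SVD-based (Kabsch/Procrustes) solution of the over-determined equation P' = R × P + T; the patent does not fix the solver, so this is one reasonable choice rather than the prescribed implementation:

```python
import numpy as np

def rigid_transform_3d(P, P_prime):
    """Least-squares R, T with P_prime ≈ R @ P + T for (N, 3) point sets
    (the 3D-reconstructed second matching feature points of two frames)."""
    c0, c1 = P.mean(axis=0), P_prime.mean(axis=0)
    H = (P - c0).T @ (P_prime - c1)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    T = c1 - R @ c0
    return R, T  # together they form the first motion parameter Hc
```

Solving the subsequent relation Hv · Hvc = Hvc · Hc for the camera-to-carrier transform Hvc is the classic hand-eye calibration problem (AX = XB), for which off-the-shelf solvers such as OpenCV's calibrateHandEye exist.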
Example two
Referring to fig. 9, fig. 9 is a schematic structural diagram of a binocular camera online calibration system based on multi-sensor fusion disclosed in the embodiment of the present invention. As shown in fig. 9, the binocular camera online calibration system 100 based on multi-sensor fusion according to the embodiment of the present invention comprises a processing module 101, a storage module 102 and a communication module 103, the processing module 101 being connected to the storage module 102 and the communication module 103; wherein:
the storage module 102 is configured to store executable computer program codes;
the communication module 103 is configured to acquire detection data of the binocular camera and the multi-sensor and transmit the detection data to the processing module 101;
the processing module 101 is configured to execute the method according to the first embodiment by calling the executable computer program code in the storage module 102.
For specific functions of the binocular camera online calibration system based on multi-sensor fusion in this embodiment, reference is made to the first embodiment, and since the system in this embodiment adopts all technical solutions of the first embodiment, at least all beneficial effects brought by the technical solutions of the first embodiment are achieved, and details are not repeated herein.
EXAMPLE III
Referring to fig. 10, fig. 10 is an electronic device according to an embodiment of the disclosure, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the method according to the first embodiment.
Example four
The embodiment of the invention also discloses a computer storage medium, wherein a computer program is stored on the storage medium, and the computer program executes the method in the first embodiment when being executed by a processor.
The electronic device involved in the present invention includes a computing unit that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) or a computer program loaded from a storage unit into a Random Access Memory (RAM). In the RAM, various programs and data required for the operation of the device can also be stored. The computing unit, the ROM, and the RAM are connected to each other by a bus. An input/output (I/O) interface is also connected to the bus.
A plurality of components in an electronic device are connected to an I/O interface, including: input unit, output unit, memory cell and communication unit. The input unit may be any type of device capable of inputting information to the electronic device, and the input unit may receive input numeric or character information and generate key signal inputs related to user settings and/or function controls of the electronic device. The output unit may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit may include, but is not limited to, a magnetic disk, an optical disk. The communication unit allows the electronic device to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as bluetooth (TM) devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The computing unit may be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of computing units include, but are not limited to, Central Processing Units (CPUs), Graphics Processing Units (GPUs), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, Digital Signal Processors (DSPs), and any suitable processors, controllers, microcontrollers, etc. The computing unit performs the various methods and processes described above. For example, in some embodiments, the binocular camera online calibration method can be implemented as a computer software program tangibly embodied in a machine-readable medium, such as a memory unit. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device via the ROM and/or the communication unit. In some embodiments, the computing unit may be configured to perform the binocular camera online calibration method in any other suitable manner (e.g., by way of firmware).
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As used in this disclosure, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

Claims (6)

1. A binocular camera online calibration method based on multi-sensor fusion, comprising the following steps: S10, performing a first self-calibration on the mutual postures of the left camera and the right camera of the binocular camera;
S20, acquiring real-time detection data of the multiple sensors, and performing a second self-calibration on the posture of the binocular camera under the carrier coordinate system according to the real-time detection data;
in step S10, the performing a first self-calibration on the mutual postures of the left camera and the right camera of the binocular camera includes:
S110, acquiring images shot by the left camera and the right camera at the same time aiming at the same scene, and performing feature extraction and matching on the images to obtain a first matching feature point set;
S111, performing parallel correction on each first matching feature point in the first matching feature point set by using a parallel correction algorithm;
S112, judging whether the first matched feature points after parallel correction meet a first condition; if so, judging that the first attitude parameters of the binocular camera are qualified, and outputting calibration parameters; if not, turning to S113;
S113, correcting the posture parameters of the binocular camera;
the judging whether the corrected first matching feature points meet a first condition includes:
if the first matched feature points after parallel correction are distributed in the same row/column, judging that the first attitude parameter of the binocular camera is qualified; if the matched feature points after parallel correction are distributed in different rows/columns, judging that the first attitude parameter of the binocular camera is unqualified;
in step S113, the correcting the posture parameters of the binocular camera includes:
rotation correction: constructing an error function, and performing rotation correction on the rotation angle by nonlinear optimization so that the matched feature points after parallel correction are distributed in the same row;
translation correction: obtaining a scale factor by using the epipolar constraint, scaling all translation matrices solved from the epipolar constraint by this same scale factor to obtain the translation matrix, and performing translation correction according to the translation matrix;
the constructing an error function comprises the following steps:
S1130, taking the left camera as the reference, the rotation matrix of the left eye is taken as the identity matrix and its translation matrix as the zero matrix; letting the rotation matrix of the right eye be R and its translation matrix be T, R is decomposed into two rotation matrices r_l and r_r, each eye being rotated halfway toward the other;
S1131, solving according to the Bouguet algorithm to obtain the left and right correction matrices, which are respectively:
R_l' = R_rect · r_l and R_r' = R_rect · r_r,
wherein R_rect is related only to T;
then, for one 3D point, its imaging points on the left and right imaging planes are:
p_l = K_l · R_l' · P_l and p_r = K_r · R_r' · P_r (in homogeneous coordinates, up to projective scale),
wherein K_l represents the 3 × 3 internal reference matrix of the left camera, K_r represents the 3 × 3 internal reference matrix of the right camera, P_w is the three-dimensional coordinate of a certain point in the world coordinate system, P_l and P_r are respectively the three-dimensional coordinates of that same world point in the left and right camera coordinate systems, and T is the translation matrix of the right-eye camera relative to the left-eye camera;
S1132, constructing the error function as the included angle between the line connecting the two imaging points and the positive X-axis direction of the image coordinate system:
E(R) = Σ_i θ_i²,
wherein θ_i represents the included angle between the line connecting the imaging points of the i-th 3D point of the world coordinate system on the left and right camera imaging planes and the positive X-axis direction of the image coordinate system;
the rotation matrix R is optimized to minimize this error, thereby obtaining the optimal solution for R.
2. The binocular camera online calibration method based on multi-sensor fusion of claim 1, wherein: in step S20, the acquiring real-time detection data of the multiple sensors, and performing a second self-calibration on the posture of the binocular camera under the carrier coordinate system according to the real-time detection data includes:
S200, defining a carrier coordinate system and a ground coordinate system;
S201, obtaining mileage information of the carrier, solving an offset value of the binocular camera relative to the carrier coordinate system according to the mileage information, and calibrating the binocular camera online according to the offset value.
3. The binocular camera online calibration method based on multi-sensor fusion according to claim 2, wherein: in step S201, the solving of the offset value of the binocular camera with respect to the carrier coordinate system according to the mileage information includes:
S2010, generating a V-disparity map from the disparity map, extracting the ground pixels in the V-disparity map to obtain the coordinates of ground points in the left camera coordinate system, and reconstructing the ground;
S2011, solving a ground equation under the carrier coordinate system according to the calibration parameters;
S2012, performing feature matching on the front and rear frame images input by the left camera to obtain second matching feature points of the front and rear frame images;
S2013, performing three-dimensional reconstruction on the second matching feature points of the front and rear frames respectively according to the ground equation to obtain the 3D coordinates of the second matching feature points at different moments;
S2014, obtaining a first motion parameter Hc between the front and rear frame images by solving the over-determined equation P' = R × P + T, wherein the first motion parameter Hc comprises the rotation matrix R and the translation matrix T;
S2015, performing segmented integration on the detection data of the IMU to obtain a second motion parameter Hv of the carrier between the front and rear frame images;
S2016, according to the first motion parameter Hc of the binocular camera and the second motion parameter Hv of the carrier, solving the equation Hv · Hvc = Hvc · Hc to obtain a second attitude parameter of the binocular camera in the carrier coordinate system, wherein Hvc represents the transformation matrix converting the camera coordinate system to the carrier coordinate system;
S2017, determining an offset value of the binocular camera relative to the carrier coordinate system according to the second attitude parameter and the calibration parameters.
4. A binocular camera online calibration system based on multi-sensor fusion, comprising a processing module, a storage module and a communication module, the processing module being connected to the storage module and the communication module respectively; wherein:
the storage module is used for storing executable computer program codes;
the communication module is used for acquiring detection data of the binocular camera and the multi-sensor and transmitting the detection data to the processing module;
the method is characterized in that: the processing module for executing the method according to any one of claims 1-3 by calling the executable computer program code in the storage module.
5. An electronic device, comprising:
a memory storing executable program code;
a processor coupled with the memory;
the method is characterized in that: the processor calls the executable program code stored in the memory to perform the method of any of claims 1-3.
6. A computer storage medium having a computer program stored thereon, characterized in that: the computer program, when executed by a processor, performs the method of any one of claims 1-3.
CN202210057189.2A 2022-01-19 2022-01-19 Binocular camera online calibration method and system based on multi-sensor fusion Active CN114092569B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210057189.2A CN114092569B (en) 2022-01-19 2022-01-19 Binocular camera online calibration method and system based on multi-sensor fusion

Publications (2)

Publication Number Publication Date
CN114092569A (en) 2022-02-25
CN114092569B (en) 2022-08-05

Family

ID=80308830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210057189.2A Active CN114092569B (en) 2022-01-19 2022-01-19 Binocular camera online calibration method and system based on multi-sensor fusion

Country Status (1)

Country Link
CN (1) CN114092569B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106679648A (en) * 2016-12-08 2017-05-17 东南大学 Vision-inertia integrated SLAM (Simultaneous Localization and Mapping) method based on genetic algorithm
CN110189382A (en) * 2019-05-31 2019-08-30 东北大学 A kind of more binocular cameras movement scaling method based on no zone of mutual visibility domain

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107747941B (en) * 2017-09-29 2020-05-15 歌尔股份有限公司 Binocular vision positioning method, device and system
CN109976344B (en) * 2019-03-30 2022-05-27 南京理工大学 Posture correction method for inspection robot
CN110296691B (en) * 2019-06-28 2020-09-22 上海大学 IMU calibration-fused binocular stereo vision measurement method and system
CN112184824A (en) * 2019-07-05 2021-01-05 杭州海康机器人技术有限公司 Camera external parameter calibration method and device
CN111862235B (en) * 2020-07-22 2023-12-29 中国科学院上海微系统与信息技术研究所 Binocular camera self-calibration method and system
CN112700502B (en) * 2020-12-29 2023-08-01 西安电子科技大学 Binocular camera system and binocular camera space calibration method
CN112785702B (en) * 2020-12-31 2023-06-20 华南理工大学 SLAM method based on tight coupling of 2D laser radar and binocular camera
CN112907681A (en) * 2021-02-26 2021-06-04 北京中科慧眼科技有限公司 Combined calibration method and system based on millimeter wave radar and binocular camera
CN113538592B (en) * 2021-06-18 2023-10-27 深圳奥锐达科技有限公司 Calibration method and device for distance measuring device and camera fusion system

Also Published As

Publication number Publication date
CN114092569A (en) 2022-02-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230112

Address after: 201206 room 815, building 2, No. 111 Xiangke Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai

Patentee after: SHANGHAI AWARE INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 300171 1-1-604, No. 2, Xinpu Road, Hedong District, Tianjin

Patentee before: Anville information technology (Tianjin) Co.,Ltd.
