CN113034621A - Combined calibration method, device, equipment, vehicle and storage medium - Google Patents

Combined calibration method, device, equipment, vehicle and storage medium

Info

Publication number: CN113034621A (granted as CN113034621B)
Application number: CN202110562086.7A
Authority: CN (China)
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventor: 刘春
Original and current assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd; priority to CN202110562086.7A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85: Stereo camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a joint calibration method, apparatus, device, vehicle, and storage medium. The method comprises the following steps: acquiring M groups of homonymous point pairs, an initial back-projection error of the M groups of homonymous point pairs, and a homonymous point pair set; performing point pair change processing on the homonymous point pair set based on the M groups of homonymous point pairs to obtain a target homonymous point pair set; obtaining a target back-projection error of the M groups of homonymous point pairs based on the target homonymous point pair set, the coordinates corresponding to each homonymous point pair in the M groups, and the internal parameters and initial external parameters of the camera that captures the two-dimensional image; and determining the target external parameters corresponding to the camera based on the initial back-projection error, the target back-projection error, the coordinates corresponding to each homonymous point pair in the target homonymous point pair set, and the internal parameters and initial external parameters. The scheme makes the target external parameters obtained from the target homonymous point pair set more accurate; applicable scenarios include, but are not limited to, high-precision maps, automatic driving, and vehicle-road coordination.

Description

Combined calibration method, device, equipment, vehicle and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a joint calibration method, apparatus, device, vehicle, and storage medium.
Background
Camera calibration comprises internal parameter calibration and external parameter calibration. Internal parameter calibration technology and tools are mature at present and their precision can be guaranteed, whereas external parameter calibration has no unified method because usage scenarios differ. The commonly used external parameter calibration method calibrates with manually selected reference points (homonymous points); however, because manually selected reference points are not highly accurate and the calibration precision depends on the selected points, the calibration precision is greatly affected by human factors and is not accurate enough.
Disclosure of Invention
In view of this, the embodiments of the present application provide a joint calibration method, apparatus, device, vehicle, and storage medium, which can improve the accuracy of camera external parameter calibration.
In a first aspect, an embodiment of the present application provides a joint calibration method, where the method includes: acquiring M groups of homonymous point pairs, an initial back-projection error of the M groups of homonymous point pairs, and a homonymous point pair set, where the M groups of homonymous point pairs are selected from point cloud data and a two-dimensional image corresponding to the point cloud data, each group of homonymous point pairs in the homonymous point pair set belongs to the M groups of homonymous point pairs, and M is an integer greater than 1; performing point pair change processing on the homonymous point pair set based on the M groups of homonymous point pairs to obtain a target homonymous point pair set; obtaining a target back-projection error of the M groups of homonymous point pairs based on the target homonymous point pair set, the coordinates corresponding to each homonymous point pair in the M groups of homonymous point pairs, and the internal parameters and initial external parameters of the camera that captures the two-dimensional image; and determining the target external parameters corresponding to the camera based on the initial back-projection error and the target back-projection error of the M groups of homonymous point pairs, the coordinates corresponding to each homonymous point pair in the target homonymous point pair set, and the internal parameters and initial external parameters of the camera that captures the two-dimensional image.
In a second aspect, an embodiment of the present application provides a joint calibration apparatus, where the apparatus includes: a data acquisition module, a point pair change module, a target error obtaining module, and a target external parameter obtaining module. The data acquisition module is used for acquiring M groups of homonymous point pairs, an initial back-projection error of the M groups of homonymous point pairs, and a homonymous point pair set, where the M groups of homonymous point pairs are selected from point cloud data and a two-dimensional image corresponding to the point cloud data, each group of homonymous point pairs in the homonymous point pair set belongs to the M groups of homonymous point pairs, and M is an integer greater than 1. The point pair change module is used for performing point pair change processing on the homonymous point pair set based on the M groups of homonymous point pairs to obtain a target homonymous point pair set. The target error obtaining module is configured to obtain a target back-projection error of the M groups of homonymous point pairs based on the target homonymous point pair set, the coordinates corresponding to each homonymous point pair in the M groups of homonymous point pairs, and the internal parameters and initial external parameters of the camera that captures the two-dimensional image. The target external parameter obtaining module is used for determining the target external parameters corresponding to the camera based on the initial back-projection error and the target back-projection error of the M groups of homonymous point pairs, the coordinates corresponding to each homonymous point pair in the target homonymous point pair set, and the internal parameters and initial external parameters of the camera that captures the two-dimensional image.
In one possible implementation, the target error obtaining module includes an optimization processing sub-module and a target back-projection error obtaining sub-module. The optimization processing sub-module is used for optimizing the initial external parameters by using a preset optimization algorithm, based on the coordinates corresponding to each homonymous point pair in the target homonymous point pair set and the internal parameters of the camera that captures the two-dimensional image, to obtain processed initial external parameters. The target back-projection error obtaining sub-module is used for obtaining the target back-projection error corresponding to the processed initial external parameters according to the processed initial external parameters, the coordinates corresponding to each homonymous point pair in the M groups of homonymous point pairs, and the internal parameters.
In one possible implementation, the target external parameter obtaining module includes an update determining sub-module and a target external parameter obtaining sub-module. The update determining sub-module is used for determining whether to update the homonymous point pair set to the target homonymous point pair set according to the target back-projection error and the initial back-projection error. The target external parameter obtaining sub-module is used for, when it is determined to update the homonymous point pair set to the target homonymous point pair set, updating the homonymous point pair set to the target homonymous point pair set, taking the processed initial external parameters as new initial external parameters and the target back-projection error as a new initial back-projection error, and, when the number of times of point pair change processing on the homonymous point pair set is determined to reach a preset number of times, taking the processed initial external parameters as the target external parameters corresponding to the camera.
In one possible embodiment, the update determining sub-module comprises a comparison unit and an update determining unit. The comparison unit is used for comparing the target back-projection error with the initial back-projection error. The update determining unit is configured to determine to update the homonymous point pair set to the target homonymous point pair set when the target back-projection error is not greater than the initial back-projection error; and, when the target back-projection error is greater than the initial back-projection error, to obtain an acceptance probability according to the initial back-projection error and the target back-projection error, and determine whether to update the homonymous point pair set to the target homonymous point pair set according to the acceptance probability.
In a possible implementation manner, the update determining unit is further configured to perform a probability calculation according to the number of times the homonymous point pair set has been processed, the initial back-projection error, the target back-projection error, and the number of homonymous point pairs included in the target homonymous point pair set, so as to obtain the acceptance probability.
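The patent does not disclose the concrete formula for this probability, but a plausible sketch combines exactly the inputs listed above in a simulated-annealing-style acceptance rule. The cooling schedule and the constant `t0` below are assumptions for illustration, not the patent's method:

```python
import math
import random

def accept_target_set(initial_error, target_error, iteration, num_pairs,
                      t0=1.0, rng=random):
    """Decide whether to replace the homonymous point pair set with the
    target set. Improvements are always accepted; a worse target set is
    accepted with a probability that shrinks as the error gap grows and
    as more point pair changes have been performed."""
    if target_error <= initial_error:
        return True
    # Temperature decays with the number of point-pair changes performed;
    # scaling the error gap by the set size keeps the exponent comparable
    # across target sets of different sizes.
    temperature = t0 / (1.0 + iteration)
    probability = math.exp(-(target_error - initial_error) * num_pairs / temperature)
    return rng.random() < probability
```

With this rule, early iterations occasionally keep a slightly worse target set (which helps escape a bad initial selection), while late iterations are effectively greedy.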
In one possible implementation, the target back-projection error obtaining sub-module includes a function construction unit, a calculation formula obtaining unit, and an optimization unit. The function construction unit is used for constructing a coordinate conversion function based on the internal parameters and initial external parameters of the camera that captures the two-dimensional image. The calculation formula obtaining unit is used for substituting the coordinates corresponding to each homonymous point pair in the target homonymous point pair set into the coordinate conversion function to obtain a coordinate conversion calculation formula. The optimization unit is used for performing constrained optimization on the initial external parameters in the coordinate conversion calculation formula using a sequential quadratic programming algorithm to obtain the processed initial external parameters.
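SciPy's `SLSQP` method is a sequential quadratic programming routine and can stand in for the constrained optimization step above. In this sketch the external parameters are reduced to a pure translation t = (tx, ty, tz) with the rotation fixed to the identity, and all names are illustrative; a full version would also parameterise the rotation:

```python
import numpy as np
from scipy.optimize import minimize

def refine_extrinsics(points_2d, points_3d, K, t0, bounds):
    """Constrained refinement of (simplified) extrinsics: minimise the
    mean squared back-projection error over the point pairs, subject to
    box bounds on the translation."""
    K = np.asarray(K, dtype=float)
    points_2d = np.asarray(points_2d, dtype=float)
    points_3d = np.asarray(points_3d, dtype=float)

    def mean_sq_error(t):
        cam = points_3d + t                  # lidar frame -> camera frame (R = I assumed)
        uvw = cam @ K.T                      # apply the intrinsic matrix
        uv = uvw[:, :2] / uvw[:, 2:3]        # perspective division
        return ((uv - points_2d) ** 2).sum(axis=1).mean()

    result = minimize(mean_sq_error, t0, method='SLSQP', bounds=bounds)
    return result.x, result.fun
```

The bounds play the role of the constraint in the constrained optimization; in practice they can encode how far the extrinsics are allowed to drift from the initial mechanical measurement.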
In a possible embodiment, the function construction unit is further configured to construct an internal parameter matrix based on the internal parameters of the camera that captures the two-dimensional image; construct an external parameter matrix based on the initial external parameters of that camera; and construct, based on the internal parameter matrix and the external parameter matrix, a coordinate conversion function between the two-dimensional coordinates and the three-dimensional coordinates in a corresponding point pair.
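The coordinate conversion function built from these two matrices is the standard pinhole projection, s·[u, v, 1]^T = K·[R | t]·[X, Y, Z, 1]^T. A minimal pure-Python sketch, assuming zero skew (variable names are illustrative):

```python
def project_point(point_3d, K, R, t):
    """Convert a 3D point in the lidar/world frame to 2D pixel
    coordinates using the internal parameter matrix K and the external
    parameters (rotation R, translation t)."""
    # External parameter step: p_cam = R * p + t
    p_cam = [sum(R[i][j] * point_3d[j] for j in range(3)) + t[i]
             for i in range(3)]
    # Internal parameter step plus perspective division (zero skew assumed)
    u = (K[0][0] * p_cam[0] + K[0][2] * p_cam[2]) / p_cam[2]
    v = (K[1][1] * p_cam[1] + K[1][2] * p_cam[2]) / p_cam[2]
    return u, v
```

Substituting each homonymous point pair's three-dimensional coordinate into this function and comparing the result against its two-dimensional coordinate yields the per-pair residual that the optimizer works on.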
In a possible implementation manner, the point pair change module is further configured to randomly add a homonymous point pair to the homonymous point pair set or randomly remove a homonymous point pair from the homonymous point pair set based on the M groups of homonymous point pairs to obtain the target homonymous point pair set, where any homonymous point pair added to the homonymous point pair set belongs to the M groups of homonymous point pairs.
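The add-or-remove step can be sketched as follows. The even split between adding and removing is an assumption; the patent only requires that added pairs come from the M groups:

```python
import random

def change_point_pairs(current_set, m_groups, rng=random):
    """One 'point pair change': randomly add a homonymous point pair
    drawn from the M groups (and not already in the set), or randomly
    remove one from the set, yielding a candidate target set."""
    candidate = list(current_set)              # leave the input set untouched
    addable = [p for p in m_groups if p not in candidate]
    if addable and (len(candidate) <= 1 or rng.random() < 0.5):
        candidate.append(rng.choice(addable))            # randomly add a pair
    elif len(candidate) > 1:
        candidate.pop(rng.randrange(len(candidate)))     # randomly remove a pair
    return candidate
```

Keeping the input set untouched matters here: the original set must survive unchanged in case the candidate target set is rejected by the acceptance test.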
In one possible implementation, the data acquisition module further includes a calculation sub-module and an initial back-projection error obtaining sub-module. The calculation sub-module is used for calculating the back-projection error of each group of homonymous point pairs based on the coordinates corresponding to each group of homonymous point pairs in the M groups of homonymous point pairs and the initial external parameters and internal parameters of the camera that captures the two-dimensional image. The initial back-projection error obtaining sub-module is used for calculating the mean of the back-projection errors of the groups of homonymous point pairs in the M groups to obtain the initial back-projection error.
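The per-pair error and its mean over the M groups can be sketched as follows, assuming the pinhole model with zero skew and representing each pair as ((u, v), (X, Y, Z)):

```python
import math

def pair_back_projection_error(pair, K, R, t):
    """Back-projection error of one homonymous point pair: the pixel
    distance between the observed 2D point and the projection of its 3D
    counterpart under intrinsics K and extrinsics (R, t)."""
    (u_obs, v_obs), p3d = pair
    p_cam = [sum(R[i][j] * p3d[j] for j in range(3)) + t[i] for i in range(3)]
    u = (K[0][0] * p_cam[0] + K[0][2] * p_cam[2]) / p_cam[2]
    v = (K[1][1] * p_cam[1] + K[1][2] * p_cam[2]) / p_cam[2]
    return math.hypot(u - u_obs, v - v_obs)

def initial_back_projection_error(m_groups, K, R, t):
    # Mean of the per-pair errors over all M groups, as described above.
    return sum(pair_back_projection_error(p, K, R, t)
               for p in m_groups) / len(m_groups)
```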
In a possible implementation manner, the data acquisition module includes a data obtaining sub-module, a base map obtaining sub-module, and a homonymous point obtaining sub-module. The data obtaining sub-module is used for obtaining point cloud data and a two-dimensional image corresponding to the point cloud data. The base map obtaining sub-module is used for back-projecting the point cloud data onto the two-dimensional image to obtain a base map. The homonymous point obtaining sub-module is used for selecting M groups of homonymous point pairs based on the base map, where each group of homonymous point pairs comprises a two-dimensional point in the two-dimensional image and a three-dimensional point in the point cloud data, and the coordinates corresponding to each group of homonymous point pairs comprise the two-dimensional coordinate of the two-dimensional point in the two-dimensional image and the three-dimensional coordinate of the three-dimensional point in the point cloud data.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory; one or more programs are stored in the memory and configured to be executed by the processor to implement the methods described above.
In a fourth aspect, embodiments of the present application provide a vehicle, including a memory; a camera for acquiring a two-dimensional image; the laser radar is used for collecting point cloud data; one or more processors respectively connected with the camera and the laser radar; one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the methods as described above.
In a fifth aspect, the present application provides a computer-readable storage medium, in which a program code is stored, wherein the program code performs the above-mentioned method when executed by a processor.
In a sixth aspect, embodiments of the present application provide a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device obtains the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method described above.
The embodiments of the present application provide a joint calibration method, apparatus, device, vehicle, and storage medium, in which M groups of homonymous point pairs, an initial back-projection error of the M groups of homonymous point pairs, and a homonymous point pair set are obtained; point pair change processing is performed on the homonymous point pair set based on the M groups of homonymous point pairs to obtain a target homonymous point pair set; a target back-projection error of the M groups of homonymous point pairs is obtained based on the target homonymous point pair set, the coordinates corresponding to each homonymous point pair in the M groups of homonymous point pairs, and the internal parameters and initial external parameters of the camera that captures the two-dimensional image; and the target external parameters corresponding to the camera are determined based on the initial back-projection error and the target back-projection error of the M groups of homonymous point pairs, the coordinates corresponding to each homonymous point pair in the target homonymous point pair set, and the internal parameters and initial external parameters of the camera that captures the two-dimensional image. The homonymous point pair set can be optimized based on the initial back-projection error and the target back-projection error during calibration, so that the homonymous points in the optimized target homonymous point pair set are the better homonymous points among the M groups of homonymous point pairs, and the target external parameters obtained from the target homonymous point pair set are therefore more accurate.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic diagram illustrating a system architecture proposed in an embodiment of the present application;
fig. 2 shows a flowchart of a joint calibration method proposed in the embodiment of the present application;
FIG. 3 shows a schematic flow chart of step S110 in FIG. 2;
FIG. 4 shows another flow diagram of a combined calibration method proposed in the embodiment of the present application;
FIG. 5 shows a flowchart illustrating step S230 in FIG. 4;
FIG. 6 shows a flowchart illustrating step S250 in FIG. 4;
FIG. 7 is a diagram showing the variation of the difference between the initial back-projection error and the target back-projection error obtained before and after the point pair variation processing;
FIG. 8 shows a schematic of M groups of homonymous point pairs and a schematic of homonymous point pairs in a set of homonymous point pairs;
FIG. 9 is a diagram illustrating the variation of the number of point pairs in a set of target homonymous point pairs;
FIG. 10 shows a two-dimensional image captured by a camera;
FIG. 11 shows a schematic diagram of point cloud data acquired by a lidar;
FIG. 12 is a schematic flow chart diagram illustrating a combined calibration method according to an embodiment of the present application;
FIG. 13 shows a base map derived by backprojecting the point cloud data of FIG. 11 onto the two-dimensional image of FIG. 10 based on internal and initial external camera parameters;
FIG. 14 shows a base map derived by backprojecting the point cloud data of FIG. 11 onto the two-dimensional image of FIG. 10 based on camera internal and target external parameters;
fig. 15 shows a block diagram of a combined calibration apparatus according to an embodiment of the present application;
fig. 16 is a block diagram illustrating a structure of a data acquisition module according to an embodiment of the present disclosure;
FIG. 17 is a block diagram of a target error obtaining module according to an embodiment of the present disclosure;
FIG. 18 is a block diagram illustrating a target external reference obtaining module according to an embodiment of the present disclosure;
FIG. 19 shows a block diagram of an electronic device for performing the method of an embodiment of the present application;
fig. 20 shows a schematic structural diagram of a vehicle according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Hereinafter, terms that may be referred to in the embodiments of the present application will be described.
Camera calibration: in image measurement and machine vision applications, in order to determine the correlation between the three-dimensional geometric position of a point on the surface of a spatial object and its corresponding point in an image, a geometric model of camera imaging must be established; the parameters of this geometric model are the camera parameters. Under most conditions these parameters must be obtained through experiment and calculation, and the process of solving them is called camera calibration. The solved parameters comprise the internal and external parameters of the camera, i.e. camera calibration may comprise the process of determining the camera's internal and external parameters. In image measurement and machine vision applications, camera parameter calibration is a very critical link: the accuracy of the calibration result and the stability of the algorithm directly influence the accuracy of the results the camera produces. The embodiments of the present application are mainly aimed at camera external parameter calibration.
Internal parameters are parameters related to the camera's own characteristics, such as its focal length, imaging center, and pixels.
External parameters are the camera's parameters in a world coordinate system or a specified coordinate system, such as its rotation and translation.
Point cloud data refers to a set of vectors in a three-dimensional coordinate system; that is, every point in the point cloud data is a three-dimensional point, and the point cloud contains the position of each point, i.e. its x, y, z coordinates in three-dimensional space, which is mandatory information. In addition, each point may carry one or more of color information, reflection intensity information, and so on. The color information is usually obtained by capturing a color image with a camera and then assigning the color (RGB) of the pixel at the corresponding position to the corresponding point in the point cloud. The reflection intensity information is obtained from the echo intensity collected by the receiving device of the laser scanner; this intensity is related to the surface material, roughness, and incidence angle of the target, as well as the emission energy and laser wavelength of the instrument.
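As a concrete illustration of this structure, a point can be modelled with mandatory x, y, z coordinates and optional colour and intensity attributes. The field names here are illustrative, not taken from the patent:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class CloudPoint:
    # Mandatory position in three-dimensional space
    x: float
    y: float
    z: float
    # Optional attributes: colour assigned from a camera image, and the
    # laser echo (reflection) intensity recorded by the scanner
    rgb: Optional[Tuple[int, int, int]] = None
    intensity: Optional[float] = None

# A point cloud is then simply a collection of such points
cloud = [CloudPoint(1.2, -0.5, 3.8, rgb=(128, 64, 32), intensity=0.7),
         CloudPoint(0.0, 2.1, 5.4)]
```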
A two-dimensional image is a planar photograph captured by a camera that contains no depth information; it has only the four directions left, right, up, and down, with no front-back dimension.
A homonymous point, also called a corresponding image point, is the imaging point of the same target point on different photographs. Because the same object point is photographed from (two) different camera stations during aerial photography, the position and coordinates of homonymous image points must be accurately determined and measured when stereoscopically observing and measuring image point elevations, so as to ensure the quality and precision of stereoscopic observation and measurement. In the embodiments of the present application, given point cloud data obtained by laser scanning of a target object and a two-dimensional image photograph obtained by a camera, the three-dimensional coordinate and the two-dimensional coordinate of the same target point, obtained respectively from the point cloud data and the two-dimensional photograph, are homonymous points, and the two-dimensional coordinate and the three-dimensional coordinate of that target point together form a group of homonymous point pairs.
Cameras are now applied ever more widely, for example in assisted driving and automatic driving controllers, and the accuracy requirements on camera external parameter calibration in these scenarios are very high. Therefore, when calibrating the external parameters of a camera in such a scenario, a combined lidar-camera calibration is usually adopted: a measurable planar target object (such as a signboard) is selected, the target is measured in three dimensions by laser and photographed, and the calibration parameters are obtained by comparing and calculating the three-dimensional and two-dimensional points in homonymous point pairs. Because the calibration precision depends on the manually selected homonymous point pairs, it is greatly affected by human factors and is not accurate enough.
In view of this, the present application provides a joint calibration method, apparatus, device, vehicle, and storage medium that obtain M groups of homonymous point pairs, an initial back-projection error of the M groups of homonymous point pairs, and a homonymous point pair set; perform point pair change processing on the homonymous point pair set based on the M groups of homonymous point pairs to obtain a target homonymous point pair set; obtain a target back-projection error of the M groups of homonymous point pairs based on the target homonymous point pair set, the coordinates corresponding to each homonymous point pair in the M groups of homonymous point pairs, and the internal parameters and initial external parameters of the camera that captures the two-dimensional image; and determine the target external parameters corresponding to the camera based on the initial back-projection error and the target back-projection error of the M groups of homonymous point pairs, the coordinates corresponding to each homonymous point pair in the target homonymous point pair set, and the internal parameters and initial external parameters of the camera that captures the two-dimensional image. The homonymous point pair set can be optimized based on the initial back-projection error and the target back-projection error during calibration, so that the homonymous points in the optimized target homonymous point pair set are the better homonymous points among the M groups of homonymous point pairs, and the target external parameters obtained from the target homonymous point pair set are therefore more accurate.
Specifically, compared with schemes in which the target external parameters are not accurate enough because the manually selected homonymous points are inaccurate, the joint calibration method provided in the embodiments of the present application, in the process of selecting the homonymous point pairs used to determine the external parameters, performs point pair changes on the homonymous point pair set and, according to the back-projection errors obtained before and after each change, deletes inaccurate homonymous points from the set while retaining accurate ones. The homonymous point pairs in the resulting target homonymous point pair set are therefore more accurate, and the target external parameters obtained from that set are in turn more accurate.
Fig. 1 shows a schematic diagram of an exemplary system architecture 10 to which the technical solutions of the embodiments of the present application can be applied.
As shown in fig. 1, the system architecture 10 may be applied to an automobile, a drone, or an airplane, for example.
The system architecture 10 may include a camera 11, a laser radar 12, a network 13, a server 14, and a terminal device 15 (the terminal device 15 may be one or more of a vehicle-mounted terminal, a smart phone, a tablet computer, a portable computer, a desktop computer, and the like). Network 13 is the medium used to provide communication links between camera 11, lidar 12, server 14, and terminal device 15. Network 13 may include various types of connections, such as wired communication links, wireless communication links, and so forth.
It should be understood that the number of cameras 11, lidar 12, networks 13, servers 14, and terminal devices 15 in fig. 1 is merely illustrative. There may be any number of each, as required by the implementation. For example, the server 14 may be a cloud server, and the server 14 in fig. 1 may include at least one of a single server, a server cluster composed of a plurality of servers, a cloud computing platform, and a virtualization center.
Taking the system architecture 10 applied to a vehicle as an example, the camera 11 may be a vehicle-mounted camera, and the camera 11 and the laser radar 12 may be installed at a head position of the vehicle, for example, a front side of a vehicle body is used for acquiring two-dimensional images and three-dimensional point cloud data of a road in front of the head and two sides of the road during driving of the vehicle.
In an embodiment of the present application, the server 14 may obtain a two-dimensional image of a target road scene collected by the camera 11, and may also obtain point cloud data of the target road scene collected by the lidar 12. After obtaining the two-dimensional image and the three-dimensional point cloud, the server 14 may perform joint calibration using the two-dimensional image and the three-dimensional point cloud data; the specific calibration process is as follows:
the server 14 acquires M sets of homonymous point pairs selected from the point cloud data and the two-dimensional image corresponding to the point cloud data, and obtaining an initial backprojection error for the M sets of homonymous point pairs and a set of homonymous point pairs selected from the M sets of homonymous point pairs, and performing point pair change processing on the homonymous point pair set, such as performing point pair change processing on the homonymous point pair set in an increasing, reducing or replacing manner to obtain target homonymous point pairs, and obtaining target back projection errors of the M groups of homonymous point pairs based on the target homonymous point pair set, coordinates corresponding to each homonymous point pair in the M groups of homonymous point pairs, and internal parameters and initial external parameters of a camera for acquiring a two-dimensional image, and determining the target external parameters corresponding to the camera based on the initial back projection error, the target back projection error, the coordinates corresponding to all the homonymous point pairs in the target homonymous point pair set and the internal parameters and the initial external parameters of the camera for acquiring the two-dimensional image. In the calibration process, a better homonymous point pair set can be obtained based on the target back-projection error and the initial back-projection error, homonymous point pairs in the better homonymous point pair set are better homonymous point pairs in M groups of homonymous point pairs, and therefore target external parameters obtained based on the better homonymous point set are more accurate.
It should be noted that the joint calibration method provided in the embodiment of the present application is generally executed by the server 14, and accordingly, the joint calibration apparatus is generally disposed in the server 14. However, in other embodiments of the present application, the terminal device 15 may also have a similar function as the server 14, so as to execute the joint calibration method provided in the embodiments of the present application.
The implementation details of the technical solution of the embodiment of the present application are set forth in detail below:
fig. 2 schematically shows a flowchart of a joint calibration method according to an embodiment of the present application, where the execution subject of the joint calibration method may be a server, such as the server 14 shown in fig. 1, or any terminal device with data processing capability.
Referring to fig. 2, the joint calibration method at least includes steps S110 to S130, which are described in detail as follows:
step S110: acquiring M groups of homonymous point pairs, an initial back-projection error of the M groups of homonymous point pairs, and a homonymous point pair set.
The M groups of homonymous point pairs are selected from point cloud data and two-dimensional images corresponding to the point cloud data, each group of homonymous point pairs in the homonymous point pair set belong to the M groups of homonymous point pairs, and M is an integer greater than 1.
Before obtaining the M groups of homonymous point pairs and the homonymous point pair set, the point cloud data and the two-dimensional image corresponding to the point cloud data may be obtained, and the M groups of homonymous point pairs may be determined from the point cloud data and the corresponding two-dimensional image.
The manner of obtaining the point cloud data and the two-dimensional image corresponding to the point cloud data may be:
if the laser radar for acquiring the point cloud data and the camera for acquiring the two-dimensional image are relatively fixed and are both in a static state (the longitude, the latitude and the elevation do not change) and are used for acquiring data corresponding to the same target, the point cloud data acquired by the laser radar for acquiring the data of the target corresponds to the two-dimensional image acquired by the two-dimensional image for acquiring the data of the target.
If the laser radar used for collecting the point cloud data and the camera used for collecting the two-dimensional image are relatively fixed but move along with a vehicle, an unmanned aerial vehicle or other equipment, the equipment can collect trajectory data while it moves. The trajectory data is composed of a large number of track points, and each track point includes information such as position (longitude, latitude and elevation), attitude (heading angle, pitch angle and roll angle) and the corresponding time. The point cloud data may be determined from the positioning information acquired by the positioning system of the equipment and the radar points acquired by the laser radar; each point in the point cloud data has an absolute position (longitude, latitude and elevation) and a reflection intensity. The two-dimensional images are acquired by the camera mounted on the equipment, and each two-dimensional image is recorded with a precise time during acquisition. Therefore, the point cloud data and the two-dimensional image corresponding to the point cloud data can be determined from the two-dimensional images, the point cloud data and the track points collected by the equipment.
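As an illustrative, stdlib-only sketch (the record layout and function name are assumptions, not taken from the patent), pairing a camera frame or radar scan with its nearest track point by timestamp could look like:

```python
import bisect

def nearest_track_point(track, t):
    """Return the track point whose timestamp is closest to t.

    `track` is a list of (timestamp, position, attitude) tuples sorted by
    timestamp, where position = (longitude, latitude, elevation) and
    attitude = (heading, pitch, roll)."""
    times = [rec[0] for rec in track]
    i = bisect.bisect_left(times, t)
    if i == 0:
        return track[0]
    if i == len(times):
        return track[-1]
    before, after = track[i - 1], track[i]
    # pick whichever neighbouring track point is closer in time
    return before if t - before[0] <= after[0] - t else after
```

Each two-dimensional image (or radar frame) can then be associated with the position and attitude at its capture time, which is what makes the image/point-cloud correspondence above possible.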
There are various ways to obtain the M groups of homonymous point pairs.
As an implementation manner, the obtained point cloud data may be fitted, and the image formed after fitting and the two-dimensional image corresponding to the point cloud data may each be subjected to mesh division. For the M first meshes in the image formed after fitting, each first mesh is matched against the meshes in the two-dimensional image to obtain a second mesh matched with that first mesh. A point is then selected from each first mesh and from the second mesh matched with it to obtain a group of homonymous points, the group including one two-dimensional point and one three-dimensional point; in this way, M groups of homonymous point pairs can be selected.
As another mode, the point cloud data may be back-projected onto the two-dimensional image corresponding to the point cloud data to obtain a base map, and the M groups of homonymous point pairs may be selected from the base map.
The M groups of homonymous point pairs may, for example, be selected from the base map based on a user operation.
Referring to fig. 3, in this manner, the step S110 may specifically include the following steps:
step S112: and acquiring point cloud data and a two-dimensional image corresponding to the point cloud data.
For obtaining the point cloud data and the two-dimensional image corresponding to the point cloud data, reference may be made to the foregoing detailed description, and details are not repeated here.
Step S114: and back projecting the point cloud data to the two-dimensional image to obtain a base map.
The point cloud data may be back-projected onto the two-dimensional image by performing coordinate conversion on each three-dimensional point in the point cloud data based on a coordinate conversion function established from the internal parameters and initial external parameters of the camera, obtaining a two-dimensional point corresponding to each point in the point cloud data, and marking the two-dimensional points obtained through coordinate conversion in the two-dimensional image to obtain the base map.
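A minimal pure-Python sketch of this back-projection step under the standard pinhole model; the function names and the nested-list image representation are illustrative assumptions, and a real pipeline would typically use an optimized library rather than plain lists:

```python
def project_point(K, R, t, X):
    """Coordinate conversion: s * [u, v, 1]^T = K * (R @ X + t).

    K is the 3x3 internal reference matrix, R a 3x3 rotation matrix,
    t a translation 3-vector, X a 3-D point; returns pixel (u, v)."""
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    p = [sum(K[i][j] * Xc[j] for j in range(3)) for i in range(3)]
    return p[0] / p[2], p[1] / p[2]

def make_base_map(image, points3d, K, R, t):
    """Mark every point cloud point that projects inside the image."""
    h, w = len(image), len(image[0])
    base = [row[:] for row in image]          # copy, keep the original intact
    for X in points3d:
        u, v = project_point(K, R, t, X)
        iu, iv = int(round(u)), int(round(v))
        if 0 <= iv < h and 0 <= iu < w:
            base[iv][iu] = 1                  # mark the projected point
    return base
```

The marked base map lets an operator (or a matching algorithm) pick homonymous point pairs where a projected point visibly coincides with an image feature.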
Step S116: and selecting M groups of same-name point pairs based on the base map.
Each group of homonymous point pairs comprises a two-dimensional point in the two-dimensional image and a three-dimensional point in the point cloud data, and the corresponding coordinates of each group of homonymous point pairs comprise a two-dimensional coordinate corresponding to the two-dimensional point in the two-dimensional image and a three-dimensional coordinate corresponding to the three-dimensional point in the point cloud data.
The M groups of homonymous point pairs may be selected based on the base map by matching the pattern formed by the converted two-dimensional points in the base map against the image content of the two-dimensional image.
As another mode, coordinate conversion may be performed on each three-dimensional point in the point cloud data to obtain a converted image, and a plurality of homonymous point pairs may be selected according to the converted image and the two-dimensional image.

The M homonymous point pairs may be selected according to the converted image and the two-dimensional image as follows: the converted image is subjected to mesh division to obtain a plurality of third grids, and the two-dimensional image is subjected to mesh division to obtain a plurality of fourth grids; M third grids are selected from the plurality of third grids, and each selected third grid is matched against the fourth grids to obtain a fourth grid matched with it; one point is then selected from each selected third grid and from the fourth grid matched with it to obtain a group of homonymous points.
There are various ways to obtain the initial back-projection error of M sets of homonymous point pairs.
As an embodiment, the back-projection error of each group of homonymous point pairs in the M groups of homonymous point pairs may be calculated based on the coordinates corresponding to that group and the initial external reference and internal reference of the camera acquiring the two-dimensional image, and the back-projection errors of the M groups of homonymous point pairs may then be averaged to obtain the initial back-projection error.

As another embodiment, the back-projection error of each group of homonymous point pairs in the M groups of homonymous point pairs may be calculated in the same way, and the back-projection errors of the M groups of homonymous point pairs may be accumulated to obtain the initial back-projection error.
In the two manners, the coordinates corresponding to each group of homonymous point pairs include a two-dimensional coordinate corresponding to one two-dimensional point in the two-dimensional image and a three-dimensional coordinate corresponding to one three-dimensional point in the point cloud data.
The back-projection error of each group of homonymous point pairs may be calculated based on the coordinates corresponding to each group in the M groups and the initial external reference and internal reference of the camera as follows: a coordinate conversion function is constructed based on the initial external reference and internal reference of the camera acquiring the two-dimensional image; for each group of homonymous point pairs in the M groups, the three-dimensional coordinate in the coordinates corresponding to the group is substituted into the coordinate conversion function to obtain a first converted coordinate (the first converted coordinate is a two-dimensional coordinate); the distance between the first converted coordinate and the two-dimensional coordinate corresponding to the three-dimensional coordinate is then calculated with a distance formula, giving the back-projection error corresponding to the group.

Alternatively, the back-projection error may be calculated in the opposite direction: a coordinate conversion function is constructed based on the initial external reference and internal reference of the camera; for each group of homonymous point pairs in the M groups, the two-dimensional coordinate in the coordinates corresponding to the group is substituted into the coordinate conversion function to obtain a second converted coordinate (the second converted coordinate is a three-dimensional coordinate); the distance between the second converted coordinate and the three-dimensional coordinate corresponding to the two-dimensional coordinate is then calculated with a distance formula, giving the back-projection error corresponding to the group.
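A hedged sketch of this error computation (pure Python, illustrative names), covering both the averaging and the accumulation variants described above:

```python
import math

def project_point(K, R, t, X):
    """Coordinate conversion built from the internal and external references."""
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    p = [sum(K[i][j] * Xc[j] for j in range(3)) for i in range(3)]
    return p[0] / p[2], p[1] / p[2]

def backprojection_error(pairs, K, R, t, reduce="mean"):
    """Back-projection error of homonymous point pairs ((u, v), (x, y, z)).

    Each 3-D point is converted into a 2-D point and compared with its
    paired 2-D point; the per-pair distances are then averaged or
    accumulated, matching the two embodiments in the text."""
    errors = []
    for (u, v), X in pairs:
        pu, pv = project_point(K, R, t, X)
        errors.append(math.hypot(pu - u, pv - v))
    return sum(errors) / len(errors) if reduce == "mean" else sum(errors)
```

The same function evaluated over all M groups of homonymous point pairs yields the initial back-projection error under the initial external reference.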
The manner of obtaining the set of homonymous point pairs may be to select at least one group of homonymous point pairs from the M groups of homonymous point pairs, where a set formed by the at least one group of homonymous point pairs is a homonymous point pair set.
The method for selecting at least one group of homonymous point pairs from the M groups of homonymous point pairs may be selecting randomly, or sorting each group of homonymous point pairs in the M groups of homonymous point pairs according to a preset rule, and selecting at least one group of homonymous point pairs according to a sorting sequence.
Step S120: and performing point pair change processing on the homonymous point pair set based on the M groups of homonymous point pairs to obtain a target homonymous point pair set.
There are various ways to perform point pair change processing on the same-name point pair set.
As an embodiment, homonymous point pairs may be added to the homonymous point pair set; it should be understood that the point pairs added to the set come from the M groups of homonymous point pairs.

In this way, one or more homonymous point pairs may be added randomly, or one or more homonymous point pairs may be selected from the M groups of homonymous point pairs according to a certain selection rule and added to the set. The selection rule may be determined based on the back-projection errors corresponding to the homonymous point pairs.

As another embodiment, homonymous point pairs may be removed from the homonymous point pair set.

In this way, one or more homonymous point pairs may be removed randomly, or one or more homonymous point pairs may be selected from the set according to a certain selection rule and deleted from it. The selection rule may again be determined based on the back-projection errors corresponding to the homonymous point pairs.

As another embodiment, one or more homonymous point pairs in the homonymous point pair set may be replaced, where the replacement point pairs come from the M groups of homonymous point pairs. The replacement may be random, or may follow a selection rule, for example replacement according to the sorting order of the back-projection errors of the homonymous point pairs under the initial external reference.

As still another embodiment, homonymous point pairs may be randomly added to or removed from the homonymous point pair set, the randomly added point pairs coming from the M groups of homonymous point pairs.

In this way, the number of homonymous point pairs randomly added or removed may be one or more.
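One way to sketch this point pair change processing (stdlib-only and illustrative; the operation set and function name are assumptions) is a random add/remove/replace over indices into the M groups:

```python
import random

def change_point_pairs(current, m, rng=random):
    """Point pair change processing: randomly add, remove or replace one
    homonymous point pair.  `current` holds indices into the M groups
    (0 .. m-1); added pairs always come from the groups not yet in the set."""
    candidate = list(current)
    outside = [i for i in range(m) if i not in candidate]
    op = rng.choice(["add", "remove", "replace"])
    if op == "add" and outside:
        candidate.append(rng.choice(outside))
    elif op == "remove" and len(candidate) > 1:
        candidate.pop(rng.randrange(len(candidate)))
    elif op == "replace" and outside and candidate:
        candidate[rng.randrange(len(candidate))] = rng.choice(outside)
    return candidate
```

Because removals keep at least one pair and additions/replacements draw only from groups outside the set, the candidate set always remains a valid, duplicate-free subset of the M groups.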
Step S130: and obtaining target back projection errors of the M groups of homonymous point pairs based on the target homonymous point pair set, the coordinates corresponding to all homonymous point pairs in the M groups of homonymous point pairs, and the internal reference and the initial external reference of the camera for acquiring the two-dimensional image.
The initial external reference may be obtained from mechanical parameters corresponding to the installation position of the camera when the camera is installed, or may be obtained based on coordinates corresponding to each group of homonymous point pairs in the M groups of homonymous point pairs and the internal reference of the camera.
The coordinates corresponding to each homonymous point pair include a two-dimensional coordinate and a three-dimensional coordinate, where the two-dimensional coordinate refers to the coordinate of the two-dimensional point of the homonymous point pair in the two-dimensional image, and the three-dimensional coordinate refers to the coordinate of the three-dimensional point of the homonymous point pair in the point cloud data.
In one manner, in step S130, the initial external reference is optimized by using the target set of homonymous point pairs and the internal reference of the camera that acquires the two-dimensional image, so as to obtain the optimized initial external reference, and the back-projection errors of the M groups of homonymous point pairs are obtained based on the coordinates corresponding to each homonymous point pair in the M groups of homonymous point pairs, the optimized initial external reference, and the internal reference of the camera.
In this embodiment, the initial external parameters may be optimized by using an optimization algorithm, and the optimization algorithm may be at least one of a sequential quadratic programming algorithm, a gradient descent method, or a Lagrange multiplier method.
Step S140: and determining the target external parameters corresponding to the camera based on the initial back projection error and the target back projection error of the M groups of homonymous point pairs, the coordinates corresponding to each homonymous point pair in the target homonymous point pair set, and the internal parameters and the initial external parameters of the camera for acquiring the two-dimensional image.
As an embodiment, in step S140, it may be determined whether to update the set of homonymous point pairs to the target set of homonymous point pairs according to the initial back-projection error and the target back-projection error of the M groups of homonymous point pairs, and when determining to update the set of homonymous point pairs, the initial external parameters may be optimized by using an optimization algorithm based on coordinates corresponding to each homonymous point pair in the target set of homonymous point pairs and the internal parameters of the camera that collects the two-dimensional image, so as to determine the target external parameters corresponding to the camera.
In this manner, if it is determined that the homonymous point pair set is not to be updated to the target homonymous point pair set, the process may return to step S120, so that the target homonymous point pair set ultimately obtained is better than the previous homonymous point pair set and the target external parameters determined based on the target homonymous point pair set are more accurate.
By adopting the joint calibration method provided by the application, point pair change processing can be performed on the homonymous point pair set to obtain a target homonymous point pair set; the target back-projection errors of the M groups of homonymous point pairs are acquired based on the target homonymous point pair set and the internal reference and initial external reference of the camera acquiring the two-dimensional image; and the target external parameters corresponding to the camera are determined based on the initial back-projection error and the target back-projection error. Further, in the calibration process, the homonymous point pair set is optimized based on the initial back-projection error and the target back-projection error, that is, the homonymous point pairs in the homonymous point pair set become the optimal ones among the M groups of homonymous point pairs, so that the target external parameters obtained from the optimized homonymous point pair set are the optimal external parameters.
Referring to fig. 4, another embodiment of the present application provides a joint calibration method, which includes the following steps:
step S210: acquiring M groups of homonymous point pairs, an initial back-projection error of the M groups of homonymous point pairs, and a homonymous point pair set.
The M groups of homonymous point pairs are selected from point cloud data and two-dimensional images corresponding to the point cloud data, each group of homonymous point pairs in the homonymous point pair set belong to the M groups of homonymous point pairs, and M is an integer greater than 1.
Step S220: and performing point pair change processing on the homonymous point pair set based on the M groups of homonymous point pairs to obtain a target homonymous point pair set.
Step S230: and optimizing the initial external parameters by using a preset optimization algorithm based on the coordinates corresponding to all homonymous point pairs in the target homonymous point pair set and the internal parameters of the camera for acquiring the two-dimensional image to obtain the processed initial external parameters.
The preset optimization algorithm may be one or more of a sequential quadratic programming algorithm, a gradient descent method, a Newton method, a firefly algorithm, an ant colony algorithm, and the like.
Referring to fig. 5, as another embodiment, step S230 includes:
step S232: and constructing a coordinate conversion function based on the internal reference and the initial external reference of the camera for acquiring the two-dimensional image.
As an implementation manner, in step S232, an internal reference matrix may be specifically constructed based on internal references of a camera acquiring the two-dimensional image, an external reference matrix may be constructed based on initial external references of the camera acquiring the two-dimensional image, and a coordinate conversion function between the two-dimensional coordinates and the three-dimensional coordinates in the pair of corresponding points may be constructed based on the internal reference matrix and the external reference matrix.
Where K is the internal reference matrix:

    K = [ fx   0   cx ]
        [  0  fy   cy ]
        [  0   0    1 ]

R is a 3×3 rotation matrix, t = (tx, ty, tz)^T is a translation vector, and R and t together form the external reference matrix [R | t]. The coordinate conversion function satisfies

    s · [u, v, 1]^T = K · [R | t] · [X, Y, Z, 1]^T

where s is the two-dimensional image scale, u is the abscissa and v the ordinate of the two-dimensional point in the homonymous point pair, fx is the lateral focal length of the camera, fy is the longitudinal focal length of the camera, cx is the abscissa and cy the ordinate of the image center point, tx, ty and tz are the x-, y- and z-axis components of the translation vector, and X, Y and Z are the x-, y- and z-axis coordinates of the three-dimensional point in the homonymous point pair.
Step S234: and respectively substituting the coordinates corresponding to all the homonymous point pairs in the target homonymous point pair set into a coordinate conversion function to obtain a coordinate conversion calculation formula.
It should be understood that the corresponding coordinates of each set of homonymous point pairs include two-dimensional coordinates and three-dimensional coordinates corresponding to the two-dimensional points and three-dimensional points, respectively, in the set of homonymous point pairs.
Step S236: and performing constrained optimization on the initial external parameters in the coordinate conversion calculation formula by using a sequential quadratic programming algorithm to obtain the processed initial external parameters.
The constrained optimization may mean that the initial external parameters correspond to preset value ranges, that is, the values of the initial external parameters during the optimization process and after optimization all lie within the preset value ranges.
After the two-dimensional points and the three-dimensional points corresponding to each group of homonymous point pairs in the target homonymous point pair set are substituted into the coordinate conversion function, u, v, X, Y and Z are known quantities, and the internal parameters are also known; the rotation matrix R and the translation vector t are then subjected to constrained optimization by using a sequential quadratic programming algorithm, so that the optimized initial external parameters are calibrated more accurately on the target homonymous point pair set.
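The patent specifies a sequential quadratic programming solver for this step. As a hedged, stdlib-only stand-in (a full SQP implementation is beyond a short sketch; in practice one would use a dedicated SQP solver), the following performs a constrained greedy coordinate descent on the translation component only, with `bounds` playing the role of the preset value ranges. All names are illustrative:

```python
import math

def project_point(K, R, t, X):
    """Pinhole coordinate conversion: s * [u, v, 1]^T = K * (R @ X + t)."""
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    p = [sum(K[i][j] * Xc[j] for j in range(3)) for i in range(3)]
    return p[0] / p[2], p[1] / p[2]

def refine_translation(pairs, K, R, t0, bounds, step=0.1, iters=200):
    """Constrained minimization of the total back-projection error over the
    translation vector, by coordinate descent with step halving.
    bounds[i] = (lo, hi) is the preset value range for component i."""
    def cost(t):
        total = 0.0
        for (u, v), X in pairs:
            pu, pv = project_point(K, R, t, X)
            total += math.hypot(pu - u, pv - v)
        return total

    t = list(t0)
    best = cost(t)
    for _ in range(iters):
        improved = False
        for i in range(3):
            for d in (step, -step):
                cand = list(t)
                # clamp each trial move into its preset value range
                cand[i] = min(max(cand[i] + d, bounds[i][0]), bounds[i][1])
                c = cost(cand)
                if c < best:
                    t, best, improved = cand, c, True
        if not improved:
            step *= 0.5           # refine the search resolution
            if step < 1e-7:
                break
    return t, best
```

The design point carried over from the patent is only the objective (total back-projection error on the target homonymous point pair set) and the box constraints; the descent strategy itself is a simplification.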
Step S240: and obtaining a target back-projection error corresponding to the processed initial external reference according to the processed initial external reference, the coordinates corresponding to each homonymous point pair in the M groups of homonymous point pairs and the internal reference.
In step S240, a coordinate conversion function is constructed based on the processed initial external reference and the internal reference; the coordinates corresponding to each homonymous point pair in the M groups of homonymous point pairs are substituted into this coordinate conversion function to obtain the back-projection error corresponding to each homonymous point pair, and the back-projection errors corresponding to all homonymous point pairs are accumulated or averaged to obtain the target back-projection error corresponding to the processed initial external reference.
Step S250: and determining whether to update the homonymous point pair set into the target homonymous point pair set according to the target back casting error and the initial back casting error.
There are various ways to determine whether to update the homonymous point pair set to the target homonymous point pair set.
As an embodiment, the homonymous point pair set may be updated to the target homonymous point pair set when the difference obtained by subtracting the target back-projection error from the initial back-projection error is greater than a preset threshold; when the difference is smaller than or equal to the preset threshold, the set is not updated.
In this manner, the preset threshold may be a constant such as 0 or 0.01.
As another embodiment, it may also be determined whether to update the homonymous point pair set to the target homonymous point pair set based on both the difference between the target back-projection error and the initial back-projection error and the number of updates.
Referring to fig. 6, in this manner, the step S250 includes the following steps:
step S252: the target back-projection error is compared to the initial back-projection error.
Step S254: and if the target back casting error is smaller than the initial back casting error, determining to update the homonymous point pair set into the target homonymous point pair set.
Step S256: and if the target back-projection error is larger than the initial back-projection error, obtaining an acceptance probability according to the initial back-projection error and the target back-projection error, and determining whether to update the homonymous point pair set to the target homonymous point pair set according to the acceptance probability.
There are various ways to obtain the acceptance probability according to the initial back-projection error and the target back-projection error.
As an embodiment, an acceptance probability may be obtained according to the difference between the initial back-projection error and the target back-projection error and a preset corresponding relationship between differences and acceptance probabilities, the preset corresponding relationship storing a plurality of difference ranges and an acceptance probability corresponding to each difference range.
As another embodiment, an acceptance probability may be obtained by performing an annealing algorithm according to the number of times of processing the set of homonymous point pairs, the initial back-projection error, the target back-projection error, and the number of homonymous point pairs included in the set of target homonymous point pairs.
In this embodiment, step S256 specifically includes: performing probability calculation according to the number of times the homonymous point pair set has been processed, the initial back-projection error, the target back-projection error, and the number of homonymous point pairs in the target homonymous point pair set, to obtain the acceptance probability.
When the probability calculation is performed to obtain the acceptance probability, the acceptance probability may specifically be calculated using a preset probability calculation formula according to the number of times of processing the homonymous point pair set, the initial back-projection error, the target back-projection error, and the number of homonymous point pairs included in the target homonymous point pair set, where the preset probability calculation formula is

    p = exp((E0 − E1) / T),    T = c · α^n · N

where E0 is the initial back-projection error, E1 is the target back-projection error, T is the annealing temperature, c is a preset constant, α is a constant between 0 and 1, n is the number of times the point pair change processing has been performed on the homonymous point pair set, and N is the number of homonymous point pairs included in the target homonymous point pair set.
Here T denotes the annealing temperature. Since α is a constant between 0 and 1, n is the number of times the point pair change processing has been performed on the homonymous point pair set, and N is the number of homonymous point pairs included in the target homonymous point pair set, T gradually decreases as n increases. Therefore, at the start of the iteration an increase of the back-projection error can be accepted with a certain probability, which ensures that the optimization does not fall into a local optimum. With continuous iteration, the annealing temperature gradually decreases, the probability of accepting an increased back-projection error gradually decreases, and the homonymous point pairs included in the homonymous point pair set gradually converge to the global optimum (that is, the homonymous point pairs included in the target homonymous point pair set gradually become the optimal homonymous point pairs in the M groups of homonymous point pairs). Accordingly, after the number of point pair change processing operations on the homonymous point pair set reaches the preset number and the target homonymous point pair set is determined, the homonymous point pairs included in the target homonymous point pair set can be regarded as the optimal homonymous point pairs in the M groups of homonymous point pairs.
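A hedged sketch of this acceptance step. Note that the exact form of the annealing temperature, T = c · α^n · N, is an assumption consistent with the surrounding description, and the names are illustrative:

```python
import math
import random

def acceptance_probability(e0, e1, n, N, c=1.0, alpha=0.95):
    """Metropolis-style acceptance probability.

    e0: initial back-projection error, e1: target back-projection error,
    n: number of point pair change operations already performed,
    N: number of pairs in the target homonymous point pair set,
    c: preset constant, alpha: constant in (0, 1)."""
    if e1 <= e0:
        return 1.0                 # an improvement is always accepted
    T = c * (alpha ** n) * N       # annealing temperature, decreases with n
    return math.exp((e0 - e1) / T)

def accept(e0, e1, n, N, rng=random, **kw):
    """Randomized acceptance decision based on the probability above."""
    return rng.random() < acceptance_probability(e0, e1, n, N, **kw)
```

Early on (small n, large T) a worse target back-projection error is accepted with non-negligible probability; as n grows, T shrinks and the decision becomes effectively greedy, matching the convergence behaviour described above.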
It should be understood that the preset number of times may be, but is not limited to, 5, 10, 20 or 50, and may also be set according to M, the number of point pairs changed in each point pair change processing, and the number of homonymous point pairs in the homonymous point pair set acquired for the first time.
As an embodiment, if the number of point pairs subjected to the point pair change processing each time is one and the homonymous point pair set obtained for the first time contains M/2 homonymous point pairs, the preset number of times may be M/2.
If the update is determined, step S260 is executed: the homonymous point pair set is updated to the target homonymous point pair set, the processed initial external parameters are taken as the new initial external parameters, and the target back-projection error is taken as the new initial back-projection error; the process then returns to step S220 until the number of times of point pair change processing on the homonymous point pair set reaches the preset number, at which point the processed initial external parameters are taken as the target external parameters corresponding to the camera.
If the point pair set is determined not to be updated, the step S220 is executed again, until the number of times of the point pair change processing on the same-name point pair set reaches the preset number, and the processed initial external parameter is used as the target external parameter corresponding to the camera.
Referring to fig. 7, fig. 7 is a schematic diagram illustrating the variation of the difference between the initial back-projection error and the target back-projection error obtained before and after each point pair change processing. In the process of executing steps S210 to S260 and then looping over steps S220 to S260 the preset number of times, as the point pair change processing is repeatedly performed on the homonymous point pair set, the corresponding initial external parameters before and after each change gradually become consistent; correspondingly, the difference between the initial back-projection error and the target back-projection error fluctuates greatly at first and then gradually decreases and converges.
Similarly, as shown in fig. 8 and fig. 9, the left diagram (a) in fig. 8 is a schematic diagram of the M groups of homonymous point pairs d, and the right diagram (b) in fig. 8 is a schematic diagram of the homonymous point pairs e included in the target homonymous point pair set. Fig. 9 shows how the number of point pairs in the target homonymous point pair set varies. Because a large proportion of the M groups of homonymous point pairs are already optimal homonymous point pairs, randomly adding homonymous point pairs to, or removing them from, the homonymous point pair set gradually removes the non-optimal homonymous point pairs from the set and gradually adds to it the optimal homonymous point pairs among the M groups that do not yet belong to it. As the point pair change processing is performed repeatedly, the corresponding initial external parameters before and after each change gradually become consistent, the optimal homonymous point pairs among the M groups are gradually gathered into the set to form the target homonymous point pair set, and the number of homonymous point pairs in the target homonymous point pair set therefore tends to stabilize as the point pair change processing proceeds.
According to the joint calibration method provided by the embodiment of the application, point pair change processing is performed on the homonymous point pair set to obtain a target homonymous point pair set; the initial external parameters are optimized with a preset optimization algorithm to obtain the processed initial external parameters; and a target back-projection error corresponding to the processed initial external parameters is obtained from the processed initial external parameters. Whether to update the homonymous point pair set into the target homonymous point pair set is then determined according to the target back-projection error and the initial back-projection error. If the update is determined, the homonymous point pair set is updated into the target homonymous point pair set, the processed initial external parameter is taken as the new initial external parameter and the target back-projection error as the new initial back-projection error, and the step of performing point pair change processing on the homonymous point pair set to obtain a target homonymous point pair set is executed again, until the number of times of point pair change processing on the homonymous point pair set reaches the preset number, at which point the processed initial external parameter is taken as the target external parameter corresponding to the camera.
The method can realize that the homonymous point pair set is updated based on the initial back-projection error and the target back-projection error which are obtained before and after the homonymous point pair set is subjected to the point pair change processing each time in the calibration process, so that the target homonymous point pair set is gradually optimized, the homonymous point pairs contained in the finally obtained target homonymous point pair set are the optimal homonymous point pairs in M groups of homonymous point pairs, and further the target external reference obtained based on the target homonymous point pair set is more accurate.
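For illustration only (this is not part of the claimed embodiment), the iterative procedure of steps S220 to S260 follows the standard simulated-annealing pattern. The sketch below is a minimal, non-authoritative Python rendering of that loop; the error function, the initial subset size, the cooling constants `t0` and `alpha`, and the one-pair-per-step neighbour move are illustrative assumptions rather than the embodiment's exact choices:

```python
import math
import random

def simulated_annealing(pairs, error_of, iterations=200, t0=1.0, alpha=0.95):
    """Anneal a subset of the M candidate point pairs toward the subset with
    minimal back-projection error.

    pairs    -- the M groups of candidate homonymous point pairs
    error_of -- callable mapping a subset of pairs to its back-projection error
    """
    current = list(pairs[: len(pairs) // 2])     # homonymous point pair set acquired first
    current_err = error_of(current)
    for k in range(iterations):
        # Point pair change processing: randomly add or remove one pair.
        candidate = list(current)
        outside = [p for p in pairs if p not in candidate]
        if outside and (len(candidate) <= 1 or random.random() < 0.5):
            candidate.append(random.choice(outside))
        else:
            candidate.remove(random.choice(candidate))
        cand_err = error_of(candidate)
        temp = t0 * alpha ** k                   # annealing temperature, decreasing with k
        # Always accept an improvement; accept a worse set with probability
        # exp(-(increase in error) / temperature).
        if cand_err <= current_err or random.random() < math.exp(-(cand_err - current_err) / temp):
            current, current_err = candidate, cand_err
    return current
```

With a decaying temperature, worsening moves are accepted often at first (escaping local optima) and almost never at the end, which matches the convergence behaviour described above.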
The embodiment of the application further provides a joint calibration method, which is used for calibrating the external parameters of a camera arranged on a vehicle. The vehicle comprises a vehicle body, the camera and a laser radar, wherein the camera and the laser radar are arranged on the front side of the vehicle body and are used for acquiring two-dimensional images and point cloud data in the vehicle's direction of travel. Fig. 10 shows a two-dimensional image acquired by the camera, for example the two-dimensional image of the signboard indicated by g in the figure, and fig. 11 shows the point cloud data, acquired by the laser radar, corresponding to the two-dimensional image, for example the point cloud data of the signboard indicated by h in the figure. Please refer to fig. 12, which illustrates the joint calibration method according to this embodiment, including the following steps:
step S301: and acquiring point cloud data and a two-dimensional image corresponding to the point cloud data.
Step S302: and back projecting the point cloud data to the two-dimensional image to obtain a base map.
Fig. 13 is a base map obtained by back-projecting the point cloud data in fig. 11 onto the two-dimensional image in fig. 10. Taking the signboard in fig. 13 as an example, the image indicated by j1 is obtained by back-projecting the point cloud data corresponding to the signboard onto the two-dimensional image. The back projection can specifically be performed through a coordinate conversion function established according to the internal parameters and the initial external parameters of the camera.
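As a non-authoritative sketch of this back-projection step (the intrinsic values and point coordinates below are illustrative assumptions, not values from the embodiment), each three-dimensional point can be mapped into the image plane with a coordinate conversion function built from the camera's internal parameters and initial external parameters:

```python
def project_point(K, R, T, p3d):
    """Back-project one 3-D point into the image plane: s*(u, v, 1)^T = K*(R*p + T)."""
    # Transform into the camera frame: R * p3d + T
    cam = [sum(R[i][j] * p3d[j] for j in range(3)) + T[i] for i in range(3)]
    # Apply the intrinsic matrix K, then divide by the scale factor s (the depth)
    pix = [sum(K[i][j] * cam[j] for j in range(3)) for i in range(3)]
    s = pix[2]
    return pix[0] / s, pix[1] / s  # pixel coordinates (u, v)

# Illustrative intrinsics and an identity initial extrinsic (assumed values):
K = [[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
T = [0.0, 0.0, 0.0]
u, v = project_point(K, R, T, [0.1, -0.05, 2.0])  # -> (360.0, 220.0)
```

Projecting every point of the cloud this way and drawing the resulting (u, v) points over the two-dimensional image yields the base map.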
Step S303: and selecting M groups of homonymous point pairs based on the base map, and determining a homonymous point pair set based on the M groups of homonymous point pairs.
Each group of homonymous point pairs comprises a two-dimensional point in the two-dimensional image and a three-dimensional point in the point cloud data, the corresponding coordinates of each group of homonymous point pairs comprise a two-dimensional coordinate corresponding to the two-dimensional point in the two-dimensional image and a three-dimensional coordinate corresponding to the three-dimensional point in the point cloud data, and homonymous point pairs in the homonymous point pair set belong to M groups of homonymous point pairs.
Step S304: and calculating the back projection error of each group of homonymous point pairs in the homonymous point pair set based on the coordinates corresponding to each group of homonymous point pairs in the M groups of homonymous point pairs and the initial external reference and the internal reference of the camera for acquiring the two-dimensional image.
Step S305: performing mean calculation on the back-projection errors of each group of homonymous point pairs in the M groups of homonymous point pairs to obtain the initial back-projection error.
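For illustration only, steps S304 and S305 can be sketched as follows, assuming a zero-skew pinhole model; the helper names and the sample values in the usage are assumptions, not the embodiment's notation:

```python
import math

def backprojection_error(K, R, T, pair):
    """Pixel distance between the observed 2-D point of a homonymous point pair
    and the back-projection of its 3-D point."""
    (u_obs, v_obs), (x, y, z) = pair
    # Camera-frame coordinates of the 3-D point: R * (x, y, z)^T + T
    xc = R[0][0] * x + R[0][1] * y + R[0][2] * z + T[0]
    yc = R[1][0] * x + R[1][1] * y + R[1][2] * z + T[1]
    zc = R[2][0] * x + R[2][1] * y + R[2][2] * z + T[2]
    # Zero-skew pinhole projection: u = fx*xc/zc + u0, v = fy*yc/zc + v0
    u = (K[0][0] * xc + K[0][2] * zc) / zc
    v = (K[1][1] * yc + K[1][2] * zc) / zc
    return math.hypot(u - u_obs, v - v_obs)

def mean_backprojection_error(K, R, T, pairs):
    """Step S305: the mean of the per-pair back-projection errors."""
    return sum(backprojection_error(K, R, T, p) for p in pairs) / len(pairs)
```

With the initial external parameters (R, T), the returned mean is the initial back-projection error used by the later acceptance test.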
Step S306: and based on the M groups of homonymous point pairs, randomly adding homonymous point pairs to the homonymous point pair set or randomly reducing homonymous point pairs from the homonymous point pair set to obtain a target homonymous point pair set.
The homonymous point pairs added to the homonymous point pair set belong to the M groups of homonymous point pairs.
Step S307: and optimizing the initial external parameters by using a preset optimization algorithm based on the coordinates corresponding to all homonymous point pairs in the target homonymous point pair set and the internal parameters of the camera for acquiring the two-dimensional image to obtain the processed initial external parameters.
In step S307, a coordinate transformation function may be specifically constructed based on the internal reference and the initial external reference of the camera that acquires the two-dimensional image; respectively substituting coordinates corresponding to all the homonymous point pairs in the target homonymous point pair set into a coordinate conversion function to obtain a coordinate conversion calculation formula; and performing constrained optimization on the initial external parameters in the coordinate conversion calculation formula by using a sequential quadratic programming algorithm to obtain the processed initial external parameters.
Wherein the coordinate conversion function is

s · m2d = K · (R · M3d + T)

wherein K is the internal reference matrix,

K = [ fx  0   u0 ]
    [ 0   fy  v0 ]
    [ 0   0   1  ]

R is the rotation matrix, T = (tx, ty, tz)^T is the translation vector, m2d = (u, v, 1)^T is the two-dimensional point in homogeneous coordinates, and M3d = (X, Y, Z)^T is the corresponding three-dimensional point.

Written out in full, the coordinate conversion function satisfies

    [ u ]   [ fx  0   u0 ] (     [ X ]   [ tx ] )
s · [ v ] = [ 0   fy  v0 ] ( R · [ Y ] + [ ty ] )
    [ 1 ]   [ 0   0   1  ] (     [ Z ]   [ tz ] )

wherein s is the two-dimensional image scale factor, u is the abscissa and v the ordinate of the two-dimensional point in the homonymous point pair, fx is the lateral focal length of the camera, fy is the longitudinal focal length of the camera, u0 is the abscissa and v0 the ordinate of the image center point, tx, ty and tz are the x-axis, y-axis and z-axis components of the translation vector, and X, Y and Z are the x-axis, y-axis and z-axis coordinates of the three-dimensional point in the homonymous point pair.
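The embodiment performs the constrained optimization of step S307 with a sequential quadratic programming algorithm (in practice an off-the-shelf SQP solver would be used). Purely for illustration, and explicitly not as the embodiment's method, the dependency-free sketch below replaces SQP with a crude coordinate descent over the translation components only, to show the shape of the refinement step; the function name, the step size, and the `error_of` callable are assumptions:

```python
def refine_translation(K, R, T, pairs, error_of, step=0.01, rounds=100):
    """Nudge each translation component while the error keeps decreasing.

    error_of(K, R, T, pairs) must return the mean back-projection error for
    extrinsics (R, T) over the given point pairs.
    """
    T = list(T)
    best = error_of(K, R, T, pairs)
    for _ in range(rounds):
        improved = False
        for axis in range(3):
            for delta in (step, -step):
                trial = list(T)
                trial[axis] += delta
                err = error_of(K, R, trial, pairs)
                if err < best:            # keep the nudge only if the error drops
                    T, best, improved = trial, err, True
        if not improved:
            break                          # no axis improves any further
    return T, best
```

A real SQP solver would additionally optimize the rotation and respect explicit constraints; this stand-in only illustrates that the processed initial external parameters are whatever extrinsics minimize the back-projection error over the target homonymous point pair set.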
Step S308: and obtaining a target back-projection error corresponding to the processed initial external reference according to the processed initial external reference, the coordinates corresponding to each homonymous point pair in the M groups of homonymous point pairs and the internal reference.
In step S308, a back-projection error corresponding to the processed initial external parameter may be calculated based on the processed initial external parameter, the coordinate conversion function, and the coordinates and the internal parameters corresponding to each of the M groups of homonymous point pairs.
Step S309: determining whether to update the homonymous point pair set into the target homonymous point pair set according to the target back-projection error and the initial back-projection error.
Wherein, step S309 may specifically be: comparing the target back-projection error with the initial back-projection error; if the target back-projection error is smaller than the initial back-projection error, determining to update the homonymous point pair set into the target homonymous point pair set; if the target back-projection error is larger than the initial back-projection error, calculating an acceptance probability using a preset probability calculation formula according to the number of times the homonymous point pair set has been subjected to point pair change processing, the initial back-projection error, the target back-projection error, and the number of homonymous point pairs in the target homonymous point pair set. The preset probability calculation formula is

P = exp( -n · (E1 - E0) / T ),  where T = T0 · α^k

wherein E0 is the initial back-projection error, E1 is the target back-projection error, T is the annealing temperature, T0 is a preset constant, α is a constant between 0 and 1, k is the number of times the point pair change processing has been performed on the homonymous point pair set, and n is the number of homonymous point pairs included in the target homonymous point pair set. After the acceptance probability is obtained, whether to update the homonymous point pair set into the target homonymous point pair set is determined according to the acceptance probability.
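The acceptance test of step S309 is a Metropolis-style rule. The sketch below is illustrative only: `t0` and `alpha` stand in for the preset constant and the constant between 0 and 1 named in the text, and the exact grouping of those constants in the patent's formula is an assumption:

```python
import math
import random

def acceptance_probability(e_init, e_target, n_pairs, k, t0=1.0, alpha=0.9):
    """Metropolis-style acceptance for the point pair change step.

    The annealing temperature t0 * alpha**k decays with the number k of
    point pair change operations already performed, so worse candidate sets
    are accepted less and less often as the iteration proceeds.
    """
    if e_target <= e_init:
        return 1.0                      # an improvement is always accepted
    temp = t0 * alpha ** k
    return math.exp(-n_pairs * (e_target - e_init) / temp)

def should_update(e_init, e_target, n_pairs, k):
    """Draw once against the acceptance probability."""
    return random.random() < acceptance_probability(e_init, e_target, n_pairs, k)
```

Because `random.random()` returns a value in [0, 1), a probability of 1.0 always updates, matching the "smaller error is always accepted" branch above.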
If the update is determined, step S310 is executed: updating the homonymous point pair set into the target homonymous point pair set, taking the processed initial external parameter as the new initial external parameter and the target back-projection error as the new initial back-projection error, and returning to step S306 until the number of times the homonymous point pair set has been subjected to point pair change processing reaches the preset number, at which point the processed initial external parameter is taken as the target external parameter corresponding to the camera.
If it is determined not to update, the process returns to step S306 until the number of times the homonymous point pair set has been subjected to point pair change processing reaches the preset number, at which point the processed initial external parameter is taken as the target external parameter corresponding to the camera.
As shown in fig. 14, j2 in the figure is another base map obtained by back-projecting the point cloud data corresponding to the signboard image shown in fig. 11 onto the two-dimensional image based on the camera internal parameters and the target external parameters. The specific back-projection process is to establish a coordinate conversion function based on the internal parameters and the external parameters of the camera, substitute each three-dimensional coordinate in the point cloud data into the coordinate conversion function to obtain a two-dimensional coordinate, and display each converted two-dimensional coordinate in the two-dimensional image to obtain the base map shown in fig. 14. As indicated by j2 and g in fig. 14, the points obtained by back-projecting the point cloud data corresponding to the signboard fall almost entirely on the two-dimensional image of the signboard.
Analyzing and comparing the position relationship between j1 and g in fig. 13 and the position relationship between j2 and g in fig. 14, by using the joint calibration method of the present application, it can be realized that, in the calibration process, the homonymous point pair set is updated based on the initial back-projection error and the target back-projection error obtained before and after the point pair change processing is performed on the homonymous point pair set each time, so as to gradually optimize the target homonymous point pair set, so that the homonymous point pair included in the finally obtained target homonymous point pair set is the optimal homonymous point pair among M groups of homonymous point pairs, and further, the target external parameter obtained based on the target homonymous point pair set is more accurate.
Referring to fig. 15, the present application provides a joint calibration apparatus 400, which includes a data obtaining module 410, a point-to-point variation module 420, a target error obtaining module 430, and a target external reference obtaining module 440.
The data obtaining module 410 is configured to obtain M groups of homonymous point pairs, an initial back-projection error of the M groups of homonymous point pairs, and a homonymous point pair set.
The M groups of homonymous point pairs are selected from point cloud data and two-dimensional images corresponding to the point cloud data, each group of homonymous point pairs in the homonymous point pair set belong to the M groups of homonymous point pairs, and M is an integer greater than 1.
Referring to fig. 16, as an embodiment, the data obtaining module 410 includes a data obtaining sub-module 411, a base map obtaining sub-module 412, and a same name point obtaining sub-module 413.
The data obtaining sub-module 411 is configured to obtain point cloud data and a two-dimensional image corresponding to the point cloud data.
And the base map obtaining submodule 412 is used for back projecting the point cloud data to the two-dimensional image to obtain a base map.
The homonym point obtaining submodule 413 is configured to select M groups of homonym point pairs based on the base map, where each group of homonym point pairs includes one two-dimensional point in the two-dimensional image and one three-dimensional point in the point cloud data, and a coordinate corresponding to each group of homonym point pairs includes a two-dimensional coordinate corresponding to one two-dimensional point in the two-dimensional image and a three-dimensional coordinate corresponding to one three-dimensional point in the point cloud data.
As an embodiment, the data obtaining module 410 further includes: a calculation sub-module 415 and an initial back-throw error acquisition sub-module 416.
The calculating sub-module 415 is configured to calculate a back-projection error of each group of homonymous point pairs in the homonymous point pair set based on the coordinates corresponding to each group of homonymous point pairs in the M groups of homonymous point pairs and the initial external reference and the internal reference of the camera acquiring the two-dimensional image.
And an initial back-projection error obtaining submodule 416, configured to perform mean calculation on the back-projection errors of each of the M sets of homonymous point pairs to obtain an initial back-projection error.
The point pair changing module 420 is configured to perform point pair changing processing on the homonymous point pair set based on the M groups of homonymous point pairs to obtain a target homonymous point pair set.
In an embodiment, the point pair change module 420 is further configured to randomly add a same-name point pair to the same-name point pair set or randomly reduce a same-name point pair from the same-name point pair set based on the M groups of same-name point pairs to obtain a target same-name point pair set, where the same-name point pairs added to the same-name point pair set belong to the M groups of same-name point pairs.
And a target error obtaining module 430, configured to obtain target back-projection errors of the M groups of homonymous point pairs based on the target homonymous point pair set, coordinates corresponding to each homonymous point pair in the M groups of homonymous point pairs, and the internal reference and the initial external reference of the camera acquiring the two-dimensional image.
Referring to fig. 17, as an embodiment, the target error obtaining module 430 includes: an optimization sub-module 432 and a target back-projection error acquisition sub-module 434.
And the optimization processing submodule 432 is configured to perform optimization processing on the initial external parameters by using a preset optimization algorithm based on the coordinates corresponding to each homonymous point pair in the target homonymous point pair set and the internal parameters of the camera for acquiring the two-dimensional image, so as to obtain the processed initial external parameters.
And a target back-projection error obtaining submodule 434, configured to obtain a target back-projection error corresponding to the processed initial external reference according to the processed initial external reference, the coordinates and the internal reference corresponding to each of the M groups of homonymous point pairs.
As an embodiment, the target back-projection error obtaining submodule 434 includes a function building unit, a calculation formula obtaining unit, and an optimization unit.
And the function construction unit is used for constructing a coordinate conversion function based on the internal parameters and the initial external parameters of the camera for acquiring the two-dimensional image.
And the calculation formula obtaining unit is used for respectively substituting the coordinates corresponding to all the homonymous point pairs in the target homonymous point pair set into the coordinate conversion function to obtain a coordinate conversion calculation formula.
And the optimization unit is used for carrying out constrained optimization on the initial external parameters in the coordinate conversion calculation formula by using a sequential quadratic programming algorithm to obtain the processed initial external parameters.
As an implementation manner, the function constructing unit is specifically configured to construct an internal reference matrix based on internal references of a camera that acquires the two-dimensional image; constructing an external parameter matrix based on initial external parameters of a camera acquiring the two-dimensional image; and constructing a coordinate conversion function between the two-dimensional coordinates and the three-dimensional coordinates in the corresponding point pair based on the internal reference matrix and the external reference matrix.
In this manner, the coordinate conversion function is

s · m2d = K · (R · M3d + T)

wherein K is the internal reference matrix,

K = [ fx  0   u0 ]
    [ 0   fy  v0 ]
    [ 0   0   1  ]

R is the rotation matrix, T = (tx, ty, tz)^T is the translation vector, m2d = (u, v, 1)^T is the two-dimensional point in homogeneous coordinates, and M3d = (X, Y, Z)^T is the corresponding three-dimensional point.

Written out in full, the coordinate conversion function satisfies

    [ u ]   [ fx  0   u0 ] (     [ X ]   [ tx ] )
s · [ v ] = [ 0   fy  v0 ] ( R · [ Y ] + [ ty ] )
    [ 1 ]   [ 0   0   1  ] (     [ Z ]   [ tz ] )

wherein s is the two-dimensional image scale factor, u is the abscissa and v the ordinate of the two-dimensional point in the homonymous point pair, fx is the lateral focal length of the camera, fy is the longitudinal focal length of the camera, u0 is the abscissa and v0 the ordinate of the image center point, tx, ty and tz are the x-axis, y-axis and z-axis components of the translation vector, and X, Y and Z are the x-axis, y-axis and z-axis coordinates of the three-dimensional point in the homonymous point pair.
And the target external parameter obtaining module 440 is configured to determine the target external parameters corresponding to the camera based on the initial back-projection error and the target back-projection error of the M groups of homonymous point pairs, the coordinates corresponding to each homonymous point pair in the target homonymous point pair set, and the internal parameters and the initial external parameters of the camera acquiring the two-dimensional image.
Referring to FIG. 18, for one embodiment, the target extrinsic parameter obtaining module 440 includes an update determination sub-module 442 and a target extrinsic parameter obtaining sub-module 444.
The update determining sub-module 442 is configured to determine whether to update the homonymous point pair set into the target homonymous point pair set according to the target back-projection error and the initial back-projection error.
The target external parameter obtaining sub-module 444 is configured to, when it is determined to update the homonymous point pair set into the target homonymous point pair set, update the homonymous point pair set into the target homonymous point pair set, take the processed initial external parameter as the new initial external parameter and the target back-projection error as the new initial back-projection error, and, when the number of times of performing the point pair change processing on the homonymous point pair set reaches the preset number, take the processed initial external parameter as the target external parameter corresponding to the camera.
As an embodiment, the update determination submodule 442 includes a comparison unit and an update determination unit.
And the comparison unit is used for comparing the target back projection error with the initial back projection error.
The update determining unit is configured to determine to update the homonymous point pair set into the target homonymous point pair set when the target back-projection error is not larger than the initial back-projection error; and, when the target back-projection error is larger than the initial back-projection error, obtain an acceptance probability according to the initial back-projection error and the target back-projection error and determine, according to the acceptance probability, whether to update the homonymous point pair set into the target homonymous point pair set.
As an embodiment, the update determining unit is further configured to calculate the acceptance probability using a preset probability calculation formula according to the number of times the homonymous point pair set has been processed, the initial back-projection error, the target back-projection error, and the number of homonymous point pairs included in the target homonymous point pair set, wherein the preset probability calculation formula is

P = exp( -n · (E1 - E0) / T ),  where T = T0 · α^k

wherein E0 is the initial back-projection error, E1 is the target back-projection error, T is the annealing temperature, T0 is a preset constant, α is a constant between 0 and 1, k is the number of times the point pair change processing has been performed on the homonymous point pair set, and n is the number of homonymous point pairs included in the target homonymous point pair set.
It should be noted that the device embodiment and the method embodiment in the present application correspond to each other, and specific principles in the device embodiment may refer to the contents in the method embodiment, which is not described herein again.
An electronic device provided by the present application will be described below with reference to fig. 19.
Referring to fig. 19, based on the joint calibration method provided in the foregoing embodiment, another electronic device 100 capable of executing the foregoing method is further provided in the embodiment of the present application, where the electronic device 100 may be a server or a terminal device, and the terminal device may be a device such as a smart phone, a tablet computer, a computer, or a portable computer. As one way, the electronic device 100 may be the server 14 or the terminal device 15 as shown in fig. 1.
The electronic device 100 includes a processor 102 and a memory 104. The memory 104 stores programs that can execute the content of the foregoing embodiments, and the processor 102 can execute the programs stored in the memory 104.
Processor 102 may include one or more cores for processing data. The processor 102 connects various parts of the entire electronic device 100 using various interfaces and lines, and performs various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 104 and invoking data stored in the memory 104. Alternatively, the processor 102 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 102 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem is used to handle wireless communication. It is understood that the modem may also not be integrated into the processor 102 but be implemented by a separate communication chip.
The Memory 104 may include a Random Access Memory (RAM) or a Read-Only Memory (Read-Only Memory). The memory 104 may be used to store instructions, programs, code sets, or instruction sets. The memory 104 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function, and the like. The storage data area may also store data (e.g., two-dimensional images, point cloud data, peer-to-peer with same name) acquired by the electronic device 100 during use, and the like.
The electronic device 100 may further include a network module for receiving and transmitting electromagnetic waves, and implementing interconversion between the electromagnetic waves and the electrical signals, so as to communicate with a communication network or other devices, for example, an audio playing device. The network module may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The network module may communicate with various networks such as the internet, an intranet, a wireless network, or with other devices via a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network. The screen can display the interface content and perform data interaction.
In some embodiments, the electronic device 100 may further include: a peripheral interface 106 and at least one peripheral device. The processor 102, the memory 104, and the peripheral interface 106 may be connected by buses or signal lines. Each peripheral device may be connected to the peripheral interface 106 through a bus, a signal line, or a circuit board. Specifically, the peripheral device includes at least one of a radio frequency assembly 108, a positioning assembly 112, a camera 114, an audio assembly 116, and a display screen 118.
The peripheral interface 106 may be used to connect at least one I/O (Input/Output)-related peripheral device to the processor 102 and the memory 104. In some embodiments, the processor 102, the memory 104, and the peripheral interface 106 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 102, the memory 104, and the peripheral interface 106 may be implemented on a separate chip or circuit board, which is not limited in this application.
The radio frequency assembly 108 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency assembly 108 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission, or converting received electromagnetic signals into electrical signals. Optionally, the radio frequency assembly 108 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency assembly 108 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the World Wide Web, metropolitan area networks, intranets, the generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or Wi-Fi (Wireless Fidelity) networks. In some embodiments, the radio frequency assembly 108 may further include NFC (Near Field Communication)-related circuitry, which is not limited in this application.
The positioning assembly 112 is used to locate the current geographic location of the electronic device to implement navigation or LBS (Location Based Service). The positioning assembly 112 may be a positioning assembly based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The camera 114 is used to capture images or video. Optionally, the camera 114 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the electronic device 100, and the rear camera is disposed on the rear surface of the electronic device 100. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting, VR (Virtual Reality) shooting, or other fused shooting functions. In some embodiments, the camera 114 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio assembly 116 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input them to the processor 102 for processing or to the radio frequency assembly 108 to implement voice communication. For stereo sound collection or noise reduction, multiple microphones may be provided at different portions of the electronic device 100. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 102 or the radio frequency assembly 108 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal into sound waves audible to humans, or convert an electrical signal into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio assembly 116 may also include a headphone jack.
The display screen 118 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 118 is a touch display screen, the display screen 118 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 102 as a control signal for processing. At this point, the display screen 118 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 118, disposed on the front panel of the electronic device 100; in other embodiments, there may be at least two display screens 118, respectively disposed on different surfaces of the electronic device 100 or in a folded design; in still other embodiments, the display screen 118 may be a flexible display screen disposed on a curved surface or a folded surface of the electronic device 100. Furthermore, the display screen 118 may be arranged in a non-rectangular irregular pattern, i.e., a shaped screen. The display screen 118 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
Referring to fig. 20, an embodiment of the present application further provides a vehicle 500. The vehicle 500 may include a camera 510 and a laser radar 520, where the camera 510 is configured to acquire two-dimensional images and the laser radar 520 is configured to collect point cloud data. The camera 510 and the laser radar 520 may be provided at a roof position of the vehicle body 530, or may be provided at the head of the vehicle body 530. The vehicle 500 may also include a memory, one or more processors connected to the camera 510 and the laser radar 520, respectively, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors to perform the methods described in the above method embodiments.
The embodiment of the present application further provides a computer-readable storage medium. The computer-readable storage medium stores program code that can be invoked by a processor to perform the methods described in the above method embodiments.
The computer-readable storage medium may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium includes a non-volatile computer-readable storage medium. The computer-readable storage medium has storage space for program code for performing any of the method steps of the above-described methods. The program code may be embodied in one or more computer program products, and may be compressed in a suitable form, for example.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method described in the various alternative implementations described above.
In summary, according to the joint calibration method, apparatus, device, vehicle, and storage medium provided by the present application, a target homonymous point pair set is obtained by performing point pair change processing on the homonymous point pair set; target back-projection errors of the M groups of homonymous point pairs are obtained based on the target homonymous point pair set, the coordinates corresponding to each homonymous point pair in the M groups of homonymous point pairs, and the internal parameters and initial external parameters of the camera acquiring the two-dimensional image; and the target external parameters corresponding to the camera are determined based on the initial back-projection error and the target back-projection error of the M groups of homonymous point pairs, the coordinates corresponding to each homonymous point pair in the target homonymous point pair set, and the internal parameters and initial external parameters of the camera acquiring the two-dimensional image. Because the homonymous point pair set can be optimized based on the initial back-projection error and the target back-projection error during calibration, the homonymous points in the optimized target homonymous point pair set are the better homonymous points among the M groups of homonymous point pairs, and the target external parameters obtained based on the target homonymous point pair set are therefore more accurate.
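The iterative optimization summarized above — perturb the homonymous point pair subset, re-optimize the extrinsics, and accept or reject the result by comparing back-projection errors — can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the helper names (`perturb_subset`, `refine_extrinsics`, `accept_prob`) and the Metropolis-style acceptance formula are assumptions, and the refinement step uses a simple random search where the application describes sequential quadratic programming.

```python
import math
import random
import numpy as np

# Illustrative sketch of the subset-selection calibration loop; the acceptance
# formula and the refinement strategy are assumptions, not the patented method.

def project(K, R, t, xyz):
    """Pinhole projection of a 3D point into pixel coordinates."""
    uvw = K @ (R @ xyz + t)
    return uvw[:2] / uvw[2]

def back_projection_error(pairs, K, R, t):
    """Mean pixel distance between 2D points and their projected 3D mates."""
    return float(np.mean([np.linalg.norm(uv - project(K, R, t, xyz))
                          for uv, xyz in pairs]))

def refine_extrinsics(pairs, K, R, t, steps=30):
    """Crude random-search stand-in for the SQP refinement of claim 7."""
    best_t, best_e = t, back_projection_error(pairs, K, R, t)
    for _ in range(steps):
        cand = best_t + np.random.normal(scale=0.02, size=3)
        e = back_projection_error(pairs, K, R, cand)
        if e < best_e:
            best_t, best_e = cand, e
    return R, best_t

def perturb_subset(idx, m):
    """Randomly add or remove one pair index (point pair change processing)."""
    cand = set(idx)
    i = random.randrange(m)
    if i in cand and len(cand) > 3:
        cand.remove(i)
    else:
        cand.add(i)
    return cand

def accept_prob(old_e, new_e, iteration, subset_size):
    """Toy Metropolis-style rule; the application's exact formula is not given."""
    return math.exp(-(new_e - old_e) * subset_size * iteration)

def calibrate(all_pairs, K, R, t, iterations=20):
    idx = set(range(len(all_pairs)))          # homonymous point pair set
    error = back_projection_error(all_pairs, K, R, t)
    best = (R, t, error)
    for k in range(1, iterations + 1):
        cand_idx = perturb_subset(idx, len(all_pairs))
        subset = [all_pairs[i] for i in sorted(cand_idx)]
        R2, t2 = refine_extrinsics(subset, K, R, t)
        # Target back-projection error is evaluated over all M groups.
        new_error = back_projection_error(all_pairs, K, R2, t2)
        if new_error <= error or random.random() < accept_prob(
                error, new_error, k, len(cand_idx)):
            idx, (R, t), error = cand_idx, (R2, t2), new_error
            if error < best[2]:
                best = (R, t, error)
    return best[0], best[1]
```

Accepting an occasionally worse subset (the acceptance-probability branch of claim 5) lets the search escape local minima caused by a few badly picked homonymous points, which is the reason the patent compares the target error against the initial error rather than greedily minimizing.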
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (15)

1. A joint calibration method, the method comprising:
acquiring M groups of homonymous point pairs, initial back projection errors of the M groups of homonymous point pairs and a homonymous point pair set, wherein the M groups of homonymous point pairs are selected from point cloud data and two-dimensional images corresponding to the point cloud data, each group of homonymous point pairs in the homonymous point pair set belong to the M groups of homonymous point pairs, and M is an integer greater than 1;
performing point pair change processing on the homonymous point pair set based on the M groups of homonymous point pairs to obtain a target homonymous point pair set;
obtaining target back projection errors of the M groups of homonymous point pairs based on the target homonymous point pair set, coordinates corresponding to each homonymous point pair in the M groups of homonymous point pairs, and internal parameters and initial external parameters of a camera for collecting the two-dimensional image;
and determining the target external parameters corresponding to the camera based on the initial back projection error and the target back projection error of the M groups of homonymous point pairs, the coordinates corresponding to each homonymous point pair in the target homonymous point pair set, and the internal parameters and the initial external parameters of the camera for acquiring the two-dimensional image.
2. The joint calibration method according to claim 1, wherein the obtaining target back-projection errors of the M groups of homonymous point pairs based on the target homonymous point pair set, coordinates corresponding to each homonymous point pair in the M groups of homonymous point pairs, and internal parameters and initial external parameters of a camera that acquires the two-dimensional image comprises:
optimizing the initial external parameters by using a preset optimization algorithm based on the coordinates corresponding to all homonymous point pairs in the target homonymous point pair set and the internal parameters of the camera for collecting the two-dimensional image to obtain the processed initial external parameters;
and obtaining a target back-projection error corresponding to the processed initial external parameter according to the processed initial external parameter, the coordinates corresponding to each homonymous point pair in the M groups of homonymous point pairs and the internal parameter.
3. The joint calibration method according to claim 2, wherein the determining the target external parameters corresponding to the camera based on the initial back-projection error and the target back-projection error of the M groups of homonymous point pairs, the coordinates corresponding to each homonymous point pair in the M groups of homonymous point pairs, and the internal parameters and the initial external parameters of the camera acquiring the two-dimensional image comprises:
determining whether to update the homonymous point pair set to the target homonymous point pair set according to the target back-projection error and the initial back-projection error;
and if it is determined to update, updating the homonymous point pair set to the target homonymous point pair set, taking the processed initial external parameters as new initial external parameters, taking the target back-projection error as a new initial back-projection error, and returning to the step of performing point pair change processing on the homonymous point pair set based on the M groups of homonymous point pairs to obtain a target homonymous point pair set, until the number of times of point pair change processing on the homonymous point pair set reaches a preset number, at which point the processed initial external parameters are taken as the target external parameters corresponding to the camera.
4. The joint calibration method according to claim 3, wherein the determining the target external parameters corresponding to the camera based on the initial back-projection error and the target back-projection error of the M sets of homonymous point pairs, the coordinates corresponding to each homonymous point pair in the M sets of homonymous point pairs, and the internal parameters and the initial external parameters of the camera acquiring the two-dimensional image further comprises:
and if it is determined not to update, returning to the step of performing point pair change processing on the homonymous point pair set based on the M groups of homonymous point pairs to obtain a target homonymous point pair set.
5. The joint calibration method of claim 3, wherein the determining whether to update the homonymous point pair set to the target homonymous point pair set according to the target back-projection error and the initial back-projection error comprises:
comparing the target back-projection error with the initial back-projection error;
if the target back-projection error is not larger than the initial back-projection error, determining to update the homonymous point pair set to the target homonymous point pair set;
and if the target back-projection error is larger than the initial back-projection error, obtaining an acceptance probability according to the initial back-projection error and the target back-projection error, and determining whether to update the homonymous point pair set to the target homonymous point pair set according to the acceptance probability.
6. The joint calibration method according to claim 5, wherein the obtaining an acceptance probability according to the initial back-projection error and the target back-projection error comprises:
and performing probability calculation according to the number of times of processing the homonymous point pair set, the initial back-projection error, the target back-projection error and the number of homonymous point pairs in the target homonymous point pair set to obtain an acceptance probability.
7. The joint calibration method according to claim 2, wherein the optimizing the initial external parameters by using a preset optimization algorithm based on the coordinates corresponding to each homonymous point pair in the target homonymous point pair set and the internal parameters of the camera acquiring the two-dimensional image, to obtain the processed initial external parameters, comprises:
constructing a coordinate conversion function based on internal parameters and initial external parameters of a camera for acquiring the two-dimensional image;
respectively substituting coordinates corresponding to all the homonymous point pairs in the target homonymous point pair set into the coordinate conversion function to obtain a coordinate conversion calculation formula;
and performing constrained optimization on the initial external parameters in the coordinate conversion calculation formula by using a sequential quadratic programming algorithm to obtain the processed initial external parameters.
8. The joint calibration method according to claim 7, wherein the constructing a coordinate conversion function based on the internal parameters and the initial external parameters of the camera acquiring the two-dimensional image comprises:
constructing an internal parameter matrix based on the internal parameters of the camera acquiring the two-dimensional image;
constructing an external parameter matrix based on the initial external parameters of the camera acquiring the two-dimensional image;
and constructing a coordinate conversion function between the two-dimensional coordinates and the three-dimensional coordinates in a corresponding point pair based on the internal parameter matrix and the external parameter matrix.
9. The joint calibration method according to any one of claims 1 to 8, wherein the performing, based on the M groups of homonymous point pairs, a point pair change process on the homonymous point pair set to obtain a target homonymous point pair set includes:
and based on the M groups of homonymous point pairs, randomly adding homonymous point pairs to the homonymous point pair set or randomly reducing homonymous point pairs from the homonymous point pair set to obtain a target homonymous point pair set, wherein the homonymous point pairs added to the homonymous point pair set belong to the M groups of homonymous point pairs.
10. The joint calibration method according to any one of claims 1 to 8, wherein obtaining an initial back-projection error of the M groups of homonymous point pairs comprises:
calculating the back-projection error of each group of homonymous point pairs based on the coordinates corresponding to each group of homonymous point pairs in the M groups of homonymous point pairs and the initial external parameters and internal parameters of the camera acquiring the two-dimensional image;
and performing mean value calculation on the back-projection errors of each group of homonymous point pairs in the M groups of homonymous point pairs to obtain the initial back-projection error.
11. The joint calibration method according to any one of claims 1 to 8, wherein the obtaining M groups of homonymous point pairs comprises:
acquiring point cloud data and a two-dimensional image corresponding to the point cloud data;
back projecting the point cloud data to the two-dimensional image to obtain a base map;
and selecting M groups of homonymy point pairs based on the base map, wherein each group of homonymy point pairs comprises a two-dimensional point in the two-dimensional image and a three-dimensional point in the point cloud data, and the corresponding coordinates of each group of homonymy point pairs comprise a two-dimensional coordinate corresponding to the two-dimensional point in the two-dimensional image and a three-dimensional coordinate corresponding to the three-dimensional point in the point cloud data.
12. A joint calibration apparatus, the apparatus comprising:
the system comprises a data acquisition module, a data processing module and a data processing module, wherein the data acquisition module is used for acquiring M groups of homonymous point pairs, initial back projection errors of the M groups of homonymous point pairs and a homonymous point pair set, the M groups of homonymous point pairs are selected from point cloud data and two-dimensional images corresponding to the point cloud data, each group of homonymous point pairs in the homonymous point pair set belongs to the M groups of homonymous point pairs, and M is an integer greater than 1;
the point pair change module is used for carrying out point pair change processing on the homonymous point pair set based on the M groups of homonymous point pairs to obtain a target homonymous point pair set;
a target error obtaining module, configured to obtain target back-projection errors of the M groups of homonymous point pairs based on the target homonymous point pair set, the coordinates corresponding to each homonymous point pair in the M groups of homonymous point pairs, and the internal parameters and initial external parameters of the camera acquiring the two-dimensional image;
and the target external parameter obtaining module is used for determining the target external parameters corresponding to the camera based on the initial back projection error and the target back projection error of the M groups of homonymous point pairs, the coordinates corresponding to all homonymous point pairs in the target homonymous point pair set, and the internal parameters and the initial external parameters of the camera for acquiring the two-dimensional image.
13. An electronic device, comprising:
one or more processors;
a memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-11.
14. A vehicle, characterized by comprising:
a memory;
a camera for acquiring a two-dimensional image;
a laser radar for collecting point cloud data;
one or more processors respectively connected with the camera and the laser radar;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-11.
15. A computer-readable storage medium, characterized in that the computer-readable storage medium stores program code that can be invoked by a processor to perform the method according to any one of claims 1 to 11.
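Claims 7, 8, and 11 rely on the standard pinhole coordinate conversion between a three-dimensional lidar point and its two-dimensional image projection: an internal parameter matrix K built from the focal lengths and principal point, and an external parameter matrix [R|t]. A minimal sketch, assuming an ideal distortion-free camera (function names are illustrative, not from the application):

```python
import numpy as np

# Minimal pinhole coordinate conversion sketch (ideal, distortion-free camera).

def make_internal_matrix(fx, fy, cx, cy):
    """Internal parameter (intrinsic) matrix K."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

def make_external_matrix(R, t):
    """External parameter matrix [R | t], shape 3x4."""
    return np.hstack([R, np.asarray(t, dtype=float).reshape(3, 1)])

def to_pixel(K, Rt, xyz):
    """Convert a 3D point (e.g. a lidar point) to 2D pixel coordinates."""
    uvw = K @ Rt @ np.append(np.asarray(xyz, dtype=float), 1.0)
    return uvw[:2] / uvw[2]  # dehomogenize
```

Back-projecting the whole point cloud through `to_pixel` onto the image yields the base map of claim 11, from which the M groups of homonymous point pairs can then be selected.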
CN202110562086.7A 2021-05-24 2021-05-24 Combined calibration method, device, equipment, vehicle and storage medium Active CN113034621B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110562086.7A CN113034621B (en) 2021-05-24 2021-05-24 Combined calibration method, device, equipment, vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110562086.7A CN113034621B (en) 2021-05-24 2021-05-24 Combined calibration method, device, equipment, vehicle and storage medium

Publications (2)

Publication Number Publication Date
CN113034621A true CN113034621A (en) 2021-06-25
CN113034621B CN113034621B (en) 2021-07-30

Family

ID=76455541

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110562086.7A Active CN113034621B (en) 2021-05-24 2021-05-24 Combined calibration method, device, equipment, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN113034621B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117576228A (en) * 2024-01-16 2024-02-20 成都合能创越软件有限公司 Real-time scene-based camera coordinate calibration method and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103617649A (en) * 2013-11-05 2014-03-05 北京江宜科技有限公司 Camera self-calibration technology-based river model topography measurement method
CN104766302A (en) * 2015-02-05 2015-07-08 武汉大势智慧科技有限公司 Method and system for optimizing laser scanning point cloud data by means of unmanned aerial vehicle images
WO2017122529A1 (en) * 2016-01-12 2017-07-20 Mitsubishi Electric Corporation System and method for fusing outputs of sensors having different resolutions
CN107656259A (en) * 2017-09-14 2018-02-02 同济大学 The combined calibrating System and method for of external field environment demarcation
CN109978955A (en) * 2019-03-11 2019-07-05 武汉环宇智行科技有限公司 A kind of efficient mask method for combining laser point cloud and image
US10739784B2 (en) * 2017-11-29 2020-08-11 Qualcomm Incorporated Radar aided visual inertial odometry initialization
CN111680611A (en) * 2020-06-03 2020-09-18 江苏无线电厂有限公司 Road trafficability detection method, system and equipment
CN112396664A (en) * 2020-11-24 2021-02-23 华南理工大学 Monocular camera and three-dimensional laser radar combined calibration and online optimization method

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103617649A (en) * 2013-11-05 2014-03-05 北京江宜科技有限公司 Camera self-calibration technology-based river model topography measurement method
CN104766302A (en) * 2015-02-05 2015-07-08 武汉大势智慧科技有限公司 Method and system for optimizing laser scanning point cloud data by means of unmanned aerial vehicle images
WO2017122529A1 (en) * 2016-01-12 2017-07-20 Mitsubishi Electric Corporation System and method for fusing outputs of sensors having different resolutions
CN107656259A (en) * 2017-09-14 2018-02-02 同济大学 The combined calibrating System and method for of external field environment demarcation
US10739784B2 (en) * 2017-11-29 2020-08-11 Qualcomm Incorporated Radar aided visual inertial odometry initialization
CN109978955A (en) * 2019-03-11 2019-07-05 武汉环宇智行科技有限公司 A kind of efficient mask method for combining laser point cloud and image
CN109978955B (en) * 2019-03-11 2021-03-19 武汉环宇智行科技有限公司 Efficient marking method combining laser point cloud and image
CN111680611A (en) * 2020-06-03 2020-09-18 江苏无线电厂有限公司 Road trafficability detection method, system and equipment
CN112396664A (en) * 2020-11-24 2021-02-23 华南理工大学 Monocular camera and three-dimensional laser radar combined calibration and online optimization method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jesse Levinson et al.: "Automatic Online Calibration of Cameras and Lasers", Robotics: Science and Systems 2013 *
ZHAO Song et al.: "Joint Calibration of a Scanner and a Digital Camera Based on a Stereo Calibration Target", Journal of Geomatics Science and Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117576228A (en) * 2024-01-16 2024-02-20 成都合能创越软件有限公司 Real-time scene-based camera coordinate calibration method and system
CN117576228B (en) * 2024-01-16 2024-04-16 成都合能创越软件有限公司 Real-time scene-based camera coordinate calibration method and system

Also Published As

Publication number Publication date
CN113034621B (en) 2021-07-30

Similar Documents

Publication Publication Date Title
CN108665536B (en) Three-dimensional and live-action data visualization method and device and computer readable storage medium
CN110967024A (en) Method, device, equipment and storage medium for detecting travelable area
US20160178728A1 (en) Indoor Positioning Terminal, Network, System and Method
CN110988849B (en) Calibration method and device of radar system, electronic equipment and storage medium
WO2022100265A1 (en) Camera calibration method, apparatus, and system
CN112150560B (en) Method, device and computer storage medium for determining vanishing point
CN111104893B (en) Target detection method, target detection device, computer equipment and storage medium
CN113542600B (en) Image generation method, device, chip, terminal and storage medium
CN113888452A (en) Image fusion method, electronic device, storage medium, and computer program product
CN114125411B (en) Projection device correction method, projection device correction device, storage medium and projection device
CN113034621B (en) Combined calibration method, device, equipment, vehicle and storage medium
WO2022166868A1 (en) Walkthrough view generation method, apparatus and device, and storage medium
CN113191976B (en) Image shooting method, device, terminal and storage medium
CN110490295B (en) Data processing method and processing device
WO2021088497A1 (en) Virtual object display method, global map update method, and device
CN111538009B (en) Radar point marking method and device
CN112308766B (en) Image data display method and device, electronic equipment and storage medium
US20210218948A1 (en) Depth image obtaining method, image capture device, and terminal
CN111127539B (en) Parallax determination method and device, computer equipment and storage medium
CN114623836A (en) Vehicle pose determining method and device and vehicle
US11350043B2 (en) Image processing method and terminal
CN112184543B (en) Data display method and device for fisheye camera
US11527022B2 (en) Method and apparatus for transforming hair
JPWO2018079043A1 (en) Information processing apparatus, imaging apparatus, information processing system, information processing method, and program
CN117671164A (en) High-precision map base map construction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40046040

Country of ref document: HK